Vista Migration Danielle Ruest and Nelson Ruest

Introduction to Realtimepublishers
by Don Jones, Series Editor
For several years now, Realtime has produced dozens and dozens of high-quality books that just
happen to be delivered in electronic format—at no cost to you, the reader. We’ve made this
unique publishing model work through the generous support and cooperation of our sponsors,
who agree to bear each book’s production expenses for the benefit of our readers.
Although we’ve always offered our publications to you for free, don’t think for a moment that
quality is anything less than our top priority. My job is to make sure that our books are as good
as—and in most cases better than—any printed book that would cost you $40 or more. Our
electronic publishing model offers several advantages over printed books: You receive chapters
literally as fast as our authors produce them (hence the “realtime” aspect of our model), and we
can update chapters to reflect the latest changes in technology.
I want to point out that our books are by no means paid advertisements or white papers. We’re an
independent publishing company, and an important aspect of my job is to make sure that our
authors are free to voice their expertise and opinions without reservation or restriction. We
maintain complete editorial control of our publications, and I’m proud that we’ve produced so
many quality books over the past years.
I want to extend an invitation to visit us, especially if
you’ve received this publication from a friend or colleague. We have a wide variety of additional
books on a range of topics, and you’re sure to find something that’s of interest to you—and it
won’t cost you a thing. We hope you’ll continue to come to Realtime for your educational needs
far into the future.
Until then, enjoy.
Don Jones
Table of Contents
Introduction to Realtimepublishers.................................................................................................. i
Chapter 1: To Migrate or not to Migrate .........................................................................................1
What is Windows Vista and why should I migrate?........................................................................4
Compelling Vista Features...................................................................................................8
Compelling Features for End Users.........................................................................9
Compelling Vista Features for IT Pros ..............................................................................14
Compelling Vista Features for Developers............................................................19
Creating the Migration Business Case...........................................................................................21
Migration Strategies...........................................................................................................23
Making a case for a flexible infrastructure ........................................................................25
Chapter 2: Planning the Migration.................................................................................................27
Actual Migration Tasks..................................................................................................................27
Hardware and Software Readiness Assessment ................................................................28
Personality Capture............................................................................................................32
Creating and Deploying Base Images................................................................................33
Application Packaging .......................................................................................................35
Software Installation ..........................................................................................................36
Personality Restoration ......................................................................................................36
Migration Status Reporting................................................................................................37
An Extended View of Migrations ..................................................................................................37
Using the QUOTE© System...............................................................................................38
Question Phase — Problem Statement ..............................................................................40
Understanding Phase — Exploring the Impacts of Migration...........................................42
Organization Phase — Preparing for the Migration ..........................................................46
Transfer Phase — Performing the Pilot Project, then moving on to Deployment.............51
Evaluation Phase — Moving on to Production..................................................................54
Creating a Migration Project Plan..................................................................................................56
Project Planning Considerations ........................................................................................57
Required Teams and Roles ................................................................................................59
Chapter 3: Creating the Migration Test Bed..................................................................................61
Identifying Team Needs.....................................................................................................63
Working with Different Testing Levels.............................................................................65
Required Lab Environments ..............................................................................................68
Relying on Virtual Machine Software ...........................................................................................69
Physical versus Logical Workspaces .................................................................................71
Defining Lab Requirements...........................................................................................................72
Minimal Configurations for Lab Systems..........................................................................73
Virtual Machine Configurations ........................................................................................75
VM User Accounts ............................................................................................................76
Required Server and Workstation Roles............................................................................77
Requirements for each Testing Level ............................................................................................79
Creating the Lab Environment.......................................................................................................81
Chapter 4: Building the Migration Toolkit ....................................................................................86
Technical and Administrative Migration Guidance.......................................................................86
Project Support Tools ....................................................................................................................88
Using a Deviance Registry.................................................................................................90
Lab Management Tools .....................................................................................................91
The Windows Vista Migration Toolkit..........................................................................................93
Inventory and Asset Management Tools ...........................................................................94
Personality Migration Tools ..............................................................................................96
Operating System Imaging Tools ......................................................................................99
Application Preparation Tools .........................................................................................101
Application Packaging Tools...............................................................................102
Software Virtualization Tools..............................................................................104
Software Deployment Tools ................................................................................104
Making the Case for Extended, Lifecycle Management Systems ...............................................106
Chapter 5: Security and Infrastructure Considerations................................................................108
Perform Initial Inventories...........................................................................................................111
The Inventory Assessment Tool ......................................................................................112
Technical Requirements for Inventory Tools ..................................................................116
Collecting the Basic System Inventory............................................................................117
Server-based Operations in the Lab.............................................................................................120
Build the Host Servers .....................................................................................................120
Build Central Laboratory Services...................................................................................122
Participate in the Solution Design....................................................................................123
Prepare the Detailed System Inventory........................................................................................123
Validating Inventories......................................................................................................124
Rationalize Everything.....................................................................................................124
Perform the Inventory Handoff........................................................................................126
Perform a Profile Sizing Analysis................................................................................................127
Perform Project Engineering Activities .......................................................................................128
Vista OS Management Infrastructures.........................................................................................131
Manage Vista GPOs.........................................................................................................132
Manage Vista Folder Redirection ....................................................................................136
Manage Vista Security.....................................................................................................136
Manage Vista Event Logs................................................................................................137
Manage Vista Licenses ....................................................................................................138
Support the Operations Team ......................................................................................................139
Chapter 6: Preparing Applications...............................................................................................140
Application Support in Vista .......................................................................................................143
Vista Application Compatibility Features .......................................................................146
Vista File and Registry Virtualization .............................................................................148
64-bit Support for 32-bit Applications.............................................................................148
Program Compatibility Assistant.....................................................................................149
Windows Installer version 4.0 .........................................................................................151
Microsoft ACT version 5.0 ..............................................................................................152
In-house Development Compatibility Tools....................................................................154
Develop a Structured Application Management Strategy ...........................................................155
The Application Management Lifecycle .........................................................................155
Manage Commercial Software Licenses .........................................................................157
Develop a Structured System Management Process....................................................................157
Work with a System Stack...............................................................................................158
Maintain Constant Inventories.........................................................................................160
Package Applications for Windows Installer...............................................................................162
Explore Application Virtualization Options ................................................................................166
Do away with Application Conflicts................................................................................168
Review your System Construction Strategy ....................................................................170
Integrate Application Virtualization to the Migration Process ....................................................172
Chapter 7: Kernel Image Management ........................................................................................174
Defining the Logical OS Configuration.......................................................................................176
Physical Layer..................................................................................................................177
Operating System Layer ..................................................................................................178
Networking Layer ............................................................................................................178
Storage Layer ...................................................................................................................179
Security Layer..................................................................................................................180
Communications Layer....................................................................................................182
Common Productivity Tools Layer .................................................................................183
Presentation Layer ...........................................................................................................184
Non-Kernel Layers...........................................................................................................185
Identifying Kernel Contents.............................................................................................185
“Thick” versus “Thin” Images.............................................................................186
Using a Single Worldwide Image ........................................................................188
Discovering the Installation Process............................................................................................189
Identifying Hardware Requirements................................................................................189
Identifying Installation Methods......................................................................................190
Using Installation Documentation ...................................................................................195
The Installation Preparation Checklist.................................................................196
Documenting PC Installations .............................................................................197
Post-Installation Processes...................................................................................197
Supported Installation Methods .......................................................................................199
Selecting an Installation Process......................................................................................200
Determining the Physical OS Configuration ...............................................................................200
Applying the Post-Installation Checklist .........................................................................201
Update the Default User Profile.......................................................................................204
Determining the OS Deployment Method ...................................................................................206
Preparation and Prerequisites...........................................................................................208
Build a Smart Solution.................................................................................................................209
Chapter 8: Working with Personality Captures ...........................................................................210
Define your Profile Policy ...........................................................................................................212
Choosing the Profiles to Protect ......................................................................................214
Differences between Windows XP and Vista..................................................................216
Completing the Personality Protection Policy .................................................................219
Determine your Backup Policy........................................................................................225
Prepare your Protection Mechanisms ..........................................................................................226
Long-term Personality Protection Mechanisms...........................................................................229
Relying on Vista’s Folder Redirection ............................................................................230
Enabling Folder Redirection with Roaming Profiles.......................................................234
Finalizing the Personality Protection Strategy.............................................................................237
Chapter 9: Putting it all Together.................................................................................................238
Bring the Teams Together ...........................................................................................................241
Move into the Integration Testing Level .........................................................................242
Maintain the Deviance Registry...........................................................................244
Move to the Staging Testing Level..................................................................................245
Review the Migration Workflow .........................................................................247
Perform the Proof of Concept Project Team Migration.......................................249
Obtain Acceptance from Peers.............................................................................250
Validate your Operational Strategies ...........................................................................................251
Kernel Image Management Strategy ...............................................................................253
Application Management Strategy ..................................................................................255
Technical Outcomes of the Project ..............................................................................................257
Chapter 10: Finalizing the Project ...............................................................................................259
Closing the Organize Phase .............................................................................................260
Running the Transfer Phase .........................................................................................................262
Run the Pilot Project ........................................................................................................262
Identify Pilot Users ..............................................................................................263
Build a Deployment Schedule .............................................................................263
Initiate Communications to End Users ................................................................264
Perform the Deployment......................................................................................266
Collect Pilot Data.................................................................................................267
Perform the Pilot Evaluation................................................................................268
Performing the Final Deployment ...................................................................................268
Running the Evaluate Phase.........................................................................................................271
Project Handoff to Operations .........................................................................................272
Project Post-Mortem ........................................................................................................274
Calculating Return on Investment (ROI).............................................................274
Future Systems Evolution ............................................................................................................275
Taking Baby Steps ...........................................................................................................276
Lessons Learned...............................................................................................................277
Moving on to other QUOTEs ......................................................................................................279
Copyright Statement
© 2007 Realtimepublishers.com, Inc. All rights reserved. This site contains materials that
have been created, developed, or commissioned by, and published with the permission
of, Realtimepublishers.com, Inc. (the “Materials”) and this site and any such Materials are
protected by international copyright and trademark laws.
TITLE AND NON-INFRINGEMENT. The Materials are subject to change without notice
and do not represent a commitment on the part of Realtimepublishers.com, Inc. or its web
site sponsors. In no event shall Realtimepublishers.com, Inc. or its web site sponsors be
held liable for technical or editorial errors or omissions contained in the Materials,
including without limitation, for any direct, indirect, incidental, special, exemplary or
consequential damages whatsoever resulting from the use of any information contained
in the Materials.
The Materials (including but not limited to the text, images, audio, and/or video) may not
be copied, reproduced, republished, uploaded, posted, transmitted, or distributed in any
way, in whole or in part, except that one copy may be downloaded for your personal, noncommercial use on a single computer. In connection with such use, you may not modify
or obscure any copyright or other proprietary notice.
The Materials may contain trademarks, service marks and logos that are the property of
third parties. You are not permitted to use these trademarks, service marks or logos
without prior written consent of such third parties. Realtimepublishers.com and the Realtimepublishers logo are registered in the US Patent
& Trademark Office. All other product or service names are the property of their
respective owners.
If you have any questions about these terms, or if you would like information about
licensing materials from Realtimepublishers, please contact us via e-mail at
[email protected]
Chapter 1: To Migrate or not to Migrate
Today, we are at the cusp of a new age in computing, and it makes sense: with Windows Vista™,
Microsoft is releasing its first true x64 operating system (OS), and x64 processors abound not
only on the server, but also, and especially, on the desktop. Even better, x64 processors are multicore, delivering still more power on top of the exponential growth we can expect from a move from
32- to 64-bit computing.
In addition, Vista will be Microsoft’s first OS to truly support IPv6—the next version of
TCP/IP—expanding networked communications from a 32-bit to a 128-bit address space. The
timing is just right, at least for governmental agencies, with the US government’s Office of
Management and Budget having set a deadline of June 2008 “…as the date by which all
agencies’ infrastructures…must be using IPv6…”
More information on this governmental requirement can be found here.
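To put that 32-bit-to-128-bit expansion in perspective, here is a quick back-of-the-envelope calculation in Python (our own illustration, not part of the original guidance):

```python
# Compare the theoretical address spaces of IPv4 (32-bit) and IPv6 (128-bit).
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")    # roughly 4.3 billion
print(f"IPv6: {ipv6_addresses:.3e} addresses")  # roughly 3.4 x 10^38

# The jump is a factor of 2^96 -- about 7.9 x 10^28 times more addresses.
print(f"Growth factor: {ipv6_addresses // ipv4_addresses:.3e}")
```

In other words, IPv6 offers vastly more than enough addresses to give every device on every corporate network a globally unique address, which is precisely why the governmental mandate is feasible.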
This new age is not going to be a big bang. This time it is a quiet revolution—a revolution that we
as individuals will feel whenever we use a computer, something we haven’t seen for a long time,
if ever: speed. Vista includes a host of new features that help speed it up: SuperFetch™,
ReadyBoost™, ReadyDrive™, low-priority I/O and much more. In addition, running it on x64
hardware grants Vista access to much more memory than ever before. Vista also provides better
TCP/IP throughput and removes traditional system bottlenecks. Are you ready for speed?
So what are you waiting for? Have you started the Vista migration process yet? No? Why not?
It’s true that before you can take advantage of a new OS on the desktop, you need to feel right
about the one you are running now. We’re in the year 2006, well into the 21st century, and IT
professionals still don’t have complete control over desktops. System upgrades, software
updates, security patches and asset management all seem to be out of control, forcing IT
administrators to react to issues and problems rather than predict and master them before they
occur. If you find yourself in this situation, perhaps it is the ideal time to consider a Vista
migration and at the same time, review and enhance the system management strategies you use
in your organization.
Chapter 1
Why not wipe the slate clean and do it right this time? A migration, by its very nature, offers the
ideal opportunity for massive IT change—all the desktops will be involved, new management
and deployment features are introduced and the deployment relies on common IT processes—
processes that can be reviewed and updated. Why not take advantage of this opportunity and
clean house? The benefits far outweigh the disadvantages: reduced diversity is always easier to manage.
If this is what you want, then read on. You aren’t alone. According to Jim Allchin, co-president
of Microsoft’s platforms and services division, industry analysts predict that some 200 million
people will be using Windows Vista in the first 24 months after its release. He may be right. In a
recent survey covering 715 IT officials in North America and Europe at companies ranging from 1,000
to 20,000 employees, the Cambridge, Mass.-based research firm Forrester Research found that 40 percent
of respondents were planning to deploy Vista within the first year of its release and 11 percent
within the first six months.
For more information on the Forrester report, go to,6454,673,00.html.
To assist you with your move to Vista, we’ve put together a complete toolkit for migration
support. This toolkit is drawn from the multitude of migration projects we have worked on—ever
since Windows 3.1, in fact—projects that have made our clients successful in desktop migrations.
Here’s what you’ll find in this and future chapters:
Chapter 1 starts with the business case, offering a template you can adapt to
your needs in support of your own deployment project.
Chapter 2 will provide you with a structured migration strategy, the QUOTE System,
which is a system we have been using for almost a decade to help customers manage
change in their networks.
Chapter 3 will outline how to rely on virtualization technology to create migration testing
environments so that you can ensure that your solution is completely functional before
you put it into production.
Chapter 4 discusses the various tools you need to build the migration toolkit. This
includes both technical and administrative or non-technical tools. In short, everything you
need to make the migration to this new operating system as smooth as possible at every
level of the organization—end-user, technical, support—and keep it under budget.
Chapter 5 looks at the changes you need to make in your existing infrastructure to
support the deployment. It also identifies all of the security elements of the migration,
focusing on the principle of least privilege but still ensuring that this critical project runs smoothly.
Chapter 6 covers application compatibility and will introduce the concept of software
virtualization—a new technology that could very well prove to be the most significant
reason for moving to Windows Vista or performing a migration project of this scale.
Chapter 7 helps you build the system kernel or the core system image you will be
deploying to your PCs. This chapter will focus on the creation of a single worldwide
image and cover how this image is to be maintained and updated once it is deployed.
Chapter 8 introduces the concept of personality captures, focusing on making user
documents, data and preferences available to them immediately after the deployment.
Personality data also includes software applications or the tools users need to properly
perform their everyday work. With software virtualization, software becomes just another
component of personality captures. This provides better business continuity during the migration.
Chapter 9 brings it all together, linking the different components of the project—system
images, applications, personality data and so on—to perform the delivery of Windows
Vista to your desktops and mobile PCs.
Chapter 10 concludes the guide by walking you through the pilot project, identifying all
the elements you need to monitor during this first deployment test, then goes on to the
actual deployment, focusing on the routine of repetitive tasks you need to perform. This
chapter finishes with the post-mortem you should perform at the end of any project of
this scale, identifying which project elements need to be transitioned into production now
that your systems infrastructure has been modified.
Together, these chapters form the Desktop Deployment Lifecycle (DDL)—a cycle that you will
repeat each time you need to update your desktop infrastructure (see Figure 1.1).
Figure 1.1: The Desktop Deployment Lifecycle.
Our goal is to assist you in getting the ball rolling and to help you avoid pitfalls. After all, with
Microsoft releasing new versions of its OS on a regular basis, migrations are just one more IT
process you need to master and master well. If you set this project up properly, you should reap
automatic benefits, perhaps the most important of which will be the delivery of an infrastructure
that provides ongoing management of all systems and provides excellent return on investment
(ROI). Now is the time to deploy Vista, as it will also help you prepare for the next version of
Windows Server, code-named “Longhorn,” which is slated for release late next year.
Send us some feedback. If there is something you need and can’t find it in this book, write us at
[email protected]
And as coauthors of the supporting documentation for the Microsoft Solution Accelerator for
Business Desktop Deployment 2007, we will also be drawing on this guidance to provide
complete, concise documentation on what you need to do to make this project the most
successful deployment project you have ever performed. This guide complements the BDD
guidance with non-Microsoft processes and practices for deployments, and helps you put it all
into practice.
The Microsoft Solution Accelerator for Business Desktop Deployment 2007 can be found at
What is Windows Vista and why should I migrate?
The question is not if you will migrate; it is when. In an informal poll,
Windows Server News asked over 1,000 readers when they would migrate to Vista. About 15%
said they would migrate within the first six months of its release. More than 18% would migrate
after the first six months. Another 33.8% said they would use system attrition to perform the
migration, that is, moving to Vista as failing systems are replaced. The rest said they would
wait for the first service pack to be released. This means that the timing of this book, with one
chapter released every month over the next ten months, is directly in line with the migration plans
of over 40% of the organizations deciding to migrate (Source: WServerNews Volume 11, #41 –
October 9, 2006 – Issue #597). The timing is just right.
The timing is also right for Windows Vista. As Microsoft’s flagship operating system and the
first to be delivered under Microsoft’s Trustworthy Computing initiative, Vista
may be Microsoft’s most secure OS to date. Under this initiative, each of the developers working on the
Vista code received extensive security training. Whether Microsoft’s claim that this will be its most
secure OS ever holds true, only time will tell, but with features such as integrated anti-spyware,
integrated system protection, User Account Control, code execution prevention, network
access protection, an integrated firewall, Windows service hardening and BitLocker full drive
encryption built right into the OS, Vista seems to be on the right track. In addition, Vista
sports a brand-new version of Internet Explorer, IE 7, which Microsoft claims will herald a new era
in safe Web surfing. And with some 5 million potential beta testers, you’d think that Microsoft
will have learned its lesson and made sure the code is stable and solid without the need for a
service pack. Once again, only time will tell, but if the company maintains the record it set with
Windows Server 2003 (WS03), providing a stable platform that did not require a service pack
before deployment, then we’ll know it has succeeded.
Microsoft also claims that this will be the last 32-bit operating system it develops, heralding a
new era in 64-bit computing. As such, Vista for x64 systems will no longer support 16-bit
applications, but you can always use virtual machine technology to run legacy operating systems
to provide backward compatibility for these applications.
More information from manufacturers about CPU and graphics processor capabilities can be found at:
From the point of view of simple productivity, Vista sports some impressive speed
enhancements, displays graphics in 3-D, and provides a transparent, glass-like interface that is
simply a delight to work with (if you have the right hardware, of course). Even if you’re not
considering Vista yet, the very least you should do is consider Vista hardware requirements in all
PC purchases moving forward, ensuring that the systems you buy today will run
Vista tomorrow.
Base hardware requirements for Vista are not too unusual considering the type of hardware
available right now. They are outlined in Table 1.1. Two sets of requirements are defined: Vista
Capable and Vista Premium PCs. The first allows you to run the base-level Vista editions and the
second lets you take advantage of all of Vista’s features.
If you want to plan for the future, you should really opt for a Vista Premium PC. You’ll be
buying new PCs anyway through attrition programs, so why not buy the right systems?
Vista Capable PC
    Processor: at least 800 MHz
    Minimum memory: 512 MB
    Graphics processor: must be DirectX 9 capable

Vista Premium PC
    Processor: 1 GHz x86 (32-bit) or 1 GHz x64 (64-bit)
    Minimum memory: 1 GB
    Graphics processor: support for DirectX 9 with a WDDM driver, 128 MB of graphics memory*, Pixel Shader 2.0 and 32 bits per pixel
    Other: DVD-ROM drive, audio output, Internet access

* If the graphics processing unit (GPU) shares system memory, then no additional memory is required. If it uses
dedicated memory, at least 128 MB is required.

Table 1.1: Vista system requirements.
Vista is both for home and business use. Microsoft has made this very clear in the various
editions to be published for this new OS. For home, Vista includes the Home Basic, Home
Premium, and the Ultimate editions. For business, the Business, Enterprise, and once again,
Ultimate editions are available. Each of these business editions requires a Vista Premium PC to
run and includes:
Vista Business Edition: This is the base edition for small to medium businesses. It
includes the new Aero 3D interface, tablet support, collaboration tools, advanced full disk
backup, networking and remote desktop features, as well as all of the base features of the
home editions.
Vista Enterprise Edition: This edition is only available to organizations that have either
software assurance or enterprise agreements with Microsoft. It adds full drive encryption,
Virtual PC Express, the subsystem for UNIX and full multilingual support.
Vista Ultimate Edition: For small businesses or others that want access to the full gamut
of new Vista features, Microsoft is releasing the Ultimate Edition. It includes all of the
features of the Enterprise Edition, but also entertainment tools such
as Photo Gallery, Movie Maker and Media Center. Though you might not want these
programs on business computers, this edition might be the only choice for any
organization that wants full system protection and does not want to enter into a long-term
software agreement with Microsoft.
Microsoft is also releasing a Vista Starter Edition; this version will be available in emerging
markets because it is designed to give them access to low-cost versions of Windows and to help
curb software piracy.
To learn more about each of the different Vista Editions, go to
For a fun overview of the different Vista Editions, look up
Better yet, test your systems right now. Microsoft offers a new and enhanced Windows Vista
Upgrade Advisor (VUA). VUA will scan your systems and tell you whether they can be
upgraded to the new OS. It will also tell you which edition it recommends based on your system
capabilities. In addition, it will identify both device driver and application compatibility issues.
Vista uses a new file system driver, so don’t be surprised if disk-based utilities such as anti-virus
or disk defragmentation tools are singled out as requiring upgrades (see Figure 1.2). In order to
run VUA, you’ll need a version of the .NET Framework as well as MSXML, but if they are not
present, it will automatically install them for you.
Figure 1.2: The results of a system scan with Windows VUA.
To download the Windows VUA, go to You’ll need
administrative rights on the PC to install and run this tool.
VUA is an interactive tool and only works on one PC at a time. For a corporate-wide scan of all
systems, use the Application Compatibility Toolkit (ACT) version 5.0. ACT will be released at
the same time as Vista. As its name implies, ACT is designed to focus on applications, but it
does include rudimentary hardware assessments.
ACT will be discussed in more detail in Chapters 4 and 6.
Microsoft is also working to release a free Windows Vista Readiness Assessment (VRA), an
agent-less network scanning tool that is aimed at small to medium organizations. VRA will let
you scan multiple computers from a central location and generate a compatibility report for each
of them. VRA is in beta right now but should be available by the time you are ready to move to Vista.
More on VRA can be found at:
In the end, the best tool to use for readiness assessments is a true inventory tool, one that is part
of a desktop management suite and one that can help support all aspects of a migration.
Compelling Vista Features
There are a host of new features in Windows Vista, but for the purposes of a migration,
especially for the justification of a migration, you’ll need to concentrate on just a few to make
your point. Three communities are affected by a new desktop OS: users, IT professionals, and
developers. Users can take advantage of improvements in productivity and security. IT
professionals will focus on new deployment and management features. Developers will want to
address new infrastructure components and the application programming interfaces (APIs) that
give them access to those components.
For users, Vista provides key improvements in the following areas:
Integrated search
Performance improvements
Desktop experience
Networking experience
For IT professionals, Vista offers improvements in:
Operating System deployment
Management technologies
For developers, Vista includes more than 7,000 native APIs linking to three new core feature sets:
The Windows Presentation Foundation
The Windows Communication Foundation
The .NET Framework version 3.0
For a complete list of Vista features, go to
Compelling Features for End Users
According to Microsoft, Vista heralds a whole new era in user productivity. Heard it before?
Think it’s a marketing slogan? Well, it is, but for once there may be some truth in it. In the past
couple of years, the hot button for users has been search, or the ability to find information
anywhere at any time. Computer systems and past Windows versions have helped us generate
tons of personal, professional and public digital information. Just organizing all of the various
files we create ourselves is a task in and of itself. Few people can spend a whole day without
searching for at least one thing on the Internet; ergo Google’s immense success.
Despite the fact that we’ve been very good at creating information, we haven’t been all that good
at teaching our users how to use good storage practices so that they can find what they create.
Even showing them how to name files properly would have been a help, but training budgets are
usually the first to go during cutbacks. As IT professionals, we’re now faced with having to
install and deploy third-party search tools—tools that may or may not respect the security
descriptors we apply to data within our networks.
To resolve this situation, Microsoft has integrated search into the basic user interface (UI) of
Vista. Search is how you access all information on a Vista PC. The Start menu now sports a
search tool and provides constant search, the Explorer sports a search tool, and IE sports a search
tool—search is everywhere. At least in IE, you can choose which search engine you want to rely
on. On the desktop it is a different story. Search indexes everything it has access to: personal
folders, system tools, legacy shared folders, removable drives, collaboration spaces and so on—
all driven by the PC’s capability to index content. When Windows Server, codenamed
“Longhorn,” is released, search will be able to rely on server-based indices and take a load off of
the local PC.
Search is such an integral part of the system that Windows Explorer now boasts new search
folders—folders that are virtual representations of data based on search criteria. It’s not
WinFS—Microsoft’s flagship file system due to replace NTFS—but it works and it works really
well. Working on a special project such as the Vista migration? All you have to do is create a
virtual folder based on relevant keywords and you will always have access to the data so long as
you have the proper permissions and you are connected to the data. It’s simple: perform the
search, click Save Search, and give the folder a meaningful name. Saved searches are dynamic,
so any time new content is added, it will automatically be linked to your virtual folder. Figure 1.3
shows how saved search folders work and lays out the new Windows Explorer window.
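Conceptually, a saved search is a stored query that is re-evaluated every time the virtual folder is opened, which is why new content appears in it automatically. A minimal sketch of the idea in Python (a toy illustration only; the names and the in-memory index are hypothetical, and Vista’s real indexer is far more sophisticated):

```python
# A saved search stores criteria, not files: it is re-run against the
# index on demand, so new matching content shows up with no extra work.

class SavedSearch:
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = [k.lower() for k in keywords]

    def results(self, index):
        """Return every indexed file whose content mentions any keyword."""
        return sorted(
            path for path, text in index.items()
            if any(k in text.lower() for k in self.keywords)
        )

# A toy "index" mapping file paths to indexed content.
index = {
    "C:/docs/vista-plan.doc": "Vista migration project plan",
    "C:/docs/budget.xls": "2007 budget figures",
}

vista_folder = SavedSearch("Vista Migration", ["vista", "migration"])
print(vista_folder.results(index))  # ['C:/docs/vista-plan.doc']

# Adding new content later is enough; the "folder" needs no update.
index["C:/docs/pilot.doc"] = "Vista pilot deployment notes"
print(vista_folder.results(index))  # now includes the pilot notes
```

The folder is a view, not a copy: delete the saved search and no data is lost.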
In addition to having access to indexed content, you have full control over the way you view and
organize data in Windows Explorer. New buttons sort information in new ways, new views show
extensive previews of document contents, and new filters let you structure information just the
way you like it. Even better, you can restore a previous version of any document so long as it
existed before the last shadow copy was taken—that’s right, Vista now does shadow copies on
the desktop. With Vista, there is no reason why anyone would ever lose a document again.
For more information about Volume Shadow Copies and Previous Versions, especially how you can
start using them even before you deploy Vista, go to “10 Minutes to Volume Shadow Copies”.
Figure 1.3: The new Windows Explorer in Vista allows you to save searches into virtual folders.
Beyond search, Vista will sport several new features aimed at productivity. We already discussed
speed and performance; Vista includes several features to improve performance on a PC, whether
32- or 64-bit:
SuperFetch will learn from your work habits and preload your most common applications
into memory before you call on them. When you actually do call on them, they will
launch from RAM and not from disk. They will be lightning fast.
ReadyBoost will rely on flash memory to extend the reach of normal RAM and reduce
hard disk access times. For example, Samsung is releasing a new 4 GB flash drive that is
specifically designed for the ReadyBoost feature. This will also vastly increase system
performance.
ReadyDrive will work with new hybrid hard disk drives (drives that also include flash
memory) to cache frequently used data and access it from the flash memory while the
hard drive takes time to spin up. In addition, sleeping systems will wake up much faster
since they will use the flash memory to restore the working state of the PC.
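The idea behind SuperFetch can be sketched as simple launch-frequency tracking: count how often each application is used and keep the most frequent ones preloaded in memory. The following Python toy is a deliberate oversimplification (the real algorithm also weighs usage patterns over time):

```python
from collections import Counter

class PrefetchCache:
    """Toy SuperFetch: keep the N most frequently launched apps preloaded."""

    def __init__(self, slots=2):
        self.slots = slots           # how many apps fit in the preload budget
        self.launches = Counter()    # per-application launch counts
        self.preloaded = set()       # the current working set held in RAM

    def record_launch(self, app):
        self.launches[app] += 1
        # Recompute the working set: the apps most likely to be needed next
        # are loaded into memory before the user asks for them.
        self.preloaded = {name for name, _ in self.launches.most_common(self.slots)}

cache = PrefetchCache(slots=2)
for app in ["outlook", "word", "outlook", "word", "outlook"]:
    cache.record_launch(app)

print(cache.preloaded)  # the two most-used applications stay resident
```

When one of the preloaded applications is launched, it starts from RAM rather than from disk, which is where the perceived speed gain comes from.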
With the exception of ReadyBoost, all performance improvements are completely transparent.
Other speed enhancements include self-tuning performance and diagnostics that will detect and
attempt to self-correct any performance-related issue. In fact, if you configure it correctly, the
Vista PC will be so fast that you’ll have to change your habits. If you’re used to stepping up to your
PC in the morning, turning it on and then going for coffee, you’ll have to learn to go for coffee
first, because you won’t have that interminable lag between pressing the power button and
actually facing a logon prompt. This is bound to force some habit changes.
Our advice: run Vista on a 64-bit PC if you can, especially a multi-core system. Since x64 operating
systems are still in their infancy, nobody has figured out how to slow them down yet. Now is the time
to take advantage of all of the speed you can get.
Vista also includes a whole series of improvements at the security level, improvements that
both users and IT professionals can take advantage of. The most significant is User Account
Control (UAC). With UAC, Windows Vista allows administrators to execute most processes
in the context of a standard user, elevating privileges only by consent; standard users,
conversely, must provide administrative credentials to elevate. When you are logged on as an
administrator and a process requires elevation, a dialog box requesting your response is
presented. If you agree, the process is allowed; if you disagree, the process is denied. When you
are logged on as a standard user, UAC will prompt you for an appropriate administrative account
and its password each time an administrative task is attempted.
UAC prompts are impossible to miss because the entire desktop is dimmed when UAC is
activated and only the UAC dialog box is displayed clearly (see Figure 1.4). UAC will require
significant adaptation since it is a completely new way to work as a standard user. When
computer systems are properly prepared and managed, corporate end users should very rarely if
ever face a UAC prompt, yet they will benefit from the anti-malware protection running as a
standard user provides.
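The two behaviors described above reduce to a small decision flow: no process runs elevated without either an administrator’s consent or a standard user’s supplied administrative credentials. A schematic sketch (a hypothetical helper, not a real Windows API):

```python
def uac_decision(user_is_admin, needs_elevation, approved):
    """Schematic UAC outcome (hypothetical helper, not a real Windows API).

    'approved' means the admin clicked consent on the dimmed secure desktop,
    or the standard user supplied valid administrative credentials.
    """
    if not needs_elevation:
        return "runs as standard user"   # the default for every process
    if not approved:
        return "denied"                  # no consent or credentials: blocked
    return "elevated by consent" if user_is_admin else "elevated with admin credentials"

print(uac_decision(user_is_admin=True, needs_elevation=True, approved=True))
# elevated by consent
print(uac_decision(user_is_admin=False, needs_elevation=True, approved=False))
# denied
```

The key point is the first branch: on a well-managed system, almost everything a corporate user does falls through it without ever reaching a prompt.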
In addition to UAC, Vista supports Fast User Switching even in a domain. If someone wants or
needs to use a computer that is already in use, there is no need to log off the current user, just
switch users, perform your tasks and then log off. The existing user’s session is still live and the
user may not even know someone else used their computer.
Figure 1.4: Using an application requiring UAC elevation.
Another security element is Windows service hardening. By default, Vista monitors services for
abnormal activity in the file system, the registry and network connections. If any untoward
behavior occurs, Vista simply shuts down the service. In addition, Vista includes a series of
anti-malware technologies: Windows Defender, an anti-spyware detection and removal engine;
a new version of Windows Firewall that users can access and control to some degree; an
updated version of the Windows Security Center; and a new engine for security updates. As
previously mentioned, Vista also sports a new version of IE, version 7, which has several
improvements in terms of ease of use (tabbed browsing, RSS feed integration and improved
Web page printing), but its major improvements are in secure Web browsing. The following list
highlights IE 7’s new features:
New phishing Web site identification and reporting tools.
A clearer way to determine whether you are connected to a Web site using either the
Secure Sockets Layer (SSL) or Transport Layer Security (TLS).
ActiveX opt-in, which will easily let you determine which ActiveX controls are safe to run.
Single click deletion of all browsing history.
Automatic protection from domain name spoofing.
Control of uniform resource locators (URL) processing to avoid URL parsing exploits.
Protected mode, isolating itself from other applications running in the OS.
As this list shows, this new, sand-boxed version of IE provides safer, cleaner Internet browsing
(see Figure 1.5).
The desktop experience is also vastly improved. The new Aero interface provides glass-like
windows that show the background desktop when in use. In addition, the integrated support for
three-dimensional graphics paves the way for a very high-quality graphical experience. If you’re
going to use a graphical user interface (GUI), why not make it the best possible experience?
That’s exactly what Vista delivers.
Figure 1.5: Internet Explorer version 7 provides a better experience than any previous version.
Finally, end users will find that their networking experience in Windows Vista will also be vastly
improved. Not only has the entire TCP/IP stack been rewritten in Vista to significantly increase
networking speeds, but also Vista now fully supports IPv6, the next version of the TCP/IP
networking protocol. IPv6 automatically gives each computer its own unique IP address—unique
in the entire world, that is—and protects it through integrated IP security (IPSec) support.
Whether you are running IPv4 or IPv6, the new networking stack will herald vast speed
improvements, especially between PCs running Vista. Servers will have to wait for the next
version of the Windows Server operating system to access these benefits, but users will notice
speed improvements even in mixed networks.
To download a PowerPoint presentation on new Vista features, go to
Despite this new feature set, user training for Vista should be relatively painless because
everyone should be already familiar with the standard Windows UI. This doesn’t mean that no
training is required, but that training could easily be performed through short presentations and
demonstrations, either live or online, which take users through each of the new capabilities.
Compelling Vista Features for IT Pros
There are several compelling features in Windows Vista for IT professionals. The first is in
operating system deployment. Vista supports three deployment scenarios by default: new
computer (even a bare metal system), PC upgrade, and PC replacement. These three scenarios
are supported by several technological improvements. All versions of Vista now ship in the new
Windows Image Format using the .WIM extension.
WIM images are file-based system images as opposed to some other tools that use a more
traditional sector-based disk image. The WIM file-based image uses a single instance store to
include only one copy of any shared file between different images. This allows Microsoft to ship
multiple editions of Vista, even Windows Server Codenamed “Longhorn” when it becomes
available, on the same DVD. The version you install is based on the product identification (PID)
number you use during installation. This also means that when you create images for
deployment, you can use the same image to deploy to workstations, mobile systems and tablet
PCs. WIM images are mountable as NTFS volumes and therefore can be edited without having
to recreate reference computers.
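The single-instance store concept is easy to illustrate with content hashing: identical files shared between editions are stored once, and each image is just a manifest of references. A conceptual sketch (the actual WIM format is considerably more involved):

```python
import hashlib

class SingleInstanceStore:
    """Store each unique file body once; an image is a manifest of references."""

    def __init__(self):
        self.blobs = {}   # content hash -> file bytes (stored exactly once)
        self.images = {}  # image name   -> {path: content hash}

    def add_image(self, name, files):
        manifest = {}
        for path, data in files.items():
            digest = hashlib.sha256(data).hexdigest()
            self.blobs[digest] = data   # duplicate content collapses here
            manifest[path] = digest
        self.images[name] = manifest

store = SingleInstanceStore()
shared = b"kernel file identical across editions"
store.add_image("Vista Business", {"ntoskrnl.exe": shared, "business.dll": b"b"})
store.add_image("Vista Ultimate", {"ntoskrnl.exe": shared, "ultimate.dll": b"u"})

# Two images and four file entries, but only three stored file bodies.
print(len(store.blobs))  # 3
```

This is why several editions fit on a single DVD: the files they have in common exist only once in the store, while each edition keeps its own manifest.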
Microsoft is not the first to use file-based images. Altiris Deployment Solution supports both file- and
sector-based images though most users will opt for the file-based .IMG format as it is much easier to
manage in the long run.
Microsoft has created several tools to work with WIM images: ImageX, a WIM image
generator; Windows System Image Manager, a WIM image editor; and Windows Deployment
Services (WDS), which replaces the previous Remote Installation Services (RIS) to support bare
metal WIM deployment. In addition, Microsoft is releasing Windows Preinstallation
Environment (Windows PE) to the general public. Windows PE is not new, but has been updated
in support of Vista deployments. Previously, Windows PE was available only to customers
acquiring either enterprise licenses of Windows XP or software assurance for the OS.
To learn more on Vista deployment technologies, go to
Existing disk imaging technologies can still be used to deploy Windows desktops in
organizations. Tools such as Symantec Ghost, Altiris Deployment Solution, Acronis True Image,
and so on continue to be viable options for OS deployment. One significant advantage some of
these tools have over the WIM image format is their support for multicast image deployment.
Multicasting allows you to deploy the same image to multiple PCs at the same time using the
same data stream. By contrast, unicast deployments send unique copies of the operating system
to each target device. If you are deploying to 50 PCs at a time, multicasting will require a single
data stream whereas unicasting will require 50 copies of the same data stream. You do the math.
Even if you can keep your WIM image as thin as possible, without support for multicasting, it
will take longer to deploy than with other imaging tools.
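Doing the math the authors suggest is straightforward: unicast traffic grows linearly with the number of target PCs, while multicast traffic stays flat. A quick back-of-the-envelope calculation, assuming a hypothetical 2.5 GB image:

```python
def deployment_traffic_gb(image_size_gb, target_pcs, multicast):
    """Total gigabytes placed on the wire for one deployment wave."""
    if multicast:
        return image_size_gb           # one stream shared by every target
    return image_size_gb * target_pcs  # one full copy per target

image = 2.5  # hypothetical image size in GB
print(deployment_traffic_gb(image, 50, multicast=True))   # 2.5
print(deployment_traffic_gb(image, 50, multicast=False))  # 125.0
```

At 50 targets, the unicast wave moves fifty times the data of the multicast one, which is why multicast support matters so much for large deployment waves.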
Chapter 7 will provide more information about disk imaging.
Don’t get us wrong. WIM imaging is still a boon to organizations that don’t have any system
management software in place—though you should consider using a massive deployment of this
kind to introduce new systems management software. However, if you still aren’t ready for a
full-fledged management system, go ahead and rely on WIM and the other Microsoft tools which
support it. You’ll find that most of them are command-line tools that require some scripting
expertise to use. You’ll also find that each tool is self-contained and offers little
integration with the others. Your best bet is to look for systems management tools that can interact
with both WIM and sector-based imaging, then choose which suits your needs best. Several
manufacturers will release updated versions of their management suites to fully support Vista in
the coming months.
Another area where Vista will assist IT professionals is in management technologies. Once
again, Microsoft is investing in Group Policy, its flagship system management technology. Vista
sports almost 800 more policy settings than Windows XP, bringing the total number of settings
to 2450. In Vista, you can use Group Policy Objects (GPO) to manage everything from the
firewall to wireless networks to removable storage devices, diagnostics and power settings. In
addition, Vista can support multiple local GPOs, letting you specify different settings for
different users on the same computer. This feature is useful for shared computer environments
such as kiosks or manufacturing floors.
Microsoft recently purchased Desktop Standard, a provider of advanced Group Policy tools. This
acquisition adds a change control system for all Group Policies including roll-back, software
update delivery through Group Policy, the ability to create conditional policies, the ability to
modify any registry setting, the ability to control non-Microsoft applications through Group
Policy, and much more to Microsoft’s existing Group Policy toolkit. Microsoft is making these
tools, along with others, available through the Microsoft Desktop Optimization Pack for Software
Assurance.
For more information on Microsoft Desktop Optimization Pack for Software Assurance products, go to
Vista also includes a vastly improved Task Scheduler (see Figure 1.6) which is now much more
of a job manager than just a scheduler. Tasks can now be configured to launch based on specific
events. For example, if a hard disk drive runs out of space, you can automatically launch a task
to clean up temporary files and alert desktop support teams. Tasks can also run when a user locks
or unlocks the computer, letting you specify custom scripts to execute even if the user does not
reboot their machine. Tasks can run when the computer is idle for a specified period of time.
Finally, tasks can be conditional, running when another task is activated or when a specific
condition is met. These new event-based tasks give you much more power over remote systems.
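The low-disk-space example can be pictured as an event-to-task binding: several tasks subscribe to an event, and all of them fire when it occurs. A toy dispatcher in Python (the real Task Scheduler subscribes to Event Log entries; these names are illustrative only):

```python
class ToyScheduler:
    """Toy event-driven scheduler: tasks fire when their trigger event occurs."""

    def __init__(self):
        self.triggers = {}  # event name -> list of task callables

    def register(self, event, task):
        self.triggers.setdefault(event, []).append(task)

    def raise_event(self, event):
        # Run every task bound to this event, in registration order.
        return [task() for task in self.triggers.get(event, [])]

scheduler = ToyScheduler()
scheduler.register("disk-space-low", lambda: "clean temp files")
scheduler.register("disk-space-low", lambda: "alert desktop support")
scheduler.register("workstation-locked", lambda: "run maintenance script")

print(scheduler.raise_event("disk-space-low"))
# ['clean temp files', 'alert desktop support']
```

One event, two remedial actions: the cleanup runs and support is alerted in a single step, which is the pattern the new Task Scheduler makes possible on remote systems.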
Backup is also much easier in Vista. Vista sports a brand new backup system that will take
complete system images and store them in Microsoft’s virtual hard disk drive (.VHD) format.
Backup images can now be mounted with Microsoft Virtual PC, Virtual Server, VMware
Workstation or VMware Server, modified if needed, and restored to the original system. This is a
powerful new backup system that anyone can work with.
Vista includes a new implementation of Windows Remote Management (WinRM), Microsoft’s
implementation of the WS-Management Web services standard, letting you manage and administer
systems through common Hypertext Transfer Protocol (HTTP) ports. And of course, Vista
relies on the Microsoft Management Console (MMC) version 3.0, providing a much more
comprehensive and pliable task-oriented management interface.
Figure 1.6: The new Task Scheduler provides much more functionality than previous versions.
Even managing updates has been made easier in Vista through the Restart Manager. During
update installations, if a restart is required, Restart Manager will automatically take note of open
applications, save their state and restart them once the system is back up and running. This
makes the update process completely transparent to end users. Of course, applications have to be
written to take advantage of this feature.
Print management has been improved in Vista through the use of a new Print Management
console which gives you centralized access to all the printers in your organization, letting you
view their status in one single place. Print Management can manage printers for both Vista and
Windows XP, letting you manage mixed environments. It also lets you automatically deploy
printers to groups of computers through Group Policy. The Print Management console supports
delegation of authority; as a result, you can assign aspects of this task to support personnel,
providing much needed relief for administrators.
To learn more on Vista management technologies, go to
The Event Viewer has also received important upgrades. Many more events are now found in the
Windows Event Log, especially events that were previously stored in separate text files. The
Event Viewer now supports event forwarding, so you can configure custom events, for example,
security events, to be forwarded to a central location. This feature is very useful in support of
compliance requirements such as those needed to comply with the Sarbanes-Oxley Act. Events
are now much more informative and provide more meaningful resolution information. Events are
linked to the Task Scheduler so that you can use an event to generate a new task in one single
step. This facilitates event management and troubleshooting for administrators.
Windows Vista also sports several security improvements. Because of UAC, the built-in
administrator account on a Vista PC is disabled by default. This does not mean that it is secure,
as every administrative account should have a strong password whether it is disabled or not.
UAC also provides registry and file system virtualization to let applications that require
machine-level access be automatically redirected to user-level access. This provides better
application compatibility than even Windows XP because older applications will run without any
changes to their structure. In addition, Vista includes Windows Resource Protection (WRP)
which protects both the registry and the file system from unauthorized changes. WRP works in
conjunction with UAC to ensure that only authorized changes occur on the PC.
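File system virtualization amounts to transparent path redirection: a legacy write aimed at a protected machine-wide location is silently diverted to a per-user virtual store. A simplified sketch (the actual redirection rules and store location are more detailed than shown):

```python
PROTECTED = ("C:/Program Files", "C:/Windows")

def virtualize(path, user):
    """Redirect writes aimed at protected machine-level locations
    to a per-user virtual store, so legacy apps keep working."""
    for root in PROTECTED:
        if path.startswith(root):
            suffix = path[len("C:/"):]
            return f"C:/Users/{user}/AppData/Local/VirtualStore/{suffix}"
    return path  # ordinary user-writable paths are untouched

print(virtualize("C:/Program Files/LegacyApp/settings.ini", "druest"))
# C:/Users/druest/AppData/Local/VirtualStore/Program Files/LegacyApp/settings.ini
print(virtualize("C:/Users/druest/Documents/report.doc", "druest"))
# C:/Users/druest/Documents/report.doc
```

The legacy application believes it wrote to Program Files; in reality each user gets a private copy, which is why older software runs unchanged without machine-level rights.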
Though it is not recommended, you can disable UAC. This can be done in one of two ways. Through
the Control Panel | User Accounts, you will find a control to Turn User Account Control on or off. The
second way is through the Local Security Policy under Security Settings | Local Policies | Security
Options where you will find nine policies that affect UAC (see Figure 1.7). Think long and hard before
you decide to do this as it may endanger your systems by exposing them to malicious behavior.
Perhaps the best way to turn this off is to use the setting User Account Control: Behavior of the
elevation prompt for standard users: Automatically deny elevation requests. This would also be an
excellent opportunity to use multiple Local Security Policies, one for non-technical standard users that
have this setting and one for technical staff that allows them to request elevation rights.
To test your applications and determine if they will properly operate in standard user mode, test them
with the Standard User Analyzer (SUA) which will tell you which elevated rights the application
requires. Obtain SUA from
Figure 1.7: Group Policy settings for UAC.
For network protection, Vista relies on the Windows Firewall with Advanced Security (see
Figure 1.8). End users only have access to the default Windows Firewall that is found in the
Control Panel, but administrators have access to a more advanced version through a custom
MMC. This allows you to completely control both inbound and outbound connections on any
PC. It also automatically restricts resources if they behave abnormally. For example, if a service
that is designed to send messages on a specific port tries to send messages over another port,
Windows Firewall will prevent the message from leaving the computer, possibly preventing the
spread of malware.
Figure 1.8: Windows Firewall with Advanced Security provides very comprehensive protection for each PC.
One of the most famous security features of Vista is BitLocker Drive Encryption. BitLocker
encrypts the entire contents of the system drive, protecting not only user data but also the entire
OS. BitLocker relies on the Trusted Protection Module (TPM) version 1.2—a special hardware
module—to store decryption keys. If a TPM chip is not available, keys can be stored on USB
flash drives, but this is obviously less secure because many users might store both the USB key
and the computer in the same bag. Lose the bag and the entire system can be compromised.
BitLocker is only available in the Enterprise and Ultimate editions and requires two partitions on
the system disk: an unencrypted one for the boot process and one for the encrypted operating system. This
consideration will be vital during system preparation for deployment. Considering that according
to Times Magazine, 1,600 mobile computers are lost or stolen every day in the U.S., BitLocker
will be a boon to businesses of all sizes.
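A minimal sketch of the BitLocker prerequisites just described, assuming the edition, partition and TPM requirements as stated above (treat the exact rules as assumptions to confirm against Microsoft's documentation):

```python
def bitlocker_ready(edition: str, partitions: int,
                    has_tpm_12: bool, allow_usb_key: bool) -> bool:
    """Check the prerequisites described above: an Enterprise or Ultimate
    edition, a separate boot partition, and a TPM 1.2 chip, with a USB
    startup key as the weaker fallback when no TPM is present."""
    return (edition in {"Enterprise", "Ultimate"}
            and partitions >= 2
            and (has_tpm_12 or allow_usb_key))
```

A PC that fails this check is a candidate for repartitioning or replacement before deployment, which is why the two-partition layout must be planned into your system preparation.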
By default, Windows Vista includes Network Access Protection (NAP), the ability to
quarantine computers that do not meet specific health parameters as outlined by your security
policy. In fact, because NAP is really a server-based technology—it requires servers to define the
health policy, create and maintain the quarantine network, and provide security and other updates
to client computers—Vista really only includes the NAP client. But, because the NAP server
components will not be released by Microsoft until Windows Server codenamed "Longhorn" is
released—about a year from now—you might think that this is not much of a feature. You’re
right, unless of course you rely on another product to provide this level of protection.
Microsoft has worked with Cisco Systems to make sure that its NAP will be fully compatible
with the Cisco Network Admission Control (NAC). If system quarantine and system health
validation is important to your organization, you can deploy Vista today along with Cisco’s NAC
server components to protect your network. Because Vista includes a built-in NAP client that is
interoperable with Cisco’s NAC, it will automatically be able to take advantage of this feature—
other client operating systems would require an additional Cisco client to be deployed. Then,
when Microsoft releases its new server code, you can simply integrate its features into your
already protected network.
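The quarantine decision at the heart of NAP and NAC can be sketched as a health-policy check: a client must pass every check to get full access, otherwise it is placed on the quarantine network for remediation. The specific checks listed here are hypothetical examples of what a health policy might require:

```python
# Hypothetical health policy; a real NAP/NAC policy is defined server-side.
HEALTH_CHECKS = ("firewall_on", "antivirus_current", "patches_current")

def network_access(client_state: dict) -> str:
    """Grant full access only when every health check passes; any missing
    or failed check sends the client to the quarantine network."""
    healthy = all(client_state.get(check, False) for check in HEALTH_CHECKS)
    return "full-access" if healthy else "quarantine"
```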
To learn more on NAP and NAC, go to or
At press time, it is unclear whether the integrated Vista client is in the original release of Windows
Vista. Make sure you have this integrated client before building your NAC environment.
Vista provides a better management story than its predecessors. But simplifying desktop
management isn’t just a matter of new features in an OS. It requires careful thought and
planning, preparations that must be performed during the setup of the migration project. This is
just another reason why massive desktop migration projects are an excellent vehicle for a move
towards service-oriented management of all your systems.
To learn more on Vista’s security features, go to
Compelling Vista Features for Developers
Windows Vista is built on managed code, relying on a new programming model to provide a
completely updated desktop platform. Developers will want to take advantage of the new feature
set in Windows Vista to create new, three-dimensional applications that rely on this new model.
Developers and IT Professionals alike will want to take a very close look at the Windows Vista
Compatibility Cookbook, a document that outlines all of the compatibility changes between Vista and
Windows XP. The cookbook can be found at:
In addition, developers may want to take advantage of the additional resources available on the
Innovate on Windows Vista portal. The portal can be found at:
Windows Vista is built on three core technologies. The first is the Windows Presentation
Foundation (WPF). WPF, formerly codenamed ‘Avalon’, is a series of components that allow
developers to build high-quality user interface experiences. It blends together documents, media,
and both traditional and alternative forms of input such as the Tablet PC. It links to the new
Windows Color System Microsoft developed for Vista to produce higher quality output to both
the screen and print media and it provides integration points into the Windows shell to enhance
the experience of the user. Developers should definitely take the time to examine how WPF can
help them provide better information to users.
For more information about Windows Presentation Foundation, go to Videos on WPF can be found at by searching
for the words ‘presentation foundation’.
The second important technology that is included in Vista for developers is the Windows
Communication Foundation (WCF). WCF, formerly codenamed 'Indigo', is designed to assist in
the construction of connected systems, Microsoft’s version of service-oriented architectures
(SOA). WCF is built around common Web Services to secure transactions on the Web. It covers
transports, security components, messaging patterns, networking topologies, hosting models and
software encodings to provide the basis for new transaction models. Although it is being
delivered directly in Vista, WCF will also be available for previous versions of Windows, such
as XP and Windows Server 2003.
For more information on the Windows Communication Foundation, go to
Finally, Vista also includes WinFX or, as it is now named, the .NET Framework version 3.0.
This is Vista’s managed code programming model and is an extension of the previous versions
of the Framework. In fact, the new version of the Framework brings together all of the
components built around WinFX into one single programming model. This includes the
Windows Communication Foundation, the Windows Presentation Foundation, the Windows
Color System, CardSpace, and the Windows Workflow Foundation and merges them into one single
rapid application development environment. Organizations of all sizes will want to look to this
version of the Framework for all Windows-based or Web Service-based development that
pertains to Vista. Of course, Vista also supports traditional development through unmanaged
code such as C++ or Dynamic HTML and JavaScript. The choice of which language to use will
depend on the type of tool or functionality you want to develop.
For more information on the .NET Framework version 3.0, go to
These feature descriptions are by no means exhaustive. They are designed to give you a taste of
what Vista will offer your organization. You will need to examine the entire Vista feature set to
determine how your organization can benefit most from it, but at least with
the features outlined here, you will already have an idea of where to look.
Examining the feature set is one of the most significant aspects of building a business case for
migration. This is part of the first task anyone must perform in any migration project. After all, if
the business case isn’t approved, there won’t be much of a migration. This is why business cases
are a very important aspect of any migration project and should be your starting point.
Creating the Migration Business Case
Business cases are structured documents that provide technical, financial and management
information related to a significant undertaking that will take the form of a project in an
organization. One of the purposes of a business case is to assess the readiness of the organization
to proceed with the recommended change. The business case structure is fairly stable and should
include the following items:
Executive summary
Background information
o Current issues
o Preparation team
Description of the upcoming project
Proposed approaches and alternatives
o Issues to be addressed
o Recommended approaches
o Potential alternatives and expected results
Impacts of the project on current operations
o Business environment
o Technology
o Risks
Cost/benefit analysis
Projected schedule
Verification mechanisms
Implementation strategies
Required approvals
A sample business case for a Vista Migration project can be found here. A one-time registration is
required to access this business case. Click on the New members button to register.
There are several reasons to introduce change into an organization. The first is that change is
constant and is at the very core of the IT world. Therefore introducing change offers the
opportunity to control change and adapt it to your needs. The second reason is to reduce risk. For
example, the U.S. government's move to an IPv6 infrastructure is an excellent example
of a change introduced to reduce risk: IPv6 mandates support for IPsec and offers a globally
unique address for each host connected to a network, something IPv4 cannot match given
its reliance on private addressing and network address translation (NAT) to connect
multiple hosts to the Internet. A third reason is to
add value or functionality to the services you offer. If you are a private corporation, then this
goal is to increase profits, but if you are an organization that operates without profits, adding
value may simply mean providing a higher-quality service to your clientele.
The advantage of controlling change before it controls you is that you are in the driver’s seat.
This is why we recommend migrating to Windows Vista in a structured and controlled manner.
As any migration project aims to reduce risk while increasing value, you should consider the
following four reasons for moving to Vista:
1. Take advantage of a new, more sophisticated and more secure desktop operating system.
2. (Optional) Take advantage of 64-bit processing, increasing speed and security for the desktop.
3. (Optional) Take advantage of the integrated IPv6 capabilities in Vista and use the
migration project as a carrier to migrate to a new generation of TCP/IP.
4. (Optional) Take advantage of the migration project to improve desktop management
operations and move to a completely managed infrastructure.
Three of these justifications are optional, but each of them adds significant value and helps
organizations move into the 21st century in terms of desktop infrastructures.
Use a structured process to design your business case. We use the IDEA, an eight-step process
that helps gather the information required to populate the different sections of the business case
document (see Figure 1.9).
Figure 1.9: The Vista IDEA, a structured approach for the elaboration of the Vista Migration Business Case.
The IDEA begins with a review of the current situation. For example, one excellent source of
information for this review is a collection of desktop-related issues reported to the help desk. It
then moves on to a market analysis, or a review of the market press and advisory sources on their
opinion of Vista. Step three is the definition of the goals of the project. Step four focuses on risk
analysis and mitigation. Step five looks at possible infrastructure changes to support the project.
Step six outlines the deployment approach. Step seven lists required resources—human, financial
and technical. Step eight looks to the future beyond the implementation of the requested change.
Each of these is taken into consideration in the template business case you will find online.
Migration Strategies
As you move forward with your migration project, you’ll need to consider different migration
approaches. Several are available:
Let’s just install, users be damned! Surprisingly, organizations actually use this approach
in many migration projects. This is especially evident in projects that are completely
driven by IT and do not include feedback or input from other parts of the organization.
Unfortunately, these projects are doomed to fail because of their naive approach.
Hardware Refresh. The hardware refresh approach is a better approach because it relies
on attrition to make the change. Organizations normally have three to four year PC
replacement programs and can rely on these programs to carry the migration.
Unfortunately, too many organizations that choose this approach fare no better than with
the "Let's just install" approach because there is little structure in their replacement
process. Just look at organizations that buy new servers with Windows Server 2003
R2 pre-installed, only to replace it with the previous version of Windows Server. Given
the small differences between the functionality of R2 versus WS03, it is really surprising
that anyone would even assign any resources to this effort. If these organizations fear the
small changes in WS03 R2, then how will they fare with the massive changes in
Windows Vista?
Gradual Migrations. This approach looks at specific workloads and addresses increasing
performance demands by migrating them to the new operating system. This way, only
key user populations are migrated, one at a time, making the project more manageable
because it runs in specific chunks. This makes the approach much more viable
than the first two.
Forklift Migrations. This approach tends to be the best for several reasons. A forklift
migration means that every single desktop and mobile system in the organization is
migrated in the same timeframe. This vastly reduces the need to manage mixed
environments and reduces the strain on the help desk as well as operations teams. The
management of mixed environments is controlled by the project and lasts only for a
specific time. In addition, forklift projects, because of their very nature, are often much
more structured than gradual or hardware refresh projects. For this reason, they include
more comprehensive communications and training programs, two tools that can either
make or break massive change management projects. Also, because a forklift project
changes everything at the same time, it often tends to be architected more thoroughly.
This is because massive changes need more comprehensive forethought and planning
than gradual changes.
Carrier Projects. This approach relies on carriers—key functionality upgrades in the IT
infrastructure—to introduce new changes. x64 computing or IPv6 are excellent examples
of carriers that could support the introduction of a new OS within your organization. Both
require careful planning and architecting to provide the promised benefits. Carrier
projects are very similar to forklift migrations in nature because everything is changed at
the same time.
The best recommendation is to use a combination of the last two approaches if possible.
Consider your user base. You have to prepare for a Vista migration in the next year or so,
otherwise users will start asking for it and you won't have a ready response for them. User
sophistication increases steadily as more and more users grow up with access to
computer systems. Better to be prepared and have your answers ready when users start
grumbling for the Vista features they have at home. Using a combination approach allows you to
be prepared ahead of time and control user expectations rather than react to them.
No organization can reliably expect any migration approach based on attrition to work properly unless
they have performed a proper forklift migration beforehand. This is because forklift migrations, by
their very nature, allow organizations to clean house and implement proper management structures.
Most organizations that invest in these types of migrations and maintain a stable client network once
the project is complete will be able to profitably use other migration strategies for Vista.
If you haven’t performed a proper forklift beforehand and you still want to use an approach based on
attrition, then make sure the design and engineering portions of your migration project are fully
completed before starting to deploy Vista PCs. This guide will assist you in both these aspects no
matter which migration strategy you choose.
Making a case for a flexible infrastructure
Migrating to Vista should also be the time to upgrade and modify your PC management
infrastructure. If you already have a proper management infrastructure, you’ll be good to move
forward with a migration because it will already be an integrated process in your normal
operations. But if you don’t, consider this.
According to Gartner, an independent research firm, organizations implementing a well-managed
PC platform can reduce costs by up to 40 percent. Organizations using some form of
management save 14 percent over unmanaged systems (see Figure 1.10).
Figure 1.10: According to Gartner, well-managed PCs can bring a 40 percent reduction in costs.
Gartner Research note entitled “Use Best Practices to Reduce Desktop PC TCO, 2005-2006 Update”
by Michael A. Silver and Federica Troni, published 8 December, 2005.
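As a worked example of these figures, applied to a hypothetical $1,000 annual per-PC cost for an unmanaged fleet (the baseline is invented; only the percentages come from the Gartner note):

```python
# Hypothetical baseline; only the savings percentages are from Gartner.
UNMANAGED_COST = 1000.0  # per PC, per year

SAVINGS = {"some management": 0.14, "well-managed": 0.40}

def managed_cost(level: str) -> float:
    """Annual per-PC cost after applying the quoted savings percentage."""
    return UNMANAGED_COST * (1.0 - SAVINGS[level])
```

Under these assumptions a well-managed PC costs $600 per year against the $1,000 baseline, and a partially managed PC $860; multiplied across thousands of PCs, the gap funds a large part of the migration itself.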
Well-managed PCs are locked-down PCs, and users only have access to standard user accounts.
Vista makes great strides in this area because it includes a complete revision of what is and what
isn’t locked down for standard users. Two examples are the fact that standard users can now
control the time zone on their PCs as well as control the different power modes the PC is running
under; two areas that are completely locked out from them in Windows XP. The Gartner results
were compiled for Windows XP Service Pack 2 (SP2) running on microchips supporting the no
execute (NX) instruction set to protect against malicious code.
In the same paper, Gartner also claims that, according to their research, the ideal PC
replacement cycle is three years, as organizations running well-managed PCs reap the most
benefits and the best return on investment (ROI) when replacing one third of their PCs every
year. This is a very good argument for making a significant PC purchase for the Vista migration,
selecting new x64 systems running multi-core microchips and the latest in video hardware.
Well-managed infrastructures involve the use of a complete PC management toolkit as well as a
reliance on properly trained resources both in operations and at the end user level. Locking down
PCs is a significant challenge as many organizations still allow users to run with administrative
rights. There is no justification for doing so today, unless, of course, you have money to throw
away. Surprisingly, lots of organizations do. According to a report conducted in 2006 by Applied
Research West Inc. for Symantec, 48 percent of American IT managers say that concern about
employees disabling or reconfiguring security systems is increasing. Does this mean that
48 percent or more of employees are running with administrative rights? If so, you need to learn
to just say ‘no’.
Surely we don’t have to spell out that the best way to manage desktops is to rely on three year
leases, frequent renewals and less diversity in computer models. Doing this is just common
sense. If you’re not, then you should seriously consider it. And if you’re not doing this because
of some resistance in upper management then show them this chapter. It’s time to get it right. As
one of our customers would say, “If it’s in print, it’s got to be true, right?”
In one of our most successful forklift migration projects, the client organization reduced PC-related
support calls by a factor of seven through the implementation of a well-managed and locked down
environment. This is just one example of the benefits you can reap from a well-organized and
planned migration.
At the risk of repeating ourselves, if you want to move to the new Vista feature set and at the
same time reap the benefits of locked-down systems and well-managed infrastructures, stay
tuned. We will provide you with the tools we have relied on to make all of our migration projects
a complete success. The toolkit provided in this guide will of course be focused on Vista but will
also be useful for any migration. It will include processes, sample documents, management tool
recommendations, structured and detailed instruction sets, and everything else you’ll need to
finally make migrations just like any other process in your IT operations. If you already have
proper desktop management systems in place, then you can also profit from this guide, reviewing
your own practices to make sure they are best of breed. Migrations should never be at issue
Chapter 2: Planning the Migration
So you’ve decided to migrate to Windows Vista. Good for you! Your goal now is to design the
very best migration possible and to ensure that the network remains stable during and after the
migration has been performed. Successful migration projects rely on a proper structure. For this
reason, this chapter will focus on how the migration project itself will be organized. To do this,
we will begin by looking at the actual tasks required to perform a migration on a PC in order to
help you understand what a tangible migration process involves.
But just because you know how to migrate one PC doesn't mean that the migration of dozens,
hundreds or even thousands of PCs will just work. Migration projects are massive undertakings
that require the coordination of hundreds of tasks—tasks that must be correctly orchestrated to be
delivered on time, under budget and with a very high level of quality. To meet these
requirements, you need a change management strategy, one that will help you blueprint each step
of the process and ensure that everything works as planned.
Finally, you need to launch the migration project. Armed with the structured change
management strategy, you can begin to lay out the steps you need to undertake to complete this
project. That’s why we close the chapter with a look at the actual project plan for the migration
and the process you need to put it in place.
Actual Migration Tasks
Migrating a PC and making that migration transparent to a user is hard work. Not that the work
is actually hard, but rather, it is the coordination of multiple sequential tasks and the intricate
linking of each task that is often difficult. You need to make sure that predecessor tasks are
performed accurately, otherwise all of the tasks will fall out of sequence and the result will be
less than successful. This is the old computer principle of GIGO or ‘garbage in, garbage out’ and
it applies even more when the migration addresses more than one PC. You need to understand
which tasks are required and how they fit together.
When you perform the migration, you need to coordinate seven specific tasks and link them
together (see Figure 2.1):
1. Hardware and software readiness assessment
2. Personality capture
3. Creation and deployment of base images
4. Application packaging
5. Software installation
6. Personality restoration
7. Migration status reporting
Each task is performed in sequence and each has its own particularities.
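The sequencing requirement, that each task only runs on the output of a successful predecessor, can be sketched as follows. Task names follow the list above; the success flags are illustrative:

```python
# The seven tasks of the PC migration cycle, in order.
MIGRATION_TASKS = [
    "readiness assessment",
    "personality capture",
    "base image deployment",
    "application packaging",
    "software installation",
    "personality restoration",
    "status reporting",
]

def run_migration(results: dict) -> list:
    """Run tasks in order, stopping at the first failure so that later
    tasks never consume a bad predecessor's output (garbage in,
    garbage out)."""
    completed = []
    for task in MIGRATION_TASKS:
        if not results.get(task, False):
            break
        completed.append(task)
    return completed
```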
The PC Migration Cycle was originally based on Altiris’ Six Step Migration Process. More on the
Altiris process can be found here.
Figure 2.1: The PC migration cycle.
This guide is about Vista migration. Many organizations will choose to include a new productivity suite
such as Microsoft Office 2007 in the migration to a major new OS. Though this guide will not cover
the migration to Office 2007, you can rely on the same processes and procedures to add it to your own project.
Hardware and Software Readiness Assessment
Before you can begin to even consider installing a new operating system (OS) on any PC, you
need to assess its current content and structure, first to see if it will support the new operating
system's requirements, and second to identify any issues with hardware drivers, installed applications
and expected system purpose. One of the best ways to perform this assessment is to rely on a
checklist that leads you through the requirements analysis. If you’re performing the migration for
a single PC, you’ll face a reduced list of questions, but if you’re performing the migration for a
number of PCs—any number, you’ll face additional questions—questions that have to do with
volume and address issues of location.
Of course, if you’re working on a single PC, all you really need to do is to run a compatibility
analyzer on it. As described in Chapter 1, the best way to do this for single PCs is to use the
Vista Upgrade Advisor (VUA). This tool runs against your system and provides you with
information on the latest requirements and compatible components. But running this on multiple
PCs—any more than 10 actually—is less than practical. That's why you'll need to run a system
check using either a deployed inventory tool, the Application Compatibility Toolkit (ACT) version 5.0,
or the Windows Vista Readiness Assessment (VRA), the latter two from Microsoft. Note that both VUA
and ACT require administrative credentials to install.
More information about VUA, VRA and ACT can be found at;;
If you have an existing system management tool, you can use its own readiness assessment
reports to determine whether your systems are up to snuff. For example, tools such as Altiris
Inventory Solution or LANDesk Inventory Manager will include updated reports that allow you
to assess both hardware and software readiness in one single pass.
More information on Altiris Inventory Solution can be found at:
More information on LANDesk Inventory Manager can be found at:
Whichever tool you choose to rely on, you should concentrate on the following questions:
1. Do the PCs meet the system requirements for the new operating system? System
requirements for Windows Vista were outlined in Chapter 1.
a. What proportion of your systems meets these requirements?
b. What proportion can meet them with minimal component changes?
c. Which components require changes—random access memory (RAM), video
cards, displays, printers, others?
d. Would you rather change components or replace the system in order to aim for a
standardized set of PCs in the network? Is this within budget?
e. How many systems do not meet the minimum requirements? Are you ready to
replace them? Is this within budget?
2. Are there any driver issues on any system?
a. Which components are missing drivers?
b. Do the manufacturers of these components offer updated drivers?
c. If not, which do?
d. Are there compatible drivers you can rely on until the manufacturer updates them?
3. If hardware upgrades or replacements are required, should they be sized to meet the
requirements of any other project, present or future, since change is required anyway?
4. If PCs are to be replaced, are they in the same location or do you need to cover multiple locations?
5. Which operating systems are running on your systems? Are they homogeneous or
heterogeneous—will you have to upgrade from many different operating systems?
Note that Windows XP, Windows Media Center Edition and Windows XP Tablet Edition will all
upgrade to Windows Vista as there is no longer a distinction between these operating systems as far
as the Vista upgrade process is concerned. Of course, you will need the appropriate edition of Vista—
Business, Enterprise or Ultimate—to perform the upgrade.
6. Are there any licensing costs associated with the migration? If yes, how will they be
addressed? Remember that Vista uses a completely new licensing model (discussed
further on) that requires you to put in place internal validation systems if you don’t want
all of your computers to be activated through the Microsoft Web site.
More on this licensing model will also be covered in Chapter 5 as we discuss infrastructure changes
to support the migration.
7. Which applications are deployed on your network?
a. Will they run on Windows Vista?
b. Better yet, are they actually in use? Do you have metering information to tell you
if an installed application is actually being used by that PC’s principal user?
c. If you are also considering moving to the x64 platform, will your current
applications run on it?
d. Are there any applications that should be deployed to every single user? If so,
should these applications be included in the PC image you will prepare?
8. What about user data?
a. Is it already located on servers and protected?
b. Is it located on each PC and will it need to be protected during the OS migration?
c. Is there any data that can be filtered out, especially non-corporate data, to speed
the replacement process?
9. Are any infrastructure changes required for the migration?
a. Is the project running on its own or will your organization include other
changes—changes such as deploying a desktop deployment or management suite,
moving to a new directory structure, including mergers and acquisitions—with
the migration?
b. Will the migration only include the operating system or are some core
applications—Microsoft Office 2007, for example—also included?
c. Is the network able to support in-place migrations? Are additional migration
servers required?
d. If not, will you use a PC rotation process to bring each PC into a staging area and
upgrade or replace it?
e. Which is the best schedule for the migration? If multiple locations are involved,
will the change be location-based or based on lines of business?
10. Which non-IT processes will be required?
a. Do you need to start a communications plan for the migration?
b. Is training required along with the OS deployment?
c. Are any other administrative processes impacted by the migration?
The purpose of this questionnaire is to determine what is required to properly plan the migration.
You might have a series of other questions to address your own particular situation, but this list
is a good starting point. It should provide you with the financial, technical, administrative and
organizational answers that will form the nucleus of the project.
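The first question above, applied across a fleet, amounts to bucketing each PC as ready, upgradeable or due for replacement. A sketch follows; the 800 MHz and 512 MB figures are the commonly published Vista baseline minimums and should be confirmed against the requirements outlined in Chapter 1:

```python
# Assumed Vista baseline minimums; confirm against Chapter 1's figures.
MIN_CPU_MHZ = 800
MIN_RAM_MB = 512

def readiness(cpu_mhz: int, ram_mb: int, ram_upgradeable: bool) -> str:
    """Bucket a PC: ready as-is, ready after a RAM upgrade, or replace."""
    if cpu_mhz >= MIN_CPU_MHZ and ram_mb >= MIN_RAM_MB:
        return "ready"
    if cpu_mhz >= MIN_CPU_MHZ and ram_upgradeable:
        return "upgrade RAM"
    return "replace"
```

Run against the cursory inventory, the three bucket counts answer questions 1a through 1e and feed directly into the project budget.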
Much of this information is gathered in inventories—inventories that need to be automated. Two
inventories are required. The first occurs at the very beginning of the project and is deemed a
cursory inventory. Here you’re focused on numbers of PCs, numbers of users, PC types and
configurations, locations of each PC and a general inventory of the software in your network.
The purpose of this inventory is to give you a broad general idea of the scope of the project,
allow you to generate a budget, and produce the business case that was discussed in Chapter 1.
The second inventory is much more detailed and is focused on the content of each individual PC.
This content, detailing software, data, utilities and optional hardware components, will be used to
actually replace the operating system on the PC and return it to the same state, or rather as
similar a state as possible, once the migration is complete. In order to reduce the migration effort,
this detailed inventory needs to be validated. Validation can occur through metering data if it is
available. This will tell you if the software that is installed on the PC is actually used by the
principal user of that PC. If you do not have access to metering data, then the inventory needs to
be validated with the users themselves. This is a tough and time-consuming job.
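The metering-based validation just described amounts to a simple triage: installed software that metering shows in use migrates as-is, while everything else goes on a list to confirm with the user rather than being dropped blindly. A sketch, with illustrative field names:

```python
def triage_software(installed: list, used_recently: set) -> dict:
    """Split the detailed inventory into software to migrate (metering
    shows it in use) and software to validate with the principal user."""
    keep = [s for s in installed if s in used_recently]
    confirm = [s for s in installed if s not in used_recently]
    return {"migrate": keep, "validate with user": confirm}
```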
Traditionally, organizations do not manage PCs very well. They tend to load programs on each
PC based on the requirements of the principal user; but as users evolve in the organization and
moves, adds and changes (MAC) are performed on the PC, repurposing it for new users, software
keeps being piled onto the PC. After a period of time, the PC begins to contain products that are
not required by its current principal user. This can cause a lot of issues, especially in terms of
licensing. Most software licenses are based on installation state: if a piece of software is
installed, it requires a license, whether it is in use or not. Yet, so many organizations do not
remove unused software when they repurpose the PC.
Beyond the licensing and non-compliance issues, having unused software on a PC becomes a
real problem when it is time for an OS migration. That’s because the project needs to take the
time to validate the presence of each piece of software or utility on a PC with the principal user.
This is never easy as users often have no idea what the software or utility actually does and
cannot confirm whether they use it or not. We have been in countless migration projects where
software inventories were validated with users and software removed from the migrated system
only to find out that they actually did use the software but didn’t know the name of the product or even that they were using it. This is usually because they access the software through another interface or as part of another task in their work.
It is extremely important to validate these detailed inventories by actually walking through a
typical day’s activities with the user instead of just producing a list of items and asking them if
they use it or not.
More on inventories and detailed validations will be covered in Chapters 5 and 9 as we discuss how
to perform them and how to transfer the validated inventory to the deployment or release manager.
So, in the end you’ll want to produce automated inventories as much as possible. Make sure you
use a good inventory tool. Too many products produce only a list of all application components,
registered dynamic link libraries (DLL), executables and so on, delivering an incomprehensible
list of content for each PC. For this reason, make sure you define the right requirements before you evaluate an inventory tool.
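To illustrate the difference between a raw component dump and a usable inventory, here is a minimal sketch that rolls file-level records up into the short per-PC application list a project team can actually validate with users. The record fields and product names are illustrative assumptions, not the output of any specific inventory tool.

```python
# Hypothetical sketch: roll a raw component inventory (DLLs, EXEs and so on)
# up into a comprehensible per-PC application list. Field names are assumed.

def summarize_inventory(raw_records):
    """Group raw file-level records by their parent product name."""
    applications = {}
    for record in raw_records:
        product = record.get("product", "Unknown")
        applications[product] = applications.get(product, 0) + 1
    # Return product names only; component counts stay available if needed.
    return sorted(applications)

raw = [
    {"file": "winword.exe", "product": "Microsoft Office Word"},
    {"file": "mso.dll", "product": "Microsoft Office Word"},
    {"file": "acad.exe", "product": "AutoCAD"},
]
```

Three file records collapse into two products, which is the kind of list a user can actually confirm or reject.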
For a review of asset management products, look up “Cover your Assets”, an article examining four
asset management products which can help with your readiness assessments.
Chapter 2
Personality Capture
When inventories are complete, you’ll want to move on to the capture of the personality of each
PC. PC personalities include the personal settings of each user that logs onto the computer,
including their profile, favorites, data, application settings, desktops, mapped drives and so on.
You also want to include any custom application properties such as custom dictionaries,
templates, email data files and so forth.
This information is vital to the success of the migration project. For a user, there is nothing
worse than having to recreate this personality once the OS migration has been performed. This
step is related to the quality of service or service level that the project will provide. An OS
migration is an IT task—a task that should be as transparent as possible to the end user. After all,
it is IT that is responsible for all computing processes, not users. For users, a migration project or
the delivery of a new tool should be business as usual and the disruption of business processes
should be as minimized as possible. For this reason, it is so important to properly design
computer systems, clearly identifying what belongs to users, what belongs to the corporation and
what IT is responsible for. It is the portion that belongs to the user that makes up the PC’s personality.
This division of responsibility will be covered in Chapter 7 as we discuss building locked-down
computer images.
In addition, personality captures must be evaluated for each user profile you find on the
computer. Most organizations assign a PC to a principal user—at least it is easier to manage PCs
when they are assigned to one single principal user. So, during the personality capture, you need
to evaluate if existing profiles are in use or not, rationalizing unused profiles so that you do not
restore ‘garbage’ to the new PC.
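The profile rationalization described above can be sketched as a simple filter on last-logon dates. The 90-day idle threshold and the profile records below are illustrative assumptions; a real project would pick its own cutoff and pull the dates from its inventory tool.

```python
from datetime import date, timedelta

# Hypothetical sketch: flag which local profiles are worth restoring so that
# 'garbage' profiles are not carried over to the new PC. Threshold assumed.

def profiles_to_restore(profiles, today, max_idle_days=90):
    """Keep profiles used recently; the rest become candidates for archiving."""
    cutoff = today - timedelta(days=max_idle_days)
    return [p["user"] for p in profiles if p["last_logon"] >= cutoff]

profiles = [
    {"user": "jsmith", "last_logon": date(2007, 1, 10)},
    {"user": "old_temp", "last_logon": date(2005, 6, 1)},
]
```

Note that, as the section goes on to say, a date filter alone can misclassify users on prolonged leave, which is why capturing everything and archiving is the safer policy.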
Some organizations however do not have the luxury of assigning a principal user to PCs. These
include organizations that have 24/7 service offerings or organizations that are in the
manufacturing industry and have shifts that work around the clock. In these cases, several users
access the same PC. Organizations often use generic accounts for this so that only one profile
needs to be captured. This may change with Vista since it supports Fast User Switching, even
when the PC is joined to a domain. Generic accounts are a security risk because you can never guarantee who is logged on with the account. Organizations should consider removing generic accounts from their production networks.
Several tools are available for personality captures. And of course, the best tools will also
identify the validity of a profile and whether it should be moved or not. The best policy is to
capture all profiles, store them on the network, archive them after the project and restore only
those profiles that are actually in use. This way, you have access to old profiles that you may
have unwittingly marked as obsolete when they were not. This can happen when users are on
prolonged leaves of absence or on vacation during the project timeframe. This policy costs the
project in storage space and network bandwidth, but provides the very best quality of service.
We have seen projects where vital personalities have been lost—personalities of users with
several years’ worth of data stored locally—because this service level was not put in place. It is better to be safe than sorry.
Many tools will be discussed in Chapter 8 as we cover the capture and replacement of personalities
on computer systems.
Of course, the very best policy is to rely on network features such as folder redirection to map
user profiles and data to networked drives where they can be protected at all times. This will be
one of the considerations for infrastructure modifications in preparation of the migration.
For more information on personality captures for Windows Vista, look up Migrating to Windows Vista
through the User State Migration Tool. Captures can also be performed with a number of third-party
tools such as Altiris PC Transplant Solution, LANDesk Management Suite or CA Unicenter Desktop
Creating and Deploying Base Images
The very best value for organizations relying on PC-based technologies comes from reducing diversity: reducing the number of hardware configurations, buying PC systems in lots with identical configurations, reducing the number of operating systems in use and rationalizing everything from applications to utilities to external devices. Too many organizations, especially organizations that are brought together through acquisitions and mergers, have several different antivirus systems, several different deployment systems, or several different monitoring technologies. The principle of standardization can never be overstated when it comes to IT and systems management. Many organizations find themselves very happy to deal with heterogeneous systems, but usually, these organizations are driven by IT, not by the business.
Today’s organizations need to drive this point home with their IT groups: business is the ultimate
driver for the use of technology, nothing else. That’s why you should push for a single global image for your computer systems worldwide. With Vista, this is not a dream, but a reality.
Vista no longer has issues with different hardware abstraction layers (HAL)—the component that
is particular to each PC configuration and is absolutely required to run Windows. Vista can now
apply different HALs at load time from a single system image, removing the need to create and
maintain one image per HAL. Of course, today, it is possible, through scripts and other
mechanisms, to create a single disk-based system image and deploy it to multiple computer
configurations, but few organizations take the time to make this effort. With Vista, this effort
will be greatly reduced.
The PC system image needs to be designed from the ground up. It should contain everything that
is generic to a PC and should ensure that the PC is secure and protected at first boot. You need to
determine if you will create an image with the OS only or whether it should contain all generic
applications as well. There is a strong argument for both positions and in the end, the most
determining factors will be how you stage systems and which tool you use to do it. For example,
if your staging tool supports multi-casting—sending one single stream of data over the network
to multiple endpoints and thereby greatly reducing network traffic when deploying multiple
computers—you’ll probably look to the ‘thick’ image, the image that contains as much as
possible since deploying one or fifty PCs at once will take the same time. If not, you’ll want to
stick to ‘thin’ images as each deployment will take considerably more time.
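The multicast argument above comes down to simple arithmetic: with one stream serving every target, image size rather than PC count drives the transfer time. All figures in this sketch (image size, link speed, PC count) are illustrative assumptions.

```python
# Back-of-the-envelope sketch of the multicast trade-off described above.
# With multicasting, one stream serves all targets; without it, each PC
# pulls its own copy of the image. All numbers are illustrative.

def transfer_minutes(image_gb, pc_count, network_gb_per_min, multicast):
    """Rough network transfer time for deploying one image to pc_count PCs."""
    streams = 1 if multicast else pc_count
    return image_gb * streams / network_gb_per_min

# A thick 8 GB image to 50 PCs over a link moving 1 GB per minute:
with_multicast = transfer_minutes(8, 50, 1.0, multicast=True)      # 8 minutes
without_multicast = transfer_minutes(8, 50, 1.0, multicast=False)  # 400 minutes
```

This is why, without multicast support, thin images that push less data per PC become the more attractive design.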
Chapter 7 will provide a detailed discussion on thick versus thin images.
Windows Vista introduces new concepts and new ways to work with system images based on
three installation scenarios: upgrade, refresh and replace (see Figure 2.2). In the upgrade
scenario, you can now use Vista to actually perform a non-destructive upgrade of the system,
replacing the complete contents of the operating system without affecting user data or
application settings. This is the first time that the upgrade scenario actually works, but it remains
to be seen if organizations are ready to move to this installation format.
In the refresh scenario, you capture personality data, wipe or erase the computer’s disk drive and
reload a brand new operating system, reload applications and restore personality data. This is the
scenario most organizations are familiar with. There is a lot to be said for it as it reformats the
disk drive and creates a new, pristine environment. But, with Vista, reformatting the hard disk
drive may no longer be necessary. Because it is installed as an image by default, Vista has the
ability to non-destructively replace an operating system either 32- or 64-bit without destroying
existing files. Then, because it includes an automated disk defragmenter, it will automatically
begin to defragment the hard disk drive using low priority I/O once the installation is complete.
It may be worth examining this scenario as it offers the ability to use local disk drives for
personality captures and restores and would save time by cutting down network traffic.
In the replace scenario, you capture data from an old computer and then perform a bare metal
installation on a new computer—a computer with no preexisting OS—reloading applications and
data once the installation is complete. This is also a favorite among organizations and IT
professionals. Whichever scenario you decide to use, make sure your original system image is
properly designed and ready for prime time before you begin testing with it. Finally, you’ll need
an image management strategy, allowing you to update the image as new patches and hotfixes
are provided for the components it contains.
Figure 2.2: Windows Vista supports three migration scenarios.
Application Packaging
Inventories will be very useful to determine not only how but also which applications will be deployed to end systems once the OS is loaded. Ideally, all applications will be rationalized,
removing duplicates from the network and of course, removing any application that is made
obsolete by the features of the new OS you are deploying, leaving you with a reduced list of
applications to deploy. Deployment methods will be discussed in the next step, but before you
can move on to deployment, you need to package all applications for automation.
Software packaging is one of the most important aspects of any deployment or migration project,
yet it is often overlooked or addressed in cursory manners. Application management is a
lifecycle which involves several steps, one of which is the management of all automated
installations of applications. After all, if users are focused on the business, you don’t want them
to lose productivity by trying to run software installations on their systems. Besides, in a locked-down environment, they just won’t be able to. So you need to automate installations, and these
automations need to be standardized. You just can’t afford to deploy software packages that run
several different installation methods. If you decide to install applications locally on each
system, then you’ll want to package for the Windows Installer Service (WIS) as it provides a
single installation strategy for all locally installed software in Windows Vista.
You can also choose this migration project as a vehicle for change in the way applications are
managed on desktops. If you do, then you should consider application virtualization—a process
that installs all applications within a sandbox on the PC and protects the operating system at all
times from any changes required by the application. Software virtualization offers many
advantages, not the least of which is the ability to run conflicting applications on the same PC. Since
all applications are sandboxed both from the OS and from each other, two versions of the same
application can run on the same system at the same time. More on this will be covered as we
discuss application preparation, but for now, the important aspect of this topic is that software
needs to be packaged through standard processes—processes that need to be transferred to
normal production operations once the project is complete.
For more information on software packaging in general, visit the Articles page on the Resolutions
Web site and look for the Application Lifecycle Management section.
For more information on software virtualization, look up our white paper on Software Virtualization.
Software Installation
Software installation refers to the deployment of any software that is required by a user to
complete their function within the organization. Once again, PC design is very important here. If
you choose to rely on thin images, you will need to deploy more software to your users. If you
choose to rely on thick images, fewer applications will require deployment. Whichever model
you decide to use, you’ll need to properly identify the targets for software deployment. Ideally,
these targets will group users into role-based categories, letting you deploy appropriate software
based on the role the user plays within the organization. This way, finance users will receive all
of the financial applications; manufacturing users will receive manufacturing applications and so
on. Grouping applications doesn’t only help in deployment, it also helps in software management
and package preparation as you only have to make sure that the applications in a given group
will work together and don’t need to verify the application against any others. You inevitably
have to deploy applications on an ad hoc basis as well, but this should be to a greatly reduced
group of users.
One caveat: if you want to minimize application management tasks, you’ll have to make sure
that your deployment tool supports the automated removal of applications, not only when the
application is upgraded from one version to another, but also when the application falls out of the
scope of deployment. For example, if you want to avoid finding applications that don’t belong to
a user role on a system and you want to make sure you have a tight control over your licensing
costs, you’ll want to make sure you implement a software management system that automates
the process of changing a PC role, removing obsolete applications and applying appropriate
applications automatically.
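The role-change automation described here reduces to set arithmetic over role-based application groups: what the PC has but the new role doesn’t need gets removed, and the rest gets installed. The role names and application lists below are illustrative assumptions.

```python
# Hypothetical sketch: when a PC changes roles, compute what to remove and
# what to install from role-based application groups. All names are assumed.

ROLE_SOFTWARE = {
    "finance": {"GL Client", "Spreadsheet Add-in", "Office"},
    "manufacturing": {"MES Client", "CAD Viewer", "Office"},
}

def role_change(installed, new_role):
    """Return (to_remove, to_install) when a PC moves to new_role."""
    target = ROLE_SOFTWARE[new_role]
    to_remove = sorted(installed - target)   # out of scope for the new role
    to_install = sorted(target - installed)  # required but not yet present
    return to_remove, to_install
```

Applications common to both roles (here, Office) are left untouched, which is what keeps licensing counts and deployment traffic down.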
Personality Restoration
Once the applications have been restored to a system along with the new OS, you’re ready for
the return of the system’s personality. Ideally, this process will be automated along with the
preceding steps. Remember that profiles also need to be rationalized and only valid profiles
should be restored. You’ll probably want to change the way the personality is designed on users’
systems. For example, Vista completely changes the way user information is stored on the PC,
no longer relying on the Documents and Settings folder but instead on the new Users folder. In addition, because of the built-in search engine in Vista, users should no longer rely on
desktops cluttered with unused icons, but should rely on more efficient access methods for
applications and documents.
If you do change the design or the way anything in the personality functions, make sure that you
communicate all changes to users. There is nothing worse for a user than to find something missing without any explanation.
Migration Status Reporting
The last step in the actual migration process is status reporting. If you’re working on a single
system, you’ll want to make sure that it is stable, up-to-date in terms of security patches and
running properly. You’ll want to verify that all applications have been installed and that the
user’s personality has been restored as it should be.
If, on the other hand, you’re working with a multitude of systems, migrating several locations in
parallel and managing all of the deployments from one central location, you’ll want to make sure
that each and every success is properly reported back to central administration for the project.
You also want to track any issues that may arise, any migration failures and any unsatisfied
users. Ideally, you will have prepared a special help desk team for the support of the migration
and you may even have gone to the trouble of providing local coaching resources for
migrated users. You’ll want to track all of these issues if you want to control project costs.
In addition, you want to be able to have a single view of the progress of the project at all times.
How many PCs are deployed? How many are left? How many are being deployed right now?
Are we on target? Do we need to modify our schedules? Deployment management requires
intricate planning and lots of administrative hard work as users move and shift in the
organization during the deployment. Having the right deployment reporting tool is one of the
most important aspects of project management for migrations. Make sure you get the right
information and have access to it when you need it.
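The single project-wide progress view described above can be sketched as a roll-up of per-site migration counts into the questions the project manager keeps asking: how many deployed, how many left, are we on target? The site data below is an illustrative assumption.

```python
# Hypothetical sketch of a deployment status roll-up across sites migrated
# in parallel. Site names and counts are illustrative.

def progress_report(sites):
    """Summarize deployed, in-progress and remaining PCs across all sites."""
    deployed = sum(s["deployed"] for s in sites)
    in_progress = sum(s["in_progress"] for s in sites)
    total = sum(s["total"] for s in sites)
    return {
        "deployed": deployed,
        "in_progress": in_progress,
        "remaining": total - deployed - in_progress,
        "percent_done": round(100 * deployed / total, 1),
    }

sites = [
    {"name": "HQ", "total": 400, "deployed": 300, "in_progress": 20},
    {"name": "Plant", "total": 100, "deployed": 25, "in_progress": 5},
]
```

A real reporting tool would also track failures and help desk tickets per site, since those drive the project cost figures used in the post-mortem.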
You’ll also want to rely on this information for the project post-mortem. This will help you
generate return on investment (ROI) reports and demonstrate the value of the massive effort your
organization just undertook.
An Extended View of Migrations
The migration of an actual PC is a seven-step process, but managing change on this scale in
organizations of any size is much more than just running through the seven steps described
previously. The project requires proper structure and organization. Few sources exist for this
type of information—information that guides you through the change management process
implied in OS migrations. Because of this, Microsoft has produced the Solution Accelerator for
Business Desktop Deployment 2007 (BDD).
The Microsoft BDD can be found here.
The documentation contained in Microsoft’s BDD attempts to put each and every step of the
deployment program into perspective. It relies on the Microsoft Solutions Framework (MSF) to
do so, but unfortunately, MSF is a framework that is designed for software development, not
systems integration. It makes sense that Microsoft would rely on MSF since it is an organization
whose focus is nothing else but software development.
Also, the BDD guidance is divided into multiple feature teams. In our experience, there are really two major feature teams: one that is in charge of preparing all aspects of the desktop and its construction, and one that is in charge of preparing all aspects pertaining to server components for the deployment. Within these two teams, you’ll find applications specialists, security experts,
management gurus, and other technical staff with all sorts of expertise; in fact, the broader the
experience of your teams, the better. Managing two teams makes it easier to control the outcome
of the project and keeps everyone in better contact.
But, nevertheless, BDD is a very valuable source of information on deployments and there is no
doubt that the tools it includes are a great help for organizations of all sizes. We’d like to
supplement its guidance with our own deployment management strategy—a strategy which is
focused on the integration, not development, tasks that make an OS migration project work. This
system has been in use for over eight years and has been successfully adopted by a multitude of
firms. It helps identify the critical path for any migration project.
Using the QUOTE© System
The QUOTE© System is a structured approach for change management which is based on five
phases (see Figure 2.3):
Phase 1, Question: This is the diagnostics or
discovery phase. It is focused on a situation review,
followed by a needs analysis. You need to identify the
factors of the change situation. First is the fact that
change is coming. Second is the scope of the change.
Third is where and how the change originated. Fourth
is what you already have in place and how it will be
affected. Question phases always involve an inventory
of some sort.
Phase 2, Understand: This is the planning phase and
focuses on initial solution development. Now that you
have all of the information on the change in hand, you
can start planning how to influence the change. If the
change is technological in nature—for example, a
desktop deployment—it is essential to fully
understand how the new technology feature set will
impact and improve the elements and processes
already in place within the organization. This stage
may include one or several proofs of concept (POC)—working mock-ups of the solution to come. These POCs will help flesh out the details of the solution you will deploy because they provide additional, and often non-technical, evaluations of the way the solution is structured. This phase also finalizes the scope of the project and its structure.
Figure 2.3: The QUOTE system.
Phase 3, Organize: This begins the execution portion of the project. It is focused on
preparing the deployment mechanisms and the deployed content; in fact, all of the
engineering tasks of the project. It includes everything from detailed conception of the
solution to complete validation, testing all of the facets of the solution to ensure quality of
service for the next phase. Since changes such as OS deployments tend to affect the
organization as a whole, it is vital to validate all the elements of the solution before they
are implemented.
Phase 4, Transfer: This is the second part of the execution portion of the project and
focuses on manufacturing processes or massive operations because of the repetitive
nature of deployments. All systems are “Go!” and a production environment is used to
deploy the solution on a massive scale. Begin with the pilot project to validate that
everything works as planned, and if not, make final modifications. Then, when all
solutions have been fully tested and adapted, you begin to transfer them to the sections of
the organization that are impacted by the change.
Phase 5, Evaluate: Finally, you need to evaluate the change you implemented in order to
best view how it can continue to evolve within your organization. In addition, the
project’s performance as a whole is evaluated against its original objectives. Rules are
derived for future projects and adjustments are made to future approaches. This is also
the stage where the solution passes into ongoing operations. You need to transfer project
mechanisms into production, finalize all support training and transfers of knowledge and
report on your successes. Because change works in cycles, you’ll also need to re-initiate
the discovery phase to ensure the continued evolution of the solutions you deployed. This
re-initiation is crucial if you want to fully benefit from the change you implemented.
Many projects fail because this re-initiation is not performed and the project is closed off
once it is complete.
This approach can be used in almost any change situation, personal or organizational. Using this
approach within organizations ensures that organizational and especially, IT change is always
structured. It greatly simplifies and reduces the impact of change in organizations of all sizes. In
Chapter 1, we argued that operating system deployments are part and parcel of PC lifecycles. In
IT, change is constant as manufacturers constantly improve on existing products and new
products abound. You should rely on the QUOTE System to manage all IT change.
Forklift versus Hardware Refresh Migration Strategies
As discussed in Chapter 1, there are several different strategies for migrating to Vista. Most
organizations will use one of two main strategies: forklift—migrating all systems at once; or hardware
refresh—migrating systems as they are replaced through purchasing programs. Both methods are
valid, but in order to make your migration a success, you’ll need to properly prepare the design and
engineering aspects of the project. This means that no matter which migration method you use, you’ll
still need to run through the Question, Understand, and Organize phases of the QUOTE. Where the
strategies differ is in the Transfer and Evaluate phases. With forklift migrations, the Transfer phase is
a massive effort affecting all PCs in the organization. With hardware refresh migrations, the Transfer
phase actually carries on for several years until each and every PC is replaced.
Whichever one you use, don’t make the mistake of not being thoroughly prepared before delivering
the new OS to your users.
Because of its nature, the QUOTE System is ideal for the preparation and review of deployment
projects. Here’s how it works.
Question Phase — Problem Statement
As mentioned earlier, the Question Phase relies on inventorying the organization (see Figure
2.4). You need to collect and review several different inventories and refine them if the
information you need isn’t available. If you’re already running a modern infrastructure, you’ll always be ready for change because you’ll either have this information in hand or have already deployed the tools that can provide it. If you
don’t have an up-to-date infrastructure yet, then you’ll have to determine how you’re going to
generate these inventories. And, make sure you take advantage of the project to put in place an
automated inventory tracking process, one that will be part of your default infrastructure after
your deployment is complete.
The Question Phase
This phase includes:
• Situation Review including:
Internal Inventories/Problem Identification
Hardware Readiness Assessment
Software Readiness Assessment
Infrastructure Readiness Assessment
Organizational Readiness Assessment
• Needs Analysis
Initial Solution Design
Selection of New Technologies
Review of New Technology Features
• Initial Project Evaluation Including:
Potential Benefits
Initial Budget Estimates (~25% error)
Initial Project Scope
• Cost/Benefit Analysis
• Production and Presentation of the Business Case
• Go/No Go for the Project
In deployment projects, the Question Phase is usually initiated by a series of potential situations:
Software Product Owners—if you have them—have identified that new versions of the
technologies they are responsible for are available and meet the organization’s criteria for
project initiation.
Users have identified that new technologies are available, are using them at home and
they want to take advantage of them at work.
Business objectives or business processes have changed and new technologies are
required to support them.
The organization is ready for change and requires the introduction of new technologies to
promote it.
A major manufacturer or software publisher has released a new version of their product.
A major manufacturer or software publisher has announced the end of life for a specific
product you are currently using and will no longer provide support for that product.
In the case of a Vista migration, the project initiation can originate from a number of these
factors, especially end-of-life or user demand. It is good practice for you to initiate the project
before these factors come into play. When you do so, you’ll want to begin with a situation
review and then follow with a needs analysis to identify the benefits that could be acquired
through the integration of Vista’s host of new features. The situation review needs to focus on
readiness assessments—hardware, software, infrastructure and organizational—that will indicate
whether your assets are compliant with Vista’s requirements.
Then, the needs analysis will focus on a list of potential benefits including existing problems that
could be solved by the technology. Remember to include the Help Desk database in your
assessments; it is a very valuable source of information for this type of project. The needs
analysis should include a scope definition—who will be affected, what the impact of the change
will be and how you wish to approach it—and initial budget estimates with plus or minus 25
percent error margin. This is enough to generate the relative size of the budget you will require.
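The plus-or-minus 25 percent margin translates directly into a budget bracket. The $500,000 estimate in this sketch is an illustrative assumption.

```python
# Minimal sketch of the initial budget range implied by the plus-or-minus
# 25 percent error margin described above. The estimate is illustrative.

def budget_range(estimate, margin=0.25):
    """Return the (low, high) bracket implied by the error margin."""
    return estimate * (1 - margin), estimate * (1 + margin)

low, high = budget_range(500_000)  # 375,000 to 625,000
```

The Understand phase later narrows the same calculation to a 10 percent margin (`budget_range(estimate, margin=0.10)`) for the official project budget.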
You should also generate an initial solution design, outlining how the new technology will
improve and address issues you are facing today and how these benefits can turn into real value
for the organization. Keep an eye on the potential return on investment your organization will
reap when it moves to the proposed solution. This will help you create first a risk analysis, then a cost/benefit analysis, to ensure that the organization will profit from the change.
Figure 2.4: The Question Phase focuses on inventories and initial solution design.
Where to Spend your Dollars
Research has shown that those who spend little time engineering their solutions before moving on to deployment have much more painful experiences than those who do. Remember the 80/20 rule: spend lots of time getting all of the engineering right and your project will run more smoothly.
The Value of Automation
Make sure you aim for a fully-automated deployment. In our experience, costs for manual
deployments average between $500 and $1,000 per PC. Semi-automated or Lite Touch
deployments, to use a term from the BDD, average about $350 per PC. Microsoft’s goal for fully
automated or Zero Touch deployments is less than $50 per PC. That’s why you should aim for a fully automated deployment strategy. That’s also why this is the strategy recommended by this guide.
All of the previous elements will become part of one major deliverable: the business case.
Prepare your business case with care and have it validated by the other members of your initial
team. Then, when you feel that you have everything in place, present the business case to the
stakeholders. This phase culminates in a Go/No Go decision for the project.
Remember to go to the DGVM’s companion Web site to obtain the tools provided in Chapter 1: the
Business Case Template and the List of New Features for Vista. These tools will greatly help your
own project’s Question Phase.
Understand Phase — Exploring the Impacts of Migration
The Understand Phase is used to begin mastering the new technologies to be implemented and
identify solutions to known problems. The first step is to list the features of Windows Vista
which you have decided to implement. Then, using the results of the Needs Analysis created in
the previous phase, you can map features to current issues. This will serve to refine the initial
solution design that was created in the previous phase.
Next is project definition. Once again, you refine the initial project plan and scope prepared in
the previous phase. While the objective of the previous phase was to get the business case
approved, the objective of this phase is to refine all aspects of the project so that it can move
forward on to the other three phases.
Refining the project definition means preparing and generating all of the elements of the project:
Vision, Objectives, Scope, Structure, Teams, Tools and Timelines; all this in order to create the
definitive project plan. This time, the project definition should include a refined budget estimate
to within plus or minus an error margin of 10 percent. This becomes the project’s official budget.
This budget should be supplemented by two additional documents: an evaluation of the potential
impacts of the project on all aspects of your business and the migration strategy. The impact
evaluation should cover items such as work patterns and operational processes, user interaction
with IT Systems, system changes, operational changes, structural changes, and so on. The
summary migration strategy that was presented in the business case is refined to identify the
strategies that will be used for systems deployment, interactions with users during the
deployment, training strategies, remote and local deployments and so on.
Chapter 2
The Understand Phase
This phase includes:
• Secondary Review of the features of the new technology including:
Identification of features to be implemented
Final mapping of features to known issues
• Project Definition including:
Budgets & Impact Analysis
Project Plan
Project Organization Manual
• Initial Communications Program
• Logical Solution Definition including:
Naming and other standards that apply to the solution
Identification of Problem Resolution Strategies
Logical Solution Architecture
Initial Training Solution
Detailed Deployment Strategy
• Initial Acquisitions
• Initial Laboratory Creation including:
Operating and Testing Standards
Preparation and Testing Environments
Initial Tests
Solution Mock-up
Technical Training
• Proof of Concept
• Go/No Go for the Project
Also, now that the project is officially in place, the project’s communications program needs to
be designed and initiated. Its scope depends on the scope of the change the organization will
bear. In the case of Vista deployments, you should make sure you invest in communications
early. It’s never too early to promote comprehensive change projects like an OS migration. Your
objective is to promote good will towards the project and its deliverables. There are several ways
to do this. Don’t make the mistake of underestimating the importance of the communications
plan in this project! Lack of proper communications programs can make even the best
engineered solutions fail because users will resist the change. One good example of the kind of
help the communications program can provide to the project is through the creation of a project
‘welcome kit’—a tool that outlines all of the major points of the project, its structure, its
objectives and the role each person will play in it. With projects of this scale, this kit will quickly
become a vital tool as personnel rotate through the teams.
For a detailed discussion on the requirements of a communications program for your Vista project,
see Communications Plans: a Sample Chapter from Preparing for .NET Enterprise Technologies.
You also need to launch the initial training program and training analysis. Little training is
required at this time and is often limited to basic technical training for the core project team, but
this is the time to begin examining all of the training options that will be required by the
project—end user, technical, support teams, and so on. As with communications, training
programs are critical to the success of a Vista project. Actual user training does not need to be
overwhelming; it needs to be direct and to the point. Often simple presentations outlining new
ways to do things and offered to end users are all you need. Technical training needs to be more
comprehensive, but can, once again, be very creative, running in-house or Web-based programs
to cover just what you need and nothing else. The training program needs to be tightly integrated
to the solution you design. Make sure you budget for this very important task.
At the same time, you can begin to refine the initial solution you outlined in the business case.
This will include the definition of the standards that will guide the architecture of the solution. It will
rely on these standards to identify problem resolution strategies and eventually culminate in the
Solution Architecture. This architecture serves as input into both the communications and
training programs and will provide direction for the engineering work that lies ahead.
Because you need to refine the solution, you’ll need to begin the creation and preparation of the
laboratory you’ll require to support the preparation of the technical elements or engineering work
of the project. You may only need to adapt existing laboratories if they are already in place; if
not, you’ll have to create one from scratch. In either case, you may need to perform some initial
acquisitions in order to populate this laboratory. And, since this laboratory will be used for the
duration of the project, you will need to define its operational and testing standards.
Chapter 3 will outline how to go about preparing the laboratory for the Vista migration. It will focus on
the use of virtualization technologies as these technologies are ideal for the support of laboratory
environments.
It is a good idea at this stage to create a working mock-up of the solution. This mock-up can
greatly assist in budget definitions through cursory testing of existing applications and systems.
For example, with a migration to Windows Vista, you should use the lab to identify what the
acceptable baseline is for existing hardware. Should this baseline be different from that outlined
in the business case, you’ll have to refine the budgets for hardware acquisition. The mock-up can
take the form of a single virtual machine image that can be used to demonstrate new feature sets
to the project stakeholders.
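Lab findings on the hardware baseline can be turned into a simple inventory check. The sketch below uses the published “Vista Premium Ready” figures (1 GHz CPU, 1 GB RAM) as an illustrative baseline; substitute whatever baseline your own lab testing establishes, and note that the machine records are hypothetical:

```python
# Flag inventoried machines that fall below the hardware baseline for
# Vista. Thresholds reflect the "Premium Ready" guidance; your lab
# testing may set different ones.
BASELINE = {"cpu_mhz": 1000, "ram_mb": 1024}

def needs_upgrade(machine):
    """Return the components that fall below the baseline."""
    return [k for k, minimum in BASELINE.items() if machine[k] < minimum]

inventory = [
    {"name": "PC-001", "cpu_mhz": 2400, "ram_mb": 512},
    {"name": "PC-002", "cpu_mhz": 1800, "ram_mb": 2048},
]
for pc in inventory:
    shortfalls = needs_upgrade(pc)
    if shortfalls:
        # These machines feed the refined hardware acquisition budget.
        print(pc["name"], "needs upgrade:", shortfalls)
```

Counting the flagged machines against the assumptions made in the business case tells you whether the hardware budget needs refining.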
The laboratory serves three major roles: solution preparation, testing, and personnel training. The
latter role is as important to the success of the project as the first two. Technical staff requiring
retraining should have scheduled access to the lab so that they can begin playing with the
technologies. Often the best way to give them this access is to, once again, rely on virtual
machines running with the new OS so that they can begin to familiarize themselves with the
coming delivery. It is also vital to schedule this training officially, especially with the core
project team, otherwise no training will ever take place and the technical training program may
have to be outsourced. Keep in mind that one of the main project goals is to ensure that a transfer
of knowledge is performed to all technical staff. You want to keep all new skills in-house at the
end of this project.
The laboratory is called the certification laboratory because it will be used to certify all
engineering aspects of the solution—PC images, application packages, support structures,
deployment procedures, personality captures and restores and so on. And, because several
different teams will be working in this laboratory, it is a great idea to prepare a laboratory
welcome kit that outlines how the lab works and where team members can find the resources
they require to perform their tasks.
Finally, it is often a very good idea to perform a proof of concept to clearly demonstrate the
benefits brought by the migration to the new OS in order to get refined budgets approved. Once
again, the organization must issue a Go/No Go decision for the project since new budget
authorizations are requested (see Figure 2.5).
Figure 2.5: The Understand phase focuses on solution refinement and project preparation.
The Scope of the Proof of Concept
The Proof of Concept (POC) can take a number of different forms. It could be just a mock-up
demonstration to the Project Steering Committee—the management group that oversees the project.
Or, it could take the form of an actual deployment to key team members. In some cases, the POC
can eventually cover about one percent of your user population.
You’ll probably have several POCs—some focusing on individual features, others on portions of the
overall solution, and eventually, a one percent population sample. It is a good idea to treat POCs the
same way you treat any engineering project, with Alpha, Beta, and Gold deliveries.
Organization Phase — Preparing for the Migration
The third phase of the project focuses on finalization of all technical aspects of the deployment
and eventually, preparation to move to the pilot project. This stage involves the bulk of the
engineering work required to perform the deployment. Everything is refined and tested and then
tested over again until it all falls into place as one integrated process. This is the phase that
addresses the actual migration tasks described earlier. In fact, the purpose of this stage is to
engineer and automate each aspect of these tasks.
The Organization Phase
This stage includes:
• Finalization of the Solution Architecture
• Solutions Development (if required)
• Solution Integration Programs including:
Deployment Infrastructure Preparation
Solution Security Elements
Application Compatibility Testing and Packaging/Certification
Automated Installation Mechanisms
User Data or Personality Recovery Mechanisms
Solution Certification
• Solution Support Programs including:
Refined User and Hardware Inventories
Validation of all Inventories with Users
Technical Training Program
End User Training Program Preparation
Special Support Mechanism Preparation
Data/Application Conversion Programs
• Continued Communications
• Deployment Strategy Finalization and Approval
• Final Acquisition Launch
• Pass/Fail Acceptance Testing
This phase also addresses the finalization of the architecture for the solution. This architecture
may need updates as engineering work proceeds and identifies actual processes and procedures
instead of expected logical outcomes.
Solutions focusing on infrastructure changes, such as the Vista deployment, require a lot of
integration and very little development, but developers should be part of the team to offer
support for any automation request through scripts and other measures. They are also called upon
to develop custom components for the solution such as Web tools or documentation libraries.
Another reason to include developers at this stage is that they are one of the final clienteles for
the deployment and their requirements are considerably different from those of normal users.
That’s because they need to have some form of access to elevated privileges in order to perform
their work. Unlike users who will run with standard privileges, developers need to install and test
code as it is being produced. The solution can either be focused on giving them virtual machines
or granting them elevated rights in development domains. Whichever you choose, try to avoid
giving them access to elevated rights accounts in your production domain because this can lead
to both security and administration issues down the road.
This phase is all about the integration programs. These include system image staging
mechanisms, application compatibility testing and repackaging, the definition of data recovery
mechanisms or personality captures and restores, the definition of all of the security elements of
the solution, and the preparation of deployment infrastructures such as mobile staging rooms and
training facilities. A lot of work needs to be performed by the various teams that are engaged in
this technical aspect of the project. After all, for the deployment to work, these mechanisms must
be fully fleshed out. Each aspect of these programs is fully tested and certified before it is ready
for acceptance testing. The bulk of this eBook addresses each of these programs in detail and helps you
understand how to go about preparing them.
The preparation of the support programs is also one of the tasks addressed with this phase.
Several support programs are required. First, you need to validate all of the inventories of each
PC. This validation is essential because you don’t want to be deploying unnecessary products to
end users. This is an arduous administrative task unless you have software usage or metering
data. Quality control is very important at this stage because these validated inventories will
become the production request sheets that will be handed off to the release manager for the
migration of each and every PC in your network. If mistakes are made here, mistakes will be
made during the deployment.
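The validation step can lean on software metering data where it exists. A minimal sketch, with hypothetical product and machine names:

```python
# Cross-check the installed-software inventory against software-metering
# data so unnecessary products are not redeployed. Names are hypothetical.
installed = {"PC-042": {"Office", "Visio", "Project"}}
used_recently = {"PC-042": {"Office"}}  # e.g., run in the last 90 days

def validated_request_sheet(pc):
    """Only products actually in use go on the release manager's sheet."""
    return sorted(installed[pc] & used_recently.get(pc, set()))

print(validated_request_sheet("PC-042"))  # -> ['Office']
```

Products dropped this way (Visio and Project here) should still be confirmed with the user before the sheet is finalized, since metering data can miss rarely used but essential tools.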
This is also the time where the bulk of the technical training program will be performed. Begin
with the personnel on the project team and then move on to other, non-project personnel. Now is
the time to train them because the deployment is not yet engaged and resources only need to deal
with the day-to-day issues they are already used to. Don’t wait until the deployment actually
starts because then, your resources will be more strained as unforeseen issues arise and project
teams are engaged in deployment support activities.
In addition, now is the time to begin the preparation of the end user training program. As the
solution coalesces into a final deliverable, you can begin to see which elements will need to be
covered by such training. Be creative. One great idea would be to include custom training videos
users could view during the deployment and then access later if they need a refresher course.
Another aspect of the project that needs to be covered here is the preparation of special support
mechanisms: Will the regular support team absorb support of the deployment, or should a
special, separate team be put in place? How long will this special team be required? When does a
migrated user move back to the regular support team? What is the project guarantee time period?
The answers to all of these questions form the support mechanisms for the project. In most cases,
deployment support is offered by the project for a period of 20 working days before the user is
returned to recurring support mechanisms. This 20-day period forms the project’s deployment
support guarantee.
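The 20-working-day guarantee translates into a concrete handback date for each user. A sketch of the calculation (statutory holidays would also need to be excluded in practice):

```python
from datetime import date, timedelta

def guarantee_end(migrated_on, working_days=20):
    """Date a migrated user returns to regular support mechanisms."""
    day, remaining = migrated_on, working_days
    while remaining:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 .. Friday=4; skip weekends
            remaining -= 1
    return day

# A user migrated on Monday, March 5, 2007 returns to regular support
# on Monday, April 2, 2007.
print(guarantee_end(date(2007, 3, 5)))  # -> 2007-04-02
```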
Finally, you need to put in place data conversion programs, especially if you decided to include
Microsoft Office into the deployment of your client operating system. This might mean
deploying conversion tools to existing PCs as you deploy the new OS and then, performing a
final data conversion after the deployment is complete. You may also need to perform
application conversions. For example, Microsoft Access databases and custom tools often need
to be upgraded when new versions of the suite are deployed. These conversion programs are as
important as any other aspect of the project because they deal with your most valued asset: your data.
If you find you need to perform application conversions such as those for Microsoft Access, we highly
recommend you redesign these application programs. For more information on gaining control of
Microsoft Access applications, see Decentralized Deployment Strategies.
This is also the time to initiate all of the acquisitions for the project; getting the right licensing
agreements for Windows Vista and for any other software you might choose to deploy, for
example, Microsoft Office Professional 2007. It also means obtaining any hardware upgrade
components or new computer systems to be able to supply the deployment rate in the next phase.
Microsoft Windows Vista Licensing Changes
Microsoft has put a lot of effort into Vista to protect itself from theft or software piracy. Many of
these features are also designed to protect your organization and to ensure that your licensing keys
are not being used outside your organization. These features address the most common problem with
Windows XP volume licenses: theft or misuse of the volume licensing key. After all, you don’t want
your licensing key to be used on computers that no longer belong to you or being used by users who
no longer work in your organization. The new licensing changes address just this type of situation.
Organizations have three options for the acquisition of Windows Vista. The first and simplest is to
acquire retail licenses of the product either with new PCs or through upgrades. In this case, you’ll
probably want to acquire Windows Vista Ultimate Edition as it is the only one that includes all of the
features that make Vista truly great, but it is also the most expensive edition.
Organizations that acquire Vista through volume licensing or software assurance programs have two
more options. The first is to deploy a centralized key management service (KMS) which controls the
activation of Windows operating systems without requiring individual PCs to connect to a Microsoft
Web site. At the time of this writing, KMS can run on either Vista or Windows Server Codenamed
“Longhorn” (WSCL). It may be adapted to Windows Server 2003 in 2007.
To run KMS, organizations need at least 25 PCs running Vista or five WSCL servers
consistently connected to the organization’s network—virtual instances of operating
systems do not count. KMS can support the activation of hundreds of thousands of PCs from one
single KMS device, though organizations should have at least two KMS devices in the network, one
main device and a backup system. Location of KMS devices can be performed through
auto-discovery relying on the DNS service or through direct connections, entering the machine
name and port number for the connection.
Client computers must renew activation by connecting to the KMS device at least once every 180
days. New, un-activated clients will try to contact the KMS every two hours (configurable), and once
activated will attempt to renew their activation every seven days (configurable) to renew their 180-day
lifespan. This 180-day lifespan ensures that any system that leaves your premises with a copy of your
license will eventually expire.
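The renewal arithmetic described above is straightforward; a sketch of the client-side timeline:

```python
from datetime import date, timedelta

# Each successful contact with the KMS (attempted every seven days by
# default) resets the client's 180-day activation countdown.
ACTIVATION_LIFESPAN = timedelta(days=180)

def activation_expires(last_successful_renewal):
    """Date the 180-day activation lifespan runs out."""
    return last_successful_renewal + ACTIVATION_LIFESPAN

# A machine that leaves the network stops renewing, so its activation
# lapses once the 180 days run out.
print(activation_expires(date(2007, 1, 15)))  # -> 2007-07-14
```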
Organizations requiring multiple activations, but with fewer than 25 systems or with special situations
that do not allow them to connect to a KMS, can rely on Multiple Activation Keys (MAK). MAKs are
special activation keys that support individual PC activation with no time limits, or can go through a
MAK Proxy to activate several systems at once.
If the copy of Windows Vista becomes deactivated for some reason, the following features will no
longer work:
• The Windows Aero user interface will no longer operate.
• Windows Defender will no longer remove non-critical threats.
• Windows ReadyBoost will no longer operate.
• The Windows Update Web site will no longer provide downloads.
• Windows will provide persistent notifications that this copy is unlicensed.
Un-activated or de-activated PCs have a 30-day grace period before requiring re-activation. Copies of
Windows that go beyond the grace period enter Reduced Functionality Mode (RFM). In addition to
the reduced functionality listed above, a PC in RFM will:
• Run a default Web browser when the user opens a session.
• Run sessions with no Start Menu, no desktop icons, and a black desktop background.
• Log users out after an hour without warning.
Make sure you take these aspects of licensing into account when engineering your deployment.
Note that Microsoft will be releasing a version of the KMS for Windows Server 2003 in the first half of
2007. You can then run KMS on a server as it should be instead of relying on a Vista PC.
As each of the engineered elements of the solution is released from the solutions integration
programs, it needs to undergo certification—arduous testing programs that validate that the new
component will work under any condition. If it fails certification, the component must return to
the integration center and be modified or updated to meet certification. If it passes certification,
then the product passes on to acceptance testing—testing that is performed by appropriate
clienteles, such as user representatives, local developers, line of business representatives and so
on. These users have a pass or fail authority on each component and will determine if it is ready
for prime time. Once all components pass acceptance testing, they are ready for the final test: the
pilot project (see Figure 2.6).
Since change is a constant in IT, it is often better to lease than to buy. Leasing programs include
automatic upgrades on a scheduled basis. In addition, they provide consistent costs over the years.
Figure 2.6: The Organize phase concentrates on finalizing the technology transfer methodology.
Enterprise Project Planning
It is important to identify existing and ongoing projects in preparation of the next phase because yours
may conflict with other project schedules. If synchronization is possible in delivery schedules, you
should do everything to make it happen. After all, your driving principle is little or no disruption to
business operations.
Transfer Phase — Performing the Pilot Project, then moving on to Deployment
Once all engineering aspects of the project are complete, you need to take the time to complete
the administrative and support aspects of the program before you test actual deployments. This
includes the finalization of the end user training program, the communications program, the
support program and the deployment schedule. Each of these needs to wait until all engineering
is complete before it can be finalized. Don’t try to rush things. Much of this effort can be
performed in parallel with the engineering tasks, but not all of it. You need to reserve a period of
time after engineering finishes, but before the pilot to finalize the content of these programs.
After all, it is hard to finish the training kit or the user guide until you are 100 percent sure of
what the image will look like once it is deployed.
The Transfer Phase
This stage includes:
• Finalization of Deployment Procedures Including:
Training Program
Communications Program
Support Program
Deployment Schedule
• Pilot Project including:
User Communications
Solution Deployment
Training Programs for Users
Training Programs for Technical Staff
Solution Support
Pilot Evaluation
Approach Modifications (if required)
• Deployment Launch including:
Deployment Coordination
Deployment Risk Management
User Communications
Massive Solution Deployment
User Training Programs
Residual Technical Training Programs
Deployment Support
Hardware Disposal Program
Ongoing Program Evaluation
Once all components, administrative and technical, are ready, you can move on to the final test:
the deployment of the pilot project before you move on to massive deployment (see Figure 2.7).
Pilot projects should address up to 10 percent of the user population and should include at least
one remote site if your organization includes remote sites. The purpose of the pilot project is to
test all of the facets of the deployment process to ensure quality of service for the actual
deployment. The pilot project should represent every deployment activity from user
communications warning them of the upcoming deployment to PC and software deployment as
well as training. It should deliberately test every deployment mechanism. For example, if the
project aims for the replacement of PCs, a select target group of users should be chosen to
‘forget’ applications on purpose during the PC staging process. This allows the testing of the
support mechanisms for post-deployment rapid application distribution. You need to choose a
select group for this type of test in order to avoid skewing the deployment results which will be
used to help determine if you are ready to move on to massive deployment.
Using a System Stack
This guide outlines how to use a system construction model to build the PC image. This allows you to
structure how applications will be deployed during the project. Generic applications or applications
that are targeted to 100 percent of the user population are prepared first, which means that you can
begin to deploy to users who only require generic applications as soon as they are ready. This often
means as many as 50 to 60 percent of your user population. Then, as you are deploying to these
users, your software packaging teams can complete the packaging of applications that are outside of
the system stack. These applications should be grouped by role and when a role is complete, then
you can include it into your deployment schedule, giving you time to complete the next role and so on.
This lets you manage deployment activities in parallel and shorten the actual deployment schedule.
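The role-by-role scheduling described above can be sketched as a simple readiness check; role and application names are hypothetical:

```python
# A role enters the deployment schedule only once every one of its
# non-generic application packages has been certified.
role_packages = {
    "Finance": {"GL System": "certified", "Reporting": "certified"},
    "Engineering": {"CAD Tool": "in packaging", "Simulator": "certified"},
}

def deployable_roles(roles):
    """Roles whose packages have all cleared certification."""
    return [role for role, pkgs in roles.items()
            if all(state == "certified" for state in pkgs.values())]

print(deployable_roles(role_packages))  # -> ['Finance']
```

Generic-only users can be deployed immediately; each role then joins the schedule as its packages clear certification, which is what lets packaging and deployment run in parallel.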
Your pilot should aim to meet the goals of the massive deployment. For example, if your goal is
to deploy 50 PCs per day during the massive deployment, then you should aim to meet that same
objective during the pilot. Also, in order to capture comments from pilot users, you should
include regular meetings with user representatives to determine the ongoing status of the pilot.
Marry Training with Migration
No matter how good your engineering is, migrating each PC will take time. Depending on your
approach, it could take between one and three hours. It is always good to estimate half a day for each
group of migrated PCs.
Of course, you can’t expect users to hang around for that time doing nothing as their PC is out of
commission. If you did, you couldn’t consider your approach as having no impact on business as
usual. One excellent approach is to marry the user training program along with the PC migration. Call
users into training, either through Webcasts or in larger presentation rooms, and introduce them to
the new desktop and tool set while their PC is being upgraded or replaced. This way, users will find
themselves moving to the new OS from the moment they walk into the training room. When they
return to their desk after the training program, they find their PC migrated and ready to go.
Give users some time to get used to the new system. Vista is not a major change, but it is different
from any previous version of Windows and if you want them to be as productive as possible, let your
users have some time to get used to the new system at their own pace.
Since the purpose of the pilot is to validate and refine every aspect of the process, you’ll need to
perform an evaluation at its end, sort of a pilot post-mortem. You should collect comments from
every pilot participant: project managers, coordinators, administrators, technicians, users,
trainers, support personnel—centralized and local, communicators, and any other personnel that
participated in the pilot. Comments and recommendations should be used to modify deployment
approaches in order to improve the quality of deployment services. You don’t want to identify
issues with the approach when you’re ready to perform massive deployment operations, so get it
right during the pilot.
If you find that some re-engineering is required, pilot project users should be first on the
deployment list for the next phase. Because they are the first to receive the new Vista operating
system, pilot users agree to bear any inconveniences the project may bring if its approaches and
solutions are not completely fine-tuned. That’s why it is important to “reward” them with the
first reception of the refined solution.
Figure 2.7: The Transfer phase concentrates on repetitive deployment tasks.
Final modifications should be made to the deployment strategy before you start, and make sure
management signs off on this strategy before you proceed. With all of your processes and
solutions tested and approved, you’re ready at last to move on to the deployment itself.
Actual deployments tend to be quite complex to orchestrate—users aren’t available for training
sessions, parts don’t come in on time, technical staff fall sick—sometimes it seems as if
everything is going wrong, especially with massive deployments. Two factors will help: strong deployment
coordination and deployment risk management. Since the deployment is in fact the coordination
of a sequence of activities, the personnel you select to administer it require very strong
coordination skills. You can also rely on the communications program, which should warn users
one month, three weeks, two weeks, one week, and then one day before they are migrated.
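Those notification milestones can be generated from each group’s migration date; a sketch that approximates “one month” as 30 days:

```python
from datetime import date, timedelta

# Lead times for pre-migration user notifications: one month
# (approximated here as 30 days), three weeks, two weeks, one week,
# and one day before the migration date.
LEAD_TIMES = [timedelta(days=30), timedelta(weeks=3), timedelta(weeks=2),
              timedelta(weeks=1), timedelta(days=1)]

def notification_dates(migration_date):
    """Dates on which each notification should go out."""
    return [migration_date - lead for lead in LEAD_TIMES]

# Example: a group migrating on June 15, 2007.
for d in notification_dates(date(2007, 6, 15)):
    print(d)  # 2007-05-16, 2007-05-25, 2007-06-01, 2007-06-08, 2007-06-14
```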
As for risk management, it means identifying potential project weaknesses and preparing for
them. Before launching the deployment, make sure that you’ve covered all the bases. Your pilot
project should have helped identify the majority of these risks. Remember: always have a backup
plan. For example, you should have a select list of replacement users ready and on hand in case
some of the targeted users are not available. This way, you can continue to meet your daily
deployment objectives and return to missed users later.
When you proceed with the deployment, you’ll quickly see that after a while it becomes quite
repetitive. All the better, practice makes perfect! You’ll soon be repeating all of the activities in
the pilot project on an ongoing basis. Don’t get lulled by the monotony of the process! Always
perform ongoing project status evaluations and be ready to modify approaches if something goes
wrong. Also provide status reports to management so they can see you are continuing to meet
objectives and staying right on target.
Evaluation Phase — Moving on to Production
The Evaluate Phase
This phase includes:
• Project Post-mortem Evaluation
• Transfer of Project Processes into Recurring Administration including:
Final Transfer of Knowledge to Operations Staff
Deactivation of Project Support Mechanisms
Final Documentation Delivery
• Beginning of the Solution Evolution
The final phase includes three major activities. The first is to perform a project post-mortem to
evaluate the project’s performance against its objectives. Did it complete within budget? Did it
meet every objective? Did it fit within defined timelines? Were there any overruns? Answers to
these questions help form the rules and guidelines for future projects. After all, you know there is
another OS deployment in your future.
The second is the transfer of the deployment solution to recurring operations. If you didn’t have
a managed desktop before the deployment, then now is the time to make sure you maintain all of
the deployment processes in your ongoing operations, otherwise, you’ll find yourself in the same
situation next time you need to run a project of this type. To do this, you need to finalize all
project documentation and have project personnel perform a final transfer of knowledge to
administrative and operations staff. All project support mechanisms will be deactivated.
In a modern architecture, it is important to deactivate and not dismantle project support structures
since you know they are required on a recurring basis.
And third is the evolution of the solution. Once operational staff has accepted delivery of the
new technology and signed off on it, secondary evolutionary projects can begin. For example,
after deploying and implementing Windows Vista, you can now begin to learn how to master it
and leverage its capabilities to increase quality of service within your IT infrastructure. This is
the right starting point for this evolution because many of Windows Vista’s features are only
available to a network that has completely migrated to this operating system. These
modifications can often be performed in the background with no further disruption of
organizational or operational activities. This way, the Evaluate Phase becomes the starting point
of a new Questioning phase and process as your organization moves on and evolves with the new
capabilities you just implemented (see Figure 2.8).
Figure 2.8: The Evaluate phase begins the productive use and evolution of the solution.
Creating a Migration Project Plan
Now that you understand the QUOTE© System, it becomes important to see how it maps out to
the Desktop Deployment Lifecycle (DDL) presented in Chapter 1 (see Figure 2.9). As you can
see, it maps out as follows:
• Question Phase: step 1, generate the business case and step 2, strategy development
• Understand Phase: step 3, build the test environment and step 4, identify toolkits
• Organize Phase: step 5, infrastructure analysis & update, step 6, applications strategy,
step 7, PC image creation, step 8, personality protection strategy, and the first part of
step 9, integration strategy
• Transfer Phase: the second part of step 9, integration strategy, and the first part of step 10,
deployment strategy
• Evaluate Phase: the second part of step 10, deployment strategy
As you can see, the bulk of the technical work is within the Organize phase.
Figure 2.9: Mapping the QUOTE© System to the desktop deployment lifecycle.
Project Planning Considerations
Each IT project—whether a deployment project or not—relies on the five phases of the
QUOTE© System. You should structure your project to align with these phases. But even with
such an alignment, you’ll need to consider the actual objectives you aim to achieve with this
project. In Chapter 1, we discussed the ability to rely on this project to finally gain complete
control over your PC environment. We reiterate it again: if you don’t have control now, then get
it; if you have control now, then refine it.
Avoiding Common Mistakes
Don’t make the most common mistakes with your project. For example, make sure the technical lead
isn’t also the project manager. Both roles are vital and need to be assigned to different people.
Deployment projects are intricate, not to say complicated. Make sure you do not underestimate the
level of work required to make this project a success.
Also, define your scope and stick to it. As time goes by, others will want to add this or that to your
project scope. Don’t let scope creep be the downfall of your project.
If you choose to lock down the desktops (and you should), don't let anyone become an exception. In
projects we have run, less than one percent of users have full administrative access, and then only
through the Run As command. We'll give you tools to make it work in future chapters.
Make sure you track project progress and report on it to the Steering Committee. You might even
want to put it into the communications plan—sort of a series of coming soon announcements—to
build anticipation for the release and to ensure management knows how well you’re doing.
Worst of all; don’t let politics dictate how the project will run. Public sector organizations are the worst
for this, unfortunately for them, but private organizations are not immune to this either. This is one
more reason why communications are so important. Offset politics by managing expectations.
Finally, test, test and retest. This is the best advice we can give. Don’t make the mistake of delivering
poor quality products. If you are in a service industry, then quality of service is how success is
measured. Go for the gold!
There are four pillars to any project: Cost, Scope, Quality and Time (see Figure 2.10). Each is
directly tied to the other. If you change the value for one, the value for the others will
automatically change. As the project proceeds, you will be under pressure to do exactly that.
When deadlines loom, management will want you to cut quality to save money, but short-term quality
cuts will turn into long-term costs. The same goes for scope: management and
others will pressure you to drop technical 'wish lists'. Don't do it. Each time you get rid of a
project component to try to cut corners, you’ll have to pay for it in the end. Build some leeway
into your project plan so that you can have some flexibility, but aim for the highest quality. Your
organization will benefit from it in the end.
Figure 2.10: The four pillars of the project.
Go to the DGVM’s companion Web site to obtain a copy of a Sample Project Plan relying on the
QUOTE© System to help build out your own project. Remember that you can supplement its
information with the information contained in the sample project plan Microsoft provides in the BDD.
Case Study:
Supporting a Migration
Organization type .......................................................................................................................Public Sector
Number of users..................................................................................................................................... 2,500
Number of computers............................................................................................................................. 3,000
Project focus ...........................................................................................................Migration to Windows XP
Project duration ................................................................................................................................ 8 months
Specific project focus ........................................... Migration from multiple operating systems to a single OS
Administrative type........................................................................................................................Centralized
An organization that moves to a new operating system must use complete support mechanisms during
the migration. These systems must support not only the users during their familiarization with new tools
but also the migration process itself and the teams that operate within it. This is often
more than an internal Help Desk can manage because their own personnel are already occupied. This
organization chose to supplement its support program with special help for the duration of the migration.
Thus they had to increase staffing levels according to the following table:
Group ______________________________________________________________________ Increase
Technical support personnel _________________________________________________________ 40%
User support personnel _____________________________________________________________ 20%
Project information support personnel __________________________________________________ 10%
Average temporary staff increase ___________________________________________________ 24%
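Note that the 24% figure is not a simple average of the three percentages; it only works out if each increase is weighted by the size of its group. A quick sketch illustrates the calculation (the headcounts below are our own assumptions for illustration, not figures from the case study):

```python
# Hypothetical headcounts per support group. The increase percentages come
# from the case study; the group sizes are illustrative assumptions only.
groups = {
    "Technical support": {"headcount": 14, "increase_pct": 40},
    "User support": {"headcount": 20, "increase_pct": 20},
    "Project information support": {"headcount": 10, "increase_pct": 10},
}

total_staff = sum(g["headcount"] for g in groups.values())
added_staff = sum(g["headcount"] * g["increase_pct"] / 100 for g in groups.values())

# Headcount-weighted average increase across all support groups.
weighted_avg = 100 * added_staff / total_staff
print(f"Average temporary staff increase: {weighted_avg:.0f}%")
```

With your own headcounts, the same calculation tells you how many temporary staff to budget for.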
This increase was based on the following key elements:
• The architectural solution was very sound.
• The deployment was very well structured.
• The project used a very complete communications program.
• Appropriate levels of training were offered.
• The support team was given intensive training.
• The project included a data conversion program.
• Technicians were very well prepared before the beginning of the deployment.
These key elements helped the organization control all support costs during the migration. Most of the
temporary personnel were used to replace ongoing operations and free internal technicians to work on
the project, keeping all knowledge in house after the project.
Required Teams and Roles
Also, now is the time to start building up your teams. Remember that we strongly recommend
that you get two types of external help if you need it (and you probably will): expert help to
supplement technical leadership for the project, and technical help to replace your staff in
ongoing operations, freeing them to work with your project team and gain immediate, direct
knowledge of the new technology. After all, one of your main goals is to make sure this
knowledge stays in house after the project. If you acquire external technical help and put them on
the project, the knowledge will leave with them when the project is over. Don't make this mistake.
Rely on the DDL to determine which teams are required. Build on the team you gathered for the
preparation of the Business Case. In the end, you’ll require the following team roles:
• Logical Architect or Technical Lead
• Technical Architect
• Project Manager
• Project Administrator/Coordinator
• Lab Coordinator
• Lab Assistant
• Project Steering Committee
  o Project Sponsor
  o Business Unit Manager(s)
  o Chief Technical or Information Officer
  o Upper Management
• IT Team Members
  o Inventory Analysis
  o Infrastructure Assessment
  o Security Considerations
  o Application Compatibility Analysis
  o Application Rationalization
  o Application Packaging
  o System Image Preparation
  o Personality Capture/Restore
  o Personality Rationalization
  o Deployment Mechanisms
  o System Integration
• Support or Help Desk Member(s)
• User Representative(s)
  o Software Product Owners
  o Standard Users
  o Power Users
  o Business Unit Experts
  o Local Application Developer(s)
• Financial Analyst or Asset Management Expert
• Communications Lead
  o Technical Writers
  o Relay Agents
• Training Lead
  o Technical Writers
  o Technical Trainers
  o End User Trainers
Remember that these roles are important even if you don’t have a massive number of PCs to
deploy. You may even have the same person playing several roles, and therefore, wearing many
hats during the project. Just make sure each role is covered. Of course, in larger projects, you’ll
want actual teams and team leaders for each crucial activity in the project.
You’re off and running. Your project is in its building stages and you now have a very good idea
on how to proceed. Next, you need to begin preparing for all of the technical activities of this
project. That’s exactly where future chapters will take you. Meanwhile, you have a lot of work to
do. Good luck!
Remember, if you want to give us feedback on this project and the content of the eBook, feel free to
do so. All you need to do is write us at [email protected]
Chapter 3: Creating the Migration Test Bed
The testing laboratory is perhaps the most important part of any technical implementation
project. Testing and retesting solutions before they are deployed is the only way to ensure high
quality. After all, you do not want to find yourself in a situation where you are deploying
garbage into your network. Garbage aside, you’ll quickly find that the lab environment is one of
the most exciting aspects of the project. Things move fast, you’re playing with new technologies,
you’re the hub of all technological requests; it’s just fun to work in the lab. This is another
reason why you want to have everything in place the right way.
When it comes to the deployment of Windows Vista, make sure you follow the golden rule of
high quality solutions and provide a testing environment that meets and exceeds every need. But
a testing lab is not only a technical solution; it also requires processes and procedures that must
be followed to a ‘T’ if you want it to succeed.
To build the appropriate lab, you need to understand what your technical teams will require.
What type of project are they in, and how will the project teams be organized? In the case of PC
deployments, two main tracks must be covered: the creation of the PC structure and the
modification of server infrastructures to support new PC operating system (OS) features.
Dividing the technical aspects of the project into these two streams will help you understand
what your technical team will require and when they will require it.
As discussed in Chapter 1, the Desktop Deployment Lifecycle (DDL) involves several
preparatory steps prior to deployment. Each of these steps—infrastructure modifications,
application preparation, personality protection and PC image preparation—involves activities
that are either concentrated on the PC or at the server level. Teams work together to prepare each
aspect of the complete whole. Therefore, when it comes to the laboratory, you’ll need to make
several important decisions:
1. Will the laboratory involve physical space? What kind of physical space will be required?
2. Will you rely on virtual machine technology and, if so, to what extent?
3. What are the minimal configurations you need to use for testing?
4. Which server and workstation roles are required in the lab?
5. Which testing levels will you want your teams to proceed through?
6. How will you create each testing environment?
7. Which strategies will you use to control the graduated testing that is required to refine the solution?
The answer to these questions will help you formulate your laboratory environment as well as
the procedures it will rely on.
Begin by identifying your technical teams (see Figure 3.1). Every desktop deployment project
will involve at least three teams:
• The administrative team is responsible for all project logistics as well as for coordination
of all efforts.
• The PC team is responsible for the creation of the PC solution.
• The Server team is responsible for the preparation of the server environment to support
the PC solution.
You should ensure a very tight collaboration between the server and the PC teams and
interchange personnel between the two teams whenever possible. For example, your project may
only require one security professional whose job will involve the analysis of all security features,
the preparation of local security policies, the preparation of access rights on file shares in the lab,
the preparation of Group Policy Objects (GPO) for production and so on. By properly scheduling
activities throughout the project, you should be able to create virtual teams that intermix skills
and improve communications throughout all the technical aspects of the solution you are preparing.
Figure 3.1. The Structure of the Technical Team
Identifying Team Needs
Every technical activity for this project needs to be tested in the lab somehow. For example, if
you need to deploy a tool, script or otherwise, into production to perform your inventory or
readiness assessment, then you should test it in the lab before deployment. If you need to
determine which tools you will use in your deployment, then you’ll need to test them. If you
want to use a new method to deploy PC images, then you’ll need to test it. Everything needs to
be tested.
And, since you know your teams will focus on two areas, PCs and servers, you’ll need to provide
functional support for each technical aspect. In addition, your testers and technicians will begin
small and grow as their solution progresses. For example, at first, PC image designers will only
need a workstation and perhaps some file server space to test their part of the solution, but then
as their portion of the solution grows, they will require servers with special services in support of
image deployment, systems to manage image servicing, systems to test image deployment results
and so on.
For a good overview of testing activities and testing schedules, review the Test Feature Team Guide
in Microsoft’s BDD 2007. The guide is included with the other BDD documentation which can be
found here. As you’ll see, the guide runs through several testing levels and helps you identify which
tests need to be performed when.
The best way to identify the needs of your teams is to sit down with them and run through the
processes they intend to use to develop their portion of the solution. For the PC preparation
aspect, this should include:
• PC Image Preparation
o Workstations representative of the baseline systems you’ve selected, both mobile and
desktop systems
o Image management and servicing systems
o Reference computers
o Image storage space on a server
o Network connectivity components to link test systems to image sources
o Deployment mechanisms
• Application Packaging
o Source software for all the retained applications that are moving forward to Vista
o An electronic repository of all source software
o A software packaging tool, including server-based workload management tools and a
software repository
o Test systems for quality assurance
o Packaging workstations
o Database system for storing all software packaging information
• Security Preparation
o Test systems for the preparation of all security aspects of the workstation
o Server-based security technologies such as Group Policy, and therefore an Active Directory domain
o Tools for testing the security measures put in place
o Security utilities such as antivirus and others to harden the systems
• Personality Protection
o Test systems for the preparation of the solution
o Personality protection tools
o Server-based storage for protected personalities
o Backup technologies for personality archiving
And of course, each team will need appropriate workspaces, access to the Internet and search
engines, documentation tools and so on.
You basically need to reproduce your production environment for the teams to be able to
perform their work. But you also need to have central control systems to manage all of the work
and activity within the lab. For example, the lab coordinator will need to have a lab schedule that
will be maintained on a daily basis to help coordinate access to limited resources without
negatively impacting the project schedule. In addition, the lab needs its own technical
mechanisms. For example, when someone uses a workstation, it needs to be reset back to its
pristine state for the next person to use it. This is only one example of the technical requirements
for the lab. It also needs protection mechanisms for all the deliverables prepared in the lab—
software packages, scripts, PC images and so on. Therefore the lab needs its own infrastructure
as well as the more volatile infrastructure used for testing and preparation.
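To make the coordination point concrete, the heart of a lab schedule is simply conflict detection on shared resources. The sketch below is illustrative only; the resource and team names are our own assumptions, not part of any particular scheduling tool:

```python
from datetime import datetime

# One shared booking list; each entry reserves a lab resource for a team.
bookings = []

def book(resource, start, end, team):
    """Reserve a resource, rejecting any overlap with an existing booking."""
    for r, s, e, t in bookings:
        if r == resource and start < e and end > s:
            raise ValueError(f"{resource} is already booked by {t}")
    bookings.append((resource, start, end, team))

day = datetime(2007, 3, 5)
book("Staging-Host-1", day.replace(hour=9), day.replace(hour=12), "PC team")
book("Staging-Host-1", day.replace(hour=13), day.replace(hour=17), "Server team")

try:
    # The security tester asks for 11:00 to 14:00, overlapping the PC team's slot.
    book("Staging-Host-1", day.replace(hour=11), day.replace(hour=14), "Security")
except ValueError as err:
    print(err)
```

A daily schedule maintained this way lets the lab coordinator grant access to limited resources without one team silently displacing another.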
Ideally, your lab will be set up in a permanent space and will continue to be used once this
project is over. Laboratories of this type grow and shrink according to need, but continue to meet
all of the testing requirements of the organization. They are formal structures that are part and
parcel of the IT infrastructure and need to be administered as such. Whether it is for training,
testing or development, it’s really handy to have a readily available working environment you
can just jump into when you need it.
Working with Different Testing Levels
Another requirement for the lab is testing levels. Testing is performed in a graduated process
which gradually evolves into more and more complicated testing levels. For desktop
deployments and other IT integration projects, there are five testing levels:
• Unit
• Functional
• Integration
• Staging
• Pilot Project
Each level integrates more complexity as technical teams progress through them.
The Unit testing level is designed for information discovery. Its purpose is to let individual
technical team members discover how the feature they are tasked with designing actually works.
For example, the PC Image Preparation technicians should use this testing level to discover how
the Windows Vista installation actually works, play with initial Unattend.XML scripts in support
of installation automation, discover how the SysPrep feature of Windows has been modified for
Vista and generally familiarize themselves with the entire installation process. For this, they need
access to Windows machines, including bare metal systems—systems with no OS installed—as
well as Windows XP systems they can upgrade.
The Functional testing level is designed for initial solution testing. Here, the technical team has
determined just how a feature works and now they want to test the automation mechanisms they
are preparing. They also want to demonstrate it to their peers. In the example of the PC Image
Preparation task, technicians will need bare metal machines as well as upgradeable PCs.
Once Functional testing is complete, you may decide to perform a larger proof of concept and
perform an early deployment to select members of the project team. This will give them the
opportunity to comment on the quality of the solution and provide additional feedback into the final
configuration of your systems. Proof of concepts of this type do not need to be deployed
automatically, though if you can do it, this type of test provides an additional level of assurance that
the deployment project will succeed.
The Integration testing level starts bringing each individual component of the technical solution
together. For example, here you would blend PC Image Preparation with Personality Capture and
Restore as well as other tasks. If you remember the PC Migration Cycle presented in Chapter 2,
you’ll find that the first time you run through the entire cycle is when your technical teams reach
the Integration testing level. The objective is to make every aspect of the cycle work smoothly
with the others.
The Staging testing level is focused on getting everything right. Basically, this level will provide
an environment that is very similar to your production environment. While in Integration, you
blended every aspect of the solution together, in Staging, you want to make sure you can
reproduce every technological aspect from A to Z without a hitch. You’ll have to repeat the
process until it is absolutely right. This way, you’ll know exactly what to do when you move to
production and you won’t make any mistakes. Technical implementations are 80 percent
preparation and 20 percent implementation, but you can only get there if you've fully tested each aspect of the solution.
The final testing level is the Pilot Project. While all of the other testing levels focused on
technical testing, the Pilot Project focuses on the complete solution, including any administrative
aspect of the process. Here you test absolutely everything: the technical solution,
communications, deployment coordination, training, support and so on. This test will validate the
logistics of your solution as well as the technical aspects. Make sure you’ve completely
evaluated each aspect before you move on.
These testing levels require graduation from one to the other (see Figure 3.2). Each level will
have both exit and entry criteria. For example, to leave the Unit level, technicians must prove
that they have covered all of the activities for this level. To enter the Functional level,
technicians must meet key requirements, and so on. You’ll build your exit and entry criteria as
you learn to work more fully with the lab, but basically, they should aim to make sure that
technical teams are fully prepared to move from one level to another. With the high demand
you’ll have for the resources of the lab, you don’t want one team to hog resources when they
weren’t ready to access them. Also, you want to make sure no damage is done to any of the high
level environments—for example, Integration and Staging—causing you to have to restore them
from backup or worse, recreate them from scratch. The lab is for the use of all technical team
members and should never be monopolized for one single aspect of the technical solution.
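The graduated process described above can be thought of as a simple state machine: a team may enter the next level only after proving its exit criteria for the current one. Here is a minimal sketch; the criteria names are placeholders, and your own exit and entry criteria will be project-specific:

```python
# The five graduated testing levels, in order of progression.
LEVELS = ["Unit", "Functional", "Integration", "Staging", "Pilot"]

def promote(team):
    """Move a team up one level once all of its exit criteria are met."""
    if not all(team["exit_criteria"].values()):
        pending = [c for c, done in team["exit_criteria"].items() if not done]
        raise RuntimeError(f"Cannot leave {team['level']}: pending {pending}")
    current = LEVELS.index(team["level"])
    if current + 1 >= len(LEVELS):
        raise RuntimeError("Already at the final level")
    team["level"] = LEVELS[current + 1]
    team["exit_criteria"] = {}  # entry to the new level brings new criteria

# Placeholder criteria for the PC Image Preparation team at the Unit level.
pc_team = {
    "level": "Unit",
    "exit_criteria": {"unattend_xml_tested": True, "sysprep_documented": True},
}
promote(pc_team)
print(pc_team["level"])  # Functional
```

Refusing promotion while criteria are pending is exactly what keeps an unready team from hogging, or damaging, the Integration and Staging environments.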
For a view into the Integration Architecture—the architecture that is used to build and manage testing
levels—as well as a sample Exit Criteria sheet, have a look at this article, which is part six of a seven-part series on Enterprise Architectures.
Figure 3.2. The Five Different Testing Levels
Remember that building the lab is part of the Understand phase of the QUOTE© System. Once the
initial components of the lab are ready, technicians can undertake Unit and Functional testing. Then,
when you move on to the Organize phase, testing will progress to Integration and Staging. The final
testing level, Pilot, is part of the Transfer phase.
Required Lab Environments
Besides providing different testing levels, you’ll find that your lab will also have to provide
multiple environments. A minimum of three environments are required:
• The Certification Center
• The Conversion Center
• The Acceptance Center
The first, the Certification Center, is the core environment required for technical testing. It
includes the five testing levels discussed earlier and is designed to support the preparation and
certification of the technical solution. Certified solutions help protect your production
environments by validating that each aspect of the solution passed all of your testing
requirements and is ready for prime time. Unlike the other centers, the Certification Center is
virtual and does not require a specific physical location.
Certification is one of the most important processes in IT implementation preparation. It is closely tied
to two other concepts: standardization and rationalization. Together, these three concepts help you
maintain a dynamic infrastructure that can quickly meet any business requirement. For its part,
certification focuses on validating technical solutions. When solutions have been certified before
delivery, you’ll find that you will have fewer support issues. In some of the projects we have
performed, certified solutions were often delivered with zero defects and not one single support issue.
That’s why we believe that certification is worth its weight in gold for all IT environments.
The second, the Conversion Center, is required once the solution has begun to gel into a cohesive
whole which is often after it has entered the Integration testing level. At this stage, you begin to
understand just what your solution will look like and how it will affect both your in-house
applications and the data in your organization. For example, if you're only moving to Vista, you
might find that you don't have a lot of data to convert since it maintains the same data types
as previous versions of Windows. But if you decide to include Microsoft Office Professional
2007 with your Vista deployment, you’ll have to provide a space for your users to build a
conversion program. This means converting Word or Excel templates as well as document
repositories. You might also need to develop an interim strategy, deploying conversion filters to
people that don’t have the new solution yet so they can continue to exchange documents with
their peers.
In terms of applications, the Conversion Center can be used by in-house developers to upgrade
the applications they are responsible for. Depending on the scale of the applications to be
converted, you may find that you have to build up the scale of the Conversion Center. For
example, if you are converting a line of business application that is supported by a team of
developers, you might create a Conversion Center space just for them so that their work does not
impact others. But if you are looking to convert custom applications built by user developers,
for example, Access applications, a single Conversion Center may be the right ticket. At the
same time, you should look to restructure the way your organization uses Access and other user
development tools.
If you need to convert Microsoft Access applications, you should take a look at how to gain control of
decentralized application proliferation through Decentralized Deployment Strategies.
If you do need to convert data and applications because you are moving to Microsoft Office
Professional 2007, then you should definitely get a hold of the Microsoft Office Compatibility Pack as
well as the Microsoft Office 2007 Resource Kit.
The Conversion Center can be opened as soon as you have a working solution for the creation of
new PCs. The solution doesn’t have to be deployable in an automated way. It all depends on
whether you are using virtual machine technology or physical machines for testing and
conversion. If you are using virtual machine technology, then you can just reset each virtual
machine after each test, but if you use physical machines, then you’ll have to at least have some
system images of the machines so you can easily reset them after each test. More on virtual
machine technology and its role in the laboratory is covered later in this chapter.
Finally, the Acceptance Center also needs to be maintained and supplied by the laboratory.
Acceptance doesn’t occur until components are ready for it so this part of the lab rarely comes
until at least the Integration testing level has been reached. Exceptions are software packages.
Each software package must be accepted by the appropriate software product owner—a subject
matter expert who is also responsible for the evolution of a product in your network—when it
has been prepared. Many packages will be ready before the complete solution is
integrated. As each package is finished, it should be passed through acceptance testing as
soon as possible. This allows you to run parallel streams of activities and reduce the overall
project timeline.
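The scheduling benefit is easy to quantify: if acceptance of every package waited for the integrated solution, the efforts would stack up in sequence, while accepting packages as they are finished overlaps the streams. A small sketch with assumed, purely hypothetical effort figures:

```python
# Illustrative acceptance-testing effort per software package, in days.
# Package names and durations are assumptions, not from the text.
package_days = {"office-suite": 3, "antivirus": 1, "lob-app": 5, "pdf-reader": 1}

# Serial: accept everything only once the whole solution is integrated.
serial_days = sum(package_days.values())

# Overlapped: accept each package as soon as it is finished. This assumes
# each package has its own product owner available to test it.
overlapped_days = max(package_days.values())

print(f"serial: {serial_days} days, overlapped: {overlapped_days} days")
```

Even with these toy numbers, overlapping the acceptance streams halves the elapsed time; the saving grows with the number of packages.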
For full acceptance of the entire solution, tests must be done from the Staging testing level and
must include all aspects of the solution. In this case, acceptance is performed by various IT
stakeholders—stakeholders who will be responsible for the solution once it is in production.
At the risk of repeating ourselves, we strongly recommend that if you are bringing in outside help
for this project, you make sure that the bulk of the help you get is to replace your staff in their day to
day tasks. This liberates them to work on this project. It's more 'fun' than normal day-to-day work, and
it gets them to learn new tricks and techniques. What’s better is that in the end, you get to keep all the
knowledge in house, letting you move forward at a faster pace.
Relying on Virtual Machine Software
Virtual machine (VM) software—software that emulates a physical machine and lets you run
other instances of an operating system—is an ideal complement to the laboratory. There are lots
of reasons why this is the case. First of all, in almost all instances, you can virtualize most of the
servers in the solution. Ideally servers will be running Windows Server 2003 R2 (WS03) to
ensure that they offer the most up to date server capabilities. In most of our tests, we’ve been
able to run file server roles with as little as 256 MB of RAM allocated to the VM. Of course, you
may need to increase the amount of RAM when you add roles to the server, but if you have the
appropriate host, you should easily be able to run any server role you need.
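Before settling on a host, it's worth doing the simple arithmetic of adding up planned VM allocations against host memory. In the sketch below, only the 256 MB file server figure and Vista's 512 MB minimum come from the discussion; the other roles, counts and host sizes are illustrative assumptions:

```python
# Planned RAM allocations per VM, in MB. The 256 MB file server figure and
# Vista's 512 MB minimum come from the text; everything else is assumed.
planned_vms = {
    "file-server": 256,
    "domain-controller": 512,
    "deployment-server": 512,
    "vista-client-1": 512,
    "vista-client-2": 512,
}

HOST_RAM_MB = 4096
HOST_OS_RESERVE_MB = 1024  # keep some memory for the host OS itself

needed = sum(planned_vms.values())
available = HOST_RAM_MB - HOST_OS_RESERVE_MB
verdict = "fits" if needed <= available else "add RAM"
print(f"VMs need {needed} MB, host offers {available} MB: {verdict}")
```

Rerun the numbers each time you add a server role to the lab; RAM, not CPU, is usually what runs out first on a virtualization host.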
Windows Vista client machines are a bit more problematic. Vista requires a minimum of 512
MB of RAM. You can cut it down to lower RAM levels, but performance will seriously
decrease. Remember though that some physical PCs will still be required to test Vista image
deployment. Several aspects of this test cannot be driven through virtual machines—driver tests,
hardware abstraction layers (HAL) and the Aero user interface—at least not yet, so you’ll need
to have access to some physical PCs for testing.
Nevertheless, working with both client PCs and servers through VMs will seriously cut down the
cost of building the laboratory. You can rely on virtual technology from either Microsoft or
VMware as both offer free copies of their tools, once again reducing lab costs. Both offer full
support for running Windows servers or PCs. In addition, you may want to obtain tools for the
conversion of physical machines to virtual instances. This saves a lot of time as you simply point
to the physical machine you want to capture, and easily transform it into a virtual instance.
Get your free copies of the various products you need to support virtualization in your lab:
• Microsoft Virtual Server 2005 R2
• Microsoft Virtual PC 2004 SP1
• VMware Server
• Microsoft Virtual Server 2005 Migration Toolkit
• VMware Converter 3.0
In addition, working with virtual machines, you can ‘virtualize’ the laboratory. Many of our
clients buy some very large host machines to run their laboratories. The ideal machine will be an
x64 server running several multicore processors, lots of RAM and lots of disk space. Use
Windows Server 2003 R2 x64 Enterprise Edition as the host OS for this server, install the
virtualization technology you’ve chosen and you’re off and running. When you consider the cost
of these machines compared to the cost of having a complete physical lab, they really cut down
the overall cost of the lab.
For example, one customer was able to build an entire collaboration testing environment in less than
32 hours with only three people, two of which could not touch the keyboard since they were foreign
nationals. Think of it: less than four days to build three physical hosts and more than 10 virtual
machines playing roles as various as Active Directory, Exchange, SharePoint Portal Server, Content
Management Server, SQL Server, Live Communications Server and more. This also included all of
the standards for the install, all of the documentation for installation and configuration procedures,
and of course, the virtual machines themselves including source machines for WS03 Standard
Edition, Enterprise Edition and Windows XP Pro. In addition, they are now able to reuse and even
duplicate this environment for other testing purposes. There is no doubt that this level of ROI is simply
not available with physical laboratory environments.
There are a whole series of operations you can perform with virtual machines that you just can’t
perform with physical machines. For example, you can very easily create a source machine.
Install a first instance of the OS into a VM, customize its configuration, update its default user
profile, apply all current patches and, once it is ready, copy it and SysPrep the copy. Voila!
You now have a source machine that can be copied and reused to seed any machine role you
need. That’s a lot easier than working with a physical machine.
Chapter 3
Another major benefit is the use of Volume Shadow Copies (VSC) on WS03. Since virtual
machines are nothing more than a series of files on a hard disk, you can enable automatic backup
protection for them by enabling VSC and then relying on the Previous Versions client to restore
any damaged machine almost instantly. VSC automatically takes two snapshots per day and can
store up to 64 snapshots, which provides an adequate level of protection for your
VMs. This doesn't replace proper backups, but it is at least a first line of defense that costs very
little if anything.
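To see how much rollback history that schedule actually buys you, here is a quick back-of-the-envelope calculation. The two-snapshots-per-day schedule and the 64-snapshot cap come from the text; the helper itself is just an illustration:

```python
# Days of VM rollback history provided by Volume Shadow Copies.
# Defaults (two snapshots per day, 64-snapshot cap) come from the text;
# adjust them if you change the VSC schedule on your host server.
MAX_SNAPSHOTS = 64
SNAPSHOTS_PER_DAY = 2

def retention_days(max_snapshots: int = MAX_SNAPSHOTS,
                   per_day: int = SNAPSHOTS_PER_DAY) -> float:
    """Days before the oldest shadow copy is recycled."""
    return max_snapshots / per_day

print(retention_days())  # → 32.0 days of first-line protection
```

In other words, with the default schedule a damaged VM can be rolled back to any point in roughly the last month before you need to reach for a real backup.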
Physical versus Logical Workspaces
Another advantage of virtual machine technology in the testing lab is that you don’t need the
physical space a conventional lab usually requires. If you create the lab space in a datacenter by
hosting a series of virtual machines on a given set of servers, the host servers can easily be
located in the normal datacenter and profit from the standard operations applied to any
production server—backup, patch updates, antivirus updates and so on. Then, your technical
team can connect to the VMs these servers host through the normal network. There may be
reason for you to provide your teams with a separate network segment to isolate their network
traffic, but if everything happens in the datacenter on the same server hosts, then network traffic
is not really an issue.
You can create a single workspace for both technical and administrative project team members;
that is, if they are in the same physical location. A workspace of this type will really help foster
team building as well as ensure that there is constant communication between all team
members. This type of work environment can seriously cut down project timelines and increase
team bonding, one of the major factors in project success (see Figure 3.3).
Figure 3.3. The Layout of the Project Workspaces
In the best projects we have run, a central workspace was created for team members. If other
locations were involved in the project, this central workspace was duplicated in each location.
Within the central workspace, each team member had their own PC for day to day work.
Technical team members were given more powerful PCs with lots of RAM so that they could run
local VMs if required. In addition, examples of each of the baseline PCs retained for the solution
were available. All of these systems had a connection to the VMs in the datacenter either through
the VM client or through Remote Desktop Connections. This forms a 'virtual' Certification Center.
As far as the other centers were concerned—the Conversion and Acceptance Centers—they were
located in separate rooms, not too far from the central workspace. Separate rooms were used
because of the nature of the work conducted in each center and because we needed to provide
support to the users working at both conversion and acceptance. Each location was linked to the
datacenter and to each other. Using a central workspace allowed team leaders to conduct ad hoc
meetings whenever issues came up. These issues were often resolved before they became
problems. This strategy provided the very best results and supported every aspect of the project.
In addition, it fostered team building as all team members, even administrative members, grew
excited as they saw the progress of the solution through instant demonstrations by technical staff.
Defining Lab Requirements
Now that you understand the various needs for the lab, you can begin to prepare for it. This may
require some initial acquisitions in order to properly populate the lab. Too many organizations
populate labs as afterthoughts, bringing in the oldest machines in the network and leaving lab
technicians to cannibalize various machines to try to put something decent together.
Don’t make this mistake! This lab is the epicenter of the project so make sure you populate it
appropriately. Here’s an example of what you might need. The scale of the requirement will
depend on the scale of your project. Keep in mind that you can run up to five or six virtual machines
per processor if all of the other requirements—RAM and hard disk space—are available.
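As a rough illustration of that rule of thumb, here is a hypothetical sizing helper. The five-VMs-per-processor figure and the RAM guidelines (512 MB for the host OS, up to 512 MB per VM) come from this chapter; the function names and the default of two processors per host are our own assumptions:

```python
import math

# Back-of-the-envelope lab sizing sketch. Per-processor VM limit and RAM
# figures come from the chapter's guidelines; defaults are assumptions.
def hosts_needed(total_vms: int, procs_per_host: int = 2,
                 vms_per_proc: int = 5) -> int:
    """How many host servers are needed for a given VM count."""
    return math.ceil(total_vms / (procs_per_host * vms_per_proc))

def host_ram_mb(vms_on_host: int, mb_per_vm: int = 512,
                host_os_mb: int = 512) -> int:
    """RAM a host needs: host OS share plus a share for each VM."""
    return host_os_mb + vms_on_host * mb_per_vm

print(hosts_needed(24))   # 3 dual-processor hosts for a 24-VM lab
print(host_ram_mb(10))    # 5632 MB of RAM for a 10-VM host
```

Run the numbers with your own VM counts before ordering hardware; the point is simply that a handful of well-equipped hosts replaces racks of physical lab machines.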
Minimal Configurations for Lab Systems
The minimal configurations required to support the lab should resemble the following list.
1. Host Server(s)
Dual x64 dual-core SMP Server
512 MB RAM for the host OS
256 to 512 MB RAM for each VM running on the host
At least 2 disks for RAID 1 (mirroring)
Preferably, 3 or more disks for RAID 5 (stripe with parity)
Use the largest disk size you can, currently about 300 GB
Retain about 30 GB for the system drive
Assign the bulk of the space to a data drive which will store the VMs
Retain about 30 GB for a third drive which will host the shadow copies produced by VSC
Dual network interface cards (NIC) at a minimum speed of 100 Mbps to support multiple
connections to the virtual machines hosted by the server; verify with the virtualization
manufacturer to see if you can team the NICs
Use 64-bit processors because they break the memory barrier and can handle significantly more
network throughput. 32-bit processors are limited to 4 GB of RAM and have to use special
procedures to access any RAM above this limit. 64-bit processors can natively access 32 GB of
RAM or more, depending on the Windows edition. There's just no comparison when it comes to running VMs.
Also, make sure you acquire the proper versions of 64-bit processors because you will want to run
x64 VMs. Currently, all AMD 64-bit processors support 64-bit VMs, but only VT-enabled processors
from Intel will do so.
Finally, you might consider selecting AMD processor-based servers since AMD currently guarantees
that it will use the same socket footprint for quad-core processors when they are released later in
2007. To upgrade your server, simply pop out the existing processor and pop in a quad-core.
For more information on x64 servers, view the Move to the Power of x64 webcast from Resolutions.
2. Vista Premium Ready PC(s)
1 GHz processor, 32- or 64-bit
1 GB of RAM minimum
DirectX 9 support with at least 128 MB of dedicated graphics memory
DVD-ROM drive
Audio Output
A Premium Ready PC configuration is listed here, but if you opt to keep some Vista Capable PCs in
your network, you should also include examples in the lab. In fact, you need to have examples of
each machine configuration you will retain after the migration to Vista.
3. Technician Workstation(s)
1 GHz processor, 64-bit if possible
2 GB of RAM minimum
DirectX 9 support with at least 128 MB of dedicated graphics memory
DVD-ROM drive
Audio Output
Technicians should have as powerful a PC as possible. They will be running VMs both locally and
remotely and may need to run more than one VM at a time.
Only technician workstations are listed here, but you might consider giving each staff member, even
administrative members, PCs that meet at least the baseline systems you have selected for your
project. The entire project team will act as guinea pigs when you perform the proof of concept and
test the solution. This is another strategy that fosters excitement within the
project team.
4. External Hard Drive(s)
External drive of 80 GB at 7200 RPM with USB 2.0 or Firewire (IEEE 1394)
For Unit and Functional testing levels, the lab can deliver canned VMs on external hard disk drives.
Technicians can then work from their own PCs with their own VMs without impacting any other
person in the team. Running these VMs on high-speed external drives provides much better
performance than running them on the system disk. In addition, because of the average size of a
VM—from 4 to 20 GB per VM—delivering them on external hard disks makes a lot more sense than
trying to place them on spanned DVDs. Using external hard disks, you will also be able to deliver
VMs to remote offices where technical staff may perform activities in support of the project.
Virtual Machine Configurations
Virtual machines should be configured as follows:
1. Standard Server VM
RAM: ..................................... 512 MB of RAM minimum
OS: ........................................ WS03 Standard Edition
Service Packs:...................... All applicable service packs and hotfixes should be installed
Number of Disks: .................. Depends on the role; can be from 1 to 3
Disk Size: .............................. Drive C: 20 GB expandable disk (leaves room for upgrades)
Drive D: 50 GB expandable disk (optional based on server role)
Drive E: 10 GB expandable disk
Network Cards:..................... At least one NIC per VM
CD/DVD Drive: ..................... When providing VMs for use either locally or remotely, you should include
ISO files for the installation media; this lets technicians add roles to the
machine and generally control its feature set
2. Enterprise Server VM
RAM: ..................................... 512 MB of RAM minimum
OS: ........................................ WS03 Enterprise Edition
Service Packs:...................... All applicable service packs and hotfixes should be installed
Disk Size: .............................. Drive C: 20 GB expandable disk (leaves room for upgrades)
Drive D: 50 GB expandable disk (optional based on server role)
Drive E: 10 GB expandable disk
Network Cards:..................... At least one NIC per VM
CD/DVD Drive: ..................... When providing VMs for use either locally or remotely, you should include
ISO files for the installation media; this lets technicians add roles to the
machine and generally control its feature set
3. Bare Metal PC VMs
RAM: ..................................... 512 MB of RAM minimum
OS: ........................................ No OS
Service Packs:...................... No fixes or service packs
Disk Size: .............................. Drive C: 20 GB expandable disk (leaves room for Vista installation)
Network Cards:..................... At least one NIC per VM
CD/DVD Drive: ..................... When providing VMs for use either locally or remotely, you should include
ISO files for the installation media; this lets technicians add roles to the
machine and generally control its feature set
4. Vista PC VMs
RAM: ..................................... 512 MB of RAM minimum
OS: ........................................ Windows Vista based on the editions you decide to deploy
Service Packs:...................... All applicable service packs and hotfixes should be installed
Disk Size: .............................. Drive C: 20 GB expandable disk
Network Cards:..................... At least one NIC per VM
CD/DVD Drive: ..................... When providing VMs for use either locally or remotely, you should include
ISO files for the installation media; this lets technicians add roles to the
machine and generally control its feature set
5. Windows XP PC VMs
RAM: ..................................... 512 MB of RAM minimum
OS: ........................................ Windows XP Professional
Service Packs:...................... All applicable service packs and hotfixes should be installed
Disk Size: .............................. Drive C: 20 GB expandable disk (leaves room for the upgrade)
Network Cards:..................... At least one NIC per VM
CD/DVD Drive: ..................... When providing VMs for use either locally or remotely, you should include
ISO files for the installation media; this lets technicians add roles to the
machine and generally control its feature set
When creating client VMs, keep in mind that Windows Vista requires 15 GB of disk space to perform
an installation or an upgrade. Stay on the safe side and make the disks even bigger. Since you’re
using expandable disks, the actual space won’t be used until the system needs it.
Also note that no Windows 2000 machine is required since you cannot upgrade from 2000 to Vista.
You need to perform a clean installation.
Also, if you are working with VMware, check out this blog on the Realtime Publishers' Vista
Community Web site for tips on how to make it work right.
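One way to keep the VM specifications above honest is to encode them as data and check them programmatically before the lab builds anything. A minimal sketch, assuming the configurations listed above; the machine keys are illustrative, and 15 GB is Vista's stated install requirement:

```python
# Encode the chapter's VM configurations as data so the lab coordinator
# can validate them before building. Machine names are illustrative.
VM_CONFIGS = {
    "standard-server":   {"ram_mb": 512, "os": "WS03 Standard",   "c_drive_gb": 20},
    "enterprise-server": {"ram_mb": 512, "os": "WS03 Enterprise", "c_drive_gb": 20},
    "bare-metal-pc":     {"ram_mb": 512, "os": None,              "c_drive_gb": 20},
    "vista-pc":          {"ram_mb": 512, "os": "Windows Vista",   "c_drive_gb": 20},
    "xp-pc":             {"ram_mb": 512, "os": "Windows XP",      "c_drive_gb": 20},
}

VISTA_MIN_DISK_GB = 15  # minimum space Vista needs to install or upgrade

def vista_ready(config: dict) -> bool:
    """True if the VM's system disk leaves room for a Vista install/upgrade."""
    return config["c_drive_gb"] >= VISTA_MIN_DISK_GB

# Every client and bare-metal VM in this lab should pass this check.
assert all(vista_ready(c) for c in VM_CONFIGS.values())
```

Because the disks are expandable, over-provisioning them in this table costs nothing until the space is actually used.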
VM User Accounts
User accounts are also critical when setting up VMs for distribution to the technical team. With
the Unit and Functional testing levels, it is safe to give administrative access to the technicians
on your project team in both server and workstation VMs because they are standalone
environments. But as you proceed through the testing levels, you need to tighten down change
control and grant access to highly privileged accounts only to the lab administrator. After all,
capturing the changes required to the infrastructure is the purpose of these environments.
When you deliver standalone machines for either the Unit or Functional environment, you should
endeavor to make the servers domain controllers. Their behavior will differ from that of member
servers, but it will be easier to assign different roles to the accounts the technicians will require.
In many cases, these testing levels will require either a single PC VM or a PC VM coupled with
a server VM where the server is playing a series of different roles. Technicians need to have
appropriate access rights to discover how the Vista installation process works or to work with
any utilities which will be required to support the migration.
When you grant users access to the VMs that make up either the Integration or Staging testing
levels, give them accounts with appropriate access rights, as outlined in the information the
technical team will provide you.
Remember that in Windows Vista, the default administrator account is disabled at installation. Your
technicians will require their own administrative accounts. You should make sure that whenever you
provide access to machines in a shared testing level, the administrative account has a
strong, secret password.
Required Server and Workstation Roles
Within each testing level, you’ll need to assign or create several different machine roles. As
mentioned previously, in the Unit and Functional testing levels, server roles can be integrated
into one single VM, but as you move up the testing ladder, you’ll want to separate different roles
to represent the production environment you’re running. For example, your Integration level may
still join several roles together into a few machines, but when you get to Staging, it should be as
similar as possible to the production environment. Staging is that last technical test before you
start making changes in production itself so you need to get it right (see Figure 3.4).
You can even use VMs to simulate remote sites. Windows Server 2003 includes routing capabilities
that are very similar to those of Cisco’s devices. You can enable Routing and Remote Access
(RRAS) on two different VMs and use them to simulate the routing of data from a local to a remote
site. Then you can add branch office server roles to the site located behind the remote routing server.
In the end, your lab will need to provision several different testing levels and several different
environments. Remember that when you build the more permanent testing levels or
environments, you'll want to build redundancy into the lab infrastructure. For example, the
Staging testing level should have at least two domain controllers for redundancy. File servers
will need to have several different file shares to provide storage for PC images, software
repositories, and user data repositories. Deployment systems will need to support several features
such as inventory, PC image deployment, software deployment, status reporting and perhaps
software metering if you need it. Each level will also require some form of database server
because several deployment components require it. And if you simulate remote sites, you’ll also
need to include replication engines into your solution to replicate objects such as software
packages, PC system images, and so on.
Rely on WS03 R2 for all replication scenarios. WS03 R2 includes Distributed File System
Replication (DFSR), a new delta-compression replication engine that is a snap to configure
and replicates only the changes between files. It is much better and easier to use than either the
traditional File Replication Service (FRS) or scripts based on the Robocopy utility found in the WS03
resource kit. Try it, you'll love it!
Figure 3.4. The Different Roles required in the Virtual Lab
Reproducing Business Critical Applications
In some instances it is not possible to reproduce business critical applications—applications such as
SAP databases—within the lab environment. There are several strategies you can use (see Figure
3.5) depending on the size of your organization. Small organizations will want to link testing PCs to
the production SAP database and will need to proceed with utmost care. Medium organizations may
be able to generate a duplicate SAP database with scrubbed data. Large organizations will already
have development versions of their SAP databases and can link to those to perform application and
compatibility testing. In all cases, testing systems should be isolated from the production network as
much as possible.
In any event, make sure you fully back up all critical systems before you begin any form of testing
against them.
Figure 3.5. Complex Business Critical Testing Scenarios based on Organization Size
Requirements for each Testing Level
Table 3.1 outlines guidelines for the requirements for each testing level. Adjust them according
to your own needs.
Unit (discovery of new features)
  Virtual Machines: • PC Team: typical Windows XP PC and Bare Metal • Server Team: multi-purpose server plus Windows XP PC and Bare Metal
  Physical Machines: • None

Functional (automate features and obtain peer review)
  Virtual Machines: • All Teams: same as Unit
  Physical Machines: • None

Integration (link all aspects of the solution together)
  Virtual Machines: • All Teams: several single-purpose servers plus Windows XP PC, Bare Metal and Vista
  Physical Machines: • Testing stations: all models of retained configurations plus all external hardware devices

Staging (finalize all technical procedures and prepare for final acceptance)
  Virtual Machines: • All Teams: servers representing a small-scale production environment plus Windows XP PC, Bare Metal and Vista
  Physical Machines: • Same as Integration

Pilot (finalize all technical and administrative procedures)
  Virtual Machines: • Few VMs, if any
  Physical Machines: • Use production systems in preparation for massive deployment
Table 3.1. Requirements for each Testing Level
Each of the five environments has its own requirements, but fortunately, you can reuse many of
the requirements of a previous level for those of the next. Here's the breakdown of machine
requirements per level:
• Unit level: individual technicians work with their own machines stored on an external
disk and linked to their own PC.
• Functional level: as team members graduate from Unit to Functional, they reuse the
original individual machines they had in Unit since they are no longer needed at that
level.
• Integration level: all team members begin to share the same set of virtual and physical
machines as they begin to integrate their part of the solution into a single environment.
Change control is activated to capture all practices.
• Staging level: all technical team members share an environment which is a scaled-down
version of the production environment. Change control is absolute and no change can be
performed without being tracked. Procedures are completely finalized and tested from
end to end.
• Pilot level: all team members, including administrative staff, use production systems to
target a deployment to about 10% of the population.
In addition to the requirements for testing levels, you’ll also need to understand the requirements
for the three environments—Certification, Conversion and Acceptance—the lab will support.
The Certification requirements are listed in Table 3.1 because this environment comprises each
of the testing levels. The requirements for the other two environments are listed in Table 3.2.
Conversion (convert data and applications, both end-user developed and corporate)
  Virtual Machines: • Data Conversion: possibly a Vista PC linked to the Staging test level • End-User Apps: a Vista PC linked to either Integration or Staging • Corporate Apps: a Vista PC linked to either Integration or Staging
  Physical Machines: • All: PCs with access to the virtual environments • Corporate Apps: possibly Vista PCs with the Aero interface linked to either Integration or Staging

Acceptance (provide acceptance of software packages and of the overall solution)
  Virtual Machines: • Software Owners: Vista PC linked to Integration • IT Stakeholders: reuse
  Physical Machines: • All: PCs with access to the virtual environments • IT Stakeholders: all baseline systems retained to support Vista
Table 3.2. Requirements for Conversion and Acceptance Environments
Creating the Lab Environment
The actual creation of the laboratory environment is simpler once you understand how it should
work. If you want to get your technical team members going early, all you need to do is prepare
the virtual machines that are required for Unit and Functional testing. These are fairly quick to
prepare, but you should still proceed with care. Ideally, you will already have initiated the
acquisitions required for these VMs—external drives and perhaps high-performance PCs—and
have determined which virtualization technology you want to use as well as obtained the
virtualization software. This will be the first of many source installation files that the lab will be
responsible for maintaining. You will also need to obtain source installation media for all of the
operating systems you will be working with. To make them easier to work with, transform the
install media into ISO files, because ISOs can be mounted as CD drives in virtual machines.
Once you’re ready, prepare these two environments in the following order. Rely on an existing
workstation with sufficient capabilities to perform these steps.
1. Install the virtualization technology on a workstation with sufficient capabilities to create
virtual machines.
2. Create your first virtual machine, assigning RAM, disk space and network interface cards. It
is best to create a new folder with the machine name and store all VM component files into
this folder.
3. Connect the VM’s CD/DVD drive to the ISO file for the OS you want to install.
4. Perform the installation, running through your standard procedures for OS installation in
your organization.
If you also elected to obtain a physical to virtual conversion tool, you can also create this machine by
converting an existing physical machine into a VM.
5. Customize the machine as you normally would, update the default user and apply any
patches or updates that are required.
6. For any copy of Windows prior to Windows Vista, copy the following files from the Deploy.cab
cabinet file located on the appropriate Windows installation CD to a new folder called
SysPrep on the %systemroot% drive—usually the C: drive:
a. Setupmgr.exe
b. SysPrep.exe
c. Factory.exe
d. Setupcl.exe
7. Run Setupmgr.exe to generate a SysPrep.inf file. Use your organizational standards to
provide the answers required for this file. Close Setupmgr.exe.
8. Copy the VM’s entire folder, rename the folder and the VM files to “machinename
SysPrepped” and open the new machine in your virtual machine tool.
9. Run SysPrep.exe on the machine to select the Reseal option, depersonalize it, and shut it
down. You now have a source machine from which you can generate a number of copies.
10. Since the Unit and Functional levels require at least three source machines (WS03 Standard,
WS03 Enterprise and Windows XP), repeat this process until each one is created.
11. Create a fourth source machine with no OS. This will become the bare metal machine testers
will use for the Vista installation.
12. Document each machine, listing user accounts and capabilities, copy them onto the external
disks and provide the disks to your technical team members.
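Step 8 (copying and renaming a prepared VM's folder) is easy to script. A hedged sketch in Python, assuming the VM is stored as a folder of files whose names include the machine name, which is broadly true of both Virtual Server and VMware layouts, but verify for your tool; paths and the suffix are illustrative:

```python
import shutil
from pathlib import Path

def copy_for_sysprep(vm_folder: str, suffix: str = " SysPrepped") -> Path:
    """Duplicate a VM's folder so the copy can be resealed with SysPrep
    while the original source machine stays pristine."""
    src = Path(vm_folder)
    dst = src.with_name(src.name + suffix)
    shutil.copytree(src, dst)               # copy all VM component files
    for f in list(dst.iterdir()):           # rename files carrying the VM name
        if src.name in f.name:
            f.rename(dst / f.name.replace(src.name, dst.name))
    return dst
```

Run SysPrep inside the copied VM afterward, exactly as in step 9; the script only handles the file-level duplication and renaming.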
That’s it. Now that your team members are ready to proceed with their own work, you can move
on to create and prepare the working environment for the lab as well as preparing the Integration
and Staging environments.
Virtual Machines and Software Licensing
Even though you’re working with virtual machines, you still have to be conscious of licensing issues,
especially if you’re building a laboratory to last. For this reason, we don’t recommend using evaluation
copies of software or operating systems. Here are some general guidelines on how you should license
virtual machines. You should verify with your vendor to make sure these guidelines meet their licensing policies.
• SysPrep machine: A SysPrep machine does not require a license because it is used only to seed
other machines and doesn't actually get used as is. Once you've copied the SysPrep machine and
start personalizing it, you need a license for the copy.
• Running virtual machines: Each machine that is named and runs on a constant basis needs
its own license.
• Copied virtual machines: A copy of a virtual machine does not need its own license so long as
the copies are not running at the same time.
• Copied and renamed virtual machines: Each time you copy a virtual machine and rename it, you
need to assign a license to it. A renamed machine is treated as a completely different machine and
therefore needs its own license.
Using either the Microsoft Developer Network (MSDN) or TechNet Plus subscriptions, you have access to
ten licenses of each product, though each license needs activation. Both subscriptions support the legal
reuse of virtual machines.
If the host operating system is WS03 Enterprise Edition, then you can run four server VMs at no
additional cost. This makes a good argument for making the host systems run this OS.
More information on virtual machine licensing for Microsoft Windows Server 2003 can be found here:
When you’re ready to build the lab itself as well as Integration and Staging, you’ll need to
perform the following activities:
• Begin with the base server installation(s) for the host servers.
• Create file shares for the lab repositories.
• Install virtual machine (VM) software on the host servers.
• Create the VMs that simulate production servers and other aspects of the production environment.
You’ll find that you need a great deal of storage space. You must plan for storage space, keeping
in mind the following requirements:
Space for storing all lab data
Space for the installation media for all OSs and for all retained software products
Space for the images built during testing—images of both VMs and physical systems
Space for backing up virtual hard disks and duplicating entire virtual environments
On average, a minimum of 200 GB of space is required, but this number is affected by the
number of disk images, VMs, and application packages your project covers and it does not
include the storage space required on the host servers themselves.
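The storage categories above lend themselves to a simple planning sketch. All per-category sizes here are assumptions to replace with your own estimates; only the 200 GB floor comes from the text:

```python
# Rough storage planner for the lab file shares. Category sizes are
# placeholder assumptions; the 200 GB minimum is the chapter's guideline
# (and excludes the host servers' own storage).
STORAGE_GB = {
    "lab data": 20,
    "OS and software installation media": 30,
    "VM and physical system images": 80,
    "virtual hard disk backups and duplicated environments": 70,
}

total = sum(STORAGE_GB.values())
print(f"Planned lab storage: {total} GB")
assert total >= 200, "below the 200 GB minimum suggested for the lab"
```

Revisit the numbers as the project grows; images and VM backups are usually the categories that balloon.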
When you build the Integration and Staging test levels, remember that they need to be as similar
to production as possible. Also remember that they don’t need to be identical. For example, there
is no reason to use the same Active Directory forest or domain names since they do not affect the
PC deployment process. Ideally, you'll use proper names for AD components, names that make
sense and that you can keep on a permanent basis. In our opinion, it is always best to actually
acquire proper domain names for this purpose: they are permanent, and their low cost is not
a burden on the project's budget.
When you work on the client computer portion of the lab, design it to test the same functions and
features currently in use or planned for use in the production environment. Include the same types of
hardware, applications, and network configurations.
Remember, the hardware components of the lab can be limited to the actual computers targeted for
deployment and the host servers for VMs. In addition, the host servers can double as file servers for
lab data. There is no reason for the actual production server hardware to be duplicated so long as all
of the services provided in production are duplicated in VMs.
With the use of the right hardware, the lab can also be transformed into a portable environment.
Several manufacturers are now releasing very powerful portable hardware. For example,
NextComputing LLC offers a portable lab server through its NextDimension system. This system can
include multiple dual-core CPUs, loads of RAM, multiple serial ATA (SATA) drives with lots of space
and even multiple monitors, all in a transportable format. For more information on the
NextDimension, see the NextComputing Web site.
Once you’ve built out the entire lab including machines for use in each of the testing levels, you
can build a custom Microsoft Management Console (MMC) using the Remote Desktops snap-in
to create a single console that will give you access to all machines, virtual and physical (see
Figure 3.6). Add multiple copies of the Remote Desktops snap-in—one for each of the different
testing levels and one for the physical hosts—to differentiate between each level. Also, use
meaningful names so that you can easily identify which machine you are working with. Make sure Remote
Desktop connections are enabled in each machine and you can use this console to manage each
and every environment.
Figure 3.6. Multiple copies of the Remote Desktops snap-in in an MMC give you access to all testing levels
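If you prefer a scripted complement to the MMC console, you can also generate one .rdp file per lab machine, grouped by testing level, and hand the folder to team members. The machine names and folder layout below are purely illustrative; "full address" is the standard setting an .rdp file uses to name its target host:

```python
from pathlib import Path

# Generate one .rdp connection file per lab machine, grouped by testing
# level. Machine names here are illustrative placeholders.
LAB_MACHINES = {
    "integration": ["INT-DC01", "INT-FS01", "INT-XP01"],
    "staging": ["STG-DC01", "STG-DC02", "STG-VISTA01"],
}

def write_rdp_files(root: str = "lab-consoles") -> list:
    files = []
    for level, machines in LAB_MACHINES.items():
        folder = Path(root) / level
        folder.mkdir(parents=True, exist_ok=True)
        for name in machines:
            rdp = folder / f"{name}.rdp"
            rdp.write_text(f"full address:s:{name}\r\n")
            files.append(rdp)
    return files
```

As with the MMC approach, this only works if Remote Desktop connections are enabled on each target machine.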
The lab coordinator and the lab assistant are responsible for the construction of the lab. They will
draw upon some technical resources for this construction, and then put processes in place for
the reuse of VMs and other lab components. The lab coordinator will also be responsible for the
creation of a lab schedule and perhaps a team workspace. Microsoft SharePoint Portal Server is
ideal for the support of both of these tasks. You can use a copy from your MSDN or TechNet
licenses to put it in place. SharePoint is also ideal for the creation and maintenance of the
deviance registry—the tool you will use to track any issue, both technical and administrative, the
project will encounter as it creates the solution for deployment. Once again, the lab manager will
be responsible for the maintenance of this registry.
You can use either Windows SharePoint Services (free with a license of WS03) or SharePoint Portal
Server. SharePoint Services are easier to implement, but both require a back end database. In
addition, you can find a template for a project team site here. More templates for SharePoint can be
found here.
The lab includes project deliverables, the lab itself being one of them. Other deliverables include:
• Laboratory description: outlines the strategy you will use to create and implement the
environments supported by the lab.
• Technical laboratory deliverables: identifies how the deliverables from the laboratory
can be used in support of other testing or development scenarios. With virtual
laboratories in particular, it's really easy to include pre-constructed machines as
deliverables to other projects.
• Laboratory management practices: the practices you're going to use for the management
and operation of the laboratory.
• Future plans and projected growth: looks beyond the immediate and covers both best
practices and recommendations for future lab usage as well as for the creation and
management of a distributed virtual laboratory structure as more and more members of
the organization require access to virtual technologies.
When the technical builds of the test levels are complete, they should be fully tested to ensure
that they do indeed reproduce the production environment and that they will support testing of
the entire deployment solution, including image and software deployment.
Then, once this is complete, your lab will be ready for prime time and will be able to support all
aspects of the solution preparation.
You can build your own lab, or if you want to fast-track this aspect of your project, you can rely on
virtual appliances—pre-built VMs that include all of the required features to support each testing
level—that have been prepared by third parties. Click here to find out more about virtual appliances in
support of the Desktop Deployment Lifecycle.
Chapter 4: Building the Migration Toolkit
Now that the test bed is ready and some members of your team are in the process of beginning
the unit tests they need to complete to get the engineering aspects of the migration project going,
you can begin the selection or verification process for the different tools your project will
require. The toolkits you will need to rely on cover three different procedural and technical requirements for this project:
• General project guidance as well as technical guidance for the engineering aspects of the migration.
• Tools to provide support for the administrative and other processes the migration project itself will require.
• Tools to support the actual migration.
The first two help provide structure for the project and all of the technical processes the project
requires. In addition, the technical guidance obtained for the project will support the third
requirement: tools to sustain the actual migration. For this, you should rely on the seven-step PC
Migration Cycle (PMC) introduced in Chapter 2. Ideally, the tools you select for this project will
be tools you can carry forward into production once the project is complete. As such, this toolkit
must also aim to support ongoing management and administration of Vista PCs once they are
deployed into your network.
Technical and Administrative Migration Guidance
To properly run a migration project, especially a PC migration project, you need to follow some
form of guidance, ideally, best practices guidance. Up until now, there has been very little such
guidance available and each and every organization had one of two choices:
• Build their own guidance through actual project realizations, or
• Hire high-powered consultants or consulting firms to provide guidance they have developed over years of practice in this field.
Both had their own risks. The first led you to build your own expertise through trial and error,
most often errors. While this is an effective way to build internal expertise, it can be costly,
especially in terms of potential loss of data or a potentially disgruntled user base. The second can
also be costly, although the project’s end result should be of better quality.
Fortunately, things have changed. Microsoft and other vendors have realized that it is important, if not essential, to provide guidance to client organizations on how migration projects should be
run. The result has been the publication of white papers and other documentation on PC
migrations along with tools that are designed specifically to assist in the migration process.
Of course, we like to think that this particular guide is an excellent source of information on PC and
other migrations as it stems from our own experience as high-powered consultants providing this type
of guidance to our clients. But it is important for us to point you to any other materials you can rely on
to supplement what we provide here. This way, you won’t forget anything in this most important
project and make it the resounding success it should be.
Microsoft has been trying to alleviate the lack of guidance through the release of its own
Business Desktop Deployment Solution Accelerator (BDD). BDD has been mentioned before,
but it is important to note that BDD, especially the new 2007 version, is not just a set of tools,
but that it also includes guidance for the migration. What’s different with BDD 2007 is that
Microsoft has now generalized the guidance, removing any mention of its own particular tools in
support of migrations. This means that Microsoft partners that specialize in migrations and
system deployments can adapt the BDD guidance to their own toolkits, while relying on the
generic guidance it includes.
The BDD guidance can be found at
Partners Altiris Corporation and LANDesk Corporation have done so and supplement BDD with
their own recommendations, based, of course, on the tools they provide to support OS migrations.
As you may have noticed as you read through the BDD guidance, it provides a ton of
information on the migration process. This is why both partner organizations have kept their own guidance shorter and more to the point, using a 'notes from the field' approach drawn from extensive hands-on consulting experience with actual customers. Though both Altiris and
LANDesk provide supplementary guidance, they are not the only ones to do so. Several Web
sites provide additional guidance on OS migration. This is one reason why you should also
research the terms ‘OS Migration’ on the Web using your favorite search engine if you feel that
you’re still missing information after using all of these different sources, including this eBook.
Altiris migration guidance can be found at
LANDesk migration guidance can be found at
Realtime Publishers also has a Vista community Web site that is chock full of information. Find it at
Other Web sites are also useful:
AppDeploy talks a lot about application packaging:
MyITForum also includes a lot of information about deployments:
Desktop Engineer is also a site that offers good migration advice:
Of course, Resolutions Enterprises Ltd. has a ton of articles on migration and other management
practices at
Along with this structural guidance, you'll need technical guidance—information on how the technical tools you need actually work and how you should use them. Because this guidance will be required for each of the different steps in the migration process and because it is also required in support of other, related technical processes, it will be addressed as we discuss each tool you need in the remainder of this chapter.
Note that this chapter has two goals. The first is to provide you with sources of information in support
of building your migration toolkit. The second is to help you build the list of requirements for each tool
you will need so that you know how to proceed during the selection of the tool itself.
Project Support Tools
As mentioned in previous chapters, you need a series of different tools to support the
administrative processes tied to this project. These tools range from simple spreadsheet
management tools to sophisticated collaboration systems you can use to share information with
the entire project team.
A project such as this requires a lot of coordination. Several actors are involved, each with their
own tasks to perform at different levels of the process. More detailed information about each task
will be covered in coming chapters, but if you rely on the QUOTE System as outlined in Chapter
2, you’ll at least have a good starting point on who needs to do what. Also, Chapter 2 provided a
sample project plan to help you get started with your own. This project plan was prepared in
Microsoft Project Professional 2003 and can also be used in Microsoft Project 2007. You’ll need
this tool if you want to begin the process. In fact, Microsoft Office or some other office-like suite
is an important part of the project management and administrative process:
• Microsoft Word is used to prepare the Business Case (also provided as a template download for Chapter 1), as well as for supporting documentation for each of the technical or engineering aspects of the project. It is also a must-have tool for the communications and training groups.
• Microsoft Excel is required to manage several lists the project will produce: inventory validations, financial analyses, issue tracking and so on.
• Microsoft PowerPoint is used for the preparation of presentations to both officials and to introduce the project to new team members. It is also of high value for training and communications activities.
• Microsoft Visio is essential to prepare network diagrams or logical constructs of the technical solution as it is devised.
It is often a good idea for project team members to use the latest and greatest technology, or at
least the technology they plan to deploy among themselves. Using the latest technology acts as a
proof of concept and helps program members learn how the new technology will benefit their
organization. If you plan to include Microsoft Office 2007 in your deployment, then you should
aim to deploy this tool as early as possible to all team members so that they can begin to use its
added functionality in support of the project.
There are several options for the deployment of these new tools to the team. One of the best and
easiest options is to deploy these tools inside virtual machines and make the virtual machines
available to team members. But most often, project teams will elect to deploy PC operating
systems (OS) and productivity suites directly onto team members’ physical systems, while
deploying server-based technologies onto virtual instances of the servers. This is often the easiest way to introduce these technologies in a manner that allows the project team to benefit from advanced feature sets immediately without disrupting the production network.
Creating virtual instances of servers would then be ideal for two key products:
• Microsoft Office Project Server 2007, which helps regroup project information into a central location, distributing workloads and task assignments to each team member and updating the central repository as tasks are completed.
• Microsoft Office SharePoint Server 2007, which regroups the functionality of both SharePoint Portal Server 2003 and Content Management Server 2002, providing a single repository for collaborative efforts including change control and content management.
As mentioned in Chapter 3, creating a collaborative environment is an excellent way to share
information with the different members of the project team. It lets you store all project
information and project updates in one single location, it lets non-technical teams such as training and communications, and even upper management, keep up with project progress and it
serves as a mechanism to store and protect all project information—information that can be
carried over to the production environment once the project is complete.
To build the project collaboration environment, you can use either Windows SharePoint Services (free
with a license of WS03 or incorporated into WS03 R2) or SharePoint Portal Server. SharePoint
Services are easier to implement, but both require a back end database. In this case, you should rely
on Microsoft SQL Server 2005 rather than a runtime version of Microsoft’s database because it is
more robust, more secure, and will make it easier to carry over information once the project is complete.
The collaboration environment will let you store critical components of the project in one
location. These include:
• Project membership lists
• Project plans and task lists
• Project schedules
• Laboratory coordination information such as the lab welcome kit, lab schedule and calendar, lab request sheets, and lab testing sheets
• Deviance registry, or the registry of all of the issues that come up during the project
• Project presentations and other documentation
• Project member My Sites, which can be used to list team member expertise in support of a skills cross-reference system where team members know who to turn to when they need help
Much more can be added to the SharePoint site. Of course, you will have different features based
on whether you are working with SharePoint Services or Office SharePoint Server. The version
you choose will of course depend on the size of your project and the scale of your project team,
but if you can, you should opt for Office SharePoint Server since it will provide much more functionality including search, change control, blogs (which can be used to provide in-depth information about issues or approaches), wikis, and more (see Figure 4.1).
Figure 4.1. Using Office SharePoint Server to build the collaboration environment
Using a Deviance Registry
Perhaps the most important document of the entire project, the deviance registry is used to record
deviances between the technological solution and the logical design prepared at the very
beginning of the project. This issue-tracking tool, often called a bug tracking or change log tool,
will list the different elements of the logical architecture for the deployment and be used to
record any technical element that deviates from expected behavior. Issues are classified into
different categories and are assigned priorities. To keep it simple, use three different categories:
1. High: these issues are show-stoppers and must be resolved before the project can move forward.
2. Medium: these issues are important, but can be resolved in parallel as the project continues to progress.
3. Low: these issues are 'nice to haves'; they would be nice to include in the solution, but if budget controls do not provide allowances for fixes to these issues, they will not be resolved before the solution is finalized.
In addition, you will want to prioritize the issues within each category so that you can determine
in which order the issues need to be dealt with. Perhaps the best person to manage the deviance registry is the lab coordinator as this role is the closest to both the lab schedule and the technicians
working on the engineering aspects of the solution.
Make sure that you don’t only include technical issues in your deviance registry. Administrative
issues such as those related to communications and training are also important to track as they will have a direct impact on the way your customers—end users—view the project as a success
or failure.
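To make the scheme above concrete, the deviance registry can be sketched as a small data structure sorted by category and then by in-category priority. The field names and sample issues here are purely illustrative assumptions, not part of any specific tracking tool:

```python
from dataclasses import dataclass

@dataclass
class DevianceEntry:
    """One issue in the deviance registry (field names are illustrative)."""
    title: str
    category: int      # 1 = High (show-stopper), 2 = Medium, 3 = Low
    priority: int      # order within the category (1 = most urgent)
    technical: bool    # False for administrative issues (communications, training)
    resolved: bool = False

def work_order(registry):
    """Unresolved issues, highest category first, then by in-category priority."""
    open_issues = [e for e in registry if not e.resolved]
    return sorted(open_issues, key=lambda e: (e.category, e.priority))

# Hypothetical sample entries, including one administrative issue.
registry = [
    DevianceEntry("Image fails on HAL type X", category=1, priority=1, technical=True),
    DevianceEntry("Training deck outdated", category=2, priority=2, technical=False),
    DevianceEntry("Wallpaper branding missing", category=3, priority=1, technical=True),
    DevianceEntry("Driver missing for NIC Y", category=2, priority=1, technical=True),
]

for entry in work_order(registry):
    print(entry.category, entry.priority, entry.title)
```

A SharePoint list with the same columns gives you this ordering for free through sorted views, which is one reason the registry fits naturally into the collaboration environment.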
To assist with the creation of your new collaboration environment, a Microsoft Office SharePoint
Server 2007 template for a team site including a deviance registry list is available on the companion Web site. This template includes a sample deviance registry which helps you identify how to categorize issues.
Lab Management Tools
Another required technical tool is a tool to manage the virtual machines running in the
laboratory. Several types of tools can do this, but only a couple are specifically oriented at
laboratory management. Of course, the type of tool you use will also depend on the virtualization
provider you select.
If you’re working with Microsoft Virtual Server, then you can use Microsoft’s upcoming System
Center Virtual Machine Manager or VMM (see Figure 4.2). VMM is not really designed to
manage testing laboratories, but with its ability to provision virtual machines (VM) from source
images and to monitor as well as manage virtual machines on multiple hosts at the same time, it
offers powerful management and administration capabilities.
At the time of this writing, VMM is still in beta and the only way to get a hold of it is to join the beta program. You'll have to request to join the beta and be qualified to do so.
More information on VMM can be found at:
Figure 4.2. Virtual Machine Manager can be used to control multiple testing VMs
If you’re running VMware ESX Server version 3.0.1 or later as a virtualization engine, then you
might consider using VMware Lab Manager. Lab Manager is a very powerful tool that is
designed to capture and manage entire series of virtual machines in support of both product
testing and product development. For example, Lab Manager could capture the entire state of the Integration testing environment, store it, save it, and restore it as needed. Most small to medium
organizations will not have access to ESX Server as it requires shared storage solutions and
powerful hardware to run.
Information on VMware Lab Manager is available at
Information on VMware ESX Server is available at
If you’re using other VMware products such as VMware Server or VMware Workstation or if
you’re using Microsoft Virtual Server and you don’t have access to Virtual Machine Manager,
you will have to turn to other solutions to manage the multiple machines and multiple hosts that
your lab will require.
Fortunately, several manufacturers of OS deployment tools also provide support for the
management of both virtual machines and the hosts that run them. After all, provisioning a
virtual machine is very much like provisioning a physical machine. The only real difference is
that when you provision a virtual machine, you are going from a set of files on a disk to another
set of files on a disk. When you provision a physical machine, you go from a set of files on a
disk to an actual disk drive.
If, as you should, you decide to use the same tool for OS deployment and for lab management, then make sure it includes the following features:
Support for the virtualization technology you use, either Microsoft Virtual Server or
VMware Server.
Support for host management and host resource monitoring.
Support for host event gathering to quickly identify potential issues on host servers.
Support for virtual machine image creation (using the Windows SysPrep command).
Support for starting, stopping, pausing, and saving the state of virtual machines.
Support for the management of virtual machine components such as hard disk drives,
network configurations, virtual local area networks (VLAN) and so on.
These are just some of the requirements you’ll need to be able to manage your virtual laboratory.
Of course, you can always just rely on the default management interfaces your virtualization
technology provides, but if you’re going to acquire a set of deployment tools, why not kill two
birds with one stone? This will save you considerable time and effort throughout the entire
project. Our recommendation: Choose the right tool and manage both the lab and the
deployment with one single interface.
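The start/stop/pause/save requirement in the checklist above amounts to a small state machine. The sketch below uses invented state and action names for illustration; it is not tied to the Virtual Server or VMware management interfaces:

```python
# Minimal sketch of the VM lifecycle operations a lab management tool
# should support; the states and transitions are illustrative only.
VALID_TRANSITIONS = {
    ("stopped", "start"): "running",
    ("running", "stop"): "stopped",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "save"): "saved",   # save state to disk, then release the host
    ("saved", "start"): "running",
}

class VirtualMachine:
    def __init__(self, name):
        self.name = name
        self.state = "stopped"

    def apply(self, action):
        next_state = VALID_TRANSITIONS.get((self.state, action))
        if next_state is None:
            raise ValueError(f"cannot {action} a {self.state} VM")
        self.state = next_state
        return self.state

vm = VirtualMachine("vista-integration-01")
vm.apply("start")   # stopped -> running
vm.apply("save")    # running -> saved
print(vm.state)
```

Whatever tool you pick, verifying that it exposes each of these transitions (and refuses invalid ones) is a quick way to test it against the checklist.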
Microsoft is offering free virtual labs for you to begin testing early. Since they are hosted over the Web, don't expect the performance levels you would normally get in your own lab.
The Windows Vista Migration Toolkit
Now you’re ready to start building your Vista migration toolkit, or the toolkit that is specifically
aimed at the PC Migration Cycle (see Figure 4.3). As is illustrated in this cycle, you’ll need tools
that will support six specific steps in this cycle: readiness assessment, personality management,
OS image creation, application packaging, software deployment, and status reporting.
Figure 4.3. The PC Migration Cycle
There are a lot of commercial and non-commercial tools on the market to support migrations. We
include here the ones that seem the most popular and to us, seem to offer the best features,
particularly in regard to Windows Vista. Don’t take this as meaning that if you are using a tool or if you
prefer a tool that is not listed here, it is not a valid tool to use. On the contrary, use the guidelines and
requirements listed here to make sure your tool does meet the needs of a Vista deployment. And, if
you find that it is a valid tool, email us at [email protected] and we’ll try to include information
on it in upcoming chapters.
Inventory and Asset Management Tools
A lot has been said in terms of readiness assessment to date, but if you need to evaluate if your
readiness assessment tool is ready to support your migration to Vista, or if you need to select a
readiness assessment tool to support your migration, then make sure it meets the following requirements.
First, it must evaluate both hardware and software readiness.
In terms of hardware readiness, it must be able to scan for:
o Disk size and configuration
o Network devices
o Attached devices
Most hardware inventory tools can do this by default. But because Vista's requirements, especially those for Vista Premium PCs, take into account the graphics card and, especially, the amount of dedicated RAM available to it, you need to be able to scan for this information. Also, if you intend to use Vista features such as ReadyBoost—the
ability to add dynamic memory to systems through flash or USB memory devices—
you’ll need to know if your systems include either extra USB ports or flash card readers.
So, you’ll need to make sure the inventory tool you select includes the ability to scan for
the following:
o Graphics device
o Amount of dedicated RAM on the graphics device
o USB ports, busy and free
o Flash card readers
This means that the inventory tool must either include this capability right out of the box or have the ability to add this particular capability through custom edits and modifications to the default collection set.
In terms of software readiness, it must be able to scan for:
o Commercial single applications
o Commercial application suites
o Utilities such as antivirus, disk defragmentation, anti-spyware and so on
o Application runtimes such as Microsoft Access, Visual Basic, Visual C++
o Custom, in-house applications
o End user-developed applications such as Word macros and templates, Excel macros,
Access databases and so on
o Custom scripts or command files
One of the most important things to look for is to ensure that the inventory tool will not list the series of dynamic link libraries (DLL), executables, command files, or any other components that make up an application. It needs to be able to roll up individual
components into a legible and recognizable application name as well as roll up individual
applications into application suites if they are installed. For example, Microsoft Office
should read as a suite and not individual components even if they are not all installed.
Similarly, Microsoft Word should not be represented as WINWORD.EXE or a series of
dynamic link library (DLL) files.
Ideally, the software readiness assessment tool would also be able to identify usage
information. If an application is installed on a system and it has not been used for over six
months, then you might be able to consider it for the process of rationalization—reducing
and removing obsolete or unused applications from the network prior to the deployment
of this new operating system. If usage information is not available, you’ll have to verify
with individual users whether they actually need each and every application found on their systems.
Finally, the software readiness assessment tool should be able to identify custom
corporate applications, or at least be extensible to the point where you can add definitions
of applications it should scan for.
In terms of readiness reporting, the inventory tool should be able to produce:
o Hardware assessment reports categorizing computers according to whether they are
ready, require upgrades or are simply unsuitable for any business edition of Vista.
o Software assessment reports categorizing software based on compatibility with Vista and possibly including integration with Microsoft’s Application Compatibility Toolkit to provide comprehensive reports on how applications will interact with Vista.
o Computer group reports by region, by IT role, by principal owner, and by suitability to Vista.
o Cost analysis reports comparing cost of upgrade versus cost of replacement, including
labor costs for the upgrade.
o Ideally, reports that integrate information from your inventories to information from
the purchasing contracts you rely on to acquire both hardware and software as well as
links to support or warranty contracts for each product.
In addition, the inventory tool should include the ability to easily create custom reports to
suit the requirements of your own situation.
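The two checks that matter most above, component roll-up and hardware classification, can be sketched in a few lines. The roll-up tables are invented samples, and the hardware thresholds follow the widely published "Vista Premium Ready" figures (1 GB RAM, 128 MB of graphics memory, 40 GB disk); verify both against your tool's documentation and Microsoft's current guidance before relying on them:

```python
# Assumed roll-up tables: component executable -> application -> suite.
EXECUTABLE_TO_APP = {
    "WINWORD.EXE": "Microsoft Word",
    "EXCEL.EXE": "Microsoft Excel",
}
APP_TO_SUITE = {
    "Microsoft Word": "Microsoft Office",
    "Microsoft Excel": "Microsoft Office",
}

def roll_up(executables):
    """Collapse raw executables into legible application/suite names."""
    apps = {EXECUTABLE_TO_APP.get(e, e) for e in executables}
    return {APP_TO_SUITE.get(a, a) for a in apps}

def classify_pc(ram_mb, gpu_ram_mb, disk_gb):
    """Bucket a machine as ready / upgrade / unsuitable (thresholds assumed)."""
    if ram_mb >= 1024 and gpu_ram_mb >= 128 and disk_gb >= 40:
        return "ready"
    if ram_mb >= 512:      # assumption: worth the cost of an upgrade
        return "upgrade"
    return "unsuitable"

print(roll_up(["WINWORD.EXE", "EXCEL.EXE"]))
print(classify_pc(ram_mb=1024, gpu_ram_mb=256, disk_gb=80))
```

A real inventory tool does the same thing against a much larger recognition database; the point of the sketch is the shape of the report, not the tables themselves.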
Potential Inventory Tools
There are several types of inventory tools: interactive tools for very small organizations, automated
tools for larger organizations, free tools, and commercial tools. Several have been mentioned in
previous chapters, but they are included again here as a recap.
Microsoft offers several free tools: The Vista Upgrade Advisor:;
The Vista Hardware Assessment (still in beta at the time of this writing):;
The Microsoft Application Compatibility Toolkit which offers very rudimentary inventory features:
Commercial inventory and assessment tools include:
Altiris Inventory Solution:;
LANDesk Inventory Manager:;
Microsoft Desktop Optimization Pack for Software Assurance, especially the Microsoft Asset
Inventory Service:;
Microsoft Systems Management Server 2003:;
Specops Inventory:;
Symantec Ghost Solutions Suite:
Personality Migration Tools
The second part of any PC migration involves capturing and protecting personality
information— personal user settings, applications settings, personal data files and folders. In
Windows 2000 and XP, most of these settings will be stored in the Documents and Settings
folder where each user that logs on to the computer system will have a folder of their own to
store this information. This is called the user profile. Profiles are created each time a user logs on
to a computer for the first time. They are generated from the Default User profile so if your
Default User profile has been properly created, then there is some consistency between each user
profile and each user experience; if not, then profiles will be widely different.
In organizations using Active Directory domains, two types of profiles are stored on each
computer: local and domain profiles. In many cases, local profiles do not need to be captured if
domains are present because only technicians and support staff log on locally. This means you
only need to save and migrate the domain profiles. If you are not in a domain or if the machines
you are migrating are not domain members, then you’ll need to save local profiles. If, for some
reason, you’re using both local and domain profiles then you’ll need to save both.
Profiles have changed in Windows Vista. Vista no longer includes a Documents and Settings
folder. This folder has been replaced by a number of folders. The Users folder now stores user preferences and user data, and the ProgramData folder now stores application settings.
For these reasons, the tool you will select should include the following capabilities:
Inventory profiles without capturing them to support profile analysis.
Capture local profiles.
Capture domain profiles.
Capture single or multiple profiles.
Restore single or multiple profiles.
Analyze profile usage to help determine obsolete profiles.
Filter out unwanted profiles from the capture.
Filter out unwanted profiles from the restore, letting you capture profiles for backup and
restore only selected profiles to target machines.
Store profiles in a variety of locations: local hard disk, network drive, or CD/DVD as required.
Restore profile settings to appropriate locations for Windows Vista.
Support either x86 or x64 systems in both captures and restores.
Capture custom folders, for example, a C:\LocalData folder for information not stored
into the default My Documents folder.
Capture legacy application settings that may be stored in the program’s directory instead
of proper locations.
Capture settings from one version of an application and restore to a newer version,
supporting the concept of application rationalization or the reduction of multiple versions
of one product.
Scour the local hard disk for data and documents that have not been stored in appropriate locations.
Support execution from a remote location (network drive) or interactively from a local drive.
Support full automation for the capture and restore processes.
Support the generation of automated executables or scripts for execution in disconnected environments.
Include encrypted administrative credentials or support the ability to run in protected user mode.
Integrate with an automated OS deployment tool to provide an end-to-end deployment process.
Provide reports on profiles to capture in support of workload estimates.
Ideally, this tool will also include a graphical interface to facilitate the profile capture and restore process and perhaps wizards to illustrate the required step-by-step process. The selected tool should also support extensibility to provide full functionality.
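The filtering capabilities in the list above (capture domain profiles only, skip obsolete ones) reduce to a simple selection rule. The profile records and thresholds below are invented for illustration; real tools read this data from each machine's profile store:

```python
from datetime import date

# Hypothetical profile inventory for one machine.
profiles = [
    {"name": "DOMAIN\\aruest",   "domain": True,  "last_used": date(2007, 1, 10)},
    {"name": "PC01\\techlocal",  "domain": False, "last_used": date(2006, 2, 1)},
    {"name": "DOMAIN\\olduser",  "domain": True,  "last_used": date(2005, 6, 1)},
]

def select_for_capture(profiles, today, domain_only=True, max_age_days=365):
    """Keep domain profiles used recently; stale or local profiles are skipped."""
    selected = []
    for p in profiles:
        if domain_only and not p["domain"]:
            continue                                  # local profile: technicians only
        if (today - p["last_used"]).days > max_age_days:
            continue                                  # obsolete: not worth migrating
        selected.append(p["name"])
    return selected

print(select_for_capture(profiles, today=date(2007, 3, 1)))
```

Running the selection first, before any capture, is what lets the tool produce the workload estimate reports mentioned above: you know exactly how many profiles each machine will contribute.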
Potential Personality Migration Tools
If you’re migrating single profiles or profiles from one single computer to another, you should use the
Windows Easy Transfer (WET) tool that is integrated into Windows Vista (see Figure 4.4). WET is
useful for single PC to PC migrations but is not appropriate for multi-system deployments.
There are several personality migration tools on the market. Free tools include:
Microsoft User State Migration Tool (USMT, a command line tool):
Commercial personality migration tools include:
Altiris PC Transplant Solution:;
CA Unicenter Desktop DNA:;
LANDesk Management Suite (full migration suite, not only personality protection):
Figure 4.4. The Windows Easy Transfer tool
Operating System Imaging Tools
Organizations have been using disk imaging in support of OS migrations for several years. The
ability to prepare a reference computer interactively, depersonalize it and then capture its image
for reproduction to multiple systems is truly a godsend when it comes to migration projects. This
is one reason why Microsoft has introduced Image-Based Setup (IBS) with Windows Vista.
Because of the new IBS, Vista now supports the WIM file-based imaging format which, unlike
disk images, does not take an exact copy of the sectors of a hard disk, but captures the files that
make up the Windows installation on that hard disk. Microsoft is not the only organization to do
this as manufacturers such as Altiris Corporation have been supporting file-based imaging for
years. But the advantage of a file-based image is clear: it is much easier to service since it is a
mountable image that can be manipulated without having to recreate the reference computer.
Managing images is an ongoing process; that’s because they have to be updated whenever
patches or updates are released for the components they include. And, because of the way
Windows relies on hardware abstraction layers (HAL) that are different from computer
configuration to computer configuration, organizations traditionally needed several images, one
for each HAL they managed. With IBS, Vista does away with the need for HAL-dependent
images. Also, because Vista provides a core OS with a separate language layer, it does away
with language-dependent images as well. The combination of these two features means that you
should be able to reduce the total number of images you manage to one single image per
processor architecture—that is, one image for 32-bit and one for 64-bit processors.
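The arithmetic behind that reduction is worth making explicit. The HAL and language counts below are invented for illustration; plug in your own numbers:

```python
def images_required(hals, languages, architectures,
                    hal_independent, language_independent):
    """Count the OS images an organization must maintain.

    Legacy imaging multiplies HALs and languages together; Vista's
    Image-Based Setup removes both factors, leaving one image per
    processor architecture.
    """
    per_arch = 1
    if not hal_independent:
        per_arch *= hals
    if not language_independent:
        per_arch *= languages
    return per_arch * architectures

# Hypothetical shop: 4 HAL types, 3 languages, x86 only, legacy imaging.
legacy = images_required(4, 3, 1, hal_independent=False, language_independent=False)
# Same shop on Vista with IBS, now supporting both x86 and x64.
vista = images_required(4, 3, 2, hal_independent=True, language_independent=True)
print(legacy, vista)   # 12 images shrink to 2
```

Every image you no longer maintain is one fewer image to patch each month, which is where the real savings accumulate.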
Image deployment is also important. Most manufacturers of disk imaging technologies have
ensured that they also provide a tool in support of the deployment of those images. These tools
deploy images through multicasting as opposed to unicasting. Images can be as large as several
gigabytes and when they are sent through unicasting, they must create a data stream for each
endpoint to which the image is sent. When sent through multicasting, only a single data stream is
delivered to multiple endpoints. This could make or break many deployments. The default WIM
image for Windows Vista—without any customizations—is 2.24 GB in size. Send that to 100
desktops overnight using unicast and you’re delivering 224GB of data. Send the same stream
over multicast and you’re delivering 2.24 GB. Is multicast worth it? You do the math.
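The math is simple enough to write down, using the 2.24 GB default image size quoted above:

```python
# Bandwidth cost of deploying one Vista WIM image to many desktops.
image_gb = 2.24    # default, uncustomized Vista WIM image
targets = 100      # desktops deployed overnight

unicast_gb = round(image_gb * targets, 2)  # one data stream per endpoint
multicast_gb = image_gb                    # one stream shared by all endpoints

print(unicast_gb, multicast_gb)
```

The gap only widens once you add applications and updates to the image, so multicast support belongs near the top of the selection criteria below.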
For these reasons, the imaging tool you select should include the following features:
Capture an image from a reference computer.
Support HAL-independence in the image.
Support multicast deployment (this may come from a secondary tool or utility from the same vendor).
Support the ability to mount the image for examination or update.
Provide a graphical interface for image capture.
Support the generation of x86 and x64 images.
Support the NTFS file format for the images.
Support the capture of an image from a remote computer.
Support the automation of the image depersonalization through Microsoft’s SysPrep tool.
Support the capture and deployment of images using BitLocker, which requires two
partitions, one of which is encrypted. This feature may require a script to deploy the
partitions in unencrypted format and then encrypt the BitLocker partition once the image is deployed.
Support language independence for Vista OSes.
Support the application of multiple language packs once the image is deployed.
Support automation to script the image installation.
Support the integration of the image capture and deployment to the overall OS
deployment process.
In addition, you may want to make sure your OS image capture tool can integrate with the
Microsoft WIM image format.
Potential OS Imaging Tools
Microsoft offers two tools in support of its WIM image format:
ImageX, a command-line tool for capturing and installing images, and the Windows System Image
Manager, a graphical tool for editing image contents and generating the corresponding
Unattend.XML file. Both are included in the Windows Automated Installation Kit (Windows AIK).
The Windows AIK also includes the Windows Preinstallation Environment (Windows PE), which is
used to deploy the image.
Microsoft also offers Windows Deployment Services (WDS) as an update to Remote Installation
Services. WDS is designed to work with WIM images for OS deployments, including bare-metal
installations. Anyone with a license for Windows Server 2003 (WS03) will have access to WDS;
the WDS update for WS03 is included in the Windows AIK.
Commercial OS imaging tools will both capture the image and deploy it. They include:
Acronis True Image and Snap Deploy
Altiris Deployment Solution (a full migration suite, not only image capture and deployment)
LANDesk Management Suite (a full migration suite, not only personality protection)
Microsoft SMS 2003 with the Operating System Deployment Feature Pack
Symantec Ghost Solution Suite
Application Preparation Tools
Application management and preparation is often the biggest challenge and the most demanding
effort in an OS migration project. This is why it is so important to apply a rationalization
process to applications (removing any applications that offer duplicate features and consolidating
multiple versions of the same application in your network) before any other activity. Rationalization
helps control the effort required to prepare and deploy applications. But before you can
rationalize applications, you have to first inventory them, then analyze them for compatibility. If an
application proves to be completely incompatible, it might be a prime candidate for removal.
Inventory tools will offer some information on application compatibility, but the only tool that
will tell you if the application will run on Vista is Microsoft’s Application Compatibility Toolkit
(ACT). That is because ACT is designed to collect information on the different applications
organizations run and allow them to share information about an application’s compatibility
anonymously through the Application Compatibility Exchange. Then, when you run ACT in
your own network, you can collect information about the applications you run from the
Exchange and learn from information that has been shared by others. Because of this, ACT
should definitely be part of your application readiness toolkit.
Application Compatibility Resources at Microsoft
Microsoft offers several tools and lots of information on application compatibility. A full list of these
resources, organized by IT role, is available on the Microsoft Web site.
For information on application development issues in Windows Vista, see Application Compatibility in
Vista: Working as a Standard User on the Microsoft Web site.
But compatibility testing is not the only requirement for application preparation. You also need:
Tools to package applications for automated installation.
Tools to virtualize applications if you choose to integrate this process into your migration project.
Tools to deploy applications once they have been packaged.
This will round out your application preparation toolkit.
Application Packaging Tools
Software packaging is a very important part of any OS deployment or migration. That’s because
if you want to automate the process from A to Z, you’ll need to have applications in a format that
will automatically install them without user intervention. In addition, if you choose to lock down
all computers as you should, you'll need to make sure applications install silently in the
background, even and especially when users do not have administrative credentials. Fortunately,
Microsoft has done a lot of work in providing a service which can help standardize all
application installations: the Windows Installer Service (WIS). WIS has been updated for
Windows Vista and is now in its fourth iteration. WIS 4.0 includes support for the new Restart
Manager, a tool that will automatically shut down and restart open applications when
installations occur, as well as for User Account Control.
Organizations have been working with WIS for several years and many will already have WIS
packages for each and every application, but because of the changes in Windows Installer 4.0,
you’ll need to make sure your existing packages will run with this service. This compatibility
check is one of the most important features you’ll need in the application packaging tool you
select. In addition, this tool should include the following features:
Full support for all WIS features.
Support for .NET Framework installations.
Support for Terminal Services installations.
Repackaging of legacy installations to the Windows Installer Service.
Editing of Windows Installer (MSI) files.
Creation of transform (MST) and patch (MSP) files.
Support for editable exclusion lists—lists of items to omit in a capture—with explanatory
text for the exclusion.
Support for the inclusion of scripts to the MSI file to include activities such as security
modifications for repackaged legacy software.
Support for the Windows Logo Guidelines with regard to Windows Installer integration.
The ability to inventory all of the packaged components.
The ability to store package inventories in a database containing all of a system’s
components in order to detect and resolve conflicts.
Support for the inclusion of variable installation sources within packages.
Validation of existing MSI packages for WIS 4.0.
Support for Vista Standard User installations.
If you have a large organization, you might also be interested in workflow support for packaging
teams as well as change control and tracking for each package, especially if roll-back is required.
Potential Application Packaging Tools
Microsoft does not offer an application packaging tool, though several others do. The most popular
are from two vendors, though they are by no means the only manufacturers to offer these tools:
Altiris Wise Package Studio (WPS)
Macrovision AdminStudio
Macrovision also offers a workflow management tool for organizations wanting to control these types
of processes. Macrovision Workflow Manager even includes a custom workflow in support of
migrations based on Microsoft's BDD. More information on this workflow can be found on the
Macrovision Web site.
For a more complete deployment tool set, Altiris offers the Migration Suite, which includes all of the
components of the Deployment Solution plus WPS, providing complete support for the PC Migration
Cycle.
Software Virtualization Tools
As mentioned in previous chapters, new processes have emerged for the delivery and
management of software on desktops and servers. These processes virtualize the software layer,
protecting the operating system from any changes performed by a software installation. They do
this by abstracting software layers from operating system layers. In addition, these tools provide
support for the centralization of software sources, letting you update all applications in a single
location and automatically updating all systems which have access to the application. This is
done through software streaming. Using application or software virtualization means that you
can create an OS image, deploy it to your systems, maintain and patch it as needed, but never
have to worry about it being damaged or modified in any way by any application you deploy to
it. This makes for a very powerful distributed system management model, one you should most
definitely examine closely before you migrate to Vista.
Potential Software Virtualization Tools
There are only three manufacturers of software virtualization tools today. Each offers much the same
feature set, but each uses a completely different approach. Our best recommendation is to evaluate
the capabilities of each tool and determine for yourself which offers the best features. For a
presentation examining two of these tools, download Software Virtualization — Ending DLL Hell
Forever. Note that Citrix's Application Delivery tool was in beta at the time of the delivery of this
presentation.
Some changes have occurred since this presentation was delivered. Microsoft has acquired SoftGrid
and has included it in the Desktop Optimization Pack for Software Assurance. Altiris has added
software streaming to its virtualization technology through its integration with AppStream.
Information on the Altiris Software Virtualization Solution can be found on the Altiris Web site.
Software Deployment Tools
The final stage of application preparation is delivery. Delivery tools are more complex than others
since they must identify the target systems before they can deliver software to
them. That's why software delivery tools are usually tied to inventory tools of some form. One
exception is Active Directory Software Installation (ADSI). That's because Active Directory
(AD) already knows of each target system, since systems must be joined to the AD structure
before they can be managed by the directory. This is sort of an inventory in reverse.
For an overview of software delivery management practices and a view into a powerful software
delivery tool integrated with Active Directory, download the Software Deployment Done Right presentation.
Organizations using ADSI would normally be small in size since ADSI lacks many of the
features a true software delivery system requires. These features include:
Support for Windows Installer package deployments.
Support for the execution of scripts and legacy deployments.
Support for deployments to various targets—computers, servers, users, groups, sites, or
custom collections.
Support for dynamic collections—collections that are based on membership rules and
that are updated dynamically.
Integration or links to the Active Directory to reuse its contents for collection definition.
Support for automatic uninstalls when targets fall out of the deployment scope.
Support for conditional installations, letting you install multiple packages in a row or
letting you make sure that dependencies are met before the installation of a key package.
Support for bandwidth control, ensuring that installations do not hog the wide area
network connection when downloading installations to clients.
Support for delta-based replications, ensuring that if an installation package changes,
only those changes are replicated to distribution points.
Support for copying the installation file locally, then performing the installation from the
local copy and persisting the local copy in support of Windows Installer self-healing,
especially for mobile systems.
Ideally, support for integration of each of the steps in the PC Migration Cycle.
Graphical interface and wizards to simplify complex tasks.
It should also include extensibility to allow you to customize it to your own needs if required, as
well as support for software patching and system administration once systems are deployed.
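To make the "dynamic collections" requirement from the list above concrete, here is a minimal sketch of the idea in Python. The machine records, attribute names, and membership rule are all hypothetical; real deployment tools evaluate comparable rules against live inventory data:

```python
# A dynamic collection is defined by a membership rule, not a static list,
# so it stays current as inventory data changes. All data here is made up.

machines = [
    {"name": "PC-001", "site": "HQ",     "ram_gb": 1.0},
    {"name": "PC-002", "site": "Branch", "ram_gb": 0.5},
    {"name": "PC-003", "site": "HQ",     "ram_gb": 2.0},
]

def vista_targets(machine):
    """Rule: HQ machines with at least 1 GB of RAM get the deployment."""
    return machine["site"] == "HQ" and machine["ram_gb"] >= 1.0

# Re-evaluating the rule against fresh inventory updates the membership.
collection = [m["name"] for m in machines if vista_targets(m)]
print(collection)
```

If PC-002 were upgraded to 1 GB of RAM and moved to HQ, it would join the collection on the next evaluation with no manual list maintenance; that is the point of rule-based membership.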
Potential Software Deployment Tools
For the purpose of operating system deployment, Microsoft offers the BDD Workbench, a tool that
integrates OS image preparation, image deployment, as well as software and update deployment into
one interface. BDD Workbench is free as part of BDD 2007.
Commercial software deployment tools include:
Altiris Deployment Solution
LANDesk Management Suite
Microsoft Systems Management Server 2003
Specops Deploy
Making the Case for Extended, Lifecycle Management Systems
As you can see, there are quite a few different tools you will need to work with as you're
preparing your migration, and there are quite a few tools to choose from for each step of the
migration process. This list is by no means exhaustive. In addition, you'll see that as we move
through the detailed steps of the PC Migration Cycle in future chapters, other tools will be
required, often custom tools that perform specific activities depending on your network's needs.
You have several choices. You can, if you want to, use free tools, most of which are command-line
driven, and script all of your installation steps manually. In our opinion, only the smallest of
organizations will choose to do this, and even then, we would argue against it. Consider this:
Each and every step you will perform for a migration to Windows Vista will be a step you will
need to perform again and again in your network as you manage and maintain the systems you
deployed. Most organizations use a three- to four-year hardware refresh cycle. This means that
each year, these organizations need to perform many of the steps in the PC Migration Cycle
for a third or a fourth of the PCs in their network. In addition, each time a computer system
fails, you’ll need to use some of the steps in the PMC to either replace or repair it.
This makes a very strong argument for the selection of a tool set that will provide full support for
the PMC as well as full support for long term management and administration of the systems in
your network. PCs have a lifecycle in your network (see Figure 4.5). This lifecycle goes beyond
the PMC:
Inventories must link to asset management systems as well as contract management.
OS and software deployment must include patching and update management.
Production systems must be monitored to support proactive problem management.
Support technicians need remote control or remote assistance tools to help users as well
as access to service desks that help them manage and track problems as they occur.
All data must be linked to a central configuration management database (CMDB) to
facilitate information sharing and system support.
Tools must be fully integrated into one seamless interface.
Tools must provide both graphical and command line support as well as wizards for the
most common tasks.
Tools must provide support for delegation of administration to help technicians focus on
the key tasks they are responsible for.
And, ideally, tools must manage any system—PCs, servers, handhelds, laptops and any
other device your network will include.
Figure 4.5. Managing the Entire PC Lifecycle with an Integrated Tool Set
One thing is sure: you’ll need to use and reuse the various steps of the PMC in production once
the deployment is complete. This means you need to make sure all of the processes and all of the
tools you use for the migration are transitioned into production once the project is complete. There
is no better opportunity than an OS migration project to implement a full systems management
suite of tools. If you don’t have such a tool set in place, or, if your current tool set does not
provide support for each and every task listed here, then take the time to review a full
management suite as part of your migration project. You won’t regret it and you’ll make up your
return on investment in no time, since every step of the processes you use for the PMC and for
ongoing management will be automated, standardized, and repeatable, always providing the
expected results.
As we have stated before, don't miss this opportunity to gain full and complete control of your network.
Potential Lifecycle Management Tools
Because of their very nature, lifecycle management suites are all commercial products. As
systems management consultants, we have had many opportunities to work with and fully
review several management suites in the past. Not all are listed here. Those we recommend you
include in your review are:
Altiris Client Management Suite
LANDesk Management Suite
Microsoft Systems Management Server 2003
Specops family of products
Symantec Ghost Solution Suite
Microsoft is retiring the SMS name and will release an updated version of its management tool as
System Center Configuration Manager sometime in 2007.
Chapter 5: Security and Infrastructure Considerations
This chapter begins the preparation of all of the engineering tasks required to perform the
deployment. Once again, it follows the Desktop Deployment Lifecycle (see Figure 5.1) as well
as the PC Migration Cycle. The project has now moved through the preparation phases,
Question and Understand, and is beginning the Organize phase of the QUOTE System. There is,
however, one part of the activities listed here that is required as input to both of the first two
phases: initial inventories, especially if you don't have existing inventory capabilities.
Figure 5.1. Moving through Step 5 of the DDL
The activities of Step 5 of the DDL are performed now because they are required to support the
remainder of the engineering process. While most of the other engineering activities will focus
on the PC—desktop or laptop—the activities related to infrastructure preparation and security
configuration are focused on services provided through the network and therefore affect server
structure and may even affect router and switch configuration. In our experience, most
organizations do not allow PC support or administration groups to modify the services the
network delivers. Because of this, the activities related to Step 5 must involve a server-based
group of administrators (see Figure 5.2).
This group should include the following roles:
A technical architect who will provide direction and evaluate the impact of changes on
the network and network services.
A security expert who may be the same as the one used in the PC team, but who must be
focused on server aspects for now.
A group of technicians focused on:
o Inventory
o Application compatibility
o Deployment
o System integration
Support and operations staff to learn new ways of doing things.
Figure 5.2. The Server Team is responsible for the activities of Step 5
The entire group will be under the supervision of the logical architect or technical lead. In
addition, this group must be integrated with the team as a whole, as they will need to communicate
with their PC counterparts in order to properly discover the expected changes on the network. As
such, this group will be responsible for the following activities:
1. Gathering inventories in support of the initial situation analysis
o Deploying inventory gathering tools if they are not available
o Performing inventory analysis
2. Support and server-based operation within the testing lab
o Create central data repositories
o Build host servers
o Install virtualization technologies
o Install collaboration technologies
3. Support for detailed inventories, creating an exact image of what is on each system
4. Support for profile sizing analysis, to help determine required server space
5. Project engineering activities for servers and networks
o Prepare central and distributed data repositories in support of the deployment
o Prepare and implement an application compatibility database
Install the Microsoft Application Compatibility Toolkit (ACT) version 5.0 if
you decide to use it
Deploy the ACT client (in conjunction with the PC team)
o Prepare application packaging systems
o Prepare data repository shares in support of the PC team, mostly for
documentation and user profile captures
Configure share replication in multi-site environments
o Prepare operating system deployment tools
If tools exist, then update them to support Windows Vista deployments
If tools don’t exist, then deploy them in the network
Support the various migration scenarios
o Ensure that network equipment fully supports the deployment technology
6. Prepare Vista OS management infrastructures
o Prepare for Vista Event Log management
o Prepare for Vista Group Policy management
Active Directory preparation
Security considerations
Document management considerations
o Prepare for Vista license management
7. Support the operations team in preparing for the administration of Vista PCs
Generally, the job of the server team is to perform any task that affects servers or network
components and provide server-based support to the deployment project team. This assistance
occurs both within and without the lab environment.
Inventory and other tools are discussed with a focus on organization size. Three sizes are defined:
Small organizations (SORG) are organizations that have only one location or site. They
usually include fewer than 100 employees.
Medium organizations (MORG) are organizations that have at least two locations or sites
and thus need to deal with connectivity issues. They usually have up to 1,000 employees.
Large organizations (LORG) are organizations that have multiple sites, often in different
geographical zones that may include multiple languages. These organizations have the
most challenging deployments and require extensive planning before the migration can
be performed.
Each type of organization has different needs, but mostly in terms of scale, not in terms of actual
services required in support of the migration.
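The size categories above can be summarized in a short sketch; the thresholds follow the text, but the function name and the example figures are ours:

```python
# Classify an organization as SORG, MORG, or LORG using the criteria
# described above. Edge cases (e.g., one very large single-site company)
# are not addressed in the text, so this sketch keeps the rules simple.

def classify_org(sites: int, employees: int) -> str:
    if sites == 1 and employees < 100:
        return "SORG"  # small: one site, fewer than 100 employees
    if sites >= 2 and employees <= 1000:
        return "MORG"  # medium: two or more sites, up to 1,000 employees
    return "LORG"      # large: multiple sites, often multiple languages

print(classify_org(1, 80))
print(classify_org(3, 600))
print(classify_org(12, 9000))
```

As the text notes, the category mostly changes the scale of the migration services you need, not their nature.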
Perform Initial Inventories
These activities are part of the Question phase of the QUOTE System.
A lot has been discussed about performing inventories in previous chapters, all with the purpose
of leading up to this point—the point where you actually perform the inventory. Two inventories
are required. The first is the initial inventory which is focused on gathering information in
support of the situation analysis. Server administrators are involved in this activity because
inventories are collected centrally and, if the organization includes several sites, must first be
collected locally and then assembled centrally.
An excellent source of information on inventory requirements and infrastructure preparation is the
Infrastructure Remediation Guide of the Business Desktop Deployment Solution Accelerator
2007, especially its two appendices. This guide can be found on the Microsoft Web site.
The second inventory is discussed further below, but it is focused on the specific details of each
system you intend to migrate.
The Inventory Assessment Tool
Ideally, an inventory system will have the ability to collect a variety of information from a
variety of different sources. Sophisticated inventory systems are actually tied into asset
management systems and while the inventory service is under the control of the server
administrators, the inventory data is under the control of the organization's Finance department.
A good inventory tool will have three key features:
Hardware inventory
Software inventory
Reporting capabilities
Chapter 4 listed many more features for the inventory tool you select, but these three are the
main focus of the tool. If you don't have an inventory tool in place, then you should rely on the
guidelines listed in Chapter 4 to identify what you need in this tool. Deploying an inventory tool
falls directly onto the server administrator’s plate because it is a central network service.
If you are a small organization and don’t want to obtain a sophisticated tool, then you can look to
free tools. There are tons of free tools on the market, but as you would expect, they require a bit
of work to get the results you’ll need. All you have to do to find them is to perform a search with
your favorite search engine and you’ll have a complete list. The organizations that use free tools
will either be very small or have very simple networks that they are very familiar with.
Microsoft offers three free tools that provide inventory. You may have tested them out by now,
but here they are again.
The Microsoft Windows Vista Hardware Assessment (WVHA) tool, which, unlike
other inventory tools, is designed to work from a PC and can scan networks of up to
5,000 PCs. The WVHA is in beta at the time of this writing. Eventually it
will make its way to the Microsoft Web site and be available for free download.
WVHA is an agentless tool that scans any computer over which you have administrative
privileges. Microsoft states that it can run on a PC, and it should, since it requires
both Microsoft Word and Microsoft Excel (the 2003 or 2007 versions). Data is stored within
SQL Server 2005 Express, which is installed when you install WVHA. WVHA does not
recognize the presence of a full version of SQL Server 2005 even if one is installed and
will install the Express version anyway; all the more reason for leaving this tool on a PC.
It does include a number of methods to collect information (see Figure 5.3) and once you
get through the hype, especially in the Word reports, you actually get useful information.
Figure 5.3. Using the Windows Vista Hardware Assessment tool
The Microsoft Software Inventory Analyzer (MSIA), a graphical tool that
scans local systems for a list of Microsoft products, can be found on the Microsoft Web
site. MSIA will let you prepare a command-line input file for the settings you want to use.
Once again, administrator credentials are required, but output can be redirected to a
network share. It only scans for Microsoft products, but at least it lets you find out which
ones you have.
The Microsoft Application Compatibility Toolkit (ACT) version 5.0 which is designed
to provide an analysis of the applications running on your systems, whether they are used
and their level of compatibility with Windows Vista. ACT can be found on the Microsoft
Web site. ACT requires a SQL Server database to run and as
such should be installed on a server. In order to collect inventory, you need to prepare a
collection package which must be run on each PC you want to scan. Administrative rights
are required to run the installation of the package and since the package comes in .EXE
format, it requires special approaches for delivery to the endpoints. Once delivered, the
package runs on the local PC, collects information and then returns it to the central
collection point. ACT is useful in that it lets you share information with a community of
users through the Microsoft Application Compatibility Exchange (ACE), though the
value of ACE is only as good as the value of the input that goes into it. Since all input is
anonymous, it may be questionable. Microsoft is trying to make it as valuable as possible,
but its best value is focused on commercial software since few organizations will want to
share information on their internally-developed applications. More on ACT will be
discussed in Chapter 6.
Commercial tools also abound. For example, Symantec offers a good inventory assessment
tool for small to medium businesses. The Symantec Ghost Solution Suite (GSS) version 2.0
consists of a console that can be located on either a PC or a server. Information on GSS can be
found on the Symantec Web site.
The console uses its own database. Agents must be delivered, again with administrative rights, to
each target PC. Inventory tasks can then be performed from the console. Unlike other inventory
systems, GSS does not run inventory tasks on a schedule; you need to tell it when to run. This
way each time you collect the information, it is up to date.
GSS is not designed to cross routing boundaries or, in other words, wide area network (WAN)
connections. If you have more than one physical location, you will need to install a GSS console
into each site to collect the information for that site. There is no means, other than exporting
data, to consolidate multiple inventories from multiple sites. GSS does however offer some very
detailed inventory collections right out of the box (see Figure 5.4), especially for Vista Premium
PCs, including items such as video memory. In fact, Ghost is the first tool to do so by default.
Other tools require you to create a custom collection. Ghost is not strong on inventory
reporting—contents have to be exported to a tool such as Microsoft Excel to create charts—but it
is very strong on its traditional feature set: PC imaging and image deployment.
Figure 5.4. Ghost Solution Suite includes custom inventory collectors for Vista Premium PC readiness
Organizations that are using Active Directory might choose to rely on Special Operations
Software’s Inventory tool. Specops Inventory is the only tool that integrates directly with
Active Directory to perform inventory collection and therefore does not require a secondary
infrastructure to run. It does,
however, require a separate SQL Server database to store inventory data. Specops Inventory is
quite powerful if your systems are all connected to the directory service and like Ghost, is really
easy to set up and install since it relies on an infrastructure you have already deployed. Inventory
collection is performed through the standard Group Policy client any Windows machine already
includes. Just open an existing Group Policy Object (GPO) or create a new one, identify what
you want to look for and close the GPO. Inventory will begin to be collected as GPOs are
refreshed and applied. Reports are very easy to view since they rely on a Web interface.
Organizations that want to implement a full-fledged PC management system including inventory
collection can look to three products:
Altiris offers several different options for inventory support. The Altiris Inventory
Solution is specifically designed for inventory collection only. But this solution is also
integrated with other products. For example, the Altiris Deployment Solution is a full
migration suite that also supports minimal inventory capabilities. The Migration Suite
includes all of the components of the Deployment Solution plus Wise Package Studio for
software packaging and software management, as well as the full Inventory Solution.
The Client Management Suite includes all of the above products as well as full lifecycle
management of all client devices, not only PCs. Altiris has been one of the first software
manufacturers to provide tools that support a migration to Vista. Our recommendation: if
you are going for a full-fledged suite, why not select the one that does it all and perform
one single suite deployment?
The LANDesk Management Suite (LDMS) is also a strong contender for Vista
migration. Like the Altiris products, LDMS
provides full support for the PC Migration Cycle with special emphasis on Vista
migration support. Inventories provide information on both hardware and software
readiness while the other parts of the suite provide system imaging, profile protection and
software packaging and deployment.
Microsoft Systems Management Server 2003 (SMS), which is undergoing a name
change in its next version to become System Center Configuration Manager 2007.
Meanwhile, organizations that want to rely on SMS to support the inventory process
should upgrade to Service Pack 3 (SP3). SP3 is in beta as of this writing but should be
released sometime in the first half of 2007. New
features include better inventory reports for software inventory, reports for Vista
hardware readiness and the ability to manage PCs running Windows Vista. In addition,
SMS 2003 SP3 provides support for software rationalization by helping you identify
applications with similar functionality, letting you consolidate on one single application
for each functionality type, thus reducing costs and reducing migration efforts.
Of all the products listed, only the products from Altiris, LANDesk and Symantec offer support for
multicasting—the ability to deploy system images in one single data stream when deploying to
multiple endpoints. This saves considerable bandwidth and time during deployments. Keep this in
mind when making your choice.
Microsoft products will not have access to multicasting until the release of Windows Server
Codenamed “Longhorn” later this year. At that time Microsoft will be adding multicasting capabilities
to the Windows Deployment Services role that Longhorn will support.
Technical Requirements for Inventory Tools
Every product that is oriented towards systems management uses the same basic framework.
Data is always stored in a database, either SQL Server or another engine; servers are always
required centrally to manage the multiple functions of the suite; and secondary role servers are
required if remote sites are part of the picture. Depending on the size of the organization,
secondary databases may be required to collect information in remote sites and then replicate it
to the central site, but this situation usually applies only to large organizations (LORGs).
For deployment support, a central repository is required and once again, if remote sites exist,
secondary repositories will be required. Data for inventories is collected from the remote sites
and sent to the central site. Data for software and operating system repositories is provisioned
centrally and replicated to the remote sites. Status of the deployment jobs is sent from remote
sites to central repositories. This means that bi-directional replication technologies as well as
bandwidth control technologies are required in the management solution. Finally, an agent is
required on the PCs to communicate with the management service and perform management
operations locally.
The overall solution will also need to take into consideration the management and administration
of Windows Vista. Event log management is different in this OS as is Group Policy support.
License management is also different. Server administrators will need to keep all of these
considerations in mind when addressing the requirements for the migration (see Figure 5.20).
Our advice: Start in the lab and then build out your systems management product deployment.
Learn its features and then and only then, deploy it to production to collect your inventories.
Figure 5.20. Infrastructure considerations in support of the migration
Procedures for the preparation of the infrastructure in support of the migration are outlined
further in this chapter when we discuss the server preparation activities in the lab.
Collecting the Basic System Inventory
For the first inventory, you need to identify the scale of the deployment. This means you need to
collect bulk data related to how many PCs you have in the network, what they have on them,
which PCs meet which requirements and which PCs don’t. Administrators will help collect this
data from systems that are both online and offline, connected and not and provide it to the
analysts which will help formulate the baselines for your migration.
The inventory collection process actually goes through a series of steps (see Figure 5.21):
1. Collection: Bulk information is collected from all of the systems in the organization.
2. Data extraction: Numbers and facts are drawn from the bulk information. These
numbers and facts are sufficient to support initial project cost estimates and produce the
business case for the migration.
3. Refinement and additions: A refinement process is applied to the original data to filter
out unnecessary data and focus on more precise data—who is the principal user, what is
actually used on the system, what needs to be recovered post migration and so on.
Additional data may need to be collected to produce the required level of detail. For
example, in order to determine if the applications found on the system are actually used
by both the principal and other users of the system, an actual visit and discussion with the
principal user may be required.
4. Rationalization: Rationalization processes are applied to bulk data to further refine the
data. Three processes are required:
a. Reduce: The objective of rationalization is to reduce the number of applications
that will be carried forward during the migration. Be strict and remove anything
that is not necessary. This means any duplicates in terms of functionality—only
one drawing tool, only one word processor, only one browser, and so on—as well
as multiple versions of the same product. Any deviations should require serious justification.
b. Reuse: Learn to reuse systems as much as possible. One of the best ways to do
this is to standardize on all aspects of the solution—standardized procedures,
standardized PC images, standardized applications, and so on. This will cut costs
and ensure a more stable solution.
c. Recycle: If you have invested in some management efforts in the past, then focus
on reusing as much as possible during your migration. For example, if you have
existing Windows Installer packages, look to their conversion instead of
rebuilding them. Aim to adapt systems instead of reinventing the wheel.
5. Net values: Net values are produced after all of the refinements have been applied.
6. User data sheets: User data sheets are produced for each PC to be migrated. These sheets
include everything that is now on the PC and everything that should be ported once the
PC has been migrated.
7. Release management: User data sheets are provided to the release manager. The release
manager will produce the migration schedule from this information—information which
will also be provided to the technicians that perform the migration.
This process is essential to the success of the migration as the hand off to release management is
the driver for the actual migration. Remember, users are only happy if everything is as smooth as
possible for them. This means that if you know items are going to change, make sure you
integrate these changes into your communications plan and announce them well ahead of time.
Figure 5.21. The Inventory Collection Process
As identified in the inventory process, information from the initial inventory analysis is aimed at
identifying bulk values. The specific questions you need answered are:
1. How many desktops are in the organization?
2. How many laptops are in the organization?
3. Which PCs—desktops and laptops—need upgrades and how much do the upgrades cost?
4. How many PCs per user do you have? Will you carry over the same ratio with Vista?
5. How many applications are in the network?
6. How many of these applications will need to be carried over into the deployment?
7. Where are the PCs located?
These core questions help your team determine how much effort is required to perform the
migration and how much it will cost. Reports should ideally be Web-based and dynamic so that
users with different levels of authority can access and view them easily. They should also include
graphics if possible—charts and graphs that can easily be copied into Microsoft Word documents
or PowerPoint presentations to augment the value of the message you need to convey. In fact,
this should be part of your selection criteria for the inventory tool. It is the goal of the systems
administrators to assist project staff in collecting this initial data, whether it means deploying a
new tool or not. Ideally, you’ll already have this under control.
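As a sketch of how the bulk answers can be pulled from collected inventory records, the following Python fragment aggregates a list of per-PC records into the core sizing numbers. The record fields and the minimum hardware thresholds are illustrative assumptions, not the output format of any particular inventory tool.

```python
from collections import Counter

# Assumed minimum hardware for a Vista-ready PC; replace these with the
# thresholds your own readiness assessment actually uses.
MIN_RAM_MB = 1024
MIN_DISK_GB = 40

def summarize_inventory(records):
    """Aggregate bulk inventory records into the core sizing numbers.
    Each record is a dict with 'type', 'ram_mb', 'disk_gb', 'site'
    and 'apps' keys (an assumed, simplified schema)."""
    by_type = Counter(r["type"] for r in records)
    needs_upgrade = sum(
        1 for r in records
        if r["ram_mb"] < MIN_RAM_MB or r["disk_gb"] < MIN_DISK_GB)
    distinct_apps = set()
    for r in records:
        distinct_apps.update(r["apps"])
    return {
        "desktops": by_type.get("desktop", 0),
        "laptops": by_type.get("laptop", 0),
        "upgrade_candidates": needs_upgrade,
        "distinct_apps": len(distinct_apps),
        "pcs_per_site": dict(Counter(r["site"] for r in records)),
    }
```

The same summary feeds directly into the cost estimates for the business case: PC counts set the scale, upgrade candidates set the hardware budget and the distinct application count sets the packaging effort.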
Server-based Operations in the Lab
These activities are part of the Understand phase of the QUOTE System.
Once bulk inventories are captured, the project team has enough information to drive the project
forward. This all occurs within the Question Phase of the QUOTE System. Then, once the
business case is presented and approved, you’ll move on to the Understand Phase. This is where
server administrators will be called upon to assist in two major activities:
The preparation of the central services the laboratory will rely on.
The preparation of the logical solution design.
As outlined in Chapter 3, the testing laboratory should be relying on virtualization technologies.
As such, the server team will be called upon to prepare the host servers that will drive the virtual
machines as well as preparing any virtual machine that will act as a server in testing. If you
already use virtualization technologies, you can start with the preparation of the core virtual
machines that will seed all of the other servers required to support the testing. If you don’t use
virtualization technologies yet, you’ll need to begin with their installation.
Build the Host Servers
Host servers should be beefy machines since their main purpose is to run virtual
machines. This means lots of RAM, lots of disk space and several processors, ideally
multi-core processors. The server hardware should also be 64-bit if possible as it will give you
access to more memory than a 32-bit machine and it will let you create 64-bit virtual machines if
you are using the right virtualization technology. Refer to Chapter 3 for the recommended
configuration of the host server as well as for the virtual machines.
To prepare the host server:
1. Begin with the preparation of the server hardware, locating it in a rack if it is a rack-
mounted server or in a blade casing if it is a blade server. Smaller shops will use a tower
casing. Connect all required components.
2. Install Windows Server 2003 (WS03) R2 Enterprise Edition. This edition lets you run up
to four virtual machines running Windows operating systems for free as their cost is
included in the license for the host server OS. Remember to create three disk volumes:
a. C: drive with 30 GB for the OS.
b. D: drive with the bulk of the space for data.
c. E: drive with 30 GB for volume shadow copies.
Replace WS03 with “Windows Server Codenamed Longhorn” once it has been released,
especially the version that includes Windows Virtualization. This version is the ideal Windows
operating system for host servers.
3. Prepare the server OS according to your standard configurations. Make sure you update
the OS and install anti-virus and backup software.
4. Configure Volume Shadow Copies (VSS) for drive D:. Place the shadow copies on drive
E: and use the default schedule. Shadow copies protect any data that is on shared
volumes. Since the D: drive is automatically shared as D$ by the system, the data it
contains is shared by default. For information on how to configure shadow copies, look
up the article "10-Minute Solution: Using Volume Shadow Copy."
If, by the time you’re reading this, Microsoft has released System Center Virtual Machine Manager
(VMM) and you want to use Microsoft virtualization technologies in your lab, then rely on this tool
instead of Microsoft Virtual Server because VMM can deploy Virtual Server without requiring the
installation of IIS. More information on VMM can be found on the Microsoft Web site.
5. Install the virtualization software. If you’re using Microsoft Virtual Server 2005 R2, then
you’ll need to install Internet Information Services first; remember this is done through
adding and removing Windows components in the Control Panel. If you’re using
VMware server, then you just need to install that product.
6. Once the virtualization software is installed, you can begin to create your virtual
machines (VM). This basically uses the same process as with the installation of the host
server. If you use multiple editions of WS03 in your network, you’ll want to create at
least two core VMs, one for the Standard Edition and one for the Enterprise Edition.
Then, use these VMs as the seeds for every server role you need to configure.
7. Don’t forget to copy the seed servers and run Sysprep on them to generalize the
installation. This way you can use and reuse the machines as much as needed.
The original VMs should be created as soon as possible because Unit testers will need to have
access to server technologies, most likely a multi-purpose server image that includes Active
Directory, file sharing, a database server, deployment services based on the technology you
choose to use and so on. Once these machines are ready, they should be loaded onto external
USB hard drives and delivered to the testing teams—one of which will be the server team as
there will be new server-based technologies and configurations to test.
If you want to fast-track this aspect of your project, you can rely on virtual appliances—pre-built
VMs that include all of the required features to support each testing level—that have been prepared
by third parties. Look for virtual appliances built in support of the Desktop Deployment
Lifecycle.
Build Central Laboratory Services
Now that some core VMs are built and other technical teams can go on with their work, you can
begin to build the systems that will support both the project and the lab itself. These systems can
be built inside virtual machines as they are protected by VSS and other backup technologies. The
systems or server roles the lab and the project require include:
A directory service, which may be the same as your production directory. In most cases, it
is best and easiest to create a separate forest, as you would for each of the
testing levels you need to use—Unit, Functional, Integration and Staging. But for the
laboratory's own use, you might just create a separate domain within your production
forest or just create organizational units (OUs) inside your production domain. It all
depends on the level of isolation you need for this project. Our recommendation: use a
separate domain in your production forest. This will automatically create trusts between
the production domain and the project domain so users can continue to work with their
normal, production user account, but grant them different privileges within the project
domain. This is something you cannot do when placing them in OUs within the same
domain. Remember that you will need at least two domain controllers to protect the
information contained in the domain.
A database server which will be used to host data from a number of different
technologies, namely the software packaging system, the management system,
Microsoft’s Application Compatibility Toolkit if you decide to use it, Windows
SharePoint Services and any other structured data the project requires. Our
recommendation: install SQL Server 2005 SP1. SQL Server is a comprehensive
database tool that doesn’t break the bank when you acquire it and will provide a series of
different protection services for the data it stores. In addition, SQL Server 2005 SP1
supports data mirroring which will let you automatically transfer data from the lab
environment to a production database server when you’re ready.
A server running Windows SharePoint Services version 3.0 (WSS) which will be used to
host the project's collaboration team site. WSS can be downloaded from the Microsoft
Web site; you will need to install version 3.0 of the .NET Framework in support of this
tool, and you can find a link to the Framework on the same page as the WSS download.
To speed the preparation of the team site, upload the team site template from the
companion Web site and follow the included instructions to load it into
your WSS environment. Assign a WSS administrator to begin granting access to team
members. Most members should be contributors while the project manager and the
project assistant should be site owners. The lab coordinator and the lab administrator
should be owners of the lab subsite. Data for this site should be stored in the SQL Server
you prepared earlier.
This should be enough to get the project and the technical teams going at this stage.
Note: All project documentation—plans, presentations, communiqués, technical documentation,
training materials, everything—should go into the SharePoint team site and should be protected by
proper backup technologies and systems as well as proper security settings. In addition, technical
tasks should be drawn out from the project plan and should be assigned to team members on the
site. This will give you an excellent tool for tracking project progress as well as storing all project data.
Participate in the Solution Design
The server team also needs to provide support to the design of the solution the project will
implement. As outlined in Chapter 2 and in the QUOTE System, you begin with a logical
solution design, focusing on the new features of the technology you will implement to draw out
how your organization will benefit from them. This is done by examining several sources of
information: help desk logs for the most common problems, new feature information for the
technology to be implemented, industry sources for best practices and recommendations, and so on.
The server team will focus on anything that is centrally based and that will provide enhanced
support for the new technology. This involves items such as modifications to existing
infrastructures, not in support of the operation of the project, but rather in support of the
migration itself and the operations the organization will need to perform once the deployment is
complete. The items that need to be covered in this initial solution are outlined in the remainder
of this chapter.
Prepare the Detailed System Inventory
The next step is to provide support for the collection of detailed inventories. Remember that the
initial inventory was designed to provide bulk information. Now you need to assist the project in
obtaining detailed information on each system to be migrated. One good place to start is with a
detailed topology map of the network. Ideally, this map will identify which machines, printers,
servers and switches are connected to each other. This will assist in the redeployment of the PC
OSes as well as help determine if switches and routers are properly configured to support the
deployment process. One great tool for this topology map is Microsoft Visio. Visio can generate
topology maps from network scans and even better, can rely on Microsoft Baseline Security
Analyzer (MBSA) to scan systems and then provide the information to Visio to generate the
image of the network. The Microsoft Office Visio 2003 Connector for MBSA can be
downloaded from the Microsoft Web site.
At this stage you need to make sure you have inventory data for each and every system in the
network either connected or not. This inventory needs to determine everything that will be
required to rebuild the system as close to what it was as possible. One of the best ways to do this
is to rely on User or PC Data Sheets. These sheets list everything about a system: applications,
principal user(s), printer connections, optional hardware, peripherals and so on.
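A User Data Sheet can be modeled as a simple structured record. The sketch below is a minimal Python illustration; the field names are assumptions, and a real sheet would follow the template available for this book.

```python
from dataclasses import dataclass, field

@dataclass
class UserDataSheet:
    """Minimal User Data Sheet: everything needed to rebuild one PC.
    The fields mirror the items listed in the text; names are assumed."""
    pc_name: str
    principal_user: str
    applications: list = field(default_factory=list)
    printer_connections: list = field(default_factory=list)
    peripherals: list = field(default_factory=list)

    def post_migration_apps(self, retained):
        """Applications to reinstall after migration: only those that
        survived the rationalization process."""
        return [a for a in self.applications if a in retained]
```

For example, a sheet listing Word and an obsolete drawing tool would, after rationalization against a retained-application set containing only Word, carry only Word forward to the rebuilt PC.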
A sample User Data Sheet is available on the companion Web site. New visitors must fill out a one-time registration to access these downloads.
Validating Inventories
In order to minimize the amount of work required to rebuild a PC, it is important to validate all
of the content of the User Data Sheet. One of the most difficult items to validate is the list of
installed applications. This is because many organizations do not use a proper application
licensing strategy and will install applications on systems when needed, but never remove them
when obsolete. In order to avoid reloading applications that are no longer required and in order
to improve your licensing compliance once the project is completed, you want to validate this list
as much as possible before you deploy the system.
One good way to perform this validation is to use a software metering system which monitors the
use of software applications on each computer system. Applications that are not used are marked
off and are not reinstalled on the migrated system. But, if you don’t have such a tool, you’ll need
to validate the inventory with the principal user of the machine. This takes time and is best done
by administrative staff that have been trained to understand what should and what shouldn’t be
on a system. The goal is to update the User Data Sheet as much as possible.
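Where metering data is available, part of this validation can be automated. The following sketch flags applications that have not been launched within a cutoff window; the 90-day threshold and the data shapes are assumptions to be adjusted to whatever your metering tool reports.

```python
from datetime import date, timedelta

def split_by_usage(installed, last_used, today, max_age_days=90):
    """Split the installed-application list into (keep, drop) lists
    using metering data.
    installed: list of application names found on the PC.
    last_used: dict mapping application name to the date of its last
    recorded launch; apps with no entry are treated as unused."""
    cutoff = today - timedelta(days=max_age_days)
    keep, drop = [], []
    for app in installed:
        used = last_used.get(app)
        (keep if used is not None and used >= cutoff else drop).append(app)
    return keep, drop
```

Applications landing in the drop list should still go to the principal user for confirmation before they are struck from the User Data Sheet.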
Rationalize Everything
The concept of rationalization has been covered several times before because it is a process that
is vital to the success of the project. It will also greatly reduce project costs as it cuts costs in
licensing as well as reducing the time to prepare and perform the deployment. The activities you
need in support of this process involve the following:
Define clear rationalization guidelines and rules.
Commit to the rationalization guidelines and rules as a project.
Obtain organizational (read: management) buy-in to the rationalization process. There
will be resistance and if you don’t have this buy-in, you won’t succeed.
Prioritize software rationalization as much as possible. If your users have more than one
machine, then also prioritize PC rationalization. With the advent of virtualization, there
are very few reasons for having more than one system per user.
Initiate a communications plan to all users on the benefits of rationalization. This will
help reduce resistance.
Request an initial inventory purge by all IS and IT directors. They will be able to identify
obsolete systems immediately.
Involve users and user representatives in the rationalization. They have their say and will
be happy to be consulted.
Request an inventory purge by users themselves. They often know best.
Establish a User Representatives Committee (URC). The URC will represent users when
contentions arise and will be the project’s representative with respect to users.
Obtain a Rapid Decisional Process from executives so that you can cut short any
rationalization debate.
Once the inventory is validated, proceed through the rationalization process using the following guidelines:
Remove all applications that are not actually installed on a PC.
Remove server and administrative applications. They should be stored on servers and
administrators should use remote desktop consoles to servers for this.
Remove any non-PC applications.
Remove multiple versions of the same application.
Remove games and utilities such as music software or anything that is not corporate in nature.
Remove all applications that will be replaced by the software kernel or the core image
that will be installed on all PCs (for more information, see Chapter 7).
Remove applications fulfilling the same function. This means one single drawing
program, one single financial analysis program and so on.
Identify the number of users for each remaining application. This will help in the design of
application groups and user roles.
Identify the business value of each application that is retained. If the value is low, remove it.
Begin negotiation with users early. This is best done through the communications plan.
Ensure that an application and data conversion strategy is in place for rationalized
applications. For example, if you are removing a drawing program, then you need to be
able to convert existing drawings into the format of the retained tool.
Ensure that software obsolescence and lifecycle guidelines are in place so that this does
not have to be done again once the project is complete.
Once the list of applications, devices and peripherals has been reduced, you can move to the
creation of groupings. For example, you should group applications into the following categories:
Mission and business critical applications
Common commercial applications
Limited use packages
In-house applications
Non-standard hardware
These groupings will help structure how you manage these applications. Finally, use a risk
management strategy to reduce the impacts of rationalization. This means:
Prepare common applications first since you will be able to deploy them to the largest
user base.
Group role-specific applications into IT user roles and target the largest groups first.
Keep limited use packages for the end of the deployment since their audiences are small.
Do not provide conversion for applications that are not Windows Vista compatible; run
them inside virtual machines running older OSes if they absolutely need to be retained.
Provide shared virtual machines with older OSes for applications that users will not give
up but are obsolete.
This strategy will help reduce the carryover from the existing to the new deployment.
The last two strategies are hard-core and should be applied to anything you really want to get rid
of. If you can't get rid of an application and you can afford it, convert it to run on Vista.
Perform the Inventory Handoff
Now that everything has been validated and approved, you are ready to hand off the User Data
Sheets to the release manager. This hand off is critical because it controls how the deployment
will be structured. The release manager will then use the network topology map along with the
User Data Sheets to determine which sites will be deployed first and how each PC will be built.
Ideally the information in the data sheets will have been input into a database that can help the
release manager control the deployment. As the deployment occurs, this manager will need to
select replacement users for deployment when targeted users are unavailable or don't show up for
their portion of the migration activities.
Having a bank or pool of users that you can draw from during the migration will help maintain
deployment rates and keep the project on time and on track. Otherwise, dates will slip as it
becomes impossible to deploy as many PCs as targeted during each day of the deployment. This
gets the release manager ready for the Transfer phase of the QUOTE System.
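Keeping the daily deployment rate constant can be sketched as a simple scheduling routine: fill today's slots from the scheduled users who are actually available, then top up from the replacement pool. The function and data shapes below are illustrative assumptions, not part of any release-management tool.

```python
from collections import deque

def fill_daily_slots(scheduled, available, pool, rate):
    """Return the users to migrate today.
    scheduled: users targeted for today's deployment wave.
    available: set of users confirmed present.
    pool: ordered replacement pool to draw from when targeted users
    are unavailable or do not show up.
    rate: number of PCs the team can migrate per day."""
    slots = [u for u in scheduled if u in available][:rate]
    replacements = deque(u for u in pool if u not in slots)
    while len(slots) < rate and replacements:
        slots.append(replacements.popleft())
    return slots
```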
Perform a Profile Sizing Analysis
One other aspect of the migration that will be critical is personality protection. Remember that
users will not be pleased if their favorite desktop background does not appear or they lose their
custom Word dictionaries once they have migrated to a new OS. But profiles tend to be rather
large and will require some form of protection during the migration. Because of this, server
administrators must assist the personality protection team to determine where these profiles will
be stored and how they will be protected.
Several tools today allow the storage of the profile on the local hard disk during the actual OS
deployment. If you decide to use this strategy, then server administrators will have little to do
here, but if you determine that you want to raise the quality assurance level of this process and
you have enough bandwidth to do so, you should store profiles on a network location. There are
several ways to do this.
The most common is to simply use the migration tool to copy the profile to a server before the
migration, migrate the PC, and then restore the profile from the server once the migration is
complete. Other strategies involve the use of technologies such as folder redirection or roaming
profiles to protect user data. Whichever method you decide to use, if you choose network
protection then two tasks will fall to server administrators:
Estimating storage sizes for profile migration during the deployment and preparing
network shares that can support these storage requirements.
Implement a retention policy for the profiles where the profiles are backed up for
protection, retained on disk for a period of time and archived once the time period runs out.
The retention policy quickly becomes a rotation policy that will prove essential if you do not
want to find yourselves running out of space during the migration. You’ll also want to look to
creating local shares if you have remote sites and implement replication technologies to bring
remote data back to central locations for long-term storage.
More on personality protection is covered in Chapter 8.
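The storage estimate reduces to simple arithmetic: peak disk usage is roughly one retention window's worth of profiles. The sketch below is an assumption-laden back-of-the-envelope calculation; the average profile size, deployment rate and overhead factor must come from your own measurements.

```python
def profile_storage_gb(pcs_per_day, avg_profile_mb, retention_days,
                       overhead=1.25):
    """Estimate peak server storage (GB) for network profile protection.
    Profiles accumulate on disk for retention_days before being
    archived off, so the peak is pcs_per_day * retention_days profiles.
    overhead pads the estimate for growth and oversized profiles
    (the 1.25 factor is an assumption, not a measured value)."""
    peak_profiles = pcs_per_day * retention_days
    return peak_profiles * avg_profile_mb * overhead / 1024
```

At 50 PCs a day, a 500 MB average profile and a ten-day retention window, the share needs roughly 305 GB under these assumptions.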
Perform Project Engineering Activities
These activities are part of the Organize phase of the QUOTE System.
While the Understand phase focuses on logical solution design, the Organize phase of the
QUOTE System focuses on engineering tasks or the actual implementation of the solution. To
date administrators have had the opportunity to test and play with different server-based
technologies as affected by the deployment and the coming of a new PC operating system. Now,
it is time to define how this solution will actually take shape. As such you now begin to
implement several additional technologies, usually in the Integration and Staging test
environments. This is one reason why documentation of each parameter and setting used to
deploy the solution components is so essential at this stage.
You’ll need to work on the following systems:
Support for the deployment/management tool: Begin with at least one server running
the management tool role. Depending on the tool you selected or even if you already have
a tool, this process will allow you to learn how this tool will support your migration. Link
this tool to the database server you created earlier. If you have multiple sites, you’ll have
to look to secondary servers for this tool if required. If you already have a management
tool in place, then look to how it needs to be adapted—deployment of an upgrade or
deployment of a service pack—to support the Vista migration.
Support for software installation: Your project will need to choose whether you will be
using traditional Windows Installer packages or whether you will aim for software
virtualization or even a combination of both. This topic is discussed at length in Chapter
6. If you decide to use Windows Installer or MSI packages, then you will need a
packaging tool. These tools require installation on a server, the creation of file shares in
support of their use and linkage to a database server. If you opt for software
virtualization, you’ll find that these solutions usually require an agent on the PC and
often rely on either server shares or server streaming services to deploy the applications.
You can also choose both and use the packaging tool to integrate your applications to the
software virtualization engine.
The Microsoft Application Compatibility Toolkit: If you choose to use it, this tool will
require installation on a server as well as linkage to a database. ACT collects information
about applications, including usage information which can be quite handy in support of
the rationalization process. Keep in mind that ACT works through the creation of
packages in the form of executables that must be deployed and delivered to PCs. The PC
package requires administrative rights for execution so you will need to come up with a
solution for deployment. More on this is discussed again in Chapter 6.
File sharing services: You’ll need to deploy file sharing services, usually in the form of
a file server which will be used to store vast amounts of information. Remember that it is
always best to secure file shares through NTFS permissions rather than through share
permissions. Required shares include:
o Virtual machine source: The source virtual machine images, which must be shared
as read-only for technicians who want to access copies of the machines for testing on
their own systems and read-write for server administrators who create and modify them.
o Software installation media: A source folder for software installation media. The lab
coordinator and administrator should have read-write privileges and packaging
technicians should have read privileges.
o Software preparation: A working folder for software packages where packaging
technicians have read-write access. They use this folder as they prepare packages.
o Software release: A repository that is the official final repository of quality
controlled software packages. Technicians should have read access and only the lab
coordinator and administrator should have read-write access as they are responsible
for the final release of these products to the project.
o OS installation media: You also need a source folder for OS installation media,
updates and device drivers. The lab coordinator and administrator should have read-write privileges and imaging technicians should have read privileges.
o OS custom media: Imaging technicians will need a working folder for OS system
images where they have read-write access.
o Custom OS releases: A repository where final OS image(s) are released. This should
have the same access rights as the software release folders.
o Data protection: A folder with read-write access for personality protection
technicians will also be required as they test the tools they will use to protect user
personalities.
Unique namespaces: Namespaces are another useful technology that organizations with
multiple sites should include in their preparations. They eliminate the need for
customization inside the individual packages you will deploy: namespaces such as those
based on the Distributed File System (DFS), especially the DFS service from the R2
release of WS03, provide the ability to map to one single UNC name inside every single
package no matter which site you are located in. A DFS namespace uses a
\\domainname\sharename format instead of a
\\servername\sharename format when publishing shared folders. With DFS namespaces,
you map this unique name to different targets within each site. This way, when a package
needs to refer back to a server share either for installation or for self-healing or updates, it
will always work. Namespaces are mapped to Active Directory and because of this are
available in any site in the domain. They are much easier to work with than mapped
drives because they always refer to the same name structure. The directory service is
responsible for linking each endpoint with the appropriate target share, eliminating the
possibility of error.
Replication services: DFS, once again, the DFS service included in WS03 R2, also
provides delta compression replication (DCR). This means that once shared folders are
synchronized through an initial replication, DFS replication (DFSR) will only replicate
modifications to any of the contents. If one single byte changes in a 1 GB file, then one
single byte is replicated. In addition, DFSR is really easy to implement, taking less
than five minutes to set up. If you have more than one site, you should definitely look
to DFSR. It is included by default in any version of WS03 R2 and it just works. Linked
with namespaces, it becomes a very powerful engine in support of migrations.
Replications with namespaces should be set for:
o Custom OS releases since they need to be available in all sites. Replication should
be set one-way only from the lab’s release folder to production servers. If multiple
sites exist, replication should also go one-way from the central production site to all
other sites.
o Software releases since they will be required for the deployment. Once again,
one-way replication from the lab to production servers is all that is required. If
multiple sites exist, use the same strategy as for OS releases.
o Profile protection data, which should be captured from its location, stored on a local
server and replicated to a central site for backup and protection. If a namespace is
also used, then the profile protection scripts need only reference one single share
name.
These replication scenarios are all provided by default in the File Server Management
console of WS03 R2 (see Figure 5.22). Setting them up is very straightforward.
Database mirroring: SQL Server 2005 SP1 offers database mirroring services, which are
pretty straightforward to set up. As the project proceeds, you'll find that there is data
held within the various SQL Server databases that should be available in production.
Using the database mirroring feature of SQL Server 2005 SP1, you can simply replicate
this data from the lab to production servers. Mirroring in this case is usually targeted
one-way from
the lab to the central production environment only.
Switch and router configuration: In some instances, organizations modify the default
settings of their switches and/or routers to block multicast traffic. Many network
administrators do so to stop users from watching or listening to streaming feeds from the
Internet. But, even if this is necessary, this traffic should be blocked only at the gateway
to the Internet, not on the links to the internal network. Make sure your routers and
switches will support the ability to stream multicasts to PC endpoints so that you can save
considerable time and effort during your deployment, as you should be using a
multicasting tool for deployment. If you can't change their configuration or don't have
access to it, then select a tool that relies on proxies to perform WAN multicasting,
bypassing routers and switches altogether and keeping all multicast traffic on the
local network.
These different activities will ensure that the deployment project has all of the server-based
support it needs to complete smoothly. Other activities are also required, but these will be in
support of operations, not deployment.
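As a rough sketch of the share structure described above, the folder layout below shows one possible arrangement. All folder names are illustrative, and the access rights from the list above are noted only as comments, since on a real file server they would be applied as NTFS permissions with Windows tools rather than by this script.

```shell
#!/bin/sh
# Illustrative lab share layout (placeholder names). The permissions noted in the
# comments follow the guidance above and would be set via NTFS ACLs on Windows.
LAB_ROOT="./LabShares"                 # e.g. D:\LabShares on the lab file server
mkdir -p "$LAB_ROOT/VMSource"          # read for technicians, read-write for server admins
mkdir -p "$LAB_ROOT/SoftwareMedia"     # read for packaging technicians
mkdir -p "$LAB_ROOT/SoftwarePrep"      # read-write for packaging technicians
mkdir -p "$LAB_ROOT/SoftwareRelease"   # read for technicians; read-write for lab coordinator
mkdir -p "$LAB_ROOT/OSMedia"           # read for imaging technicians
mkdir -p "$LAB_ROOT/OSCustom"          # read-write for imaging technicians
mkdir -p "$LAB_ROOT/OSRelease"         # same access rights as SoftwareRelease
mkdir -p "$LAB_ROOT/DataProtection"    # read-write for personality protection technicians
ls "$LAB_ROOT"
```

Published under a DFS namespace, each of these folders can then be referenced with a single \\domainname\sharename path from every site.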
Figure 5.22. Creating replication scenarios in WS03 R2 is as easy as following a simple wizard
Vista OS Management Infrastructures
With Windows Vista, Microsoft elected to change and modify several different aspects of
Windows management. Many of these will not be fully effective until Windows Server
Codenamed “Longhorn” is released later this year. For example, even though the network access
protection (NAP) client is built into Vista, Microsoft does not currently offer any NAP server
service. But some of the technological improvements are available now. Active Directory can
manage Vista Group Policy Objects (GPOs), WS03 can manage folder redirection and Vista
licensing, and you can take advantage of some of the new security features built into the
PC OS. These activities will require server administration assistance since they will
impact central services.
Manage Vista GPOs
Any organization that has taken the time to design and deploy an Active Directory forest and
domain structure within its network has already realized the power and potential of this tool.
That’s because AD is not only designed to provide authentication and authorization as was the
domain in Windows NT, but it is also designed to provide a complete infrastructure for the
management of users, groups, printers, servers and PCs. All of these objects are managed
through GPOs—directory objects that are designed to define the way a user’s computing
environment appears and behaves. GPOs were first introduced with Windows 2000 and are
supplemented each time a new operating system (OS) or a significant update to an operating
system is released. For example, Vista brings more than 800 new settings to Group Policy among
other changes.
For best practices information on how to design an Active Directory for object management,
download this free chapter from Windows Server 2003: Best Practices for Enterprise
Deployments published in 2003 by McGraw-Hill Osborne at
With its extensive settings and controls, Group Policy provides an extremely powerful engine for
the management of every aspect of a system’s configuration from compliance to security
baselines. GPOs can control registry hives, assign scripts, redirect data folders, deploy software
and manage security settings. And, if you don’t find the one setting you need to control, you can
always add a custom administrative template to the mix and make your own modifications.
Most organizations using AD will use GPOs extensively, but will ensure that each GPO is
designed to manage a single object type. This means that some will be designed to manage users,
others will manage PCs and still others will manage servers. Segregating GPOs in this manner
will not only improve the speed with which each GPO will be processed, but will also help in
your delegation of administration structure.
Group Policy is an excellent vehicle for system administration, but it is possible to overdo it.
Begin your Vista GPO strategy by inventorying the GPOs you have in place and then look them
over to see if there is room for rationalization. Since Vista brings so many new settings, you
don’t want to find yourself in a situation where you are proliferating GPOs.
Microsoft provides a good tool to inventory Group Policy which can be found at
To learn how to rationalize the number of GPOs in your network while providing complete
management services, download Redesigning GPO Structure for Improved Manageability at
While in previous versions of Windows, GPO processing occurred through the WinLogon
process, in Vista, Group Policy processing has been decoupled from this process to run on its
own, providing a more robust processing model. In addition, Microsoft has added several classes
of objects that were previously difficult if not impossible to manage through GPOs (see
Figure 5.23).
Figure 5.23. New GPO settings in Windows Vista
Some important new settings include:
Device Installation to control whether or not users can plug in devices such as USB keys.
Folder Redirection which has been vastly improved to protect a user's complete profile.
Print Management which is tied to the WS03 R2 ability to publish printers through Group Policy.
Local Services which allow you to automatically adjust settings on PCs to match an
employee's language preferences.
Logon and Net Logon to control the logon behavior.
Power Management to help reduce the power consumption of PCs in the enterprise.
User Account Control which lets everyone run with a standard user token.
Wireless client controls to ensure that wireless connectivity conforms to organizational policies.
Windows Components including everything from Movie Maker to the Windows Sidebar.
Windows Firewall which controls the state of the firewall both when connected to the
internal network and when roaming outside the office.
There are many more settings. Take the time to review all of them and determine which ones
should be set. Since these settings control mostly PCs, it will be the PC team that will identify
which settings to apply along with recommendations from the server team. But one aspect of
GPOs for Vista that affects server administrators is the new format Vista uses for administrative templates.
For guidance on deploying Group Policy with Vista go to
Prior to Windows Vista, all GPO definition templates used an ADM file format—pure text files
that were organized in a structured manner. With Vista, Microsoft is introducing the ADMX
format—a format based on the Extensible Markup Language (XML) which provides much richer
content for GPO templates. ADMX templates are now language independent, globalizing Group
Policy settings. Each ADMX file is accompanied by one or more ADML files which include
language-specific content. Global organizations will want to include an ADML file for each
language their administrators work in. In addition, ADMX files can be centrally stored as
opposed to the distributed approach used by ADM files—one on each domain controller in a
particular AD domain. And, because of the increased number of policy settings in Vista, 132
ADMX files are included in the release version of Vista.
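To give a feel for the new format, here is a heavily trimmed, hypothetical ADMX fragment with its matching ADML strings. The element names follow the published ADMX schema, but the policy, registry key and category shown are invented for illustration only; real files also carry namespace declarations omitted here for brevity.

```xml
<!-- example.admx: the language-neutral policy definition (hypothetical policy) -->
<policyDefinitions revision="1.0" schemaVersion="1.0">
  <policies>
    <policy name="SampleSetting" class="Machine"
            displayName="$(string.SampleSetting)"
            key="Software\Policies\Contoso" valueName="SampleSetting">
      <parentCategory ref="windowsComponents" />
      <supportedOn ref="windows:SUPPORTED_WindowsVista" />
    </policy>
  </policies>
</policyDefinitions>

<!-- example.adml (stored in the en-US folder): the language-specific strings -->
<policyDefinitionResources revision="1.0" schemaVersion="1.0">
  <resources>
    <stringTable>
      <string id="SampleSetting">Enable the sample Contoso setting</string>
    </stringTable>
  </resources>
</policyDefinitionResources>
```

Note how every displayed string is only a reference into the ADML string table; this is what makes the ADMX file itself language independent.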
Because of the changes to Group Policy in Vista, the ADMX format is incompatible with the
ADM format meaning that environments managing a mix of Windows 2000 and/or XP with
Vista will need to either translate their existing templates to ADMX format or create new ones.
Organizations that want to make sure critical settings are applied to all of their Windows clients
will need to put in place a strategy that will support the translation of ADM to ADMX and vice
versa, but of course, only for the settings that apply to any Windows version.
ADM/ADMX Conversion Tool
Microsoft licensed an ADM to ADMX conversion tool from FullArmor Corporation. This free utility is
available from FullArmor.
While server administrators will not be involved as of yet with the conversion of ADM to
ADMX templates, they will be involved with the creation of the central store for ADMX
templates. In previous versions of Windows, each time a new ADM template was created it
would be copied from the local system to the SYSVOL share on the domain controller. It would
then be copied to every DC in the domain. With Vista, ADMX templates are referenced locally
on the PC they were generated from, but if you have several PC administrators working on these
templates, you’ll want to create a central storage container that everyone will reference when
working on new or existing templates. Creating a central store is easy, but it needs to be planned
and performed by server administrators.
1. Log on with domain administrative rights.
2. Locate the PDC Emulator domain controller in your network. The easiest way to do this
is to open the Active Directory Users and Computers console, right-click the domain
name and choose Operations Masters, then click the PDC tab to find the name of the
DC. Then use Explorer to navigate to its SYSVOL shared folder. You use the PDC
Emulator because it is the engine that drives GPO changes in the network.
3. Navigate to the SYSVOL\domainname\Policies folder where domainname is the DNS
name of your domain.
4. Create a new folder called PolicyDefinitions.
5. Copy the contents of the C:\Windows\PolicyDefinitions folder from any Windows Vista PC to
the new folder you created in step 4.
6. Include the appropriate ADML folders. For example, US English systems would use the
en-US folder.
7. Launch the Group Policy Editor (GPEdit.msc). It will automatically reference the new
central store as will all editors on any Vista PC in your domain.
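Steps 3 through 6 amount to a simple folder copy. The sketch below mocks up the two locations with local placeholder paths so the sequence is clear; on a real network the destination would be the PDC Emulator's SYSVOL share and the source would be a Vista PC's C:\Windows\PolicyDefinitions folder.

```shell
#!/bin/sh
# Placeholder paths standing in for the real locations (see comments).
SYSVOL_POLICIES="./sysvol/corp.example.com/Policies"  # \\PDCe\SYSVOL\<domain>\Policies
VISTA_DEFS="./vista-pc/PolicyDefinitions"             # C:\Windows\PolicyDefinitions

# Mock up a Vista PC's template folder so the copy below has something to work on.
mkdir -p "$VISTA_DEFS/en-US"
printf '<policyDefinitions/>' > "$VISTA_DEFS/example.admx"
printf '<policyDefinitionResources/>' > "$VISTA_DEFS/en-US/example.adml"

# Step 4: create the central store folder.
mkdir -p "$SYSVOL_POLICIES/PolicyDefinitions"
# Steps 5 and 6: copy the ADMX files along with the required ADML language folders.
cp -R "$VISTA_DEFS/." "$SYSVOL_POLICIES/PolicyDefinitions/"
ls "$SYSVOL_POLICIES/PolicyDefinitions"
```

On Windows you would perform the same copy with Explorer, xcopy or robocopy against the SYSVOL share itself.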
Test this in the laboratory and then introduce the change to production when you begin the deployment.
Note: There is no Group Policy interface for loading ADMX files into a GPO. If you want to add new
settings based on an ADMX file, create the ADMX file and copy it to your central store. It will appear
in the Group Policy Object as soon as you reopen the GP Editor.
A spreadsheet listing all of the new GPO settings in Vista can be found at:
Manage Vista Folder Redirection
As with former versions of Windows, Vista includes the ability to automatically redirect
common user folders to central locations. This policy is much better than using roaming profiles
because it will automatically reconnect users without having to download a vast amount of
content and it is transparent to the user. In addition, Vista can localize the experience,
automatically renaming common redirected folders into the appropriate language the user
prefers. This means that folders such as Documents are named in the proper language when
localization is enabled.
Other folder redirection enhancements include better synchronization. Vista includes a new
synchronization engine that relies on delta compression replication—that's right, the same DCR
that is available in WS03 R2 with DFSR. This provides much better performance than any
previous version of Windows. These enhancements make folder redirection the
best choice for protection of user documents and preferences. In addition, if you implement
folder redirection with your launch of Vista, future migrations will make personality protection
much simpler and easier than when profiles are stored locally.
Server administrators need to be involved in this activity because folder redirection relies on
central folder shares. The biggest concern here is providing proper amounts of folder space for
each user as well as making sure there is a strong backup and protection policy for these folders.
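As a hedged illustration, the redirection target configured in the Folder Redirection policy typically takes a per-user UNC form such as the following; the domain, share and folder names here are placeholders. Combined with a DFS namespace, the same path resolves correctly in every site.

```
\\corp.example.com\UserData\%USERNAME%\Documents
```

Sizing the share behind this path, and backing it up, is where the server team's involvement matters most.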
Manage Vista Security
There are several different aspects of security that you need to manage with Windows Vista—
networking, wireless, User Account Control, and more—but one of the most important is the
ability to run more than one local Group Policy Object on a system, up to three in fact. Vista
applies these local GPOs in layers. As in previous versions of Windows, the first layer applies
to the computer system. The second applies to a local group, either the Administrators or the
Non-Administrators group. The third can apply a local policy to specific local user accounts.
This gives you a
lot more control over computers that may or may not be connected to an AD structure.
The use of multiple local policies can make it easier to ensure that highly protected systems use
different settings based on who is logging on, something that was a challenge in previous
versions of Windows. To create multiple local GPOs, use the following steps:
1. Log on with local administrative rights.
2. Launch a new Microsoft Management Console using Run, mmc /a.
3. In your new console, go to File, Add/Remove Snap-in.
4. In the left pane, scroll down until you find the Group Policy Object snap-in and click Add.
5. For the first GPO which is applied to the local computer, click Finish.
6. Add another GPO snap-in.
7. In the Select Group Policy Object dialog box, click the Browse button.
8. In the Browse dialog box, click on the new Users tab (see Figure 5.24).
9. Select Administrators or Non-Administrators and click OK. Click Finish to add this GPO.
10. You can repeat steps 6 to 9, this time selecting a specific user account if you need to.
Edit each GPO setting as needed. Then, in order to apply these policies to multiple systems, copy
them into the PC system image that you will be creating for these system types.
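When you copy these local policies into a reference image, it helps to know where Vista stores them on disk. To the best of our knowledge the per-group and per-user stores below are new to Vista's multiple local GPOs, so verify the paths on your own build before relying on them:

```
%SystemRoot%\System32\GroupPolicy               Local computer policy
%SystemRoot%\System32\GroupPolicyUsers\<SID>    Administrators, Non-Administrators
                                                and per-user local policies
```

Copying these folders into the system image carries the layered policies to every PC built from it.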
Figure 5.24. Applying a Local GPO to the Administrators group.
There is a lot of information related to Vista security and one of the best sources for this is the Vista
Security Guide:
Manage Vista Event Logs
The Event Log is one of the best ways to discover if anything untoward is going on in your
system. And, if you’re using Vista, you’ll soon discover that its Event Log records a host of
events that were unheard of in previous versions of Windows. In these previous versions,
Microsoft used a number of different mechanisms to record events. Many products and
sub-features of Windows recorded information in their own logs as if they didn't even know the
Event Log existed.
It's no wonder that most administrators didn't even bother to check any logs unless a specific
event occurred or they were spurred on by others, security officers for example. It was just too
much work. With Vista, most of these tools now record events properly and store them in the
Event Log (see Figure 5.25). This is bound to make administration of Vista PCs easier, but of
course, only when all your systems have been upgraded to Vista.
Vista’s Event Viewer now categorizes events to make it easier to understand what changes have
been performed on the system. Vista also provides detailed information on events, demystifying
those arcane numbers and messages you could never understand. In addition, Vista can forward
events to a central collector. Right now, that collector is another Vista PC since Windows Server
Codenamed “Longhorn” is not released yet, but it is still a step forward.
Server administrators should be aware of these changes in preparation of the release of Longhorn
Server. This is one reason why they should assist the PC operations team with Event Log
configuration. This will get them ready for event management when Longhorn Server is released.
Figure 5.25. The Vista Event Viewer is rich with information.
Manage Vista Licenses
Organizations with volume license agreements with Microsoft will find they need to
implement a central Key Management Service (KMS) to activate Vista PCs and maintain that
activation in their network. As mentioned in Chapter 2, anyone using a volume activation key
(VAK) will need both activation and re-activation in order to maintain a proper level of user
experience with the system. This protects volume activation keys in ways that have never been
possible before. Organizations using multiple activation keys can also rely on KMS to provide
activation services. The major difference between the MAK and the VAK is that the MAK
requires only a one-time activation. The VAK requires constant re-activation (every 180 days).
In addition, the MAK requires a communication with Microsoft at least through the proxy
service if not from each machine using a MAK whereas VAKs never require access to
Microsoft’s activation Web site.
You can set up the Key Management Service on either Vista, Windows Server 2003 with SP1 or
Longhorn Server. At the time of this writing, Longhorn Server was not available and, since you
would not want to put this essential service on a workstation, Vista is not an option. Therefore,
you should install this service on WS03 SP1. Make sure you run it on at least two servers to
provide redundancy.
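Once a KMS host is in place, pointing a Vista client at it uses the slmgr.vbs script that ships with Vista. The host name below is a placeholder, and the commands are built into variables and echoed rather than executed, so you can review them first; on an actual Vista client you would run the quoted commands from an elevated prompt.

```shell
#!/bin/sh
# Placeholder KMS host; 1688 is the default KMS TCP port.
KMS_HOST="kms01.corp.example.com"
# Commands a Vista client would run (shown via echo for review only).
SET_KMS="cscript //nologo slmgr.vbs /skms $KMS_HOST:1688"   # register the KMS host
ACTIVATE="cscript //nologo slmgr.vbs /ato"                  # activate now
DISPLAY="cscript //nologo slmgr.vbs /dli"                   # display license status
echo "$SET_KMS"
echo "$ACTIVATE"
echo "$DISPLAY"
```

Clients that cannot reach the KMS host will fall back to the activation grace period, which is one reason to run the service on at least two servers.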
For more information on Volume Activation in general and to set up a KMS service, go to
To download KMS for WS03 with SP1, go to
Support the Operations Team
As you can see, there are several different operations which need to be performed by server
administrators in the Vista PC migration process. Several are outlined here and you will no doubt
discover more as you work on your own migration project. This is why it is so important for the
server team to take an active role in the outline, preparation, and delivery of the technical
solution you will create to manage this new operating system.
Server team members should be ready to assist with any operation and should ‘audit’ non-server
operations such as PC system image creation as this process will be carried forward to the
coming Longhorn Server release. This will help give them a heads up for when it is their turn to
deploy new technologies and new services.
In addition, server team members will be called upon to assist in the transition of project
administration and operations procedures to production operations staff. If server team members
participate early and eagerly in this process, then they can guarantee that there will be no
administrative ‘surprises’ once the project is complete.
Chapter 6: Preparing Applications
Application or software management is one of the most challenging activities related to PC
management and migration. Application incompatibilities, application conflicts, application
installations, application delivery, application license management, and application retirement
are only a few of the issues you must master if you want to be in complete control of your desktop
and mobile network. In fact, an entire science has been built around the management of software
and applications with both manufacturers and experts weighing in to add their two cents' worth.
As you might expect, the focus of this chapter is to help you make sense once and for all of how
to prepare, distribute, manage and control applications in your network. First, some definitions:
The term program refers to compiled code that executes a function.
The term application software refers to programs designed to operate on top of system
software such as an operating system. This differentiates between the OS and the
programs that run on top of it.
The term software usually refers to an off-the-shelf commercial program. This category
includes items such as Microsoft Office, Adobe Acrobat, Corel Draw, Symantec
Corporate Antivirus, and so on—all commercial products you can buy discretely for your
organization.
The term application usually refers to custom in-house development. This category
includes anything that you develop in-house or that you have developed through
outsourcing, but is a custom version of a tool that will only be used by your organization.
It includes items such as Web applications, line of business systems, or anything that is
generated by tools such as Microsoft Visual Studio. It also includes user-developed
programs such as those created with Microsoft Access or even just macros and templates
created in Microsoft Office.
Many sources use the terms software and application interchangeably, but in an effort to avoid
confusion as much as possible, this guide will use the term applications to refer to both
applications and software. If a specific reference is required to either commercial software or
in-house applications, they will be addressed as such.
You might wonder why applications are discussed before operating system images in this guide. In
any migration project, the bulk of the engineering work is focused on application preparation and the
larger the organization, the greater the effort required. This is why it is so important to stringently
apply the rationalization principles outlined in Chapter 5—they help reduce this massive level of effort
and keep it to a minimum. Since applications take considerable time to prepare, it is a
good idea to begin this preparation process as soon as possible in the project. This way, applications
will be ready if and when they are needed by the OS team as they build the standard operating
environment (SOE) that will be deployed in your network.
There is no better time to build and implement a structured application management strategy than
during a migration project. Each and every application in your network must be examined for
compatibility; it must then be packaged and delivered to endpoints along with the new OS.
Since you’re making all this effort, why not take the time to revise your application management
strategy and reconsider your approaches to application deployment?
If you will be doing in-place upgrades, you will not have to redeploy every application to your
desktops. Of course, few people opt for the in-place upgrade as it has a very poor reputation from
previous versions of Windows. While the in-place upgrade actually works in Vista, you still need to
remove and replace key applications such as anti-virus or other system utilities and you may need to
provide corrections for some of the applications that exist on the upgraded systems. Whichever OS
deployment method you choose, you need to review your application strategy during this deployment
as you should in every deployment.
This is the goal of this chapter: to help you develop a structured application management strategy
that will help ensure you are always compliant in terms of usage and licensing as well as
responding to business needs. To do so, this chapter provides a structured look at each activity
related to application management. These include:
Windows Vista features in support of application operation
Tools which assist in application compatibility testing
The components of a structured application management strategy
The components of a structured system management strategy
Application packaging activities
Application virtualization and application streaming
Application distribution mechanisms
Preparation for system and application deployment
Each of these items forms a cornerstone of a structured application management strategy.
You are now at application preparation in the deployment process (see Figure 6.26). Activities in
this part of the project are performed by the PC team (see Figure 6.27). In fact, the PC team now
begins several engineering activities that will continue throughout the Organize phase of the
QUOTE system until every element of the solution is brought together for integration.
Figure 6.26. Moving through Step 6 in the DDL
Figure 6.27. Activities in this step focus on the PC team
Application Support in Vista
By now, you have had a chance to play with and examine Windows Vista and are becoming
familiar with its core feature set. You should also be aware that many, many things have changed
in Vista and no longer work as they did in previous versions of Windows. This is the case for
application support. Microsoft has changed the very structure that Windows uses in support of
applications. Of course, Windows Vista still offers a central control environment that exposes all
hardware and system resources to applications, but the way applications interact with the system
is, once again, different from previous Windows structures.
For an overview of application compatibility in Vista, see Application Compatibility in Vista,
Working as a Standard User available at:
In addition, like every version of Windows since Windows 2000, Vista includes the Windows
Installer service (WIS). In Vista, WIS has been upgraded to version 4.0—a version that is only
available for Vista or Longhorn Server. Ideally, each application you install on Vista will be able
to take advantage of this service to integrate properly to the operating system.
For more information on Windows Installer in general and Windows Installer 4.0, download Working
with Windows Installer, a free white paper from Resolutions Enterprises Ltd.
Several other items have changed in Vista and each of them affects applications in some way.
Release-related changes:
1. Version numbers have changed for Windows Vista. It is now version 6.0. Applications
that look for specific versions and have not been updated to include version 6 of
Windows will fail.
2. Vista also includes a 64-bit version, which is highly compatible with 32-bit applications
but has some key changes that affect application operation. Microsoft has not made any official
statement to this effect, but Vista may well be Microsoft's last 32-bit desktop OS.
Changes focused on security:
1. User Account Control (UAC) now runs all processes with a standard user token.
Applications that require elevated rights will fail unless they are Run as Administrator.
2. Windows Resource Protection (WRP) has been enhanced to include registry keys as well
as critical files and folders. Applications that write to protected areas of the registry or the
Windows folder structure will fail.
3. Session 0 is now completely isolated. It is reserved for services and other non-interactive
processes, while applications run in separate interactive user sessions. Applications that
need to display a user interface from Session 0 will fail.
4. The Windows Firewall has been enhanced. It now relies on the Windows Filtering
Platform, a new system that filters at several layers in the networking stack and
throughout the operating system, providing better system protection. Applications that
cannot take firewall restrictions into account will fail.
5. The Vista Web platform has been upgraded to Internet Information Services (IIS) version
7. IIS 7 now includes a completely componentized structure letting administrators install
only those components that are required to deliver Web services. The logic is that a
component that is not installed does not need patching and cannot become a security risk.
Applications that have not been updated to operate with IIS 7 will fail.
6. The .NET Framework for Windows Vista has been upgraded to version 3.0. Managed
code that is not compatible with .NET Framework 3.0 will fail though older versions of
the .NET Framework can be installed on Vista.
7. The assembly identity grammar for managed code has been modified in Vista. The best
example of this is the new logon screen and logon structure. Vista no longer relies on the
Graphical Identification and Authentication (GINA), but uses the Credential Manager for
logons. Applications that do not take this into account will fail.
Changes that may hinder the user experience:
1. Vista now supports Fast User Switching (FUS) even when joined to an Active Directory domain.
Applications that do not support FUS may cause compatibility issues for end users.
2. Windows Vista sports a new Desktop Window Manager (DWM). DWM is designed to
support several new features—Flip 3D, a new structure for switching windows that
includes live content previews, and thumbnail previews of open applications on the task bar—all
of which will cause incompatibilities, or at the very least inconveniences to users, when
applications do not support these new features.
3. Vista includes some shell or Windows Explorer changes that may affect applications. For
example, the Documents and Settings folder has been replaced and split in two: the
Users folder now includes all user data and the ProgramData folder includes application
settings. Applications that do not rely on variables and use hard-coded paths will fail.
Vista also includes new user interface themes. Applications that do not take advantage of
these themes will cause operational issues.
Other changes that affect users:
1. Vista no longer supports kernel mode printer drivers, only user mode drivers are allowed.
2. Some components are deprecated and no longer available. These include FrontPage
Server Extensions, Post Office Protocol (POP3) services and Services for Macintosh. Other
strategies are in place for these features. Windows SharePoint Services replace the
FrontPage extensions and Services for UNIX replace the previous Macintosh services.
3. Two older Help File formats are also being deprecated. Vista will no longer use the CHM
or HLP Help File formats. Help is now all based on XML data structures.
Chapter 6
These fifteen system changes all affect application compatibility to some degree, but in many
cases, there are workarounds. Here are the most common:
Version checking can be corrected in a number of ways. It may be possible to edit the
installation file to include the new Windows version, but in some cases this may not be
enough, especially if the application relies on internal version checking before operation.
If you cannot change the internal version check for the application, you might be able to
run it in a Windows OS compatibility mode. Vista includes support for Windows versions
from 95 to XP SP2.
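The version-checking problem can be seen in a short sketch (illustrative only): Windows 2000, XP and WS03 report NT versions 5.0, 5.1 and 5.2, while Vista reports 6.0, so any check for an exact known version refuses to run on Vista.

```python
# Illustrative sketch: why naive version checks break on Vista.
def naive_check(major, minor):
    # Tests for an exact list of known versions -- anything newer fails.
    return (major, minor) in [(5, 0), (5, 1), (5, 2)]

def robust_check(major, minor):
    # Tests for a minimum version -- keeps working on Vista and later.
    return (major, minor) >= (5, 0)

print(naive_check(6, 0))   # False: the application refuses to run on Vista
print(robust_check(6, 0))  # True: the same application accepts Vista
```

A compatibility mode simply lies to the first kind of check, reporting an older version number so the application proceeds.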
64-bit versions of Vista will require applications that are at the very least 32-bit. That’s
because 64-bit versions no longer include any support for 16-bit applications. In some
cases, Vista will automatically replace 16-bit installers with their 32-bit equivalents, but
the ideal mitigation strategy is to run 64-bit applications on x64 versions of Vista.
User Account Control will generate most of the compatibility problems you will face,
especially during application installation. Applications may not install or uninstall
properly because they cannot use elevated rights to do so. You can modify the installation
logic of the application to make sure it requests elevation properly. This is done in the
Windows Installer file for the application. Another option is to run the installation
interactively with administrative credentials, but this would obviously only work in very
small shops. Finally, if you cannot change the code, then you may have to use one of two
options. The first is to give elevated rights to your users, which is not recommended: why
bother having UAC if you don't use it? The second is to look to commercial application
compatibility mitigation tools.
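When you can modify the installation logic, the elevation request is typically declared in the application's manifest. The following is a minimal sketch of such a manifest; requireAdministrator is only one of the possible levels (asInvoker and highestAvailable are the others), and the choice depends on what the installer actually needs:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- Asks UAC to prompt for elevation before the program runs -->
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
```

An executable carrying this manifest triggers the UAC consent or credential prompt up front instead of failing partway through with an access-denied error.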
Windows Resource Protection will also generate a fair share of compatibility issues.
Applications that persist data in either the Program Files or the Windows folders fail
because they do not have write access to these locations. The same goes for applications
that try to replace system dynamic link libraries (DLLs). WRP does not allow any changes
to these key components. Mitigation strategies include running the application in
compatibility mode; modifying the application so that it writes to the new C:\Users
and C:\ProgramData folder structures as well as to the HKEY_CURRENT_USER section of
the registry; or, once again, looking to commercial application compatibility mitigation tools.
Note: Several products are designed specifically to overcome application access rights limitations.
These products provide appropriate access rights on a per user basis when otherwise the application
would fail due to WRP or UAC. They either use a local agent to do so or use Group Policy extensions
to apply rights through Active Directory. These tools are better than directly modifying access
rights to let an application run, for several reasons. First, the modifications they make are not
permanent and do not affect the system for the long term. Second, the modifications are on a per
user basis and therefore are not available to every user of the system. And third, these tools are
policy-driven and centrally-controlled. This makes them good stop-gap measures when modifying the
structure of an application is not possible.
Vendors of application compatibility mitigation tools include:
Altiris Application Control Solution:
BeyondTrust Privilege Manager:
Session 0 issues or issues related to user versus kernel mode operation are also a cause of
incompatibility. These issues are rare, but when they do occur, they break applications
completely. In Vista, only services can run in session 0. In addition, session 0 does not
support any user interface. Applications fail, or worse crash, when they try to display a
user interface in session 0. To mitigate this issue, Vista includes the ability to redirect
user interfaces from session 0 to user sessions. But the ideal mitigation is to update the
application to use global objects instead of local objects and have all user interfaces
displayed in user mode.
Other issues will be discussed as we address them, but you can see that there are a number of
potential issues with applications running on Vista. Of course, if you have the means to upgrade
every commercial application in your network and run at least proper 32-bit versions, then your
issues will be limited, but few organizations have the ability to do this. In our experience,
organizations running deployment projects will have a budget for the upgrade of some, but not
all, applications. Applications such as productivity suites, antivirus applications and perhaps
backup applications will be upgraded as part of the OS deployment project, but every other
application will need a separate budget to pay for its upgrade. In most cases, these applications
are not upgraded and are transferred over in their current state. This is another reason for
rationalization.
Custom in-house applications will often cause the most grief. Of course, if you use proper
development strategies and program applications according to best practices and the Windows
Logo guidelines (the recommended structure Microsoft provides for applications running on
Windows), then you will have few issues, but this is not always the case. If you don't want to
end up running a parallel redevelopment project for mission-critical corporate systems, then
you'll turn to the application compatibility mitigation strategies outlined in this chapter.
For more information and access to a list of tools for application compatibility, go to the Vista
Application Compatibility Resource Guide at
Vista Application Compatibility Features
From the previous list of the fifteen system changes in Vista, you'd think that all you'll run into
are application compatibility issues. Surprisingly, or perhaps not surprisingly since Microsoft
planned for it, very few applications have issues with Vista. Of all of the applications we run
here at Resolutions,
none of them had any issues. Of course, we had to upgrade anything that interacted at a low level
in the operating system such as antivirus, disk defragmentation, or other utilities, but once that
was done, most other applications worked just fine. In most cases, this is exactly what you’ll find
in your own networks.
Microsoft is keeping track of each application that passes either the "designed for Windows Vista" or
the "compatible with Windows Vista" bar and is providing weekly updates through Knowledge Base
article number 933305. See for more information. At the time
of this writing, this article listed 129 "designed for Vista" and 922 "compatible with Vista" applications.
Microsoft recently released an application compatibility update for Windows Vista. It is documented in
Knowledge Base article number 929427 and can be found at
Make sure your systems include this update before you deploy them.
Also, the ieXBeta site includes a list of applications that work, that have issues and that don’t work
with Windows Vista at
The University of Wisconsin also publishes a list of compatible applications for Vista at
Microsoft has documented known issues with applications in several tools they have made
available for application and system compatibility checking. Chapter 1 introduced the Vista
Upgrade Advisor (VUA). This tool installs and scans one machine at a time, so it's not an
enterprise tool by any means, but if you run it on some of the most typical and even the most
challenging PC configurations in your network, it will give you a very good idea of the issues
you're likely to run into. VUA requires administrative rights for both the installation and the scan
you run; very strange things happen with VUA if you run it as a standard user. If you want a
quick peek at how challenging your application picture will be, then run the scan on a few
systems and print out the report.
For a more advanced application compatibility analysis, you’ll want to use the Application
Compatibility Toolkit (ACT) which is discussed below. Among other things, ACT can provide a
systems-wide scan of all PCs and report on the state of their applications. But, it is also important
to know which features Vista itself includes for application compatibility. There are several:
File and Registry Virtualization
64-bit Support for 32-bit Applications
Program Compatibility Assistant
Windows Installer version 4.0
In addition, Microsoft has provided additional tools to assist in-house development projects in
analyzing potential compatibility issues.
Vista File and Registry Virtualization
Because of some of the changes in the operating system and because of the implementation of
Windows Resource Protection, Microsoft has instituted a small form of file and registry
virtualization in Windows Vista. This is a small form of virtualization because Microsoft elected
to provide only basic support for virtualization. For full application virtualization, organizations
must look to commercial tools.
Vista’s file virtualization is designed to address situations where an application relies on the
ability to store files in system-wide locations such as Windows or Program Files. In these cases,
Vista automatically redirects the write into a per-user virtual store under the user's profile, at
C:\Users\username\AppData\Local\VirtualStore\Program Files\..., so each user running the
application gets a private copy of the redirected files.
Similarly, Vista’s registry virtualization redirects system-wide registry keys. Keys that would
normally be stored within the HKEY_Local_Machine\Software structure are redirected to a
per-user virtual store branch under HKEY_Current_User\Software\Classes\VirtualStore\Machine\Software.
Vista's built-in virtualization does not work with every application, but it does work. You'll have
to test each suspect application to determine whether it interacts properly with this level of
virtualization or whether it needs repairs.
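The redirection behavior can be modeled in a few lines. This sketch is ours, not Microsoft's implementation; the list of protected roots and the store location parameter are simplified assumptions:

```python
import ntpath  # Windows path semantics, usable on any platform

# Simplified model of Vista file virtualization: writes aimed at
# protected, system-wide folders are redirected into a per-user store.
PROTECTED_ROOTS = ("C:\\Program Files", "C:\\Windows")

def virtualize(write_path, user_store_root):
    # user_store_root stands in for the user's VirtualStore folder.
    for root in PROTECTED_ROOTS:
        if write_path.lower().startswith(root.lower() + "\\"):
            relative = write_path[len("C:\\"):]
            return ntpath.join(user_store_root, relative)
    return write_path  # unprotected locations are written in place
```

A write to C:\Program Files\App\settings.ini comes back under the user's store, which is exactly why settings virtualized this way are invisible to other users of the same PC.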
64-bit Support for 32-bit Applications
As mentioned earlier, x64 versions of Windows Vista do provide support for 32-bit applications
but no longer run 16-bit applications. Each x64 system includes a Windows on Windows 64
(WOW64) emulator, which runs 32-bit applications. This WOW64 emulator will also be able to
convert some well-known 16-bit installers into their 32-bit equivalents. This is done through the
inclusion of some installer mappings within the registry.
But x64 systems do not support the installation of 32-bit kernel mode drivers. If kernel mode
drivers are required, they must be 64-bit and they must be digitally signed. Digital signing of
drivers ensures that they are the actual drivers provided by the manufacturer and that they have
not been tampered with.
In addition, x64 systems make it easy to identify x86 processes: each process includes a *32
beside the executable name in Task Manager. They also include a special Program Files (x86)
folder along with the standard Program Files folder to differentiate between installed 32-bit and
64-bit applications. For files installed in the Windows folder by 32-bit applications, a special
SysWOW64 folder is used instead of the System32 folder. And the registry includes a special
key called HKEY_Local_Machine\Software\Wow6432Node to store 32-bit information. The
WOW64 emulator automatically redirects all 32-bit application requests to the appropriate
supporting locations.
x64 systems offer better memory support than their x86 counterparts, accessing up to 32 GB of
RAM and 16 TB of virtual memory. And because they do not have the operational limitations of
32-bit processors, 64-bit processors can grant a full 4 GB of memory to running 32-bit
applications, something those applications could never obtain on an x86 system.
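The registry side of WOW64 redirection can also be sketched. This toy model is ours; it only handles the HKLM\Software branch and ignores the exemption list the real emulator maintains for shared keys:

```python
# Toy model of WOW64 registry redirection: a 32-bit process opening a key
# under HKLM\Software is transparently pointed at the Wow6432Node branch.
SOFTWARE = "HKEY_LOCAL_MACHINE\\Software"

def wow64_view(key_path):
    if key_path.startswith(SOFTWARE + "\\"):
        # Splice Wow6432Node in after the Software root.
        return SOFTWARE + "\\Wow6432Node" + key_path[len(SOFTWARE):]
    return key_path  # other hives are shared between 32- and 64-bit views
```

Because the splice is transparent, a 32-bit application believes it is reading and writing HKLM\Software directly, while 64-bit applications on the same machine see an entirely separate branch.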
Because of these support features in x64 systems, moving to an x64 platform does not have to
occur all at once. You can begin with the move to the OS itself along with low level utilities and
continue to run 32-bit applications. Then you can migrate your applications one at a time or on
an as-needed basis until your entire infrastructure is migrated to 64-bit. You should normally
experience noticeable performance improvements as soon as you begin the process.
For an overview of a migration to x64 systems, see Move to the Power of x64, a session presented
at Microsoft Management Summit 2007 available at:
Program Compatibility Assistant
Microsoft also introduced the Program Compatibility Assistant (PCA) in Vista. PCA replaces the
Program Compatibility wizard found in Help and Support, as well as in the Compatibility tab of
an executable's file properties, in Windows XP. PCA is designed to automate the assignment of
compatibility fixes to applications when they are required. PCA runs in the background and
monitors applications for known issues. If an issue is detected, it notifies the user and offers
solutions. Note that PCA is a client-only feature; it is not available for servers and as such will
not be in the Longhorn Server OS.
PCA can detect and help resolve several issues:
Failures in setup programs
Program failures while trying to launch installers
Installers that need to be run as administrator
Legacy control panels that may need to run as administrator
Program failures due to deprecated Windows Components
Unsigned drivers on 64-bit platforms
In addition, PCA manages its own settings in the registry and will automatically inform users
about compatibility issues with known programs at startup. PCA can also manage application
Help messages. Programs can be excluded from PCA through Group Policy. Group Policy can
also be used to control the behavior of PCA on PCs. After all, you don’t want users being
pestered by PCA messages once applications are deployed.
PCA automatically pops up when issues arise (see Figure 6.3) and will automatically reconfigure
application compatibility settings based on known issues. These settings appear in the
Compatibility tab of a program executable’s file properties (see Figure 6.4).
Figure 6.3. A Program Compatibility Assistant Message
Figure 6.4. The Compatibility Tab demonstrates that Vista supports several compatibility modes
PCA modifications are applied at the individual user level in keeping with the user mode
approach used by Vista. Settings can be modified for all users, but elevated privileges are
required to do so. This lets PCA work even with standard user accounts.
Of course, PCA is not the solution for all applications, but it is a start and will make sure many
applications that would normally fail will work properly in Vista without modification.
Windows Installer version 4.0
As mentioned earlier, Vista also includes a new version of Windows Installer, version 4.0. There
are strong arguments for migrating all software installations to this service, mostly because of
the powerful feature set it offers. WIS offers many features, but the most important include:
Application self-healing—if untoward events occur in the structure of an installed
application, WIS repairs it before it starts.
Clean uninstallation and installation rollback—Because WIS relies on an installation
database, it can undo any changes it makes to a system and do so cleanly in the event of a
failed installation or in the event of a complete removal of an installed application.
Elevated installation rights—WIS is designed to work with Group Policy Software
Delivery to provide automatic rights elevation when applications are installed.
Integrated patching capabilities—WIS will support in-line patching of installed
applications as well as the update of installation logic to include patches prior to
application installation.
There are many more features and readers interested in knowing more should look up the white
paper mentioned at the beginning of this section (Application Support in Vista). Anyone using a
standard application management strategy in Windows should endeavor to integrate all
applications with WIS, and many organizations have already done so. But application installation
packages that are designed as MSIs—the WIS file format—will not necessarily work with Vista
because of the enhancements to WIS 4.0.
Many commercial application packaging tools can evaluate existing MSIs against the
requirements of WIS 4.0, applying the internal consistency evaluator (ICE) rules provided by
the WIS 4.0 software development kit, and report on required changes. WIS now supports
several Vista features:
The Restart Manager, a tool designed to stop and restart applications while patching
occurs instead of causing a system restart.
User Account Control to properly elevate rights during silent installations.
User Account Control patching to ensure patches use proper rights elevation when they are applied.
Windows Resource Protection to properly redirect files and registry settings when
applications are installed.
The support of these new features should be carried through to all MSI packages. And, to make
sure packages work properly, each MSI should include a manifest that describes the application
components to the OS. It might also be easiest for organizations to digitally sign all software
packages that haven't already been signed by their manufacturers, to avoid future issues related
to UAC once the application is deployed.
For example, software packaging products such as Altiris Wise Package Studio
( allow organizations to not only upgrade
existing packages to function with WIS 4.0, but also allow them to capture legacy installations and
transform them into WIS 4.0 compatible installations.
Microsoft ACT version 5.0
Since applications form such a big part of every OS deployment project, Microsoft has
endeavored to build tools that help mitigate migration efforts. The Application Compatibility
Toolkit is one such tool. ACT has been around for several generations, but Microsoft took
special care to provide as many features as possible in ACT version 5. As has been mentioned
before, ACT requires a SQL Server database to store data. The server team should already have
been preparing this database service in support of the migration effort. So, if you decide to use
ACT, you can simply make use of the existing database service at installation.
ACT uses a simple three-step process for compatibility evaluation:
1. First, it inventories applications and gathers application compatibility data through its
built-in compatibility evaluators.
2. Then it lets you analyze applications, letting you prioritize, categorize, rationalize and
otherwise consolidate them. This is done partly through the tracking data that ACT can
collect from end-user systems. Additional data can be collected by synchronizing your
local ACT database with the online Application Exchange.
3. Finally, it lets you test and mitigate compatibility issues if they arise by packaging fixes
and corrections for the applications.
ACT includes several compatibility evaluators:
User Account Control Compatibility Evaluator (UACCE) which runs on XP and detects
when applications attempt to modify components a standard user does not have access to.
It can also tell you whether the file and registry virtualization included in Vista would
correct this behavior.
Internet Explorer Compatibility Evaluator (IECE) which runs on both XP service pack 2
and Vista to detect issues related to IE version 6 and IE 7, especially the latter’s
execution in protected mode.
Update Compatibility Evaluator (UCE) which runs on Windows 2000, XP and WS03 to
detect applications that depend on files or registry entries that are affected by Windows
updates.
Windows Vista Compatibility Evaluator (WVCE) which enables you to evaluate issues
related to the GINA deprecation, session 0 and other Vista deprecations.
ACT includes an Inventory Collector which is in the form of an executable package that must be
installed on each local machine. Installation of the collector requires elevated rights. In managed
environments where everyone is running as a standard user, you will need to use a deployment
method to install this package. Once installed, the package runs for several days or weeks,
inventorying the applications on a system, analyzing potential compatibility issues, and
analyzing application use. It then reports back to the central ACT database, creating an entry for
each system (see Figure 6.5).
Several tools are available for the remote installation of the collector package on a system. These
tools are required because administrative rights are required for installation. If you already have a
systems management system in place such as Altiris Deployment Solution or any other product, then
you can rely on this tool for the deployment of the ACT package. If not, then you can look to the
recommendations in Chapter 4 to select one. But deploying a systems management infrastructure
may be overkill at this point. After all, you only want to install this one component, and the
deployment of a systems management infrastructure, if it is part of your project, will come as you
deploy new OSes.
Several lighter-weight tools can support this installation. For example, iTripoli (
offers AdminScriptEditor (ASE). ASE allows you to generate scripts that can run in different
administrative contexts, encrypting the credentials to protect them. You can therefore create a logon
script that runs the installation with elevated privileges in complete confidence, letting you make
use of ACT without having to relax any security measures.
Figure 6.5. ACT provides a central repository of application compatibility data
The usage analysis is especially useful as you wouldn’t want to spend any time with applications
that are installed but not used. And, if you’re so inclined, you can share your application data
with the Microsoft Compatibility Exchange (MCE). MCE is a system that collects anonymous
data on application compatibility. It relies on administrators like you to provide information to
others. In return, you receive information others have shared on the applications that are found in
your network. Microsoft also collects information from commercial application testing and the
Microsoft Logo Certification program.
While the MCE is a great idea, its value is only as good as the data it stores. There are no real
standards for the structure of this data or for the evaluation ratings organizations will submit
to this program, and it is unlikely organizations will submit data on their own internally-developed
systems. But in regard to commercial applications and the potential issues you may
encounter with them, it is a valid resource.
More information on ACT can be found at You can also rely on the BDD 2007 guide for Application Compatibility.
In-house Development Compatibility Tools
ACT doesn’t only include tools for IT professionals. It also includes tools for developers. ACT
includes three tools for developers—tools for testing application setups, tools for testing Web
sites with IE7 and tools for testing applications with UAC. Specifically, these tools are:
The Setup Analysis Tool (SAT) which will verify that installers will work correctly and
avoid the issues that make them fail such as kernel mode driver installation, 16-bit
components, GINA components or modification of components protected by WRP.
IE Test Tool which collects any issues related to Web sites and will upload the data to the
ACT database.
Standard User Analyzer (SUA) which identifies any issues related to applications running
under UAC.
But ACT isn’t the only source of information for developers. Microsoft has put together the
Application Compatibility Cookbook which was mentioned in Chapter 1 as well as the Windows
Vista Application Development Requirements for UAC Compatibility guide. The first details
application compatibility changes in Vista, while the second identifies how to design and develop
UAC-compliant applications. A third guide outlines Best Practices and Guidelines for
Applications in a Least Privileged Environment.
The Application Compatibility Cookbook can be found at, the UAC Compatibility guide can be found at, and the Best Practices and Guidelines for
Applications in a Least Privileged Environment guide can be found at
Microsoft also produced a Vista Readiness Hands-On Lab, which works on a Vista virtual
machine and runs you through the most common issues you'll face with Vista. This lab helps
you learn how these potential issues can affect your own applications.
The Vista Readiness Hands-On Lab can be found at
Finally, Aaron Margosis, a senior consultant with Microsoft Consulting Services, has developed
a nifty little tool to test application compatibility with User Account Control. The Limited User
Access (LUA) Buglight is a free tool that scans an application as it runs to identify any activity
that requires administrative rights. Once these activities are identified, it is easier to either correct
the code, correct the application’s configuration or try running it in a compatibility mode because
you know what specifically needs to be fixed. Aaron’s blog also provides a lot of information on
potential solutions for running applications in ‘non-admin’ mode.
LUA Buglight and Aaron’s blog can be found at
Relying on these resources should greatly reduce the likelihood of running into issues when you
try to run your custom applications on Vista.
Develop a Structured Application Management Strategy
With all of these resources to assist your assessment of compatibility for your applications, the
process should be relatively smooth. Next, you’ll need to prepare the applications themselves.
This is a fairly complex process as you run through each application, evaluate its compatibility,
prepare mitigations if required, package the application, test deployment and uninstallation and
then have the application validated by a subject matter expert prior to deployment. Considering
that organizations often have a large ratio of applications per user (a ratio that tends to increase
with the number of users in an organization), it is easy to understand why this is the process
that will require the largest amount of effort and resources in the project. Fortunately, there are
ways to reduce this level of effort. One of the best is to implement a lifecycle approach to
application management.
The Application Management Lifecycle
Few organizations know offhand what software can be found in their network. Fewer still can
guarantee that there are no unused software products on their users’ PCs. This issue stems from
the very nature of distributed systems, but it can be circumvented with the introduction of an
application lifecycle management process.
This lifecycle is based on four major phases and can be applied to both commercial software
products and corporate applications, though there are slight variations in the initial phases. The
four phases include:
Commercial Software Evaluation or Application Preparation: This involves the
identification of the requirement, followed by the selection of a commercial software
product and/or the design of a corporate application.
Software Implementation: This phase focuses on software packaging, quality assurance
testing and deployment.
Maintenance: This phase focuses on ongoing support activities for the product. It will
involve the preparation, testing and distribution of scheduled updates.
Retirement: The final phase is focused on removal of the product from the network due
to obsolescence or on the reallocation of the product to someone else. A removal may be
followed by a replacement which would initiate the lifecycle process once again.
Every application has a lifecycle. It runs from the moment the software development project is
initiated by a manufacturer until the moment the software is retired from the marketplace. For
user organizations, the lifecycle focuses more on when the application is acquired, when it is
deployed, how it is maintained and supported, and when it is retired from the network. In the
case of custom corporate applications, it runs from the moment corporate developers begin work
on the project until the product is retired from use (see Figure 6.6).
Every application also requires patching during its lifecycle in the network. If you adopt an
application early in its lifecycle, you will need to patch it once it is deployed. If you adopt it later
in its lifecycle, you will most likely be able to patch it before it is deployed. Whichever method
you use, you will need to make sure your application management processes take both
pre-deployment and post-deployment patching into account.
Figure 6.6. The Application Management Lifecycle
Manage Commercial Software Licenses
You also need to include license management as well as performance assessments in your
maintenance task list. With the advent of compliance regulations for organizations of all sizes,
license management takes on a much more important role in application management today. The
easiest way to perform license management is to use a proper application allocation process and
make sure that it runs through the entire lifecycle, especially all the way to removal when the
product is no longer needed by the user it was deployed to. There is no justification today for an
incomplete or inadequate application lifecycle management strategy.
Application removal is probably the most unused portion of the entire software lifecycle, yet it is
the most important if organizations want to maintain a legal and compliant network. The issue is
simple. When users move from position A to position B, they change their role within the
organization. With new roles come new tasks. IT groups are quite adept at making sure users'
PCs are updated with the applications required to meet their new requirements, because if they
don't, users will be quick to log a support call. However, the same IT groups are not quite so
adept when it comes to removing unused applications from the same PC.
That’s because application removal is viewed as complex and cumbersome. This myth needs to
be dispelled. Many IT professionals still think that the problem with application removal is that it
is seldom effective. Modern applications are made up of a series of private and shared
components. Removing any of these components can affect the stability of the system. Because
of this, IT often makes the decision to opt for stability at the expense of legal compliance. After
all, systems undergo regular hardware maintenance at which time they are reinstalled anyway. If
the system isn’t compliant, it is only for a short period of time.
This is one more justification for packaging applications to work with the Windows Installer
service. WIS is just one feature that Microsoft has embedded into the Windows operating system
support of application stability, but it is the one with the most impact because you can control the
interaction of your applications with this service. WIS fully supports the complete and effective
removal of every application component from a system so long, of course, as the application was
installed using WIS in the first place. Proper application packaging methods will involve
complete package testing and quality assurance including proper and non-damaging removal of
packaged applications. This is why packaging is such an important part of application
preparation in migration projects.
Develop a Structured System Management Process
Migration projects could not work without some form of automated application delivery to go
along with the automated OS delivery the project will support. Much has been said to date about
the various tools or suites you can use to do this. The fact is, medium to large organizations often
run from 200 to 500 applications, both in-house and commercial, within their network. Managing
application installations and maintenance for several hundred products can be complex. While
the application lifecycle will help because it lets you understand all of the activities required to
manage the application in your network, you’ll find that it also requires another model to help
simplify management issues: the system stack.
Chapter 6
Work with a System Stack
Using a system stack simplifies the application management process because it structures how
computer systems are built. Few organizations will ever need to install 200 to 500 products on
the same computer. Using a model for system design will ensure that applications are
categorized properly and regrouped into families that work together to provide the functionality
required to fulfill specific job roles within your organization. Resolutions has been promoting the
Point of Access for Secure Services (PASS) model for more than 10 years (see Figure 6.7).
Organizations relying on this system stack have a proven track record of proper system and
application management.
Figure 6.7. The PASS System Stack
This system stack is based on the construction of a computer system that responds to corporate
needs in three ways:
- The PASS system “kernel” is designed to meet the needs of the average or generic user. It contains all of the software components required to perform basic office automation and collaboration tasks. In addition, it is divided into a series of layers similar to the OSI Networking Model. Like the OSI Model, it uses seven layers to provide core corporate services. Because its functionalities are required by all personnel, this kernel is installed on all computer systems. More will be said about the kernel in Chapter 7 as we prepare the OS system image for deployment.
- Role-based applications and commercial software are added on top of the kernel to meet the requirements of the special Information Technology roles every person plays within the organization.
- Finally, an ad hoc layer responds to highly specialized IT requirements that are often expressed on an individual basis. This ad hoc layer can be applied at any time and traverses traditional vertical IT roles.
In the PASS model, the kernel is considered a closed component that is reproduced on all
systems. Layers located beyond the kernel are considered optional for all systems.
Constructing systems based on a system stack such as the PASS model greatly reduces system
management efforts because it reduces the number of programs that must coexist on any system.
First, a good portion of systems, sometimes up to 50 percent or even more, will only require the
system kernel. Remember that the kernel should contain every program that is royalty-free and
required by the entire organization (for example, Adobe’s Acrobat Reader or the new Microsoft
XPS Document Reader), every program that is mandated by internal policies (antivirus tools, for
example), and every program for which the organization obtains an enterprise-wide license (for
example, many organizations obtain an enterprise license for Microsoft Office).
Second, by grouping programs into role-based configurations, organizations are able to reduce
the number of applications that must coexist on a system. Role-based configurations include
every program that is required by every member of the IT role grouping. For example, Web
Editors would require a Web editing tool, a graphics tool, a Web-based animation tool, and other
Web-specific utilities. This group of tools can be packaged separately, but should be delivered as
a single unit on all systems belonging to the IT role. Role-based configurations often include no
more than 10 to 30 individual programs depending on the role. Only these groupings need to be
verified with each other and against the contents of the system kernel. There is no requirement to
verify or test the cohabitation of programs contained in different configurations because they are
not likely to coexist on the same system.
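The testing savings this grouping provides can be made concrete. The sketch below (application names and role groupings are purely illustrative) counts the coexistence test pairs needed when each application is verified only within its own role-based configuration and against the kernel, compared with a full cross-product of every application against every other:

```python
from itertools import combinations

# Sketch: how role-based groupings shrink the coexistence-testing matrix.
# All application names and role groupings are illustrative only.

kernel = ["Office", "Antivirus", "PDF Reader"]
configs = {
    "Web Editor": ["Web Editor Tool", "Graphics Tool", "Animation Tool"],
    "Accountant": ["GL Package", "Spreadsheet Add-in"],
}

def pairs_to_test(kernel, configs):
    """Each app is tested only within its role grouping and against the kernel."""
    tests = set()
    for apps in configs.values():
        tests.update(combinations(sorted(apps), 2))          # within the role
        tests.update((a, k) for a in apps for k in kernel)   # against kernel
    return tests

all_apps = kernel + [a for apps in configs.values() for a in apps]
full_matrix = set(combinations(sorted(all_apps), 2))
reduced = pairs_to_test(kernel, configs)
print(len(full_matrix), len(reduced))  # the reduced set is noticeably smaller
```

Even with this tiny catalog the reduced matrix is smaller; with hundreds of applications the difference is dramatic, because programs in different configurations never need to be tested against each other.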
Third, ad hoc programs reduce system management efforts even further because they are only
required by very few users in the organization. They are still packaged to enable centralized
distribution and automated installation, but once again, they only require testing against the
kernel and the configurations they will coexist with; because of their ad hoc nature, however,
they may coexist with any possible configuration.
As discussed earlier, each application has a lifecycle of its own that is independent of its location
within the system construction model. The difference lies in the rate with which you apply
lifecycle activities to the application. Components of the kernel will have an accelerated lifecycle
rate—since they are located on all systems, they tend to evolve at a faster pace than other
components because they are supported by corporate-wide funding—while products within the
outer layers of the model will have slower lifecycle rates which will be funded by the groups that
require them. Ideally, the rate of evolution of these products will be monitored by the subject
matter experts or application sponsors your organization identifies for each non-kernel application.
Application sponsors are responsible for several activities, four of which are:
- Subject matter expertise for the application.
- Acceptance testing for the application package.
- Application monitoring or watching for new versions or patches.
- Rationalization justifications or justifying why the application should be in the overall
software portfolio.
Maintain Constant Inventories
Another key aspect of the application management lifecycle process is the maintenance and
upkeep of corporate inventories. This is another area where a system stack like the PASS model
can play a role because of the way it is designed to work. With a system stack, maintaining an
inventory need only focus on identifying role groupings for role-based configurations.
Applications that are contained in the kernel already have corporate-wide licenses so they are
easy to track. Similarly, ad hoc products are only located on a few machines which also makes
them easy to track.
This leaves the vocational groupings, which become the mainstay of the inventory system.
Constant application inventories should be directly integrated into application management
practices, which should include several elements (see Figure 6.8).
- Package Repository: A central deposit for all authorized applications for the network. This repository is the source for all application distributions within the organization.
- System Kernel Inventory: The system kernel must be completely inventoried. This inventory will be linked to the Package Repository because many of its components will be stored in packaged format to facilitate automated system construction and conflict detection.
- Role-based Configuration Deposit: This deposit identifies each of the packages found within each configuration. It is tied to the Package Repository to support system construction and vocational changes.
- Vocational Groupings: This deposit regroups all of the users belonging to a given IT role. It is tied to the Role-based Configuration Deposit because it identifies which configurations systems require on top of the system kernel. Ideally, these groupings will be stored within your organization’s directory service (for example, Active Directory) since each grouping contains only user and machine accounts.
- Core Inventory Database: The core inventory database brings everything together. It includes inventories of all computer systems in the network, all user accounts, all application components, and much more. It is used to validate that systems contain only authorized components as well as to assign ad hoc product installations since these are mostly performed on a case-by-case basis. This database is maintained by the organization’s systems management tool and forms the core configuration management database (CMDB).
- Web-based Reporting System: Another function of the organization’s systems management tool is to provide detailed information on the inventories that have been collected as well as to provide consistency and compliance reports for all systems.
Figure 6.8. A Structured Inventory Management and Systems Management System
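As a rough illustration of the validation role the Core Inventory Database plays, the sketch below (all names hypothetical) derives a system's authorized software set from the kernel, its role-based configuration, and any ad hoc grants, then diffs it against the actual inventory to produce a compliance report:

```python
# Sketch: validating a system against its authorized software list, as a
# core inventory database would. All application and role names are invented.

def authorized_set(kernel, config_deposit, role, ad_hoc_grants):
    """Union of kernel, the role-based configuration, and ad hoc grants."""
    return set(kernel) | set(config_deposit.get(role, [])) | set(ad_hoc_grants)

def compliance_report(actual_inventory, authorized):
    """Flag software that shouldn't be there and software that is missing."""
    unauthorized = set(actual_inventory) - authorized
    missing = authorized - set(actual_inventory)
    return {"unauthorized": sorted(unauthorized), "missing": sorted(missing)}

kernel = ["Office", "Antivirus"]
deposit = {"Web Editor": ["Web Editor Tool", "Graphics Tool"]}
report = compliance_report(
    ["Office", "Antivirus", "Web Editor Tool", "Shareware Game"],
    authorized_set(kernel, deposit, "Web Editor", []),
)
print(report)
```

A systems management tool performs essentially this comparison at scale, feeding the Web-based reporting system with consistency and compliance reports.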
System staging and application distribution can be performed on the basis of the inventories your
organization maintains. In addition, when systems change vocations (i.e., a user changes IT
roles), the inventories are used to remove applications that are no longer required and add
applications required by the new role-based configuration. This means that there must be a very
tight relationship between the inventory process and the application distribution tools and processes.
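The remove/add calculation for a vocation change is simple set arithmetic over the role-based configurations, as this sketch with hypothetical role contents shows:

```python
# Sketch: when a user changes IT roles, set arithmetic over the role-based
# configurations yields the applications to remove and to add.
# Role contents are illustrative only.

def vocation_change(old_config, new_config):
    """Return (to_remove, to_add) when moving between role configurations."""
    old, new = set(old_config), set(new_config)
    return sorted(old - new), sorted(new - old)

to_remove, to_add = vocation_change(
    ["Web Editor Tool", "Graphics Tool", "Animation Tool"],   # old role
    ["GL Package", "Graphics Tool"],                          # new role
)
print(to_remove)  # ['Animation Tool', 'Web Editor Tool']
print(to_add)     # ['GL Package']
```

Applications common to both roles (here, the graphics tool) are left untouched, which keeps the transition fast and minimizes distribution traffic.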
Locked Systems and the Standard User
For processes based on system construction and inventory management to work, organizations
must ensure that software installations originate only from centralized and authorized sources.
When employees work as a Standard User, they cannot install applications on their own.
Package Applications for Windows Installer
Because they need to be installed silently and in the background, especially during system
reconstruction, applications must be packaged to automate the installation process. Yet,
application packaging is often one of the most overlooked processes in IT. But, because it is the
process that controls application installations, it is one of the most important processes in any
application management strategy.
Few people in IT haven’t performed an interactive software installation. The installation
experience can range from good for an advanced user to very bad for a novice. The simplest
installation will ask only a few basic questions but some of the more complex installations are
enough to stymie even the most sophisticated PC technicians.
The best installation is the automated installation—the one where no user input is required. And
the best of these is the customized version of the automated installation—the version that is
designed to work within the specifications your organization outlines for its network.
Enter enterprise software packaging (ESP). ESP deals with the preparation of standard,
structured automated installations for deployment within a specific organizational environment.
These automated installations or packages take into consideration all of the installation
requirements for the organization: organizational standards for software usage and desktop
design, multiple languages, regional issues, and especially, application-related support issues. In
addition, packaging should cover all applications, including both commercial software and in-house applications.
There is a difference between the two. Most commercial software products already include the
ability to automate their installation. Unfortunately, there are no official standards for
installation automation among the plethora of Windows-based commercial software products.
Fortunately, this state of affairs is slowly changing with the advent of Windows Installer. With
WIS, it is now possible to aim for a standard, consistent installation methodology for all
software products within an organization. This standard approach is at the heart of enterprise
software packaging.
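Concretely, a standardized WIS installation usually reduces to an msiexec command line, optionally carrying a transform (.mst) that holds the organizational customizations. The sketch below assembles such a command; the UNC paths are placeholders:

```python
# Sketch: a customized automated installation. Organizational standards are
# captured in a transform (.mst) applied at install time; paths are placeholders.

def build_install_command(msi_path, transform_path=None, silent=True):
    """Assemble an msiexec command line for an automated installation."""
    parts = [f'msiexec /i "{msi_path}"']
    if transform_path:
        # The transform carries the organization's customizations
        parts.append(f'TRANSFORMS="{transform_path}"')
    if silent:
        # /qn suppresses all UI so no user interaction is required
        parts.append("/qn /norestart")
    return " ".join(parts)

print(build_install_command(r"\\server\packages\app.msi",
                            r"\\server\packages\corp.mst"))
```

Because every package follows the same command shape, the distribution tool can treat all applications identically, which is exactly the consistency ESP aims for.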
There are a whole series of different software packaging tools on the market. These tools are
designed to assist IT personnel in the preparation of automated, interaction-free application
installations. The focus of these tools is simple: allow IT personnel to perform an application
installation and configuration, capture this customized installation and ideally, reproduce it
successfully on every PC in the enterprise.
Two of the best-known software packaging tools on the market are:
- Altiris Wise Package Studio
- Macrovision AdminStudio
But application packaging must be structured if it is to succeed. Standards must be set and
maintained. Application packages must be documented and detailed in the same manner no
matter who performs the packaging process. Quality assurance must be high; installations must
work in every instance even if the configuration of destination computers varies. Package
repositories must be maintained and updated according to organizational application acquisition
policies. All of these activities are part and parcel of the ESP process.
In addition, there are several secondary reasons to perform packaging. One of the most important
is application conflicts. Applications in a Windows environment are composed of several
different components, which are either private or public. Public components are shared between
different applications. In fact, Microsoft designed Windows Resource Protection to control the
replacement of public or shared components by application installations because of their
potential for disrupting system stability. This is why one of the strongest features of
application packaging tools is conflict management: the ability to inventory all of the
components in a system stack as well as all of the components in each and every application
package. You then use this conflict management database to identify potential conflicts and
circumvent potential issues before they occur. This is one more reason for a standards-based
packaging approach (see Figure 6.9).
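The heart of such a conflict-management database can be sketched in a few lines: record which shared components each package carries and flag any component claimed at more than one version. Package and component names below are invented for illustration:

```python
from collections import defaultdict

# Sketch: the core of a conflict-management database. Each package lists the
# shared components (and versions) it carries; a component claimed at
# different versions by different packages is a potential conflict.
# All package names, file names, and versions are illustrative.

packages = {
    "App A": {"msvcrt.dll": "7.0", "shared.ocx": "1.2"},
    "App B": {"msvcrt.dll": "6.0"},
    "App C": {"shared.ocx": "1.2"},
}

def find_conflicts(packages):
    """Return shared components that appear at more than one version."""
    versions = defaultdict(set)
    for components in packages.values():
        for comp, ver in components.items():
            versions[comp].add(ver)
    return sorted(c for c, v in versions.items() if len(v) > 1)

print(find_conflicts(packages))  # ['msvcrt.dll']
```

Real packaging suites run this comparison against every component of every package and the system kernel, so conflicts are caught during packaging rather than on users' desktops.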
Packaging tools include support for this standards-based approach in the form of packaging
templates and workflow templates. Packaging templates let you create Windows Installer
templates that are applied with default settings each time you prepare a package for WIS
integration. For example, if you decide to digitally sign every package in your network, you
could insert a digital certificate within your template and have it automatically apply to every
package you create. The preparation of these templates must be done with care before you begin
the packaging process.
There are several reasons why the inclusion of digital certificates in application packages is a great
idea. First, if your packages are digitally signed and you sign any application patches with a similar
signature, standard users will be able to install patches without administrative rights in Windows
Vista. Second, if you digitally sign your packages, you can control their use through Group Policy
via the Software Restriction Policies Windows Server supports. These policies ensure that only approved
software is installed and runs in your network. You should look to both practices during application
preparation. After all, the only thing you need is one single digital certificate, which is quite easy to obtain.
Workflow templates control the process your packaging team uses during the creation of a
package. If you design your templates before you begin packaging, then you can guarantee that
every package will be created in the same way. For example, you can structure the process to
follow the workflow example illustrated in Figure 6.9. This means you can assign junior
packagers to most applications and rely on expert advice only for tricky applications which
require more knowledge to work.
Packaging tools also include lots of guidance and some sample templates to work with. But if you
feel you need additional packaging expertise, rely on resources such as the AppDeploy Library. This
library includes a tool called Package Cleaner, which will automatically scan your WIS packages and
provide advice on their contents, especially identifying items that can and should be removed. What
makes this tool great is that it does not remove the contents of the package you select; it just
marks them as non-installable. This way, if you find that you needed a component you removed, you
can simply run the package through Package Cleaner again to mark the component for installation.
Figure 6.9. A Structured Packaging Approach
Packaging tools will also scale well. If your organization has multiple sites with technical staff in
each site, you can easily set the tool up in each site and have its packaging database replicated to
a central location. This allows distributed teams to work together to build the package repository.
A good source of information on packaging is the BDD 2007 Application Management guide:
Finally, to perform your packaging, you will need to categorize all applications, identifying
which type each application falls into. Usually there will be three or four types in each
organization:
- Native Windows Installer commercial applications—applications designed to work with WIS.
- Legacy commercial applications—applications that must be repackaged for WIS.
- Native Windows Installer in-house applications.
- Legacy in-house applications.
Some organizations may further subdivide these categories into Win32, Win64, or .NET
applications. Categorization will greatly facilitate the packaging activity because each
application type requires a different approach. For example, you should never repackage a native
Windows Installer application. Instead, you should create transforms, which will modify the
Windows Installer process to meet organizational standards. You will, however, want to
repackage all legacy applications to turn them into MSIs.
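The routing rule above, transforms for native MSIs and repackaging for legacy setups, can be expressed directly. The catalog entries below are illustrative:

```python
# Sketch: routing each application to the right preparation approach based on
# its category. Encodes the rule above: never repackage a native MSI, create
# a transform instead; repackage legacy setups as MSIs. Names are invented.

def preparation_approach(is_native_msi: bool) -> str:
    """Pick the preparation path for an application category."""
    return "create transform (.mst)" if is_native_msi else "repackage as MSI"

catalog = {
    "Commercial app (native MSI)": True,
    "Legacy commercial app": False,
    "In-house app (native MSI)": True,
    "Legacy in-house app": False,
}
for app, native in catalog.items():
    print(f"{app}: {preparation_approach(native)}")
```

Classifying every application up front lets junior packagers follow this rule mechanically, reserving expert time for the genuinely tricky cases.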
For more information on application management and packaging in general, look up the
Application Lifecycle Management section. Of special interest will be The 20 Commandments of
Software Packaging, a guide which has now become the leading authority in the industry, as well
as Enterprise Software Packaging: Practices, Benefits and Strategic Advantages, which offers a
complete overview of application packaging.
Explore Application Virtualization Options
As you can see, application packaging and general application preparation is a lot of work and
forms the bulk of the PC preparation activities. Now, wouldn’t it be nice if you could vastly
reduce this amount of work while ensuring that applications always work even when applications
that are known to cause conflicts operate together on the same PC? Don’t believe it? Well,
welcome to application virtualization. Application virtualization tools isolate or abstract
application components from the operating system and other applications yet provide full
capabilities for application and operating system interaction. Think of it as Vista’s file and
registry virtualization capabilities on steroids.
In and of itself, application virtualization offers many benefits and may warrant an immediate
implementation, but because of its nature, it requires redeployment of all of the applications you
run in order to take full advantage of the virtualization capabilities. This is why it is ideal to
adopt this technology during a migration project. Otherwise, you would have to replace all of the
applications that are already deployed in your network—uninstalling the application and then
redeploying it as a virtual application. If you haven’t been working with Windows Installer
packages, then uninstalling may leave behind traces of the application leaving the operating
system in a potentially unstable state. This is why the best time to do this is when you are
deploying a brand new, “clean” PC.
There are several different types of application virtualization technologies, but all of them
basically produce a similar feature set:
- All applications are isolated from the operating system to the point where the application thinks it is interacting with the system, but when you look at the system directly, you see that no changes are made to system files, folders, or registry keys.
- All applications are isolated from each other, letting you concurrently run applications that are known to cause issues and conflicts when on the same system. Manufacturing applications, which are often never updated, can run together. Different Microsoft Access versions can run on the same PC at the same time and even interact with each other through cut and paste and other system services. The potential is unlimited.
- Device drivers or hardware components cannot be virtualized and must be integrated directly into the system.
- Applications that interact at a low level in the operating system cannot be virtualized. For example, antivirus software should not be virtualized. Many also include Internet Explorer in this category, but several organizations have been able to properly virtualize IE and have it run on their systems.
- Virtualized applications are often structured for data streaming or, if they are not by default, can be teamed with streaming technologies to provide the same effect. Just like a video that is streamed over the Internet, streamed applications are divided into small blocks of data that can start working as soon as enough content has been delivered to the system. The remainder of the content is then streamed in the background.
- Virtualized applications do not capture an installation, but rather an installed state. This means that the application need only be copied to the system for operation and does not need to run through an installation process as with applications packaged for Windows Installer. In fact, the crudest form of installation is the XCopy, which just copies the application’s files to a system.
- Users do not need elevated privileges to “install” a virtualized application. Since the application does not need to reside in folders protected by WRP, no elevated rights are required.
- Application virtualization platforms include the ability to integrate application access with specific user groups, groups that can be managed and maintained within a central directory such as Active Directory.
- Application virtualization follows Vista’s operating model because, when virtualized, applications only interact with the system in user mode.
- Applications that have been packaged in virtualized layers will work on both Windows XP and Windows Vista because the virtualization layer is in charge of translating system calls and operations in the appropriate manner for interaction with the OS. In some cases, virtualized applications will work with any version of Windows from NT on.
- Application virtualization supports the ‘software as a service’ model since no installation logic is required to run a virtualized application on a PC. Applications are copied to systems in their running state and can therefore support an on-demand delivery strategy.
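The streaming behavior described above can be sketched as a block-delivery threshold: the application is divided into blocks, launch is allowed once the startup set has arrived, and the remainder streams in the background. The block counts below are purely illustrative:

```python
# Sketch: the streaming idea in miniature. An application image is divided
# into blocks; launch is allowed once the "startup set" of blocks has been
# delivered, and the rest continues in the background. Numbers are illustrative.

def can_launch(received_blocks, startup_blocks):
    """True once every block needed for first launch has been delivered."""
    return startup_blocks.issubset(received_blocks)

total_blocks = set(range(100))      # the whole application image
startup_blocks = set(range(15))     # say the first 15% suffices to start

received = set(range(10))
print(can_launch(received, startup_blocks))   # False: still waiting

received.update(range(10, 20))
print(can_launch(received, startup_blocks))   # True: remainder streams later
```

This is why users can begin working almost immediately even though most of the application has not yet reached the PC.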
These reasons, and especially the last one, make application virtualization very attractive. And
since you are in the midst of a migration project to Windows Vista, this is the ideal time to
look to the adoption of application virtualization.
As mentioned in Chapter 4, there are several versions of application virtualization. Make sure you
review each product’s technical feature set before you select the one you wish to go with.
- Altiris offers Software Virtualization Solution (SVS), which is a filter driver that is installed on the OS. The filter driver manages the virtualization process. SVS applications can be combined with AppStream’s AppStream 5.2 server to offer streaming capabilities.
- Citrix offers the integration of its Tarpon beta technology within its Presentation Server version 4.5. Tarpon uses a principle similar to Microsoft’s SoftGrid and streams applications to desktops. Presentation Server 4.5 supports Windows XP application virtualization but not Vista.
- Microsoft offers SoftGrid as part of the Desktop Optimization Pack for Software Assurance (DOPSA). Microsoft acquired SoftGrid in mid-2006 and has since been reprogramming the SoftGrid client to get it to run with Windows Vista and x64. Support for Vista is slated for release in mid-2007 while support for x64 will be in 2008.
- Thinstall offers the Thinstall Virtualization Suite (ThinstallVS). Thinstall incorporates its virtualization engine directly into the software package it creates. As such, no pre-deployment preparation is required.
- At the time of this writing, the only two solutions that worked with Windows Vista were Altiris SVS and ThinstallVS, and both worked very well. Keep this in mind when you choose the application virtualization solution you want to implement.
- Pricing for each solution is relatively similar, as some require direct acquisition costs and others are subscription based. In time, all costs become equivalent.
Do away with Application Conflicts
Despite Microsoft’s best efforts, application conflicts still exist in Windows. This is partly due to
the sheer number of applications organizations must run in order to support their operations.
While small organizations may get away with running small numbers of applications to operate,
medium to large firms often find themselves running hundreds of different applications with all
sorts of functionalities and features, each one requiring some specific component to run properly
within Windows.
Using application virtualization, you can do away with conflicts once and for all and never have
to worry about them again. Just package the application in the right format and then deliver it on
an as-needed basis. Unlike installed applications, virtualized applications do not embed
themselves deep into the OS structure (see Figure 6.10). The virtualization layer protects the OS
from any changes made by applications.
Figure 6.10. Application Virtualization protects operating systems from modifications
Consider your strategies. If you decide to include software virtualization in your deployment
project, you will be able to obtain considerable savings in both time and effort. Virtualization
reduces the time required to package applications since you no longer have to verify potential
application conflicts. You still need to perform proper quality assurance on each package, but
each package should take considerably less time to prepare. One of the most demanding aspects of
software packaging is the need to constantly return to a ‘clean’ system image to ensure the
package is as clean as possible; if you capture applications into virtualization layers, however,
the machine is not affected at all. Therefore, there is no need to return to a pristine OS: the
OS is always pristine. This alone will save considerable time and effort.
In addition, since application virtualization does not require significant infrastructure
modifications, especially when the virtualization technology consists of either a driver or is
included into the package itself, you can take advantage of it immediately. As soon as an
application is packaged for virtualization, you can use it in either XP or Vista. Because of this,
you might consider packaging your most troublesome applications immediately and delivering them
to existing systems without waiting for the deployment project to update the OS. Several
examples are available:
- Access applications. Most organizations cannot afford to upgrade the multitude of Microsoft Access applications their user community has developed. Properly converting these applications to a client-server structure, using back-end databases and front-end screens that can operate through the Access runtime, is the best way to deal with this issue, but if you haven’t taken this step, then virtualize them! This will completely isolate them from the latest version of Access you need to deploy.
More information on running and managing Access applications in-house can be found under Decentralized Development Strategies.
- Custom in-house applications. If you have custom applications that just won’t cohabitate with any other, you can virtualize them and have them finally cohabitate with any other on any system.
- Custom industrial applications. If you have custom industrial applications, for example, manufacturing applications, that require different settings for each manufacturing plant you run, you can now easily virtualize them to run them all on the same system.
This approach will let you get your feet wet with application virtualization and learn what
advantages it truly brings to application management while you’re waiting for the deployment
project to complete.
Review your System Construction Strategy
When you’re ready to integrate application virtualization with your operating system
deployment, you might change the way you perform this deployment. For example, you might
change the way you create your machine build. Before, organizations tended to create a massive
system ‘kernel’ that included all of the most common applications and utilities found within the
organization (remember Figure 6.7?). This ‘fat’ kernel is difficult to build and even more
difficult to test because it requires the integration of vast numbers of components. In addition,
a fat kernel is also more difficult to deploy because massive amounts of data must be sent to
each machine. Using multicasting technologies reduces the time to deploy to each machine, but if
there is a way to thin down the image, why not take advantage of it? Finally, using a fat kernel
means more work when it is time to update its core components. The sheer number of components
means more updates, more often.
Using application virtualization is the best argument for a ‘thin’ kernel. Application
virtualization lets you create a new layer in the PASS system stack: the Generalized Layer. Each
application in this layer is virtualized, but still distributed to each user in the organization. The
kernel itself becomes much thinner because it no longer needs to include these applications (see
Figure 6.11).
Your thin kernel is composed of the core operating system along with any required updates,
adding core utilities such as antivirus, anti-spyware, firewalls, management agents, and
virtualization agents if required. You still create a single core image that will include everything
that is common to all desktops, but now, you can focus on the proper construction of your core
operating system and expect it to maintain its pristine state for the duration of its existence within
your organization. Just imagine the benefits!
Figure 6.11. Application Virtualization affects several layers of the PASS model and supports a Thin Kernel
Reference computers are now much easier to construct. Very few applications or non-OS
components are included in the kernel making it simpler to update, maintain and deploy.
In addition, many virtualization platforms let you convert MSI installations to virtual
applications, saving you time and effort during your migration project. This is the case with
Altiris’ SVS since it integrates directly with Wise Package Studio.
Chapter 7 will focus on the creation of a Thin PASS Kernel. Software virtualization is here to stay and
should be used by every organization that wants to reduce application management overhead costs.
This is one of the core recommendations of this guide, and the processes it outlines will reflect it.
Integrate Application Virtualization into the Migration Process
In addition to changing the way you construct systems, using application virtualization will also
change the way you deploy systems, especially if you include streaming technologies. You can
continue to rely on multicasting technologies to deploy the system kernel and then, you can
immediately stream the applications in the generalized layer to each system as soon as the
system is up and running. Because the applications are streamed, you don’t need to worry about
bandwidth issues as much. Users get the components they need to begin working immediately
even if the streaming is not complete.
Then, once the generalized layer is delivered, you can begin the deployment of the role-based
layers if they are required. Again, this relies on the streaming process, so there is little
impact on bandwidth. You can also use the same process for ad hoc applications.
Streaming technologies rely on Quality of Service (QoS) as a control mechanism to provide
different priorities to users or data flows in routers and switches. Because of this, you may want
to involve your networking group in preparation of the deployment to make sure you don’t run
into network bottlenecks.
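The layered delivery sequence described above reduces to a simple ordering rule: the kernel goes out first via multicast imaging, the generalized application layer streams next, and role-based or ad hoc layers follow only where assigned. A minimal sketch of that rule (the layer names and priority values here are illustrative, not part of any vendor's product):

```python
# Sketch: ordering application layers for post-deployment streaming.
# Layer names and priority values are illustrative only.

DELIVERY_PRIORITY = {
    "kernel": 0,        # deployed first, via multicast imaging
    "generalized": 1,   # streamed to every PC once it is up
    "role-based": 2,    # streamed only to PCs matching an IT role
    "ad-hoc": 3,        # streamed on demand
}

def delivery_order(layers):
    """Return layers sorted in the order they should be delivered."""
    return sorted(layers, key=lambda layer: DELIVERY_PRIORITY[layer])

print(delivery_order(["ad-hoc", "kernel", "role-based", "generalized"]))
# ['kernel', 'generalized', 'role-based', 'ad-hoc']
```

However the priorities are expressed in your chosen streaming platform, the point is the same: users get a working kernel and generalized layer quickly, and everything else trickles in afterward.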
Application virtualization and streaming running on thin kernel systems make a lot of sense in the modern datacenter, certainly more sense than using massive servers to offer desktop services to end users, mostly because, to avoid a single point of failure, you'll always need more than one server to provide the service. Each endpoint has its own resources: CPU, RAM, hard disks, and other peripherals. Streaming servers are little more than file servers and do not require massive amounts of memory or processing capability, unlike Terminal Services or Citrix servers, which in effect replace the processing power of the endpoint. Endpoints are also easier to manage since the only thing they
contain is the thin kernel and can therefore be reimaged with little impact. After reimaging, the
applications the user needs are streamed back onto the system. And, since applications are stored
in a central repository and streamed from there to endpoints, you only have to update a single source when patching is required. Updated applications will automatically be re-streamed to each of the endpoints that requires them.
Of course, you need more services in support of such a strategy. For example, if you want to be
able to freely re-image PCs, then you need to make sure user data is protected at all times. The
best way to do this with Vista is to use Folder Redirection Group Policy Objects (GPO). Folder
Redirection automatically redirects key user folders such as Documents, Pictures, Application
Data and more to server-based file shares. These file shares are configured with offline caching
to ensure a copy of the data is always local to the PC, but since the original data is on the server,
it is protected and backed up on a regular basis. This also means you must fully support user data; to do so, you will need plenty of back-end storage, but disks are cheap today, unlike massive processing servers. Another useful technology is the Distributed File System Namespace (DFSN). DFSN maps local shares to a global share name stored in the directory. The directory then redirects users to a share in the local site whenever they try to access the global share. And, to make sure the same content is in each local site, you can use DFS Replication (DFSR), the remote differential compression (RDC) replication engine introduced in Windows Server 2003 R2, to keep each share in sync without saturating your WAN bandwidth. This makes it much simpler for
users to access data throughout your network.
More on the Folder Redirection, DFSN and DFSR strategy will be discussed in Chapter 7 as you build
the system image and prepare the services required to support it.
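The DFSN behavior described above, a single global share name that the directory resolves to a share in the user's local site, can be pictured as a simple lookup. The namespace, site, and server names in this sketch are invented for illustration; the real resolution is performed by the DFS service and Active Directory site topology:

```python
# Sketch: resolving a global DFS namespace path to a site-local target.
# All namespace, site, and server names below are hypothetical.

NAMESPACE = {
    r"\\corp.example.com\Data": {
        "Paris":  r"\\paris-fs01\Data",
        "Boston": r"\\boston-fs01\Data",
    },
}

def resolve(global_path, client_site):
    """Return the site-local share backing a global DFS path."""
    targets = NAMESPACE[global_path]
    # Fall back to any replica if the client's site has no local target.
    return targets.get(client_site, next(iter(targets.values())))

print(resolve(r"\\corp.example.com\Data", "Paris"))   # \\paris-fs01\Data
print(resolve(r"\\corp.example.com\Data", "Tokyo"))   # falls back to a replica
```

Users always address the one global name; DFSR keeps the per-site replicas behind it in sync.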
In addition, streamed applications are cached locally for better operation. Most streaming
solutions will let you control the duration of the cached application, letting you control license
costs and making sure mobile users have access to applications even when they are not
connected (see Figure 6.12). And, if you don’t have a streaming solution, you can always place
the virtualized applications on a network share configured for offline caching.
Figure 6.12. A Simple Application Virtualization and Streaming Architecture
Application virtualization may not apply to every single situation, but in many cases, it can be considered the Terminal Services 'killer' and can do away with virtual desktop infrastructures (VDI), where you store virtualized OS images on massive servers and use them to provide services to users. In either scenario, the user requires access to an endpoint anyway. Why not make an intelligent choice and do away with the requirement for a massive server? Better yet, why not take the money you would spend on monster servers and use it to pay for your application virtualization solution? Application virtualization finally lets you use the client-server model to its fullest. Don't miss this opportunity to upgrade your data center to the 21st century!
Chapter 7: Kernel Image Management
Each time you perform a PC operating system (OS) deployment, you need to figure out how the installation will proceed. For this, you must discover exactly how the OS installation process works and then determine how you can automate it. With Vista, Microsoft has
introduced a completely new installation process called Image-based Setup (IBS). Basically,
each version of Windows Vista includes a system image file—called a .WIM file—that contains
the installation logic for different editions of Vista. During the IBS installation process, this
system image file is copied to the system disk and then expanded and customized based on the
hardware that is discovered during the process.
WIM images contain several different editions of Vista. Common files are not duplicated within
the WIM as it relies on a single instance store (SIS) to include only one copy of each common
file as well as individual copies of the additional files that build different editions. The edition
you install is determined by the product key you enter during installation. This lets Microsoft ship every edition of Vista on a single DVD. Of course, two system images are required, one for 32-bit and one for 64-bit systems, because the x86 and x64 architectures are incompatible and cannot be contained within the same image. Despite this, having to manage two DVD versions of Vista is a vast improvement over previous versions of Windows, where each edition shipped on a different CD or DVD. Of course, you'll have to use a
preparation process to create these images (see Figure 7.1).
Figure 7.1. The OS Deployment Preparation Process
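The single instance store principle behind the WIM format can be illustrated in a few lines: common files are stored once, keyed by their content, no matter how many editions reference them. This is a conceptual sketch only, not the actual WIM on-disk format:

```python
import hashlib

# Sketch: single-instance storage keyed by content hash.
# Illustrates the SIS principle only; the real WIM format differs.

store = {}      # content hash -> file bytes (each unique file stored once)
editions = {}   # edition name -> list of content hashes it references

def add_file(edition, data):
    digest = hashlib.sha256(data).hexdigest()
    store.setdefault(digest, data)            # common files stored only once
    editions.setdefault(edition, []).append(digest)

common = b"shared system file contents"
add_file("Vista Business", common)
add_file("Vista Ultimate", common)            # duplicate: no new storage used
add_file("Vista Ultimate", b"ultimate-only component")

print(len(store))   # 2 unique files despite 3 references
```

Every edition still "contains" the common file; the image simply stores it one time, which is why one DVD can hold them all.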
The preparation process for OS deployment involves several steps which include:
Defining the Logical OS Configuration
Discovering the Installation Process
Determining the Physical OS Configuration
Determining the OS Deployment Methods
This is the focus of the PC Image Development portion of your deployment project or the
Desktop Deployment Lifecycle (see Figure 7.2). Once again, the focus of these activities is the
PC Team and the bulk of the work will be performed in the Organize phase of the QUOTE
system. But, as in other processes, some activities must be performed in previous phases. For
example, the Logical OS Configuration is usually defined during the Understand phase of the
QUOTE as is the discovery of the installation process. And, as is the case with all other
engineering aspects of the solution you are preparing, the prepared system image and
deployment process you select must go through a final acceptance process before it can move on
to other phases of the QUOTE and be deployed through your network.
These activities and their related duties are discussed in detail throughout this chapter. And, as discussed in Chapter 6, the focus of this chapter is to create a system image that will support both application virtualization and standard software installations, though you should aim to move to application virtualization since it provides a completely new way of managing
applications and maintaining stability within deployed PCs. Virtualization protects the OS at all
times and provides significant cost reductions for managed systems.
Figure 7.2. Moving through Step 7 of the DDL
Defining the Logical OS Configuration
This activity is part of the Understand phase of the QUOTE system.
Designing a computer system for end users is more than just a technical task. After all, end users
will be working with the design IT produces every working day. Remember that the goal of end
users is to produce work related to the organization’s business goals, not to work with or
troubleshoot computer systems. With that in mind, IT needs to consider exactly how it can
configure a PC operating system to provide the fullest and most user-friendly support for the
work they do. To do this, IT must have a sound understanding of the business their organization
is in and must design the structure of the OS to be deployed with this business architecture in
mind; thus, the need for a logical OS configuration and design.
For information on how business architectures drive IT services, see the Architectures article series. This set of seven articles was designed to help organizations and IT professionals understand the different cogs that make up an enterprise architecture.
Chapter 6 introduced the concept of a system stack: the PASS model. This model lays out the
logical organization of deployed components on a PC. While Chapter 6 focused on the application-specific layers of this system stack, this chapter focuses on the components that make
up the system kernel—or the elements that are deployed on every single PC in your network (see
Figure 7.3).
Figure 7.3. The Seven Components of the PASS System Kernel
In addition to the seven layers of the kernel, the PC team responsible for system image creation
and deployment will need to be involved with the physical layer. That is, they need to participate
in the creation of your baseline computer systems for the Vista operating system. The content and the implications of each of the kernel layers, as well as the physical layer, are discussed below.
Physical Layer
The physical layer covers PCs, laptops, tablet PCs, cabling, printers, scanners and all other
physical components of the PC infrastructure. Standards should be set for baseline systems. Each
model you elect to support should conform to these minimal standards. The breadth of this layer
should cover:
Processor and speed.
RAM capacity and minimum RAM requirements.
Drive bay capability, minimum hard disk storage as well as minimum required amount of
free disk space.
Networking components.
Power management (ACPI).
Video capability.
Additional input and output components (CD/DVD readers and writers, keyboard,
mouse, USB and Firewire ports, and so on).
Security components such as Trusted Platform Modules (TPM), biometric or smart card authentication devices, and processor-embedded antiviral capabilities.
Ideally, all the hardware you purchase will be based on industry standards such as those proposed by the Distributed Management Task Force (DMTF). One of the great standards to emerge from the DMTF is the ability to remotely deploy hardware-specific components such as the computer basic input/output system (BIOS) and other firmware. Make
sure you select systems that support this ability and that your provider either delivers its own tool
for firmware management or delivers firmware upgrades that are deployable through systems
management tools.
Operating System Layer
The operating system layer focuses on the components that make the OS run on a system. This includes:
The Windows OS, in this case Vista, but also which editions of Vista you choose to deploy.
Appropriate levels of service pack and/or hot fixes; ideally, these are included in the
system image you deploy.
Manufacturer-certified drivers for all peripheral equipment. Certified drivers are a key
component of your roll-out. Driver signing is configured through Group Policy. If you
can, you should aim to use only certified drivers on all hardware because they are proven
to work with Vista. If you need to include non-certified drivers, make sure you test them thoroughly.
The dynamic link libraries (DLL) required in support of organizational applications. For
example, this could include runtime versions of libraries such as Visual Basic, Microsoft
Access, and any other engine that is required to run your applications on client PCs.
Organization-wide operating system complements such as typefaces or fonts, regional
settings, perhaps language packs if they are required and so on.
Make this layer as complete as possible. For example, if this layer includes the Access runtime,
then you will not need to deploy the full version of Access to people who only make use of
Access applications and do not develop them.
Networking Layer
The networking layer covers all of the components that tie the network together including:
Unique networking protocol: This should be TCP/IPv4 and/or IPv6.
Networking agents: Any agent or driver that is required to let the PC communicate over
local area or wide area networks as well as wireless networks or Bluetooth devices.
LDAP directory structures: For Windows networks, this means a standard Active
Directory structure for all domains, groups and organizational units—this controls what
users “see” when they browse the network.
Unique object naming structure: Every network object—PCs, servers, printers, file
shares—should have a unique and standard naming structure.
Unique scripting approach: Every script should be unified and should be based on a
single scripting language. Ideally, scripts should be digitally signed and should be tied to
Windows’ Software Restriction Policies so that only authorized scripts can be executed in
the network.
Legacy system access: A unique set of components should be used to access legacy
systems such as mainframe computers.
Remote access and virtual private networking (VPN): This layer should include the
components for either remote access or VPN access to the corporate network and any
other external system required by users.
Remote object management: Every component required on PCs for remote management
and administration should be part of this layer since the network is what ties the
distributed system together. With the advent of Active Directory and the control it offers
over PCs, this now includes the Group Policy processing agent which is part of Windows
by default. This should also include agents for PC management such as those provided by
your deployment mechanism or even BIOS or firmware distribution agents.
Basically, the networking layer includes every component that lets you communicate with the PC
and lets the PC communicate with the external world.
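The unique object naming structure called for in this layer is easy to enforce programmatically once the convention is written down. The pattern below (a three-letter site code, an object-type code, and a three-digit sequence number) is purely an invented example of the kind of standard an organization might set, not a prescribed one:

```python
import re

# Sketch: validating a standard network object naming convention.
# The convention (SITE-TYPE-NNN) is an invented example.

NAME_PATTERN = re.compile(r"^[A-Z]{3}-(PC|SRV|PRN)-\d{3}$")

def is_standard_name(name):
    """Check a PC, server, or printer name against the standard."""
    return bool(NAME_PATTERN.match(name))

print(is_standard_name("NYC-PC-042"))    # True
print(is_standard_name("BobsLaptop"))    # False
```

A check like this can run inside a signed deployment script so that non-conforming names are caught before an object ever joins the directory.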
Storage Layer
The storage layer deals with the way users interact with both local and remote storage. It covers:
Physical components: Disk sub-systems, network storage systems, backup technologies.
Storage software: The systems that must administer and manage storage elements. In
some cases, you may need to use third party tools in support of custom storage
requirements. A good example is custom DVD burning software.
File services: File shares, for Windows this also means Distributed File Services (DFS), a
technology that integrates all file shares into a single unified naming structure making it
easier for users to locate data on the network. DFS also includes replication technology to
provide fault-tolerance in a distributed environment.
Indexing & Search services: All data must be indexed and searchable. For Windows
Server 2003, this refers to the Indexing service. For Vista and Longhorn Server, this
refers to the integrated Search service.
Databases: Since data has many forms, the storage layer must support its storage in any form, structured or unstructured.
Transaction services: This layer should control transaction services because these
services validate all data stored within databases.
Data recovery practices and technologies: All data must be protected and recoverable.
The Vista OS includes both Volume Shadow Copies and the Previous Versions client.
This lets users recover their own files. These tools should be integrated into your overall
backup and data protection strategy.
File structure: The file structure of all storage deposits, local (on the PC) and networked,
must use a single unified structure. This structure should support the storage of both data
and applications.
Data replication technologies: Data that is stored locally on a PC to enhance the speed of
data access should be replicated to a central location for backup and recovery purposes.
For Windows Server, this means putting in place an offline storage strategy and relying
on folder redirection to protect data that is local to the PC.
Temporary data storage: The PC disk drive should be used as a cache for temporary data storage in order to speed PC performance. This integrates with the offline caching strategy.
Single unified tree structure: All PC disk drives should have one single unified tree
structure so that users do not have to relearn it. In addition, all PC management tools
should be stored as sub-directories of a single, hidden directory. Users should not have
access to this folder. In addition, you may need to incorporate a special local folder
structure for unprotected data—data that belongs to users only and that the organization
is not responsible for.
The objective of the storage layer is to simplify data access for all users. If all structures are
unified, then it becomes really easy for all users, especially new users, to learn.
Security Layer
The security layer includes a variety of elements, all oriented towards the protection of the
organizational business assets:
Ownership of IT assets: The first part of a security strategy is the outline of ownership
levels within the IT infrastructure. If a person is the owner of a PC, they are responsible
for it in any way they deem fit. But if the organization is the owner of an asset, people
must follow unified rules and guidelines concerning security. This should be the first
tenet of the security policy: every asset belongs to the organization, not individuals.
User profiles: Each user has a personal profile that can be stored centrally if possible.
This central profile can be distributed to any PC used by the person. Ideally, this is
performed through folder redirection Group Policies instead of roaming profiles.
Roaming profiles are an outdated technology that puts a strain on network communications. Folder redirection links with offline caching to better control bandwidth utilization. Profiles are part of the security layer because, by default, each profile is restricted to the person who owns it. Security strategies must support this exclusive access.
Group Policy: Group Policies, both local and remote, allow the unification and
enforcement of security approaches within a distributed environment. By applying
security through policies, IT departments ensure that settings are the same for any given
set of users.
Access rights and privileges: Each individual user should be given specific and
documented access rights and privileges. Wherever possible, access rights and privileges
should be controlled through the use of groups and not through individual accounts.
Centralized access management: Security parameters for the corporate IT system should
be managed centrally through unified interfaces as much as possible. Local security
policies are available but they should be stored centrally and assigned to each PC.
For more information on the use of PKI in organizations of all sizes, see Advanced Public Key Infrastructures.
Non-repudiation technologies: Every user of the system should be clearly identifiable at
all times and through every transaction. For Windows, this means using specific, named accounts (not generic ones) for each user in the organization. For true non-repudiation, a Public Key Infrastructure (PKI) should be implemented for the distribution of personal certificates, which can then be used in emails, documents, and other data signed by the individual.
Internal and external access control: Access to the organizational network should be
controlled at all times. This should include every technology that can assist this control
such as firewall agents, smart cards, biometric recognition devices, and so on. For
Internet access control with Windows, this can mean implementing Microsoft Internet
Security and Acceleration Server (ISA) with Whale technologies.
More information on the use of ISA Server in organizations of all sizes is available from Microsoft.
Confidential storage: Since PCs, especially mobile systems, are not permanently tied to the network, it is important to ensure the confidentiality of any data stored locally. With Windows Vista, this means implementing either the Encrypting File System (EFS) or BitLocker Drive Encryption.
Security is at the heart of every IT strategy. This is why this layer is so important and so all-encompassing. Make sure your security policy is public and properly broadcast to every user
in your organization. If you have outsourced personnel, make sure the security policy is one of
the first elements they are presented with when they come to work in your organization.
Communications Layer
The communications layer deals with elements that support communication in all of its forms:
Instant and deferred communications: These technologies include a variety of electronic
communications tools such as electronic mail, instant messaging, discussion groups, IP
telephony, video communications, and more. Ideally, you will provide a unified
communications strategy to users.
Workflow and collaboration: Workflow and collaboration is an essential form of online
communication today. Your IT infrastructure should support this powerful mechanism. In
Windows, this usually involves either the free Windows SharePoint Services (WSS) or
Microsoft Office SharePoint Server (MOSS).
More information on WSS and MOSS is available from Microsoft.
Shared conferencing: Virtual conferencing is a tool that should be available to users to
reduce physical meeting costs. More and more providers offer this kind of technology. IT
will have to choose whether these sessions are hosted internally or through external Web services.
Internet browser: The browser is an essential part of any communications strategy. It
must be customized to organizational standards and must be run in protected mode as
much as possible.
Broadcast technology: IT, the organization and individual departments must have a
means of communicating vital and timely information to their respective users. This
requires some form of broadcast technology. This can be in the form of Windows Media
Services or through simple means such as network message broadcasts. In Vista, this can
be done through Really Simple Syndication (RSS) feeds delivered to Gadgets displayed
on the desktop.
Group and individual agenda scheduling: Since time management often involves a communication process, the network infrastructure should support agenda scheduling at
both the individual and the group level. This should also include the ability to schedule
objects such as meeting rooms, video projectors, and so on.
Legacy system access: If terminal emulation is a requirement, this should include a single unified terminal emulator application. Ideally, this will be integrated into the selected browser technology.
Communications is essential to make the organization work. After all, users must communicate
with each other and with external resources to perform their work. IT is responsible for ensuring that these communications are facilitated as much as possible and secured at all times.
Common Productivity Tools Layer
Since most office workers need to rely on productivity tools to perform their work, this layer is
an essential part of the core system image you will be deploying to all users. It should include
both common commercial tools as well as any organization-wide internally-developed tools
required by your organization.
Information production tools: Office automation tools such as Microsoft Office should be
available to every user that requires its use. Most commonly, these are deployed to every
single user. But, these tools should be deployed intelligently. If users do not require
specific components of the suite, then these components should not be deployed to them.
For example, most users will require Word, Excel, PowerPoint, and Outlook, but will not necessarily require the other components of the suite. Deploy only what is required.
Generic graphics tools: All users should be able to illustrate concepts and create
drawings. Illustration is a basic communications tool; it must be part of a core set of PC
tools. The office automation suite you deploy includes these basic illustration tools. But,
you may also need to deploy graphics file viewers such as the Microsoft Visio viewer to
let users view illustrations produced by the more professional illustrators in your organization.
This also means implementing server-based technologies that allow the indexing and
sharing of illustrations. In some cases, this will mean integrating custom filters into the Indexing service.
Service packs: Common tools should be regularly updated as service packs and hotfixes
are released for them.
Lexicons and vocabularies: Lexicons and corporate vocabularies should be unified to
ensure the consistency of all output.
Commercial utilities: Common commercial utilities such as Adobe Acrobat Reader, file
compression and search engines should be included here. Vista already includes many of
these, including a new XPS document writer and reader.
Corporate or organizational applications: Organizational tools such as time sheets,
expense reports, or even enterprise resource planning clients should be included in this layer.
The goal of this layer is to ensure that every tool that is required by the entire user base is made
available to them. The objective of the kernel is to provide a fully functional PC system to users.
If this layer is designed properly, it will often satisfy the requirements of a majority of your user base.
Presentation Layer
The presentation layer is the interface layer. For users, it most often means the design of the
organizational desktop. One of the advantages of Microsoft Windows is that its design is entirely
based on one principle: common user access (CUA) guidelines. This is a defined set of rules that lays out how people interact with PCs. CUA defines menu interfaces, keyboard shortcuts, and mouse movements, and sets standards for their method of operation.
For example, users who are familiar with Microsoft Word have, since its inception in the early 1980s, used the same set of keyboard shortcuts. None of these users has had to relearn any major keystroke since the application first became popular, and these keystrokes still work today. Good user
interfaces are designed to make it easy to transfer from one version of an application to another.
Of course, Vista and Microsoft Office 2007 both offer new interfaces, but since they are still
based on the CUA, they are relatively easy to learn.
You can find many articles on the subject of user interface design online.
The advantage of Microsoft’s approach is that there is a standard interaction mode within
Windows’ own presentation layer, but organizations should move beyond the default Windows
desktop and specifically enhance Windows’ defaults with their own desktop design. This custom
design should include:
Desktop design: The desktop should offer a single unified design to all users. All users
should be familiar with this design because it is the default design activated at any new
logon within the network. This design should be divided into specific desktop zones that
contain different information.
Local versus central desktop control: Users should be aware of the elements they are
allowed to modify on the desktop and which items are locked from modifications.
Common shortcuts: Every basic shortcut on the desktop should be the same on all
systems. Users can add their own, but should not be able to remove the most common ones.
Common menus and Quick Launch Areas: Tool access should be standardized and all
menu sub-categories should be reduced as much as possible. When applications install,
they tend to install a series of extraneous shortcuts such as “Visit our Web Site for Free
Offers”. These are not required within the organizational networks.
The focus of this layer is to introduce a common desktop for all users. Some areas of this desktop
are locked so that users in your organization can quickly find what they need on any PC. Others
are open so that users can personalize desktops to some degree. Vista includes a host of features
that are now controllable by standard users and were not available in previous versions of
Windows. This facilitates the design of the PC’s presentation layer.
The key to the construction of a common presentation layer for all systems is the update of the
Default User Profile on the Reference PC before capturing its content into a system image that will be
deployed to all systems.
Non-Kernel Layers
Chapter 6 discussed the non-kernel layers extensively. These layers are created over and above
the kernel and are designed to address specialized as opposed to generalized user needs. Ideally,
you will be able to group the IT roles your specialized users play to deliver non-kernel
applications in groups. Structuring your PC construction or system stack in this manner lets you
reduce system construction costs because of the following benefits:
Specialized applications are grouped into IT roles. This means they will coexist on a
system with a smaller group of applications, including the applications hosted by the
kernel. If you are using traditional application installation methods, then this reduces the
possibility of conflicts.
Since applications are grouped into IT roles, they are easier to track. An IT role automatically incurs the cost of the specialized applications it contains; thus, departments and other groups within your organization can be charged for these applications. Kernel applications usually have organization-wide licenses, so they are also easy to track.
Since applications are grouped into IT roles, they can be delivered as a group and
removed as a group should the primary function or user of a PC change.
And, if you are using application virtualization as suggested in Chapter 6, you can easily
deliver applications in groups, activating them or deactivating them from target PCs
through simple mechanisms such as Active Directory (AD) security group membership.
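Delivering virtualized applications by AD security group membership, as suggested in the last bullet above, amounts to a simple mapping from group to application bundle. The group and application names in this sketch are hypothetical, invented only to illustrate the mechanism:

```python
# Sketch: activating virtual application bundles from AD group membership.
# Group and application names are hypothetical.

ROLE_BUNDLES = {
    "IT-Role-Finance":  ["GL Client", "Expense Reporter"],
    "IT-Role-Engineer": ["CAD Viewer", "Schematic Editor"],
}

def applications_for(user_groups):
    """Return the set of virtual apps a user should have streamed."""
    apps = set()
    for group in user_groups:
        apps.update(ROLE_BUNDLES.get(group, []))
    return apps

print(sorted(applications_for(["IT-Role-Finance", "Domain Users"])))
# ['Expense Reporter', 'GL Client']
```

Change a user's group membership and the corresponding bundle is streamed to (or removed from) their PC; no technician ever touches the machine.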
Overall, the PASS system stack provides a structured approach to PC construction and long-term
management. This layered approach to IT services provides a simpler model to follow than the
traditional ad hoc approach because it relies on familiar imagery. Graphically, the PASS kernel
represents a design that is very similar to the OSI Reference Model. This model begins to
demonstrate how we can construct and present IT systems in an understandable manner.
Identifying Kernel Contents
The makeup of your own kernel will depend entirely on the needs of your organization, but what you need to do at this stage is run through the different functions and operations of your organization to determine how best to support them with a system kernel. Some choices are obvious (everyone needs antivirus, anti-spam, and anti-spyware tools, and almost everyone needs a productivity suite such as Microsoft Office), while others are a bit more complex, mostly because they are particular to your own organization.
Ideally, you would create a data sheet listing each of the seven layers of the PASS kernel and
identify the contents of each. Then, when you’re satisfied with your selections and have received
acceptance and approval on the structure of your new kernel, you can proceed to the next phases
of system image preparation.
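The data sheet suggested above lends itself to a simple structured record: one entry per kernel layer, each listing its planned contents. The sample contents below are placeholders to be replaced by your own team's selections:

```python
# Sketch: a data sheet for the seven PASS kernel layers.
# Sample contents are placeholders, not recommendations.

PASS_KERNEL = {
    "Operating System": ["Vista editions", "service packs", "certified drivers"],
    "Networking":       ["TCP/IP", "management agents", "VPN client"],
    "Storage":          ["offline files", "search/indexing", "file structure"],
    "Security":         ["GPO settings", "antivirus", "EFS/BitLocker"],
    "Communications":   ["e-mail client", "browser", "RSS gadgets"],
    "Common Productivity Tools": ["office suite", "PDF reader", "viewers"],
    "Presentation":     ["desktop design", "common shortcuts", "menus"],
}

assert len(PASS_KERNEL) == 7   # the seven layers of the PASS kernel

for layer, contents in PASS_KERNEL.items():
    print(f"{layer}: {', '.join(contents)}")
```

Keeping the sheet in a machine-readable form like this also makes it easy to diff kernel versions as the image evolves.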
You might also add the physical layer to this data sheet as it will be important to keep the
contents of this layer in line with the expected performance levels your systems will provide
when running Windows Vista. More on the contents of this layer will be discussed as you
prepare the physical solution that matches the logical solution you just created.
Additionally, you will need to keep two other factors in mind as you prepare to construct your
system kernel. The first deals with actual image content: will you be using a ‘thick’ or a ‘thin’
system image? The second mostly concerns global organizations or any organization whose
employees work in different languages. Will you build a single worldwide image or will
you create more than one? The more images you create, the higher your maintenance costs
will be.
PCs should be acquired both centrally and in lots as much as possible. Organizations that allow
departments to purchase their own PCs are only asking for increased headaches when it comes to
PC management. When departments are allowed to purchase their own PCs, there are no standard
configurations or even standard suppliers. Diversity is NOT a good thing when it comes to PC
management within an enterprise. You should always strive to limit the number of suppliers you
purchase from, as well as the types of PCs you purchase from them. Even with
these standards in place, you’ll find that the same manufacturer will ship the same PC with different
components depending on the ship date. But at least, if you implement purchasing standards for PCs,
you will have some measure of control over what you will be managing and you will be able to
simplify your OS deployments.
“Thick” versus “Thin” Images
While it is important that you include all of the proper content into the logical design of your
system kernel, you also have to decide whether your physical system image will be thick or thin.
Traditionally, organizations have relied on thick or fat images. It makes sense in a way.
Fat images contain everything that should be within the system kernel. Since the preferred
deployment method for operating systems is the disk image or system image—a copy of the
contents of a reference computer prepared through custom processes and to be duplicated on any
number of PCs—it makes sense to make this image as complete as possible. Then, through the
use of technologies such as multicasting—the ability to send a stream of data to multiple
endpoints in one single pass—you could deploy this fat image to as many PCs as your staging
infrastructure could support. In this way, one technician could easily deploy up to fifty PCs in
one half day. And, since the PC contained all of the generic content your users required, many of
these PCs—sometimes more than 50 percent—were ready to go as soon as the core image was deployed.
Chapter 6 introduced the concept of a ‘thin’ image. This concept was introduced in conjunction
with the use of application virtualization for software management and distribution. This meant
deploying a smaller OS system image, and then, deploying a generalized application layer
(GAL) to achieve the same result you would obtain with a fat image (see Figure 7.4).
Figure 7.41. The Thin Kernel with the GAL
Consider the following points as you determine whether you should use a fat or a thin image:
If you use a fat image, you can use multicasting technologies to deploy it in a single pass
to multiple endpoints.
If, however, you use a fat image, then, maintaining and patching the content of this image
can be challenging because it contains so many different components that all have their
own update cycles.
If you use a thin image and you are relying on traditional application installations, for
example, through the Windows Installer service, then deploying the GAL will take time.
Office itself is usually on the order of 300 to 400 MB, and the entire layer can easily be
well over ½ GB. Since multicasting is no longer available for this deployment—at least
not through traditional software deployment tools—you then need to perform unicast or
point-to-point deployments to each endpoint, adding considerable time to the installation.
This will impact a technician’s ability to deploy 50 PCs per half day.
If you use a thin image and you are relying on application virtualization with streaming,
then you only need deploy the thin kernel to a PC, deliver the PC to the end user (if you
are using a staging area), and then have applications from the GAL stream to the PC as
soon as the user logs on for the first time. Since applications are streamed, they do not
negatively impact network bandwidth. Since applications are controlled through security
groups in AD, the process can be transparent and initiated only at first logon. You can
also ensure that users are aware of this process through the training program you provide.
If you are using a thin image—whether you are using traditional or virtualized
applications—it will be a lot easier to maintain the system image in the long term because
it will include less content. Then the contents of the GAL can be maintained either
individually or as a group, but since the GAL is made up of individual packages, each can
be maintained on its own.
In the end, it makes sense to work with a thin image as much as possible. In the long term, the
thin image will be much easier to maintain and support. It is however, highly recommended that
you supplement this management strategy with application virtualization and streaming to reduce
application and PC management costs.
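The trade-off between the two approaches comes down to simple bandwidth arithmetic. The sketch below compares a fat image pushed by multicast against a thin image plus a unicast GAL; all sizes, network rates, and endpoint counts are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope comparison of fat-image multicast deployment
# versus thin-image deployment followed by a unicast GAL.
# All sizes and rates below are illustrative assumptions.

def deployment_minutes(image_mb, rate_mbps, endpoints, multicast):
    """Minutes to push image_mb to all endpoints at rate_mbps (megabits/s)."""
    seconds_per_pass = image_mb * 8 / rate_mbps
    # Multicast sends one stream to every endpoint in a single pass;
    # unicast repeats the transfer once per endpoint.
    passes = 1 if multicast else endpoints
    return seconds_per_pass * passes / 60

fat = deployment_minutes(image_mb=8000, rate_mbps=100, endpoints=50, multicast=True)
thin = deployment_minutes(image_mb=4000, rate_mbps=100, endpoints=50, multicast=True)
gal = deployment_minutes(image_mb=600, rate_mbps=100, endpoints=50, multicast=False)

print(f"Fat image, multicast:     {fat:6.1f} min for 50 PCs")
print(f"Thin image + unicast GAL: {thin + gal:6.1f} min for 50 PCs")
```

With these assumed numbers, the unicast GAL dominates the total time, which is why streaming virtualized applications on first logon is the preferred complement to a thin image.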
Using a Single Worldwide Image
Traditionally, system images were limited to one specific system type because of the integration
of the hardware abstraction layer (HAL) within the image. This meant that organizations needed
to create multiple images—in the case of some of our customers, up to 80 images—depending on
the size of their network and the number of different PC configurations they had to deal with.
But, with Windows Vista, system images are now HAL-independent since the HAL can be
injected during the configuration of the image on a PC. Coupled with other Vista features, this
HAL independence has several implications in terms of image creation:
One single image per processor architecture—32- or 64-bit—can now be created.
The reference computer—the computer you use as the source of the image—can now be
a virtual machine since the HAL is no longer a key factor in image creation and
deployment. This reduces the cost of maintaining the reference computer and this also
makes it easier to keep it in a pristine state.
Custom hardware drivers can be injected into system images so that your image can be
compatible with all PC types in your organization.
Vista is now language agnostic. The core OS does not have a language installed. The
language pack for the installation is selected during the installation process. This means
that one single image can now be matched to any of the languages Vista supports.
Creating one single worldwide image per processor architecture is now entirely possible whether
you use the WIM image format or not.
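The impact of HAL and language independence on image maintenance can be expressed as simple arithmetic. The counts below are assumptions for the sake of the example, not figures from the text.

```python
# Illustrative count of system images to maintain, before and after
# Vista's HAL- and language-independence. The HAL and language counts
# are assumed values for this example.
hal_types = 5        # distinct HALs across PC models (assumed)
languages = 8        # languages the organization supports (assumed)
architectures = 2    # 32-bit and 64-bit

legacy_images = hal_types * languages  # one image per HAL per language
vista_images = architectures           # one image per processor architecture

print(f"Legacy images to maintain: {legacy_images}")
print(f"Vista images to maintain:  {vista_images}")
```

Even with modest assumptions, the reduction is dramatic, which is why a single worldwide image per processor architecture becomes practical.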
Discovering the Installation Process
This is the core function of Unit Testing for the members of this PC team.
As you prepare your logical system design and get it approved, you can begin the discovery
process for the Vista installation. These pre-imaging processes are necessary as you need to fully
understand how Vista’s installation works to be able to learn how it will be automated in later
project phases. These pre-imaging processes include:
Identifying the hardware requirements for Vista’s installation
Identifying the installation methods Vista supports
Documenting the installation process you select
Using an installation preparation checklist
Performing post-installation configurations
Then, when you’ve fully understood how Vista’s installation proceeds and what you need to do
to prepare your physical OS kernel, you can proceed to the creation of a reference computer and
the generation of the system image.
Identifying Hardware Requirements
Before you begin testing installation methods, you need to determine which hardware
configurations you will support. This is derived from a series of information sources including
both your network inventories and Microsoft’s recommended minimum hardware requirements
for Vista. Base hardware requirements for Vista were introduced in Chapter 1 and are
reproduced here as a refresher (see Table 7.1). Two sets of requirements are defined: Vista
Capable and Vista Premium PCs. The first allows you to run the base-level Vista editions, and
the second lets you take advantage of all of Vista’s features. Since you’re preparing a
deployment for long-term management, you should really opt for Vista Premium PCs.
Vista Capable PC:
Minimum processor: at least 800 MHz
Minimum memory: 512 MB
Graphics processor: must be DirectX 9 capable

Vista Premium PC:
Minimum processor: 32-bit: 1 GHz x86; 64-bit: 1 GHz x64
Minimum memory: 1 GB
Graphics processor: support for DirectX 9 with a WDDM driver, 128 MB of graphics memory*, Pixel Shader 2.0 and 32 bits per pixel
DVD-ROM drive
Audio output
Internet access

* If the graphics processing unit (GPU) shares system memory, then no additional memory is required. If it uses
dedicated memory, at least 128 MB is required.
Table 7.1: Vista system requirements.
Information on compatible systems can be found at
Next, you need to identify which editions of Vista you will support. For business, the Business,
Enterprise, and Ultimate editions are available. Each of these business editions requires a Vista
Premium PC to run and includes:
Vista Business Edition: This is the base edition for small to medium business. It
includes the Aero 3D interface, tablet support, collaboration tools, advanced full disk
backup, networking and remote desktop features.
Vista Enterprise Edition: This edition is only available to organizations that have either
software assurance or enterprise agreements with Microsoft. It adds full drive encryption,
Virtual PC Express, the subsystem for UNIX and full multilingual support.
Vista Ultimate Edition: This edition is for small businesses or others that want to access
the full gamut of new Vista features but do not necessarily want software assurance or an
enterprise agreement. It includes all of the features in the Enterprise Edition, but also
entertainment tools such as Photo Gallery, Movie Maker and Media Center. Though you
might not want these programs on business computers, this edition might be the only
choice for any organization that wants full system protection and does not want to enter
into a long-term software agreement with Microsoft.
Remember that through its single-instance store capability, a Vista image may contain more than
one edition. If you need to rely on several editions, you can still store them in one single image.
To learn more about each of the different Vista Editions, go to
Also, since you will be relying on virtualization software to create and manage the reference PC,
you should configure this VM according to the capabilities outlined in Chapter 3. In fact, the
virtual machines for Unit testing should already be available to your team.
Identifying Installation Methods
Now that you have identified your logical system configuration and you have selected the base
PC configuration, you can proceed to the examination of Vista’s installation methods. The first
and the easiest method to examine is the interactive installation. No matter what you choose to
do in the end to automate your installation, you will definitely need to perform at least one and
most probably several interactive installations so that you can fully understand what happens
during this process.
The objective of this process is the documentation of each step of the installation and
specifically, the build of your entire kernel.
Begin by choosing the edition of Windows Vista to install.
Perform the initial installation to discover the process.
Document every configuration requirement and each modification you need to perform.
Document the finalization steps to create your thin system kernel.
Document each step to create and configure the GAL.
First, you need to understand how the basic interactive installation works. With Windows Vista,
Microsoft simplified the installation process to ensure there were no more blockers for the
installation to complete. In previous versions of Windows, there were several instances during
the installation where you had to provide input: CD keys, time zone, keyboard layout, regional
settings, administrative password, networking configuration, and more. Now, Microsoft has
modified the installation to collect all information at the very beginning of the process and then
have you finalize the configuration once setup is complete. This means you can start multiple
interactive installations and do something else while they run, returning to them once they have
completed. And, since machines are set up in a locked-down state, you don’t even need to worry
about the setups being vulnerable when you’re not there.
Unlike previous versions, the Vista installation is completely graphical. The installation now
boots into the Windows Preinstallation Environment (Windows PE) if there is no operating
system on the PC. And if there is a previous operating system and the upgrade is supported, the
installation will run in graphical mode anyway. The first splash screen will ask three questions:
Language to install
Time and currency format
Keyboard or input method
These settings determine in which language the installation will proceed as well as which
language pack will be installed. Vista uses a language-agnostic core installation that is then
converted into whichever language you select during installation.
Next, you are presented with the Install now screen. Note that this screen also includes two
additional options in the lower left corner:
What to know before installing Windows
Repair your computer
The first provides you with up-to-date information on system requirements and procedures for
installing Vista. It is always a good idea to review this information, especially during the
installation discovery process. The second is used to repair systems that may be damaged. It lets
you choose an existing system partition, and then when you click Next, you are presented with a
series of choices for system repair (see Figure 7.6) including:
Startup Repair: to fix problems related to the startup of Windows.
System Restore: to restore Windows to an earlier point in time.
Windows Complete PC Restore: to restore Windows from a backup image.
Windows Memory Diagnostic Tool: to verify the system’s memory.
Command Prompt: to launch a Windows PE session with an open command prompt.
Figure 7.42. System Repair Options
But, since your goal is to discover how the installation works, click on Install now. This moves
you to the next screen where you need to input the product key to use for the installation. The
version you install will be determined by the product key you enter. Note that this screen also
includes an automatic activation check box. If you are only exploring the installation and may
repeat it several times, you will probably not want to turn this option on.
The next screen is where you accept the license agreement, click Next and you are then moved to
the installation type selection screen. Two main options are available:
Upgrade: select this option if there is already a supported operating system on your PC.
Custom (advanced): select this option if you are installing a new PC or if the PC you are
upgrading is not running a supported operating system for upgrade.
The upgrade process in Vista actually works! That’s because every installation of Vista uses an
image to perform the installation. Remember IBS (image-based setup)? Because of IBS, the upgrade actually removes all
previous operating system components, protects all data and application settings as well as all
installed applications, and then installs Vista by decompressing the installation image and
customizing it to the hardware you are installing it to. You should investigate this process if you are
running supported systems for upgrade.
If you are installing on a bare metal system or a bare metal virtual machine, then select the
second option. This will lead you to a screen where you can select and create the disk partition
that will run the OS. This screen also gives you access to partitioning and formatting tools as
well as giving you the ability to load new drivers. Examine each option and then proceed with
the installation.
BitLocker Partitions: If you intend to run BitLocker and encrypt the system partition, then you need
to create two partitions. The first should be at least 2 GB in size, should be formatted as NTFS and
should be marked as active. This will be the boot partition once BitLocker is activated. The second
should also be NTFS and should normally use the remaining space on the system disk.
WinRE Partitions: You can also install a Windows recovery environment (WinRE) on the PC. In our
opinion, these partitions should be reserved for critical systems and shouldn’t be installed on each
system because they are used to repair installations (remember that you can get to this with an
installation DVD). After all, once you’ll be done with your system kernel, you’ll be able to deploy it to
any PC within half an hour. It usually takes longer than that to repair a PC. But, if you want to use
WinRE anyway, consider these options:
WinRE Partitions without BitLocker: If you intend to install WinRE on the PC, then you’ll need the
same kind of partition as you would for a BitLocker system.
WinRE Partitions with BitLocker: If you install both BitLocker and WinRE, then you need to install
WinRE into the OS partition to protect the system from tampering through WinRE. Do this once the
OS and BitLocker are installed.
Once the partition is created or selected, Windows begins the installation process. From this
point on, there is nothing to do until the installation is complete. The installation will copy the
Windows installation files, expand them, install features, install updates and then complete the
installation. During this process, Windows will install and reboot into an open session once the
installation is finished.
Installing x64 Operating Systems
If you’re installing an x64 version of Vista, you will run through the same process as for x86 versions
with minor differences. For example, the x64 OS will support a non-destructive upgrade from x86
OSes—that is, replacing the existing OS and maintaining data on the system—but the end result will
retain all data as well as application folders except that applications will be non-functional and will
need to be reinstalled. The best way to perform this type of installation is to actually move the data off
the system if there is data to protect, then reformat the OS partition and install a fresh version of the
x64 OS.
The installation process includes five (5) steps. The system will restart several times during the process:
Copy Windows files
Expand files
Install features
Install updates
Complete the installation
There is no time-to-finish display anymore; instead, the installer displays the percentage
complete for each step. An installation can take between 20 and 40 minutes to complete,
depending on the system configuration.
Once the installation is complete, Vista displays a Set Up Windows dialog box requiring a user
name and password. Remember that Vista automatically disables the default administrator
account. Therefore a core account is required. Though this account will be a member of the local
administrators’ group, it has a different security identifier (SID) than the default administrator’s
account and will therefore be subject to User Account Control (UAC).
Type the username, then the password, confirm the password and type a password hint if you
need it. Click Next when ready. The next screen requests a computer name and description. Type
these values in and click Next. The third setup screen lets you set up the update configuration.
Choose the configuration that suits your environment. Now, you need to indicate the time zone,
time and date. Click Next when ready.
The last screen lets you identify which network you are connected to. Each choice sets different
parameters in the Windows Firewall. Choose Work and then click Start on the Thank You!
screen. Vista completes its configuration, checks your computer’s performance, and displays the
logon prompt.
When you log on, Vista displays the Welcome Center (see Figure 7.6). This center displays a
summary of the results of your installation. The first part displays a summary of the computer’s
configuration. The second lists tasks to complete in order to finish the installation. And the third
displays offers from Microsoft. For example, this is where you would download and install
Windows Live Messenger.
The most important part of this screen is the second part because it deals with activities required
to complete the system’s configuration. Usually thirteen items are listed in this section, but of
course, since this is a summary, not all items are displayed. To display all items, you need to
click on Show all 13 items. This expands the section and lets you see each item in the list.
This Welcome Center is oriented mostly towards the home user, but newcomers to Vista
installation and configuration may find it useful to peruse the choices it offers in
order to become more familiar with Vista itself.
Figure 7.43. The Vista Welcome Center
Using Installation Documentation
Now that you are familiar with the installation process, you can begin the configuration process
for the remainder of the kernel layers. This is an ideal time to implement installation
documentation and begin your installation documentation process. Focus on three activities:
Installation Preparation
OS Installation
Post-Installation Configuration
Each requires specific documentation.
The Installation Preparation Checklist
In this type of migration project, you want to make sure that everyone performs the same
operations all the time. The best way to do this is to prepare specific checklists for operators to
follow. For example, you should use a recommended checklist for installation preparation (see
Figure 7.7). It is possible to upgrade from Windows 2000 or XP to Vista. This checklist takes
this consideration into account.
Figure 7.44: The Installation Preparation Checklist
The installation checklist takes a few items into account:
For the upgrade path, remember that there is no upgrade from a 32-bit to a 64-bit OS.
Vista Editions are selected based on whether the PC will remain in the office (in house).
If this is the case, you can use Vista Business because it does not include BitLocker.
If the PC is mobile, then you may need a Vista edition that includes BitLocker. If you
have volume licensing or an enterprise agreement (EA), then you can use Vista
Enterprise. If not, then you must use Vista Ultimate.
If you upgrade and retain the existing partition, you cannot install BitLocker as it requires
an extra partition.
If you are performing a new installation, then you can partition the system for either
BitLocker or WinRE.
These considerations will help you prepare for your own installations.
This checklist is not a checklist for PC replacement or deployment. It is intended only for PC
technicians working to discover the installation process.
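The edition-selection logic in the checklist above can be sketched as a small decision function. The edition names are real; the function and its parameters are hypothetical helpers for illustration only.

```python
# A sketch of the edition-selection logic from the installation
# preparation checklist: in-house PCs do not need BitLocker, so Vista
# Business suffices; mobile PCs need BitLocker, which requires Vista
# Enterprise (with volume licensing or an EA) or Vista Ultimate.

def select_vista_edition(mobile_pc: bool, has_volume_licensing: bool) -> str:
    """Pick a business edition based on whether BitLocker is needed."""
    if not mobile_pc:
        # PC stays in the office: BitLocker not required.
        return "Vista Business"
    # Mobile PC: needs an edition that includes BitLocker.
    if has_volume_licensing:
        return "Vista Enterprise"
    return "Vista Ultimate"

print(select_vista_edition(mobile_pc=False, has_volume_licensing=False))  # Vista Business
print(select_vista_edition(mobile_pc=True, has_volume_licensing=True))    # Vista Enterprise
print(select_vista_edition(mobile_pc=True, has_volume_licensing=False))   # Vista Ultimate
```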
Documenting PC Installations
In addition, you’ll need to document the PC installation itself. The best way to do this is to use a
standard PC Data Sheet. This sheet should include vital information such as:
System Name
System Role
System Location
Hardware Specifications
BIOS Version
Disk Partition(s)
Kernel Version (including Operating System Versions, Service Packs and Hot Fixes)
Installed components
Any additional comments or information you feel is required
Ideally, this data sheet will be in electronic format so that data can be captured as the installation
proceeds. It can also be adapted to database format. In support of the PC installation, you might
also create a Kernel Data Sheet outlining the contents of the PC kernel for this particular version
of the kernel. Each sheet should provide detailed, up-to-date information to technicians.
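An electronic PC Data Sheet could be structured along the lines of the sketch below. The field names follow the list above; the data structure itself and the sample values are illustrative assumptions, not a prescribed format.

```python
# A minimal electronic PC Data Sheet, sketched as a Python dataclass.
# The structure and sample values are assumptions for illustration.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class PCDataSheet:
    system_name: str
    system_role: str
    system_location: str
    hardware_specs: str
    bios_version: str
    disk_partitions: list
    kernel_version: str            # OS version, service packs, hot fixes
    installed_components: list = field(default_factory=list)
    comments: str = ""             # any additional information required

sheet = PCDataSheet(
    system_name="PC-0001", system_role="Knowledge worker",
    system_location="Head office", hardware_specs="1 GHz x86, 1 GB RAM",
    bios_version="A07", disk_partitions=["C: 80 GB NTFS"],
    kernel_version="Vista Business RTM",
)
# Serializing to JSON makes the sheet easy to capture into a database.
print(json.dumps(asdict(sheet), indent=2))
```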
Post-Installation Processes
After the installation is complete, you’ll want to perform a post-installation customization and
verification (see Figure 7.8). Use a post-installation checklist to customize the system and to
perform a quality assurance verification of the installation. This checklist also lets you complete
the installation and preparation for the system kernel.
Figure 7.45: The Post-Installation Checklist
The activities outlined in this checklist are detailed further in this chapter.
Supported Installation Methods
Windows Vista offers four installation methods:
Interactive Installation
Unattended with an Answer File
System Imaging with the System Preparation Tool
Remote OS Installation through Windows Deployment Services
You may end up using each of these as you proceed through the preparation of your installation
process. For example, you need to use the interactive installation to perform the discovery
process. Then, the unattended installation will be required in two instances: the upgrade, if you
choose to use it, and the build of the reference computer. You’ll use system imaging for both
computer replacements and for bare metal—computers with no OS installed—installations. And,
if you need to support remote deployment of the OS, you may use Windows Deployment
Services. In fact, this is one of the decision points that must arise from this discovery process:
choosing which installation methods you will decide to support (see Figure 7.9).
Figure 7.46. Selecting an Installation Method
Selecting an Installation Process
Based on these different options, you can select which installation scenarios you will support:
Upgrade which aims to replace the existing operating system without damaging data or
installed applications.
Refresh which is a wipe and load approach where the system disk is wiped out,
reformatted and a new operating system is installed.
Replace where a brand new or bare metal system with no existing operating system is
prepared and a new OS is installed.
Each scenario relies on a specific mass installation method. Few people have supported or
opted for the upgrade path in the past. This may once again be the case because of the limitations
of the upgrade path. For example, if you want to enable BitLocker drive encryption, then you
cannot use the upgrade because you need to repartition the disk drive. Of course, you could use a
disk partitioning tool, but it is easier to just perform a new installation. Another instance where
the upgrade does not work is upgrading from a 32-bit OS to a 64-bit OS. In the end, you will
decide if the upgrade process is useful for your organization. If not, then you will focus on the
other two scenarios. In every case, you need to create a physical OS configuration before you can proceed.
Determining the Physical OS Configuration
At this stage, you should be moving into the Functional testing level.
Now that you’ve discovered the installation process and you have documented the steps you
need to perform it, you can move to the creation of your physical OS configuration. This focuses
on the creation of the reference computer. Keep in mind that when you do this, you need to make
sure undoable disks are enabled in the virtual machine (VM) you will be using. This way, you
can be sure to capture only settings that you are absolutely happy with. If you make a mistake,
reset the VM and start the step over. This activity focuses on the post-installation configuration of the reference computer.
You’ll also need to use other tools to finalize the preparation process for the reference PC. Be
sure to document all configuration modifications you retain. This will be important for when you
need to reproduce the reference PC later on. Your documentation must also be specific; that is,
it must detail each step you need to perform to complete the core system’s configuration.
Applying the Post-Installation Checklist
This process should include all the steps in the Post-Installation Checklist, but special attention
should be paid to the following:
Setting a strong password for the default administrator account
Configuring networking
Enabling updates
Downloading and installing updates
Enabling the Remote Desktop
Configuring the Windows Firewall
Configuring the Windows Vista interface
Updating default user settings
Perform the tasks as they appear in the Post-Installation Checklist.
Begin with the default administrator account password. Launch the Computer Management
console using Run as Administrator. To do so, type Computer Management in the Search bar
of the Start Menu, then right-click on Computer Management to select Run as
Administrator. Approve the UAC prompt. Then, move to Local Users and Groups, then Users
and right-click on the Administrator account in the details pane and choose Set Password.
Assign a complex password. This password should include at least eight (8) characters and
include a mix of numbers, uppercase and lowercase letters, as well as special characters.
If you have difficulty remembering passwords, you can replace letters with special characters.
For example, replace the “a” with “@”, replace the “o” with “¤” and so on. This makes
passwords more difficult to crack. Even so, if a hacker or an attacker has access to the system,
they can use password cracking tools to display the text of the password. If this is an issue, you
can use a combination of Alt plus a four digit Unicode key code to enter characters into your
password (for example, Alt 0149). The advantage of this method is that these characters often
display as a blank square or rectangle when shown as text by password-cracking software.
If you’re really concerned about password security, then either use more than 14 characters—
password-cracking tools stop at 14—or implement a two-factor authentication system for IT administrators.
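The complexity rules described above can be sketched as a small check. The function name is a hypothetical helper, not part of any Windows API.

```python
# A sketch of the password rules above: at least eight characters,
# mixing digits, uppercase letters, lowercase letters, and special
# (non-alphanumeric) characters. Illustrative helper only.

def is_complex_password(pw: str) -> bool:
    return (len(pw) >= 8
            and any(c.isdigit() for c in pw)
            and any(c.isupper() for c in pw)
            and any(c.islower() for c in pw)
            and any(not c.isalnum() for c in pw))

print(is_complex_password("P@ssw0rd"))   # True: all four character classes
print(is_complex_password("password"))   # False: no digit, upper, or special
```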
It might also be a good idea to rename this account. Make sure you keep it disabled.
The default administrator account in Windows Vista is a special account that is not subject to User
Account Control. Other accounts that are members of the local administrative group are, on the other
hand, subject to UAC. If you develop a policy around UAC use on your network, leaving the default
administrator account disabled will ensure that every user—administrative or standard—will be
subject to the rules of your UAC policy. In addition, you want to avoid the use of generic accounts
such as this one because even though its activity is tracked, as with all other accounts, you cannot
‘know’ who is using it since it is not a named account. Every administrator and technician in your
network should use named accounts for high privileged operations.
Next, use the Control Panel to access the Network and Sharing Center. Click on View status
and then Properties for the connection you want to configure. By default, Vista installs and
enables two versions of the TCP/IP protocol: IPv4 and IPv6. IPv4 is set to receive an automatic
address from a server running the Dynamic Host Configuration Protocol (DHCP). IPv6 is set to a
private address by default. If you decide to use IPv6 in your network, you’ll need to change this
configuration.
familiar to most administrators. Use your corporate guidelines to assign settings to both IPv4 and
IPv6. Close the Network Connections window when done.
If you plan to use IPv6, and you should because Vista networks can rely on this new communications
protocol, you will need to obtain an IPv6 address scope either from your Internet provider or for your
own use. IPv6 is enabled and configured by default in all installations of Windows Vista. But this
configuration uses a link-local address with the default fe80::/64 address prefix. Link-local addresses
are only used to reach neighboring nodes and are not registered in DNS. More useful IPv6
connectivity must be configured either manually or through DHCPv6, which won't be available until
Windows Longhorn Server ships later this year. IPv6 address scopes can be obtained from the Regional
Internet Registries (RIRs). The five RIRs are:
— American Registry for Internet Numbers (ARIN) for North America
— RIPE Network Coordination Centre (RIPE NCC) for Europe, the Middle East and Central Asia
— Asia-Pacific Network Information Centre (APNIC) for Asia and the Pacific region
— Latin American and Caribbean Internet Address Registry (LACNIC) for Latin America and the
Caribbean region
— African Network Information Centre (AfriNIC) for Africa
Once you obtain your scope, you can use it to configure your servers. Configuration of IPv6 settings
is very similar to that of IPv4. You need to configure the following settings:
— IPv6 unicast address
— Subnet prefix length—by default, this is 64
— Default gateway—again in IPv6 unicast format
— Preferred and alternate DNS servers—again unicast addresses
You can use the advanced settings to add either multiple IPv6 addresses or additional DNS servers.
There are no WINS servers for IPv6 since it does not use NetBIOS names.
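If you prefer to script these settings rather than use the dialog box, netsh can assign them from the command line. The following is a sketch only: the connection name and the 2001:db8:: addresses (the documentation prefix) are placeholders for your own scope values.

```bat
rem Assign a static IPv6 address, default route and DNS server (example values)
netsh interface ipv6 add address "Local Area Connection" 2001:db8:0:1::10
netsh interface ipv6 add route ::/0 "Local Area Connection" 2001:db8:0:1::1
netsh interface ipv6 add dnsserver "Local Area Connection" 2001:db8:0:1::53
```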
You should also enable updates according to the settings in your organization. Select Control
Panel, Security, Windows Update, Change Settings to modify the update settings. Make sure
you have enabled updates for more products and then set the update value according to your
organizational standards. It is a good time to apply all available updates to this system.
Reference Computer: The networking properties for the reference computer might best be left at
default values unless you have specific values you can use for default settings. Remember that
whatever is configured in the reference computer will be retained in the system image you create
from it.
Use Control Panel, System and Maintenance, System, Remote Settings to enable the Remote
Desktop option. Click on the Remote tab and select the appropriate setting. The most secure
setting uses Network Level Authentication, but requires connections from systems running the
Remote Desktop Connections 6.0 client update. Make sure this update has been deployed in your
network before you deploy Vista systems. Note also that Remote Assistance is enabled by default.
Use Control Panel, Security, Windows Firewall to set the default Firewall behavior. The
Firewall is an element of Windows that can be controlled through Group Policy. Make sure this
configuration is in the base configurations for all Vista systems.
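If you script your base configuration, the same Firewall defaults can be set with netsh. A minimal sketch follows, assuming the standard profiles and the default Remote Desktop port; adjust the rule to your own policy.

```bat
rem Turn the Firewall on for all profiles and allow inbound Remote Desktop
netsh advfirewall set allprofiles state on
netsh advfirewall firewall add rule name="Remote Desktop" dir=in protocol=TCP localport=3389 action=allow
```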
Now configure the Vista interface. Turn on the Windows Sidebar and apply the gadgets you
need. Then, customize the Quick Launch Area. You want to do this to ensure that every user in
your organization will have the same or at least a very similar experience whenever they access a
PC. Begin by doubling the size of the Taskbar. Do so by moving the mouse pointer to the top of
the Taskbar beside the Windows Start button until the pointer transforms into an up-down arrow.
Click and drag upwards to expand the Taskbar.
Using internal Really Simple Syndication (RSS) feeds
You can rely on the RSS feeds gadget to deliver internal communications to end users
through the Vista Sidebar. All you
need is a Web page that provides RSS feed information, a subscription to this feed on the Vista PC
and the appropriate feed displayed in the gadget. Use one gadget for each feed
you want to use. Departments, the organization as a whole and IT can all use these feeds to display
key information to the user base without clogging the desktop. This is a simple and easy way to
communicate with users without having to implement complex communications systems.
The Taskbar includes the running programs area as well as the Quick Launch Area. Each area is preceded
by a dotted drag handle at its very left. Move the pointer on top of the handle
for the running programs list until it turns into a left-right arrow. Click and drag the running programs
bar to the bottom left of the Start button. Now you should have running programs displayed
below the Quick Launch Area. Right-click on the taskbar and select Lock the Taskbar.
Next, click on the Start button, then on All Programs and run through the default programs to
add the ones users will use the most to the Quick Launch Area. To add each program shortcut,
right-click on it and select Add to Quick Launch.
Add the following items:
Under Accessories:
o Calculator
o Command Prompt
o Notepad
o Windows Explorer
Under Accessories | System Tools:
o Character Map
o System Information
The resulting Taskbar should include most of the tools anyone will need to use to interact with
PCs in your organization. The Quick Launch Area should be updated each time a new common
tool is added to the system. Order the tools in the order of most used from left to right (see
Figure 7.10). Your interface is set.
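When you script the reference build, the same shortcuts can also be copied into the Quick Launch folder directly. This is a sketch only: it assumes the Accessories shortcuts live under the all-users Start Menu folder shown, which may vary on your builds.

```bat
rem Copy common Accessories shortcuts into the current user's Quick Launch folder
set QL=%APPDATA%\Microsoft\Internet Explorer\Quick Launch
set ACC=%ProgramData%\Microsoft\Windows\Start Menu\Programs\Accessories
copy "%ACC%\Calculator.lnk" "%QL%"
copy "%ACC%\Notepad.lnk" "%QL%"
copy "%ACC%\Command Prompt.lnk" "%QL%"
```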
Figure 7.10: A well-managed PC Taskbar
Update the Default User Profile
Whenever a new user logs on to a system for the first time, Windows generates a new profile for
them by copying the contents of the default user profile. If you customize your environment and
then update the default user profile from your customized environment, you can ensure that each
time a new profile is generated it includes a core set of tools and interface enhancements. In an
organization that wants to ensure that all of their users rely on a standard computing
environment, updating the default user profile is absolutely essential.
Return to the Computer Management console to create a second administrator account. This
account may or may not be required according to your organization’s security policy, but it is
required at least temporarily to update the default user profile. Expand Local Users and Groups,
then right-click on Users to select New User. Name the account BUAdmin—or use your
organizational standard—give it a full name of Backup Administrator, add a description, give
it a strong password, and select the Password never expires option. Click Create, then Close.
Next, right-click on BUAdmin and select Properties. Move to the Member Of tab, select Add,
once the dialog box opens, click Advanced, then Find Now, double-click Administrators and
OK. Click OK to close the dialog box. Your account is ready.
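The same account can also be created from an elevated command prompt. The sketch below assumes the BUAdmin name used above; the wmic step is one common way to set the password-never-expires flag.

```bat
rem Create the backup administrator account and add it to the local Administrators group
net user BUAdmin /add /fullname:"Backup Administrator" /comment:"Used to update the default user profile"
net localgroup Administrators BUAdmin /add
rem Assign a strong password interactively: net user BUAdmin *
rem Set "Password never expires"
wmic useraccount where "name='BUAdmin'" set PasswordExpires=FALSE
```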
Vista does not allow you to copy an open user profile to another because many of the open
features are volatile and are therefore stored in RAM and not persisted until the user logs off. So
to update your default user, you must use the backup administrative account created earlier. Use
the following procedure:
77. Log out.
78. Log into your backup administrator account. Vista creates a new profile for this account based on the default user profile.
79. Open Windows Explorer and set Folder Options to view hidden files.
80. Use Control Panel, System and Maintenance, System, Advanced system settings and the
Advanced tab to click on the Settings button under User Profiles. Select the profile you
customized and click the Copy to button.
81. Use the Browse button to navigate to the Users folder on the C: drive to find the Default
profile. Click OK.
82. Click Yes to replace existing files.
83. Close all dialog boxes and log off of the backup administrator account.
84. Log back into your original account.
85. Launch the Control Panel and select System and Maintenance, System, Advanced
System Settings and click on the Settings button under User Profiles.
86. Select the backup administrator’s profile and delete it. Confirm deletion.
87. Close all dialog boxes and go to the Start button and use the right arrow beside the lock to
select Switch Users.
88. Log into the backup administrator’s account. This will test the default user profile. Note
that you now have a copy of the customized profile. Log off BUAdmin.
You’re done. You still need to complete the discovery process, especially with reference to the
GAL components. Remember, each time you add a new component to the system kernel, you
should make sure the Quick Launch Area and the default user profile are both updated.
Keep the reference PC in a workgroup. This will make it easier to manage. Also, keep the
BUAdmin account on the reference computer so that you can use it in the future to update the
default user profile. Remember to remove it from the copy of the reference PC you will use to
create your system image.
If you use application virtualization for the components of the GAL, you can make sure the proper
shortcuts appear in the Quick Launch Area by including them in the application capture you
perform. This way, they will automatically update the user’s profile when the virtualized
application is streamed to their system.
You should repeat the process to create a new reference computer. If you’ve documented each of
these steps, you should be able to repeat this process without flaw. This reference computer will
be the model you use for your massive installation method.
Windows Activation: Do not activate the installation as you are performing discovery. You have
thirty (30) days to do so, which is ample time to perform the discovery. You can, however, activate
the reference computer since it will be a machine you keep on a permanent basis.
Determining the OS Deployment Method
This activity is part of the Organize phase of the QUOTE system.
Now that you fully understand how the interactive installation process occurs and you know how
to build your reference PC, you can proceed to the next step: determining how you will be
performing massive deployments of this OS. As mentioned earlier, there are several tools and
processes you can use. Microsoft has delivered a series of tools in support of the image-based
setup Vista supports. Other vendors have updated theirs to work with Vista. Both Microsoft and
third-party tools support one interactive and three automated deployment strategies.
The first is the interactive installation. In organizations of all sizes, this installation
process is really only used for discovery purposes. After all, no one wants to have their
technicians go from PC to PC installing Vista interactively. Even with a very well
designed instruction sheet, this method would give you mixed results at best. Only
very small shops would use this method. But even then, they will not have a method for
system rebuilds since everything is manual and interactive. In the long run, it is always
best to use an automated method.
The second is the unattended installation based on a response file. With Vista, Microsoft
has reduced the number of response files to one; well two really, but they are the same
file. Response files are now in XML format and are named Unattend.XML. Using a
response file automatically feeds input to Vista during its installation, letting you go on
with other tasks as the system installs itself. The second response file is the
AutoUnattend.XML. This file uses the same content as the original file, but its name
allows it to be automatically applied when it is provided either through a USB memory
stick or through a floppy drive during installation. Just insert the Vista DVD, insert the
memory stick or the floppy and boot the machine.
o Unattended installations are usually valid for organizations with very few
computers to deploy.
o Larger organizations will rely on this unattended installation method to reproduce
their reference computer when they need to rebuild it.
o Unattended installations are also useful for in-place upgrades. Even today, few
organizations choose to perform these and take advantage of OS migrations to
‘clean’ house and reset each and every one of their PCs.
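As a hedged illustration of what such a response file looks like, here is a minimal AutoUnattend.XML fragment. The computer name and product key are placeholders, and a production file would carry many more components; Windows SIM generates the full structure for you.

```xml
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="specialize">
    <!-- Shell-Setup component for an x86 installation; attributes per the standard schema -->
    <component name="Microsoft-Windows-Shell-Setup"
               processorArchitecture="x86"
               publicKeyToken="31bf3856ad364e35"
               language="neutral" versionScope="nonSxS">
      <ComputerName>VISTA-PC01</ComputerName>
      <ProductKey>AAAAA-BBBBB-CCCCC-DDDDD-EEEEE</ProductKey>
    </component>
  </settings>
</unattend>
```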
The third method is the system image. This system image relies on the capture of a copy
of an installed system. This copy is depersonalized first through the use of a special
command: Sysprep.exe. This tool is now part and parcel of every Vista system. Sysprep
is located in the %SYSTEMROOT%\SYSTEM32\SYSPREP folder. This tool is
designed to prepare a configured system for redistribution. Then, you capture the system
image with another tool to reproduce it on other computers.
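Depersonalizing the reference system is typically a single command. A sketch using the standard Vista Sysprep switches:

```bat
rem Generalize the system, reseal it for OOBE and shut down so the image can be captured
cd /d %SYSTEMROOT%\System32\Sysprep
sysprep /generalize /oobe /shutdown
```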
The fourth method is the remote OS installation. With Microsoft, this means using
Windows Deployment Services (WDS), an update which is available for Windows Server
2003 through Service Pack 2. WDS replaces the previous remote installation services
delivered with Windows 2000 and supports the remote deployment of Windows XP,
Windows Server 2003 and Windows Vista. WDS requires a complex support architecture
including Active Directory, DHCP, and DNS to operate. Special procedures must be used
to put it in place. If you already use or have selected third party OS deployment tools,
you will not use WDS, but rather rely on these more comprehensive tools for this type of
deployment. One thing is certain: the remote OS deployment method is by far the most
popular since it relies on the creation of a system image first, and then, provides you with
a tool to remotely deliver this image to any PC endpoint whether you rely on WDS or a
true systems management tool.
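If you do rely on WDS, initializing the server and adding an install image are both handled by wdsutil. The folder path and image file below are assumptions for illustration.

```bat
rem Initialize WDS with a remote installation folder, then add a Vista install image
wdsutil /initialize-server /reminst:D:\RemoteInstall
wdsutil /add-image /imagefile:D:\Images\install.wim /imagetype:install
```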
The selection of the appropriate deployment method is a key activity for the PC team responsible
for system imaging.
Microsoft has also released other tools that will make the preparation of your system images
easier to work with. They include:
The Windows System Image Manager (Windows SIM) which is used to build and
customize automated installation answer files.
WinPE, which is a 32-bit operating system with a limited set of services that can only run
for a maximum of 24 hours at a time, though it can be rebooted any number of times.
WinPE is aimed at preinstallation and deployment of Windows Vista.
ImageX which is a command-line tool that supports the creation and manipulation of
system images for installation and deployment. ImageX is limited in that it is command-line only and does not support image multicasting. Deployments using ImageX have to
rely on unicast image transfers, which can have a negative impact on bandwidth
utilization. If you already have or have selected a third-party tool for OS deployment such
as Ghost Solution Suite or Altiris Deployment Solution, it will have its own system
image creation tool and you will not use ImageX.
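For reference, capturing and applying an image with ImageX looks like the following sketch; drive letters, paths and the image description are assumptions.

```bat
rem Capture the reference system's C: drive into a WIM file, then apply it to a target
imagex /capture C: D:\Images\vista-ref.wim "Vista reference image" /compress fast
imagex /apply D:\Images\vista-ref.wim 1 C:
```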
These tools are contained in the Windows Automated Installation Kit (AIK), which can be
obtained from the Microsoft download site. Make sure you
obtain the latest version of this kit before you begin to prepare for installation automation.
The Windows AIK is also contained within the Microsoft Business Desktop Deployment (BDD) toolkit,
which can likewise be obtained from the Microsoft Web site.
Preparation and Prerequisites
In order to build and test OS image deployments, you’ll need a series of other items:
o The reference PC you’ve prepared. This should be running inside a virtual
machine since you’ll only need it to create the automation system image.
o The Vista installation media you want to create images for. Remember that
installed versions are controlled by product key so one installation DVD should
be enough.
o The Windows AIK which you should have already downloaded.
o A management system where you will be installing the Windows AIK and
perhaps your third party OS deployment tool. You’ll use this system to create the
system image. This should also be a virtual machine or several virtual machines
depending on the prerequisites of the tools you are using.
o Your build environment should also be able to simulate a deployment situation.
This means a small network. This is one reason why this testing will be performed
in the Integration and Staging testing levels because they provide you with a
network foundation.
o A shared folder on a server to store system images.
o Your physical machine will need to include a DVD writer and of course, you’ll
need blank DVDs to store the new image you create.
The Windows AIK is a CD/DVD image. If you are working on a physical machine, transform
the image into a CD and then load it into the CD drive. If you are working with a virtual
machine, simply link the ISO file to the CD/DVD drive of the machine and launch the VM.
The Windows AIK is in .IMG format. If you do not have software that understands CD images in this
format, rename the file to an .ISO. It is the same format.
Now you can begin the automation of your Vista setups. Use the following order:
89. Build your reference computer.
90. Copy the files that make up the reference PC virtual machine. Use the copy to create the
system image. Remember to remove the backup administrator account before you
generate the system image.
91. Create one or more system images.
92. Create an automated response file. Include the product key, system name, custom device
drivers, the domain join and language packs in the response file.
93. Deploy the images.
There are hundreds of settings and features you can modify during setup through the response
file. Ideally, you will keep these to a minimum and capture information from your reference
computer as much as possible. This is the benefit of a system image—it includes all of the
configuration parameters you set up on the reference PC. The response file should only include
items aimed at customizing your standard image and personalizing it as you apply it to each PC.
Once your base image is ready, you’ll need to roll it out through a management platform. Most
business systems shipped over the last 5 years adhere to the Wired for Management (WfM)
specification. These machines support powerful management technologies like Wake-On-LAN
(WOL) and Preboot Execution Environment (PXE). Together, these technologies offer support
for centrally managing the deployment and configuration of networked computers.
Imaging tools that support PXE and WOL can remotely power on machines and rollout images
after hours while users are away. Some solutions can actually group disparate tasks together and
execute them as a single workflow—they can schedule a personality backup, image rollout and
personality restoration to occur in a single, sequenced and automated event. Users arrive the
following day and resume work as usual with little impact on their productivity.
This is the focus of the image capture and deployment testing process: identifying each and
every step that is required to install a system image on a variety of PC configurations and
reproducing this process on an ongoing basis. Make sure the tools you use support this approach.
You should also rely on multicasting for image deployment. Ideally, the image deployment tool
you use will support the use of multicasting proxies. Proxies let you designate a target system in
a remote LAN as the source for the multicast broadcast to that LAN. This avoids the need for
modifying router and switch configurations so that they can support multicasting over the WAN.
Build a Smart Solution
System imaging is the key to the entire OS migration project. It is the most important aspect of
this project as it lays the foundation of your future network. Building a system and then
capturing it for reproduction must be done carefully and with great precision. This is why the
preparation of your logical solution is such an important step in the process. This lets you
properly design the systems you will deploy.
Using application virtualization along with a thin PASS system kernel will let you build smart
stability within your network. Each PC will have a pristine copy of the system kernel and you
will have the ability to maintain this pristine state on each and every system in your network.
Your management solution will maintain the OS kernel, patching and updating it as necessary.
Your application virtualization and streaming solution will manage the state of each application
on the deployed OS. Your technicians will have fewer issues to deal with—if something doesn’t
work right, just redeploy the system image. Since applications are streamed and only cached
locally, they will automatically reinstall themselves. And, with the proper personality protection
approach, your users’ data will also be protected. It’s that simple.
Make sure that you thoroughly test each aspect of your PC image creation and deployment
process. Using the highest quality standards will ensure you make this the best deployment ever.
Chapter 8: Working with Personality Captures
Personality protection is probably the most important aspect of any operating system (OS)
deployment project; not, of course, from the technician’s point of view, but rather, from the end
user’s point of view. After all, while we, as technical IT professionals, view a computer as a
system we need to build and maintain, end users view it as a necessary evil they need to work
with to perform their job functions. And personalities—the collection of data, favorites, desktop
settings, application customizations and more—are the most important aspect of any OS
migration for them.
That’s because users perceive their computer’s personality as part of their workspace and many
of them will spend considerable time optimizing it for the work they do. If computer
personalities are not preserved in the course of a migration project, users lose productivity as
they take time to either relearn or recreate the aspects of their old computer personality that they
depended on to get their work done. For many users, losing printer settings, email configurations,
Microsoft Word templates or even the placement of shortcuts on their desktop can compromise
their comfort level and effectiveness with a new machine and/or operating system. This
disorientation decreases productivity and increases the helpdesk workload because it leads to
unnecessary end-user support calls and training.
Therefore, preserving the personal computing environment of each user is a critical step in
mitigating the productivity impact of an OS migration and controlling its costs. As with the other
engineering processes that are required to complete an OS migration project, this process
includes several steps (see Figure 8.1).
Begin with defining the administrative policy you will use to provide this protection. For
this, you’ll need to fully understand the differences between profile structures in
Windows XP and their counterparts in Windows Vista since these profiles are not
compatible with one another; profiles are the OS components that store personalities.
Then, you’ll need to determine which mitigation strategies you intend to use as well as
how you will protect profiles once they are captured.
Next, you need to perform an analysis of your network. You’ve already performed
inventories to determine your hardware and software readiness status. Now, you need
another inventory to determine how many profiles you need to protect and how much
central storage space you need if you choose to copy the profiles you decide to protect to
a network share.
Then, once this is established, you can begin to prepare your protection mechanisms. Of
course, this will be closely related to your tool selection as the tool you selected for this
purpose will provide many of the features you’ll require for this protection.
Once the mechanisms are in place, you can move on to testing. Make sure you test this
process fully and make sure you involve end users in the acceptance testing to guarantee
that the level of protection you provide will meet and perhaps exceed their expectations.
When everything is as you expect and you have approval from end users, you can sign
off on the personality protection strategy and begin its integration into the overall
migration process.
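The inventory step described above can begin with a simple batch sketch run on each source PC. The path below assumes Vista-style profile folders; on Windows XP source machines, substitute C:\Documents and Settings, and review the output manually.

```bat
@echo off
rem List each profile folder along with its file counts and approximate size
for /d %%P in ("C:\Users\*") do (
  echo Profile: %%~nxP
  dir /s /-c "%%P" 2>nul | findstr /c:"File(s)"
)
```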
These steps form the basis of this chapter. Once again, they are mostly performed by the PC
team with support from the server team if network protection and backup has been included in
the protection policy.
Figure 8.1. The Personality Protection Process
Define your Profile Policy
This is part of the Understand phase of the QUOTE System (see Chapter 2 for more information).
In order to define your protection policy, you first need to understand personalities or profiles
and how they work, and then you’ll want to categorize the profiles in your network to help
determine which you will protect (see Table 8.1). A profile is generated the first time a user logs
onto a system. Basically, the first time the user logs on, the contents of the default user profile
are copied and personalized for the user. The system automatically resets the security parameters
of these contents so that the user has exclusive access to them. This is one reason why it is so
important to properly manage the contents of the default user profile when you create your
system image as discussed in Chapter 7. By creating one single default user view, you
standardize how end users access and interact with the computer systems you deploy to them. Of
course, they will make this profile evolve, but you can at least ensure that key elements remain
common in all profiles.
Users can log on through a domain, relying on an Active Directory (AD) authentication, or
through the local system, relying on the local security accounts manager (SAM) database that
can be found on every Windows system. Each first-time logon will create a profile. This means
that technicians who log onto a system for repair purposes will automatically generate a profile
as will any other user logging on for other purposes.
Most local logons are volatile because few organizations run their network without a central
authentication database such as AD provides. This means that in most cases, you can discard any
local profiles from your protection strategy. That is, unless you have custom systems that operate
in a workgroup only. Many organizations use such systems to monitor and maintain systems
connected to a demilitarized zone (DMZ) such as those found in a perimeter network. You can
evaluate the opportunity to protect such local profiles versus having administrators and
technicians recreate them when these machines are upgraded. If your default user profile is
properly created, the value of protecting any local profile will be minimal.
Therefore, your protection policy should concentrate on profiles that are generated through
domain logins. If your network offers a single PC to a principal user, then you’ll have it
relatively easy since all you will need to do is identify the principal user’s profile on a system,
protect it and discard all other profiles. If your network uses shared computers, then it will be
slightly more difficult. Domain login profiles can also be protected by other means such as
roaming profiles—profiles that are stored on the network and downloaded to each PC as the user
logs on—or folder redirection policies—Group Policy objects (GPO) that move the contents of
local folders found in the profile to network locations. Your analysis will need to take these
factors into account if you want a protection policy that will meet the needs of every user in your network.
Some organizations, especially those that must operate 24/7, will also have generic accounts in
their network. Generic accounts are shared accounts that allow operators to share a machine
without needing to log on or off at the end of their shift. There are two types of generic
production accounts. The first deals with 24/7 operations and is required as mentioned above to
run a machine without the need to log on or off. Operators share the password to this account and
can thus change shifts without closing the session.
The second type is for environments that have a high personnel turnover. A good example of this
is on naval vessels. Since officers and staff change almost every time the ship docks and crews
are rotated, some organizations choose to use generic role-based accounts instead of named
accounts. For example, a first officer would use the First Officer account instead of one named
after him or herself. In this situation, the password may or may not be shared. It depends on the
amount of effort the administrative staff is willing to undertake at each crew change. They can
either reset the passwords of each generic account or not, as they wish. Obviously, a policy that
would require either named accounts that could be renamed each time a crew member changes or
password changes at each crew change would be much more secure than one where passwords
are shared.
With Windows Vista, organizations that are currently using generic production accounts can mostly
do away with them because Vista supports Fast User Switching even in a domain environment.
Organizations that rely on shared accounts to run operations and machinery can now use named
accounts because their operators can each be logged on personally while others are using the same
machine. There is no downtime for the machine as shifts change and users each have their own
account and a private password. This creates a much more secure environment.
If your organization decides to move from generic to named accounts during its
migration, then your protection policy will have to support the capture of a shared profile and its
restoration to multiple users, something few organizations face today.
Profile Type — Description
Default User — Used to generate new profiles at first log on.
Network Default User — Stored on the network in the \\domaincontrollername\Netlogon share. Used to generate new profiles on domain-joined computers.
Mandatory Profile — Read-only profile that is forced on users. This profile is not saved at log off.
Super Mandatory Profile — Like the mandatory profile except that it will force log off if the profile cannot be loaded from the network.
Roaming Profile — Profile that is stored on the network and loaded at log on. Changes are saved at log off.
Local Logon — Profile generated when logging on to the SAM. Usually volatile profiles.
Network Logon — Profile generated when logging on to Active Directory. Usually permanent profiles.
Principal User Profile — Profile of the main user of a machine.
Generic Account (24/7 operations) — Account used in 24/7 operations to ensure no log off is required at shift change.
Generic Account (high turnover) — Account used in situations where there is high personnel turnover. Accounts are named by position and shared among users who hold this position.
Table 8.1: Different Profile Types.
Choosing the Profiles to Protect
You need to have the means to determine which profiles to protect. The best way to do this is to
use a flowchart that identifies which profiles to protect and under which circumstances (see
Figure 8.2). We’ve already discussed some of the reasons why you would or would not protect a
given personality.
Figure 8.2. Using a Decision Chart to Determine the Personality Protection Policy
This means your decision flow should include the following guidelines:
Local profiles are only protected if they are situated on special machines that are not part
of the domain and cannot be replaced by a custom default profile.
Domain profiles are protected if they have been used recently and on a constant basis.
Only profiles with actual content are protected (based on profile size).
Only active profiles are protected.
If your machines have principal users, then their domain profiles are protected at all times.
There are, of course, other considerations, but these decisions should form the crux of your
personality protection policy.
Differences between Windows XP and Vista
Profiles are designed to store several different types of content:
The first is user data which ranges from user-generated documents to desktop preferences
to Internet favorites and more; basically, anything a user generates when they are
working with their PC system.
The second is user application data. This includes custom dictionaries, custom toolbars in
applications, custom settings for one particular application and so on.
The third is application data that applies to any user of a system. In XP, this system-wide
data was stored in the All Users profile, a profile that was shared as its name suggests by
all users. In Vista, the All Users profile disappears and becomes the Public profile.
Each data type has its own particularities and each requires a slightly different treatment. In
addition, profile data is stored not only in the file system, but also within the Windows registry
under the HKEY_USERS structure. The profile of the user who is currently logged on is stored
under HKEY_CURRENT_USER, a volatile registry structure that is designed to hold profile
contents in memory. Finally, profiles contain volatile information within a special profile
file, NTUSER.DAT, which is located in the root of the user’s profile folder. This file contains the
in-memory information related to a user’s login session. Because of this, it is difficult to protect
when in use. All of these elements must be captured to protect a given personality.
Many organizations use roaming profiles—profiles stored on a network drive and which are
downloaded to the PC when the user logs on. This strategy is great for users who roam from one PC
to another in a network, but roaming profiles are an outdated technology because, depending on the
profile size, it will take some time for the profile to be downloaded to a system and, should the
network not be available, then the profile won’t be either. In addition, profile changes are only saved
at log off. If the user has not logged off from one machine but wants to use another, they will get an
older copy of their profile.
XP roaming profiles are not compatible with Vista. You can, however, mix the use of
roaming profiles with folder redirection as discussed further in this chapter. This reduces the contents
of the roaming profile and increases roaming profile performance while giving users access to their
profile anywhere in the organization. If, however, you choose to use a different strategy, you should
turn roaming profiles off before the migration. If you do not use roaming profiles, then use the more
traditional profile protection strategy discussed here.
The first place to start to make sure all information is protected is to take an in-depth look at
the differences in profile structures between Vista and XP.
There are seven major differences between Windows XP and Windows Vista profile structures:
1. Profile location is the first. In Windows NT, profiles were stored in the WINNT folder,
allowing users read and write privileges to this most important folder. With Windows
2000 and Windows XP, profiles were moved to the Documents and Settings folder
structure. Of course, with Vista, this has been changed once again (third time lucky?) to
become the Users folder structure (see Figure 8.3).
Figure 8.50. The Evolution of the Profile Location
2. The User folder structure is the second. Several elements of the User folder
structure have changed and have been relocated for different reasons, one of the
main ones being the need to provide better support for roaming users through a
better-structured profile folder.
3. Microsoft has now begun using junction points or redirection points, which appear as
proper folders but are there only for compatibility purposes (see Figure 8.4). Several
different folders both within and without the User folder structure act as junction points
and provide support for the operation of applications that are not Vista aware. Two types
of junction points are included in Vista: per user and system junction points.
Figure 8.51. Displaying Junction Points in the Command Prompt
More on junction points can be found in the Vista Application Compatibility Cookbook at
4. The Application Data folder structure has also been modified. It now comprises a
series of junction points along with actual folders (see Figure 8.5).
Figure 8.52. Linking the XP and the Vista Profile Structures—New Vista Folders are shown in boxes
5. A new ProgramData folder has been introduced. ProgramData works along with Program
Files to store system-wide application information or what is also normally found under
the HKEY_LOCAL_MACHINE registry hive with legacy applications.
6. As mentioned before, the All Users profile has now been renamed Public. In addition, the
Default User’s profile is now renamed Default.
7. Folder redirection, or the ability to control the location of a folder within the user
profile structure through Group Policy objects (GPO), has been updated significantly and
now provides real protection for all user data.
Two new subfolders, Local and LocalLow, are used to store application data that remains local at all
times. Data in these folders is either specific to the machine itself or too large to be stored on the
network. For example, Outlook personal storage files (.PST) are located under the Local folder.
This is why it is important to supplement any folder redirection or roaming profile strategy
with a full profile backup. This way, there is no possibility of data loss during machine upgrades.
Other changes are included deeper in the profile structure. For example, the Start Menu is now
buried in the AppData\Roaming folder structure under the Microsoft, then the Windows folders
(see Figure 8.6). This is where you will find other corresponding contents such as printer
settings, recent documents, network connections and so on. These latter changes to the Vista
profile structure were specifically performed to support the seventh element in the previous list,
folder redirection.
Figure 8.53. The Contents of Vista’s AppData\Roaming Profile Structure
Each of these has an impact on the migration of personality content. For one thing, the tool you
use for migration must support the conversion of a profile from Windows XP and perhaps earlier
to Windows Vista.
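To illustrate, part of that conversion amounts to a path translation table. The sketch below shows a few of the well-known folder relocations; a real migration tool also translates registry data and application settings, so treat this as a conceptual illustration only.

```python
# A sketch of the XP-to-Vista profile path translation a migration tool must
# perform. Only a few well-known folder relocations are shown; real tools
# handle many more, plus registry and application settings.

XP_TO_VISTA = {
    r"Documents and Settings\%USER%\My Documents":
        r"Users\%USER%\Documents",
    r"Documents and Settings\%USER%\Local Settings\Application Data":
        r"Users\%USER%\AppData\Local",
    r"Documents and Settings\%USER%\Application Data":
        r"Users\%USER%\AppData\Roaming",
    r"Documents and Settings\All Users":
        r"Users\Public",
}

def translate(xp_path, user):
    """Translate a captured XP profile path to its Vista location."""
    for old, new in XP_TO_VISTA.items():
        old_p = old.replace("%USER%", user)
        if xp_path.startswith(old_p):
            return new.replace("%USER%", user) + xp_path[len(old_p):]
    return xp_path  # no translation known; keep the path as-is
```

Note that the Local Settings entry is checked before the shorter Application Data entry so that the more specific path wins.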
Completing the Personality Protection Policy
Now that you are more familiar with the structure of the Vista user profile, you can complete
your personality protection policy. Two items are left to complete:
Identify the migration strategy
Define the profile backup policy
The first will depend on the tools you selected. There are several tools on the market for
personality protection. Ideally, you selected a comprehensive tool that supports every
part of the OS migration process and, especially, one that has been upgraded to support Vista
migrations. As far as profile migrations are concerned, your tool must support the translation of
XP or earlier profiles to Windows Vista. In addition it should include several other features:
Store the profile(s) to migrate either locally or remotely on a network share.
Compress all profile contents to reduce space requirements, network traffic and migration times.
Support a profile collection analysis to help determine how complicated your personality
protection strategy will be.
Support the application rationalization process through the migration of application
settings from one version of an application to another.
Support the creation of profile migration templates so that you can premap migration
settings and therefore automate profile migration processes.
Support the integration of the profile collection and restoration process to the overall OS
migration process.
The degree of support your tool will offer for each of these elements will help determine the
contents of your personality protection policy.
Profiles can be stored either locally or remotely. If you decide that you do not want to reformat
disk drives during the OS migration, then you can store profiles locally. Remember that the Vista
image-based setup (IBS) is non-destructive and supports proper upgrades.
Tools such as the Symantec Ghost Solution Suite and Altiris Deployment Solution support local
profile storage even if a disk wipe is performed. Storing the profile locally will definitely save
you time since profiles tend to be large and take time to transfer over the network. But, storing
the profiles locally can be a risk to the project because there is no backup for them. If something
goes wrong, you won’t be able to go back and recover information from a locally stored profile.
If, on the other hand, you choose to store profiles on a network share, you will then be able to
back them up and protect them for the long term. Remember, as far as an end user is concerned,
the protection of the personality of their computer is the most important aspect of the project for them.
During one of our migration projects, the project team lost a single profile out of more than 2,500.
Unfortunately, it happened to belong to a developer who was in charge of the single, most important
application the organization was running. It turned out this user had never backed up anything from
his computer before and had never stored a single piece of code on the network. More than five
years of work was lost.
Your project will hopefully never face such a disastrous situation, but keep in mind that a protection
policy for the data in every profile is critical to the perceived success of the project. Imagine losing
any profile from upper management—these are the kinds of pitfalls that are really easy to avoid
through proper preparation and structure.
If profiles are to be carried across the network, then you need to have them compressed as much
as possible. This is one more reason why they should be cleared of all useless data beforehand.
One good way to ensure this is done is through a pre-migration analysis of both profile contents
and approximate profile sizing. Use the flowchart in Figure 8.2 to help determine what you are
looking for in this analysis. Your objective is to make the best possible use of bandwidth during
the protection.
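As an illustration, the sizing portion of this analysis can be approximated with a short script that walks each profile folder and flags the oversized ones. The 2 GB limit is a hypothetical threshold, not a recommendation from the text.

```python
# A hedged sketch of a pre-migration profile sizing analysis: total each
# profile folder's size and flag profiles that exceed a (hypothetical)
# threshold so users can be asked to clean them up before capture.
import os

def profile_size_bytes(root):
    """Total size of all files under a profile folder."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):
                total += os.path.getsize(path)
    return total

def flag_oversized(profile_roots, limit_bytes=2 * 1024**3):
    """Return the profiles larger than limit_bytes (default 2 GB)."""
    return [p for p in profile_roots if profile_size_bytes(p) > limit_bytes]
```

Feeding the flagged list back into your communication plan tells you exactly which users to ask for a cleanup.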
Be sure to communicate with users before the migration to have them clean up their profile data as
much as possible. There is no need for your project to transport outdated data and favorites across
the network. Integrate this into your communication strategy and provide this as one of the actions
users are responsible for before your project reaches them.
Most profile protection tools will support this analysis through some form of pre-migration
inventory. For example, Microsoft’s command-line User State Migration Tool (USMT) supports
this type of pre-analysis, but since its findings are not stored in a database, you will need to
record them in some manner to get a global picture of the status of your network. It does provide
results in Extensible Markup Language (XML), so it may not be too complicated to program some
form of automated collection tool, but why burden your migration project with this additional
task if you don’t need to?
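If you do choose to build such a collection tool, it could be as simple as the following sketch. The XML layout used here is hypothetical; USMT's actual report schema differs, so the element names would need to be adjusted to match real output.

```python
# A sketch of rolling up per-machine scan results into a global picture.
# The <machine name=...><profile size=.../></machine> layout is a
# hypothetical example, not USMT's actual schema.
import xml.etree.ElementTree as ET

def summarize(xml_text):
    """Aggregate per-machine profile counts and sizes from a scan report."""
    root = ET.fromstring(xml_text)
    summary = {}
    for machine in root.iter("machine"):
        profiles = machine.findall("profile")
        total = sum(int(p.get("size", "0")) for p in profiles)
        summary[machine.get("name")] = {"profiles": len(profiles),
                                        "bytes": total}
    return summary
```

Collecting these summaries into one table gives the global network status the text describes.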
For more information on the USMT, read Migrating to Windows Vista Through the User State
Migration Tool at
Commercial tools provide graphical interfaces for profile management. See Ghost Solution Suite
at or Altiris
Deployment Solution at
In addition, the personality protection team must work closely with the application preparation
team to determine which applications will be transferred over to the new OS. You don’t want to
capture settings for applications which will not be carried over during the migration. The
application preparation team will also be performing an application rationalization to reduce the
level of effort required to prepare applications for the migration and help streamline operations
once the migration is complete.
Part of this effort will involve the removal of multiple versions of the same application when
possible. Therefore, your migration tool must support the migration of application
customizations from one version to another. For example, an organization with multiple versions
of Microsoft Office should standardize on one single version. This means that the personality
protection tool should be able to capture settings from older versions of Office and convert them
to the version you have elected to standardize on (see Figure 8.7).
Figure 8.54. Symantec Ghost Solution Suite can capture settings from multiple versions of Microsoft Office
If your organization is like most, you will rely on at least one in-house or custom-developed
application. Make sure your migration tool can support the capture of data and settings for any
proprietary programs and not just standard programs like Microsoft Office.
You should also be able to create personality capture templates that can be optimized for
different departments or groups. Users in marketing appreciate having settings for Microsoft
Office transferred to their new machines while developers will want to keep their Visual Studio
.NET configuration. Advanced tools support the ability to create a template to capture the unique
settings of each user type (see Figure 8.8).
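Conceptually, such templates are just named lists of settings to capture. The sketch below illustrates the idea; the group names and setting identifiers are invented for the example and do not correspond to any particular tool.

```python
# A sketch of per-group personality capture templates. Each template lists
# the application settings to collect for a given user type; the names here
# are illustrative placeholders.

TEMPLATES = {
    "marketing":  ["office.word", "office.powerpoint", "ie.favorites"],
    "developers": ["visualstudio.settings", "office.outlook", "ie.favorites"],
    "default":    ["office.word", "ie.favorites"],
}

def template_for(group):
    """Pick the capture template for a user's group, falling back to default."""
    return TEMPLATES.get(group, TEMPLATES["default"])
```

Premapping templates this way is what lets the capture run unattended for each department.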
Figure 8.55. Altiris Deployment Solution supports the creation of personality protection templates on a per-group basis
The personality protection tool must seamlessly integrate into the overall OS migration process.
Command-line tools such as USMT let you script the process and integrate the script into both
the pre-migration and post-migration stages. More sophisticated tools let you remotely and
silently capture personalities from end user machines through a central administrator console.
In sensitive environments, you might also want to protect profile contents through the
assignment of a password to the profile package. In addition, since the ability to capture a profile
requires high-level administrative rights on the machine, you will need to make sure the profile
capture and restoration occurs in the proper security context (see Figure 8.9). Using a tool that
lets you integrate both levels of security through a central console will certainly facilitate the
overall process.
Figure 8.56. Altiris Deployment Solution supports the protection of profile contents as well as the assignment
of a security context for profile capture and restoration
Some users may require more personal interaction in which a project member sits down with the
user and walks them through a wizard-based, highly customized personality capture. An easy-to-understand, wizard-based personality capture is also helpful with remote users who may need to
do the job on their own.
Any user whose settings have been saved and properly restored will remember the project team
with fondness and will not fear future migrations. Keep this in mind at all times when you are
working on this aspect of the project.
You should also consider how the personality data is stored and then reapplied. Ideally, all
personality settings should be rolled into a single compressed executable package. This helps
with remote users who may have to restore their own settings. Just double-clicking the
executable on the new machine is all that it takes to re-apply the personality data.
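Commercial tools produce a single self-extracting executable; the same packaging idea can be sketched with a plain compressed archive, as below. This illustrates the concept only, not how any specific product implements it.

```python
# A minimal sketch of packaging a captured profile into one compressed
# archive and re-applying it on a new machine. A plain ZIP stands in for
# the self-extracting executable a commercial tool would build.
import os
import zipfile

def package_profile(profile_root, archive_path):
    """Compress everything under profile_root into one archive."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for dirpath, _dirs, files in os.walk(profile_root):
            for name in files:
                full = os.path.join(dirpath, name)
                zf.write(full, os.path.relpath(full, profile_root))

def restore_profile(archive_path, target_root):
    """Re-apply the packaged personality onto a new machine."""
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(target_root)
```

Because the whole personality travels as one file, it can be burned to a CD, DVD or USB flash drive for remote users, as the text suggests.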
Finally, your protection strategy should include the ability to store personality data across
different removable media—such as a CD, DVD or USB flash drive—which will also be useful
for remote users who have to both capture and restore personalities on their own. An undo option
to remove the personality in case it is applied to the wrong machine is also a very helpful feature.
If you decide that your personality capture efforts will be limited to only official data types, be sure to
communicate it to end users. Give them plenty of time to archive anything that won’t be migrated by
IT so they can access it later if need be. Many organizations tend to capture a complete image of the
legacy machines before performing a migration. This provides a failsafe method of retrieving user
data not included as part of a migration for the users who did not have time to back up their own data
or simply did not know how. This also provides a great solution for rolling back to a previous state if
an in-place hardware refresh was not completed successfully.
Determine your Backup Policy
Personality protection is, as mentioned earlier, the most important aspect of the migration for end
users. This is why most organizations opt to take the time to transfer profiles over the network
and create a centralized backup of each personality they protect. It is true that moving profiles
over the network will take time and will slow down the overall migration process, but the goodwill it will buy you from users will make it worthwhile. You should aim to back up all profiles if
you can. Some organizations even choose to back up the entire PC image just in case the profile
protection misses something.
This calls for a discussion of the actual migration process itself. There are several approaches:
The in-place migration
PC rotations
Hardware attrition
Each involves its own benefits and potential issues.
If you decide to perform in-place migrations, then you will leave PCs where they are and
perform the system replacement, whether it is an actual upgrade or a complete OS replacement
including a system disk wipe. To do so, you will need high-speed network connections and your
technicians will either have to work at night or during weekends to minimize the disruption to
the business. High-speed links and proper network storage should not be a problem in large
central offices but they may be an issue in remote offices. Many organizations use portable
servers for this purpose. These servers are designed to support many processes such as OS
system image deployment, protection for personality captures, software deployment and so on.
Performing upgrades—replacing the OS without wiping the system disk—is often the very easiest
migration process to choose. Upgrades are non-destructive so no profile migration is required. Of
course, few organizations choose to use upgrades because they do not give them an opportunity to
‘clean up’ their systems as they perform the OS migration.
PC rotations use a different process. When organizations perform OS migrations, they often have
to replace a significant portion of their PC systems. This new batch of computers is often much
higher-powered than the other systems in the network, especially if you choose systems that will
support all of Vista’s features. With this new pool of computers, the organization can set up a
rotation system that removes existing systems from the network and replaces them with systems
from the replacement pool. This often involves a cascade where each user will receive a PC that
is better than the one they were using before, once again using the systems from the original
purchase to begin the cascade. Since many users will be receiving used systems, the migration
team also implements a cleaning process to make sure that each user has the impression of
getting a nice new and clean computer.
The point of this discussion is that when rotations are performed, each computer is taken away
from the network and moved to a staging area for the OS replacement. This means that the
mechanisms you need to protect personalities need only be in the staging area, reducing the cost
of performing the migration and reducing the impact on production networks. You may still need
to have a solution for remote sites, but this can often be nothing more than a mobile staging area.
Do not confuse the staging area with the staging test level. While some organizations merge the two
roles together, many choose to maintain the staging area—the area where new computers are
prepared—within the production domain, while the staging test level—the last level of testing before
moving into the production network—uses its own, independent domain.
If you choose to rely on hardware attrition for the migration, then you should use a staging area
for system preparation. Of course, you may choose to perform the personality capture from user
systems while they are still in the network. Ideally, you will try to perform all migration
activities in a staging center because this approach has the least impact on productivity.
Whichever method you use to protect profiles in central and remote offices, once the profile is on
the network, you’ll want to back it up to permanent storage. Many organizations choose to
protect the personality by moving it to a network drive, backing it up to permanent storage and then
leaving it on the network share for a period of about two weeks. This lets you define your backup policy as follows:
Profiles are stored on network shares for a period of two weeks.
Profiles are backed up to permanent storage.
Any missing contents from the restored system can be retrieved from the profile stored on
the network share during the first two week period.
After two weeks, the profile contents are removed from the network share.
Missing contents that are discovered after the two week period must be restored from
permanent backups.
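This retention policy reduces to a simple date check, sketched below using the two-week window from the list above.

```python
# A sketch of the two-week retention rule: within the window, missing
# content is retrieved from the network share; afterwards, from the
# permanent backup.
from datetime import date, timedelta

RETENTION = timedelta(days=14)

def restore_source(captured_on, today):
    """Where to look for a protected profile on a given day."""
    if today - captured_on <= RETENTION:
        return "network-share"
    return "permanent-backup"
```

Automating this check keeps the help desk answer consistent when users report missing content after their migration.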
This strategy provides a compromise between a personality protection guarantee to end users and
the amount of storage space required for this aspect of the migration process during the actual
migration. Make sure you communicate this strategy to your end users before the migration begins.
Prepare your Protection Mechanisms
This is part of the Organize phase of the QUOTE System.
Now that you have determined your overall protection policy, you can move on to the
preparation of all of the engineering mechanisms required to support it. Here, the PC team will
have to work with their server counterparts to complete these engineering activities.
Perhaps the first place to start is to describe to the server team how you intend to proceed and
indicate to them the protection flowchart you intend to use. They can then provide the best
assistance based on your protection choices. Basically, this is what you should detail to them (see
Figure 8.10):
You’ll begin with a profile analysis to help determine storage sizing and network bandwidth requirements.
If you are using roaming profiles in Windows XP and you don’t want to
implement joint folder redirection and roaming profiles, then you’ll need to turn them off
prior to the migration of each system.
Then, you’ll create templates to meet specific protection goals. Work with the
application preparation team to identify the applications you need to protect.
You’ll use the templates to protect each system’s core profile (see Figure 8.2).
If you’ve decided to back up the entire system, then this will be done after the
personality has been protected.
You’ll store the system backup offline, backup the profile and keep it available
for 2 weeks.
Profiles will be available from offline backup after 2 weeks.
Profiles will be restored after the OS migration has been completed. This should
occur after applications have been reloaded on the system.
Your profile protection guarantee will include the following:
a. Profile implementations can be rolled back in the case of problems.
b. Machine OS upgrades can be rolled back if serious issues arise after the migration.
c. If generic profiles were in use, they will be migrated to individual profiles. In this
case, the same generic profile is migrated to multiple individual profiles. Users
will rely on Fast User Switching to provide continuous operations.
A 20-working-day support guarantee will be provided to each user after the migration.
Storage resources will be released once the migration has been successful.

The personality
protection team members will be responsible for automating both the capture and restoration of
the profiles. If you have the right tools, this should involve nothing more than providing the
appropriate answers and settings to wizard-based interfaces.
If you decide to work with application virtualization and stream them to each desktop, you will need to
integrate the streaming process with the personality restoration. More on this topic will be covered in
chapter 9 as we discuss how you bring every element of the migration together.
Figure 8.57. The Personality Protection Policy Flowchart
Be sure to perform end-to-end testing of each phase of the personality protection process and be
sure to have end users sign off on acceptance testing for the solution you propose. This will avoid any
nasty surprises during the project. Then, once all testing is done, you can sign off on your personality
protection strategy.
Long-term Personality Protection Mechanisms
As you can see, preparing a personality protection strategy involves a lot of different steps. And,
you might think the personality protection team’s efforts stop when this is achieved, but what do
you do once the OS migration is complete and the new OS is in production? Microsoft has put in
a lot of effort to make personality protection an everyday process. That’s right; the entire User
folder structure has been revamped in order to provide support for one thing: folder redirection
and general profile protection.
Folder redirection is an aspect of Group Policy that will automatically redirect user folders to
central locations. When coupled with offline folder caching, folder redirection can provide the
best of all worlds for users whether they are roaming or not: folders are located on network
shares where they can be backed up and made available at all times. Their contents are cached
locally so even mobile users have access to their data even when disconnected. If any changes
are made locally, they are automatically synchronized the next time the network connection is
available. This means that both connected and disconnected users have constant access to their data.
You can combine folder redirection with roaming user profiles to gain the best of both worlds. Volatile
items such as the NTUser.DAT file are not covered by folder redirection. This means that relying on
folder redirection alone may provide a less than complete experience for users. In addition, relying on
roaming profiles alone limits the experience because data is not available in real time as it is with
folder redirection. Combine both to have folder redirection manage the data within the profile and
roaming profiles manage the rest. The result is a profile that loads really fast because its contents are
small, and users who have access to their data at all times through folder redirection. An additional
advantage is that combining the two lets you support roaming users in both Windows XP and Vista.
This is great for the duration of the migration when you will be managing a mix of operating systems.
But, for many, this means rethinking how they provide protection for user data. Many
organizations have relied on the user home directory and/or roaming profiles to provide this type
of protection. The home directory was originally designed to provide a simple way to map user
shares on a network. In Windows NT, 2000 and Windows Server 2003, using the %username%
variable in a template account automatically generates the home directory folder and applies the
appropriate security settings giving the user complete ownership over the folder when you create
a new account. When the user logs on, this folder is automatically mapped to the H: drive or any
other letter you assign. But today, you shouldn’t be relying on mapped drives anymore. You
should be relying on the Distributed File System (DFS), especially DFS namespaces.
DFS namespaces use universal naming convention (UNC) shares to map resources for users.
But, instead of using the \\servername\sharename approach, DFS namespaces use
\\domainname\sharename and each namespace is mapped to an appropriate target in each site of
your organization’s network. DFS replication keeps the content of these target shares
synchronized over the WAN.
Find more information on DFS at
Since Windows 2000, Microsoft has focused on the use of the “My Documents” folder as the
home of all user documents. This folder is part of the security strategy for all Windows editions
beyond Windows 2000 even though it has been renamed to “Documents” in Windows Vista.
This folder is stored within a user’s profile and is automatically protected from all other users
(except, of course, administrators).
In Windows XP, folder redirection could manage four critical folders and assign network shares
for each. These included:
Application Data which stores all application-specific settings.
Desktop which includes everything users store on the desktop.
My Documents which is the user’s data storage folder. Storing the My Pictures sub-folder
on the network is optional.
Start Menu which includes all of a user’s personal shortcuts.
When redirection is activated through Group Policy, the system creates a special folder based on
the user’s name (just like in the older home directory process) and applies the appropriate
security settings. Each of the above folders is created within the user’s parent folder. Data in
these folders is redirected from the desktop PC to the appropriate network folders. But, the user
profile includes much more than these four main folders. In fact, highly volatile information such
as Local Data and Favorites were not protected. Because of this, you can’t rely on Windows
XP’s folder redirection to properly protect a system’s personality.
For users that roam to remote offices, you can combine folder redirection with DFS namespaces and
replicate the contents of their folders to DFS target shares in remote offices. Make sure you are
running Windows Server 2003 R2 to take advantage of the new delta compression replication engine
in DFS replication.
Relying on Vista’s Folder Redirection
All of this changes with Windows Vista because it finally provides a mechanism for true folder
redirection and personality protection. This is evidenced in the settings available for folder
redirection in the Vista GPO (see Figure 8.11). As you can see, it includes redirection for much
more than XP ever did.
This makes folder redirection an excellent choice for long term personality protection. It is
completely transparent to the user. While they think they are using the Documents folder located
on their PC, they are actually using a folder that is located on the network. This way you can
ensure that all user data is protected at all times.
Microsoft provides a guide for Vista roaming user management at
Rely on the online version because the downloadable document does not contain as much information.
Using a folder redirection strategy rather than using a home directory simplifies the user data
management process and lets you take advantage of the advanced features of the Vista network.
For example, even though data is stored on the network, it will be cached locally through offline
files. Redirected folders are automatically cached through client-side caching when they are
activated through a GPO. Data in these folders can also be encrypted through the Encrypted File
System (EFS). In fact, all offline files can be encrypted.
Vista lets you redirect ten different folders. When you combine the redirection of these folders
with roaming profiles, you offer the best roaming experience to users with a lower impact on
network traffic than with roaming profiles alone (see Table 8.2).
Figure 8.58. Vista’s Folder Redirection GPO
Vista folder redirection policies include more settings. For example, you can automatically delete
older or unused user profiles from your PCs (see Figure 8.12).
In addition, the settings you can use for personality protection are much more granular than ever
before. Music, pictures and video can all be set to automatically follow the policy you set for the
Documents folder. In addition, you can use the same policy to manage folder redirection for both
Windows XP and Windows Vista (see Figure 8.13). This provides powerful system management capabilities.
Figure 8.59. Windows Vista lets you control the behavior of User Profiles on each PC
AppData (Roaming): This folder contains all roaming application data. Redirecting this folder
will also support Windows XP clients, with limitations.
Desktop: Users should not store data or any other items on their desktops; they should rely on
the Quick Launch Menu instead. This reduces the size of the folder to redirect. Include this in
your communications to them. Redirecting this folder will also support Windows XP.
Start Menu: The contents of the Start Menu are redirected. If you use application
virtualization, then users will always have access to their applications on any PC even if they
are not installed. Redirecting this folder will also support Windows XP.
Documents: This contains all user data. Make sure your storage policy and quotas support
today’s large file sizes and give users enough room to breathe. Redirecting this folder will
also support Windows XP. Applying this policy to pre-Vista OSes will automatically configure
Pictures, Music and Videos to follow Documents even if they are not configured.
Pictures: Determine if your organization wants to protect this folder. If you do, use the
Follow the Documents folder option or rely on the setting in Documents. Redirecting this folder
will also support Windows XP.
Music: Determine if your organization wants to protect this folder. If you do, use the Follow
the Documents folder option or rely on the setting in Documents. Using this option will also
support Windows XP.
Videos: Determine if your organization wants to protect this folder. If you do, use the Follow
the Documents folder option or rely on the setting in Documents. Using this option will also
support Windows XP.
Favorites: Only applies to Vista.
Contacts: Only applies to Vista. If you are using Outlook, then this Contacts folder is not
necessary.
Downloads: Only applies to Vista. You will need to determine if your organization wants to
protect downloads users obtain from the Internet.
Links: Only applies to Vista.
Searches: Only applies to Vista.
Saved Games: Only applies to Vista. The contents of this folder are very small and apply mostly
to the games included in Vista. Your organization will need to determine if you want to spend
network bandwidth and storage space on this content.
Table 8.2 Recommended Settings for combining Vista Folder Redirection with Roaming Profiles.
Figure 8.13. Folder Redirection in Vista offers several more choices than in Windows XP
If you choose to use folder redirection to provide a bridge between XP and Vista during the migration,
you will need to supplement this approach with some form of local content capture because many key
profile folders are not protected by Windows XP folder redirection. You don’t want to find out after the
fact that users are missing their favorites or even worse, their Outlook data files.
Enabling Folder Redirection with Roaming Profiles
There are special considerations when enabling folder redirection. First, you need to ensure that
each user is redirected to the appropriate server. It wouldn’t do to have a user in New York
redirected to a server in Los Angeles. To do this, you must create special administrative groups
that group users together and ensure that each user is assigned to the appropriate server;
you most likely already have security groups you can use for this. You must also ensure that
offline settings are appropriately configured to guarantee that users are working with the latest
version of their offline files.
Make sure you assign appropriate amounts of storage to each user for folder redirection. There is
no point in making this effort if you limit their disk space access to something that is unacceptable.
Redirecting folders through user groupings is very similar to creating regional, or
geographically based, user groups. Since each server corresponds to a physical location, you will need
to create a user group for each server. Begin by enumerating the location of each file server that
will host user folders, and then name each global group accordingly. Once the groups are
created, you can begin the redirection process. Using groups allows you to limit the number of
GPOs required for the folder redirection implementation.
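As a sketch of this naming exercise, the following Python snippet derives one global group name per file server. The server names and the naming convention are hypothetical; adapt them to your own standards.

```python
# Sketch: derive one global group name per file server that will host
# redirected folders. Server names and the "GG-FolderRedir-" convention
# are hypothetical examples, not a prescribed standard.
def redirection_group_name(server: str) -> str:
    """Map a file server name to a folder-redirection global group name."""
    return f"GG-FolderRedir-{server.upper()}"

# Hypothetical list of file servers, one per physical location.
servers = ["nyc-fs01", "la-fs01", "chi-fs01"]
groups = [redirection_group_name(s) for s in servers]

# One GPO filtered per group keeps the GPO count equal to the server count.
for group in groups:
    print(group)
```

Filtering one folder-redirection GPO per group keeps the number of GPOs equal to the number of file servers, which is the point of using groups in the first place.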
In chapter 7, we discussed how to create the default user profile. With Windows, you can store
this profile under a folder named Default User in the Netlogon share of your domain controllers—
use \\domaincontrollername\netlogon to access the share; Active Directory replication will
automatically make this folder available on every domain controller in your domain. Placing the
default profile in this location will let each PC load the default profile from the network instead of from
the local machine. Since profiles are not compatible between Vista and XP, you need to name the
Vista network profile folder to Default User.v2 (the v2 identifies Vista); this will separate the Vista
profile from its XP counterpart. Be sure to fully test this strategy with both Vista and XP systems to
make sure nothing untoward occurs when default profiles are created.
Vista also supports mandatory profiles, but since it relies on version 2 profiles, you must name the
network folder containing this mandatory profile with the .v2 extension, as with any other profile-related folders in Vista.
Use the following procedure to prepare your folder redirection with roaming profiles strategy.
Make sure you are running Windows Server 2003 Service Pack 1 or R2 as your server OS. Make
sure all GPO settings are prepared and modified from a Vista PC.
1. Begin by creating the shares that will support redirection and roaming. You will need a
minimum of two shares: one for the user profiles (for example, User_Profiles) and one for
folder redirection (for example, Folder_Redir). If you are already using XP roaming profiles,
then use this share for the profiles.
2. Next, create a folder for each user in your organization under the profiles share. Name
each folder with the user’s account name followed by a .v2 extension. Vista will rely on this
folder to enable the Vista roaming profile. Assign full control to each folder for the user and
for the System accounts. You can generate a script to do this by exporting your user list from
AD and adding the appropriate command line settings in a program like Microsoft Excel.
Vista will populate these folders when users first log on to the system.
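The export-and-script approach can be sketched as follows. The share path, the CSV layout, and the icacls options are assumptions; verify them against your environment before running the generated commands.

```python
import csv
import io

# Sketch: turn an exported user list (a CSV with an "account" column) into
# the commands that create each user's Vista profile folder with a .v2
# extension and grant Full Control to the user and SYSTEM.
# The share path, CSV layout, and icacls inheritance flags are assumptions.
SHARE = r"\\server\User_Profiles"

def profile_commands(csv_text: str) -> list[str]:
    """Generate mkdir and icacls commands, one pair per user account."""
    commands = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        folder = rf"{SHARE}\{row['account']}.v2"
        commands.append(f'mkdir "{folder}"')
        commands.append(
            f'icacls "{folder}" /grant "{row["account"]}":(OI)(CI)F '
            f'/grant SYSTEM:(OI)(CI)F'
        )
    return commands

export = "account\njdoe\nmsmith\n"
for line in profile_commands(export):
    print(line)
```

The generated lines can be pasted into a batch file and reviewed before execution, which is safer than assigning permissions by hand for hundreds of users.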
Because Vista does not display much user information during log on and log off, you
might want to change this default behavior. Apply a GPO to all Vista PCs and set the
Administrative Templates | System | Verbose vs normal status messages option under
Computer Configuration. This will let users know what is happening during logon and it
might also be useful for debugging issues.
3. Verify that roaming profiles have been set up in user account properties in AD. Each user
will have two profiles during the migration—version 1 for XP and version 2 for Vista.
4. To reduce the content of the roaming user profile and therefore limit network traffic and
logon times, exclude key folders from the roaming profile contents. Use the Administrative
Templates | System | User Profiles | Exclude directories in roaming profile option under
User Configuration. List each of the ten folders supported by Vista’s folder redirection (see
Table 8.2). Type the name as it appears in Windows Explorer and separate each name with a
semi-colon (;).
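Building the semicolon-separated policy value can be sketched like this. The folder list is illustrative; the names you use must match what Windows Explorer displays in your environment and the folders you actually redirect per Table 8.2.

```python
# Sketch: build the value for the "Exclude directories in roaming profile"
# policy from the redirected folder names. The list below is illustrative;
# match it to the folders you redirect and to their Explorer display names.
folders = [
    "AppData", "Desktop", "Start Menu", "Documents", "Pictures",
    "Music", "Videos", "Favorites", "Contacts", "Downloads",
]
exclusion_value = ";".join(folders)
print(exclusion_value)
```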
5. Rely on the suggestions in Table 8.2 to set your folder redirection policy. Use Windows
Settings | Folder Redirection options under User Configuration to do this. Change the
property of each folder. Redirect them to your folder redirection share. There are a couple of
items to note:
a. When you set the folder properties for folders that are supported by both
Windows XP and Vista, use the Also allow redirection policy… option under the
Settings tab (see Figure 8.13).
b. When redirecting AppData (Roaming), you will get a compatibility warning
message (see Figure 8.14). This is because older Windows OSes do not support
the full functionality of Vista in terms of folder redirection.
c. When redirecting the Documents folder, you will also get a warning message. In
this case, it tells you that by selecting to support older OSes, you automatically
change the behavior of the Pictures, Music and Video folders; they will be
protected by default and set to Follow the Documents folder. If you do not want
to protect them, then set the policy explicitly for each folder.
6. When Windows XP systems are no longer in use, archive all XP profiles (the ones
without the .v2 extension) and remove them from your servers.
Test the strategy in the lab before deploying it to production. Once again, have users sign off on
it through acceptance testing to make sure they are also happy with its operation. You will need
to warn users that their first logon will take time as Vista downloads the contents of their
roaming profile.
Figure 8.14. Vista’s compatibility warning message for the AppData folder.
Figure 8.15. Vista’s warning message for the Documents folder.
Finalizing the Personality Protection Strategy
Take the time to preserve what makes your users feel successful and comfortable in doing their
jobs and it will pay dividends. When mapped drives are restored, printers appear just as they did
previously, and data, shortcuts, and favorites are exactly as the user left them, the focus of the
user shifts toward learning the benefits of their new hardware and operating system. You can
spend less time training users on how all the old features in Windows XP are still in Vista and
focus instead on more advanced topics such as new multi-tasking features, advanced searching
and using new peripherals.
In addition, if you take the time to prepare a proper long term protection policy through folder
redirection, you’ll also be ready for the next migration when it comes along. Make this a smart
profile protection policy!
Chapter 9: Putting it all Together
So far, each technically-focused team has been working on their own as they prepare their
portion of the technical solution. By now, you should be in the middle of the Organize phase of
the QUOTE System with each team progressing towards completion of their engineering
activities (see Figure 9.1). It is now time to first review the status of each solution component,
mapping each one to the original logical solution design and correcting any defects or deviances
that appear, and then to begin integrating each solution component into one cohesive whole that will
provide a single flow of operations during the rollout. The objective of this activity is to make
sure that every engineering aspect of your solution works as expected in every situation
and to have it accepted by your peers, especially those who will be responsible for these aspects of
IT operation once the solution is in place.
Figure 9.1. The Organize Phase of the QUOTE System
Of course, other personnel have also been working through the activities in the project plan as
outlined in Chapter 2—activities that are more administrative in nature such as training and
communications preparation—and these activities are soon to be tested and integrated into the
migration process, something that will be covered in the next chapter as we run through the Pilot
project. But, before you can focus on the overall migration picture, you need to make sure all
engineering aspects are up to snuff. To date, you’ve had five technically-focused teams working
out pieces of the solution. They include:
• Infrastructure remediation
• Security design
• Application preparation
• System imaging
• Personality protection
In fact, the technical team has been following the Desktop Deployment Lifecycle (see Figure
9.2). Though they have not been working in a vacuum—if you planned your project well, team
members have been shared between the different technical activities so that they can share their
expertise with others—they have, however, been focused mostly on their own aspect of the solution.
Figure 9.2. The Desktop Deployment Lifecycle
As each team progresses through its solution design, they also progress through the testing
levels. This means that they are now arriving into the Integration and Staging testing levels—
levels that require more structure and precision than their predecessors. Remember that one of
the features of the Unit and Functional testing levels is that they support undoable virtual disk
drives, letting the teams ‘go back in time’ as they work with the preparation of their portion of
the solution. But, Integration and Staging no longer offer this feature. Therefore, the teams must
have more precise implementation strategies as they arrive into these environments.
At this point, these teams must also have produced preliminary versions of their technical
deliverables. These deliverables should include detailed procedural documentation for the
implementation of each part of the solution (see Figure 9.3). These deliverables will provide you
with the roadmap for the actual implementation. The Integration testing level provides a first
fully-functional network level for this first integration test. Staging provides a much more
comprehensive testing level that simulates as many aspects of the production network as
possible. For example, if your organization has remote sites, then the Staging level will simulate
these remote sites and allow you to test the solution over simulated or real WAN links. Also,
because it is the last level before you get into the actual production network, there should be as
few mistakes as possible when you get to Staging. Staging is also the level where you will be
performing the last level of acceptance testing. In fact, you must have each portion of the
solution accepted by the appropriate authority before you can move out of the Staging testing
level and move into the production environment. Part of this acceptance testing will require you
to perform a proof of concept, migrating (or perhaps re-migrating at this stage) the systems of
each member of the project team and perhaps even of some key members of the organization,
though the latter should normally wait until the Pilot project.
Figure 9.3. The Technical Deliverables Roadmap
Bring the Teams Together
To bring all teams together and perform the deployment of the solution within both Integration
and Staging, you’ll need once again to rely on the PC Migration Cycle (see Figure 9.4) since
everything needs to be implemented in the proper order to provide the best level of integration.
Figure 9.4. The PC Migration Cycle
As illustrated in the PC Migration Cycle, everything starts with Systems Readiness; this means
that you will need to look to your server team to begin the integration process. Let’s see what it takes.
No matter what the size of your organization, it is always best to rely on a systems management tool
for your migration. It’s true, as pointed out in Chapter 4, Microsoft offers several free tools for desktop
migration, most notably the new tools available for Vista through the Windows Automated Installation
Kit, which you will need in any case, and through the Business Desktop Deployment Workbench.
But even though these tools are a vast improvement over their predecessors, they still rely on
several command-line interfaces, which makes the process more complex by default.
If at all possible, we highly recommend that you obtain and rely on a real commercial systems
management product. Once again, Chapter 4 listed several of these—Altiris, LANDesk, Microsoft,
Special Operations Software, Symantec and more—and we do hope that by now, you’ve seen the
true value these products offer and you have integrated the appropriate one into your project. After
all, this migration is not a one-time event. Once you’re finished moving to Vista, you’ll need the right
tool to manage ongoing system changes. Batch files and command-line systems just don’t cut it in the
long run.
Move into the Integration Testing Level
As detailed in Chapter 3, the Integration environment already offers several different system
features (see Figure 9.5):
• A directory server running services such as Active Directory
• A database server running a version of SQL Server (ideally, version 2005)
• A file server running the file services role
• A deployment server
• An inventory server
Depending on your situation, your deployment and inventory servers may or may not be
configured. If you already had a systems management product and you were satisfied with it,
then these servers should be running the same version of these services that you have in your
production network. If you did not have a systems management tool and you made the (right!)
decision to obtain one, then these servers are just member servers awaiting the installation of the
roles on them.
Figure 9.5. The Content of the Integration Testing Level
Note that this testing level includes separate servers for database, deployment and inventory.
Depending on the size of your organization and the type of systems management tool you purchased,
you may only need a single server for all of these roles. Many organizations that deploy Microsoft’s
System Center Configuration Manager (formerly Systems Management Server) do so on a single
server, even in larger environments, because it is the easiest way to deploy and manage this
complex product.
What is most important here is that this environment must have its roles divided as much as
possible. Each of the previous environments—Unit and Functional—has all of the server roles
integrated into a single server. Many, many people make the mistake of developing a solution that
runs fine when a single server runs all the roles, including the domain controller (DC), because they
test everything with ‘god’ rights. It is admittedly hard to avoid when you are on a DC. This is why it is
so important to start separating roles as much as possible in the Integration level. When roles are
separated, you have to use proper delegation of authority to run your tests and if they don’t work
here, then you know they won’t work in production. This is the main goal of this testing level.
The first place to start the integration process is with the servers. Chapter 5 outlined all of the
different processes that were under the aegis of the server team. This is the team that will begin
the integration process. To do so, they will need to perform the following activities:
• Install/update the inventory tool
• Install/update the profile sizing inventory tool
• Install/update the OS deployment tool
• Install/update the application deployment tool or install an application streaming tool
• Prepare the application packaging or application streaming environment
• Prepare the central repository for Vista Group Policy templates (ADMX files)
• Prepare Vista Group Policy objects (GPO)
• Prepare required shared folders and assign appropriate security rights to them
• Prepare Distributed File System (DFS) namespaces and replication systems
• Prepare the central Key Management Server to manage Vista licenses
Because they have had the opportunity to work on this in two previous testing environments, the
server team should have documented processes and perhaps even automated processes for these
implementations. As you have seen in previous chapters, some of these operations are relatively
straightforward while others are more complex.
For example, if you already have a systems management tool in place, you’ll need to simply
update it. This means that it should be installed and configured according to your existing
standards and strategy and then, it should be updated to include Vista support. This update
procedure is what needs to be fully documented so that it works without a hitch. If you don’t
have an existing systems management tool, then you’ll need to have a full-fledged architecture
designed for the implementation of the new tool. This architecture and the procedures that
support it are what you’ll want to test at this stage.
When the server components have been updated, then the other team members can begin their
own integration. As mentioned earlier, the Integration environment will already include several
components that will allow these PC-focused team members to work with their own solution
components. These activities will include the following:
• Implementation of the personality protection system
• Implementation of the OS imaging system
• Packaging or sequencing of the applications that were retained after the rationalization
• Implementation of the application deployment or streaming mechanism
• Implementation of the Vista security policy as was designed by the security-focused team
Once each of these systems is in place, you move to an end-to-end test of the PC Migration
Cycle. Here all team members learn to ‘play nice’ with each other as each part of the process is
tested. You’ll need to test each of the migration scenarios you’ve elected to support: upgrade,
refresh and/or replace. The Integration environment should include the appropriate machine
types—Windows XP and bare metal systems—to let you test whichever scenario you selected.
The purpose of these tests is to make sure everything works as expected and to refine any
procedural documentation you need to rely on.
Maintain the Deviance Registry
It is also very important to manage the deviance registry at this level. This is where you begin
to map the logical solution design to the actual physical design as it is implemented. For
example, you will want to compare the logical description of the PASS System Kernel against
the actual contents of an installed Vista system. You can also compare the personality protection
policy against the actual results of a personality capture and restoration. You can compare the
structure of the Generalized Application Layer (GAL) against the actual results installed on a
system once it is deployed.
Each time you find a change or a result that is not as expected—a deviance, in fact—you need to
run it through the paces. How important is it? Is it a show-stopper or is it nice to have? Who
defines these categories and do they have the proper authority to do so? In the end, you’ll end up
compromising—every project does—as project resources, costs, scope and quality begin to
affect the delivery timeline.
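One way to keep this triage honest is to record each deviance with an explicit severity and an owner. The schema below is a minimal illustration, assuming field names and categories of our own choosing, not a prescribed registry format.

```python
from dataclasses import dataclass
from enum import Enum

# Sketch: a minimal deviance-registry record with a triage category.
# Field names and severity categories are illustrative assumptions.
class Severity(Enum):
    SHOW_STOPPER = 1   # blocks sign-off; must be fixed before Staging exit
    MUST_FIX = 2       # fix before deployment begins
    NICE_TO_HAVE = 3   # candidate for post-deployment follow-up

@dataclass
class Deviance:
    component: str       # e.g. "PASS System Kernel", "personality capture"
    expected: str        # what the logical design called for
    observed: str        # what the implemented system actually does
    severity: Severity
    approved_by: str = ""  # authority who triaged the deviance

# Hypothetical entries recorded during Integration testing.
registry = [
    Deviance("personality capture", "Favorites restored", "Favorites missing",
             Severity.SHOW_STOPPER),
    Deviance("desktop image", "icon in top-left", "icon moved",
             Severity.NICE_TO_HAVE),
]

# Show-stoppers gate the move from Integration to Staging.
blockers = [d for d in registry if d.severity is Severity.SHOW_STOPPER]
print(len(blockers))
```

Keeping the expected and observed values side by side makes the mapping from logical design to physical result explicit, which is exactly what the deviance registry is for.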
Managing the deviance registry is one of the toughest jobs in the project. That’s because it is this
registry that will help define the final solution you will be putting in place. This is why it is so
important to try to test out as many potential situations as possible while you’re in the Integration
environment. Try to simulate every possible situation you will be facing in the actual
deployment. Record the results, provide corrections, and then, when you’ve signed off on all of
these tests, you can move on to the Staging test level.
Move to the Staging Testing Level
As mentioned earlier, the purpose of the Staging level is to simulate production as much as
possible. This is why it contains more than the Integration level (see Figure 9.6). In fact, it
includes everything you can find in Integration plus the following items:
• Multiple domain controllers
• Multiple file servers
• Multiple management servers
• A simulated remote site to simulate WAN deployments
In addition, this level includes the possibility of a test that cannot be done in other levels: the
proof of concept (POC). This proof of concept is a live test that is performed against the systems
in use by the project team. Their systems are upgraded through the use of the new deployment
mechanisms integrated into the Staging testing level. This lets you perform an end-to-end test
that will ensure the technical solution actually works as expected in a live setting.
Figure 9.6. The Content of the Staging Testing Level
But before you perform the POC test, you need to test anything that you could not test in the
Integration level. Focus on those features that are included in the Staging level, but are not in the
Integration level. This involves Group Policy replication between multiple DCs, migration
repository replication between sites, capture of personality data in remote sites and then
replication to the main site for backup purposes, and simply testing WAN load for the migration.
This should help flesh out your solution to meet real-world needs.
Make sure all servers are running Windows Server 2003 R2 so you can take advantage of its new
and updated DFS features. For some of you, this may mean that you need to perform server
upgrades as well in your production infrastructure.
Review the Migration Workflow
When you feel you’re ready, you can move on to the POC. To perform this test, you’ll need to
run through the migration workflow (see Figure 9.7). This workflow is the process you use to
perform any single migration. Of course, it relies very heavily on the PC Migration Cycle, but
because it involves migrating a real computer that is in use, it also involves other activities. Each
is covered in the workflow.
As you can see, this workflow begins well before the actual migration. It includes the following steps:
1. Everything begins with a communication to the user to inform them of the upcoming change.
The details of this communication will be discussed in Chapter 10 as we discuss the
administrative aspects of the deployment.
2. The technical aspects of the migration begin with a detailed inventory of the PC to be
migrated. This includes an inventory of existing profiles.
3. Next, you apply rationalization principles to what is found on the original system.
Rationalize applications, hardware and profiles. This may require a visit to the user to
validate the rationalization results.
4. Produce the final system datasheet for this PC.
5. Hand off this datasheet to the release manager who will assign the task to the
deployment team.
6. The deployment team will determine if any hardware modifications are
required—upgrades or even a PC replacement.
7. Map out the applications required for this user post-OS install.
8. Perform the personality capture. Validate the data. Optionally perform a full PC backup.
9. Perform any required hardware modifications.
10. Deploy the OS to the system. This can involve a disk wipe, possible repartitioning
to take advantage of features such as BitLocker and then an OS installation of the thin
system kernel.
11. Stream the Generalized Application Layer to the system.
12. Stream any role-based or ad hoc applications.
13. Restore the personality to the system.
14. Validate the deployment and produce the PC migration report.
15. Inform the migration support team that this PC is now migrated.
As you can see, there are several steps to a single PC migration.
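The workflow steps above can be expressed as an ordered checklist for tracking each PC through its migration. The tracking structure itself is illustrative; the step wording follows the list in the text.

```python
# Sketch: the migration workflow as an ordered checklist, so each PC's
# progress can be tracked step by step. The helper below is illustrative.
WORKFLOW = [
    "Communicate upcoming change to the user",
    "Inventory the PC, including existing profiles",
    "Rationalize applications, hardware and profiles",
    "Produce the final system datasheet",
    "Hand off datasheet to the release manager",
    "Determine required hardware modifications",
    "Map applications required post-OS install",
    "Capture the personality and validate the data",
    "Perform hardware modifications",
    "Deploy the OS (wipe, repartition, install)",
    "Stream the Generalized Application Layer",
    "Stream role-based or ad hoc applications",
    "Restore the personality",
    "Validate and produce the PC migration report",
    "Inform the migration support team",
]

def next_step(completed: int) -> str:
    """Return the next workflow step given how many are already done."""
    return WORKFLOW[completed] if completed < len(WORKFLOW) else "Migration complete"

print(next_step(0))
```

Driving in-process reporting from a single ordered list like this makes it easy to see, for any PC, exactly where in the workflow it currently sits.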
Figure 9.7. The Migration Workflow
Perform the Proof of Concept Project Team Migration
You should now be ready to perform your POC with the project team. This requires a few preparations:
• Since you will be using the Staging environment as your source for the migration, the
PCs used by your project team should be joined to the Staging domain to fully support
the migration. Be extremely careful with their profiles when you do this. You might
consider using your profile protection system to make sure users continue to have
access to all their data.
• While the Staging environment does simulate production as much as possible, it may be
missing some services. For example, you may not have an Exchange server in Staging;
therefore, you will need to have users connect to the production Exchange server. This
can easily be done through the use of RPC over HTTP. You might also not have all of the
file shares users are used to working with. Show them how to remap these shares and
make them persistent with their production domain accounts.
• If users are working with Windows SharePoint Services (WSS) between the Staging and
the production domains, show them how they can avoid logon prompts by placing these
Web sites into the Intranet Zone of Internet Explorer.
The goal is to make the experience as normal as possible for all project team members while
performing a full end-to-end technical test. This means you might omit some applications on
purpose for some systems so you can test your migration support strategy. This support strategy
should include, among other items, the ability to fast-track application deployments to users
when they are found missing.
Don’t do this test until you have fully tested every other technical aspect in the Staging environment.
Your solution should be as ready to go as possible before you get to this stage; otherwise, you will
have to repeat the test several times. You can imagine how annoying this would be for your team members.
You also need to run this POC for some time. A minimum of two weeks is required and if you
can make it longer, then you will find out much more. The project team will have advice about
all sorts of things from the position of one icon to the structure of the file system on the desktop.
Make sure you capture each comment—once again the Project Team Site in Windows
SharePoint Services is probably the best location for this type of feedback. Use a triage system
just like the one used for the deviance registry to identify what is important and what is not.
Once again the goal is to perform a very tight quality assurance (QA) on each and every
technical aspect of the project. Also, make sure the in-process reporting works as expected
through the migration. And, of course, make sure the end result is as expected. Simulate as many
problematic situations as possible. For example, begin the deployment and cut off the network
connection in the middle of the operation to see how you can recover from this situation. If
you’re ready for any situation, your deployment project will provide much better results.
Obtain Acceptance from Peers
The last thing to do before you can be deemed ready to exit from the Staging environment and
move on to the final stages of your project is to obtain final acceptance from your peers. This
involves the following groups:
• Architectural team: the internal architectural team needs to approve all changes to both
server and workstation architecture.
• Support team: the support team needs to approve and accept the new support methods
proposed by the project for the new OS. Of course, the project will be responsible for
immediate migration support for the duration of the project warranty period (usually 20
working days), but after that, the systems pass on to normal support operations.
• Subject matter experts (SME) and software owners: each application package, custom
or commercial, must be accepted by the designated SMEs in your organization.
Remember that the project team cannot take on this responsibility since it cannot possibly
include experts on every product in use in your organization.
• Operations team: the operations team needs to approve the change in operations and the
new operational strategies designed by the project before the project can move full steam
ahead with the migration. This is probably the most important technical approval level
required by the project because it is the operations team who will live with these changes
once the project is complete. Because of this, more on this subject is covered at the end of
this chapter.
• Internal developers: this team needs to approve the changes to their applications and
their programming strategies. Make sure they are involved early on in the project.
• End user developers: these developers work with applications such as Word macros,
automated Excel spreadsheets, and Access databases. If you did the smart thing, you will
have made sure they use proper client-server programming practices to ensure you do not
need to deploy a full version of Access to every end user, saving the organization both
time and money.
• Management: upper management must approve the results of the project since they are
funding all of the efforts.
• Project management sign-off: the project management team must also sign off on the
final results of your technical solution. Since they will have used the system for some
time through the POC, they will be in the ideal situation to determine if the solution is
mature enough for sign-off.
If you don’t get each of these sign-offs, then you must return to the drawing board and revisit
your strategies until you get them right.
Chapter 9
Validate your Operational Strategies
One of the most important technical aspects of this project is the operations hand off—the hand
off of the operational strategies that will be in use once the migration is complete. The operations
team will be responsible for the ongoing management of all of the tools you put in place and use
during the migration project. Your goal is to make sure the duties they will take on are easier and
simpler to perform than what they are doing now.
Depending on the type of migration project you decide to run—for example, forklift versus
attrition migrations—your project will either run for a very short period of time and then move
into ongoing operations, or it will last a very long time because it is waiting for your normal
PC replacement cycle to complete. We would argue that both types of projects should run for the
same duration.
In the traditional approach, forklift migrations perform a massive replacement of the operating
system on all of the PCs in the network in one fell swoop. While the project takes about nine
months to complete overall, the actual massive migration should only take about three months
(see Figure 9.8). Then, once all of the systems are modified, you hand off to operations and the
organization goes about its business as usual.
Figure 9.8. The Traditional versus an Attrition Migration Approach
In an approach based on attrition, the migration project will last as long as it takes to replace
every PC in the network. For some, this may take three years while others may take four or even
five years to complete the process. Each time a new batch of PCs comes in, the project picks it
up and migrates them. But, there is a lot of opportunity for change during this process. For
example, there will be a considerable number of updates and service packs to Windows Vista
during a three to five year period. But most importantly, no single organization can run a
project—any project—for three to five years and actually have it succeed. Well, let’s say that
very few organizations can hope to succeed at this level.
We would suggest that instead of running the project for this period of time, you should use the
same timeframe as the forklift project and migrate only the first batch of PCs through the project.
What needs to change is the nature of the project deliverables. This is where the operations hand
off comes in. This hand off is important for both approaches, but it is much more important for
the attrition approach because of its very nature.
The operations hand off for either project must include the following deliverables:
Kernel Image Management Strategy
Application Management Strategy
Personality Protection Strategy
Vista Security Management Strategy
In addition, both types of projects should include the following deliverables, but these are most
important for the attrition project:
New PC Integration Strategy
PC Migration Strategy
To make the most of your current migration project, you should aim to transfer its overall
operation into production at its end. Since it is unreasonable to expect that a migration project
could possibly run for a number of years, you should make sure that you transition all migration
processes into production. This way, the idea of an OS migration becomes part and parcel of
normal, ongoing operations. When a new batch of PCs comes in the year following the closure of
this project, the operations team is ready and knows how to transition these PCs with the new OS
into production. Because you have provided a structured process for PC migration, the
operations team does not need to do anything out of the ordinary and can proceed with business as
usual. This is the best and easiest way to ensure success for such a long term project.
Let’s look at some of these strategies.
Kernel Image Management Strategy
One of the outcomes of your project is the use of a system stack such as the PASS System
Kernel. As you know, this kernel includes several different layers that are designed to provide
service levels to end users. Because they include several different components and because of the
nature of operating systems, Windows and others, you know that you will need to put a kernel
update management system in place once the deployment is complete. Therefore, one of the
technical deliverables you should expect is a Kernel Image Management Strategy.
This strategy must include how and when you update the kernel. As time goes by, you will be
relying on the kernel to install and update systems on an ongoing basis. Of course, you don’t
want to find yourself in a situation where you need to deploy the kernel, then spend inordinate
amounts of time waiting for it to update once installed. Conversely, you don’t want to have to
find yourself in a position where you are constantly returning to the reference computer, opening
it up, updating it and then running it through Sysprep to depersonalize it again.
This means you need to find a happy medium between updating the kernel remotely and
updating the actual kernel image. The best way to do this is to implement a formal update
strategy. This strategy should be based on a formal change request process. This change request process
would rely on the rationalization guidelines that emerge from this project to evaluate the validity
of the request. If approved, the change would then be delivered in the next kernel update (see
Figure 9.9).
Figure 9.9. The Kernel Update Process
Given the rate of change in the software industry and the number of different software
components that often reside in the kernel, it is essential to manage this process on a three-month
basis at the very most. Waiting longer for update collection creates packages that are too unruly.
Updating the kernel more frequently can become too expensive. In the meantime, the deployed
copies of the kernel are updated through a centralized update management tool. Each time the
kernel is updated, its version number changes. Four updates per year have proven to be the
optimum rate of updates for the kernel in every organization using this process to date. Two
weeks before release, a freeze is applied to change requests in order to allow for the preparation
of the update.
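The quarterly cadence and the two-week change-request freeze described above can be sketched in a few lines. This is an illustrative model only: the release months and the calendar logic are assumptions, not part of the guide's formal process.

```python
from datetime import date, timedelta

# Illustrative sketch: models four quarterly kernel releases with a
# two-week change-request freeze before each one. The release months
# (March, June, September, December) are assumed for the example.

def kernel_release_calendar(year):
    """Return (freeze_date, release_date) pairs for four quarterly releases."""
    releases = [date(year, month, 1) for month in (3, 6, 9, 12)]
    return [(r - timedelta(weeks=2), r) for r in releases]

def is_change_request_open(today, calendar):
    """A change request is accepted unless we are inside a freeze window."""
    return not any(freeze <= today < release for freeze, release in calendar)

calendar = kernel_release_calendar(2007)
print(is_change_request_open(date(2007, 2, 20), calendar))  # inside the freeze -> False
print(is_change_request_open(date(2007, 4, 15), calendar))  # between cycles -> True
```

In practice the calendar would live in your change management tool; the point is simply that freeze windows are derived from release dates, not tracked separately.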
Because Vista supports a single worldwide image, this process should be relatively simple to
perform. In fact, you’ll have at least two images if you include one for the x64 platform. It
should therefore be much simpler than what you do today. In addition, if you maintain the
reference computer inside a virtual machine, you will be able to greatly reduce the effort required
to maintain the kernel since you won’t have to rebuild the reference computer each time you
need it.
Application Management Strategy
In addition to kernel updates, your new certification center is responsible for each application
update validation and preparation. This means applying the same processes used with the kernel
to every managed software product.
Figure 9.10. The Traditional Application Preparation Process
If you are using traditional application installations, then you will need to perform conflict
management—managing the potential conflicts caused by dynamic link libraries (DLL) included
in the software product that may overwrite or otherwise damage existing system or application
DLLs. To do so, you need to run each product through a testing phase similar to that of the
kernel (see Figure 9.10) and you need to prepare and maintain an inventory of every single
component that can exist on a PC. This is a very heavy management process, which is one more
reason why having software owners is important: they at least take on the burden of software
vigilance by watching for potential upgrades and new versions.
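The conflict management step above amounts to cross-checking a component inventory. As a minimal sketch (the package names and DLL versions below are invented sample data), any DLL shipped in more than one version across your managed packages is a conflict to investigate:

```python
# Minimal sketch of DLL conflict detection. The packages and versions are
# invented examples, not real inventory data. Each package lists the DLLs
# it installs; a DLL appearing with more than one version is flagged.

packages = {
    "AccountingApp": {"msvcrt.dll": "7.0", "report.dll": "1.2"},
    "HRPortal": {"msvcrt.dll": "6.1", "forms.dll": "3.0"},
    "Timesheets": {"report.dll": "1.2"},
}

def find_dll_conflicts(packages):
    """Map each DLL name to the set of versions shipped; conflicts have > 1."""
    versions = {}
    for pkg, dlls in packages.items():
        for dll, ver in dlls.items():
            versions.setdefault(dll, set()).add(ver)
    return {dll: vers for dll, vers in versions.items() if len(vers) > 1}

print(find_dll_conflicts(packages))  # {'msvcrt.dll': {'7.0', '6.1'}}
```

A real inventory tool tracks far more metadata (file paths, checksums, install dates), but the core check is this same version comparison.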
But, if you move to software virtualization, you’ll still have to package and prepare applications
and have them approved by the SMEs, but at least you won’t have to worry about two aspects
that are part of the traditional process:
DLL management because the DLLs are all managed by the virtualization agent.
Application deployment because when applications are streamed, they are only cached
locally; change the central source of the stream and it will automatically be restreamed to
all clients.
These are compelling reasons for this move. Another reason which we feel we don’t tout often
enough is the fact that all of your system kernels are going to be pristine at all times (see Figure
9.11) because the local application cache holds all of the deltas beyond the thin kernel. And,
because it is a cache, it can be flushed at any time, returning your system to its pristine state. This
has to be worth its weight in gold to you and your operations staff.
Figure 9.11. Working with Application Virtualization, Streaming and Pristine Kernels
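The streaming behavior described above can be modeled simply. This is a hypothetical sketch, not any streaming product's actual API: when the central source publishes a new version, the stale local copy is re-streamed on next launch, and flushing the cache returns the system to its pristine state.

```python
# Hypothetical model of streamed-application caching; class name, app
# names and version strings are illustrative assumptions only.

class StreamingClient:
    def __init__(self, central_source):
        self.central = central_source      # app name -> published version
        self.cache = {}                    # locally cached app versions

    def launch(self, app):
        """Stream the app on first use or when the central version changed."""
        if self.cache.get(app) != self.central[app]:
            self.cache[app] = self.central[app]   # (re)stream into the cache
        return f"{app} v{self.cache[app]}"

    def flush(self):
        """Drop all cached deltas, leaving only the thin kernel."""
        self.cache.clear()

central = {"Office": "2007"}
client = StreamingClient(central)
client.launch("Office")
central["Office"] = "2007-SP1"       # update the central source once
print(client.launch("Office"))       # Office v2007-SP1 -- restreamed automatically
```

This is why one change at the central source propagates to all clients without a deployment step.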
Technical Outcomes of the Project
A migration project, especially a PC migration project, is a transformation project that lets you
clean house and close the door to unwanted content. That’s what the rationalization process is all
about: get the dust out of the corners and make sure it doesn’t come back. If this is not what you
will get at the end of this project, then you wasted an excellent opportunity to take complete
control of your PC environment. We believe you should, at the very least, expect the following
technical results from this project:
Managed systems: you should be deploying a complete infrastructure and strategy for
long-term management of your PCs.
Thin system kernels: the only software that actually installs on your PCs should be the
operating system and anything that does not support application virtualization such as
antivirus and low-level utilities. These systems will be pristine at all times and should be
very easy to reset back to the standard should any issues arise.
Application virtualization and streaming: you should be virtualizing everything from
the GAL to ad hoc applications, anything you can in fact. This will avoid all potential
conflicts, ensure any application will work at the same time as any other, facilitate
software updates, facilitate software license management and prepare you for the long-term future of Windows PC evolution.
Locked-down systems: your PCs should now be completely locked down. After all, if
you put in the effort it requires to move to an application virtualization and thin kernel
strategy, and then let users have administrative rights on their PCs, you will definitely not
be able to take advantage of a pristine kernel at all times.
PC migration strategy: you should have a defined PC migration strategy that not only
supports actual migrations but also any PC replacement that occurs during normal operations.
Personality protection: you should have a complete personality protection system that
saves end user data and lets you back it up at any time. After all, with all of the new
compliance regulations affecting almost any type of organization, it is a really good idea
to have a central copy of all user data at all times. This way, you have access to any data
whenever you need it.
Constant inventories: you should have learned your lesson this time and made sure you
have a constant and complete inventory system in place. Isn’t it time you covered all your bases?
Secure infrastructure: with Windows Vista’s new security features, you should have a
much more secure PC infrastructure once all systems are migrated.
Greener PCs: using the new power-savvy Group Policy settings in Vista, you should be
able to save considerably on power consumption for each PC in your network.
Reduced support costs: because your systems are locked down and are now managed,
support calls for PC issues should be greatly reduced. One of our customers who moved
to locked-down PCs in 1998 cut their PC-related help desk calls by a factor of more than
eight! You might not have this dramatic a drop, but you should definitely have fewer calls
to deal with than ever before.
Lower overall Total Cost of Ownership (TCO): putting a new managed PC
infrastructure in place based on Vista and application virtualization should provide
considerable savings in terms of overall costs for the operation of PCs, savings like you
have never seen before.
A smart PC management strategy: PCs have been a thorn in the side of IT
professionals for years. Now, with the strategy outlined here, you can finally gain control
of this aspect of your network once and for all. Be smart, get it right this time!
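The “Greener PCs” outcome above can be roughed out with simple arithmetic. Every figure in this sketch is an assumption for illustration (wattages, idle hours, electricity rates), not data from this guide; adjust them to your own environment.

```python
# Back-of-the-envelope estimate of power savings from sleep policies.
# All input figures are assumed for illustration.

def annual_savings(pc_count, active_watts, sleep_watts,
                   idle_hours_per_day, cost_per_kwh):
    """kWh and cost saved per year by sleeping PCs during idle hours."""
    watts_saved = active_watts - sleep_watts
    kwh = pc_count * watts_saved * idle_hours_per_day * 365 / 1000
    return kwh, kwh * cost_per_kwh

kwh, dollars = annual_savings(pc_count=1000, active_watts=90, sleep_watts=5,
                              idle_hours_per_day=14, cost_per_kwh=0.10)
print(f"{kwh:,.0f} kWh, ${dollars:,.0f} per year")  # 434,350 kWh, $43,435 per year
```

Even with conservative inputs, fleet-wide sleep policies enforced through Group Policy add up quickly.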
These aren’t the only benefits you will derive from this project. One will be a complete change
management strategy if you continue to use the QUOTE System for your other IT projects.
Others will be derived from the other, administrative processes that are tied to the deployment.
These will be covered in the last chapter of this guide.
Whatever you do, at this stage you need to make sure your technical expectations—those that
were outlined so long ago in your original business case—are fully met and, if possible,
exceeded. Then and only then will you feel you’ve made the proper investment and moved to the
very best of platforms.
Chapter 10: Finalizing the Project
The project is almost complete. Two phases of the QUOTE System are left. The Transfer phase
will focus on the manufacturing activities of the project, running through the pilot project, and
then migrating all systems to Vista. The Evaluate phase will focus on project closure, running
through the hand off to internal support teams, performing the project post-mortem, and
preparing for system evolution.
In order to complete these phases, you need to run through a series of activities which include
(see Figure 10.1):
Oversee administrative activities to ensure that each aspect of the process that will
support the deployment is ready for prime time.
Run the pilot project which will include more than the proof of concept (POC) that was
run at the end of the Organize phase. While the POC focused on the technical activities to
ensure every engineering process was ready to roll, the pilot project’s goal will be to
bring both administrative and technical processes together for a complete end to end test.
Then, it will aim to report on each aspect of the deployment process to make sure all is
ready. If changes are required, the project team needs to perform the required
modifications before it can proceed.
Once all is deemed ready, the massive deployment begins and the project enters a
manufacturing mode where it runs through the repetitive processes required to migrate
each system. If the organization has opted to use a forklift migration—migrating each
system in one fell swoop—then this process runs until all systems are deployed. If the
project has opted to use attrition—migrating systems through their hardware refresh
program—then the deployment will only affect those systems that are upgraded this
year. It will also need to prepare for migration recurrence: re-running the migration
processes each year until all systems are running Vista.
Once the mass production is complete, a hand-off to internal support teams must be
performed before project mechanisms are integrated into everyday operations.
Project mechanisms should not be completely dismantled because the migration of systems is an
everyday task that must be integrated into normal operations.
A project post-mortem must be completed to ensure that project wins are captured and
project failures are trapped. This helps ensure that projects of this type keep improving as
the organization runs them.
The very final phase of this project is the preparation for system evolution. Each migration
project can only take on a certain amount of work, aiming to get the best quality based on
available resources. But, once all systems are migrated, the organization can begin to invest more
fully into the benefits inherent in the new OS and add functionality to its deployment as
operations teams begin to master the elements that make up the overall solution.
Figure 10.1. Running Final Project Phases
Closing the Organize Phase
At the end of the Organize phase of the project, the engineering team has been working
feverishly to close off all of the technical aspects of the solution. Meanwhile, the administrative
team has also been at work. Their goals were to finalize several other aspects of the process the
project will use to actually perform the deployment.
End user representatives have been working directly with the end users themselves to validate
inventories and prepare final user and PC datasheets—datasheets that will be handed off to the
release manager and used by the deployment team to build new PCs. Training coordinators have
been working with operational teams to make sure they have the appropriate technical training in
time for the deployment. Trainers have been working to finalize the end user training programs,
whether they are simple presentations of new features or complete programs covering each and
every new feature.
Support teams have been assisting users to convert their documents and applications for
operation with the new PC environment. They have also been working to finalize the support
program that will be put in place to back the deployment and ensure everything works as
expected as far as the end users are concerned.
Document the official support program for the deployment. This will make it easier for everyone
involved as everyone will know exactly what they can expect from it.
Communications teams have been working at continuing the promotion of the new PC
environment to create a feeling of good will towards the project. They have also worked at
preparing the communications that will be used to inform users during the deployment itself.
These communications will be critical since users must know when and how they will be
affected as well as how they can expect to continue business as usual during the deployment.
The fear of change is probably the biggest obstacle the project will face. In the ideal project, everyone
will be ready for the coming of new technologies, but, as you know, there is no such project.
Everyone—IT personnel, end users, managers, support teams—fears change to some degree. While
some hide it well, others do not and, in extreme cases, will do everything in their power to stop or
even block the coming change. This is why the good will communications program is so important.
Given this, it is hard to understand why it is so often forgotten or, even worse, cut from the project. If
you want to make this project a success, you will invest in an appropriate change management
program—a program that will foster desire for change through the identification of solutions to
ongoing issues your organization faces at all levels because of the system you are currently using.
When change has been properly positioned, people will still fear it, but will be more comfortable with it
because they will at least understand how it will affect them.
The project management team has been working at finalizing the actual deployment strategy and
preparing the actual deployment plan—the detailed plan that identifies each user to be migrated
and how many users are to be affected each day. In addition, they have had to prepare mitigation
strategies—strategies that identify what will be done in the event that prospective users are not
available the day they are to be migrated. This strategy is critical to the success of the project
otherwise the project will not be able to meet its production goals of deploying a given number
of users per day. Finally, the project team has launched the final acquisition program required to
obtain the systems that will be used to seed the deployment.
Acquisitions are an essential component of the deployment of a new operating system. If you are
relying on attrition to migrate to Vista, then you will need to acquire new systems to begin the
deployment in your organization. If you decided to perform a forklift migration, then you also need to
use acquisitions to create a seed pool of systems that will be used to support the deployment. In
either case, you will want to identify who will be migrated and how you will allocate these new
computer systems. In attrition deployments, you will already have an identification program since you
will already know who will receive the new systems. In forklift deployments, you will use the new
acquisitions to start a rotation program, cascading newer, more powerful systems throughout the
organization and eventually replacing PCs for each and every user.
If you use a cascading rotation program, make sure you clean each PC case, keyboard,
mouse and screen as they run through the migration process. This will give each user in the
organization the impression of acquiring a new, more powerful PC than the one they had before.
Cleaning programs of this type buy enormous amounts of good will and greatly assist with the
success of the project.
Running the Transfer Phase
The Transfer phase is the phase that runs the mass-production deployment operations (see Figure
10.2). By now, everything should be in place to run the actual deployment. Your staging areas
should be complete, the technical solution is fine-tuned, your support mechanisms are in place
and your administrative processes are ready. You’ll begin with the pilot project and then when
you’ve collected information, analyzed the results of this full end-to-end test, and made
appropriate modifications, you’ll be ready to perform the actual deployment.
Figure 10.2. The Activities of the Transfer Phase
Run the Pilot Project
The actual deployment begins with the Pilot Project, which is a complete simulation of every step
of the deployment. Everything, that is everything, must be tested. You’ll need to preselect pilot
project users, run the pilot, and thoroughly analyze the full operation of the process. One goal of
this project is to do everything right so that you do not need to redeploy the pilot population
when it is time to perform the actual deployment. The only way to achieve this is to be as ready
as possible. If too many things go wrong, you’ll have to return to the drawing board and then,
when you’re ready for the actual deployment, begin with the pilot population.
Begin by selecting the users for the pilot project.
Identify Pilot Users
Pilot users should be actual users as well as your own project team. The pilot population should
represent about 10 percent of the population of your organization. Include both central and
regional users if you have them. Include users from as many different levels in the organization
as possible. Include users that require only the OS kernel to operate as well as users requiring
role-based application groups. Include novice as well as experienced users.
Consider choosing users who will not be available during the deployment as well. This is one situation
that must be planned for as it occurs quite often during the actual deployment. This will let you test
the administrative process you put in place to replace users on the fly when they make themselves
scarce the day of the deployment.
Basically, the pilot population should represent as many aspects of the actual deployment as
possible so that you won’t be caught unawares when you’re doing this for real. One aspect which
may be different is the actual number of deployments you will begin with. Many organizations
decide to start small and build up to a production schedule. For example, if your goal is to deploy
100 PCs per day, your pilot project could start with 10 per day for the first couple of days, move
up to 20 when you feel ready and gradually bring up the production to expected levels.
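The ramp-up pattern just described can be laid out as a simple schedule. The specific daily rates below are examples only, echoing the 10-then-20-then-full-rate idea above:

```python
# Sketch of a pilot ramp-up plan: start small and grow toward the full
# production rate. The daily rates are illustrative assumptions.

def ramp_up_schedule(total_pcs, rates):
    """Assign per-day deployment counts; the last rate repeats until done."""
    plan, day, remaining = [], 0, total_pcs
    while remaining > 0:
        rate = rates[day] if day < len(rates) else rates[-1]
        todays = min(rate, remaining)
        plan.append(todays)
        remaining -= todays
        day += 1
    return plan

print(ramp_up_schedule(300, [10, 10, 20, 50, 100]))
# [10, 10, 20, 50, 100, 100, 10]
```

Laying the plan out this way also tells you up front how many pilot days to budget for.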
Many users will be happy to participate in the pilot project while others will be reluctant to do so.
Users must be made aware that there is risk involved in participating in this aspect of the project.
There are also many advantages, the least of which is the delivery of your new solution to pilot users
before it hits the general population of your organization. If you’ve done your homework right, people
will flock to this aspect of the project just to obtain the pre-delivery of your new OS. If this is the case,
you might have to be very selective when choosing users. Make sure you select the right mix of users.
Build a Deployment Schedule
One of the key aspects of the deployment and one which will prove to be a major contributing
factor to its success is the deployment schedule. This schedule lists each system to be deployed,
its configuration, the target user, the actual location of the system, hardware upgrades if required
and any other detail that will make the deployment a success. Several techniques are used for
deployment schedules. Some organizations rely on a building by building, floor by floor
approach, deploying to users based on their geographical location. Others rely on departmental
groupings, deploying to each user within a specific department. Often a combination of both is
the best approach.
Ideally, users to be deployed will be regrouped physically in the same area. In addition, if they
all belong to the same department, then the deployment is that much easier because each system
is very similar in configuration. Organizations will often keep the most complex and demanding
users for the very end of the project because by that time, the deployment team is fully
experienced and ready to handle even the most difficult situations.
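A schedule built on geographical and departmental groupings is, at heart, a nested grouping of user records. As an illustrative sketch (field names and sample entries are invented), grouping by building and floor, then by department, yields natural daily batches:

```python
# Minimal sketch of a deployment schedule record. The users and field
# names below are invented sample data for illustration.

def build_schedule(users):
    """Group users by (building, floor), then department, for daily batching."""
    schedule = {}
    for u in users:
        area = (u["building"], u["floor"])
        schedule.setdefault(area, {}).setdefault(u["dept"], []).append(u["name"])
    return schedule

users = [
    {"name": "anne",  "building": "HQ", "floor": 2, "dept": "Finance"},
    {"name": "bruno", "building": "HQ", "floor": 2, "dept": "Finance"},
    {"name": "carla", "building": "HQ", "floor": 3, "dept": "HR"},
]
print(build_schedule(users))
```

Real schedules carry far more detail per user (configuration, hardware upgrades, target date), but the grouping logic stays the same.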
This schedule must be managed by a person with very strong coordination skills whose job is to
make sure the project continues to meet its rate of production—so many PCs per day—even if
the target users are not available the day of their scheduled migration. One good support tool for
this situation is the preparation of a pool of ‘spare’ users that can be migrated at any time with
little warning. When someone is not available for deployment, you just pull a name from the
pool of spare users and therefore maintain your production rates.
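The spare-user mechanism above is easy to model. This is a toy sketch with invented names: when a scheduled user is a no-show, a spare is pulled so the daily quota holds.

```python
# Toy model of the 'spare user' pool described above; names are invented.

def todays_migrations(scheduled, available, spares):
    """Replace no-shows with spare users, preserving the day's quota."""
    migrated = []
    for user in scheduled:
        if user in available:
            migrated.append(user)
        elif spares:
            migrated.append(spares.pop(0))   # pull a name from the pool
    return migrated

scheduled = ["anne", "bruno", "carla"]
available = {"anne", "carla"}       # bruno is away today
spares = ["diego", "emma"]
print(todays_migrations(scheduled, available, spares))
# ['anne', 'diego', 'carla']
```

The pool shrinks as spares are consumed, which is why the coordinator must also keep topping it up as the deployment progresses.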
Initiate Communications to End Users
Once the schedule and all other project aspects are ready, you can begin the pilot project.
Remember, this is a full end-to-end test so it must be just as if you were doing an actual
deployment. You can cut some corners, but try to keep them to a minimum.
Just as your deployment project will begin with initial user communications, your pilot should
begin with a communication to the target user. Rely on the deployment schedule to identify
which users to communicate with. User communications should begin well ahead of the actual
deployment date. In many cases, you give each user at least one month’s notice as they will be
responsible for tasks on their own before they are ready to be deployed. Then you can remind
them each week as the deployment approaches.
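The one-month notice plus weekly reminders can be generated from the deployment date itself. The exact offsets here are assumptions you would tune per project; only the pattern follows the text above.

```python
from datetime import date, timedelta

# Illustrative only: initial notice roughly one month out, then weekly
# reminders until deployment day. Offsets are assumed values.

def communication_dates(deployment_day):
    """Initial notice ~1 month out, then weekly reminders until D-Day."""
    notice = deployment_day - timedelta(days=30)
    reminders = [deployment_day - timedelta(weeks=w) for w in (3, 2, 1)]
    return [notice] + reminders

for d in communication_dates(date(2007, 6, 29)):
    print(d)
```

Driving the communiqués from the deployment schedule this way keeps reminders synchronized automatically when a user's date slips.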
While users will have their own responsibilities during the project, you can expect that a good number
of them will not perform them beforehand. Make sure you are ready for this eventuality in your project
processes and infrastructure requirements.
This will let you give them the opportunity to clean their files before you migrate their system.
Depending on the strategy you used to protect their files, you may or may not offer them a
backup system for files that will not officially be recovered. This way users can back up personal
files on their system and restore them on their own once the system is migrated.
A typical user communication would include a description of the deployment process in layman’s
terms as well as a description of the activities the users will be responsible for (see Figure 10.3).
The description of user activities should be as detailed as possible to make it as easy as possible
for users to protect their data.
There are several different types of training programs available for end users. One of the most
efficient programs is one that consists of a presentation that focuses on the changed features of the
OS. It should provide demonstrations of each new feature and should include a course book users
can take away with them. If you marry this program with the actual deployment, this frees the user’s
PC for deployment operations. While the user is in training, you can perform deployment activities on
the PC with no interference from the user and no disruption to their work.
Many organizations are moving to online training. This has extensive value and Microsoft
has made the Enterprise Learning Framework (ELF) available with this purpose in mind. More
information on the ELF can be found online. In
fact, you should strive to rely on online resources as much as possible. Many of these are already
available for your project through the Windows SharePoint Services Team Site the project has been
relying on. Using target audiences, you can easily create a version of this site that provides public
information on the project.
But consider this. The actual training program you will use for the deployment will depend
on your deployment strategy. If you choose to deploy during the daytime when users are away from
their desk, then you should use a live presentation that is provided in a conference room of some sort.
This has several advantages. You have full access to the user’s PC while they are away; the user is
regrouped with peers they can rely on to assist in the learning process; and the user begins their move
to Vista as soon as they enter the training room since they will have Vista on their PC when they return
to their desk.
If you choose to deploy at night without classroom training, then you should include logon
instructions for Vista in the user communiqué and then a link to the online training that pops up once
the first session is opened by the user. This approach is more demanding of users, but can also provide
good results.
At the very least, you should have an online resource center for users moving to Vista
and you should make sure users are fully aware of it. You should also include printable tip sheets that
will help users make the move to Vista.
Figure 10.3. A Typical Pre-deployment User Communiqué
On site assistance is also a very good way to make the deployment smoother. By providing
assistants that ‘walk the floor’ of deployed areas to provide direct assistance to users, you can buy
enormous good will and make the transition to Vista even easier.
Perform the Deployment
When D-Day comes, the deployment team will migrate the PC. This involves running the
engineering tasks—user data protection, OS installation, application deployment, user data
restore, and so on—on the user’s PC. Once again, the actual tasks will depend on the migration
strategy you selected. For example, if you decided to perform a forklift migration and are
replacing each PC through a general cascade of more powerful systems to each user, then you
will be using a staging area for the preparation of the PCs. This means moving each PC from its
present location to the staging area and back. In regions, this will mean having a mobile staging
area that can go from site to site. If you are doing an attrition migration, then you will migrate
only those PCs that are upgraded this year. Your pilot project will need to take this into account.
If you are using a forklift migration, then you will rely on a pool of systems to start the process. When
you deliver this initial pool, it will liberate more systems that replenish the pool. This means you can
build systems beforehand in a staging area and when it comes time to deliver them, your team only
needs to replace the systems at user’s desks, making it quick and efficient.
Ideally, whether you will be migrating all PCs or only a portion of them, you will be using
remote deployment strategies to simplify the migration. Test each aspect of this process during
your pilot project. Make sure you get full reports on the progression of the migration. These
reports of migrations in progress, completed migrations, systems still to go and so on will be
essential for the proper coordination of the project when the full deployment is in operation.
You’ll also need to test your support strategy to make sure your support team is ready for
anything and can respond to emergency situations with a minimum of disruption to end users.
Don’t forget the regular team meetings. These are essential because they provide direct feedback on
the actual performance of the project and they offer opportunities for pep talks and mutual feedback.
Collect Pilot Data
Pilot users are lucky because they get an early release of the new system, but there is a price to
pay: each pilot user must fill out an evaluation of their pilot experience. But they’re not the
only ones. Every other person involved in the pilot, from technicians to trainers to
communicators, must fill out the pilot evaluation as well. You must evaluate each and every
aspect of the project including:
Initial user communications
Deployment schedule
Project management
Technical installation and deployment
Personality protection
End user training
Technical training
Support process
Overall solution design
Desktop presentation and look & feel
Risk management strategy
And anything else that makes up this end-to-end test. The more information you collect, the
more useful your evaluation will be.
Since the pilot is a complete test of all processes both administrative and technical for the
deployment, it is important to obtain information about each step of the process and from each
one of the people involved in the delivery and operation of the solution. Basically, you need to
poll every aspect of the pilot project. Once again, the project’s team site is by far the best
location to both collect and process this data.
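The tallying itself is straightforward once the responses are in one place. The Python sketch below averages per-category scores across respondents; the record layout, role names, and category names are illustrative assumptions, not the output of any particular survey tool:

```python
from statistics import mean

# Hypothetical pilot evaluation records: each respondent rates
# project aspects from 1 (poor) to 5 (excellent). Field and
# category names are placeholders for your own evaluation form.
responses = [
    {"role": "pilot user", "scores": {"communications": 4, "training": 3, "support": 5}},
    {"role": "technician", "scores": {"communications": 5, "training": 4, "support": 4}},
    {"role": "trainer",    "scores": {"communications": 3, "training": 5, "support": 4}},
]

def summarize(responses):
    """Average each category's score across all respondents."""
    totals = {}
    for r in responses:
        for category, score in r["scores"].items():
            totals.setdefault(category, []).append(score)
    return {category: round(mean(scores), 2) for category, scores in totals.items()}

print(summarize(responses))
# {'communications': 4.0, 'training': 4.0, 'support': 4.33}
```

Posting a summary like this to the project team site lets the whole team see at a glance which aspects of the pilot need attention.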
Perform the Pilot Evaluation
The project must first have taken the evaluation period into account—this is usually a period of
two weeks—so that there is ample time to collect and evaluate the feedback provided by both the
project team and the end users of the solution. In addition, pilot users should be given another
two weeks to test and use their new systems.
Much of this feedback has already been collected throughout the proof of concept, but since most
of the people providing feedback during this process were technically-savvy personnel, it often
tends to be more technical in nature. It is often possible for these highly trained teams to
overlook some things that seem totally obvious to a normal end user. Therefore, this is the type
of information the project team should concentrate on during the evaluation of the pilot.
It does sometimes happen that some technical component goes wrong at this stage. This is often due
to improper testing techniques and/or improper simulation of the production environment. Hopefully,
this will not happen in your project, but if it does, the buffer period you created in the
project plan to perform the evaluation may need to be extended, depending, of course, on the
complexity of the issue you need to deal with.
If modifications are extensive enough, you may have to redo the pilot. This is usually performed at the
very beginning of the deployment and requires input into the deployment schedule. Most likely, you
will be able to create a system update or patch that can be deployed with little impact to each pilot
system.
If any issues are found, they must be dealt with thoroughly. This means performing any required
corrections. They may range from simply rewriting the end user communications to rethinking
the actual deployment process. Most likely the modifications will be minor. You should still take
the time to make each one to ensure you have a product of the highest quality.
Then, when all modifications are completed, you need to get a sign off from upper management
or project sponsors on the overall solution. Once this sign off is obtained, you can hand off the
complete process to the deployment team and give the go ahead for the production deployment
to begin.
The impact of using an attrition program is that your operations staff will need to support multiple PC
OSes over a long period of time. This is expensive and cumbersome. There is no better way to
provide top-quality support than with a completely homogeneous PC environment. Make
sure you consider this additional cost in your budget evaluations and in your long-term project
plans.
Performing the Final Deployment
Once you’ve completed the pilot project to everyone’s satisfaction, you can move on to the meat
of the project, the actual deployment. Everything you’ve done up to now has been leading to this
point. Now, you’ll focus on deployment coordination. This is perhaps the trickiest part of this
operation: making sure that the deployment meets its performance goals and ensuring that every
deployment experience is a good deployment experience. This is where deployment reporting
will be crucial to the operation. Proper reports need to provide detailed information on the status
of each installation, the number of installations per day, the number of remaining installations
and the installation progression. These will prove to be invaluable planning and management
tools as you progress through the deployment (see Figure 10.4).
Commercial tools such as Altiris Deployment Solution can provide comprehensive information
at any time in your deployment. For example, Figure 10.4 displays a combination of both pre- and post-Vista migration reports through custom Web parts (not to be confused with SharePoint
Web parts) in a quick-glance dashboard view. As you can see, it shows you the entire migration
job from beginning to end in one single dashboard. The first Web part shows the mix of machine
types in an environment, including the processor architectures (Win32 versus Win64). The next
Web part on the lower left shows the count of computers by operating system. The next Web part
in the upper right shows a pre-assessment report, including computers that need to be upgraded
and the costs associated with the upgrade. The next shows the most popular software installed,
again from the pre-assessment report. The final Web part shows the real-time data
from Deployment Solution: which migration jobs have completed, along with any failures.
Reporting is closely tied to inventory and relies on the inventory solution you implemented or
relied on to obtain your assessments before the migration program was launched. This solution
should include canned reports that tell you how many computers are running the new OS, how
many are still waiting for applications, the status of personality restores on each system, and the
overall status of the deployment itself. Web-based reports are often the best because everyone
who needs them has access to them without special tools and you can link them to the team site
hosting your project’s information.
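The roll-up behind such a report can be sketched in a few lines. In this Python example, the machine records, field names, and status values are assumptions for illustration, not the schema of Altiris or any other inventory product:

```python
# Hypothetical status snapshot pulled from an inventory database.
# The field names and status values are placeholders for this sketch.
machines = [
    {"name": "PC-001", "status": "migrated"},
    {"name": "PC-002", "status": "migrated"},
    {"name": "PC-003", "status": "failed"},
    {"name": "PC-004", "status": "scheduled"},
    {"name": "PC-005", "status": "scheduled"},
]

def progress_report(machines, daily_target):
    """Summarize deployment progress and project the days remaining."""
    done = sum(1 for m in machines if m["status"] == "migrated")
    failed = sum(1 for m in machines if m["status"] == "failed")
    remaining = len(machines) - done   # failed PCs still need rework
    days_left = -(-remaining // daily_target)  # ceiling division
    return {
        "completed": done,
        "failed": failed,
        "remaining": remaining,
        "projected_days_left": days_left,
    }

print(progress_report(machines, daily_target=2))
# {'completed': 2, 'failed': 1, 'remaining': 3, 'projected_days_left': 2}
```

Feeding a summary like this into a Web page linked from the team site keeps coordinators and sponsors looking at the same numbers throughout the deployment.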
Figure 10.4. Altiris Deployment Solution provides comprehensive reporting in real time
As mentioned earlier, you should have a risk management strategy in place to ensure that you
have sufficient numbers of users and PCs to deploy each day so you can meet production goals.
After that, everything is nothing but a series of repetitive tasks that run over and over again until
the deployment—whether forklift or attrition—is complete.
In the meantime, you can deliver the technical training program to operations staff, bringing
them up to speed on both the actual solution you are deploying and the general features of Vista
and Office 2007 if you’ve elected to deploy a new productivity suite along with the OS. This
general training program should address each and every technical professional in your
organization, making sure they are all up to speed on the solution. You can marry this training
program with the deployment and train operators as they are migrated to the new OS. This way,
they can apply their knowledge as soon as possible. Training operators too early can lead to poor
retention of new skills.
Make sure you use focused technical training programs. Most technical training programs cover every
feature available in a product, but your solution does not include them all. This means you need to have
trainers review the contents of your solution and modify training programs to remove anything that
you did not implement. This will reduce training times and make the training much more practical for your staff.
You’ll also want to include a hardware disposal program in the deployment. If you are replacing
hardware through an attrition program, then you can rely on your normal disposal program. If
you are cascading all systems and re-issuing them into production through a forklift migration,
then you’ll only need a disposal program for those systems that are completely obsolete. Make
sure you wipe the disks appropriately through either low-level formats or actual wipe operations.
It’s amazing what hackers can find on disks that have been wiped improperly.
Disk wiping programs abound and don’t require you to put a spike through the hard drive. Many
provide completely secure disk wiping. Some are free and others are not. Just search for ‘disk wiping’
in your favorite online search engine to locate the program you need. But, if you
want completely compliant wiping, then look to Symantec Ghost which supports secure system
retirement that is compliant with the Department of Defense requirements or Altiris Deployment
Solution which also includes similar capabilities.
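To illustrate the principle of overwriting rather than merely deleting, here is a minimal Python sketch. It wipes a single file with several random passes; a real disk-wiping tool operates on the raw device and handles remapped sectors, so treat this only as a demonstration of the idea, not a compliant wiping solution:

```python
import os

def wipe_file(path, passes=3):
    """Overwrite a file's contents in place before deleting it.

    Simplified illustration only: wiping a whole disk to DoD-style
    standards requires overwriting the raw device (and dealing with
    remapped sectors), which dedicated tools do for you.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # overwrite with random bytes
            f.flush()
            os.fsync(f.fileno())       # force the write out to disk
    os.remove(path)
```

The point of the multiple passes and the explicit `os.fsync` call is that simply deleting a file leaves its data blocks intact on disk, which is exactly what improper wiping leaves behind for hackers to find.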
Finally, your deployment will take some time, perhaps months. Because of this, you need to have
the proper infrastructure in place to allow you to manage updates and patches for deployed
systems as well as updating the OS image itself. You’ll want to make sure that all systems are up
to date at all times. These tools will also verify that your systems stay pristine as per your
original goals. Since you’ve aimed for a complete system lock down according to industry-standard best practices, you’ll want to monitor each system to make sure it stays this way.
The Transfer phase will last longer than the deployment itself. That is
because the project’s support policy needs to run its course after the final PC has been
deployed. You can still move on to the next phase of the QUOTE System while this support
program is running (see Figure 10.5).
Figure 10.5. The Deployment Timeline
Running the Evaluate Phase
The Evaluate phase—the last phase of the project—is used to close off the project and pass each
activity on to recurring operations (see Figure 10.6). In fact, it focuses on three key operations:
Handoff to ongoing operations staff
Project post-mortem analysis
Preparation for solution evolution
Each requires special care to make sure the project is as complete a success as possible.
Figure 10.6. The Activities of the Evaluate Phase
Project Handoff to Operations
When the project is closing down, it needs to hand off all of its internal processes to ongoing
operations. This usually involves three key activities:
Final documentation delivery
A final transfer of knowledge to operations staff
Deactivation of project support mechanisms
The first portion of the handoff to operations is the finalization and delivery of all project
documentation. Technical documentation should be provided to operations staff while project
documentation will be handed off to your organization’s project management office. Technical
documentation should be customized to include the details of the solution you implemented. The
delivery should also include an update program and schedule for each document. Each time a
portion of the solution is modified or updated, the documentation should be updated as well.
Technical documentation is often one of the most difficult deliverables to obtain. For some reason,
technical staff members don’t like to write. But it doesn’t have to be complicated. Many technical
resources on Vista already exist and are available from Microsoft. Since your documents are
for internal publication only, you can use this information as a starting point and retain only those
elements you included in your solution. In addition, you can use tables and spreadsheets to
document the features you implemented. Several of the links provided throughout this book pointed to
resources you can use as sources for your own documentation. Don’t make the mistake of providing
too little documentation. And, worse, don’t wait until the end of the project to begin your
documentation effort.
It is best to identify an owner for each of the documents in this delivery. If owners are not
identified, then documentation falls by the wayside and quickly becomes out of date. Then, you
have a corresponding loss of connection to the state of your systems and structure of your
network. Make sure you don’t fall into this trap: maintain your documents at all times.
The second portion of this handoff is usually delivered in the form of a custom training program
that relies on a presentation format and open discussion between operations staff and project
staff. The presentation should focus on the deviance registry—the bug tracking system used by
the project—to cover the most common situations faced by support teams during the preparation
of the technical solution and those faced throughout its delivery to end users. It should also
identify the solutions used to resolve these issues as well as the tools used during the project
support program to ensure that the systems were up to date. This training should also provide a
map to all of the documentation delivered by the project, identifying where each deliverable fits
into the solution as well as when and how it should be applied.
Depending on the type of technical training that was provided to operations staff, this handoff
training program can take as little as one day per training group. If, however, operations staff did
not receive a full complement of technical training covering every aspect of the solution, then
your handoff program will need to be more complete and will require a longer time to deliver.
The third portion of the handoff deals with the support mechanisms put in place for the
deployment. Just like the mechanisms for actual deployment, the support mechanisms shouldn’t
be dismantled at all. Instead, they should be merged into ongoing support mechanisms.
Remember, new PC installations, PC re-installations and personality protections are ongoing
operations that organizations must master at all times. Since you’ve gone through the trouble of
putting all of these technologies together for the deployment, don’t make the mistake of getting
rid of everything just because the deployment is complete.
Operations staff will now be responsible for constant inventories, operating system and software
deliveries and updates, as well as maintaining user data protection mechanisms. In addition, if
you implemented a locked down environment as we recommended, then your staff will be
responsible for monitoring the steady state of this lock down. Because of this, they will need to
protect high privilege accounts and make sure they are secure at all times. If this is the first time
they do this, they will need to be even more vigilant just to make sure your systems stay
protected. This is a significant change for your staff. They can no longer ask for user passwords
and must use proper security practices both in their own work and when they deal with end users.
This will take some time before it becomes a habit.
This is one very good reason for using your own internal staff for the project as much as possible.
This way, they begin to get used to the new technologies as they work through project activities. This
makes for a much better transfer of knowledge.
Project Post-Mortem
Once the deployment is complete, upper management will want to evaluate the completeness and
effectiveness of the rollout. How many machines in total are up and running on the new OS? Or,
perhaps more importantly, which machines have not yet been successfully transitioned and what
is being done about them?
In addition, you can rely on application metering tools to confirm an efficient use of all software
licenses. Confirm that licenses have been allocated efficiently and take the time to calculate the
percent improvement in ratios of what has been deployed versus what is in use for each
application. This type of report will go a long way towards building and maintaining goodwill
with your management.
You should also review each of the project structures you put in place and identify which worked
well and which didn’t. This will let you identify which processes you can improve and where
your strengths lie. It is also a very good idea to perform an actual costs versus projections
analysis to see how you did overall in project budgeting. Finally, you should compare your
projected timelines with actual deliverables to see how your projections fared. All of these are
elements you need to review to make sure your projects constantly improve. Include each of
these findings in your standard project start up guide.
Don’t be a statistic. Make sure your projects are delivered on time and under budget. And in the
end, you’ll be able to build a set of customized best practices—practices that can make all of
your future projects profitable and timely.
Rely on the QUOTE System for future projects. Now that you understand how the QUOTE System
works, make it your change management strategy. It will help with IT projects of all types.
Calculating Return on Investment (ROI)
You might also consider the value of conducting a formal return on investment (ROI) study
based on the project. What were your costs for rolling out an OS before this project? Which tools
provided the best help to automate the process? How much did you save in terms of hard dollars
and reduced deployment resources using these tools? How did application virtualization help?
Are the issues you identified at the beginning of the project resolved?
In a standard environment, IT staffs spend much of their time dealing with the deployment of
new machines, operating systems, and applications. As Figure 1.10 from Chapter 1 showed,
research firm Gartner found that organizations implementing a well-managed PC platform can reduce costs by up to 40 percent. Did your savings match or even beat
this projection? Creating an official ROI report is another way to generate positive visibility
throughout your organization for the project you just completed.
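The core ROI arithmetic is simple enough to sketch. The figures below are hypothetical placeholders for illustration; substitute your own project accounting data:

```python
# Hypothetical project figures: per-PC deployment cost before and
# after automation, fleet size, and the up-front tooling investment.
cost_per_pc_before = 300.0     # manual OS deployment cost per PC
cost_per_pc_after = 120.0      # cost per PC with automated tooling
pc_count = 2500
tooling_investment = 150000.0  # licenses, engineering, training

# Savings from the cheaper per-PC process, then ROI on the investment.
savings = (cost_per_pc_before - cost_per_pc_after) * pc_count
roi = (savings - tooling_investment) / tooling_investment

print(f"Savings: ${savings:,.0f}, ROI: {roi:.0%}")
# Savings: $450,000, ROI: 200%
```

An ROI above zero means the tooling paid for itself; presenting the figure alongside the deployment-speed and support-call numbers makes the report far more persuasive to management.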
Future Systems Evolution
The project is coming to a close as you finish the final phase of the QUOTE System. Now that
Vista is deployed on all of your systems, you can begin the move towards systems evolution.
You should first identify whether you’ve met your systems management goals. Use the following
checklist:
Review support calls: Identify whether the number of calls has been reduced or their nature
has changed. The number of calls should be significantly smaller, and calls should be
focused on usage tips, not system problems.
Poll users: Are they satisfied with the level and quality of IT service? Were they satisfied
with the deployment?
Review procedures: Are IT staff using standard operating procedures, and has this reduced
the number of incidents they need to deal with on an ongoing basis?
Manage change: Is your organization now ready to manage IT change on an ongoing basis?
The implementation of a project of this magnitude isn’t an easy process, but with the principles
and practical examples of this guide along with industry best practices, you will be able to put in
place a complete continuous change management process for PCs within your organization.
Change is an ongoing process. It is the very nature of IT and anything technical in nature,
especially computer technology. Vista is only the beginning. Next, you’ll want to deploy
Windows Server 2008 to take advantage of even more of Vista’s feature set. Many of the
practices and strategies you used here will be reusable for that process.
For more guidance on migrating to Windows Server 2008 and taking advantage of its new
operating system virtualization capabilities, look for Windows Server 2008: The Complete
Reference by Ruest and Ruest from McGraw-Hill Osborne, ISBN-10: 0-07-226365-2, ISBN-13:
9780072263657 which should be available in bookstores by November 2007.
But change is something that can be controlled as you’ve seen throughout this project (see Figure
10.7). There are a number of different drivers for change, but if you apply structured practices
and risk management, you will be able to move from point A to point B without too many
disruptions to your organization. That’s because now that you have a structured systems
management strategy, you’ll always know where you are starting from. With your new inventory
systems, you should have and should always maintain proper information on your current
situation. Change management starts at the current situation, reviews new product features,
identifies industry and manufacturer best practices, identifies modifications to existing standards
and potential problems and uses these elements to perform the change in a structured manner.
Figure 10.7. The Change Management Process
Taking Baby Steps
As you’ve seen, the QUOTE System is a cycle—a cycle that begins again once it is complete.
The goal of your project has been to implement a single standard operating environment. Now
that it is in place, you can add functionality and capability, as you need it, and at the pace you
need it (see Figure 10.8).
Many organizations see the move to Windows Vista as a major undertaking. And such it is. But
if you take it in stride and design your initial implementation project with more reasonable
objectives, you can take smaller steps towards the migration. How do you do this? It’s simple.
Windows Vista includes a host of new and improved features. You don’t need to put them all in
place at once! Design your initial project with the objective of putting in place half or less of the
new feature set. Then, once the new technology is in place, scale your network by introducing
specific features through mini-projects.
Figure 10.8. Taking Baby Steps towards a Migration
‘Divide and conquer’ is one of the world’s most famous proverbs, often attributed to the ancient
Chinese strategist Sun Tzu. This is exactly what you should do when implementing major
technological change. Divide it into smaller pieces or ‘baby steps’. It will be much easier to conquer.
Taking small steps is just what the QUOTE System is all about. In this guide, you use the
QUOTE to perform your initial implementation of Windows Vista. Then, you use the Evaluate
Phase to begin systems evolution. It is at this point that you begin to recover and re-use your
migration investments by starting to add functionalities to the network environment. And since
you now have a controlled IT environment, the changes you bring to your new network should
no longer disrupt business operations. In fact, you begin a new QUOTE each time you start a
mini-project to add new functionality.
Lessons Learned
During this project, you learned quite a bit of information about yourself and your organization.
Here are a few final tips to assist your new evolutionary phase.
Be sure you have clear objectives. And if you use teams that include both internal
personnel and external consultants, be sure that these objectives are completely clear to
both teams. Ensure that each team’s own objectives are identified and communicated to
the other.
a. Be sure to make it clear to your internal team what ‘project mode’ means. Often,
external project teams are in project mode while internal teams are still in
recurrent management mode. Project mode means deadlines, deliverables,
acceptance processes and accountability. You can’t put everything off until
tomorrow when you are in project mode.
b. Your internal team may be disturbed by the external team because they will not
necessarily have the same objectives or work at the same pace. It may be a very
good idea to ensure that internal teams have a small ‘project management’ course
at the beginning of the project to ensure the smooth integration of both teams.
This should be part of the Project Welcome Kit.
If your internal teams have specific objectives that may not necessarily be covered
by the project, make sure that these are identified at the very beginning of the project. If
these specific objectives can be synchronized with the project’s own, map them right away.
Make sure you design and use Project Welcome Kits for each project you
implement. These are seldom created yet they can provide so much help because, at the
very least, they clearly identify everyone’s roles and responsibilities.
Make sure that your current situation analysis provides a summary report. The
current situation is the starting point for any change. If you don’t have a complete
summary of the situation, it will be very hard for external consultants to be able to grasp
the complexity of your environment. The summary will also provide a clearer picture of
your organization to internal team members.
Make sure you have regularly scheduled update meetings during your
implementation project. Communication is the key to both change and risk management.
These updates are crucial to the complete success of your project. Also, don’t forget
communications to users. Goodwill generation is part of every change project.
When you design the operational mode of your project, document it and make
sure everyone sticks to it. If deliverables have a set timeframe for acceptance, accept
them within this set timeframe. If escalation procedures work in a specific manner, use
them in this manner. Do not deviate from set practices unless it becomes critical to do so.
Make sure you have a very tight grasp on project activities. Slippage can put the
project at risk. It’s up to your team to make sure this doesn’t happen and if it does, that
you’re ready for it.
Select the right project manager and technical architect. These roles are probably
the most critical in the entire project.
Learn to delegate! Too many projects use key internal personnel for every single
critical activity. If you find yourself in this situation, learn to identify what is most
important to you and delegate other tasks. Many projects put themselves at risk by
relying on the same persons for every critical activity. This is one good way to burn out
and fail.
Once you have rationalized technologies within your network, make sure
department managers stick to the rationalization processes. Make them understand the
dollar value or extra expense associated with their acceptance of non-standard products.
If you use external consultants, make sure you team them up with internal
personnel. Ask them the right questions and have them document their answers. Make
sure you make the most from the knowledge transfer process.
Learn to recover and re-use your investment in your migration project by
continuing to scale your IT environment once it is completed.
Once you’ve implemented systems management practices, don’t lose them!
Systems management is an ongoing process. Ensure that you follow its growth and learn
to adapt it to changing business needs and technological environments.
Don’t forget that the first time is always the hardest time. Practice lets you learn
from your experiences.
One final recommendation: in your implementation, don’t try to adapt people to
technology, adapt technology to people. That’s what it’s made for.
This project involved a lot of activities, but the results make up for them. Managed systems are
about increased productivity, reduced support costs and constant evolution. They are about
constantly being up to date in terms of the technology you use in your network. They are about
using IT as a strategic business tool. But, to do so, you need to provide continuous learning
environments for continued increases in people skills. You also need structured change
management practices.
Make the project team site evolve. It can easily become the source for a continuous learning center
for Vista and other technologies. Evolution doesn’t stop with the end of the project. Make sure you
continue to foster knowledge and advanced usage of the technologies you took so much time to deploy.
Relying on technology is about three factors—People, PCs and Processes—interacting in your
own IT environment to the profit of your organization. Make the most of it now that you control
your PC assets.
Moving on to other QUOTEs
We finish the QUOTE for Vista with a move to other feature implementations and, eventually,
other migration projects. But now, they should be considerably easier to perform. You will
determine the success of your own implementation. The value you place on IT will help
determine the level of benefits you will draw from the systems you use. Technology is an asset
that you control, not the other way around.
This guide brings together the sum of our experience with migration projects. We’ve tried to
make it as useful as possible, but there is always room for improvement. If you find that there are
links missing or even if you have comments on its contents, don’t hesitate to communicate with
us at [email protected]. Good luck with your migration!