
Authorized Self-Study Guide
Designing for Cisco Internetwork
Solutions (DESGN)
Second Edition
Diane Teare
Cisco Press
800 East 96th Street
Indianapolis, IN 46240 USA
Authorized Self-Study Guide
Designing for Cisco Internetwork Solutions (DESGN), Second Edition
Diane Teare
Copyright © 2008 Cisco Systems, Inc.
Published by:
Cisco Press
800 East 96th Street
Indianapolis, IN 46240 USA
All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopying, recording, or by any information storage and retrieval system, without written permission from the publisher, except for the inclusion of brief quotations in a review.
Printed in the United States of America
First Printing October 2007
Library of Congress Cataloging-in-Publication Data:
Teare, Diane.
Designing for Cisco internetwork solutions (DESGN) / Diane Teare. -- 2nd ed.
p. cm. -- (Authorized self-study guide)
Rev. ed. of: CCDA self-study : designing for Cisco internetwork solutions (DESGN) / Diane Teare. c2004.
"Exam 640-863."
ISBN-13: 978-1-58705-272-9 (hardcover)
ISBN-10: 1-58705-272-5 (hardcover)
1. Computer networks--Examinations--Study guides. 2. Telecommunications engineers--Certification. 3. Internetworking
(Telecommunication)--Examinations--Study guides. I. Title. II. Series.
TK5105.5.T418 2008
004.6--dc22
2007032855
ISBN-13: 978-1-58705-272-9
ISBN-10: 1-58705-272-5
Warning and Disclaimer
This book is designed to provide information about designing Cisco networks. Every effort has been made to make this book as complete and as accurate as possible, but no warranty or fitness is implied.
The information is provided on an “as is” basis. The author, Cisco Press, and Cisco Systems, Inc. shall have neither liability nor
responsibility to any person or entity with respect to any loss or damages arising from the information contained in this book or from
the use of the discs or programs that may accompany it.
The opinions expressed in this book belong to the author and are not necessarily those of Cisco Systems, Inc.
Trademark Acknowledgments
All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Cisco Press
or Cisco Systems, Inc., cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.
Corporate and Government Sales
The publisher offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales, which may
include electronic versions and/or custom covers and content particular to your business, training goals, marketing focus, and branding interests. For more information, please contact:
U.S. Corporate and Government Sales
1-800-382-3419
corpsales@pearsontechgroup.com
For sales outside the United States please contact:
International Sales
international@pearsoned.com
Feedback Information
At Cisco Press, our goal is to create in-depth technical books of the highest quality and value. Each book is crafted with care and precision, undergoing rigorous development that involves the unique expertise of members from the professional technical community.
Readers’ feedback is a natural continuation of this process. If you have any comments regarding how we could improve the quality
of this book, or otherwise alter it to better suit your needs, you can contact us through email at feedback@ciscopress.com. Please
make sure to include the book title and ISBN in your message.
We greatly appreciate your assistance.
Publisher: Paul Boger
Cisco Representative: Anthony Wolfenden
Associate Publisher: Dave Dusthimer
Cisco Press Program Manager: Jeff Brady
Executive Editor: Brett Bartow
Development Editor: Eric Stewart
Managing Editor: Patrick Kanouse
Copy Editor: Mike Henry
Senior Project Editor: Tonya Simpson
Technical Editors: Shawn Boyd and Richard Piquard
Editorial Assistant: Vanessa Evans
Proofreader: Gayle Johnson
Designer: Louisa Adair
Composition: Mark Shirar
Indexer: Ken Johnson
About the Author
Diane Teare is a professional in the networking, training, and e-learning fields. She has more than
20 years of experience in designing, implementing, and troubleshooting network hardware and
software and has also been involved in teaching, course design, and project management. She has
extensive knowledge of network design and routing technologies and is an instructor with one of
the largest authorized Cisco Learning Partners. She was recently the Director of e-Learning for
the same company, where she was responsible for planning and supporting all the company’s
e-learning offerings in Canada, including Cisco courses. Diane has a bachelor’s degree in applied
science in electrical engineering (BASc) and a master’s degree in applied science in management
science (MASc). She is a certified Cisco instructor and currently holds her CCNP and CCDP
certifications. She coauthored the Cisco Press titles Campus Network Design Fundamentals, the
three editions of Building Scalable Cisco Internetworks (BSCI), and Building Scalable Cisco
Networks. She also edited the first edition of this book and Designing Cisco Networks.
About the Technical Reviewers
Shawn Boyd is a senior network consultant for ARP Technologies, Inc. He has worldwide
experience in consulting on many different projects, such as security/VoIP for Cisco Systems
Israel, intrusion prevention for Top Layer Networks of Boston, and DSL infrastructure rollout for
Telus Canada. Shawn is also active in course development and is a certified Cisco instructor with
ARP Technologies, Inc., responsible for teaching most of the Cisco curriculum. He has coauthored
IT security–related books for Cisco Press and has been a technical editor on a few Cisco Press
Self-Study Guides. His background is in network security and design at a service provider level.
He has worked for Canada’s largest telco providers, performing network designs and
implementations, and was lead contact on many large government contracts.
Richard Piquard is a senior network architect for Global Knowledge Network, Inc. He has more
than seven years of experience as a certified Cisco instructor, teaching introductory and advanced
routing, switching, design, and voice-related courses throughout North America and Europe.
Richard has a highly diverse skill set in design and implementation of both Cisco and multivendor
environments. His industry experience ranges from serving as network chief of the Marine
Corps Systems Command, Quantico, Virginia, to working as a field engineer for the Xylan
Corporation (Alcatel), Calabasas, California, to serving on a four-person, worldwide network
planning and implementation team for the Household Finance Corporation, Chicago.
Dedications
This book is dedicated to my wonderful husband, Allan Mertin, whose optimism inspires
me; to our captivating son, Nicholas, and his enthusiastic curiosity and quest for
knowledge; to my parents, Syd and Beryl, for their continuous love and support; and to
my friends, including “the Girls,” for continuing to help me keep my sanity!
Acknowledgments
I would like to thank the many people who helped put this book together, including the following:
The Cisco Press team—Brett Bartow, the executive editor, for driving this book through the
process and for his continued support over the years. Vanessa Evans was instrumental in organizing
the logistics and administration. Eric Stewart, the development editor, has been invaluable in
producing a high-quality manuscript. I would also like to thank Tonya Simpson for her excellent
work in shepherding this book through the editorial process. Thanks also to Richard Froom, Balaji
Sivasubramanian, and Erum Frahim, the authors of Cisco Press’s Building Cisco Multilayer
Switched Networks (BCMSN), Fourth Edition.
The Cisco Systems team—Many thanks to the members of the team who developed the latest
version of the DESGN course. The team included two people from Chesapeake Netcraftsmen:
Carole Warner Reece and Peter Welcher. Members of the team from Cisco Systems included
Dennis Masters, Dwayne Fields, Pat Lao, Bill Chadwick, Bob Eckoff, Bob Ligett, Drew Blair, and
the project manager, Dan Stern.
The technical reviewers—I would like to thank the technical reviewers of this book, Shawn Boyd
and Richard Piquard, for their comprehensive, detailed review and beneficial input.
My family—Of course, this book would not have been possible without the constant
understanding and tolerance of my family, who have lived through the many weekends and nights
it took to complete it. Special thanks to Nicholas for always making sure I got lots of hugs!
Contents at a Glance
Foreword xxvi
Introduction xxvii
Chapter 1 Network Fundamentals Review 3
Chapter 2 Applying a Methodology to Network Design 57
Chapter 3 Structuring and Modularizing the Network 129
Chapter 4 Designing Basic Campus and Data Center Networks 221
Chapter 5 Designing Remote Connectivity 293
Chapter 6 Designing IP Addressing in the Network 377
Chapter 7 Selecting Routing Protocols for the Network 429
Chapter 8 Voice Network Design Considerations 479
Chapter 9 Wireless Network Design Considerations 565
Chapter 10 Evaluating Security Solutions for the Network 651
Appendix A Answers to Review Questions and Case Studies 725
Appendix B IPv4 Supplement 807
Appendix C Open System Interconnection (OSI) Reference Model 845
Appendix D Network Address Translation 859
Acronyms and Abbreviations 871
Index 888
Contents
Foreword xxvi
Introduction xxvii
Chapter 1 Network Fundamentals Review 3
Introduction to Networks 3
Protocols and the OSI Model 4
The OSI Model 5
Protocols 6
The OSI Layers 6
Physical Layer—Layer 1 7
Data Link Layer—Layer 2 7
Network Layer—Layer 3 7
Transport Layer—Layer 4 8
Upper Layers—Layers 5 Through 7 9
Communication Among OSI Layers 9
LANs and WANs 11
Network Devices 13
Terminology: Domains, Bandwidth, Unicast, Broadcast, and Multicast 13
Hubs 14
Switches 14
Routers 16
Introduction to the TCP/IP Suite 17
TCP/IP Transport Layer Protocols 18
Port Numbers 20
TCP Sequencing, Acknowledgment, and Windowing 21
TCP/IP Internet Layer Protocols 24
Protocols 25
IP Datagrams 25
TCP/IP-Related Data Link Layer Protocol 27
Routing 27
Routers Work at the Lower Three OSI Layers 28
Routing Tables 29
Routing Protocols 31
Addressing 31
Physical Addresses 31
Logical Addresses 32
Routing and Network Layer Addresses 33
IP Addresses 34
IP Address Classes 34
Private and Public IP Addresses 35
Subnets 36
Switching Types 38
Layer 2 Switching 38
Layer 3 Switching 41
Spanning Tree Protocol 42
Redundancy in Layer 2 Switched Networks 42
STP Terminology and Operation 43
STP Terminology 43
STP States 45
Rapid STP 47
Virtual LANs 47
VLAN Membership 48
Trunks 49
STP and VLANs 49
Inter-VLAN Routing 51
Comprehensive Example 52
Summary 55
Chapter 2 Applying a Methodology to Network Design 57
The Cisco Service Oriented Network Architecture 57
Business Drivers for a New Network Architecture 57
Intelligence in the Network 58
Cisco SONA Framework 60
Network Design Methodology 64
Design as an Integral Part of the PPDIOO Methodology 64
Benefits of the Lifecycle Approach to Network Design 66
Design Methodology 67
Identifying Customer Requirements 69
Assessing the Scope of a Network Design Project 69
Identifying Required Information 70
Extracting Initial Requirements 70
Gathering Network Requirements 71
Planned Applications and Network Services 73
Organizational Goals 75
Organizational Constraints 78
Technical Goals 80
Technical Constraints 81
Characterizing the Existing Network and Sites 83
Customer Input 83
Sample Site Contact Information 84
Sample High-Level Network Diagram 86
Auditing or Assessing the Existing Network 87
Tools for Assessing the Network 89
Manual Information Collection Examples 90
Automatic Information Collection Examples 94
Analyzing Network Traffic and Applications 95
Tools for Analyzing Traffic 96
NBAR 97
NetFlow 98
Other Network Analysis Tools Examples 101
Network Health Checklist 102
Summary Report 103
Creating a Draft Design Document 104
Time Estimates for Performing Network Characterization 105
Using the Top-Down Approach to Network Design 107
The Top-Down Approach to Network Design 107
Top-Down Approach Compared to Bottom-Up Approach 108
Top-Down Design Example 108
Decision Tables in Network Design 110
Structured Design 112
Network Design Tools 114
Building a Prototype or Pilot Network 115
Documenting the Design 116
The Design Implementation Process 117
Planning a Design Implementation 117
Implementing and Verifying the Design 119
Monitoring and Redesigning the Network 119
Summary 120
References 120
Case Study: ACMC Hospital Network Upgrade 121
Case Study Scenario 121
Organizational Facts 121
Current Situation 122
Plans and Requirements 124
Case Study Questions 124
Review Questions 125
Chapter 3 Structuring and Modularizing the Network 129
Network Hierarchy 129
Hierarchical Network Model 129
Hierarchical Network Design Layers 129
Access Layer Functionality 131
The Role of the Access Layer 131
Layer 2 and Multilayer Switching in the Access Layer 132
Access Layer Example 133
Distribution Layer Functionality 134
The Role of the Distribution Layer 134
Distribution Layer Example 136
Core Layer Functionality 136
The Role of the Core Layer 137
Switching in the Core Layer 137
Hierarchical Routing in the WAN 139
Using a Modular Approach to Network Design 140
Evolution of Enterprise Networks 140
Cisco SONA Framework 141
Functional Areas of the Cisco Enterprise Architecture 141
Guidelines for Creating an Enterprise Network 145
Enterprise Campus Modules 146
Campus Infrastructure Module 148
Building Access Layer 148
Building Distribution Layer 148
Campus Core Layer 149
Server Farm Module 149
Enterprise Campus Guidelines 150
Enterprise Edge Modules 150
E-commerce Module 152
Internet Connectivity Module 152
Remote Access and VPN Module 153
WAN and MAN and Site-to-Site VPN Module 154
Enterprise Edge Guidelines 154
Service Provider Modules 155
Internet Service Provider Module 156
PSTN Module 156
Frame Relay/ATM Module 156
Remote Enterprise Modules 157
Enterprise Branch Module 157
Enterprise Data Center Module 158
Enterprise Teleworker Module 158
Services Within Modular Networks 159
Interactive Services 159
Security Services in a Modular Network Design 162
Internal Security 162
External Threats 166
High-Availability Services in a Modular Network Design 169
Designing High Availability into a Network 169
High Availability in the Server Farm 170
Designing Route Redundancy 173
Designing Link Redundancy 175
Voice Services in a Modular Network Design 177
Two Voice Implementations 177
IP Telephony Components 178
Modular Approach in Voice Network Design 179
Evaluating the Existing Data Infrastructure for Voice Design 181
Wireless Services in a Modular Network 181
Centralized WLAN Components 182
Application Networking Services in a Modular Network Design 183
ANS Examples 184
ANS Components 184
Network Management Protocols and Features 186
Network Management Architecture 186
Protocols and Standards 187
SNMP 188
SNMPv1 189
SNMPv2 190
SNMPv3 191
MIB 192
MIB-II 194
Cisco MIB 195
MIB Polling Guidelines 195
MIB Example 196
RMON 197
RMON1 198
RMON1 Groups 198
RMON1 and RMON2 199
RMON2 Groups 200
NetFlow 202
NetFlow Versus RMON Information Gathering 204
CDP 205
CDP Information 206
How CDP Works 206
Syslog Accounting 207
Syslog Distributed Architecture 210
Summary 211
References 212
Case Study: ACMC Hospital Modularity 212
Review Questions 215
Chapter 4 Designing Basic Campus and Data Center Networks 221
Campus Design Considerations 221
Designing an Enterprise Campus 221
Network Application Characteristics and Considerations 222
Peer-Peer Applications 222
Client–Local Server Applications 223
Client–Server Farm Applications 224
Client–Enterprise Edge Applications 226
Application Requirements 227
Environmental Characteristics and Considerations 228
Network Geography Considerations 228
Transmission Media Considerations 230
Infrastructure Device Characteristics and Considerations 235
Convergence Time 236
Multilayer Switching and Cisco Express Forwarding 237
IP Multicast 239
QoS Considerations in LAN Switches 241
Load Sharing in Layer 2 and Layer 3 Switches 244
Enterprise Campus Design 245
Enterprise Campus Requirements 246
Building Access Layer Design Considerations 246
Managing VLANs and STP 247
Managing Trunks Between Switches 251
Managing Default PAgP Settings 252
Implementing Routing in the Building Access Layer 252
Building Distribution Layer Design Considerations 253
Using First-Hop Redundancy Protocols 254
Deploying Layer 3 Routing Protocols Between Building Distribution and Campus Core
Switches 255
Supporting VLANs That Span Multiple Building Access Layer Switches 257
Campus Core Design Considerations 257
Large Campus Design 259
Small and Medium Campus Design Options 260
Edge Distribution at the Campus Core 261
Server Placement 263
Servers Directly Attached to Building Access or Building Distribution Layer Switches 264
Servers Directly Attached to the Campus Core 264
Servers in a Server Farm Module 264
Server Farm Design Guidelines 266
Server Connectivity Options 267
The Effect of Applications on Switch Performance 267
Enterprise Data Center Design Considerations 268
The Enterprise Data Center 268
The Cisco Enterprise Data Center Architecture Framework 269
Enterprise Data Center Infrastructure 272
Data Center Access Layer 274
Data Center Aggregation Layer 274
Data Center Core Layer 275
Density and Scalability of Servers 276
Summary 276
References 277
Case Study: ACMC Hospital Network Campus Design 277
Case Study Additional Information 278
Case Study Questions 279
Review Questions 289
Chapter 5 Designing Remote Connectivity 293
Enterprise Edge WAN Technologies 293
Introduction to WANs 293
WAN Interconnections 294
Traditional WAN Technologies 295
Packet-Switched Network Topologies 296
WAN Transport Technologies 298
TDM (Leased Lines) 299
ISDN 300
Frame Relay 300
Asynchronous Transfer Mode 301
MPLS 301
Metro Ethernet 304
DSL Technologies 304
Cable Technology 308
Wireless Technologies 309
Synchronous Optical Network and Synchronous Digital Hierarchy 311
Dense Wavelength Division Multiplexing 313
Dark Fiber 314
WAN Transport Technology Pricing and Contract Considerations 314
WAN Design 316
Application Requirements of WAN Design 317
Response Time 318
Throughput 318
Packet Loss 318
Reliability 318
Technical Requirements: Maximum Offered Traffic 319
Technical Requirements: Bandwidth 320
Evaluating the Cost-Effectiveness of WAN Ownership 321
Optimizing Bandwidth in a WAN 322
Data Compression 322
Bandwidth Combination 324
Window Size 324
Queuing to Improve Link Utilization 325
Congestion Avoidance 329
Traffic Shaping and Policing to Rate-Limit Traffic Classes 330
Using WAN Technologies 332
Remote Access Network Design 332
VPN Design 333
VPN Applications 333
VPN Connectivity Options 334
Benefits of VPNs 337
WAN Backup Strategies 338
Dial Backup Routing 338
Permanent Secondary WAN Link 338
Shadow PVC 340
The Internet as a WAN Backup Technology 341
IP Routing Without Constraints 341
Layer 3 Tunneling with GRE and IPsec 341
Enterprise Edge WAN and MAN Architecture 343
Enterprise Edge WAN and MAN Considerations 344
Cisco Enterprise MAN and WAN Architecture Technologies 345
Selecting Enterprise Edge Components 348
Hardware Selection 348
Software Selection 348
Cisco IOS Software Packaging 348
Cisco IOS Packaging Technology Segmentation 351
Comparing the Functions of Cisco Router Platforms and Software Families 351
Comparing the Functions of Multilayer Switch Platforms and Software Families 352
Enterprise Branch and Teleworker Design 352
Enterprise Branch Architecture 353
Enterprise Branch Design 355
Small Branch Office Design 356
Medium Branch Office Design 359
Large Branch Office Design 360
Enterprise Teleworker (Branch of One) Design 362
Summary 364
References 365
Case Study: ACMC Hospital Network WAN Design 366
Case Study Additional Information 366
Business Factors 367
Technical Factors 367
Case Study Questions 368
Review Questions 372
Chapter 6 Designing IP Addressing in the Network 377
Designing an IP Addressing Plan 377
Private and Public IPv4 Addresses 377
Private Versus Public Address Selection Criteria 378
Interconnecting Private and Public Addresses 379
Guidelines for the Use of Private and Public Addresses in an Enterprise Network 380
Determining the Size of the Network 381
Determining the Network Topology 382
Size of Individual Locations 383
Planning the IP Addressing Hierarchy 384
Hierarchical Addressing 384
Route Summarization 384
IP Addressing Hierarchy Criteria 386
Benefits of Hierarchical Addressing 386
Summarization Groups 387
Impact of Poorly Designed IP Addressing 388
Benefits of Route Aggregation 389
Fixed- and Variable-Length Subnet Masks 390
Routing Protocol Considerations 391
Classful Routing Protocols 391
Classless Routing Protocols 393
Hierarchical IP Addressing and Summarization Plan Example 394
Methods of Assigning IP Addresses 395
Static Versus Dynamic IP Address Assignment Methods 396
When to Use Static or Dynamic Address Assignment 396
Guidelines for Assigning IP Addresses in the Enterprise Network 397
Using DHCP to Assign IP Addresses 398
Name Resolution 400
Static Versus Dynamic Name Resolution 400
When to Use Static or Dynamic Name Resolution 401
Using DNS for Name Resolution 401
DHCP and DNS Server Location in a Network 403
Introduction to IPv6 404
IPv6 Features 405
IPv6 Address Format 406
IPv6 Address Types 408
IPv6 Address Scope Types 408
Interface Identifiers in IPv6 Addresses 409
IPv6 Unicast Addresses 410
Global Aggregatable Unicast Addresses 411
Link-Local Unicast Addresses 411
IPv6 Address Assignment Strategies 412
Static IPv6 Address Assignment 412
Dynamic IPv6 Address Assignment 413
IPv6 Name Resolution 414
Static and Dynamic IPv6 Name Resolution 414
IPv4- and IPv6-Aware Applications and Name Resolution 414
IPv4-to-IPv6 Transition Strategies and Deployments 415
Differences Between IPv4 and IPv6 415
IPv4-to-IPv6 Transition 416
Dual-Stack Transition Mechanism 416
Tunneling Transition Mechanism 417
Translation Transition Mechanism 418
IPv6 Routing Protocols 419
RIPng 420
EIGRP for IPv6 420
OSPFv3 421
Integrated IS-IS Version 6 421
BGP4+ 422
Summary 422
References 423
Case Study: ACMC Hospital IP Addressing Design 423
Review Questions 426
Chapter 7 Selecting Routing Protocols for the Network 429
Routing Protocol Features 429
Static Versus Dynamic Routing 430
Static Routing 430
Dynamic Routing 431
Interior Versus Exterior Routing Protocols 432
IGP and EGP Example 432
Distance Vector Versus Link-State Versus Hybrid Protocols 433
Distance Vector Example 435
Link-State Example 436
Routing Protocol Metrics 438
What Is a Routing Metric? 438
Metrics Used by Routing Protocols 439
Routing Protocol Convergence 441
RIPv2 Convergence Example 442
Comparison of Routing Protocol Convergence 443
Flat Versus Hierarchical Routing Protocols 444
Flat Routing Protocols 444
Hierarchical Routing Protocols 445
Routing Protocols for the Enterprise 446
EIGRP 446
EIGRP Terminology 447
EIGRP Characteristics 449
OSPF 449
OSPF Hierarchical Design 450
OSPF Characteristics 451
Integrated IS-IS 453
Integrated IS-IS Terminology 453
Integrated IS-IS Characteristics 455
Summary of Interior Routing Protocol Features 455
Selecting an Appropriate Interior Routing Protocol 456
When to Choose EIGRP 457
When to Choose OSPF 457
Border Gateway Protocol 457
BGP Implementation Example 459
External and Internal BGP 460
Routing Protocol Deployment 461
Routing Protocols in the Enterprise Architecture 461
Routing in the Campus Core 461
Routing in the Building Distribution Layer 463
Routing in the Building Access Layer 463
Routing in the Enterprise Edge Modules 464
Route Redistribution 464
Using Route Redistribution 465
Administrative Distance 466
Selecting the Best Route 467
Route Redistribution Direction 467
Route Redistribution Planning 468
Route Redistribution in the Enterprise Architecture 468
Route Filtering 470
Redistributing and Filtering with BGP 470
Route Summarization 471
The Benefits of Route Summarization 471
Recommended Practice: Summarize at the Distribution Layer 471
Recommended Practice: Passive Interfaces for IGP at the Access Layer 473
Summary 474
References 474
Case Study: ACMC Hospital Routing Protocol Design 475
Review Questions 475
Chapter 8 Voice Network Design Considerations 479
Traditional Voice Architectures and Features 479
Analog and Digital Signaling 479
The Analog-to-Digital Process 480
Time-Division Multiplexing in PSTN 482
PBXs and the PSTN 483
Differences Between a PBX and a PSTN Switch 484
PBX Features 485
PSTN Switches 486
Local Loops, Trunks, and Interswitch Communications 487
Telephony Signaling 489
Telephony Signaling Types 490
Analog Telephony Signaling 491
Digital Telephony Signaling 491
PSTN Numbering Plans 495
International Numbering Plans 495
Call Routing 496
Numbering Plans 496
Integrating Voice Architectures 500
Introduction to Integrated Networks 500
Drivers for Integrating Voice and Data Networks 502
H.323 503
Introduction to H.323 503
H.323 Components 503
H.323 Example 507
Introduction to IP Telephony 508
IP Telephony Design Goals 509
Single-Site IP Telephony Design 510
Multisite WAN with Centralized Call Processing Design 511
Multisite WAN with Distributed Call Processing Design 513
Call Control and Transport Protocols 514
Voice Conversation Protocols 515
Call Control Functions with H.323 516
Call Control Functions with the Skinny Client Control Protocol 516
Call Control Functions with SIP 518
Call Control Functions with MGCP 520
Voice Issues and Requirements 521
Voice Quality Issues 521
Packet Delays 521
Fixed Network Delays 522
Variable Network Delays 524
Jitter 526
Packet Loss 527
Echo 527
Voice Coding and Compression 529
Coding and Compression Algorithms 530
Voice Coding Standards (Codecs) 530
Sound Quality 531
Codec Complexity, DSPs, and Voice Calls 532
Bandwidth Considerations 533
Reducing the Amount of Voice Traffic 533
Voice Bandwidth Requirements 534
Codec Design Considerations 536
QoS for Voice 536
Bandwidth Provisioning 538
Signaling Techniques 538
Classification and Marking 538
Congestion Avoidance 539
Traffic Policing and Shaping 539
Congestion Management: Queuing and Scheduling 539
Link Efficiency 541
CAC 541
Building Access Layer QoS Mechanisms for Voice 544
AutoQoS 545
Introduction to Voice Traffic Engineering 545
Terminology 546
Blocking Probability and GoS 546
Erlang 547
CCS 547
Busy Hour and BHT 547
CDR 548
Erlang Tables 548
Erlang B Table 549
Erlang Examples 549
Trunk Capacity Calculation Example 550
Off-Net Calls Cost Calculation Example 551
Calculating Trunk Capacity or Bandwidth 552
Cisco IP Communications Return on Investment Calculator 553
Summary 553
References 554
Case Study: ACMC Hospital Network Voice Design 555
Case Study Additional Information 556
Case Study Questions 556
Review Questions 557
Chapter 9
Wireless Network Design Considerations
565
Introduction to Wireless Technology 565
RF Theory 567
Phenomena Affecting RF 567
RF Math 568
Antennas 570
Agencies and Standards Groups 570
IEEE 802.11 Operational Standards 571
IEEE 802.11b/g Standards in the 2.4 GHz Band 572
802.11a Standard in the 5-GHz Band 575
802.11 WLANs Versus 802.3 Ethernet LANs 576
WLAN Topologies 577
WLAN Components 577
Cisco-Compatible WLAN Clients 577
Autonomous APs 578
Lightweight APs 578
AP Power 578
WLAN Operation 579
WLAN Security 580
The Cisco Unified Wireless Network 581
The Cisco UWN Architecture 581
Cisco UWN Elements 582
Cisco UWN Lightweight AP and WLC Operation 583
Cisco UWN Wireless Authentication and Encryption 585
LWAPP Fundamentals 588
Layer 2 LWAPP Architecture 588
Layer 3 LWAPP Architecture 589
WLAN Controllers 590
WLC Terminology 590
WLC Interfaces 590
WLC Platforms 592
Access Point Support Scalability 594
Lightweight APs 597
Lightweight AP Discovery and Join Process 598
Lightweight AP and WLC Control Messages 600
Access Point Modes 601
Mobility in a Cisco Unified Wireless Network 602
Intracontroller Roaming 603
Intercontroller Roaming at Layer 2 604
Intercontroller Roaming at Layer 3 606
Mobility Groups 607
Recommended Practices for Supporting Roaming 609
Radio Resource Management and RF Groups 610
Radio Resource Management 610
RF Grouping 612
AP Self-Healing 613
Cisco UWN Review 613
Designing Wireless Networks with Lightweight Access Points and Wireless LAN Controllers 615
RF Site Survey 615
RF Site Survey Process 616
Define the Customer Requirements 616
Identify Coverage Areas and User Density 617
Determine Preliminary AP Locations 618
Perform the Actual Survey 619
Document the Findings 621
Controller Redundancy Design 621
Dynamic Controller Redundancy 622
Deterministic Controller Redundancy 624
Deterministic Redundancy Options 625
Design Considerations for Guest Services in Wireless Networks 628
Design Considerations for Outdoor Wireless Networks 631
Wireless Mesh Components 632
MAP-to-RAP Connectivity 633
Mesh Design Recommendations 634
Design Considerations for Campus Wireless Networks 635
Common Wireless Design Questions 635
Controller Placement Design 636
Campus Controller Options 637
Design Considerations for Branch Office Wireless Networks 638
Branch Office Considerations 638
Local MAC 638
REAP 639
Hybrid REAP 640
Branch Office WLAN Controller Options 642
Summary 642
References 643
Case Study: ACMC Hospital UWN Considerations 644
Review Questions 646
Chapter 10 Evaluating Security Solutions for the Network 651
Network Security 651
The Need for Network Security 651
Network Security Requirements 652
Security Legislation Examples 652
Terminology Related to Security 653
Threats and Risks 654
Threat: Reconnaissance Attacks 655
Threat: Gaining Unauthorized Access to Systems 657
Threat: DoS 657
Risk: Integrity Violations and Confidentiality Breaches 659
Network Security Policy and Process 660
Security Policy 662
The Need for a Security Policy 662
Risk Assessment and Management 663
Documenting the Security Policy 666
Network Security Process 667
The Cisco Self-Defending Network 669
The Cisco Self-Defending Network Framework 669
Secure Network Platform 670
Cisco Self-Defending Network Phases 670
Trust and Identity Management 672
Trust 672
Identity 674
Access Control 677
Trust and Identity Management Technologies 677
Identity and Access Control Deployment 681
Threat Defense 682
Physical Security 683
Infrastructure Protection 686
Threat Detection and Mitigation 688
Secure Connectivity 691
Encryption Fundamentals 692
VPN Protocols 693
Transmission Confidentiality: Ensuring Privacy 693
Maintaining Data Integrity 695
Security Management 697
Cisco Security Management Technologies 698
Network Security Solutions 699
Integrated Security Within Network Devices 699
Cisco IOS Router Security 700
Security Appliances 702
IPSs 702
Catalyst Services Modules 703
Endpoint Security Solutions 705
Securing the Enterprise Network 706
Deploying Security in the Enterprise Campus 706
Deploying Security in the Enterprise Data Center 707
Deploying Security in the Enterprise Edge 709
Summary 711
References 712
Case Study 10-1: ACMC Hospital Network Security Design 713
Case Study Questions 714
Case Study 10-2: ACMC Hospital Network—Connecting More Hospitals 715
Case Study Questions 715
Review Questions 719
Appendix A Answers to Review Questions and Case Studies 725
Appendix B IPv4 Supplement 807
Appendix C Open System Interconnection (OSI) Reference Model 845
Appendix D Network Address Translation 859
Acronyms and Abbreviations 871
Index 888
Icons Used in This Book
[The icon legend artwork is not reproduced here. The icons used in this book represent the following devices and connections: Access Point; Cisco Unified Communications Manager; H.323 Device; PBX; Router; Catalyst Switch; DSU/CSU; Cisco IP Phone; Bridge; Hub; Multilayer Switch; ATM Switch; ISDN/Frame Relay Switch; Content Switch; Gateway; Access Server; Phone; NetFlow Router; Voice-Enabled Router; Router with Firewall; Communication Server; LWAPP; VPN Concentrator; Network Management Appliance; DSLAM; Wide Area Application Engine; WiSM; Optical Services Router; Lightweight Double Radio Access Point; WLAN Controller; PC with Software; Terminal; File Server; Web Server; CiscoWorks Workstation; Modem; Printer; Laptop; Cisco Security MARS; NAC Appliance; PIX Security Appliance; Network Cloud; PC; Token Ring; NAS; Cisco MDS 9000 SSM; Optical Transport; InfiniBand; WAFS; IDS; FDDI; and line styles for Ethernet, serial, switched serial, and wireless connections.]
Command Syntax Conventions
The conventions used to present command syntax in this book are the same conventions used in
the IOS Command Reference. The Command Reference describes these conventions as follows:
■ Boldface indicates commands and keywords that are entered literally as shown. In actual configuration examples and output (not general command syntax), boldface indicates commands that are manually input by the user (such as a show command).
■ Italics indicate arguments for which you supply actual values.
■ Vertical bars (|) separate alternative, mutually exclusive elements.
■ Square brackets [ ] indicate optional elements.
■ Braces { } indicate a required choice.
■ Braces within brackets [{ }] indicate a required choice within an optional element.
Foreword
Cisco Certification Self-Study Guides are excellent self-study resources for networking
professionals to maintain and increase internetworking skills and to prepare for Cisco Career
Certification exams. Cisco Career Certifications are recognized worldwide and provide valuable,
measurable rewards to networking professionals and their employers.
Cisco Press exam certification guides and preparation materials offer exceptional—and flexible—
access to the knowledge and information required to stay current in one’s field of expertise, or to
gain new skills. Whether used to increase internetworking skills or as a supplement to a formal
certification preparation course, these materials offer networking professionals the information
and knowledge required to perform on-the-job tasks proficiently.
Developed in conjunction with the Cisco certifications and training team, Cisco Press books are
the only self-study books authorized by Cisco. They offer students a series of exam practice tools
and resource materials to help ensure that learners fully grasp the concepts and information
presented.
Additional authorized Cisco instructor-led courses, e-learning, labs, and simulations are available
exclusively from Cisco Learning Solutions Partners worldwide. To learn more, visit
http://www.cisco.com/go/training/.
I hope you will find this guide to be an essential part of your exam preparation and professional
development, as well as a valuable addition to your personal library.
Drew Rosen
Manager, Learning and Development
Learning@Cisco
September 2007
Introduction
Modern networks are both extremely complex and critical to business success. As organizations
continue to demand more bandwidth, reliability, and functionality from
their networks, network designers are challenged to rapidly develop and evolve networks that use
new protocols and technologies. Network designers are also challenged to stay current with the
internetworking industry’s constant and rapid changes. Designing robust, reliable, scalable
networks is a necessary skill for network operators and designers in the modern organizational
environment.
This book teaches you how to design enterprise networks. You will learn about network design in
the context of the Cisco Service Oriented Network Architecture (SONA) architectural framework
and Enterprise Architecture. Specific topics include campus and data center infrastructure, remote
connectivity, IP addressing design, routing protocol selection, designing voice networks, wireless
network design, and including security in your designs.
An ongoing case study and chapter-ending review questions illustrate and help solidify the
concepts presented in this book.
This book provides you with the knowledge and skills you need to achieve associate-level
competency in network design. It starts you down the path to attaining your CCDA certification,
because it provides in-depth information to help you prepare for the DESGN exam.
DESGN is the first step in the design curriculum that supports the Cisco network design
certification track. This book focuses on the technology and methods currently available.
Objectives of This Book
The goal of this book is to provide you with the knowledge you need to gather internetworking
requirements, identify solutions, and design the network infrastructure and services to ensure
basic functionality, using the principles of hierarchical network design to structure and modularize
a converged enterprise network design. Design tasks might include understanding the design
methodology; structuring and modularizing the network design using the Cisco Enterprise
Architecture; designing the Enterprise Campus, Enterprise Data Center, Enterprise Edge, and
remote modules as needed; designing an addressing plan and selecting suitable routing protocols;
designing basic voice transport across the network; designing a basic wireless solution; and
evaluating security solutions.
Who Should Read This Book
This book is intended for network and sales engineers who are involved in network design,
planning, and implementation, and for those who plan to take the 640-863 DESGN exam toward
the CCDA certification. This book provides in-depth study material for that exam. To fully benefit
from this book, you should have the following prerequisite skills:
■ CCNA-level knowledge (or CCNA certification), which can best be achieved by completing the related CCNA courses and using CCNA books from Cisco Press. You can find more information on the CCNA certification at http://www.cisco.com/go/ccna/.
■ Knowledge of wireless networking, quality of service (QoS), and multilayer switching is highly recommended. The level equivalent to that covered in the Building Cisco Multilayer Switched Networks (BCMSN) course or the book Building Cisco Multilayer Switched Networks (BCMSN), Fourth Edition (Richard Froom, Balaji Sivasubramanian, Erum Frahim, Cisco Press, 2007) is appropriate.

NOTE We assume that you understand the wireless networking material in the Cisco Press book just mentioned. In Chapter 9, we include some material from that book as an introduction to wireless technology. Refer to the Cisco Press BCMSN book for more detailed information.

■ Practical experience deploying and operating networks based on Cisco network devices and the Cisco IOS.
Summary of the Contents
The chapters and appendixes of this book are as follows:
■ Chapter 1, "Network Fundamentals Review," introduces some fundamental concepts and terminology that are the foundation for the material in the rest of the book.
■ Chapter 2, "Applying a Methodology to Network Design," introduces the Cisco vision of intelligent networks and the Service Oriented Network Architecture (SONA) architectural framework. The lifecycle of a network and a network design methodology based on the lifecycle are presented, and each phase of the network design process is explored in detail.
■ Chapter 3, "Structuring and Modularizing the Network," introduces a modular hierarchical approach to network design, the Cisco Enterprise Architecture. The chapter includes a detailed description of services within modular networks. Network management protocols and features are also discussed.
■ Chapter 4, "Designing Basic Campus and Data Center Networks," examines the design of the Enterprise Campus and Enterprise Data Center network infrastructure.
■ Chapter 5, "Designing Remote Connectivity," discusses WAN technologies and design considerations. This chapter describes the Enterprise WAN and metropolitan-area network (MAN) architectures and the Enterprise Branch and Teleworker architectures and discusses the selection of WAN hardware and software components.
■ Chapter 6, "Designing IP Addressing in the Network," discusses the design of an IP version 4 (IPv4) addressing scheme. The chapter also introduces IP version 6 (IPv6) and discusses IPv4-to-IPv6 migration strategies.
■ Chapter 7, "Selecting Routing Protocols for the Network," describes considerations for selecting the most appropriate network routing protocol. The chapter discusses why certain protocols are suitable for specific modules in the Enterprise Architecture. It concludes with a description of some advanced routing protocol deployment features, including redistribution, filtering, and summarization.
■ Chapter 8, "Voice Network Design Considerations," introduces voice design principles and provides guidelines for a successful integrated network deployment. It begins with an overview of traditional voice architectures and features and continues with a discussion of integrated voice architectures, including VoIP and IP telephony.
■ Chapter 9, "Wireless Network Design Considerations," introduces the Cisco Unified Wireless Network (UWN) architecture and discusses wireless design principles. The chapter introduces wireless technologies and explores considerations when designing Cisco UWNs in enterprise environments.
■ Chapter 10, "Evaluating Security Solutions for the Network," describes network security, including threats and risks, and network security policies. The Cisco Self-Defending Network strategy for designing network security is explored, and Cisco network security solutions for enterprise networks are discussed.
■ Appendix A, "Answers to Review Questions and Case Studies," contains answers to the review questions and case studies that appear at the end of the chapters.
■ Appendix B, "IPv4 Supplement," provides job aids and supplementary information intended for your use when working with IPv4 addresses. Topics include an IP addressing and subnetting job aid, a decimal-to-binary conversion chart, IPv4 addressing review, and IPv4 access lists.
■ Appendix C, "Open Systems Interconnection (OSI) Reference Model," is a brief overview of the OSI seven-layer model.
■ Appendix D, "Network Address Translation," contains information about Cisco's implementation of Network Address Translation (NAT) and port address translation (PAT).
■ "Acronyms and Abbreviations" spells out the abbreviations, acronyms, and initialisms used in this book.
Case Studies and Review Questions
Starting in Chapter 2, each chapter concludes with a case study on Acme County Medical Center
(ACMC) Hospital, a fictitious small county hospital in the United States, to help you evaluate your
understanding of the concepts presented. In each task of the case study, you act as a network
design consultant and make creative proposals to accomplish the customer’s business needs. The
final goal of each case study is a paper solution. Also starting in Chapter 2, each chapter also
includes review questions on the subjects covered in that chapter so that you can test your
knowledge.
To find out how you did and what material you might need to study further, you can compare your
answers to those provided in Appendix A. Note that for each case study task, Appendix A provides
a solution based on the assumptions made. There is no claim that the provided solution is the best
or only solution. Your solution might be more appropriate for the assumptions you made. The
provided solution allows you to understand the author’s reasoning and offers you a means of
comparing and contrasting your solution.
What’s New in This Edition
This book is an update to CCDA Self-Study: Designing for Cisco Internetwork Solutions (DESGN),
ISBN 1-58705-141-9. This second edition reflects changes to the DESGN course. The following
are the major changes between editions:
■ Every chapter has been rewritten. Some material that was removed from the main portion of the previous edition because of course changes has been put in sidebars, as appropriate. The appendixes have been modified and updated to reflect the book's content.
■ The methodology used throughout the book is now based on Cisco's SONA framework and Enterprise architectures.
■ New topics include the design of the data center and the design of teleworker and branch offices.
■ A new chapter on wireless network design, Chapter 9, has been included.
■ Chapter 1 has been enhanced to include a more thorough review of networking fundamentals and to reflect new prerequisite material.
■ Some information on IP addressing in the main body of the first edition has been moved to Appendix B.
■ Chapter 10 includes details of Cisco network security solutions and the Cisco Self-Defending Network strategy.
■ The information about network management has been condensed and moved to Chapter 3.
■ The case study is new and includes a more thorough examination of network design issues. Simulation output is no longer included.
Author’s Notes, Key Points, Sidebars, and Cautions
The notes, key points, and sidebars found in this book provide extra information on a subject.
KEY POINT The key points highlight information that is important for understanding the topic at hand and specific points of interest.
Resources for Further Study
Within each chapter are references to other resources that provide you with further information on
specific topics. For more information about Cisco exams, training, and certifications, refer to the
Training and Events area on the Cisco website at http://www.cisco.com/web/learning/index.html.
NOTE The website references in this book were accurate at the time of writing; however, they
might have since changed. If a URL is unavailable, you might try conducting a search using the
title as keywords in your favorite search engine.
This chapter describes the fundamental
concepts that relate to networks and
includes the following sections:
■ Introduction to Networks
■ Protocols and the OSI Model
■ LANs and WANs
■ Network Devices
■ Introduction to the TCP/IP Suite
■ Routing
■ Addressing
■ Switching Types
■ Spanning Tree Protocol
■ Virtual LANs
■ Comprehensive Example
■ Summary
CHAPTER 1

Network Fundamentals Review
The goal of this chapter is to introduce some fundamental concepts and terminology that are the
foundation for the other material in the book. After a brief introduction to networks in general,
we delve into the communication protocols that are used by network devices; this necessarily
includes a discussion of the infamous Open Systems Interconnection (OSI) model. LANs and
WANs are described, as are the various devices found in a network. This is followed by an
introduction to TCP/IP, used extensively in the Internet. Routing and addressing, including IP
addresses, are explored. The two types of switching—Layer 2 and Layer 3 switching—are
described. Spanning Tree Protocol (STP) and its operation are introduced, followed by a
discussion of VLANs. The chapter concludes with a comprehensive example, tying together
many of the concepts covered. You are encouraged to review any of the material in this chapter
that you are not familiar with before reading the rest of the book, because these ideas are critical
to understanding the more complex technologies covered in the other chapters.
Introduction to Networks
In the 1960s and 1970s, before the PC was invented, a company would typically have only one
central computer: a mainframe. Users connected to the mainframe through terminals on their
desks. These terminals had no intelligence of their own—their only function was to display a
text-based user interface provided by the mainframe. For this reason, they were usually called
dumb terminals. The only network was the connection between the terminals and the
mainframe.
In 1981, the IBM PC was released—an event that changed the industry significantly. The PC
had intelligence of its own, allowing users to do tasks on their desktops that previously required
a mainframe. Networks were introduced to interconnect these distributed PCs.
The term network is used in many ways. For example, people network with one another,
telephones are networked in the public telephone system, and data networks connect different
computers. These uses of the term have a common thread: Networks make it possible for people
or devices to communicate with each other.
A data network is a network that allows computers to exchange data. The simplest data network
is two PCs connected through a cable. However, most data networks connect many devices.
An internetwork is a collection of individual networks, connected by networking devices, that
functions as a single large network. The public Internet is the most common example—it is a single
network that connects millions of computers. Internetworking refers to the industry and products
that are involved in the design, implementation, and administration of internetworks.
The first networks were LANs; they enabled multiple users in a relatively small geographic area
to exchange files and messages and to access shared resources such as printers and disk storage.
WANs were introduced to interconnect these LANs so that geographically dispersed users could
also share information. The “LANs and WANs” section later in this chapter further describes these
two types of networks.
NOTE The “Acronyms and Abbreviations” appendix near the end of the book lists many of
the acronyms that appear in this book.
Protocols and the OSI Model
This section describes the OSI model and protocols used in internetworking. As an introduction,
imagine that you are in Toronto and you want to send an e-mail to your friend in San Francisco.
Successfully sending and receiving e-mail involves doing many things, including the following:
■ You must type the message in your e-mail application.
■ You must address the message in your e-mail application.
■ You must click the Send button in your e-mail application to start sending the message.
■ You must use the correct type of connections and wires to connect your PC to your local network.
■ Your PC must put the data on the wire.
■ Your PC must be able to connect to the Internet, and you must provide any necessary login information.
■ Network devices must find the best path through the Internet so that the e-mail is received by the right person.
The following section introduces the OSI model, a model that describes all these communication
functions and their relationships with each other.
The OSI Model
The ISO standards committee created a list of all the network functions required for sending data
(such as an e-mail) and divided them into seven categories. This model, known as the OSI seven-layer model, was released in 1984; it is illustrated in Figure 1-1.
Figure 1-1 Each of the Seven Layers of the OSI Model Represents Functions Required for Communication

Upper layers: 7 Application, 6 Presentation, 5 Session
Lower layers: 4 Transport, 3 Network, 2 Data Link, 1 Physical
NOTE You might also have heard people talk about OSI Layers 8 and 9. Although they are
not official, Layer 8 is commonly known as the political layer, and Layer 9 is the religious layer.
These lightheartedly represent all the other issues you might encounter in an IT project.
KEY POINT The OSI model represents everything that must happen to send data. The important thing to remember is that the OSI model does not specify how these things are to be done, just what needs to be done. Different protocols can implement these functions differently. For example, the open-standard Internet Protocol (IP) and Novell's Internetwork Packet Exchange (IPX) protocol are different implementations of the network layer.
As also shown in Figure 1-1, the seven layers can be thought of in two groups: the upper layers
and the lower layers. The term upper layers often refers to Layers 5 through 7, and the term lower
layers often refers to Layers 1 through 4, although this terminology is relative. The term upper
layer also refers to any layer above another layer.
The upper layers are concerned with application issues—for example, the interface to the user and
the format of the data. The lower layers are concerned with transport issues—for example, how
the data traverses the network and the physical characteristics of that network.
Protocols
A protocol is a set of rules. The OSI model provides a framework for the communication protocols
used between computers. Just as we need rules of the road—for example, so that we know that a
red light means stop and a green light means go—computers also need to agree on a set of rules
to successfully communicate. Two computers must use the same protocol to communicate.
Computers that try to use different protocols would be analogous to speaking in Italian to someone
who understands only English—it would not work.
Many different networking protocols are in use, in a variety of categories. For example, LAN and
WAN protocols (at the lower two OSI layers) specify how communication is accomplished across
various media types. Routed protocols (at Layer 3) specify the data’s format and how it is carried
throughout a network, and routing protocols (some of which also operate at Layer 3) specify how
routers communicate with one another to indicate the best paths through the network.
KEY POINT Many protocol suites define various protocols that correspond to the functions defined in the seven OSI layers, including routed protocols, a selection of routing protocols, applications, and so forth. Protocol suites are also known as protocol stacks.
The most widely used network protocol suite today is the TCP/IP suite, named after two of the
protocols within the suite. This network protocol suite is used in many places, including the
backbone of the Internet and within organizations' networks. Novell's NetWare, Apple
Corporation's AppleTalk, and IBM's Systems Network Architecture are other examples of network
protocol suites.
KEY POINT The OSI protocol suite is yet another suite. Although the OSI protocol suite uses the same names for its seven layers as the OSI seven-layer model does, the two OSI items are different—one is a protocol suite, and the other is the model that is used as a point of reference for all of the protocol suites.
The OSI Layers
The following sections briefly describe each of the seven layers of the OSI model, starting at the
lowest layer. Appendix C, “Open System Interconnection (OSI) Reference Model,” delves deeper
into the details of the OSI model.
Physical Layer—Layer 1
The OSI physical layer defines specifications such as the electrical and mechanical conditions
necessary for activating, maintaining, and deactivating the physical link between devices.
Specifications include voltage levels, maximum cable lengths, connector types, and maximum
data rates. The physical layer is concerned with the binary transmission of data. This binary data
is represented as bits (which is short for binary digits). A bit has a single binary value, either 0 or 1.
Data Link Layer—Layer 2
Layer 2, the data link layer, defines the format of data that is to be transmitted across the physical
network. It indicates how the physical medium is accessed, including physical addressing, error
handling, and flow control. The data link layer sends frames of data; different media have different
types of frames.
KEY POINT A frame is a defined set of data that includes addressing and control information and is transmitted between network devices. A frame can contain a header field (in front of the data) and a trailer field (after the data); these two fields are said to "frame" the data.
For LANs, the Institute of Electrical and Electronics Engineers (IEEE) split Layer 2 into two
sublayers: Logical Link Control (LLC) and Media Access Control (MAC).
The LLC sublayer (defined by the IEEE 802.2 specification) allows multiple network layer (Layer
3) protocols to communicate over the same physical data link by allowing the Layer 3 protocol to
be specified in the LLC portion of the frame.
Some examples of MAC sublayer protocols are IEEE 802.3 Ethernet and IEEE 802.5 Token Ring.
The MAC sublayer specifies the physical MAC address that uniquely identifies a device on a
network. Each frame that is sent specifies a destination MAC address; only the device with that
MAC address should receive and process the frame. Each frame also includes the MAC address
of the frame’s source.
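The frame layout just described can be made concrete with a short parsing sketch. This is not code from the book; it is a minimal Python example (the helper name parse_ethernet_header and the sample addresses are invented for illustration) that pulls the destination MAC address, source MAC address, and EtherType out of the first 14 bytes of a raw Ethernet frame:

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Return (destination MAC, source MAC, EtherType) from a raw frame."""
    if len(frame) < 14:
        raise ValueError("an Ethernet header is 14 bytes")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    as_text = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return as_text(dst), as_text(src), hex(ethertype)

# A hand-built frame: broadcast destination, an arbitrary source address,
# and EtherType 0x0800 (IPv4), followed by a dummy payload.
frame = bytes.fromhex("ffffffffffff" "0000deadbeef" "0800") + b"payload"
print(parse_ethernet_header(frame))  # -> ('ff:ff:ff:ff:ff:ff', '00:00:de:ad:be:ef', '0x800')
```

The destination address comes first in the header, which is what lets a receiving interface discard frames addressed to other devices without reading any further.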
NOTE You might be interested in some IEEE trivia: The IEEE 802 committee was formed in
February (the second month) of 1980, and thus was called “802.” The IEEE 802.3 standard, for
example, was ratified in the IEEE annex building 3 in Geneva at that time.
Network Layer—Layer 3
The network layer is responsible for routing, which allows data to be properly forwarded across a
logical internetwork (consisting of multiple physical networks). Logical network addresses (as
opposed to physical MAC addresses) are specified at Layer 3. Layer 3 protocols include routed
and routing protocols. The routing protocols determine the best path that should be used to forward
the routed data through the internetwork to its destination.
The network layer sends datagrams (or packets); different routed protocols have different types of
datagrams.
KEY POINT A datagram is a defined set of data that includes addressing and control information and is routed between the data's source and destination.

If a datagram needs to be sent across a network that can handle only a certain amount of data at a time, the datagram can be fragmented into multiple packets and then reassembled at the destination. Therefore, a datagram is a unit of data, whereas a packet is what physically goes on the network. If no fragmentation is required, a packet is a datagram; the two terms are often used interchangeably.
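As a toy illustration of the fragmentation idea (ignoring the real IPv4 header fields such as identification, flags, and fragment offset), the following Python sketch splits a datagram into packets no larger than a link's maximum transmission unit (MTU) and reassembles them at the destination:

```python
def fragment(datagram: bytes, mtu: int):
    """Split a datagram into packets no larger than the link's MTU."""
    return [datagram[i:i + mtu] for i in range(0, len(datagram), mtu)]

def reassemble(packets):
    """Rebuild the original datagram at the destination."""
    return b"".join(packets)

datagram = b"A" * 100
packets = fragment(datagram, mtu=40)      # three packets: 40, 40, and 20 bytes
assert reassemble(packets) == datagram    # the datagram survives the trip intact
```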
Transport Layer—Layer 4
Layer 4, the transport layer, is concerned with end-to-end connections between the source and the
destination. The transport layer provides network services to the upper layers.
Connection-oriented reliable transport establishes a logical connection and uses sequence
numbers to ensure that all data is received at the destination. Connectionless best-effort transport
just sends the data and relies on upper-layer error detection mechanisms to report and correct
problems. Reliable transport has more overhead than best-effort transport.
KEY POINT Best-effort delivery means that the protocol will not check to see whether the data was delivered intact; a higher-level protocol, or the end user, must confirm that the data was delivered correctly.
Multiplexing allows many applications to use the same physical connection. For example, data is
tagged with a number that identifies the application from which it came. Both sides of the
connection then can interpret the data in the same way.
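The tagging idea behind multiplexing can be sketched as a lookup table that maps identifying numbers (ports, in TCP/IP terms) to applications. The function names and port assignments below are illustrative only:

```python
# Each application registers the number it listens on; the transport layer hands
# an arriving segment to whichever application matches the destination tag.
handlers = {}

def register(port, app_name):
    handlers[port] = app_name

def demultiplex(segment):
    """A segment is modeled here as a (destination_port, data) pair."""
    port, data = segment
    return handlers.get(port, "discard"), data

register(25, "smtp-daemon")    # e-mail transfer
register(110, "pop3-daemon")   # e-mail retrieval
print(demultiplex((25, b"MAIL FROM:...")))  # -> ('smtp-daemon', b'MAIL FROM:...')
```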
The transport layer sends segments.
KEY POINT A segment is a defined set of data that includes control information and is sent between the transport layers of the sender and receiver of the data.
Upper Layers—Layers 5 Through 7
From the lower layers’ perspective, the three upper layers represent the data that must be
transmitted from the source to the destination; the network typically neither knows nor cares about
the contents of these layers. For completeness, the following briefly describes the functions of
these layers:
■ The session layer, Layer 5, is responsible for establishing, maintaining, and terminating communication sessions between applications running on different hosts.
■ The presentation layer, Layer 6, specifies the format, data structure, coding, compression, and other ways of representing the data to ensure that information sent from one host's application layer can be read by the destination host.
■ Finally, the application layer, Layer 7, is the closest to the end user; it interacts directly with software applications that need to communicate over the network.
KEY POINT The OSI application layer is not the application itself; rather, the OSI application layer provides the communication services to the application.

For example, your e-mail application might use two OSI application layer protocols—Simple Mail Transfer Protocol (SMTP) and Post Office Protocol version 3 (POP3)—to send and retrieve e-mail messages.
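The distinction is easy to see with SMTP: your e-mail program composes the message, while the application layer protocol carries it. The following sketch (hostname and addresses invented) lists the client-side commands of a minimal SMTP exchange as defined by the protocol:

```python
def smtp_dialogue(sender, recipient, body):
    """List the client-side commands of a minimal SMTP exchange."""
    return [
        "HELO client.example.com",     # identify the sending host
        f"MAIL FROM:<{sender}>",       # envelope sender
        f"RCPT TO:<{recipient}>",      # envelope recipient
        "DATA",                        # what follows is the message itself
        body,
        ".",                           # a lone dot ends the message
        "QUIT",
    ]

for line in smtp_dialogue("you@example.ca", "friend@example.com", "See you soon!"):
    print(line)
```

A real client would also wait for the server's numeric reply code after each command; that interaction is omitted here.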
Communication Among OSI Layers
This section describes how communication among the seven OSI layers is accomplished. When
you send an e-mail from Toronto to your friend in San Francisco, you can think of your e-mail
application sending a message to the e-mail application on your friend’s computer. In OSI model
terms, information is exchanged between peer OSI layers—the application layer on your computer
is communicating with the application layer on your friend’s computer. However, to accomplish
this, the e-mail must go through all the other layers on your computer; for example, it must have
the correct network layer address, be put in the correct frame type, and so on. The e-mail must then
go over the network, and then go back through all the layers on your friend’s computer, until it
finally arrives at your friend’s e-mail application.
Control information from each layer is added to the e-mail data before it passes to lower layers;
this control information is necessary to allow the data to go through the network properly. Thus,
the data at each layer is encapsulated, or wrapped in, the information appropriate for that layer,
including addressing and error checking. The right side of Figure 1-2 illustrates the following
encapsulation process:
■ At Layer 4, the e-mail is encapsulated in a segment.
■ At Layer 3, this segment is encapsulated in a packet.
■ At Layer 2, this packet is encapsulated in a frame.
■ Finally, at Layer 1, the frame is sent out on the wire (or air, if wireless is used) in bits.
Figure 1-2 Data Is Encapsulated as It Goes Down Through the Layers and Is Unencapsulated as It Goes Up

(The figure shows two systems. In Toronto, the e-mail data passes down the layers and is encapsulated: segment information is added at the transport layer, packet information at the network layer, and frame information at the data link layer, and the physical layer then transmits the bits. In San Francisco, the data passes up the layers, and each piece of control information is removed in reverse order.)
The grouping of data used to exchange information at a particular OSI layer is known as a protocol
data unit (PDU). Thus, the PDU at Layer 4 is a segment, at Layer 3 is a packet, and at Layer 2 is
a frame.
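The wrapping and unwrapping just described can be sketched in a few lines of Python. The literal header strings (SEG|, PKT|, FRM|, |CRC) are stand-ins for real protocol headers, chosen only to make the layering visible:

```python
def encapsulate(data: bytes) -> bytes:
    segment = b"SEG|" + data              # Layer 4 adds transport control info
    packet = b"PKT|" + segment            # Layer 3 adds logical addressing
    frame = b"FRM|" + packet + b"|CRC"    # Layer 2 adds a header and a trailer
    return frame                          # Layer 1 would transmit these bytes as bits

def decapsulate(frame: bytes) -> bytes:
    packet = frame[4:-4]                  # strip the 4-byte frame header and trailer
    segment = packet[4:]                  # strip the packet header
    return segment[4:]                    # strip the segment header

email = b"Hello from Toronto"
assert decapsulate(encapsulate(email)) == email
```

Note that each call to encapsulate produces a longer byte string than its input, mirroring how the PDU grows at each lower layer.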
Notice how the overall size of the information increases as the data goes down through the lower
layers. When data is received at the other end of the network, this additional information is
analyzed and then removed as the data is passed to the higher layers toward the application layer.
In other words, the data is unencapsulated, or unwrapped; this process is shown on the left side of
Figure 1-2.
NOTE Cisco sometimes uses the word decapsulate instead of unencapsulate.
NOTE For simplicity, Figure 1-2 shows only two systems, one in San Francisco and one in
Toronto, and does not show the details of e-mail protocols or e-mail servers. Later sections in
this chapter describe what happens when intermediate devices, such as routers, are encountered
between the two systems.
At each layer, different protocols are available. For example, the packets sent by IP are different
from those sent by IPX because different protocols (rules) must be followed. Both sides of peer
layers that are communicating must support the same protocol.
LANs and WANs
LANs were first used between PCs when users needed to connect with other PCs in the same
building to share resources. A LAN is a high-speed, yet relatively inexpensive, network that allows
connected computers to communicate. LANs have limited reach (hence the term local-area
network), typically less than a few hundred meters, so they can connect only devices in the same
room or building, or possibly within the same campus.
A LAN is an always-on connection—in other words, you don’t have to dial up or otherwise
connect to it when you want to send some data. LANs also usually belong to the organization in
which they are deployed, so no incremental cost is typically associated with sending data. A
variety of LAN technologies are available, some of which are shown in the center of Figure 1-3
and briefly described here:
■ Ethernet and IEEE 802.3, running at 10 megabits per second (Mbps), use a carrier sense multiple access collision detect (CSMA/CD) technology. When a CSMA/CD device has data to send, it listens to see whether any of the other devices on the wire (multiple access) are transmitting (carrier sense). If no other device is transmitting, this device starts to send its data, listening all the time in case another device erroneously starts to send data (collision detect).
■ Fast Ethernet (at 100 Mbps), covered by the IEEE 802.3u specification, also uses the CSMA/CD technology.
■ Gigabit Ethernet (running at 1 gigabit per second [Gbps]) is covered by the IEEE 802.3z and 802.3ab specifications and uses the CSMA/CD technology.
■ Wireless LAN (WLAN) standards, defined by the IEEE 802.11 specifications, are capable of speeds up to 54 Mbps under the 802.11g specification. (A new standard, 802.11n, planned to be ratified in 2007, will be capable of higher speeds.) WLANs use a carrier sense multiple access collision avoidance (CSMA/CA) mechanism (versus the CSMA/CD mechanism used by the wired Ethernet standards).
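The listen-then-send-then-back-off behavior of CSMA/CD can be illustrated with a highly simplified, slotted-time simulation. This is only a sketch of the idea; real 802.3 senses the carrier continuously and uses truncated binary exponential backoff, neither of which is modeled here.

```python
import random

# Toy slotted-time sketch of CSMA/CD: stations that transmit in the
# same slot collide, detect it, and each pick a random backoff.
def csma_cd(stations, max_rounds=20, rng=random.Random(42)):
    """Return the order in which stations successfully transmit."""
    waiting = {s: 0 for s in stations}    # backoff slots remaining
    done = []
    for _ in range(max_rounds):
        ready = [s for s, w in waiting.items() if w == 0]
        for s in waiting:                 # count down active backoffs
            if waiting[s] > 0:
                waiting[s] -= 1
        if len(ready) == 1:               # medium free for one sender
            done.append(ready[0])
            del waiting[ready[0]]
        elif len(ready) > 1:              # collision detected: back off
            for s in ready:
                waiting[s] = rng.randint(1, 4)
        if not waiting:
            break
    return done
```

A lone station transmits immediately; two stations collide on the first slot and the random backoffs then separate their retries.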
Chapter 1: Network Fundamentals Review
Figure 1-3 A Variety of LAN and WAN Standards
[Figure 1-3 maps LAN and WAN standards to the lower OSI layers: at the data link layer, 802.2 LLC, Ethernet/802.3, 802.3u, 802.3z/802.3ab, 802.11, PPP, HDLC, Frame Relay, and ISDN BRI; at the physical layer, RJ-45, cable, EIA/TIA-232, EIA/TIA-449, V.24, and V.35.]
WANs
WANs interconnect devices that are usually connected to LANs and are located over a relatively
broad geographic area (hence the term wide-area network). Compared to a LAN, a typical WAN
is slower, requires a connection request when you want to send data, and usually belongs to
another organization (called a service provider). You pay the service provider a fee (known as a
tariff) for the use of the WAN; this fee could be a fixed monthly amount, or it could be variable
based on usage and distance.
Just as you find many types of LANs, many types of WANs are also available, some of which are
illustrated on the right side of Figure 1-3. Like LANs, WANs function at the lower two layers of
the OSI model. A few, such as ISDN, also function at Layer 3. The service you use depends on
many factors, including what is available where you are and, of course, the cost of the service.
Some of the common WAN technologies include the following:
■ Packet-switched network: A network that shares the service provider’s facilities. The service provider creates permanent virtual circuits and switched virtual circuits that deliver data between subscribers’ sites. Frame Relay is an example of a packet-switched network.
■ Leased line: A point-to-point connection reserved for transmission. Common data link layer protocols used in this case are PPP and High-Level Data Link Control (HDLC).
■ Circuit-switched network: A physical path reserved for the duration of the connection between two points. ISDN Basic Rate Interface (BRI) is an example of this type of network.
Two other technologies, digital subscriber line (DSL) and cable, connect residential and business
premises to service providers’ premises:
■ DSL: Uses unused bandwidth on traditional copper telephone lines to deliver traffic at higher speeds than traditional modems allow. The most common DSL implementation is asymmetric DSL (ADSL). It is called asymmetric because the download speed is faster than the upload speed, reflecting the needs of most users and more efficiently using the available bandwidth on standard two-wire telephone lines. ADSL allows regular telephone traffic to simultaneously share the line with high-speed data traffic so that only one telephone line is required to support both high-speed Internet and normal telephone services.
■ Cable: Uses unused bandwidth on cable television networks to deliver data at higher speeds than traditional modems allow.
NOTE These and other WAN technologies are discussed in Chapter 5, “Designing Remote
Connectivity.”
Network Devices
The main devices that interconnect networks are hubs, switches, and routers, as described in the
following sections.
NOTE Many other devices can be used in networks to provide specific functionality; these
devices are introduced in the appropriate chapters in this book. For example, security devices,
including firewalls, are discussed in Chapter 10, “Evaluating Security Solutions for the
Network.”
Terminology: Domains, Bandwidth, Unicast, Broadcast, and Multicast
The following is some terminology related to the operation of network devices:
■ A domain is a specific part of a network.
■ Bandwidth is the amount of data that can be carried across a network in a given time period.
■ Unicast data is data meant for a specific device.
■ Broadcast data is data meant for all devices; a special broadcast address indicates this.
■ Multicast data is data destined for a specific group of devices; again, a special address indicates this.
■ A bandwidth domain, known as a collision domain for Ethernet LANs, includes all devices that share the same bandwidth.
■ A broadcast domain includes all devices that receive each other’s broadcasts (and multicasts).
Devices in the same bandwidth domain are also in the same broadcast domain; however, devices
in the same broadcast domain can be in different bandwidth domains.
Hubs
A typical Ethernet LAN uses unshielded twisted-pair (UTP) cables with RJ-45 connectors (which
are slightly bigger than telephone RJ-11 connectors). Because these cables have only two ends,
you need an intermediary device to connect more than two computers. That device is a hub.
A hub works at Layer 1 and connects multiple devices so that they are logically all on one LAN.
Physical Interfaces and Ports
The physical connection point on a network device—a hub, switch, or router—is called an
interface or a port.
Don’t confuse this definition of port with the application layer port numbers discussed in the
“TCP/IP Transport Layer Protocols” section later in this chapter.
A hub has no intelligence—it sends all data received on any port to all the other ports.
Consequently, devices connected through a hub receive everything that the other devices send,
whether or not it was meant for them. This is analogous to being in a room with lots of people—
if you speak, everyone can hear you. If more than one person speaks at a time, everyone just hears
noise.
All devices connected to a hub are in one collision domain and one broadcast domain.
NOTE A hub just repeats all the data received on any port to all the other ports; thus, hubs are
also known as repeaters.
Switches
Just as having many people in a room trying to speak can result in nobody hearing anything
intelligible, using hubs in anything but a small network is not efficient. To improve performance,
LANs are usually divided into multiple smaller LANs interconnected by a Layer 2 LAN switch.
The devices connected to a switch again appear as if they are all on one LAN, but this time, multiple conversations between devices connected through the switch can happen simultaneously.
NOTE This section discusses Layer 2 LAN switches. The later section “Switching Types”
introduces Layer 3 switching.
LAN switches are Layer 2 devices and have some intelligence—they send data to a port only if
the data needs to go there. A device connected to a switch port does not receive any of the
information addressed to devices on other ports. Therefore, the main advantage of using a switch
instead of a hub is that the traffic received by a device is reduced because only frames addressed
to a specific device are forwarded to the port on which that device is connected.
Switches read the source and destination MAC addresses in the frames and therefore can keep
track of who is where, and who is talking to whom, and send data only where it needs to go.
However, if the switch receives a frame whose destination address indicates that it is a broadcast
(information meant for everyone) or multicast (information meant for a group), by default it sends
the frame out all ports (except for the one on which it was received).
All devices connected to one switch port are in the same collision domain, but devices connected
to different ports are in different collision domains. By default, all devices connected to a switch
are in the same broadcast domain.
Switches Versus Bridges
You might have also heard of bridges. Switches and bridges are logically equivalent. The main
differences are as follows:
■ Switches are significantly faster because they switch in hardware, whereas bridges switch in software.
■ Switches can interconnect LANs of unlike bandwidth. A 10-Mbps Ethernet LAN and a 100-Mbps Ethernet LAN, for example, can be connected using a switch. In contrast, all the ports on a bridge support one type of media.
■ Switches typically have more ports than bridges.
■ Modern switches have additional features not found on bridges; these features are described in later chapters.
Switches do not allow devices on different logical LANs to communicate with each other; this
requires a router, as described in the next section.
Routers
A router goes one step further than a switch. It is a Layer 3 device that has much more intelligence
than a hub or switch. By using logical Layer 3 addresses, routers allow devices on different LANs
to communicate with each other and with distant devices—for example, those connected through
the Internet or through a WAN. Examples of logical Layer 3 addresses include TCP/IP’s IP
addresses and Novell’s IPX addresses.
A device connected to a router does not receive any of the information meant just for devices on
other ports, or broadcasts (destined for all networks) from devices on other ports.
The router reads the source and destination logical addresses in the packets and therefore keeps
track of who is where, and who is talking to whom, and sends data only where it needs to go. It
supports communication between LANs, but it blocks broadcasts (destined for all networks).
All devices connected to one router port are in the same collision domain, but devices connected
to different ports are in different collision domains.
All the devices connected to one router port are in the same broadcast domain, but devices
connected to different ports are in different broadcast domains. Routers block broadcasts (destined
for all networks) and multicasts by default; routers forward only unicast packets (destined for a
specific device) and packets of a special type called directed broadcasts.
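The collision-domain and broadcast-domain rules described over the last few sections can be condensed into a short helper. This is a hypothetical tally for a single standalone device, ignoring how devices are interconnected:

```python
# Domain rules from the text: a hub's ports share one collision domain;
# each switch port is its own collision domain; each router port is its
# own collision domain AND its own broadcast domain; hubs and switches
# extend a single broadcast domain.
def count_domains(device: str, ports: int):
    """Return (collision_domains, broadcast_domains) for one device."""
    if device == "hub":
        return (1, 1)            # everything shared
    if device == "switch":
        return (ports, 1)        # per-port collision, one broadcast
    if device == "router":
        return (ports, ports)    # per-port collision and broadcast
    raise ValueError(f"unknown device type: {device}")
```

For example, a 24-port switch yields 24 collision domains but still only 1 broadcast domain, which is why routers are needed to contain broadcasts.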
NOTE IP multicast technology, which enables multicast packets to be sent throughout a
network, is described in Chapter 4, “Designing Basic Campus and Data Center Networks.”
NOTE An IP-directed broadcast is an IP packet that is destined for all devices on an IP subnet.
IP subnets are described in the “Addressing” section later in this chapter.
The fact that a router does not forward broadcasts (destined for all networks) is a significant
difference between a router and a switch, and it helps control the amount of traffic on the network.
For example, many protocols, such as IP, might use broadcasts for routing protocol advertisements,
discovering servers, and so on. These broadcasts are a necessary part of local LAN traffic, but they
are not required on other LANs and can even overwhelm slower WANs. Routers can generate
broadcasts themselves if necessary (for example, to send a routing protocol advertisement), but do
not pass on a received broadcast.
Routing operation is discussed further in the “Routing” section, later in this chapter.
NOTE The concepts of unicast, multicast, and broadcast apply to Layer 2 and Layer 3
separately. Although a router does not forward any type of frame, it can forward a unicast,
multicast, or directed broadcast packet that it received in a frame. A switch, however, can
forward a unicast, multicast, or broadcast frame.
Introduction to the TCP/IP Suite
As mentioned earlier, TCP/IP is the most widely used protocol suite. The relationship between the
five layers of the TCP/IP protocol suite and the seven layers of the OSI model is illustrated in
Figure 1-4.
Figure 1-4 TCP/IP Protocol Suite
[Figure 1-4 maps the OSI model to the TCP/IP suite: the OSI application, presentation, and session layers correspond to the TCP/IP application layer; the transport layers correspond directly; the OSI network layer corresponds to the TCP/IP Internet layer; and the data link and physical layers map directly.]
The five layers of the TCP/IP suite are the application layer, transport layer, Internet layer, data
link layer, and physical layer.
NOTE The data link and physical layers are sometimes grouped as one layer, called the
network interface layer.
The TCP/IP application layer includes the functionality of the OSI application, presentation, and
session layers. Applications defined in the TCP/IP suite include the following:
■ FTP and Trivial File Transfer Protocol (TFTP): Transfer files between devices.
■ SMTP and POP3: Provide e-mail services.
■ HTTP: Transfers information to and from a World Wide Web server through web browser software.
■ Telnet: Emulates a terminal to connect to devices.
■ Domain Name System (DNS): Translates network device names into network addresses and vice versa.
■ Simple Network Management Protocol (SNMP): Used for network management, including setting threshold values and reporting network errors.
■ Dynamic Host Configuration Protocol (DHCP): Assigns dynamic IP addressing information to devices as they require it.
The transport layer and Internet layer protocols are detailed in the following sections.
The data link and physical layers can support a wide variety of LANs and WANs (including those
discussed in the “LANs and WANs” section, earlier in this chapter). A data link layer protocol
related to the TCP/IP suite is described in the later “TCP/IP-Related Data Link Layer Protocol”
section.
TCP/IP Transport Layer Protocols
The TCP/IP transport layer includes the following two protocols:
■ Transmission Control Protocol (TCP): Provides connection-oriented, end-to-end reliable transmission. Before sending any data, TCP on the source device establishes a connection with TCP on the destination device, ensuring that both sides are synchronized. Data is acknowledged; any data not received properly is retransmitted. FTP is an example of an application that uses TCP to guarantee that the data sent from one device to another is received successfully.
■ User Datagram Protocol (UDP): Provides connectionless, best-effort unacknowledged data transmission. In other words, UDP does not ensure that all the segments arrive at the destination undamaged. UDP does not have the overhead of TCP related to establishing the connection and acknowledging the data. However, this means that upper-layer protocols or the user must determine whether all the data arrived successfully, and retransmit if necessary. TFTP is an example of an application that uses UDP. When all the segments have arrived at the destination, TFTP computes the file check sequence and reports the results to the user. If an error occurs, the user must send the entire file again.
NOTE DNS is an example of an application layer protocol that may use either TCP or UDP,
depending on the function it is performing.
TCP and UDP, being at the transport layer, send segments. Figure 1-5 illustrates the fields in a
UDP segment and in a TCP segment.
Figure 1-5 UDP Segment Headers Contain at Least 8 Bytes, Whereas TCP Segment Headers Contain at Least 20 Bytes
[Figure 1-5 shows the UDP segment (source port number, destination port number, length, and checksum in an 8-byte header, followed by data) alongside the TCP segment (source port number, destination port number, sequence number, acknowledgment number, header length, reserved, code bits, window size, checksum, urgent, and option fields in a 20-byte header, followed by data).]
The UDP segment fields are as follows:
■ Source and destination port numbers (16 bits each): Identify the upper-layer protocol (the application) in the sending and receiving devices.
■ Length (16 bits): The total number of octets in the header and the data.
■ Checksum (16 bits): The checksum of the header and data fields, used to ensure that the segment is received correctly.
■ Data (variable length): The upper-layer data (the application data).
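The 8-byte layout above can be reproduced with Python's struct module. The port values here are illustrative (destination port 69 is TFTP's well-known UDP port), and the checksum is left at zero for simplicity; a real stack computes it over the segment plus an IP pseudo-header.

```python
import struct

def udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Pack the four 16-bit UDP header fields in network byte order."""
    length = 8 + len(payload)   # octets in the header plus the data
    checksum = 0                # omitted in this sketch
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(50051, 69, b"file-request")
```

Unpacking `hdr` with the same `"!HHHH"` format string recovers the four fields, and the length field (8 + 12 = 20 octets) confirms the header-plus-data count.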
The TCP segment fields are as follows:
■ Source and destination port numbers (16 bits each): Identify the upper-layer protocol (the application) in the sending and receiving hosts.
■ Sequence and acknowledgment numbers (32 bits each): Ensure the correct order of the received data and that the data reached the destination.
■ Header length (4 bits): The number of 32-bit words in the header.
■ Reserved (6 bits): For future use; set to 0.
■ Code bits (6 bits): Indicate different types of segments. For example, the SYN (synchronize) bit is used for setting up a session, the ACK (acknowledge) bit is used for acknowledging a segment, and the FIN (finish) bit is used for closing a session.
■ Window size (16 bits): The number of octets that the receiving device is willing to accept before it must send an acknowledgment.
NOTE An octet is 8 bits of data.
■ Checksum (16 bits): The checksum of the header and data fields, used to ensure that the segment is received correctly.
■ Urgent (16 bits): Indicates the end of urgent data.
■ Option (0 or 32 bits): Only one option is currently defined: the maximum TCP segment size.
■ Data (variable): The upper-layer data (the application data).
Notice that the UDP header is much smaller than the TCP header. UDP does not need the
sequencing, acknowledgment, or windowing fields, because it does not establish and maintain
connections.
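Both headers carry the same style of 16-bit checksum. The standard Internet checksum algorithm (a ones' complement sum of 16-bit words, in the style of RFC 1071) can be sketched as follows; real TCP and UDP also include a pseudo-header in the calculation, which is omitted here.

```python
def internet_checksum(data: bytes) -> int:
    """Ones' complement sum of 16-bit words, folded back to 16 bits."""
    if len(data) % 2:                 # pad odd-length input with a zero
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF
```

A receiver verifies a segment by running the same sum over the data together with the transmitted checksum; the result is 0 when nothing was corrupted in transit.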
Port number operation, which is the same for both TCP and UDP, is described in the next section.
Following that section, the operation of sequence and acknowledgment numbers and windowing
are described; these are crucial to understanding TCP operation.
Port Numbers
KEY POINT TCP and UDP use protocol port numbers to distinguish among multiple applications that are running on a single device.
Well-known, or standardized, port numbers are assigned to applications so that different
implementations of the TCP/IP protocol suite can interoperate. Well-known port numbers are
numbers up to 1023; examples include the following:
■ FTP: TCP port 20 (data) and port 21 (control)
■ TFTP: UDP port 69
■ SMTP: TCP port 25
■ POP3: TCP port 110
■ HTTP: TCP port 80
■ Telnet: TCP port 23
■ DNS: TCP and UDP port 53
■ SNMP: UDP port 161
Port numbers from 1024 through 49151 are called registered port numbers; these are registered for use by other applications. The dynamic port numbers are those from 49152 through 65535; these can be dynamically assigned by hosts as source port numbers when they create and end sessions.
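The three ranges can be captured in a small helper; note that port 50051, used as a source port in the Toronto example that follows, falls in the dynamic range.

```python
def classify_port(port: int) -> str:
    """Classify a TCP/UDP port number per the IANA ranges above."""
    if not 0 <= port <= 65535:
        raise ValueError("a port number must fit in 16 bits")
    if port <= 1023:
        return "well-known"      # standardized applications
    if port <= 49151:
        return "registered"      # registered for other applications
    return "dynamic"             # assigned by hosts as source ports
```

For example, Telnet's port 23 and SNMP's port 161 are well-known, while 50051 is dynamic.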
For example, Figure 1-6 illustrates a device in Toronto that is opening a Telnet session (TCP port
23) with a device in London. Note that the source port from Toronto is 50051. Toronto records this
Telnet session with London as port 50051 to distinguish it from any other Telnet sessions it might
have running (because simultaneous multiple Telnet sessions can be running on a device). The
London device receives port number 23 and therefore knows that this is a Telnet session. In its
reply, it uses a destination port of 50051, which Toronto knows is the Telnet session it opened with
London.
Figure 1-6 Source and Destination Port Numbers Indicate the Application Being Used
[Figure 1-6 shows Toronto opening a Telnet (23) session on the London server ("I'll call it session 50051"): Toronto sends a segment with source port 50051 and destination port 23; London receives port 23, hands the data to its Telnet application, and replies with source port 23 and destination port 50051.]
TCP Sequencing, Acknowledgment, and Windowing
To illustrate TCP operation, this section follows a TCP session as it is established, data is sent, and
the session is closed.
KEY POINT A TCP connection is established by a process called a three-way handshake. This process uses the SYN and ACK bits (in the code bits field in the TCP segment) as well as the sequence and acknowledgment number fields.
The TCP three-way handshake is shown in Figure 1-7.
Figure 1-7 Three-Way Handshake Establishes a TCP Session
[Figure 1-7 shows the exchange between Toronto and London: (1) Toronto sends SYN, seq = 21; (2) London replies SYN, ACK, seq = 75, ack = 22; (3) Toronto sends ACK, seq = 22, ack = 76.]
In this example, a user in Toronto wants to establish a TCP session with a device in London to start
a Telnet session. The first step in the handshake involves the initiator, Toronto, sending a segment
with the SYN bit set—this indicates that it wants to start a session and synchronize with London.
This segment also includes the initial sequence number that Toronto is using—21 in this example.
Assuming that the device in London is willing to establish the session, it returns a segment that
also has the SYN bit set. In addition, this segment has the ACK bit set because London is
acknowledging that it successfully received a segment from Toronto. The acknowledgment
number is set to 22, indicating that London is now expecting to receive segment 22 and therefore
that it successfully received number 21. This is known as an expectational acknowledgment. The
new segment includes the initial sequence number that London is using—75 in this example.
Finally, Toronto replies with an acknowledgment segment, sequence number 22 (as London is
expecting), and acknowledgment number 76, indicating that it is now expecting number 76 and
therefore has successfully received number 75. The session is now established, and data can be
exchanged between Toronto and London.
NOTE The sequence and acknowledgment numbers specify octet numbers, not segment
numbers. For ease of illustration, this example assumes that a segment is 1 octet of data. This is
not the case in real life, but it simplifies the example so that the concepts are easier to understand.
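The sequence and acknowledgment bookkeeping in Figure 1-7 can be replayed in a few lines. This toy function only tracks the numbers; real TCP chooses random initial sequence numbers and, as the NOTE explains, numbers octets rather than segments.

```python
def three_way_handshake(client_isn: int, server_isn: int):
    """Return the three handshake segments as (flags, seq, ack) tuples."""
    syn = ("SYN", client_isn, None)                    # step 1: synchronize
    syn_ack = ("SYN,ACK", server_isn, client_isn + 1)  # step 2: expectational ack
    ack = ("ACK", client_isn + 1, server_isn + 1)      # step 3: session established
    return [syn, syn_ack, ack]
```

With the figure's initial sequence numbers (21 and 75), this reproduces the acknowledgment numbers 22 and 76 from the example.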
The window size field in the segment controls the flow of the session. It indicates how many octets
a device is willing to accept before it must send an acknowledgment. Because each host can have
different flow restrictions (for example, one host might be very busy and therefore require that a
smaller amount of data be sent at one time), each side of the session can have different window
sizes, as illustrated in Figure 1-8.
Figure 1-8 Window Size Indicates the Number of Octets a Device Is Willing to Accept Before It Sends an Acknowledgment
[Figure 1-8 shows Toronto with window size 3 and London with window size 2. Toronto sends seq = 1 and seq = 2 and then waits; London acknowledges with ack = 3 while sending seq = 21, 22, and 23; Toronto then acknowledges with ack = 24 and continues with seq = 3.]
In this example, the window size on Toronto is set to 3, and on London it is set to 2. When Toronto
sends data to London, it can send 2 octets before it must wait for an acknowledgment. When
London sends data to Toronto, it can send 3 octets before it must wait for an acknowledgment.
NOTE The window size specifies the number of octets that can be sent, not the number of
segments. For ease of illustration, this example assumes that a segment is 1 octet of data. This
is not the case in real life, but it again simplifies the example so that the concepts are easier to
understand. The window sizes shown in the example are also small for ease of explanation. In
reality, the window size would be much larger, allowing a lot of data to be sent between
acknowledgments.
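The send-then-wait behavior can be sketched as follows, keeping the example's simplification of one octet per segment:

```python
def send_with_window(octets: int, window: int):
    """Return the bursts of sequence numbers sent between acknowledgments."""
    bursts, seq = [], 1
    while seq <= octets:
        burst = list(range(seq, min(seq + window, octets + 1)))
        bursts.append(burst)   # the sender now waits for the acknowledgment
        seq = burst[-1] + 1
    return bursts
```

Sending 5 octets to a receiver with window size 2 produces the bursts [1, 2], [3, 4], [5], with the sender pausing for an acknowledgment after each burst.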
After all the data for the session is sent, the session can be closed. The process is similar to how it
was established, using a handshake. In this case, four steps are used, as illustrated in Figure 1-9.
In this example, Toronto wants to close its Telnet session with London. The first step in the
handshake involves Toronto sending a segment with the FIN bit set, indicating that it wants to
finish the session. This segment also includes the sequence number that Toronto is currently
using—107 in this example.
Figure 1-9 Four-Way Handshake Closes a TCP Session
[Figure 1-9 shows the close between Toronto and London: (1) Toronto sends FIN, ACK, seq = 107, ack = 322; (2) London replies ACK, seq = 322, ack = 108, informs its application that the connection from Toronto is closed, and waits for the application to close the connection to Toronto; (3) London sends FIN, ACK, seq = 322, ack = 108; (4) Toronto replies ACK, seq = 108, ack = 323.]
London immediately acknowledges the request. This segment has the ACK bit set with the
acknowledgment number set to 108, indicating that London successfully received number 107.
This segment includes the sequence number that London is currently using—322 in this example.
London then informs its Telnet application that half of the session, the connection from Toronto,
is now closed.
When the application on the London device requests that the other half of the connection (to
Toronto) be closed, London sends a new segment with the FIN bit set, indicating that it wants to
close the session.
Finally, Toronto replies with an acknowledgment segment with acknowledgment number 323
(indicating that it has successfully received number 322). The session is now closed in both
directions.
TCP/IP Internet Layer Protocols
The TCP/IP Internet layer corresponds to the OSI network layer and includes the IP-routed
protocol, as well as a protocol for message and error reporting.
Protocols
The protocols at this layer include the following:
■ IP: Provides connectionless, best-effort delivery of datagrams through the network. A unique IP address—a logical address—is assigned to each interface of each device in the network. IP and IP addresses are introduced later in this chapter and are described in more detail in Appendix B, “IPv4 Supplement.”
NOTE Two versions of IP currently exist: IP version 4 (IPv4) and the emerging IP version 6
(IPv6). In this book, the term IP refers to IPv4. IPv6 is introduced in Chapter 6, “Designing IP
Addressing in the Network.”
■ Internet Control Message Protocol (ICMP): Sends messages and error reports through the network. For example, the ping application included in most TCP/IP protocol suites sends an ICMP echo message to a destination, which then replies with an ICMP echo reply message. Ping provides confirmation that the destination can be reached and gives a measure of how long packets are taking to travel between the source and destination.
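The echo message that ping sends can be illustrated by building an ICMP echo-request header (type 8, code 0). Actually transmitting it would require a raw socket and operating-system privileges, so only the packet layout and its checksum are sketched here; the identifier, sequence, and payload values are illustrative.

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Ones' complement sum of 16-bit words, folded to 16 bits."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """Build an ICMP echo request: type 8, code 0, then checksum it."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum 0 first
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

Summing the finished packet with its embedded checksum yields 0, which is how the receiving device validates the message before replying with an echo reply (type 0).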
NOTE These protocols are all at the TCP/IP Internet layer, corresponding to the OSI model
network layer, Layer 3. They run on top of the TCP/IP physical and data link layers, Layers 1
and 2.
NOTE You might have heard people refer to IP as a “LAN protocol”; this is because they
configure IP on PCs that are attached to LANs. However, IP is, in fact, a network layer
protocol—it runs on top of any LAN or WAN.
IP Datagrams
Figure 1-10 illustrates the fields of an IP datagram.
Figure 1-10 An IP Datagram Contains at Least 20 Bytes
[Figure 1-10 shows the 20-byte IP header: version, header length, type of service, total length, identification, flags, fragment offset, Time To Live, protocol, header checksum, source IP address, destination IP address, and IP options with padding, followed by the data.]
The IP datagram fields are as follows:
■ Version (4 bits): Identifies the IP version—in this case, version 4.
■ Header length (4 bits): The number of 32-bit words in the header (including the options).
■ Type of service (ToS) (8 bits): Specifies how the datagram should be handled within the network. These bits mark traffic for a specific quality of service (QoS), which is further described in Chapter 4.
■ Total length (16 bits): The total number of octets in the header and data fields.
■ Identification (16 bits), flags (3 bits), and fragment offset (13 bits): Handle cases where a large datagram must be fragmented—split into multiple packets—to go through a network that cannot handle datagrams of that size.
■ Time to Live (TTL) (8 bits): Ensures that datagrams do not loop endlessly in the network; this field must be decremented by 1 by each router that the datagram passes through.
■ Protocol (8 bits): Indicates the upper-layer (Layer 4, the transport layer) protocol that the data is for. Therefore, this field might indicate the type of segment that the datagram is carrying, similar to how the port number field in the UDP and TCP segments indicates the type of application that the segment is carrying. A protocol number of 6 means that the datagram is carrying a TCP segment, whereas a protocol number of 17 means that the datagram is carrying a UDP segment. The protocol field may have other values, such as a value indicating that traffic from a specific routing protocol is being carried inside the datagram.
■ Header checksum (16 bits): Ensures that the header is received correctly.
■ Source and destination IP addresses (32 bits each): Logical IP addresses assigned to the source and destination of the datagram, respectively. IP addresses are introduced later in this chapter, in the “Addressing” section.
■ IP options and padding (variable length; 0 or a multiple of 32 bits): Used for network testing and debugging.
■ Data (variable): The upper-layer (transport layer) data.
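The fixed 20-byte layout can be packed and unpacked with the struct module. The addresses, TTL, and zeroed checksum below are illustrative; protocol 6 marks the payload as a TCP segment, as described above.

```python
import struct

def ipv4_header(src: bytes, dst: bytes, payload_len: int,
                proto: int = 6, ttl: int = 64) -> bytes:
    """Pack a minimal IPv4 header (no options; checksum left at 0)."""
    ver_ihl = (4 << 4) | 5          # version 4, header length 5 words
    total_len = 20 + payload_len    # octets in the header plus the data
    return struct.pack("!BBHHHBBH4s4s", ver_ihl, 0, total_len,
                       0, 0, ttl, proto, 0, src, dst)

hdr = ipv4_header(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", 40)
```

Unpacking `hdr` with the same format string recovers each field: the first nibble is the version (4), the header length works out to 5 × 32-bit words = 20 octets, and the protocol field reads 6 (TCP).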
TCP/IP-Related Data Link Layer Protocol
The TCP/IP data link layer corresponds to the OSI data link layer. It includes the Address
Resolution Protocol (ARP) to request the MAC address (the data link layer physical address) for
a given IP address. The returned MAC address is used as the destination address in the frames that
encapsulate the packets of data being routed to the destination IP address.
Routing
This section examines how routers work and introduces routing tables and routing protocols.
Routers work at the OSI model network layer. The main functions of a router are first to determine
the best path that each packet should take to get to its destination and second to send the packet on
its way. Sending the packet out the appropriate interface, along the best path, is also called
switching the packet because the packet is encapsulated in a new frame, with the appropriate
framing information.
Therefore, a router’s job is much like that of a worker at a post office. The postal worker looks at
the address label on the letter (the network layer address on the packet), determines which way the
letter (the packet) should be sent, and then sends it. The comparison between the post office and a
router is illustrated in Figure 1-11.
Figure 1-11 A Router Behaves Much Like a Worker at a Post Office
[Figure 1-11 compares the two side by side: a letter arrives at the post office just as a packet arrives at the router’s interface; the postal worker looks at the “To:” address just as the router looks at the destination address; the worker determines where the letter should go next on its way to the final destination (local: put it on a truck and deliver it to the final recipient; cross-country: put it on a plane to the next city) just as the router determines where the packet should go next (on an attached LAN: put it on the LAN for the recipient; on a distant network: send it across the WAN to the next router, until the recipient receives the packet).]
NOTE This discussion of routers is concerned with the traditional role of routers in a network,
at the OSI model network layer. Routers are now taking on more functions—for example, in
QoS and security areas; these other functions are described in the relevant chapters throughout
this book.
Routers Work at the Lower Three OSI Layers
The router doesn’t care what is in the higher layers—what kind of data is in the packet. The router
is just responsible for sending the packet the correct way. The router does have to be concerned
with the data link and physical layers, though, because it might have to receive and send data on
different media. For example, a packet received on an Ethernet LAN might have to be sent out on
a Frame Relay WAN, requiring the router to know how to communicate on both these types of
media. In terms of layers, therefore, a router unencapsulates received data up to the network layer
and then encapsulates the data again into the appropriate frame and bit types. This process is
illustrated in Figure 1-12, where the PC on the left is sending data to the PC on the right. The
routers have determined that the path marked with the arrows is the best path between the PCs.
Figure 1-12
Router Works at the Network Layer
(Figure summary: the sending PC on the left and the receiving PC on the right implement all seven OSI layers, whereas each router along the path implements only the physical, data link, and network layers. The links along the path use different media (Ethernet, Frame Relay, HDLC, and FDDI), so each router unencapsulates incoming data up to the network layer and re-encapsulates it in the appropriate frame and bit types for the next medium.)
In this figure, notice that only the two PCs care about the upper layers, whereas all the routers in
the path concern themselves with only the lower three layers.
Routing Tables
To determine the best path on which to send a packet, a router must know where the packet’s
destination network is.
KEY POINT Routers learn about networks by being physically connected to them or by learning about them either from other routers or from a network administrator. Routes configured by network administrators are known as static routes because they are hard-coded in the router and remain there—static—until the administrator removes them. Routes to which a router is physically connected are known as directly connected routes. Routers learn routes from other routers by using a routing protocol.
However routes are learned, routers keep the best path (or multiple best paths) to each destination
in a routing table. A routing table contains a list of all the networks that a router knows how to
reach. For each network, the routing table typically contains the following items:
■ How the route to the network was learned (for example, statically or by using a routing protocol).
■ The network address of the router from which the route to the network was learned (if applicable).
■ The interface (port) on the router through which the network can be reached.
■ The metric of the route. The metric is a measurement, such as the number of other routers that the path goes through, that routing protocols use when determining the best path.
NOTE The path that the router determines is the best depends on the routing protocol in use.
For example, some routing protocols define best as the path that goes through the fewest other
routers (the fewest hops), whereas others define best as the path with the highest bandwidth.
For example, in the network shown in Figure 1-13, the metric used is hops—the number of other
routers between this router and the destination network. Both routers know about all three
networks. Router X, on the left, knows about networks A and B because it is connected to them
(hence the metric of 0) and knows about network C from Router Y (hence the metric of 1). Router
Y, on the right, knows about networks B and C because it is connected to them (hence the metric
of 0) and knows about network A from Router X (hence the metric of 1).
Figure 1-13
Routers Keep Routing Information in Routing Tables
(Figure summary: Router X and Router Y each hold a routing table with network, interface, and metric columns. Router X lists its directly connected networks A and B with a metric of 0, and network C, learned from Router Y, with a metric of 1. Router Y lists its directly connected networks B and C with a metric of 0, and network A, learned from Router X, with a metric of 1.)
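The routing table just described can be sketched as a small data structure. The following Python is a toy model of Router X’s table from Figure 1-13; the `RouteEntry` and `best_route` names are ours, not Cisco’s, and the interface numbers are chosen for illustration only.

```python
# A toy routing table: for each known network, record how the route was
# learned, the advertising router (if any), the outgoing interface, and
# the metric (hop count here). Router X from Figure 1-13 is modeled.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RouteEntry:
    network: str
    learned_from: str          # "connected", "static", or a routing protocol
    next_hop: Optional[str]    # the router the route was learned from, if any
    interface: int             # outgoing port
    metric: int                # hop count to the destination network

router_x_table = [
    RouteEntry("A", "connected", None, 1, 0),
    RouteEntry("B", "connected", None, 2, 0),
    RouteEntry("C", "RIP", "Router Y", 2, 1),
]

def best_route(table, network):
    """Return the lowest-metric entry for a destination network, if any."""
    candidates = [e for e in table if e.network == network]
    return min(candidates, key=lambda e: e.metric) if candidates else None

route = best_route(router_x_table, "C")
print(route.metric)   # network C is one hop away from Router X
```

A real router keeps one best path (or several equal-cost best paths) per destination; the `min` call above mirrors that selection in miniature.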
Routing Protocols
Routers use routing protocols to exchange routing information. Routing protocols allow routers to
learn from other routers the networks that are available so that data can be sent in the correct
direction. Remember that two routers communicating with each other must use the same routing
protocol or they can’t understand each other.
The TCP/IP protocol suite includes the following routing protocols:
■ Routing Information Protocol (RIP), versions 1 and 2 (RIPv1 and RIPv2)
■ Enhanced Interior Gateway Routing Protocol (EIGRP)
■ Open Shortest Path First (OSPF)
■ Integrated Intermediate System-to-Intermediate System (IS-IS)
■ Border Gateway Protocol (BGP) Version 4 (BGP-4)
NOTE These routing protocols are discussed further in Chapter 7, “Selecting Routing
Protocols for the Network.”
The previous sections introduced the basics of routing and how routers learn about the available
networks so that data can be sent along the correct path. Routers look at the packet’s destination
address to determine where the packet is going so that they can then select the best route to get the
packet there. The following section discusses these addresses.
Addressing
This section describes physical and network layer addressing and how routers use these addresses.
The section concludes with a brief introduction to IP addressing.
Physical Addresses
MAC addresses were discussed earlier; recall that these are at the data link layer and are
considered physical addresses. When a network interface card is manufactured, it is assigned an
address—called a burned-in address (BIA)—that doesn’t change when the network card is installed in a device or when the device is moved from one network to another. Typically, this BIA is copied to
interface memory and is used as the interface’s MAC address. MAC addresses are analogous to
Social Insurance numbers or Social Security numbers—one is assigned to each person, and the
numbers don’t change when that person moves to a new house. These numbers are associated with
the physical person, not where the person lives.
NOTE Some organizations set the MAC addresses of their devices to something other than
the BIA (for example, based on the location of the device in the network) for management
purposes.
KEY POINT The BIA is a 48-bit value. The upper 24 bits are an Organizational Unique Identifier (OUI) representing the vendor that makes the device. The lower 24 bits are a unique value for that OUI, typically the device’s serial number.
NOTE The top 2 bits of the BIA are not actually part of the OUI. The seventh bit in a BIA is
referred to as the universal/locally administered (U/L) bit; it identifies whether the address has
been locally or universally assigned. The eighth bit in the BIA is the individual/group (I/G) bit;
it identifies whether the address is for an individual device or a group.
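The 24/24 split of the BIA, and the I/G and U/L bits described in the note, can be extracted with a little bit arithmetic. The sketch below is illustrative; the helper name is ours.

```python
# Splitting a 48-bit MAC address (BIA) into the 24-bit OUI and the
# 24-bit vendor-assigned value, using plain integer arithmetic.
def split_mac(mac: str):
    # Normalize "0260.60AA.AAAA" or "02:60:60:AA:AA:AA" to 12 hex digits
    digits = mac.replace(".", "").replace(":", "").replace("-", "")
    value = int(digits, 16)
    oui = value >> 24           # upper 24 bits: the vendor identifier
    device = value & 0xFFFFFF   # lower 24 bits: unique within that OUI
    return f"{oui:06X}", f"{device:06X}"

oui, device = split_mac("0260.60AA.AAAA")
print(oui, device)              # 026060 AAAAAA

# The I/G and U/L bits described in the note live in the first octet:
first_octet = int("0260.60AA.AAAA".replace(".", "")[:2], 16)
print(bool(first_octet & 0x01))  # I/G bit: False, an individual address
print(bool(first_octet & 0x02))  # U/L bit: True, locally administered
```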
Knowing the MAC address assigned to a PC or to a router’s interface doesn’t tell you anything
about where it is or what network it is attached to—it can’t help a router determine the best way
to send data to it. For that you need logical network layer addresses; they are assigned when a
device is installed on a network and should be changed when the device is moved.
Logical Addresses
When you send a letter to someone, you have to know that person’s postal address. Because every
postal address in the world is unique, you can potentially send a letter to anyone in the world.
Postal addresses are logical and hierarchical—for example, they include the country, province/
state, street, and building/house number. The top portion of Figure 1-14 illustrates Main Street
with various houses. All these houses have one portion of their address in common—Main
Street—and one portion that is unique—their house number.
KEY POINT Network layer addresses are also logical and hierarchical, and they are either defined statically by an administrator or obtained automatically from a server. They have two main parts: the network that the device is on (similar to the street, city, province, and so on) and the device number on that network (similar to the building number).
Figure 1-14
Network Layer Addresses Are Similar to Postal Addresses
(Figure summary: the upper portion shows houses on Main Street, numbers 28, 30, and 32 on one side and 29 and 31 on the other; all share the street name and differ only in house number. The lower portion shows network 17 with devices 17.1, 17.3, and 17.5; all share the network number and differ only in device number.)
NOTE The terms device, host, and node are used interchangeably to represent the entity that
is communicating.
The lower portion of Figure 1-14 illustrates a network, 17, with various PCs on it. All these PCs
have one portion of their address in common—17—and one part that is unique—their device
number. Devices on the same logical network must share the same network portion of their address
and have different device portions.
Routing and Network Layer Addresses
A router typically looks at only the network portion of a destination address. It compares the
network portion to its routing table, and if it finds a match, it sends the packet out the appropriate
interface, toward its destination.
A router needs to concern itself only with the device portion of a destination address if it is directly
connected to the same network as the destination. In this case, the router must send the packet
directly to the appropriate device, and it needs to use the entire destination address for this. A
router on a LAN uses ARP to determine the MAC address of the device with that IP address and
then creates an appropriate frame with that MAC address as the destination MAC address.
IP Addresses
IP addresses are network layer addresses. As you saw earlier, IP addresses are 32-bit numbers. As
shown in Figure 1-15, the 32 bits are usually written in dotted-decimal notation—they are grouped
into 4 octets (8 bits each), separated by dots, and each octet is represented in decimal format. Each
bit in the octet has a binary weight (the highest is 128 and the next is 64, followed by 32, 16, 8, 4,
2, and 1). Thus, the minimum value for an octet is 0, and the maximum decimal value for an octet
is 255.
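The conversion between the two notations can be sketched in a few lines of Python (the function names here are ours):

```python
# Converting an IPv4 address between dotted-decimal and binary notation.
# Each octet is 8 bits, with per-bit weights 128, 64, 32, 16, 8, 4, 2, 1.
def to_binary(dotted: str) -> str:
    return ".".join(f"{int(octet):08b}" for octet in dotted.split("."))

def to_decimal(binary: str) -> str:
    return ".".join(str(int(octet, 2)) for octet in binary.split(".")) 

print(to_binary("192.168.5.1"))
# 11000000.10101000.00000101.00000001
print(to_decimal("11000000.10101000.00000101.00000001"))
# 192.168.5.1
```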
Figure 1-15
32-Bit IPv4 Addresses Are Written in Dotted-Decimal Notation

decimal:  192      . 168      . 5        . 1
binary:   11000000 . 10101000 . 00000101 . 00000001
NOTE The maximum value of an octet occurs when all 8 bits are binary 1. The decimal value of an octet is calculated by adding all the weighted bits—in this case, 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255.
NOTE Appendix B details how to convert between decimal and binary formats and vice versa
and provides a decimal-to-binary conversion chart. Appendix B also includes further details on
IPv4 addressing.
IP Address Classes
IPv4 addresses are categorized into five classes: A, B, C, D, and E. Only Class A, B, and C
addresses are used for addressing devices; Class D is used for multicast groups, and Class E is
reserved for experimental use.
The first octet of an IPv4 address defines which class it is in, as illustrated in Table 1-1 for Class
A, B, and C addresses. The address class determines which part of the address represents the
network bits (N) and which part represents the host bits (H), as shown in this table. The number
of networks available in each class and the number of hosts per network are also shown.
Table 1-1
IP Address Classes A, B, and C Are Available for Addressing Devices

Class  Format*   Higher-Order Bits  Address Range               Number of Networks  Number of Hosts per Network
A      N.H.H.H   0                  1.0.0.0 to 126.0.0.0        126                 16,777,214
B      N.N.H.H   10                 128.0.0.0 to 191.255.0.0    16,384              65,534
C      N.N.N.H   110                192.0.0.0 to 223.255.255.0  2,097,152           254

*N = network number bits; H = host number bits
NOTE Class A addresses are any addresses that have the higher-order bit set to 0; this would
include 0 through 127 in the first octet. However, network 0.0.0.0 is reserved, and network
127.0.0.0 (any address starting with decimal 127) is reserved for loopback functionality.
Therefore, the first octet of Class A addresses ranges from 1 to 126.
NOTE Class D addresses have higher-order bits 1110 and are in the range of 224.0.0.0 to
239.255.255.255. Class E addresses have higher-order bits 1111 and are in the range of
240.0.0.0 to 255.255.255.255.
For example, 192.168.5.1 is a Class C address. Therefore, it is in the format N.N.N.H—the
network part is 192.168.5 and the host part is 1.
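The first-octet test can be written directly in code; this small function (its name is ours) mirrors the class boundaries given in Table 1-1 and the notes above:

```python
# Determining the class of an IPv4 address from its first octet,
# following the higher-order-bit ranges for Classes A through E.
def address_class(address: str) -> str:
    first = int(address.split(".")[0])
    if 1 <= first <= 126:
        return "A"        # higher-order bit 0 (0 and 127 are reserved)
    if 128 <= first <= 191:
        return "B"        # higher-order bits 10
    if 192 <= first <= 223:
        return "C"        # higher-order bits 110
    if 224 <= first <= 239:
        return "D"        # multicast, higher-order bits 1110
    if 240 <= first <= 255:
        return "E"        # experimental, higher-order bits 1111
    return "reserved"     # network 0 and loopback network 127

print(address_class("192.168.5.1"))  # C
print(address_class("10.0.0.1"))     # A
```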
Private and Public IP Addresses
The IPv4 address space is divided into public and private sections. Private addresses are reserved
addresses to be used only internally within a company’s network, not on the Internet. When you
want to send anything on the Internet, private addresses must be mapped to a company’s external
registered address. Public IPv4 addresses are provided for external communication.
KEY POINT RFC 1918, Address Allocation for Private Internets, defines the private IPv4 addresses as follows:
■ 10.0.0.0 to 10.255.255.255
■ 172.16.0.0 to 172.31.255.255
■ 192.168.0.0 to 192.168.255.255
The remaining addresses are public addresses.
NOTE Internet RFC documents are written definitions of the Internet’s protocols and policies.
A complete list and the documents themselves can be found at http://www.rfc-editor.org/
rfc.html.
Note that all the IP addresses used in this book are private addresses, to avoid publishing anyone’s
registered address.
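Checking an address against the RFC 1918 ranges can be done with Python’s standard ipaddress module; the `is_private` helper below is ours and tests only the three RFC 1918 blocks listed above:

```python
# Checking whether an IPv4 address falls in one of the RFC 1918
# private ranges, expressed here in prefix notation.
import ipaddress

RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),       # 10.0.0.0-10.255.255.255
    ipaddress.ip_network("172.16.0.0/12"),    # 172.16.0.0-172.31.255.255
    ipaddress.ip_network("192.168.0.0/16"),   # 192.168.0.0-192.168.255.255
]

def is_private(address: str) -> bool:
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in RFC1918)

print(is_private("172.20.1.1"))   # True: inside 172.16.0.0 to 172.31.255.255
print(is_private("172.32.0.1"))   # False: just past the private block
```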
Subnets
As illustrated in Table 1-1, Class A addresses have little use in a normal organization—most
companies would not want one network with more than 16 million PCs on it! This would not be
physically possible or desirable. Because of this limitation on addresses when only their class is
considered (called classful addressing) and the finite number of such addresses, subnets were
introduced by RFC 950, Internet Standard Subnetting Procedure.
Class A, B, and C addresses can be divided into smaller networks, called subnetworks or subnets,
resulting in a larger number of possible networks, each with fewer host addresses available than
the original network.
The addresses used for the subnets are created by borrowing bits from the host field and using
them as subnet bits; a subnet mask indicates which bits have been borrowed. A subnet mask is a
32-bit value associated with an IP address to specify which bits in the address represent network
and subnet bits and which represent host bits. Using subnet masks creates a three-level hierarchy:
network, subnet, and host.
KEY POINT In binary format, a subnet mask bit of 1 indicates that the corresponding bit in the IP address is a network or subnet bit, and a subnet mask bit of 0 indicates that the corresponding bit in the IP address is a host bit.
Subnet bits come from the higher-order (leftmost) bits of the host field; therefore, the 1s in the subnet mask are contiguous.
The default subnet masks for Class A, B, and C addresses are shown in Table 1-2.
Table 1-2
IP Address Default Subnet Masks

Class  Default Mask in Binary Format        Default Mask in Decimal Format
A      11111111.00000000.00000000.00000000  255.0.0.0
B      11111111.11111111.00000000.00000000  255.255.0.0
C      11111111.11111111.11111111.00000000  255.255.255.0
When all of an address’s host bits are 0, the address is for the subnet itself (sometimes called the
wire). When all of an address’s host bits are 1, the address is the directed broadcast address for that
subnet (in other words, for all the devices on that subnet).
NOTE An IP-directed broadcast is an IP packet destined for all devices on an IP subnet.
When the directed broadcast originates from a device on another subnet, routers that are not
directly connected to the destination subnet forward the IP-directed broadcast in the same way
they would forward unicast IP packets destined for a host on that subnet.
On Cisco routers, the ip directed-broadcast interface command controls what the last router in
the path, the one connected to the destination subnet, does with a directed broadcast packet. If
ip directed-broadcast is enabled on the interface, the router changes the directed broadcast to
a broadcast and sends the packet, encapsulated in a Layer 2 broadcast frame, onto the subnet.
However, if the no ip directed-broadcast command is configured on the interface, directed
broadcasts destined for the subnet to which that interface is attached are dropped. In Cisco IOS
version 12.0, the default for this command was changed to no ip directed-broadcast.
KEY POINT The formula 2^s calculates the number of subnets created, where s is the number of subnet bits (the number of bits borrowed from the host field).
The formula 2^h – 2 calculates the number of host addresses available on each subnet, where h is the number of host bits.
For example, 10.0.0.0 is a Class A address with a default subnet mask of 255.0.0.0, indicating 8
network bits and 24 host bits. If you want to use 8 of the host bits as subnet bits instead, you would
use a subnet mask of 11111111.11111111.00000000.00000000, which is 255.255.0.0 in decimal
format. You could then use the 8 subnet bits to address 256 subnets. Each of these subnets could
support up to 65,534 hosts. The address of one of the subnets is 10.1.0.0; the broadcast address on
this subnet is 10.1.255.255.
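The arithmetic in this example can be verified with the 2^s and 2^h – 2 formulas, and Python’s standard ipaddress module confirms the subnet and directed broadcast addresses:

```python
# The worked example above: 10.0.0.0 with mask 255.255.0.0 borrows
# s = 8 subnet bits from the host field, leaving h = 16 host bits.
import ipaddress

s, h = 8, 16
print(2 ** s)        # 256 subnets
print(2 ** h - 2)    # 65534 usable host addresses per subnet

subnet = ipaddress.ip_network("10.1.0.0/16")
print(subnet.network_address)    # 10.1.0.0: all host bits 0, the "wire"
print(subnet.broadcast_address)  # 10.1.255.255: all host bits 1
```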
Another way of indicating the subnet mask is to use a prefix. A prefix is a slash (/) followed by a
numeral that is the number of bits in the network and subnet portion of the address—in other
words, the number of contiguous 1s that would be in the subnet mask. For example, the subnet
mask of 255.255.240.0 is 11111111.11111111.11110000.00000000 in binary format, which is 20
1s followed by 12 0s. Therefore, the prefix would be /20 for the 20 bits of network and subnet
information, the number of 1s in the mask.
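Converting between a dotted-decimal mask and its prefix is just a matter of counting contiguous 1 bits, as this short sketch shows (function names are ours):

```python
# Converting a dotted-decimal subnet mask to prefix notation by
# counting its 1 bits, and converting a prefix back to a mask.
def mask_to_prefix(mask: str) -> int:
    bits = "".join(f"{int(octet):08b}" for octet in mask.split("."))
    return bits.count("1")   # assumes a valid mask with contiguous 1s

def prefix_to_mask(prefix: int) -> str:
    bits = "1" * prefix + "0" * (32 - prefix)
    return ".".join(str(int(bits[i:i + 8], 2)) for i in range(0, 32, 8))

print(mask_to_prefix("255.255.240.0"))  # 20
print(prefix_to_mask(20))               # 255.255.240.0
```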
IP addressing is further explored in Appendix B; IP address planning is discussed in Chapter 6.
Switching Types
Switches were initially introduced to provide higher-performance connectivity than hubs because
switches define multiple collision domains. Switches have always been able to process data at a
faster rate than routers because the switching functionality is implemented in hardware—in
Application-Specific Integrated Circuits (ASIC)—rather than in software, which is how routing
has traditionally been implemented. However, switching was initially restricted to the examination
of Layer 2 frames. With the advent of more powerful ASICs, switches can now process Layer 3
packets, and even the contents of those packets, at high speeds.
The following sections first examine the operation of traditional Layer 2 switching. Layer 3
switching—which is really routing in hardware—is then explored.
Layer 2 Switching
KEY POINT Layer 2 LAN switches segment a network into multiple collision domains and interconnect devices within a workgroup, such as a group of PCs.
The heart of a Layer 2 switch is its MAC address table, also known as its content-addressable
memory. This table contains a list of the MAC addresses that are reachable through each switch
port. Recall that a physical MAC address uniquely identifies a device on a network. When a switch
is first powered up, its MAC address table is empty, as shown in Figure 1-16.
Figure 1-16
The MAC Address Table Is Initially Empty
(Figure summary: devices A (0260.60AA.AAAA), B (0260.60BB.BBBB), C (0260.60CC.CCCC), and D (0260.60DD.DDDD) are attached to switch ports 1 through 4, respectively. The MAC address table, which lists for each port the addresses that can be reached through it, contains no entries yet.)
In this sample network, consider what happens when device A sends a frame destined for device
D. The switch receives the frame on port 1 (from device A). Recall that a frame includes the MAC
address of the source device and the MAC address of the destination device. Because the switch
does not yet know where device D is, the switch must flood the frame out of all the other ports;
therefore, the switch sends the frame out of ports 2, 3, and 4. This means that devices B, C, and D
all receive the frame. Only device D, however, recognizes its MAC address as the destination
address in the frame; it is the only device on which the CPU is interrupted to further process the
frame.
KEY POINT Broadcast and multicast frames are, by default, flooded to all ports of a Layer 2 switch other than the incoming port. The same is true for unicast frames destined for any device not in the MAC address table.
In the meantime, the switch now knows that device A can be reached on port 1 because the switch
received a frame from device A on port 1; the switch therefore puts the MAC address of device A
in its MAC address table for port 1. This process is called learning—the switch is learning all the
MAC addresses it can reach.
KEY POINT A switch uses the frame’s destination MAC address to determine the port to which it sends the frame.
A switch uses the frame’s source MAC address to populate its MAC address table; the switch eavesdrops on the conversation between devices to learn which devices can be reached on which ports.
At some point, device D is likely to reply to device A. At that time, the switch receives a frame
from device D on port 4; the switch records this information in its MAC address table as part of
its learning process. This time, the switch knows where the destination, device A, is; the switch
therefore forwards the frame only out of port 1. This process is called filtering—the switch sends
the frames out of only the port through which they need to go, when the switch knows which port
that is, rather than flooding them out of every port. This reduces the traffic on the other ports and
reduces the interruptions that the other devices experience. Over time, the switch learns where all
the devices are, and the MAC address table is fully populated, as shown in Figure 1-17.
Figure 1-17
The Switch Learns Where All the Devices Are and Populates Its MAC Address Table
(Figure summary: the same four devices as in Figure 1-16. The MAC address table now lists 0260.60AA.AAAA on port 1, 0260.60BB.BBBB on port 2, 0260.60CC.CCCC on port 3, and 0260.60DD.DDDD on port 4.)
The filtering process also means that multiple simultaneous conversations can occur between
different devices. For example, if device A and device B want to communicate, the switch sends
their data between ports 1 and 2; no traffic goes on ports 3 or 4. At the same time, devices C and
D can communicate on ports 3 and 4 without interfering with the traffic on ports 1 and 2.
Consequently, the network’s overall throughput has increased dramatically.
The MAC address table is kept in the switch’s memory and has a finite size (depending on the
specific switch used). If many devices are attached to the switch, the switch might not have room
for an entry for every one, so the table entries time out after a period of not being used. As a result,
the most active devices are always in the table.
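The learning, flooding, filtering, and aging behavior described above can be modeled in a few lines of Python. This is a toy model, not how switch hardware is implemented; the class name and the aging value are illustrative.

```python
# A toy model of Layer 2 switch behavior: learn source MAC addresses
# per port, flood unknown destinations, filter known ones, and age out
# idle entries.
import time

AGING_SECONDS = 300  # illustrative aging time for idle table entries

class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}   # MAC address -> (port, last-seen timestamp)

    def receive(self, in_port, src_mac, dst_mac, now=None):
        """Process one frame; return the list of ports it goes out on."""
        now = now if now is not None else time.time()
        # Learning: remember which port the source can be reached on.
        self.mac_table[src_mac] = (in_port, now)
        # Aging: drop entries that have been idle too long.
        self.mac_table = {m: (p, t) for m, (p, t) in self.mac_table.items()
                          if now - t <= AGING_SECONDS}
        # Filtering vs. flooding: a known unicast goes out one port;
        # unknown unicasts and broadcasts go out every other port.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac][0]]
        return [p for p in self.ports if p != in_port]

sw = Switch(ports=[1, 2, 3, 4])
print(sw.receive(1, "0260.60AA.AAAA", "0260.60DD.DDDD"))  # flooded: [2, 3, 4]
print(sw.receive(4, "0260.60DD.DDDD", "0260.60AA.AAAA"))  # filtered: [1]
```

The two calls retrace the example in the text: the first frame from device A is flooded because device D is unknown, and D’s reply is filtered out of port 1 only, because the switch has already learned where A is.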
MAC addresses can also be statically configured in the MAC address table, and you can specify a
maximum number of addresses allowed per port. One advantage of static addresses is that less flooding occurs, both when the switch first comes up and because static entries never age out.
However, this also means that if a device is moved, the switch configuration must be changed. A
related feature available in some switches is the capability to sticky-learn addresses—the address
is dynamically learned, as described earlier, but is then automatically entered as a static command
in the switch configuration. Limiting the number of addresses per port to one and statically
configuring those addresses can ensure that only specific devices are permitted access to the
network; this feature is particularly useful when addresses are sticky-learned.
Layer 3 Switching
KEY POINT A Layer 3 switch is really a router with some of the functions implemented in hardware to improve performance. In other words, some of the OSI model network layer routing functions are performed in high-performance ASICs rather than in software.
The functions performed by routers (as described in the earlier “Routing” section) can be CPU-intensive. Offloading the switching of the packet to hardware can result in a significant increase in performance.
A Layer 3 switch performs all the same functions as a router; the differences are in the physical
implementation of the device rather than in the functions it performs. Therefore, functionally, the
terms router and Layer 3 switch are synonymous.
Layer 4 switching is an extension of Layer 3 switching that includes examination of the contents
of the Layer 3 packet. For example, the protocol number in the IP packet header (as described in
the “IP Datagrams” section) indicates which transport layer protocol (for example, TCP or UDP)
is being used, and the port number in the TCP or UDP segment indicates the application being
used (as described in the “TCP/IP Transport Layer Protocols” section). Switching based on the
protocol and port numbers can ensure, for example, that certain types of traffic get higher priority
on the network or take a specific path.
Within Cisco switches, Layer 3 switching can be implemented in two different ways—through
multilayer switching or through Cisco Express Forwarding, as described in Chapter 4.
Spanning Tree Protocol
KEY POINT STP is a Layer 2 protocol that prevents logical loops in switched networks that have redundant links.
The following sections examine why such a protocol is needed in Layer 2 networks. STP
terminology and operation are then introduced.
Redundancy in Layer 2 Switched Networks
Redundancy in a network, such as that shown in Figure 1-18, is desirable so that communication
can still take place if a link or device fails. For example, if switch X in this figure stopped
functioning, devices A and B could still communicate through switch Y. However, in a switched
network, redundancy can cause problems.
Figure 1-18
Redundancy in a Switched Network Can Cause Problems
(Figure summary: device A (0260.60AA.AAAA) on an upper LAN segment and device B (0260.60BB.BBBB) on a lower LAN segment are interconnected by two parallel switches, X and Y. Each switch attaches to the upper segment on its port 1 and to the lower segment on its port 2, forming a redundant loop.)
The first type of problem occurs if a broadcast frame is sent on the network. For example, consider
what happens when device A in Figure 1-18 sends an ARP request to find the MAC address of
device B. The ARP request is sent as a broadcast. Both switch X and switch Y receive the
broadcast; for now, consider just the one received by switch X, on its port 1. Switch X floods the
broadcast to all its other connected ports; in this case, it floods it to port 2. Device B can see the
broadcast, but so can switch Y, on its port 2; switch Y floods the broadcast to its port 1. This
broadcast is received by switch X on its port 1; switch X floods it to its port 2, and so forth. The
broadcast continues to loop around the network, consuming bandwidth and processing power.
This situation is called a broadcast storm.
The second problem that can occur in redundant topologies is that devices can receive multiple
copies of the same frame. For example, assume that neither of the switches in Figure 1-18 has
learned where device B is located. When device A sends data destined for device B, switch X and
switch Y both flood the data to the lower LAN, and device B receives two copies of the same
frame. This might be a problem for device B, depending on what it is and how it is programmed
to handle such a situation.
The third difficulty that can occur in a redundant situation is within the switch itself—the MAC
address table can change rapidly and contain wrong information. Again referring to Figure 1-18,
consider what happens when neither switch has learned where device A or B is located, and device
A sends data to device B. Each switch learns that device A is on its port 1, and each records this
in its MAC address table. Because the switches don’t yet know where device B is, they flood the
frame—in this case, on their port 2. Each switch then receives the frame from the other switch on
its port 2. This frame has device A’s MAC address in the source address field; therefore, both
switches now learn that device A is on their port 2. As a result, the MAC address table is
overwritten. Not only does the MAC address table have incorrect information (device A is actually
connected to port 1, not port 2, of both switches), but because the table changes rapidly, it might
be considered unstable.
To overcome these problems, you must have a way to logically disable part of the redundant
network for regular traffic while maintaining redundancy for the case when an error occurs. STP
does just that.
STP Terminology and Operation
The following sections introduce the IEEE 802.1d STP terminology and operation.
STP Terminology
STP terminology can best be explained by examining how a sample network, such as the one
shown in Figure 1-19, operates.
Figure 1-19
STP Chooses the Port to Block
(Figure summary: switch X (MAC 0000.0c22.2222) and switch Y (MAC 0000.0c11.1111) are joined by a 100-Mbps segment between their port 1 interfaces and a 10-Mbps segment between their port 2 interfaces. Switch Y is the root bridge, so both of its ports are designated ports in the forwarding state. On nonroot switch X, port 1 is the root port (forwarding) because it offers the fastest path to the root, and port 2 is the nondesignated port, placed in the blocking state.)
NOTE Notice that STP terminology refers to the devices as bridges rather than switches.
Within an STP network, one switch is elected as the root bridge—it is at the root of the spanning
tree. All other switches calculate their best path to the root bridge. Their alternative paths are put
in the blocking state. These alternative paths are logically disabled from the perspective of regular
traffic, but the switches still communicate with each other on these paths so that the alternative
paths can be unblocked in case an error occurs on the best path.
All switches running STP (it is turned on by default in Cisco switches) send out Bridge Protocol
Data Units (BPDU). Switches running STP use BPDUs to exchange information with neighboring
switches. One of the fields in the BPDU is the bridge identifier (ID); it comprises a 2-octet bridge
priority and a 6-octet MAC address. STP uses the bridge ID to elect the root bridge—the switch
with the lowest bridge ID is the root bridge. If all bridge priorities are left at their default values,
the switch with the lowest MAC address therefore becomes the root bridge. In Figure 1-19, switch
Y is elected as the root bridge.
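The election rule—lowest bridge ID, compared as priority first and then MAC address—can be sketched as follows. The default priority of 32768 used here is typical of Cisco switches, and the function name is ours.

```python
# STP root bridge election modeled as a comparison of bridge IDs:
# the 2-octet bridge priority followed by the 6-octet MAC address.
# The lowest bridge ID wins.
def elect_root(bridges):
    """bridges: list of (priority, mac_string) tuples; returns the winner."""
    def bridge_id(bridge):
        priority, mac = bridge
        return (priority, int(mac.replace(".", ""), 16))
    return min(bridges, key=bridge_id)

switches = [
    (32768, "0000.0c22.2222"),  # switch X, default priority
    (32768, "0000.0c11.1111"),  # switch Y, default priority
]
print(elect_root(switches))  # switch Y wins: equal priority, lower MAC
```

Because Python compares tuples element by element, the priority is decisive when priorities differ, and the MAC address breaks the tie otherwise—exactly the behavior described in the text.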
All the ports on the root bridge are called designated ports, and they are all in the forwarding
state—that is, they can send and receive data. The STP states are described in the next section.
On all nonroot bridges, one port becomes the root port, and it is also in the forwarding state. The
root port is the one with the lowest cost to the root. The cost of each link is by default inversely
proportional to the link’s bandwidth, so the port with the fastest total path from the switch to the
root bridge is selected as the root port on that switch. In Figure 1-19, port 1 on switch X is the root
port for that switch because it is the fastest way to the root bridge.
NOTE If multiple ports on a switch have the same fastest total path costs to the root bridge,
STP considers other BPDU fields. STP looks first at the bridge IDs in the received BPDUs (the
bridge IDs of the next switch in the path to the root bridge); the port that received the BPDU
with the lowest bridge ID becomes the root port. If these bridge IDs are also equal, the port ID
breaks the tie; the port with the lower port ID becomes the root port. The port ID field includes
a port priority and a port index, which is the port number. Therefore, if the port priorities are the
same (for example, if they are left at their default value), the lower port number becomes the root
port.
Each LAN segment must have one designated port. It is on the switch that has the lowest cost to
the root bridge (or, if the costs are equal, the port on the switch with the lowest bridge ID is
chosen), and it is in the forwarding state. In Figure 1-19, the root bridge has designated ports on
both segments, so no more are required.
NOTE The root bridge sends configuration BPDUs on all its ports periodically—every 2
seconds, by default. These configuration BPDUs include STP timers, therefore ensuring that all
switches in the network use the same timers. On each LAN segment, the switch that has the
designated port forwards the configuration BPDUs to the segment; every switch in the network
therefore receives these BPDUs on its root port.
All ports on a LAN segment that are not root ports or designated ports are called nondesignated
ports and transition to the blocking state—they do not send data, so the redundant topology is
logically disabled. In Figure 1-19, port 2 on switch X is the nondesignated port, and it is in the
blocking state. Blocking ports do, however, listen for BPDUs.
If a failure happens—for example, if a designated port or a root bridge fails—the switches send
topology change BPDUs and recalculate the spanning tree. The new spanning tree does not
include the failed port or switch, and the ports that were previously blocking might now be in the
forwarding state. This is how STP supports the redundancy in a switched network.
STP States
Figure 1-20 illustrates the various STP port states.
Figure 1-20 A Port Can Transition Among STP States
(Figure: the four STP port states and the default timers governing transitions among them. Blocking [listens for BPDUs] moves to Listening [sends and receives BPDUs; the root bridge, root ports, and designated ports are selected] after the max age of 20 seconds; Listening moves to Learning [can populate the MAC address table; sends and receives BPDUs] after the forward delay of 15 seconds; Learning moves to Forwarding [sends and receives data and BPDUs] after another forward delay of 15 seconds, for a total of up to 50 seconds.)
When a port initially comes up, it is put in the blocking state, in which it listens for BPDUs and
then transitions to the listening state. A blocking port in an operational network can also transition
to the listening state if it does not hear any BPDUs for the max-age time (a default of 20 seconds).
While in the listening state, the switch can send and receive BPDUs but not data. The root bridge
and the various final states of all the ports are determined in this state.
If the port is chosen as the root port on a switch, or as a designated port on a segment, that port
transitions to the learning state after the listening state. In the learning state, the port still cannot
send data, but it can start to populate its MAC address table if any data is received. The length of
time spent in each of the listening and learning states is dictated by the value of the forward-delay
parameter, which is 15 seconds by default. After the learning state, the port transitions to the
forwarding state, in which it can operate normally. Alternatively, if in the listening state the port
is not chosen as a root port or designated port, it becomes a nondesignated port and transitions
back to the blocking state.
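The transitions and timers described above can be sketched as a simple state machine. This is a simplified illustration of the transition rules and default timers only, not an implementation of the protocol itself.

```python
# Simplified sketch of STP port state transitions (Figure 1-20).

MAX_AGE = 20        # default seconds a blocking port waits without BPDUs
FORWARD_DELAY = 15  # default seconds spent in each of listening/learning

def next_state(state, chosen_as_root_or_designated):
    """Return (next state, seconds spent in the current state)."""
    if state == "blocking":
        return "listening", MAX_AGE
    if state == "listening":
        if chosen_as_root_or_designated:
            return "learning", FORWARD_DELAY
        return "blocking", FORWARD_DELAY   # nondesignated port
    if state == "learning":
        return "forwarding", FORWARD_DELAY
    return state, 0                        # forwarding is stable

# A port chosen as a root or designated port takes up to
# 20 + 15 + 15 = 50 seconds to reach the forwarding state.
state, total = "blocking", 0
while state != "forwarding":
    state, waited = next_state(state, chosen_as_root_or_designated=True)
    total += waited
```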
KEY POINT Do not confuse the STP learning state with the learning process that the switch goes through to populate its MAC address table. The STP learning state is a transitory state. Although a switch can learn MAC addresses from data frames received on its ports that are in the STP learning state, it does not forward those frames. In a stable network, switch ports are in either the forwarding or blocking state. Ports in the blocking state do not listen to data frames and therefore do not contribute to the switch's MAC address table. Ports in the forwarding state do, of course, listen to (and forward) data frames, and those frames populate the switch's MAC address table.
Several features and enhancements to STP are implemented on Cisco switches to help to reduce
the convergence time—the time it takes for all the switches in a network to agree on the network’s
topology after that topology has changed.
Rapid STP
Rapid STP (RSTP) is defined by IEEE 802.1w. RSTP incorporates many of the Cisco enhancements to STP, resulting in faster convergence. Switches in an RSTP environment converge quickly
by communicating with each other and determining which links can forward, rather than just
waiting for the timers to transition the ports among the various states. RSTP ports take on different
roles than STP ports. The RSTP roles are root, designated, alternate, backup, and disabled. RSTP
port states are also different from STP port states. The RSTP states are discarding, learning, and
forwarding. RSTP is compatible with STP. For example, the 802.1w alternate and backup port roles correspond to the 802.1D blocking port state.
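The correspondence between the two protocols' port states can be captured in a small lookup table. This is an illustrative summary, not vendor code: RSTP collapses the 802.1D disabled, blocking, and listening states into a single discarding state, while learning and forwarding carry over unchanged.

```python
# Sketch: mapping each 802.1D (STP) port state to its 802.1w (RSTP)
# counterpart. The alternate and backup RSTP roles keep a port in the
# discarding state, corresponding to an 802.1D blocking port.

STP_TO_RSTP_STATE = {
    "disabled": "discarding",
    "blocking": "discarding",
    "listening": "discarding",
    "learning": "learning",
    "forwarding": "forwarding",
}

rstp_states = sorted(set(STP_TO_RSTP_STATE.values()))
```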
Virtual LANs
As noted earlier, a broadcast domain includes all devices that receive each other's broadcasts (and
multicasts). All the devices connected to one router port are in the same broadcast domain. Routers
block broadcasts (destined for all networks) and multicasts by default; routers forward only
unicast packets (destined for a specific device) and packets of a special type called directed
broadcasts. Typically, you think of a broadcast domain as being a physical wire, a LAN. But a
broadcast domain can also be a VLAN, a logical construct that can include multiple physical LAN
segments.
KEY POINT The Cisco definition of VLANs is very clear: "[A] group of devices on one or more LANs that are configured (using management software) so that they can communicate as if they were attached to the same wire, when in fact they are located on a number of different LAN segments. Because VLANs are based on logical instead of physical connections, they are extremely flexible." This definition is from "Virtual LANs/VLAN Trunking Protocol (VLANs/VTP)," available at http://www.cisco.com/en/US/tech/tk389/tk689/tsd_technology_support_protocol_home.html.
Figure 1-21 illustrates the VLAN concept. On the left side of the figure, three individual physical
LANs are shown, one each for Engineering, Accounting, and Marketing. These LANs contain
workstations—E1, E2, A1, A2, M1, and M2—and servers—ES, AS, and MS. Instead of physical
LANs, an enterprise can use VLANs, as shown on the right side of the figure. With VLANs,
members of each department can be physically located anywhere, yet still be logically connected
with their own workgroup. Therefore, in the VLAN configuration, all the devices attached to
VLAN E (Engineering) share the same broadcast domain, the devices attached to VLAN A
(Accounting) share a separate broadcast domain, and the devices attached to VLAN M
(Marketing) share a third broadcast domain. Figure 1-21 also illustrates how VLANs can span
multiple switches; the link between the two switches in the figure carries traffic from all three of
the VLANs and is called a trunk.
Figure 1-21 A VLAN Is a Logical Implementation of a Physical LAN
(Figure: on the left, three physical LANs — Engineering [E1, E2, ES], Accounting [A1, A2, AS], and Marketing [M1, M2, MS]. On the right, the same devices are attached to Switch 1 and Switch 2 and grouped into logical VLANs E, A, and M, with a trunk carrying VLANs E, A, and M between the two switches.)
VLAN Membership
KEY POINT A switch port that is not a trunk can belong to only one VLAN at a time. You can either statically or dynamically configure which VLAN a port belongs to.
Static port membership means that the network administrator configures which VLAN the port
belongs to, regardless of the devices attached to it. This means that after you have configured the
ports, you must ensure that the devices attaching to the switch are plugged into the correct port,
and if they move, you must reconfigure the switch.
Alternatively, you can configure dynamic VLAN membership. Some static configuration is still
required, but this time, it is on a separate device called a VLAN Membership Policy Server (VMPS).
The VMPS could be a separate server, or it could be a higher-end switch that contains the VMPS
information. VMPS information consists of a MAC address–to–VLAN map. As a result, ports are
assigned to VLANs based on the MAC address of the device connected to the port. When you
move a device from one port to another port (either on the same switch or on another switch in the
network), the switch dynamically assigns the new port to the proper VLAN for that device by
consulting the VMPS.
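The VMPS lookup just described amounts to a MAC address–to–VLAN map consulted whenever a device appears on a port. The following sketch illustrates that behavior; the MAC addresses, VLAN names, and function names are all hypothetical, not an actual VMPS interface.

```python
# Illustrative sketch of dynamic VLAN membership via a VMPS-style
# MAC address-to-VLAN map. All values here are hypothetical.

VMPS_MAP = {
    "0000.0c12.3456": "Engineering",
    "0000.0c65.4321": "Accounting",
}

def assign_port_vlan(port, device_mac, port_vlans, default_vlan=None):
    """Assign the port to the VLAN mapped to the attached device's MAC."""
    vlan = VMPS_MAP.get(device_mac, default_vlan)
    port_vlans[port] = vlan
    return vlan

ports = {}
# The same device moved from port 3 to port 7 lands in the same VLAN,
# without the administrator reconfiguring the switch.
assign_port_vlan(3, "0000.0c12.3456", ports)
assign_port_vlan(7, "0000.0c12.3456", ports)
```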
Trunks
As mentioned earlier, a port that carries data from multiple VLANs is called a trunk. A trunk port
can be on a switch, a router, or a server. A trunk port can use one of two protocols: Inter-Switch
Link (ISL) or IEEE 802.1Q.
ISL is a Cisco-proprietary trunking protocol that involves encapsulating the data frame between
an ISL header and trailer. The header is 26 bytes long; the trailer is a 4-byte cyclic redundancy
check that is added after the data frame. A 15-bit VLAN ID field is included in the header to
identify the VLAN that the traffic is for. (Only the lower 10 bits of this field are used, thus
supporting 1024 VLANs.)
The 802.1Q protocol is an IEEE standard protocol in which the trunking information is encoded
within a Tag field inserted inside the frame header itself. Trunks using the 802.1Q protocol define
a native VLAN. Traffic for the native VLAN is not tagged; it is carried across the trunk unchanged.
Consequently, end-user stations that don’t understand trunking can communicate with other
devices directly over an 802.1Q trunk as long as they are on the native VLAN. The native VLAN
must be defined to be the same VLAN on both sides of the trunk. Within the Tag field, the 802.1Q
VLAN ID field is 12 bits long, allowing up to 4096 VLANs to be defined. The Tag field also
includes a 3-bit 802.1p user priority field; these bits are used as class of service (CoS) bits for QoS
marking. (Chapter 4 describes QoS.)
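The field widths described above can be verified with a little bit arithmetic. The sketch below packs and unpacks the 16-bit Tag Control Information portion of the 802.1Q tag: 3 bits of 802.1p priority (CoS), 1 CFI bit, and the 12-bit VLAN ID. The function names are illustrative; the bit layout follows the 802.1Q field widths given in the text.

```python
# Sketch: the 16-bit Tag Control Information in an 802.1Q tag.
# 3-bit priority (CoS) | 1-bit CFI | 12-bit VLAN ID (up to 4096 values)

def pack_tci(priority, cfi, vlan_id):
    assert 0 <= priority < 8 and cfi in (0, 1) and 0 <= vlan_id < 4096
    return (priority << 13) | (cfi << 12) | vlan_id

def unpack_tci(tci):
    return tci >> 13, (tci >> 12) & 1, tci & 0x0FFF

tci = pack_tci(priority=5, cfi=0, vlan_id=100)   # CoS 5, VLAN 100
```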
The two types of trunks are not compatible with each other, so both ends of a trunk must be defined
with the same trunk type.
NOTE Multiple switch ports can be logically combined so that they appear as one higher-performance port. Cisco does this with its EtherChannel technology, combining multiple Fast Ethernet or Gigabit Ethernet links. Trunks can be implemented on both individual ports and on these EtherChannel ports.
STP and VLANs
Cisco developed per-VLAN spanning tree (PVST) so that switches can have one instance of STP
running per VLAN, allowing redundant physical links within the network to be used for different
VLANs and thus reducing the load on individual links. PVST is illustrated in Figure 1-22.
Figure 1-22 PVST Allows Redundant Physical Links to Be Used for Different VLANs
(Figure: the physical topology shows switches X and Y redundantly connected through ports 1 and 2. In the logical topology for VLAN A, switch Y is the root bridge and switch X is a nonroot bridge; in the logical topology for VLAN B, switch X is the root bridge and switch Y is a nonroot bridge.)
The top diagram in Figure 1-22 shows the physical topology of the network, with switches X and
Y redundantly connected. In the lower-left diagram, switch Y has been selected as the root bridge
for VLAN A, leaving port 2 on switch X in the blocking state. In contrast, the lower-right diagram
shows that switch X has been selected as the root bridge for VLAN B, leaving port 2 on switch Y
in the blocking state. With this configuration, traffic is shared across all links: traffic for VLAN A travels to the lower LAN out switch Y's port 2, whereas traffic for VLAN B travels to the lower LAN out switch X's port 2.
PVST works only over ISL trunks. However, Cisco extended this functionality for 802.1Q trunks
with the PVST+ protocol. Before this became available, 802.1Q trunks supported only Common
Spanning Tree, with one instance of STP running for all VLANs.
Multiple Spanning Tree (MST), defined in the IEEE 802.1s standard (and also known as Multiple-Instance STP, or MISTP), uses RSTP and allows several VLANs to be grouped into a single spanning-tree instance. Each instance is independent of the other instances so that a link can forward for one group of VLANs while blocking for other VLANs. MST therefore allows traffic to be shared across all the links in the network, but it reduces the number of STP instances that would be required if PVST/PVST+ were implemented.
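The reduction in spanning-tree instances is easy to see with a sketch of the VLAN-to-instance grouping. The instance numbers and VLAN ranges below are hypothetical, chosen only to show that many VLANs can share a few instances.

```python
# Sketch: grouping VLANs into shared spanning-tree instances, as
# described above, instead of one STP instance per VLAN (PVST).

VLAN_TO_INSTANCE = {}

def map_vlans(instance, vlans):
    """Assign every VLAN in the group to one spanning-tree instance."""
    for vlan in vlans:
        VLAN_TO_INSTANCE[vlan] = instance

# 200 VLANs, but only two spanning-tree instances to compute. A link can
# forward for instance 1's VLANs while blocking for instance 2's VLANs.
map_vlans(1, range(1, 101))     # VLANs 1-100 share instance 1
map_vlans(2, range(101, 201))   # VLANs 101-200 share instance 2

instances_needed = len(set(VLAN_TO_INSTANCE.values()))   # 2, not 200
```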
Rapid per-VLAN Spanning Tree Plus (RPVST+) is a Cisco enhancement that combines RSTP with PVST+, running one rapid spanning-tree instance per VLAN.
Inter-VLAN Routing
KEY POINT Just like devices on different LANs, those on different VLANs require a Layer 3 mechanism (a router or a Layer 3 switch) to communicate with each other.
A Layer 3 device can be connected to a switched network in two ways: by using multiple physical
interfaces or through a single interface configured as a trunk. These two connection methods are
shown in Figure 1-23. The diagram on the left illustrates a router with three physical connections
to the switch; each physical connection carries traffic from only one VLAN.
Figure 1-23 A Router, Using Either Multiple Physical Interfaces or a Trunk, Is Required for Communication Among VLANs
(Figure: on the left, a router with three physical connections to the switch, one each for VLANs M, A, and E; on the right, a router with a single trunk connection carrying VLANs E, A, and M. In both cases the switched network contains devices E1, E2, ES, A1, A2, AS, M1, M2, and MS grouped into VLANs E, A, and M.)
The diagram on the right illustrates a router with one physical connection to the switch. The
interfaces on the switch and the router have been configured as trunks; therefore, multiple logical
connections exist between the two devices. When a router is connected to a switch through a trunk,
it is sometimes called a “router on a stick,” because it has only one physical interface (a stick) to
the switch.
Each interface between the switch and the Layer 3 device, whether physical interfaces or logical
interfaces within a trunk, is in a separate VLAN and therefore in a separate subnet for IP networks.
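Because each VLAN corresponds to its own IP subnet, the routing decision reduces to a subnet comparison. The sketch below illustrates this; the subnet assignments are hypothetical, and only the VLAN names come from the figures in the text.

```python
# Sketch: each VLAN maps to its own IP subnet, so traffic between hosts
# in different VLANs must pass through a Layer 3 device.
import ipaddress

VLAN_SUBNETS = {                                  # hypothetical subnets
    "E": ipaddress.ip_network("10.1.1.0/24"),     # Engineering
    "A": ipaddress.ip_network("10.1.2.0/24"),     # Accounting
    "M": ipaddress.ip_network("10.1.3.0/24"),     # Marketing
}

def needs_router(src_ip, dst_ip):
    """True if the two hosts are in different VLAN subnets."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    src_vlan = next(v for v, n in VLAN_SUBNETS.items() if src in n)
    dst_vlan = next(v for v, n in VLAN_SUBNETS.items() if dst in n)
    return src_vlan != dst_vlan

# Same VLAN: switched at Layer 2. Different VLANs: routed at Layer 3.
```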
Comprehensive Example
This section presents a comprehensive example, tying together many of the concepts covered in
the rest of this chapter. Figure 1-24 illustrates the network used in this example.
Figure 1-24 PC1 in New York Is Sending FTP Data to FS1 in London
(Figure: in New York, PC1 [10.1.1.1] connects over Ethernet to switch S1, which connects over Fast Ethernet to router R1. R1 connects to R2 over Frame Relay, and R2 connects to R3 over HDLC. In London, R3 connects over Gigabit Ethernet to switch S2, which connects over Gigabit Ethernet to file server FS1 [172.16.3.5]. Routers R4 and R5 provide alternative paths.)
In this network, PC1, located in New York, has an FTP connection with the file server FS1 in
London. PC1 is transferring a file, using FTP, to FS1. The path between PC1 and FS1 goes through
switch S1; routers R1, R2, and R3; and switch S2, as illustrated by the thick line in the figure. The
routers have communicated, using a routing protocol, to determine the best path between network
10.0.0.0 and network 172.16.0.0. PC1 has an IP address of 10.1.1.1, and FS1 has an IP address of
172.16.3.5. When PC1 first needed to send data to a device on another network, it sent an ARP
request; its default gateway, R1, replied with its own MAC address, which PC1 keeps in its
memory.
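The decision PC1 makes here can be sketched as a simple next-hop lookup: a destination inside the local subnet is framed with that host's MAC address, while a destination on another network is framed with the default gateway's MAC address learned through ARP. The gateway address and MAC labels below are hypothetical; only PC1's subnet and FS1's IP address come from the text.

```python
# Sketch of PC1's next-hop decision. Hypothetical gateway IP and MAC
# placeholder strings; the IP addresses 10.1.1.0/24 and 172.16.3.5 are
# from the example in the text.
import ipaddress

LOCAL_NET = ipaddress.ip_network("10.1.1.0/24")
DEFAULT_GATEWAY = "10.1.1.254"                 # hypothetical R1 address
ARP_CACHE = {DEFAULT_GATEWAY: "R1-MAC"}        # learned via ARP earlier

def dest_mac(dst_ip):
    """Return the MAC address to put in the frame's destination field."""
    if ipaddress.ip_address(dst_ip) in LOCAL_NET:
        return ARP_CACHE.get(dst_ip)           # ARP for a local host
    return ARP_CACHE[DEFAULT_GATEWAY]          # off-net: use the gateway

mac = dest_mac("172.16.3.5")                   # frames to FS1 go to R1
```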
FTP data is now being sent from PC1 to FS1. Figure 1-25 shows how this data flows within the
devices in the network, and what the data looks like at each point within the network.
Figure 1-25 Data Is Encapsulated and Unencapsulated as It Flows Through the Network
(Figure: the layers processed at each device — all seven OSI layers at PC1 and FS1, Layers 1 and 2 at switches S1 and S2, and Layers 1 through 3 at routers R1, R2, and R3 — and the frames at points A through F along the path. At every point, the IP datagram [protocol = 6 (TCP), source IP address = 10.1.1.1, destination IP address = 172.16.3.5] carries a TCP segment [destination port = 20 (FTP data)] containing the FTP data. Point A is an Ethernet frame with source MAC of PC1 and destination MAC of R1; point B is a Fast Ethernet frame with the same MAC addresses; point C is a Frame Relay frame; point D is an HDLC frame; point E is a Gigabit Ethernet frame with source MAC of R3 and destination MAC of FS1; point F is a Gigabit Ethernet frame with the same MAC addresses as point E.)
Starting at the left of Figure 1-25, PC1 prepares the data for transport across the network, and the
resulting frame is shown at point A in the figure. PC1 encapsulates the FTP data in a TCP segment;
the destination port field of the segment is set to 20, indicating that it contains FTP data. This TCP
segment is then encapsulated in an IP datagram. The protocol number of the datagram is set to 6,
indicating that it contains a TCP segment. The source IP address is set to PC1’s address, 10.1.1.1,
whereas the destination IP address is set to FS1’s address, 172.16.3.5. The IP datagram is
encapsulated in an Ethernet frame, with the source MAC address set to PC1’s MAC address and
the destination MAC address set to R1’s MAC address. PC1 then puts the frame on the Ethernet
network, and the bits arrive at S1.
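The nesting just described can be modeled as data structures, one inside the other. This is an illustrative sketch only; the MAC address strings are placeholders, and the field values come from the example in the text.

```python
# Sketch of the encapsulation at point A: FTP data inside a TCP segment
# (destination port 20), inside an IP datagram (protocol 6), inside an
# Ethernet frame. MAC addresses are hypothetical placeholders.

segment = {"dst_port": 20, "payload": b"FTP data"}
datagram = {"protocol": 6, "src_ip": "10.1.1.1",
            "dst_ip": "172.16.3.5", "payload": segment}
frame = {"type": "Ethernet", "src_mac": "PC1-MAC",
         "dst_mac": "R1-MAC", "payload": datagram}

def reframe(frame, new_type, src_mac=None, dst_mac=None):
    """Each hop swaps the Layer 2 frame; the datagram is untouched."""
    new = {"type": new_type, "payload": frame["payload"]}
    if src_mac:
        new["src_mac"] = src_mac
    if dst_mac:
        new["dst_mac"] = dst_mac
    return new

# S1 moves the datagram into a Fast Ethernet frame (point B); the MAC
# addresses and everything inside the datagram stay the same.
frame_b = reframe(frame, "Fast Ethernet",
                  frame["src_mac"], frame["dst_mac"])
```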
S1 receives the frame and looks at the destination MAC address—it is R1’s MAC address. S1
looks in its MAC address table and sees that this MAC address is on its Fast Ethernet port.
Therefore, S1 encapsulates the IP datagram in a Fast Ethernet frame, as shown at point B in the
figure. Notice that the source and destination MAC addresses have not changed in this new frame
type, and that the datagram, segment, and data all remain untouched by the switch. S1 then puts
the frame on the Fast Ethernet network, and the bits arrive at R1.
R1 receives the frame, and because it is destined for R1’s MAC address, R1 unencapsulates the
frame to Layer 3. R1 looks at the destination IP address 172.16.3.5 and compares it to its routing
table. This network is accessible through R2, over a Frame Relay network, so R1 encapsulates the
IP datagram in a Frame Relay frame, as shown at point C in the figure. Notice that the datagram,
segment, and data all remain untouched by the router, but the frame type has changed. R1 then
puts the frame on the Frame Relay network, and the bits arrive at R2.
R2 receives the frame and unencapsulates it to Layer 3. R2 looks at the destination IP address
172.16.3.5 and compares it to its routing table. This network is accessible through R3, over an HDLC
network, so R2 encapsulates the IP datagram in an HDLC frame, as shown at point D in the figure.
Notice that the datagram, segment, and data all remain untouched by the router, but the frame type has
changed again. R2 then puts the frame on the HDLC network, and the bits arrive at R3.
R3 receives the frame and unencapsulates it to Layer 3. R3 looks at the destination IP address
172.16.3.5 and compares it to its routing table. This network is accessible through its Gigabit
Ethernet interface—it is directly connected to that network. When R3 first needed to send data to
FS1, it sent an ARP request; FS1 replied with its own MAC address, which R3 keeps in its
memory. So, R3 encapsulates the IP datagram in a Gigabit Ethernet frame, as shown at point E in
the figure, with the source MAC address set to its own address and the destination MAC address
set to FS1’s address. Notice that the datagram, segment, and data all remain untouched by the
router, but the frame type has changed. The bits arrive at S2.
S2 receives the frame and looks at the destination MAC address—it is FS1’s MAC address. S2
looks in its MAC address table and sees that this MAC address is on another one of its Gigabit
Ethernet ports. Therefore, the IP datagram can stay in a Gigabit Ethernet frame, as shown at point
F in the figure. Notice that the source and destination MAC addresses have not changed in this
frame, and that the datagram, segment, and data all remain untouched by the switch. S2 then puts
the frame on the other Gigabit Ethernet network, and the bits arrive at FS1. FS1 receives the frame,
and because it is destined for FS1’s MAC address, FS1 unencapsulates the frame to Layer 3. FS1
looks at the destination IP address and determines that it is its own address. Therefore, FS1
unencapsulates the segment and the FTP data and then sends it to its FTP application. The FTP
data is now at its destination.
KEY POINT At each communication layer, the same protocol must be used at each side of a connection. For example, PC1 is sending data to FS1 using FTP, so both PC1 and FS1 must support FTP at the application layer. If they don't, the session will fail, and data will not be sent. Note, however, that the FTP data can go through many different types of media — Layers 1 and 2 — on its way to FS1. The devices (switches, routers, PC, and file server) all unencapsulate up to at least Layer 2; thus, both sides of each connection between these devices must support the same Layers 1 and 2. For example, if PC1 supported only Ethernet and S1 supported only Fast Ethernet, they would not be able to communicate. Because S1 has an Ethernet port, it can connect to PC1 and then convert the data to send out on its Fast Ethernet port.
Summary
In this chapter, you learned about fundamental networking concepts; these concepts form a solid
foundation for understanding the rest of this book. The following topics were explored:
■ Introduction to networks
■ Discussion of networking protocols and the OSI model, a key component of networking and the basis of modern protocol suites
■ LANs and WANs
■ Network devices, including hubs, switches, and routers
■ Introduction to the TCP/IP suite and a discussion of the IP, TCP, and UDP protocols
■ Routing, including an introduction to routing protocols
■ Addressing, including MAC and IP addresses
■ Layer 2 and Layer 3 switching
■ Use and operation of STP in Layer 2 networks
■ Concept and operation of VLANs
■ Comprehensive example illustrating the encapsulation and unencapsulation processes
This chapter introduces a network design methodology and presents guidelines for building an effective network design solution. It includes the following sections:
■ The Cisco Service Oriented Network Architecture
■ Network Design Methodology
■ Identifying Customer Requirements
■ Characterizing the Existing Network and Sites
■ Using the Top-Down Approach to Network Design
■ The Design Implementation Process
■ Summary
■ References
■ Case Study: ACMC Hospital Network Upgrade
■ Review Questions

CHAPTER 2
Applying a Methodology to Network Design
This chapter begins with an introduction to the Cisco vision of intelligent networks and the
Service Oriented Network Architecture (SONA) architectural framework. The lifecycle of a
network and a network design methodology based on the lifecycle are presented. Each phase of
the network design process is explored in detail, starting with how to identify customer
requirements, including organizational and technical goals and constraints. Because many
customers build on an existing network and at existing sites, this chapter also presents methods
of characterizing that existing network and those sites. A top-down approach to design and
structured design principles are presented. The design process includes a discussion about
building a prototype or pilot and the appropriate content of a design specification. The chapter
concludes with a discussion of the design implementation process.
The Cisco Service Oriented Network Architecture
The extremely rich variety of application-level business solutions available today and the need
to integrate these applications drives the need for a new network architecture. This section
introduces the Cisco vision and framework that enable customers to build a more intelligent
network infrastructure. The Cisco SONA architectural framework shifts the view of the network
from a pure traffic transport-oriented view toward a service- and application-oriented view.
Business Drivers for a New Network Architecture
New business requirements, the growth of applications, and the evolution of IT combine to drive
the need for a new network architecture. In today’s business environment, intense competition
and time-to-market pressures are prompting enterprises to look for new IT solutions that can
help them better respond to market and customer demands. Consumers are asking for new
products and service offerings—and they want them fast. They are also demanding improved
customer service, enhanced customization flexibility, and greater security, all at a lower cost.
Modern networks connect multiple resources and information assets within the organization as
well as provide access to external resources. In this environment, the IT model has evolved from
mainframes, to client/server models, to Internet applications, as illustrated in Figure 2-1. The
Cisco vision of the next phase of IT evolution is a real-time infrastructure that integrates the
network and the applications as one system.
Figure 2-1 IT Evolution from Connectivity to Intelligent Systems
(Figure: the progression from the mainframe era [proprietary networks; an integrated system for terminal-to-mainframe connectivity, VTAM], to client/server [packet networks; demand for networks to connect multivendor devices], to Internet [a network of networks; pervasive, open networks enable client/server to extend beyond corporate boundaries, TCP/IP], to the real-time infrastructure [a new network architecture, the Intelligent Information Network, in which the network and applications work together as an integrated system].)
Organizations are finding that networking is no longer just about connectivity; rather, network
intelligence is starting to play a role in improving business performance and processes.
Intelligence enhances the network’s role as a foundation for enabling communication,
collaboration, and business success. With increased awareness of the applications that operate on
the network foundation, the network becomes an active participant in applications, network
management, business systems, and services to enable them to work better.
The network is the common single element that connects and enables all components of the IT
infrastructure.
Organizations need their networks to evolve to intelligent systems that participate actively in the
delivery of applications to effectively reach the goals of improved productivity, reduced time to
market, greater revenue, lower expenses, and stronger customer relationships. An effective
network provides the foundation for transforming business practices.
Intelligence in the Network
Integrating intelligence into the network involves aligning network and business requirements. To
accommodate today’s and tomorrow’s network requirements, the Cisco vision of the future
includes the Intelligent Information Network (IIN), a strategy that addresses how the network is
integrated with businesses and business priorities. This vision encompasses the following features:
■ Integration of networked resources and information assets that have been largely unlinked: The modern converged networks with integrated voice, video, and data require that IT departments (and other departments traditionally responsible for other technologies) more closely link the IT infrastructure with the network.
■ Intelligence across multiple products and infrastructure layers: The intelligence built in to each component of the network is extended networkwide and applies end-to-end.
■ Active participation of the network in the delivery of services and applications: With added intelligence, it is possible for the network to actively manage, monitor, and optimize service and application delivery across the entire IT environment.
KEY POINT The intelligent network offers much more than basic connectivity, bandwidth for users, and access to applications. It offers end-to-end functionality and centralized, unified control that promotes true business transparency and agility.
With this technology vision, Cisco is helping organizations address new IT challenges, such as the
deployment of service-oriented architectures, web services, and virtualization (as described in the
upcoming Phase 2 bullet). This vision offers an evolutionary approach that consists of three phases
in which functionality can be added to the infrastructure as required. The three phases are
illustrated in Figure 2-2 and described as follows:
■ Phase 1: Integrated transport: Everything (data, voice, and video) consolidates onto an IP network for secure network convergence. By integrating data, voice, and video transport into a single standards-based modular network, organizations can simplify network management and generate enterprisewide efficiencies. Network convergence also lays the foundation for a new class of IP-enabled applications, now known as Cisco Unified Communications solutions.
NOTE Cisco Unified Communications is the name, launched in March 2006, for the entire
range of what were previously known as Cisco IP communications products. These include all
call control, conferencing, voice mail and messaging, customer contact, IP phone, video
telephony, videoconferencing, rich media clients, and voice application products.
■ Phase 2: Integrated services: When the network infrastructure is converged, IT resources can be pooled and shared, or virtualized, to flexibly address the changing needs of the organization. By extending this virtualization concept to encompass server, storage, and network elements, an organization can transparently use all its resources more efficiently. Business continuity is also enhanced because in the event of a local systems failure, shared resources across the intelligent network can provide needed services.
■ Phase 3: Integrated applications: This phase focuses on making the network application-aware so that it can optimize application performance and more efficiently deliver networked applications to users. In addition to capabilities such as content caching, load balancing, and application-level security, application network services make it possible for the network to simplify the application infrastructure by integrating intelligent application message handling, optimization, and security into the existing network.
Figure 2-2 Intelligence in the Network
(Figure: three phases over time — Phase 1, integrated transport [intelligent movement of data, voice, and video across a system of networks]; Phase 2, integrated services [virtualized resources and services]; Phase 3, integrated applications [network-enabled applications] — with network intelligence increasing through the phases.)
NOTE You can access the IIN home page at http://www.cisco.com/go/iin.
NOTE The IT industry is currently deploying Phase 2 integrated services. With Application-Oriented Networking technology, Cisco has entered Phase 3, and the industry is starting to define Phase 3 integrated applications.
Cisco SONA Framework
The Cisco SONA is an architectural framework that illustrates how to build integrated systems and
guides the evolution of enterprises toward more intelligent networks. Using the SONA framework,
enterprises can improve flexibility and increase efficiency by optimizing applications, business
processes, and resources to enable IT to have a greater effect on business.
The SONA framework leverages the extensive product-line services, proven architectures, and
experience of Cisco and its partners to help enterprises achieve their business goals.
The SONA framework, shown in Figure 2-3, shows how integrated systems can allow a dynamic,
flexible architecture and provide for operational efficiency through standardization and
virtualization.
Figure 2-3 Cisco SONA Framework
(Figure: three layers — an Application layer containing business applications and collaboration applications, with a collaboration sublayer; an Interactive Services layer containing application networking services, infrastructure services, and adaptive management services; and a Networked Infrastructure layer containing the places in the network with servers, storage, and clients.)
KEY POINT: In the SONA framework, the network is the common element that connects and enables all components of the IT infrastructure.
The SONA framework defines the following three layers:
■ Networked Infrastructure layer: Where all the IT resources are interconnected across a converged network foundation. The IT resources include servers, storage, and clients. The Networked Infrastructure layer represents how these resources exist in different places in the network, including the campus, branch, data center, enterprise edge, WAN, metropolitan-area network (MAN), and with the teleworker. The objective of this layer is to provide connectivity, anywhere and anytime. The Networked Infrastructure layer includes the network devices and links to connect servers, storage, and clients in different places in the network.
■ Interactive Services layer: Includes both application networking services and infrastructure services. This layer enables efficient allocation of resources to applications and business processes delivered through the networked infrastructure. This layer includes the following services:
— Voice and collaboration services
— Mobility services
— Wireless services
— Security and identity services
— Storage services
— Compute services
— Application networking services (content networking services)
— Network infrastructure virtualization
— Adaptive network management services
— Quality of service (QoS)
— High availability
— IP multicast
■ Application layer: This layer includes business applications and collaboration applications. The objective of this layer is to meet business requirements and achieve efficiencies by leveraging the Interactive Services layer. This layer includes the following collaborative applications:
— Instant messaging
— Cisco Unified Contact Center
— Cisco Unity (unified messaging)
— Cisco IP Communicator and Cisco Unified IP Phones
— Cisco Unified MeetingPlace
— Video delivery using Cisco Digital Media System
— IP telephony
NOTE The preceding lists include voice as an infrastructure service and IP telephony as an
application. Note that some Cisco documentation uses the term IP telephony to describe the
infrastructure service supported by other services, such as voice. To avoid ambiguity, the term
IP telephony is used in this book to describe the network application supported by other
services, such as voice.
Figure 2-4 illustrates some of these SONA offerings within each of the layers.
Figure 2-4  Cisco SONA Offerings

[Figure 2-4 maps offerings to the three layers: the Application layer (business applications, instant messaging, unified messaging, Cisco Unified MeetingPlace, Cisco Unified Contact Center, IP phone, and video delivery, with Application-Oriented Networking); the Interactive Services layer (voice and collaboration, security, mobility, compute, storage, identity, application delivery, and adaptive management services; virtualization; services management; advanced analytics and decision support; and network infrastructure virtualization); and the Networked Infrastructure layer (infrastructure management for servers, storage, and clients across the campus, branch, data center, enterprise edge, WAN and MAN, and teleworker).]

NOTE You can access the SONA home page at http://www.cisco.com/go/sona.
The benefits of SONA include the following:
■ Functionality: Supports the organizational requirements.
■ Scalability: Supports growth and expansion of organizational tasks by separating functions and products into layers; this separation makes it easier to grow the network.
■ Availability: Provides the necessary services, reliably, anywhere, anytime.
■ Performance: Provides the desired responsiveness, throughput, and utilization on a per-application basis through the network infrastructure and services.
■ Manageability: Provides control, performance monitoring, and fault detection.
■ Efficiency: Provides the required network services and infrastructure with reasonable operational costs and appropriate capital investment on a migration path to a more intelligent network, through step-by-step network services growth.
■ Security: Provides for an effective balance between usability and security while protecting information assets and infrastructure from inside and outside threats.
Network Design Methodology
The network design methodology presented in this section is derived from the Cisco Prepare, Plan,
Design, Implement, Operate, and Optimize (PPDIOO) methodology, which reflects a network’s
lifecycle. The following sections describe the PPDIOO phases and their relation to the network
design methodology, and the benefits of the lifecycle approach to network design. Subsequent
sections explain the design methodology in detail.
Design as an Integral Part of the PPDIOO Methodology
The PPDIOO network lifecycle, illustrated in Figure 2-5, reflects the phases of a standard
network’s lifecycle. As shown in this figure, the PPDIOO lifecycle phases are separate, yet closely
related.
Figure 2-5  PPDIOO Network Lifecycle Influences Design

[Figure 2-5 shows the six phases as a cycle: Prepare (coordinated planning and strategy; make sound financial decisions), Plan (assess readiness; can the network support the proposed system?), Design (design the solution; products, service, and support aligned to requirements), Implement (implement the solution; integrate without disruption or causing vulnerability), Operate (maintain network health; manage, resolve, repair, replace), and Optimize (operational excellence; adapt to changing business requirements).]
The following describes each PPDIOO phase:
■ Prepare phase: The Prepare phase involves establishing the organizational (business) requirements, developing a network strategy, and proposing a high-level conceptual architecture, identifying technologies that can best support the architecture. Financial justification for the network strategy is established by assessing the business case for the proposed architecture.
■ Plan phase: This phase involves identifying the network requirements, which are based on the goals for the network, where the network will be installed, who will require which network services, and so forth. The Plan phase also involves assessing the sites where the network will be installed and any existing networks, and performing a gap analysis to determine if the existing system infrastructure, sites, and operational environment can support the proposed system. A project plan helps manage the tasks, responsibilities, critical milestones, and resources required to implement the changes to the network. The project plan should align with the scope, cost, and resource parameters established in the original business requirements. The output of this phase is a set of network requirements.
■ Design phase: The initial requirements determined in the Plan phase drive the network design specialists' activities. These specialists design the network according to those initial requirements, incorporating any additional data gathered during network analysis and network audit (when upgrading an existing network) and through discussion with managers and network users. The network design specification that is produced is a comprehensive detailed design that meets current business and technical requirements and incorporates specifications to support availability, reliability, security, scalability, and performance. This design specification provides the basis for the implementation activities.
■ Implement phase: Implementation and verification begin after the design has been approved. The network and any additional components are built according to the design specifications, with the goal of integrating devices without disrupting the existing network or creating points of vulnerability.
■ Operate phase: Operation is the final test of the design's appropriateness. The Operate phase involves maintaining network health through day-to-day operations, which might include maintaining high availability and reducing expenses. The fault detection and correction and performance monitoring that occur in daily operations provide initial data for the network lifecycle's Optimize phase.
■ Optimize phase: The Optimize phase is based on proactive network management, the goal of which is to identify and resolve issues before real problems arise and the organization is affected. Reactive fault detection and correction (troubleshooting) are necessary when proactive management cannot predict and mitigate the failures. In the PPDIOO process, the Optimize phase might lead to network redesign if too many network problems or errors arise, if performance does not meet expectations, or if new applications are identified to support organizational and technical requirements.
Although Design is one of the six PPDIOO phases, all the other phases influence design decisions,
and the Design phase interacts closely with them, as follows:
■ The requirements derived from the Prepare and Plan phases are the basis for network design.
■ The Implement phase includes the initial verification of the design on the actual network.
■ During the Operate and Optimize phases, the final decision is made about the appropriateness of the design, based on network analysis and any problems that arise. The network might have to be redesigned to correct any discovered errors.
Benefits of the Lifecycle Approach to Network Design
The network lifecycle approach provides many benefits, including the following:
■ Lowering the total cost of network ownership:
— Identifying and validating technology requirements
— Planning for infrastructure changes and resource requirements
— Developing a sound network design aligned with technical requirements and business goals
— Accelerating successful implementation
— Improving the efficiency of the network and of the staff supporting it
— Reducing operating expenses by improving the efficiency of operation processes and tools
■ Increasing network availability:
— Assessing the state of the network’s security and its ability to support the proposed design
— Specifying the correct set of hardware and software releases and keeping them operational and current
— Producing a sound operational design and validating network operation
— Staging and testing the proposed system before deployment
— Improving staff skills
— Proactively monitoring the system and assessing availability trends and alerts
— Proactively identifying security breaches and defining remediation plans
■ Improving business agility:
— Establishing business requirements and technology strategies
— Readying sites to support the system to be implemented
— Integrating technical requirements and business goals into a detailed design and demonstrating that the network is functioning as specified
— Expertly installing, configuring, and integrating system components
— Continually enhancing performance
■ Accelerating access to applications and services:
— Assessing and improving operational preparedness to support current and planned network technologies and services
— Improving service-delivery efficiency and effectiveness by increasing availability, resource capacity, and performance
— Improving the availability, reliability, and stability of the network and the applications running on it
— Managing and resolving problems affecting the system and keeping software applications current
Design Methodology
When working in an environment that requires creative production on a tight schedule—for
example, when designing an internetwork—using a methodology can be helpful. A methodology
is a documented, systematic way of doing something.
Following a design methodology can have many advantages:
■ It ensures that no step is missed when the process is followed.
■ It provides a framework for the design process deliverables.
■ It encourages consistency in the creative process, enabling network designers to set appropriate deadlines and maintain customer and manager satisfaction.
■ It allows customers and managers to validate that the designers have thought about how to meet their requirements.
The design methodology presented here includes three basic steps; some of the design
methodology steps are intrinsic to the PPDIOO Design phase, whereas other steps are related to
other PPDIOO phases:
Step 1  Identify customer requirements: In this step, which is typically completed during the PPDIOO Prepare phase, key decision makers identify the initial requirements. Based on these requirements, a high-level conceptual architecture is proposed.

Step 2  Characterize the existing network and sites: The Plan phase involves characterizing sites and assessing any existing networks, and performing a gap analysis to determine whether the existing system infrastructure, sites, and operational environment can support the proposed system. Characterization of the existing network and sites includes site and network audits and network analysis. During the network audit, the existing network is thoroughly checked for integrity and quality. During the network analysis, network behavior (traffic, congestion, and so forth) is analyzed.

Step 3  Design the network topology and solutions: In this step, the detailed design of the network is created. Decisions are made about networked infrastructure, infrastructure services, and applications. The data for making these decisions is gathered during the first two steps. A pilot or prototype network might be constructed to verify the correctness of the design and to identify and correct any problems as a proof of concept before implementing the entire network. A detailed design document is also written during this step; it includes information that has been documented in the previous steps.
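The three steps above, and the PPDIOO phase each belongs to, can be captured as data. The following is an illustrative sketch; the names and helper function are hypothetical, not part of the methodology itself:

```python
# Hypothetical sketch: each design-methodology step paired with the
# PPDIOO phase in which it is performed, per the text.
DESIGN_STEPS = [
    ("Identify customer requirements", "Prepare"),
    ("Characterize the existing network and sites", "Plan"),
    ("Design the network topology and solutions", "Design"),
]

def steps_in_phase(phase):
    """List the design-methodology steps performed in a given PPDIOO phase."""
    return [step for step, p in DESIGN_STEPS if p == phase]
```

For example, `steps_in_phase("Plan")` returns the site-characterization step, reflecting that Step 2 is performed during the Plan phase rather than the Design phase.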
When the design is complete, the design implementation process is executed; this process includes the following steps:

Step 1  Plan the implementation: During this step, the implementation procedures are prepared in advance to expedite and clarify the actual implementation. Cost assessment is also undertaken at this time. This step is performed during the PPDIOO Design phase.

Step 2  Implement and verify the design: The actual implementation and verification of the design take place during this step by building the network. This step maps directly to the Implement phase of the PPDIOO methodology.
NOTE A pilot or prototype network verifies the design somewhat; however, the design is not
truly verified until it is actually implemented.
Step 3  Monitor and optionally redesign: The network is put into operation after it is built. During operation, the network is constantly monitored and checked for errors. If troubleshooting problems become too frequent or even impossible to manage, a network redesign might be required; this can be avoided if all previous steps have been completed properly. This step is, in fact, a part of the Operate and Optimize phases of the PPDIOO methodology.
The remaining sections in this chapter detail each of the design methodology steps, followed by a
brief discussion of the implementation process steps.
Identifying Customer Requirements
As the organization’s network grows, so does the organization’s dependency on the network and
the applications that use it. Network-accessible organizational data and mission-critical
applications that are essential to the organization’s operations depend on network availability.
To design a network that meets customers’ needs, the organizational goals, organizational
constraints, technical goals, and technical constraints must be identified. This section describes the
process of determining which applications and network services already exist and which ones are
planned, along with associated organizational and technical goals and constraints. We begin by
explaining how to assess the scope of the design project. After gathering all customer
requirements, the designer must identify and obtain any missing information and reassess the
scope of the design project to develop a comprehensive understanding of the customer’s needs.
Assessing the Scope of a Network Design Project
When assessing the scope of a network design, consider the following:
■ Whether the design is for a new network or is a modification of an existing network.
■ Whether the design is for an entire enterprise network, a subset of the network, or a single segment or module. For example, the designer must ascertain whether the design is for a set of campus LANs, a WAN, or a remote-access network.
■ Whether the design addresses a single function or the network’s entire functionality.
Examples of designs that would involve the entire network include one in which all branch office
LANs are upgraded to support Fast Ethernet, and a migration from traditional Private Branch
Exchange (PBX)–based telephony to an IP telephony solution. A project to reduce bottlenecks on
a slow WAN is an example that would likely affect only the WAN. Adding wireless client mobility
or provisioning core redundancy are designs that would likely affect only the campus.
The Open Systems Interconnection (OSI) reference model is important during the design phase.
The network designer should review the project scope from the protocol layer perspective and
decide whether the design is needed for only the network layer, or if other layers are also involved.
For example:
■ The network layer includes the routing and addressing design.
■ The application layer includes the design of application data transport (such as transporting voice).
■ The physical and data link layers include decisions about the connection types and the technologies to be used, such as Gigabit Ethernet, Asynchronous Transfer Mode, and Frame Relay.
NOTE Appendix C, “Open System Interconnection (OSI) Reference Model,” details the
seven layers of the OSI reference model.
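The layer-perspective scope review can be illustrated with a small helper that maps requested design features to the OSI layers they touch. The mapping follows the examples above; the function and feature names are hypothetical conveniences, not standard terminology:

```python
# Illustrative (hypothetical) mapping of design features to the OSI
# layers involved, following the examples in the text.
FEATURE_LAYERS = {
    "routing and addressing design": "network",
    "application data transport": "application",
    "connection types and technologies": "physical/data link",
}

def layers_in_scope(features):
    """Return the sorted, de-duplicated set of OSI layers touched by the
    requested design features; unknown features are ignored."""
    return sorted({FEATURE_LAYERS[f] for f in features if f in FEATURE_LAYERS})
```

A design limited to a new addressing plan would involve only the network layer, whereas one that also selects Gigabit Ethernet links would involve the physical and data link layers as well.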
Table 2-1 exhibits sample results of assessing the scope of design for a sample enterprise,
Corporation X.
Table 2-1  Corporation X Network Design Scope Assessment

Scope of Design   Comments
Entire network    The backbone at the central office needs to be redesigned. All branch offices’ LANs will be upgraded to Fast Ethernet technology.
Network layer     Introduction of private IP addresses requires a new addressing plan. Certain LANs must also be segmented. Routing must be redesigned to support the new addressing plan and to provide greater reliability and redundancy.
Data link layer   The central office backbone and some branch offices require redundant equipment and redundant links. The organization also requires a campus wireless radio frequency (RF) site survey to determine mobility deployment options and equipment scope.
Identifying Required Information
Determining requirements includes extracting initial requirements from the customer and then
refining these with other data that has been collected from the organization.
Extracting Initial Requirements
Initial design requirements are typically extracted from the Request for Proposal (RFP) or Request
for Information (RFI) documents that the customer issues. An RFP is a formal request to vendors
for proposals that meet the requirements that the document identifies. An RFI is typically a less
formal document an organization issues to solicit ideas and information from vendors about a
specific project.
The first step in the design process should be predocumenting (sifting, processing, reordering,
translating, and so forth) the design requirements and reviewing them with the customer for
verification and approval, obtaining direct customer input, in either oral or written form.
Figure 2-6 illustrates an iterative approach to developing the design requirements document.
Figure 2-6  Iterative Approach to Identifying Customer Requirements

[Figure 2-6 shows the designer extracting requirements from the customer’s RFP, querying the customer, producing a draft design requirements document, and looping between customer verification and document revision while comments to the design requirements remain.]
Figure 2-6 illustrates the following steps:

Step 1  Extract the initial customer requirements (from the RFP or RFI).
Step 2  Query the customer for a verbal description of the initial requirements.
Step 3  Produce a draft document that describes the design requirements.
Step 4  Verify the design requirements with the customer, and obtain customer approval.
Step 5  Revise the document as necessary to eliminate errors and omissions.

Steps 2 to 5 are repeated if the customer has additional comments about the draft document.
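The iterative loop behind these steps can be sketched in Python. The customer object and its methods below are hypothetical stand-ins for the real conversations with the customer; the function is illustrative only:

```python
# Minimal sketch of the iterative requirements-document loop.
# `customer` is assumed to provide describe_requirements() and
# review(draft), which returns (old, new) correction pairs, or an
# empty list once the customer approves the draft.
def develop_requirements_document(rfp_requirements, customer, max_rounds=10):
    draft = list(rfp_requirements)              # Step 1: extract from RFP/RFI
    draft += customer.describe_requirements()   # Step 2: verbal description
    for _ in range(max_rounds):                 # Steps 3-5 repeat while comments remain
        comments = customer.review(draft)       # Step 4: verify with customer
        if not comments:
            return draft                        # approved: requirements are final
        for old, new in comments:               # Step 5: revise errors and omissions
            draft[draft.index(old)] = new
    raise RuntimeError("requirements did not converge")
```

The bounded loop mirrors the figure’s “while there are comments to the design requirements” condition: revision continues until the customer has no further comments.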
Gathering Network Requirements
As illustrated in Figure 2-7, the process of gathering requirements can be broken down into five steps. During these steps (which are sometimes called milestones), the designer discusses the project with the customer’s staff to determine and gather the necessary data, including appropriate documentation.
Figure 2-7  Gathering Data for Design Requirements

[Figure 2-7 shows five steps applied to the organization: identify network applications and network services; define organizational goals; define and check organizational constraints; define technical goals; define and check technical constraints; and then document the collected information.]
As shown in Figure 2-7, the steps are as follows:

Step 1  Identify the planned network applications and network services.
Step 2  Determine the organizational goals.
Step 3  Determine the possible organizational constraints.
Step 4  Determine the technical goals.
Step 5  Determine the technical constraints that must be taken into account.
These steps provide the designer with data that must be carefully interpreted, analyzed, and
presented to support the design proposal. Throughout these steps, the designer takes thorough
notes, produces documentation, and presents the findings to the customer for further discussion.
The process is not unidirectional; the designer might return to a step and make additional inquiries
about issues as they arise during the design process. The next five sections detail these steps.
Planned Applications and Network Services
The designer must determine which applications the customer is planning to use and the
importance of each of these applications. Using a table helps organize and categorize the
applications and services planned; the table should contain the following information:
■ Planned application types: Include e-mail, groupware (tools that aid group work), voice networking, web browsing, video on demand (VoD), databases, file sharing and transfer, computer-aided manufacturing, and so forth.
■ Applications: Specific applications that will be used, such as Microsoft Internet Explorer, Cisco Unified MeetingPlace, and so forth.
■ Level of importance: The importance of the applications—whether critical, important, or not important—is noted.
■ Comments: Additional notes taken during the data-gathering process.
Table 2-2 shows an example of data gathered about the planned applications for the sample
company, Corporation X.
Table 2-2  Corporation X’s Planned Applications

Application Type                Application                                              Level of Importance    Comments
E-mail                          Microsoft Office Outlook                                 Important
Groupware                       Cisco Unified MeetingPlace                               Important              Need to be able to share presentations and applications during remote meetings
Web browsing                    Microsoft Internet Explorer, Netscape Navigator, Opera   Important
Video on demand                 Cisco Digital Media System                               Critical
Database                        Oracle                                                   Critical               All data storage is based on Oracle
Customer support applications   Custom applications                                      Critical
NOTE The Cisco Digital Media System is an enhanced system that can be used in place of the Cisco Internet Protocol Television (IP/TV) products; Cisco has announced the end-of-sale and end-of-life dates for the Cisco IP/TV 3400 Series products. See http://www.cisco.com/en/US/netsol/ns620/networking_solutions_white_paper0900aecd80537d33.shtml for more details.
NOTE Information on the Opera browser is available at http://www.opera.com/.
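Capturing such a table as structured data makes it easy to extract, for example, the critical applications. The following sketch mirrors Table 2-2; the record format and helper function are illustrative assumptions, not part of the methodology:

```python
# Illustrative sketch: Table 2-2 as a list of records, so that
# applications can be filtered by importance programmatically.
PLANNED_APPS = [
    {"type": "E-mail",          "app": "Microsoft Office Outlook",   "importance": "important"},
    {"type": "Groupware",       "app": "Cisco Unified MeetingPlace", "importance": "important"},
    {"type": "Web browsing",    "app": "Internet Explorer, Netscape Navigator, Opera", "importance": "important"},
    {"type": "Video on demand", "app": "Cisco Digital Media System", "importance": "critical"},
    {"type": "Database",        "app": "Oracle",                     "importance": "critical"},
]

def critical_apps(apps):
    """Return the application names whose importance level is 'critical'."""
    return [a["app"] for a in apps if a["importance"] == "critical"]
```

A designer could use such a filter to decide which applications drive availability and QoS requirements in the subsequent design steps.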
The planned infrastructure services table is similar to the planned application table. It lists
infrastructure services that are planned for the network and additional comments about those
services.
Recall that infrastructure services include security, QoS, network management, high availability,
and IP multicast. Software distribution, backup, directory services, host naming, and user
authentication and authorization are examples of other services and solutions that are deployed to
support a typical organization’s many applications. Table 2-3 shows sample data that was gathered
about the infrastructure services planned for the sample company, Corporation X.
Table 2-3  Corporation X’s Planned Infrastructure Services

Service             Comments
Security            Deploy security systematically: firewall technology to protect the internal network; a virus-scanning application to check incoming traffic for viruses; intrusion detection and prevention systems to protect from and inform about possible outside intrusions. Consider the use of authentication, authorization, and accounting systems to ensure that only authenticated and authorized users have access to specific services.
QoS                 Implementation of QoS to prioritize more important and more delay-sensitive traffic over less important traffic (higher priority for voice and database traffic; lower priority for HTTP traffic).
Network management  Introduction and installation of centralized network management tools (such as HP OpenView with CiscoWorks applications) for easier and more efficient network management.
High availability   Use redundant paths and terminate connections on different network devices to eliminate single points of failure.
IP multicast        Introduction of IP multicast services needed for the introduction of videoconferencing and e-learning solutions.
Voice               Company wants to migrate to IP telephony.
Mobility            Need mobility for employees and guest access for clients.
Organizational Goals
Every design project should begin by determining the organizational goals that are to be achieved.
The criteria for success must be determined, and the consequences of a failure understood.
Network designers are often eager to start by analyzing the technical goals before considering the
organizational goals and constraints. However, detailed attention to organizational goals and
constraints is important for a project’s success. In discussions about organizational goals, the
designer obtains knowledge about the customer’s expectations of the design’s positive outcomes
for the organization. Both short- and long-term goals should be identified. This organization-centered approach allows the network to become a strategic asset and competitive weapon for the customer.
Preliminary research on the organization’s activities, products, processes, services, market,
suppliers, competitive advantages, and structure enhances the positioning of the technologies and
products to be used in the network.
This is an opportunity to determine what is important to the customer. Some sample questions a
designer might ask to help determine organizational goals include the following:
■ What are you trying to accomplish with this project?
■ What business challenges are you currently facing?
■ What are the consequences of not resolving these issues?
■ How would you measure or quantify success if you could fix or correct the identified problems and issues?
■ What applications are most critical to your organization?
■ What is the major objective of this project?
■ What is driving the change?
■ Do you need to support any government, safety, or legal mandates?
■ What are your main concerns with the implementation of a new solution?
■ What technologies or services are needed to support your objectives?
■ What other technology projects and business initiatives will affect your group in the next two to five years?
■ What skill sets does your technical staff currently have?
■ What is your goal for return on investment?
Organizational goals differ from organization to organization. The following are some typical
goals that commercial organizations might have:
■ Increase the operation’s generated revenue and profitability. A new design should reduce costs in certain segments and propel growth in others. The network designer should discuss with the customer any expectations about how the new network will influence revenues and profits.
■ Shorten development cycles and enhance productivity by improving internal data availability and interdepartmental communications.
■ Improve customer support and offer additional customer services that can expedite reaction to customer needs and improve customer satisfaction.
■ Open the organization’s information infrastructure to all key stakeholders (prospects, investors, customers, partners, suppliers, and employees), and build relationships and information accessibility to a new level.
NOTE Similar, though not identical, goals are common to governmental, charitable,
religious, and educational organizations. Most of these entities focus on using available
resources effectively to attain the organization’s goals and objectives. In not-for-profit
organizations, key measures are typically stated in terms of cost containment, service quality,
service expansion, and resource deployment. This section emphasizes the deployment of
networks in commercial organizations as an example of the type of research required for
establishing the network requirements.
To illustrate the importance of considering organizational goals in a network design, consider two
manufacturing enterprises that are contemplating network updates. Enterprise A’s main reason for
change is to improve customer satisfaction. It has received many complaints that customer
information is difficult to obtain and understand, and there is a need for online ordering capability.
In contrast, Enterprise B is driven by the need to reduce costs—this is a mandate from its CEO.
When design decisions are made, these goals will most likely result in different outcomes. For
example, Enterprise A might choose to implement an integrated product information database
with e-commerce capability, whereas Enterprise B might not see the value of investing in this
technology.
Following are examples of the types of data that can be gathered about some common organizational goals:

■ Increase competitiveness: List competitive organizations and their advantages and weaknesses. Note possible improvements that might increase competitiveness or effectiveness.
■ Reduce costs: Reducing operational costs can result in increased profitability (even without a revenue increase) or increased services with the same revenue. List current expenses to help determine where costs could be reduced.
■ Improve customer support: Customer support services help provide a competitive advantage. List current customer support services, with comments about possible and desired improvements.
■ Add new customer services: List current customer services, and note future and desired (requested) services.
Table 2-4 presents data gathered about the organizational goals of a sample company, Corporation X.
Table 2-4  Corporation X’s Organizational Goals

Organizational Goal         Gathered Data (Current Situation)                                                Comments
Increase competitiveness    Corporation Y; Corporation Z                                                     Better products
Reduce costs                Repeating tasks—entering data multiple times; time-consuming tasks               Single data-entry point; reduced cost; easy-to-learn applications; simple data exchange
Improve customer support    Order tracking and technical support are done by individuals                     Introduction of web-based order tracking and web-based tools for customer technical support
Add new customer services   Current services: telephone and fax orders, and telephone and fax confirmation   Secure web-based ordering; secure web-based confirmations
78
Chapter 2: Applying a Methodology to Network Design
Organizational Constraints
When assessing organizational goals, it is important to analyze any organizational constraints that
might affect the network design. Some sample questions the designer might ask to help determine
organizational constraints include the following:
■ What in your current processes works well?
■ What in your current processes does not work well?
■ Which processes are labor-intensive?
■ What are the barriers for implementation in your organization?
■ What are your major concerns with the implementation of a new solution?
■ What financial and timing elements must be considered?
■ What projects already have budget approval?
■ Are other planned technology projects and business initiatives compatible with your current infrastructure and technology solutions?
■ What qualifications does your current staff have? Do you plan to hire more staff? If so, for what roles?
■ Do you have a budget for technical development for your staff?
■ Are there any policies in place that might affect the project?
Typical constraints include the following:
■ Budget: Reduced budgets or limited resources often force network designers to implement an affordable solution rather than the best technical solution. This usually entails some compromises in availability, manageability, performance, and scalability. The budget must include all equipment purchases, software licenses, maintenance agreements, staff training, and so forth. Budget is often the final decision point for design elements, selected equipment, and so on. The designer must know how much money is available to invest in a solid design. It is also useful to know the areas in which the design can be compromised to meet budget requirements.
■ Personnel: The availability of trained personnel within the organization might be a design consideration. Organizations might not have enough personnel, or the personnel they have might not be adequately trained. Familiarity with both the equipment and technologies speeds deployment and reduces cost, and trained technicians must be available to verify that all network elements are working. Therefore, the designer must know the number and availability of operations personnel, their expertise, and possible training requirements. Additional constraints might be imposed if the organization is outsourcing network management. The designer must consider the network’s implementation and maintenance phases, which require adequately trained staff.
■ Policies: Organizations have different policies about protocols, standards, vendors, and applications; to design the network successfully, the designer must understand these policies. For example, the designer should determine customer policies related to single-vendor or multivendor platforms; an end-to-end single-vendor solution might be a benefit because compatibility issues do not constrain the network. As another example, many organizations, such as government agencies (for example, defense departments), often have strict policies preventing implementation of proprietary protocols.
■ Schedule: The organization’s executive management must discuss and approve the project schedule to avoid possible disagreements about deadlines. For example, the introduction of new network applications often drives the new network design; the implementation time frames for new applications are often tightly connected and therefore influence the available time for network design.
Table 2-5 shows organizational constraints and accompanying data that has been collected for a
sample company, Corporation X.
Table 2-5  Corporation X’s Organizational Constraints

Organizational Constraint | Gathered Data (Current Situation) | Comments
Budget | $650,000 | Budget can be extended by a maximum of $78,000
Personnel | Two engineers with college degrees and Cisco Certified Network Associate (CCNA) certifications for network maintenance; one has Cisco Certified Network Professional (CCNP) certification. Three engineers for various operating systems and applications maintenance | Plans to hire additional engineers for network maintenance; need technical development plan for staff
Policy | Prefers a single vendor and standardized protocols | Current equipment is Cisco; prefers to stay with Cisco
Schedule | Plans to introduce various new applications in the next nine months | New applications that will be introduced shortly are videoconferencing, groupware, and IP telephony
Technical Goals
The technical goals of the project must also be determined before the design starts. Some sample
questions the designer might ask to help determine technical goals include the following:
■ What are your technology priorities?
■ How does your technology budgeting process work?
■ What infrastructure issues exist or will exist related to your applications rollouts?
■ What skill sets does your technical staff need to acquire?
■ Does your current network have any performance issues?
■ Which portions of your network are considered mission-critical?
■ Do you anticipate significant growth in the number of network users over the next few years?
■ How is your network managed now?
The following list describes some common technical goals:
■ Improve network performance: An increase in the number of users and the introduction of new applications might degrade network performance, especially responsiveness and throughput. The first goal of network redesign is usually to increase performance—for example, by upgrading the speed of links or by partitioning the network into smaller segments.
NOTE Performance is a general term that includes responsiveness, throughput, and resource
utilization. The users of networked applications and their managers are usually most sensitive
to responsiveness issues; speed is of the essence. The network system’s managers often look to
throughput as a measure of effectiveness in meeting the organization’s needs. Executives who
have capital budget responsibility tend to evaluate resource utilization as a measure of economic
efficiency. It is important to consider the audience when presenting performance information.
■ Improve security and reliability of mission-critical applications and data: Increased threats from both inside and outside the enterprise network require the most up-to-date security rules and technologies to avoid disruptions of network operation.
■ Decrease expected downtime and related expenses: When a network failure occurs, downtime must be minimal, and the network must respond quickly to minimize related costs.
■ Modernize outdated technologies: The emergence of new network technologies and applications demands regular updates to and replacement of outdated equipment and technologies.
■ Improve scalability of the network: Networks must be designed to provide for upgrades and future growth.
■ Simplify network management: Simplify network management functions so that they are easy to use and easily understood.
Using a table helps the designer identify technical goals. Different goals have different levels of
importance, which the customer should determine. One way of expressing the level of importance
is with percentages: Specific technical goals are rated in importance on a scale from 1 to 100, with
the sum totaling 100; this scale provides direction for the designer when choosing equipment,
protocols, features, and so forth.
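This weighting scheme can be checked mechanically. The following Python sketch (the helper name and the tie-breaking behavior are illustrative, not from the text) validates that a set of ratings sums to 100 and surfaces the highest-weighted goals; the ratings themselves are Corporation X’s values from Table 2-6.

```python
# Sketch only: validate_importance is a hypothetical helper, not a DESGN tool.
def validate_importance(ratings):
    """Return the total weight; raise if the ratings do not sum to 100."""
    total = sum(ratings.values())
    if total != 100:
        raise ValueError(f"importance ratings sum to {total}, expected 100")
    return total

# Corporation X's ratings from Table 2-6.
ratings = {
    "Performance": 20,
    "Security": 15,
    "Availability": 25,
    "Adaptability": 10,
    "Scalability": 25,
    "Manageability": 5,
}

validate_importance(ratings)
# The top-weighted goals point at design priorities such as redundancy.
top_goals = sorted(ratings, key=ratings.get, reverse=True)[:2]
```

A designer could rerun such a check whenever the customer revises the ratings, catching totals that drift away from 100.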
Table 2-6 depicts the desired technical goals that were gathered for the sample company,
Corporation X, along with their importance rating and additional comments. In this example,
the designer sees that the customer places great importance on availability, scalability, and
performance; this suggests that the network design should include redundant equipment,
redundant paths, use of high-speed links, and so forth.
Table 2-6  Corporation X’s Technical Goals

Technical Goals | Importance | Comments
Performance | 20 | Important in the central site, less important in branch offices
Security | 15 | The critical data transactions must be secure
Availability | 25 | Should be 99.9%
Adaptability (to new technologies) | 10 |
Scalability | 25 | The network must be scalable
Manageability | 5 |
Total | 100 |
Technical Constraints
Network designers might face various technical constraints during the design process. Some
sample questions the designer might ask to help determine technical constraints include the
following:
■ How do you determine your technology priorities?
■ Do you have a technology refresh process? If so, is that an obstacle, or does it support the proposed project?
■ What urgent technical problems require immediate resolution or mitigation?
■ Do you have a plan for technical development for your staff in specific areas?
■ Do any applications require special network features (protocols and so forth)?
Good network design addresses constraints by identifying possible trade-offs, such as the following:
■ Existing equipment: The network design process is usually progressive; legacy equipment must coexist with new equipment.
■ Bandwidth availability: Insufficient bandwidth in parts of the network where the bandwidth cannot be increased because of technical constraints must be resolved by other means.
■ Application compatibility: If the new network is not being introduced at the same time as new applications, the design must provide compatibility with old applications.
■ Lack of qualified personnel: The designer must consider the need for additional training; otherwise, certain features might have to be dropped. For example, if the network proposal includes the use of IP telephony but the network administrators are not proficient in IP telephony, it might be necessary to propose an alternative solution.
Using a table can facilitate the process of gathering technical constraints. The designer identifies
the technical constraints and notes the current situation and the necessary changes that are required
to mitigate a certain constraint.
Table 2-7 presents sample technical constraints gathered for Corporation X. Under existing
equipment, the designer notes that the coaxial cabling in the LAN’s physical cabling plant still
exists and comments that twisted pair and fiber optics should replace it. The bandwidth availability
indicates that the WAN service provider does not have any other available links; the organization
should consider changing to another service provider. Application compatibility suggests that the
designer should take care when choosing equipment.
Table 2-7  Technical Constraints for Corporation X

Technical Constraints | Gathered Data (Current Situation) | Comments
Existing equipment | Coaxial cable | The cabling must be replaced with twisted pair to the desktop, and fiber optics for uplinks and in the core
Bandwidth availability | 64-kbps WAN link | Upgrade bandwidth; change to another service provider because the current one does not have any other links to offer
Application compatibility | IP version 6 (IPv6)-based applications | New network equipment must support IPv6
Characterizing the Existing Network and Sites
The second step of the design methodology is characterizing the existing network and sites.
Information collected and documented in this step is important, because the design might depend
on the existing network’s hardware, software, and link capacity.
In many cases, a network already exists and the new design relies on restructuring and upgrading
the existing network and sites. Even when a network does not exist, the sites that will be networked
still should be examined. The following sections present insights into the process of examining an
existing network and sites and describe the tools used to gather the data, assess the network, and
analyze the network. A checklist to assess the network’s health is presented. Guidelines for
creating a summary report are introduced. The discussion concludes with the draft design
document and estimates of the time required to complete the entire characterization process.
The first step in characterizing the existing network and sites is to gather as much information
about them as possible, typically based on the following input:
Step 1  Customer input: Review existing documentation about the network, and use verbal input from the customer to obtain a first impression about the network. Although this step is mandatory, it is usually insufficient, and some results might be incorrect.
Step 2  Network audit: Perform a network audit, also called an assessment, which reveals details of the network and augments the customer’s description.
Step 3  Traffic analysis: If possible, use traffic analysis to provide information about the applications and protocols used and to reveal any shortcomings in the network.
NOTE Although traffic analysis is a good idea in principle, it is often too costly in terms of
time and effort to do in practice.
The following sections describe each of these steps and the tools used.
Customer Input
Customer input includes all pertinent network and site documentation. Some items the designer
could request, depending on the scope of the project, include the following:
■ Site contact information (especially needed if remote deployments are planned)
■ Existing network infrastructure (from physical diagrams and documents, and site surveys as needed), including the following:
— Locations and types of servers, including a list of network applications supported
— Locations and types of network devices
— Cabling that is currently in place, including network interface connection tables and
worksheets
— Wiring closet locations
— Environmental controls, including heating, ventilation, and air conditioning
requirements, and filtration
— Locations of telephone service demarcation points
— WAN speeds and locations of the WAN connection feeds
— Locations of power receptacles, and availability of additional receptacles and power
sources
■ Existing network infrastructure from logical topology diagrams, including the addressing scheme and routing protocols in use, and the infrastructure services supported, such as voice, storage, and wireless services
■ Information about the expected network functionality
This documentation should allow the designer to determine information about the planned and
existing network and sites, including the following:
■ Network topology: Includes devices, physical and logical links, external connections, bandwidth of connections, frame types (data link encapsulations), IP addressing, routing protocols, and so forth.
■ Network services: Includes security, QoS, high availability, voice, storage, wireless, and so forth.
■ Network applications: Examples include unified messaging and video delivery.
All this information should be included in the design document; it also forms the basis for
breaking the network into modules.
Sample Site Contact Information
Site contact information is especially important for projects involving remote deployments when
equipment delivery and installations must be coordinated. The customer might provide all the
necessary site contact information, or the designer might have to conduct a physical site audit to
obtain the necessary information.
While at the site, the designer can also obtain other information; for example, power availability
can be determined by examining the existing wiring closets. Digital pictures taken by a remote site
contact can help in getting a quick sense of the remote environment. Table 2-8 illustrates a sample
site contact form.
Table 2-8  Sample Site Contact Form

1. What is the site location/name?
2. What is the site address?
3. What is the shipping address?
4. Who is the site contact? (Name, title, telephone number, cell phone number, fax number, pager number, e-mail address, out-of-hours contact number)
5. Is this site owned and maintained by the customer? (Yes/No)
6. Is this a staffed site? (Yes/No)
7. What are the hours of operation?
8. What are the building and room access procedures?
9. Are there any special security/safety procedures? (Yes/No) What are they?
10. Are there any union/labor requirements or procedures? (Yes/No) What are they?
11. What are the locations of the equipment cabinets and racks? (Floor, room, position)
Sample High-Level Network Diagram
Figure 2-8 shows the high-level topology of a sample network, provided by a customer.
Figure 2-8  Sample Customer-Provided High-Level Network Diagram

[Figure: high-level topology showing management workstations, a server farm, and external servers, with connections to the Internet, the PSTN/ISDN, and a WAN.]
With only this diagram, many questions remain about the network and the expected network
functionality, including the following:
■ What is the IP addressing scheme?
■ What level of redundancy or high availability currently exists in the network?
■ What level of redundancy or high availability is required in the new network?
■ What are the details of the security design?
■ What types of links are in the network?
■ What are the link speeds?
■ What are the planned Layer 2 and Layer 3 topologies?
■ How is connectivity provided to remote sites?
■ What network infrastructure services are in use, such as voice and video, and what is planned?
■ Are existing wireless devices in place, or are any wireless deployments planned?
■ What routing protocols are in use?
■ Are there any server farm or remote data center connectivity requirements?
■ What network management tools are in place?
It is important to get as much information as possible about the existing situation before
commencing design.
Auditing or Assessing the Existing Network
A network audit or assessment is the second step in acquiring information about an existing
network. The auditing process starts by consolidating existing information the customer provides.
Up-to-date information can be gathered from the existing management software used by the
customer. If the customer has insufficient tools, the designer can choose to temporarily introduce
additional software tools; if they prove useful, these tools can be used in the network permanently
(during the Operate and Optimize phases). An audit provides details such as the following:
■ A list of network devices
■ Hardware specifications and versions, and software versions of network devices
■ Configurations of network devices
■ Output of various auditing tools to verify and augment the existing documentation
■ Link, CPU, and memory utilization of network devices
■ A list of unused ports, modules, and slots in network devices, to be used to understand whether the network is expandable
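As a sketch of how such audit results might be organized for analysis, the record type below captures a few of the fields listed above; the field names, sample devices, and the 70 percent CPU threshold are illustrative assumptions, not output from any Cisco tool.

```python
from dataclasses import dataclass

# Sketch only: field names and the 70% CPU threshold are illustrative.
@dataclass
class AuditRecord:
    hostname: str
    platform: str
    os_version: str
    cpu_utilization_pct: float  # average CPU utilization from the audit
    unused_ports: int           # spare ports, modules, or slot capacity

    def is_expandable(self) -> bool:
        # A device with spare ports and CPU headroom can absorb growth.
        return self.unused_ports > 0 and self.cpu_utilization_pct < 70.0

inventory = [
    AuditRecord("edge-rtr-1", "Cisco 7206", "12.2(8)T", 45.0, 2),
    AuditRecord("core-sw-1", "Catalyst 5509", "6.3.5cv", 80.0, 4),
]
expandable = [r.hostname for r in inventory if r.is_expandable()]
```

Keeping the records structured makes it straightforward to answer expandability questions across the whole inventory rather than device by device.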
Figure 2-9 illustrates three different sources of information that can be used in the auditing
process: existing documentation, existing tools, and new tools.
Figure 2-9  Network Audit Information Sources

[Figure: the network designer draws on three sources: (1) existing documentation from the customer, (2) existing tools—the customer’s management workstation and existing management software—and (3) new tools—additional auditing tools introduced optionally.]
The auditing process might require minor (temporary) network changes. Automated auditing
should be used in large networks for which a manual approach would take too much time.
However, the audit process should balance detail against effort, producing as much information as is needed or practical. For example, it should not require that a large set of CPU-heavy auditing tools be purchased and installed in the customer network simply to collect configurations of network devices.
The auditing process is typically performed from a central location, such as a location in a secure
environment that has access to all network devices.
Figure 2-10 illustrates sample information that a manual or automated auditing process collects from
the network management workstation. The auditing process should collect all information relevant
to the redesign. The same process should be used for all network devices affected by the design.
Figure 2-10  Sample Information Collected During a Network Audit

[Figure: from the management workstation, auditing tools collect network information such as router type, CPU type, average CPU utilization, memory size and utilization, flash size, Cisco IOS version, configuration, routing tables, interface types, speeds, and average link utilizations, and identify unused interfaces, modules, or slots.]
Tools for Assessing the Network
A small network can be assessed without special tools. Monitoring commands can be used to
collect relevant information on a small number of network devices. The approach can be semiautomated by introducing scripting tools to execute the monitoring commands automatically.
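One minimal way to sketch such scripting is shown below, with the device transport left as a pluggable callable; the command list and the fake executor are illustrative assumptions, since a real script would log in to each device (for example, over SSH) and capture the output.

```python
# Sketch only: COMMANDS and the transport interface are assumptions; a real
# script would connect to each device and run the commands remotely.
COMMANDS = ("show version", "show processes cpu", "show processes memory")

def audit_device(hostname, run_command):
    """Run each monitoring command on one device and keep its output."""
    return {cmd: run_command(hostname, cmd) for cmd in COMMANDS}

def fake_run_command(hostname, cmd):
    # Stand-in transport used only to illustrate the flow.
    return f"{hostname}# {cmd}\n<output>"

results = audit_device("edge-rtr-1", fake_run_command)
```

Separating the command list from the transport lets the same script serve whichever access method a given site supports.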
In large networks, a manual auditing approach is too time-consuming and less reliable. The
following are some special tools that can be used to collect the relevant information from the
network devices:
■ CiscoWorks to map a network and collect various types of information (such as network topology, hardware and software versions, configurations, and so on).
■ Third-party tools such as WhatsUp Professional from Ipswitch, SNMPc from Castle Rock Computing, open-source Cacti (which is a successor to the popular Multi Router Traffic Grapher), NetMRI from Netcordia, and NetVoyant from NetQoS.
■ Other vendors’ tools to collect relevant information from equipment manufactured by those vendors.
■ Other tools can help characterize the existing environment. For example, instead of a full wireless site survey, it can be helpful to conduct a brief RF sample of the environment using enterprise-level tools. Such tools include AirMagnet Survey PRO (to perform an RF site survey), Cognio Spectrum Expert (a spectrum analysis tool), and laptop applications such as AiroPeek from WildPackets (network analyzer software that supports decoding of wireless data packets) and the Cisco Aironet Site Survey Utility. Wireless networks are described in detail in Chapter 9, “Wireless Network Design Considerations.”
Assessment Tool Information
Information on the aforementioned tools can be found at the following locations:
■ CiscoWorks: http://www.cisco.com/
■ WhatsUp Professional: http://www.ipswitch.com/
■ SNMPc: http://www.castlerock.com/
■ Cacti: http://www.cacti.net/
■ NetMRI: http://www.netcordia.com/
■ NetVoyant: http://www.netqos.com/
■ AirMagnet Survey PRO: http://www.airmagnet.com/
■ Spectrum Expert: http://www.cognio.com/
■ AiroPeek: http://www.wildpackets.com/
■ Cisco Aironet Site Survey Utility: http://www.cisco.com/
Manual Information Collection Examples
The auditing process can be performed manually on relatively small networks using various
monitoring commands. Figure 2-11 illustrates three different types of network devices,
information to be collected, and commands that can be used to obtain the information:
■ On Cisco routers that run Cisco IOS software, the show tech-support command usually displays all information about the router. show processes cpu can be used to determine CPU use, and show processes memory can be used to determine memory usage.
■ On Cisco switches that run Cisco Catalyst Operating System (CatOS) software, the most useful commands vary, depending on the version of the software. Useful commands might include show version, show running-config, or show tech-support, if available.
■ On Cisco Secure PIX Security Appliances, the show version and write terminal (to see the configuration) commands are useful.
Figure 2-11  Collecting Audit Information on Cisco Devices

[Figure: three device panels showing sample data collected with the commands listed above.]

Cisco 7206 router (Cisco IOS 12.2(8)T, 64 MB memory with 5 MB free, 16 MB flash):
  Router#show tech-support
  ------------------ show version ------------------
  Cisco Internetwork Operating System Software
  IOS (tm) 7200 Software (C7200-JS-M), Version
  TAC Support: http://www.cisco.com/tac
  Copyright (c) 1986-2002 by Cisco Systems, Inc.
  <snip>

Catalyst 5509 switch (CatOS 6.3.5cv, 16 MB memory with 6 MB free, 16 MB flash):
  Switch#show version
  <snip>
  Switch#show running-config
  <snip>
  Switch#show tech-support
  <snip>

PIX 535 (PIX OS 6.1(2), 64 MB memory, 16 MB flash):
  PIX#show version
  <snip>
  PIX#write terminal
  <snip>
Many other commands are available on Cisco devices to determine relevant information.
NOTE If older equipment or older versions of the Cisco IOS are being used, the capability of
the network to support new services might be affected.
Example 2-1 illustrates sample output from the show processes cpu command on a Cisco router.
Example 2-1  show processes cpu Command Output

Router#show processes cpu
CPU utilization for five seconds: 24%/20%; one minute: 45%; five minutes: 40%
 PID Runtime(ms)  Invoked  uSecs   5Sec   1Min   5Min TTY Process
   1        2464   468381      5  0.00%  0.00%  0.00%   0 Load Meter
   2          44       44   1000  0.16%  0.04%  0.01%  66 Virtual Exec
   3           0        2      0  0.00%  0.00%  0.00%   0 IpSecMibTopN
   4     6326689   513354  12324  0.00%  0.25%  0.27%   0 Check heaps
   5           0        1      0  0.00%  0.00%  0.00%   0 Chunk Manager
   6          60       58   1034  0.00%  0.00%  0.00%   0 Pool Manager
   7           0        2      0  0.00%  0.00%  0.00%   0 Timers
   8           0       12      0  0.00%  0.00%  0.00%   0 Serial Backgroun
   9        2139   468342      4  0.00%  0.00%  0.00%   0 ALARM_TRIGGER_SC
  10        3851    78081     49  0.00%  0.00%  0.00%   0 Environmental mo
  11        4768    44092    108  0.00%  0.00%  0.00%   0 ARP Input
  12        4408    19865    221  0.00%  0.00%  0.00%   0 DDR Timers
  13           4        2   2000  0.00%  0.00%  0.00%   0 Dialer event
  14          16        2   8000  0.00%  0.00%  0.00%   0 Entity MIB API
  15           0        1      0  0.00%  0.00%  0.00%   0 SERIAL A’detect
  16           0        1      0  0.00%  0.00%  0.00%   0 Critical Bkgnd
  17       57284   377088    151  0.00%  0.00%  0.00%   0 Net Background
  18       15916    59331    268  0.00%  0.00%  0.00%   0 Logger
<more>
The output in Example 2-1 displays information about the network device CPU utilization, which
is important for describing the network’s health. Table 2-9 describes the show processes cpu
command output’s fields and descriptions.
Table 2-9  show processes cpu Command Output Description

Field | Description
CPU utilization | CPU utilization for the last: five seconds—the first number in the ratio indicates the total CPU utilization, and the second number indicates the percentage of CPU time spent at the interrupt level; one minute—total CPU utilization for the last minute; five minutes—total CPU utilization for the last 5 minutes
PID | The process ID
Runtime (ms) | CPU time, expressed in milliseconds, that the process has used
Invoked | The number of times the process has been invoked
uSecs | Microseconds of CPU time for each process invocation
5Sec | CPU utilization by task in the last 5 seconds
1Min | CPU utilization by task in the last minute
5Min | CPU utilization by task in the last 5 minutes
TTY | Terminal that controls the process
Process | Name of the process
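Because the first line of this output has fixed wording, a script can lift the utilization figures from it directly. The following sketch assumes exactly the phrasing shown in Example 2-1; the function name and dictionary keys are illustrative.

```python
import re

# Sketch only: assumes the exact first-line wording shown in Example 2-1.
CPU_LINE = re.compile(
    r"CPU utilization for five seconds: (\d+)%/(\d+)%; "
    r"one minute: (\d+)%; five minutes: (\d+)%"
)

def parse_cpu_utilization(line):
    match = CPU_LINE.search(line)
    if match is None:
        raise ValueError("unrecognized CPU utilization line")
    total_5s, interrupt_5s, one_min, five_min = map(int, match.groups())
    return {
        "total_5s": total_5s,          # total CPU utilization, last 5 seconds
        "interrupt_5s": interrupt_5s,  # share spent at the interrupt level
        "one_min": one_min,
        "five_min": five_min,
    }

sample = "CPU utilization for five seconds: 24%/20%; one minute: 45%; five minutes: 40%"
stats = parse_cpu_utilization(sample)
```

Collected across many devices, such parsed values feed directly into the network health assessment described earlier.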
Example 2-2 illustrates sample output from the show processes memory command on a Cisco
router.
Example 2-2  show processes memory Command Output

Router#show processes memory
Total: 26859400, Used: 8974380, Free: 17885020
 PID TTY  Allocated      Freed  Holding  Getbufs  Retbufs Process
   0   0      88464       1848  6169940        0        0 *Init*
   0   0        428    1987364      428        0        0 *Sched*
   0   0  116119836  105508736   487908   373944    55296 *Dead*
   1   0        284        284     3868        0        0 Load Meter
   2  66       5340       1080    17128        0        0 Virtual Exec
   3   0        668        284     7252        0        0 IpSecMibTopN
   4   0          0          0     6868        0        0 Check heaps
   5   0         96          0     6964        0        0 Chunk Manager
   6   0      17420     231276     6964     5388   254912 Pool Manager
   7   0        284        284     6868        0        0 Timers
   8   0        284        284     6868        0        0 Serial Background
   9   0          0          0     6868        0        0 ALARM_TRIGGER_SC
  10   0        284        284     6868        0        0 Environmental mo
  11   0        316    3799360     7184        0        0 ARP Input
  12   0    2547784    1033916     7372     6804        0 DDR Timers
  13   0        284        284    12868        0        0 Dialer event
  14   0      10744       2284    15328        0        0 Entity MIB API
  15   0         96          0     6964        0        0 SERIAL A’detect
  16   0         96          0     6964        0        0 Critical Bkgnd
  17   0      23412       2632    15404        0        0 Net Background
<more>
Table 2-10 describes the show processes memory command output’s fields and descriptions.
Table 2-10  show processes memory Command Output Description

Field | Description
Total | Total amount of held memory
Used | Total amount of used memory
Free | Total amount of free memory
PID | Process ID
TTY | Terminal that controls the process
Allocated | Bytes of memory allocated by the process
Freed | Bytes of memory freed by the process, regardless of who originally allocated it
Holding | Amount of memory currently allocated to the process
Getbufs | Number of times the process has requested a packet buffer
Retbufs | Number of times the process has relinquished a packet buffer
Process | Process name (*Init*: system initialization; *Sched*: the scheduler; *Dead*: processes that are now dead as a group)
Total (not shown in Example 2-2) | Total amount of memory held by all processes
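The totals line of this output lends itself to a quick health calculation. Below is a small sketch, assuming the exact "Total/Used/Free" wording from Example 2-2; the helper name is illustrative.

```python
import re

# Sketch only: assumes the "Total: ..., Used: ..., Free: ..." wording
# shown in Example 2-2.
TOTALS_LINE = re.compile(r"Total: (\d+), Used: (\d+), Free: (\d+)")

def free_memory_percent(line):
    """Return the percentage of device memory that is free, to one decimal."""
    match = TOTALS_LINE.search(line)
    if match is None:
        raise ValueError("unrecognized memory totals line")
    total, used, free = map(int, match.groups())
    return round(100.0 * free / total, 1)

pct = free_memory_percent("Total: 26859400, Used: 8974380, Free: 17885020")
```

For the router in Example 2-2, roughly two-thirds of memory is free, which is the kind of headroom figure an audit summary would record.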
Automatic Information Collection Examples
Figure 2-12 is a screen shot from the open-source Cacti application showing a list of devices found
in the network.
Figure 2-12  Cacti Device List Example
Figure 2-13 is a screen shot from the NetMRI appliance from Netcordia. The inventory results are
expanded to show the Cisco Cat4506 devices, including IP addresses, device names, and operating
system versions.
Figure 2-13  NetMRI Inventory Example
Analyzing Network Traffic and Applications
Traffic analysis is the third step in characterizing a network. Traffic analysis verifies the set of
applications and protocols used in the network and determines the applications’ traffic patterns. It
might reveal any additional applications or protocols running on the network. Each discovered
application and protocol should be described in the following terms:
■ Importance to the customer
■ QoS-related requirements
■ Security-related requirements
■ Scope (in other words, the network modules in which the application or protocol is used)
Use the following interactive approach, illustrated in Figure 2-14, to create a list of applications
and protocols used in the network:
Step 1  Use customer input to list expected applications.
Step 2  Use traffic analyzers to verify the customer’s list of applications.
Step 3  Present the customer with the new list of applications, and discuss discrepancies.
Step 4  Generate the final list of applications and their requirements (importance, QoS, security), as defined by the customer.
Figure 2-14  Use an Interactive Traffic Analysis Process

[Figure: the customer (1) supplies a list of applications, and (2) a traffic analyzer produces traffic profiles; the network designer (3) builds an application matrix from both, and (4) the customer assigns importance, QoS, and security requirements.]
For example, the following information was collected about a fictitious application:
■ Name: Application #8
■ Description: Accounting software
■ Protocol: Transmission Control Protocol (TCP) port 5151
■ Servers: 2
■ Clients: 50
■ Scope: Campus
■ Importance: High
■ Avg. Rate: 50 kbps with 10-second bursts to 1 megabit per second (Mbps)
Assume that a customer requirement concerns QoS on a WAN connection with limited bandwidth.
In this case, the information collected is relevant because it describes the following:
■ The application (TCP port 5151), which is required for performing classification
■ The importance of the application; this information is useful for evaluating how much bandwidth should be allocated to the application
■ The current bandwidth consumption according to the present QoS implementation
Note, however, that this information might not be relevant should the customer requirement
instead concern a secure and resilient Internet connection. In that case, it might be necessary to
gather additional information.
Tools for Analyzing Traffic
Tools used for traffic analysis range from manual identification of applications using Cisco IOS software commands to dedicated software- or hardware-based analyzers that capture live packets or use the Simple Network Management Protocol (SNMP) to gather interface information. Analysis tools include the following:
■ Cisco IOS Network-Based Application Recognition (NBAR): NBAR can be used to identify the presence of well-known applications and protocols in the network.
■ Cisco IOS NetFlow technology: NetFlow is an integral part of Cisco IOS software that collects and measures data as it enters specific router or switch interfaces. NetFlow allows the identification of lesser-known applications because it gathers information about every flow. This information can be collected manually using the Cisco IOS software show ip cache flow command. Alternatively, the Cisco Network Service (CNS) NetFlow Collection Engine (NFC) allows automatic information gathering of each flow in the network segment.
■ Third-party hardware- or software-based products: Can be used to analyze traffic in different subnets of the network. Examples include the following:
— The open-source Cacti (http://www.cacti.net/)
— Network General Sniffer (http://www.sniffer.com/)
— WildPackets EtherPeek and AiroPeek (http://www.wildpackets.com/)
— SolarWinds Orion (http://www.solarwindssoftware.com/)
— Wireshark (http://www.wireshark.org/)
■ Remote monitoring probes can also be used to support traffic analysis.
The following sections include examples of some of these tools.
NBAR
Cisco IOS NBAR is a classification engine that recognizes a wide variety of applications,
including web-based and other difficult-to-classify protocols, which utilize dynamic TCP and
User Datagram Protocol (UDP) port assignments. Other QoS tools within the network can be
configured to invoke services for a specific application that is recognized and classified by NBAR,
ensuring that network resources are used efficiently.
QoS
The purpose of QoS is to provide appropriate network resources (such as bandwidth, delay, jitter,
and packet loss) to applications. QoS maximizes the return on network infrastructure investments
by ensuring that mission-critical applications receive the required performance and that
noncritical applications do not hamper the performance of critical applications. QoS is deployed
by defining application classes or categories. These classes are defined using various classification
techniques, such as NBAR, that are available in Cisco IOS software. After these classes are defined
and configured, the desired QoS features—such as marking, congestion management, congestion
avoidance, link efficiency mechanisms, or policing and shaping—can be applied to the classified
traffic to provide the appropriate network resources among the defined classes. Therefore,
classification is an important first step in configuring QoS in a network infrastructure.
NOTE Further details about NBAR can be found at http://www.cisco.com/en/US/products/ps6616/products_ios_protocol_group_home.html.
Example 2-3 is sample output of the Cisco IOS NBAR show ip nbar protocol-discovery
command. This command shows the statistics gathered by the NBAR Protocol Discovery feature,
which provides an easy way to discover application protocols that are transiting an interface. The
Protocol Discovery feature discovers any protocol traffic supported by NBAR and can be used to
monitor both input and output traffic. This command displays statistics for all interfaces on which
the Protocol Discovery feature is currently enabled. The default output of this command includes
the average 30-second bit rate (in bits per second), input byte count, input packet count, and
protocol name.
Example 2-3   show ip nbar protocol-discovery Command Output

Router#show ip nbar protocol-discovery

 FastEthernet0/0.2

 Protocol                 Input                    Output
                          Packet Count             Packet Count
                          Byte Count               Byte Count
                          30 second bit rate (bps) 30 second bit rate (bps)
 ------------------------ ------------------------ ------------------------
 http                     46384                    79364
                          5073520                  64042528
                          305                      1655
 secure-http              2762                     2886
                          429195                   1486350
                          0                        0
 snmp                     143                      10676
                          17573                    1679322
                          0                        0
 telnet                   1272                     12147
                          122284                   988834
                          0                        0
 ntp                      5383                     0
                          624428                   0
                          0                        0
 dns                      305                      235
                          31573                    55690
                          50                       120
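Captured output in the format of Example 2-3 can be post-processed offline when building the application list. The following Python sketch is illustrative only (it is not a Cisco tool, and it assumes the three-line-per-protocol layout shown above); it extracts the per-protocol input and output counters:

```python
# Two protocol blocks in the show ip nbar protocol-discovery layout:
# first line holds the protocol name and packet counts; the next two
# indented lines hold byte counts and 30-second bit rates.
sample = """\
http                     46384                    79364
                         5073520                  64042528
                         305                      1655
dns                      305                      235
                         31573                    55690
                         50                       120
"""

def parse_protocol_discovery(text):
    """Parse per-protocol counters from protocol-discovery-style output."""
    stats, current, rows = {}, None, []
    labels = [("in_pkts", "out_pkts"), ("in_bytes", "out_bytes"),
              ("in_bps", "out_bps")]
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue
        if not line[0].isspace():
            # A protocol row starts a new three-line block; header or
            # separator rows (non-numeric second field) are skipped.
            if len(fields) == 3 and fields[1].isdigit():
                current, rows = fields[0], [fields[1:]]
            else:
                current, rows = None, []
            continue
        if current is None:
            continue
        rows.append(fields)
        if len(rows) == 3:
            stats[current] = {}
            for (kin, kout), (vin, vout) in zip(labels, rows):
                stats[current][kin], stats[current][kout] = int(vin), int(vout)
            current = None
    return stats

counters = parse_protocol_discovery(sample)
print(counters["http"]["out_bytes"])  # 64042528
```

Totals such as these can then be fed directly into the application matrix described earlier in this section.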
NetFlow
NetFlow switching provides network administrators with access to detailed recording information
from their data networks. NetFlow also provides a highly efficient mechanism with which to
process security access lists without paying as much of a performance penalty as other available
switching methods incur.
Cisco Network Service NetFlow Collection technology provides the base for applications,
including network traffic accounting, usage-based network billing, network planning, network
monitoring, outbound marketing, and data-mining capabilities for both service provider and
enterprise customers. Cisco provides a set of NFC applications that collect NetFlow export data,
perform data volume reduction and post-processing, and give end-user applications easy access to
NFC data. NFC also provides measurement-based QoS by capturing the traffic classification or
precedence associated with each flow, enabling differentiated charging based on QoS. NFC is
supported on HPUX, Solaris, Linux, and the Cisco CNS Programmable Network Family product.
Chapter 3, “Structuring and Modularizing the Network,” discusses NetFlow technology in more
detail.
Example 2-4 provides sample output from the Cisco IOS show ip cache flow command,
illustrating statistics gathered by the NetFlow switching feature. By analyzing NetFlow data, a
designer can identify the cause of congestion, determine the class of service for each user and
application, and identify the traffic’s source and destination network. NetFlow allows extremely
granular and accurate traffic measurements and high-level aggregated traffic collection.
Example 2-4   show ip cache flow Command Output

Router#show ip cache flow
IP packet size distribution (12718M total packets):
   1-32   64   96  128  160  192  224  256  288  320  352  384  416  448  480
   .000 .554 .042 .017 .015 .009 .009 .009 .013 .030 .006 .007 .005 .004 .004

    512  544  576 1024 1536 2048 2560 3072 3584 4096 4608
   .003 .007 .139 .019 .098 .000 .000 .000 .000 .000 .000

IP Flow Switching Cache, 4456448 bytes
  65509 active, 27 inactive, 820628747 added
  955454490 ager polls, 0 flow alloc failures
  Exporting flows to 1.1.15.1 (2057)
  820563238 flows exported in 34485239 udp datagrams, 0 failed
  last clearing of statistics 00:00:03
Protocol         Total    Flows   Packets Bytes  Packets Active(Sec) Idle(Sec)
--------         Flows     /Sec     /Flow  /Pkt     /Sec     /Flow     /Flow
TCP-Telnet     2656855      4.3        86    78    372.3      49.6      27.6
TCP-FTP        5900082      9.5         9    71     86.8      11.4      33.1
TCP-FTPD       3200453      5.1       193   461   1006.3      45.8      33.4
TCP-WWW      546778274    887.3        12   325  11170.8       8.0      32.3
TCP-SMTP      25536863     41.4        21   283    876.5      10.9      31.3
TCP-BGP          24520      0.0        28   216      1.1      26.2      39.0
TCP-other     49148540     79.7        47   338   3752.6      30.7      32.2
UDP-DNS      117240379    190.2         3   112    570.8       7.5      34.7
UDP-NTP        9378269     15.2         1    76     16.2       2.2      38.7
UDP-TFTP          8077      0.0         3    62      0.0       9.7      33.2
UDP-Frag         51161      0.0        14   322      1.2      11.0      39.4
ICMP          14837957     24.0         5   224    125.8      12.1      34.3
IP-other         77406      0.1        47   259      5.9      52.4      27.0
...
Total:       820563238   1331.7        15   304  20633.0       9.8      33.0
Table 2-11 provides the show ip cache flow command output’s fields and descriptions.
Table 2-11   show ip cache flow Command Output Description

Field                        Description
bytes                        Number of bytes of memory used by the NetFlow cache
active                       Number of active flows in the NetFlow cache at the time this command was executed
inactive                     Number of flow buffers allocated in the NetFlow cache
added                        Number of flows created since the start of the summary period
ager polls                   Number of times the NetFlow code looked at the cache to expire entries
flow alloc failures          Number of times the NetFlow code tried to allocate a flow but could not
Exporting flows              IP address and UDP port number of the workstation to which flows are exported
flows exported               Total number of flows exported and the total number of UDP datagrams sent
failed                       Number of flows that the router could not export
last clearing of statistics  Standard time output (hh:mm:ss) since the clear ip flow stats command was executed

The fields describing the activity of each protocol are as follows:

Protocol                     IP protocol and the well-known port number (as documented at http://www.iana.org/)
Total Flows                  Number of flows for this protocol since the last time statistics were cleared
Flows/Sec                    Average number of flows seen for this protocol, per second
Packets/Flow                 Average number of packets in the flows seen for this protocol
Bytes/Pkt                    Average number of bytes in the packets seen for this protocol
Packets/Sec                  Average number of packets for this protocol, per second
Active(Sec)/Flow             Sum of all the seconds from the first packet to the last packet of an expired flow
Idle(Sec)/Flow               Sum of all the seconds from the last packet seen in each non-expired flow
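The per-protocol fields in Table 2-11 can be combined to estimate each protocol's offered load: average bits per second is approximately Packets/Sec multiplied by Bytes/Pkt multiplied by 8. The following Python sketch applies this to a few rows from Example 2-4 (the helper function and dictionary layout are illustrative, not part of any NetFlow tool):

```python
# Packets/Sec and Bytes/Pkt values taken from the Example 2-4 output.
protocols = {
    "TCP-WWW":    {"pkts_per_sec": 11170.8, "bytes_per_pkt": 325},
    "UDP-DNS":    {"pkts_per_sec": 570.8,   "bytes_per_pkt": 112},
    "TCP-Telnet": {"pkts_per_sec": 372.3,   "bytes_per_pkt": 78},
}

def avg_bps(p):
    """Approximate average bit rate: packets/sec * bytes/pkt * 8 bits."""
    return p["pkts_per_sec"] * p["bytes_per_pkt"] * 8

# Rank protocols by estimated bandwidth consumption.
for name, p in sorted(protocols.items(),
                      key=lambda kv: avg_bps(kv[1]), reverse=True):
    print(f"{name}: {avg_bps(p) / 1e6:.2f} Mbps")
```

In this sample, TCP-WWW dominates at roughly 29 Mbps, which is exactly the kind of finding that feeds the bandwidth-allocation discussion with the customer.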
Examples of Other Network Analysis Tools
Figure 2-15 shows sample output from the Cacti tool, illustrating the daily throughput on a link in
Dallas.
Figure 2-15   Cacti Can Display Daily Traffic
Figure 2-16 is a sample utilization table from the SolarWinds Orion tool, illustrating the current
percentage utilization on the top 25 interfaces.
Figure 2-16   SolarWinds Orion Tool Can Display Utilization
Network Health Checklist
Based on the data gathered from the customer’s network, the designer should check off any items
that are true in the following Network Health Checklist. On a healthy network, it should be
possible to check off all the items.
Note that these guidelines are only approximations. Exact thresholds depend on the type of traffic,
applications, internetworking devices, topology, and criteria for accepting network performance.
As every good engineer knows, the answer to most network performance questions (and most
questions in general) is “It depends.”
■ No shared Ethernet segments are saturated (no more than 40 percent network utilization).
■ No WAN links are saturated (no more than 70 percent network utilization).
■ The response time is generally less than 100 milliseconds (1 millisecond = 1/1000 of a second; 100 milliseconds = 1/10 of a second).
■ No segments have more than 20 percent broadcasts or multicasts.
■ No segments have more than one cyclic redundancy check error per million bytes of data.
■ On the Ethernet segments, less than 0.1 percent of the packets result in collisions.
■ The Cisco routers are not overutilized (the 5-minute CPU utilization is no more than 75 percent).
■ The number of output queue drops has not exceeded 100 in an hour on any Cisco router.
■ The number of input queue drops has not exceeded 50 in an hour on any Cisco router.
■ The number of buffer misses has not exceeded 25 in an hour on any Cisco router.
■ The number of ignored packets has not exceeded 10 in an hour on any interface on a Cisco router.
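The checklist thresholds can be codified so that audit data is evaluated consistently from site to site. The following Python sketch is illustrative (the metric names and sample audit values are invented placeholders for whatever the audit tools actually report); it flags any checklist item whose measured value exceeds its threshold:

```python
# Thresholds from the Network Health Checklist; the metric names are
# illustrative placeholders for values collected during the network audit.
THRESHOLDS = {
    "shared_ethernet_util_pct": 40,   # shared segments: <= 40% utilization
    "wan_link_util_pct": 70,          # WAN links: <= 70% utilization
    "response_time_ms": 100,          # response time: <= 100 ms
    "broadcast_multicast_pct": 20,    # <= 20% broadcasts/multicasts
    "cpu_util_5min_pct": 75,          # router 5-minute CPU: <= 75%
    "output_queue_drops_per_hr": 100,
    "input_queue_drops_per_hr": 50,
    "buffer_misses_per_hr": 25,
    "ignored_packets_per_hr": 10,
}

def failed_checks(audit):
    """Return the checklist items whose measured value exceeds its threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if audit.get(name, 0) > limit]

# Example audit result for one site: the WAN link is saturated.
audit_data = {"wan_link_util_pct": 82, "cpu_util_5min_pct": 60}
print(failed_checks(audit_data))  # ['wan_link_util_pct']
```

Remember that these thresholds are only approximations, so any flagged item is a prompt for investigation rather than an automatic verdict.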
The designer should also document any concerns about the existing network’s health and its ability
to support growth.
Summary Report
The result of the network characterization process is a summary report that describes the network’s
health. The customer input, network audit, and traffic analysis should provide enough information
to identify possible problems in the existing network. The collected information must be collated
into a concise summary report that identifies the following:
■ Features required in the network
■ Possible drawbacks of and problems in the existing network
■ Actions needed to support the new network's requirements and features
With this information, the designer should be able to propose hardware and software upgrades to
support the customer requirements or to influence a change in requirements. Example 2-5 presents
a sample summary report that identifies different aspects of a network infrastructure.
Example 2-5   Sample Summary Report
The network uses 895 routers:
–655 routers use Cisco IOS software version 12.3 or later
–221 routers use Cisco IOS software version 12.1(15)
–19 routers use Cisco IOS software version 12.0(25)
Requirement: QoS congestion management (queuing) required in the WAN
Identified problem:
–Cisco IOS software version 12.0 does not support new queuing technologies
–15 out of 19 routers with Cisco IOS software 12.0 are in the WAN
–12 out of 15 routers do not have enough memory to upgrade to Cisco IOS
software version 12.3 or later
–5 out of 15 routers do not have enough flash memory to upgrade to Cisco IOS
software version 12.3 or later
Recommended action:
–12 memory upgrades to 64 Megabytes (MB), 5 FLASH upgrades to 16MB
Alternatives:
–Replace hardware as well as software to support queuing
–Find an alternative mechanism for that part of the network
–Find an alternative mechanism and use it instead of queuing
–Evaluate the consequences of not implementing the required feature in that part of the
network
The summary report conclusions should identify the existing infrastructure’s shortcomings. In
Example 2-5, QoS congestion management (queuing) is required. However, the designer has
identified that Cisco IOS software version 12.0 does not support the newest queuing technologies.
In addition, some routers do not have enough RAM and flash memory for an upgrade.
Summary report recommendations relate the existing network to the customer requirements. These recommendations can be used to propose hardware and software upgrades to support the required features, or to modify the customer requirements. In this example, options include evaluating the necessity of the queuing requirement in the WAN.
Creating a Draft Design Document
After thoroughly examining the existing network, the designer creates a draft design document.
Figure 2-17 illustrates a draft design document’s index (not yet fully developed), including the
section that describes the existing network. The “Design Requirements” and “Existing Network
Infrastructure” chapters of the design document are closely related—examining the existing
network can result in changes to the design requirements. Data from both chapters directly
influences the network’s design.
Figure 2-17   Draft Design Document Index

Draft Design Document
1. Design Requirements
2. Existing Network Infrastructure
   2.1. Network Topology
   2.2. Network Audit
   2.3. Applications Used in the Network
   2.4. Network Health Analysis
   2.5. Recommended Changes to the Existing Network
…
Appendix A—List of Existing Network Devices
Appendix B—Configurations of Existing Network Devices
Typical draft documentation for an existing network should include the following items:
■ Logical (Layer 3) topology map or maps. Divide the topology into network modules if the network is too large to fit into one topology map.
■ Physical (Layer 1) topology map or maps.
■ The network audit results, including the types of traffic in the network, the traffic congestion points, the suboptimal traffic paths, and so on.
■ A summary section describing the major network services used in the existing network, such as Open Shortest Path First (OSPF), Border Gateway Protocol (BGP), and Internet Protocol Security (IPsec).
■ A summary description of applications and overlay services used in the network.
■ A summary description of issues that might affect the design or the established design requirements.
■ A list of existing network devices, with the platform and software versions.
■ Configurations of existing network devices, usually attached as either a separate document or an appendix to the design document.
Time Estimates for Performing Network Characterization
This section provides some guidelines to estimate how long it may take to characterize the
network. The time required to characterize a network varies significantly, depending on factors
such as the following:
■ The experience of the network engineer
■ The quality of documentation provided by the customer and the quality of the communication with the customer
■ The size and complexity of the network
■ The efficiency of network management and discovery tools
■ Whether the network devices are carefully managed via SNMP
■ How much information is needed for the scope of the project
Figure 2-18 provides a range of time estimates, in hours, for the characterization of networks of various sizes. These estimates assume a highly skilled (Cisco Certified Internetwork Expert level) network engineer with efficient automated tools for network discovery and performance gathering, and a network in which the devices communicate with SNMP. The network characterization includes strategic evaluation and possible network redesign.
Figure 2-18   Network Characterization Estimates (in Hours)

                                          Small       Medium      Large       Very Large
                                          Network     Network     Network     Network
                                          (1-20       (20-200     (200-800    (>800
                                          Switches/   Switches/   Switches/   Switches/
                                          Routers)    Routers)    Routers)    Routers)
a) Interview management team              4 - 4       8 - 8       12 - 12     16 - 16
b) Interview network team                 4 - 4       6 - 6       8 - 12      24 - 24
c) Review documentation                   4 - 4       6 - 6       8 - 12      16 - 16
d) Set up network discovery tool          4 - 4       6 - 6       8 - 8       16 - 16
e) Resolve SNMP access and
   similar problems                       4 - 4       8 - 16      16 - 48     80 - 160
f) Allow tools to gather data             Variable    Variable    Variable    Variable
g) Analyze captured data                  4 - 8       16 - 16     24 - 24     40 - 40
h) Prepare high-level Layer 3 diagrams    4 - 4       4 - 8       8 - 16      16 - 32
i) Prepare report stating conclusions     16 - 16     32 - 32     48 - 48     80 - 80
j) Incremental effort to prepare
   network diagrams                       Not         Not         Not         Not
                                          Included    Included    Included    Included
Total estimated manpower, in hours        44 - 48     86 - 98     132 - 180   288 - 384
The steps in Figure 2-18 are as follows:

a. Interviewing the management team to gather goals and constraints.
b. Interviewing the network team, and gathering goals, constraints, documentation, and diagrams.
c. Reviewing documentation and diagrams, and clarifying items with the site team.
d. Setting up the network discovery tool, which typically involves using automated discovery or entering a device list or IP address range into the tool; verifying that the tool has found most routers and switches; and starting to collect performance data.
e. Resolving SNMP access and similar problems if devices have not been very carefully managed in the past.
f. Allowing the discovery tool to gather data. The time for this step varies depending on the network, and should account for seasonal or cyclical factors, but generally one week of data is sufficient. The network engineer typically does not need to oversee this process.
g. Analyzing the captured data; minimizing the time required depends on using efficient tools.
h. Preparing high-level (Layer 3) diagrams of the proposed network.
i. Preparing the report of conclusions and recommendations.
NOTE These estimates do not include the time needed to prepare detailed network diagrams
if the customer does not supply them.
Consequently, network characterization typically takes from one to many weeks of effort,
depending on the size and complexity of the network and the other factors mentioned at the
beginning of this section.
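The per-task ranges shown in Figure 2-18 can be totaled to produce the overall estimate for a given network size. The following Python sketch sums the medium-network column (task labels abbreviated; tasks f and j are excluded, matching the "Variable" and "Not Included" entries in the figure):

```python
# (low, high) hour ranges for the medium-network (20-200 devices) column
# of Figure 2-18; tasks f (tool data gathering) and j (detailed diagrams)
# are excluded, as in the figure.
medium_network_tasks = {
    "a) interview management team":    (8, 8),
    "b) interview network team":       (6, 6),
    "c) review documentation":         (6, 6),
    "d) set up discovery tool":        (6, 6),
    "e) resolve SNMP access problems": (8, 16),
    "g) analyze captured data":        (16, 16),
    "h) prepare Layer 3 diagrams":     (4, 8),
    "i) prepare report":               (32, 32),
}

low = sum(lo for lo, hi in medium_network_tasks.values())
high = sum(hi for lo, hi in medium_network_tasks.values())
print(f"{low} - {high} hours")  # 86 - 98 hours
```

The same structure works for the other columns, and substituting a site's actual task estimates gives a project-specific total.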
Using the Top-Down Approach to Network Design
After establishing the organizational requirements and documenting the existing network, the
designer is ready to design a network solution. This section first discusses the top-down approach
to network design. Decision tables and structured design are described, and the section includes a
brief discussion of the types of network design tools that might be used. The section concludes
with a discussion about building a pilot or prototype, and the contents of a detailed design
document.
The Top-Down Approach to Network Design
Designing a large or even medium-sized network can be a complex project. Procedures have been
developed to facilitate the design process by dividing it into smaller, more manageable steps.
Identifying the separate steps or tasks ensures a smooth process and reduces potential risks.
A top-down design allows the designer to “see the big picture” before getting to the details. Top-down design clarifies the design goals and initiates the design from the perspective of the required applications. The top-down approach adapts the physical infrastructure to the needs of the applications. Network devices are chosen only after a thorough requirements analysis. Structured design practices should be integrated with the top-down approach, especially in very complex networks.
In contrast to top-down design, the network design approach in which network devices and
technologies are selected first is called bottom-up, or connect-the-dots. This approach often results
in an inappropriate network for the required services and is primarily used when a very quick
response to the design request is needed. With a bottom-up approach, the risk of having to redesign
the network is high.
Guidelines for producing a top-down design include the following:
■ Thoroughly analyze the customer’s requirements.
■ Initiate the design from the top of the OSI model. In other words, define the upper OSI layers (application, presentation, and session) first, and then define the lower OSI layers (transport, network, data link, and physical)—the infrastructure (routers, switches, and media) that is required.
■ Gather additional data about the network (protocol behavior, scalability requirements, additional requirements from the customer, and so forth) that might influence the logical and physical design. Adapt the design to the new data, as required.
Top-Down Approach Compared to Bottom-Up Approach
A top-down approach to design has many benefits compared to a bottom-up approach, including
the following:
■ Incorporating the customer organization’s requirements
■ Providing the customer and the designer with the “big picture” of the desired network
■ Providing a design that is appropriate for both current requirements and future development
The disadvantage of the top-down approach is that it is more time-consuming than the bottom-up
approach; it necessitates a requirement analysis so that the design can be adapted to the identified
needs.
A benefit of the bottom-up approach—selecting the devices and technologies and then moving
toward services and applications—is that it allows a quick response to a design request. This
design approach facilitates designs based on the designer’s previous experience.
The major disadvantage of the bottom-up approach is that it can result in an inappropriate design,
leading to costly redesign.
Top-Down Design Example
Consider an example that uses the basics of the top-down approach when designing an IP
telephony network solution. In this example, the customer requires a network that can support IP
telephony. IP telephony permits the use of the same network resources for both data and voice
transport, thus reducing the costs of having two separate networks. To achieve this, the network
must support Voice over IP (VoIP) technology; this first step in the design process is illustrated in
Figure 2-19.
Figure 2-19   A Voice over IP Network Is Required for IP Telephony
Figure 2-20 illustrates the addition of an IP-based network, which is required to support VoIP. The
network includes IP-enabled routers and other devices not shown in the figure. The IP network’s
delay is also managed; to achieve this, specific QoS mechanisms are also implemented in the
network, as indicated in Figure 2-20.
Figure 2-20   IP and QoS Are Required for VoIP
Figure 2-21 illustrates the addition of the call monitoring and management function. This function
was previously overlooked because such functions were traditionally handled by a PBX on a
separate voice network; during the top-down design, it became clear that this function is necessary.
A Cisco Unified Communications Manager is therefore placed inside the network to manage and
monitor IP telephone calls.
NOTE Cisco Unified Communications Manager is a server-based application that establishes
and maintains signaling and control for IP telephone sessions.
IP telephony is described further in Chapter 8, “Voice Network Design Considerations.”
Figure 2-21   Cisco Unified Communications Manager Is Required for Monitoring and Managing VoIP Calls
Decision Tables in Network Design
Decision tables are used for making systematic decisions when there are multiple solutions or
options to a network issue or problem. Decision tables facilitate the selection of the most
appropriate option from many possibilities and can be helpful for justifying why a certain solution
was chosen. Options are usually selected based on the highest level of compliance with given
requirements. Basic guidelines for creating a network design decision table include the following:
Step 1   Determine the network building block about which decisions will be made (the physical topology, routing protocol, security implementation, and so on).
Step 2   Collect possible options for each decision. Be certain to include all options (or as many as possible) to obtain maximum value from the decision table. A thorough survey of the existing state of technology and considerable knowledge are needed to include all options.
Step 3   Create a table of the possible options and the given requirements. Include the relevant parameters or properties.
Step 4   Match the given requirements with the specific properties of the given options.
Step 5   Select the most appropriate option—the option with the most matches—if all requirements are treated equally. However, if some requirements are considered more important than others, implement a weighting system such that each requirement is assigned a weight proportional to its importance in the decision-making process.
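The steps above, including the optional weighting in Step 5, can be sketched programmatically. In the following Python sketch, the requirement names, weights, and per-option scores are illustrative (a score of 1 means the option satisfies the requirement, 0 means it does not):

```python
# Requirements and their weights (illustrative): staff knowledge is weighted
# most heavily, enterprise focus next, and the remainder equally.
requirements = {"large_network": 1, "enterprise_focused": 2, "vlsm": 1,
                "cisco_support": 1, "staff_knowledge": 3}

# 1 = option satisfies the requirement, 0 = it does not.
options = {
    "OSPF":  {"large_network": 1, "enterprise_focused": 1, "vlsm": 1,
              "cisco_support": 1, "staff_knowledge": 0},
    "IS-IS": {"large_network": 1, "enterprise_focused": 0, "vlsm": 1,
              "cisco_support": 1, "staff_knowledge": 0},
    "BGP":   {"large_network": 1, "enterprise_focused": 0, "vlsm": 1,
              "cisco_support": 1, "staff_knowledge": 0},
    "EIGRP": {"large_network": 1, "enterprise_focused": 1, "vlsm": 1,
              "cisco_support": 1, "staff_knowledge": 1},
}

def score(option):
    """Weighted sum of requirement matches for one option."""
    return sum(requirements[req] * met for req, met in option.items())

best = max(options, key=lambda name: score(options[name]))
print(best, score(options[best]))  # EIGRP 8
```

With equal weights this reduces to simply counting matches, which is the unweighted case described in Step 5.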
Figure 2-22 is an example of a decision table for selecting a routing protocol based on multiple
criteria. In this example, several routing protocols are considered as possible options: OSPF,
Intermediate System–to–Intermediate System (IS-IS), Enhanced Interior Gateway Routing
Protocol (EIGRP), and BGP. Five required parameters are listed, along with an indication of how
well the routing protocols comply with these parameters. As indicated in the figure, the chosen
protocol should include the following properties:
■ It should support a large network. All the protocols being considered meet this requirement.
■ It must be Enterprise-focused, rather than Internet service provider–focused. BGP was designed to support interconnecting networks of autonomous systems; it is not optimized for use in the enterprise. IS-IS is typically deployed in service provider environments, rather than in enterprises.
■ Support for variable-length subnet mask (VLSM) is required. All the protocols being considered support VLSM.
■ It must be supported on Cisco routers, which is the case for all the protocols being considered.
■ Network support staff should have a good knowledge of the chosen protocol to enable them to troubleshoot the network. In this case, the network support staff are knowledgeable about EIGRP, but not about OSPF, IS-IS, or BGP.
NOTE All requirements in this example have the same level of importance, so no weights are
used.
Based on the stated requirements, EIGRP is the routing protocol of choice in this example.
Figure 2-22   Sample Decision Table for Routing Protocol Selection

                        Required               Options
Parameters              Network
                        Parameters   OSPF        IS-IS       BGP         EIGRP
Size of Network
(Small/Medium/
Large/Very Large)       Large        Large       Very Large  Very Large  Large
Enterprise-Focused
(Yes/No)                Yes          Yes         No          No          Yes
Support for VLSM
(Yes/No)                Yes          Yes         Yes         Yes         Yes
Supports Cisco
Routers (Yes/No)        Yes          Yes         Yes         Yes         Yes
Network Support
Staff Knowledge
(Good/Fair/Poor)        Good         Fair        Poor        Poor        Good
Structured Design
The output of the design should be a model of the complete system. The top-down approach is
highly recommended. Rather than focusing on the network components, technologies, or
protocols, instead focus on the business goals, technical objectives, and existing and future
network applications and services.
Structured design focuses on a systematic approach, dividing the design task into related, less
complex components, as follows:
■ First, identify the applications needed to support the customer’s requirements.
■ Next, identify the applications’ logical connectivity requirements, with a focus on the necessary infrastructure services and network infrastructure.
■ Split the network functionally to develop the network infrastructure and hierarchy requirements.

NOTE This book uses the Cisco Enterprise Architecture to provide consistent infrastructure modularization, as described in Chapter 3.

■ Design each of the functional elements separately, yet in relation to other elements. For example, the network infrastructure and infrastructure services designs are tightly connected; they are both bound to the same logical, physical, and functional models. Use the top-down approach during all designs.
After identifying the connectivity requirements, the designer works on each of the functional
module’s details. The network infrastructure and infrastructure services are composed of logical
structures. Each of these structures (such as addressing, routing protocols, QoS, security, and so
forth) must be designed separately, but in close relation to other structures, with a goal of creating
one homogenous network.
Some logical structures are more closely related than others. Network infrastructure elements are
more closely related to each other than to infrastructure services, and infrastructure services are
more closely related to each other than to network infrastructure elements. For example, physical
topology and addressing design are very closely related, whereas addressing and QoS design are
not.
Several approaches to physically structuring a network module exist. The most common approach
is a three-layer hierarchical structure: core, distribution, and access. In this approach, three
separate, yet related, physical structures are developed instead of a single, large network, resulting
in meaningful and functionally homogeneous elements within each layer. Selecting the
functionality and required technologies is easier when it is applied to separate structured network
elements than when it is applied to the complex network.
NOTE Chapter 3 discusses the hierarchical model in detail.
Figure 2-23 is an example of how a network design can be divided into smaller, yet related sections
using structured design practices.
Figure 2-23 Structured Design Example
[Figure: Top-down application design drives the network infrastructure design and the infrastructure services design, which have tight design interaction. The network is modularized and a functional hierarchy is implemented. The network infrastructure design is logically subdivided into physical topology design, addressing design, routing design, and technology selection; the infrastructure services design is logically subdivided into QoS design, security design, multicast design, and so on.]
In this example, network infrastructure design and infrastructure services design are tightly
connected; both are bound to the same logical, physical, and functional models. These elements
are subdivided logically. The network infrastructure design is subdivided into physical topology
design, addressing design, routing design, and technology selection. The infrastructure services
design is subdivided into QoS design, security design, and multicast design. All design phases use
the top-down approach.
Chapter 2: Applying a Methodology to Network Design
Network Design Tools
Several types of tools can be used to ease the task of designing a complex modern network,
including the following:
■
Network modeling tools: Network modeling tools are helpful when a lot of input design
information (such as customer requirements, network audit and analysis results, and so on)
exists. Network modeling tools enable modeling of both simple and complex networks. The
tools process the information provided and return a proposed configuration, which can be
modified and reprocessed to add redundant links, support additional sites, and so forth.
■
Strategic analysis tools: Strategic analysis or what-if tools help designers and other people
who are working on the design (engineers, technologists, and business and marketing
professionals) to develop network and service plans, including detailed technical and business
analysis. These tools attempt to calculate the effects of specific network components through
simulated scenarios.
■
Decision tables: As discussed, decision tables are manual tools for choosing specific network
characteristics from multiple options, based on required parameters.
■
Simulation and verification tools or services: These tools or services are used to verify the
resulting design, thereby lessening the need for a pilot network implementation.
Figure 2-24 illustrates how the initial requirements information is processed with network design
tools to produce a network design.
Figure 2-24 Using Network Design Tools
[Figure: Input data (customer requirements, network audit and analysis) is used with network design tools (network modeling, strategic analysis, decision tables), resulting in a network design, which is then simulated and tested or verified by building a prototype network.]
To verify a network design that was produced with the help of network modeling tools, strategic
analysis tools, and decision tables, either use simulation and test tools or build a pilot or prototype
network. The pilot or prototype network also creates a proof of concept that confirms the
appropriateness of the design implementation plan.
Building a Prototype or Pilot Network
It is often desirable to verify a design before implementation. A design can be tested in an existing,
or live, network—this is called a pilot—or, preferably, in a prototype network that does not affect
the existing network. A successful design implementation in either a pilot or prototype network
can be used as a proof of concept in preparation for full implementation and can be used as input
to the implementation steps.
KEY POINT A pilot network tests and verifies the design before the network is launched, or is a subset of the existing network in which the design is tested.
A pilot network is normally used when the design is for a completely new network; pilots can also be used for designs that add to an existing network.
A prototype network tests and verifies a redesign in an isolated network, before it is applied to the existing network. A prototype network is usually used to verify designs that must be implemented on an existing network infrastructure.
It is important that the pilot or prototype test the design, including the customer’s most important
stated requirements. For example, if a key requirement is minimal response time for remote users,
ensure that the prototype or pilot verifies that the maximum acceptable response time is not
exceeded.
A prototype or pilot implementation can have one of two results:
■
Success: This result is usually enough to prove the design concept.
■
Failure: This result is normally used to correct the design; the prototype or pilot phase is then
repeated. In the case of small deviations, the design can be corrected and tested in the
prototype or pilot network immediately.
Figure 2-25 is a sample topology subset of a planned network. The highlighted areas indicate the
parts of the network involved in a redesign. This part of the topology is implemented first in a
prototype to verify the design.
Figure 2-25 A Prototype Network
[Figure: A planned network topology including management workstations, a server farm, external servers, and connections to the Internet, the PSTN/ISDN, and the WAN; the highlighted areas involved in the redesign are implemented first in a prototype.]
Documenting the Design
A design document lists the design requirements, documents the existing network and the network
design, identifies the proof-of-concept strategy and results, and details the implementation plan.
The final design document structure should be similar to the one in Figure 2-26, which includes
the following:
■
Introduction: Every design document should include an introduction to present the main
reasons leading to the network design or redesign.
■
Design requirements: Also a mandatory part of any design document, this section includes
the organization’s requirements and design goals that must be fulfilled.
■
Existing network infrastructure: This section is required only for a network redesign. The
subsections document the results of the existing network characterization steps.
■
Design: This section is an essential part of the design document and identifies the design and
implementation details. The design details documented will obviously differ depending on
the type of design project (whether it is a completely new network, a network redesign, or
simply a new service introduction, for example), but they typically include the topology,
addressing, and other design details. Implementation details, such as configuration templates and exact
configurations of network devices, are included to ease the implementation process.
■
Proof of concept: This section describes the pilot or prototype network verification and test
results.
■
Implementation plan: This section provides the implementation details that enable technical
staff to carry out the implementation as quickly and smoothly as possible, without requiring
the presence of the designer.
■
Appendixes: The appendixes usually include lists and, optionally, configurations of existing
network devices.
Figure 2-26 Sample Design Document

Design Document Index
1. Introduction
2. Design Requirements
3. Existing Network Infrastructure
   3.1. Network topology
   3.2. Network audit
   3.3. Applications used in the network
   3.4. Network health analysis
   3.5. Recommended changes to the existing network
4. Design
   4.1. Design summary
   4.2. Design details
      4.2.1. Topology design
      4.2.2. Addressing design
      4.2.3. EIGRP design
      4.2.4. Security design
      …
   4.3. Implementation details
      4.3.1. Configuration templates for campus devices
      4.3.2. Configuration templates for WAN devices
      …
5. Proof of Concept
   5.1. Pilot or prototype network
   5.2. Test results
6. Implementation Plan
   6.1. Summary
   6.2. Implementation steps
Appendix A—List of existing network devices
Appendix B—Configurations of existing network devices
The Design Implementation Process
After the design is complete, the design implementation process is executed.
Planning a Design Implementation
Planning and documenting the design implementation is the first step in this process. The design
implementation description should be as detailed as possible. The more detailed the design
documentation, the less knowledgeable the network engineer must be to implement the design.
Very complex implementation steps usually require that the designer carry out the
implementation, whereas other staff members (or another company) can perform well-documented, detailed implementation steps.
Implementation must consider the possibility of a failure, even after a successful pilot or prototype
network test. The plan should therefore include a test at every step and a rollback plan to revert to
the original setup if a problem occurs. List implementation steps and estimated times in a table.
If a design is composed of multiple complex implementation steps, plan to implement each step
separately rather than all at once. In case of failure, incremental implementation reduces
troubleshooting and reduces the time needed to revert to a previous state. Implementation of a
network design consists of several phases (install hardware, configure systems, launch into
production, and so forth). Each phase consists of several steps, and the documentation for each
step should contain the following:
■
Description of the step
■
References to design documents
■
Detailed implementation guidelines
■
Detailed rollback guidelines in case of failure
■
Estimated time necessary for implementation
Figure 2-27 illustrates a sample implementation plan summary.
Figure 2-27 Sample Summary Design Implementation Plan
[Figure: A table with columns for the date, time, phase and step, description, implementation details (a reference to a section of the detailed plan), and a checkmark indicating completion. The phases and steps shown are as follows.]

Phase 3: Install campus hardware (Section 6.2.3)
   Step 1: Connect switches (Section 6.2.3.1)
   Step 2: Install routers (Section 6.2.3.2)
   Step 3: Complete cabling (Section 6.2.3.3)
   Step 4: Verify data link layer (Section 6.2.3.4)
Phase 4: Configure campus hardware (Section 6.2.4)
   Step 1: Configure VLANs (Section 6.2.4.1)
   Step 2: Configure IP addressing (Section 6.2.4.2)
   Step 3: Configure routing (Section 6.2.4.3)
   Step 4: Verify connectivity (Section 6.2.4.4)
Phase 5: Launch campus updates into production (Section 6.2.5)
   Step 1: Complete connections to existing network (Section 6.2.5.1)
   Step 2: Verify connectivity (Section 6.2.5.2)
In Figure 2-27, each step of the implementation phase is briefly described, with references to the
detailed implementation plan for further details. The detailed implementation plan section should
describe precisely what needs to be accomplished.
Figure 2-28 provides a detailed description of an implementation step. It describes the
configuration of EIGRP on 50 routers in the network and lists the two major components of the
step (in the per-router configuration procedure).
Figure 2-28 Sample Detailed Design Implementation Step
• Section 6.2.7.3, “Configure routing protocols in the WAN network module”:
  – Number of routers involved is 50.
  – Use template from section 4.2.3, “EIGRP details.”
  – Per-router configuration:
    • Use the passive-interface command on all nonbackbone LANs. (See section 4.2.3, “EIGRP details.”)
    • Use summarization according to the design. (See section 4.2.3, “EIGRP details,” and section 4.2.2, “Addressing details.”)
  – Estimated time is 10 minutes per router.
  – Rollback procedure: Remove the EIGRP configuration on all routers.
The reference to the design document is useful for retrieving the details about the EIGRP
implementation.
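As an illustration only, the per-router procedure in this step might translate into a configuration fragment along the following lines. The EIGRP autonomous system number, interface names, and summary address shown here are hypothetical placeholders; the actual values would come from the design details in sections 4.2.2 and 4.2.3 of the design document.

```
! Hypothetical per-router EIGRP fragment; the AS number (100), the
! interface names, and the summary route are placeholders only.
router eigrp 100
 network 172.16.0.0 0.0.255.255
 ! Suppress EIGRP on all nonbackbone LANs (per section 4.2.3)
 passive-interface FastEthernet0/1
!
interface Serial0/0
 ! Advertise a summary toward the WAN (per sections 4.2.2 and 4.2.3)
 ip summary-address eigrp 100 172.16.32.0 255.255.224.0
```

Rollback, as the step notes, would consist of removing the EIGRP configuration (the no router eigrp 100 command) on each affected router.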
Implementing and Verifying the Design
Successful implementation of the pilot or prototype network might have already concluded work
on the design. However, implementation is the designed network’s first actual test. Even if a pilot
or prototype network was used as a proof of concept, only the actual implementation reveals any
design weaknesses. The design’s final confirmation is the full, live network implementation. As
part of the implementation phase, the designer assists with the design verification and takes
remedial actions, if necessary.
The design document should include a list of checks to be performed both during the pilot or
prototype phase and during the implementation, to ensure that the network is functioning as
required.
Monitoring and Redesigning the Network
The network is put into operation after it is built. During operation, the network is constantly
monitored and checked for errors and problems. A network redesign might be required if
troubleshooting problems become too frequent or even impossible to manage. For example, at
least a partial redesign might be necessary if the new network is consistently congested. Solutions
might include increasing bandwidth, adding filters, upgrading to devices with more capacity,
moving servers that are in high use, and so forth. Hopefully this scenario can be avoided if all
previous design steps have been completed properly.
Summary
In this chapter you learned about the principles of network design, with a focus on the following
topics:
■
The three phases of evolving to an intelligent information network: integrated transport,
integrated services, and integrated applications
■
The three layers of the Cisco SONA architectural framework: networked infrastructure,
interactive (infrastructure) services, and application
■
The PPDIOO network lifecycle
■
The network design methodology based on this lifecycle, which has three basic steps:
— Identify customer requirements
— Characterize the existing network and sites
— Design the network topology
■
The design implementation process, which also has three basic steps:
— Plan the implementation
— Implement and verify the design
— Monitor and optionally redesign
References
For additional information, see the following resources:
■
Service Oriented Network Architecture Introduction, http://www.cisco.com/go/sona/
■
Lifecycle Services Strategy Introduction, http://www.cisco.com/en/US/products/ps6890/
serv_category_home.html
■
Oppenheimer, P. Top-Down Network Design, Second Edition. Indianapolis: Cisco Press,
2004.
Case Study: ACMC Hospital Network Upgrade
This case study analyzes the network infrastructure of Acme County Medical Center (ACMC)
Hospital, a fictitious small county hospital in the United States. This same case study is used
throughout the remainder of the book so that you can continue to evaluate your understanding of
the concepts presented.
Case Study General Instructions
Use the scenarios, information, and parameters provided at each task of the ongoing case study. If
you encounter ambiguities, make reasonable assumptions and proceed. For all tasks, use the initial
customer scenario and build on the solutions provided thus far. You can use any and all
documentation, books, white papers, and so on.
In each step, you act as a network design consultant. Make creative proposals to accomplish the
customer’s business needs. Justify your ideas when they differ from the provided solutions. Use
any design strategies you feel are appropriate. The final goal of each case study is a paper solution.
Appendix A, “Answers to Review Questions and Case Studies,” provides a solution for each step
based on assumptions made. There is no claim that the provided solution is the best or only
solution. Your solution might be more appropriate for the assumptions you made. The provided
solution helps you understand the author’s reasoning and allows you to compare and contrast your
solution.
In this case study, you develop a high-level design for the ACMC Hospital network.
Case Study Scenario
This case study analyzes the network infrastructure of ACMC Hospital, a fictitious small county
hospital. The hospital has provided you with a short description of the current situation and its
plans. As a network designer, it is your job to identify all the organization’s requirements and data
that will allow you to provide an effective solution.
Organizational Facts
ACMC Hospital is a medium-sized regional hospital located in Acme County, with approximately
500 staff members supporting up to 1000 patients. The hospital is interested in updating its main
facility (which uses equipment from various vendors) in its Layer 2 campus. You are meeting to
define the client’s requirements.
ACMC has 15 buildings in total on the campus, plus 5 small remote clinics. There are two main
hospital buildings and an auxiliary building. The two main buildings have seven floors each, with
four wiring closets per floor. The auxiliary building—the Children’s Place—is connected to the
two main buildings; the switches from these three buildings are connected with fiber connections
in a ring. The Children’s Place has three floors, with three wiring closets per floor. The other 12
campus buildings are smaller office and support facilities, with 10 to 40 people per building,
located on one or two floors.
The network architect is new to the hospital. The hospital is aggressively expanding its clinic and
alternative emergency room presence within Acme County. Due to population growth in general,
plans to enlarge the main campus are also under way. The hospital is doing fairly well financially.
It wants to selectively deploy cutting-edge technology for better patient care and high productivity.
Management is tired of network downtime and slowness affecting patient care. Network
manageability is important because ACMC has a tradition of basing operations on small support
staffs with high productivity. ACMC’s upgrade timeframe is 6 to 12 months.
Current Situation
The current network uses inexpensive switches from several vendors, purchased over time. They
comply with various standards, depending on when they were purchased. The switches are not
SNMP-manageable, although a small amount of information is available from each switch via the
web or command-line interface.
Within each of the three main buildings is a main switch. One floor switch from each floor
connects to the main switch. The other switches connect either directly to the floor switch or via
a daisy chain of switches, depending on which was most convenient at the time.
The small outlying buildings have one or two 24-port switches. One of these connects back to one
of the main building switches via fiber. If there is a second switch, it connects via the first switch.
Currently, the staff VLAN spans the entire campus. No Layer 3 switching is present. The address
space is 172.16.0.0 /16. Addresses are coded sequentially into PCs as they are deployed. Staff
members have been meaning to deploy DHCP but have not had the time.
The applications that the organization is currently running include standard office applications,
plus some specialized medical tools running over IP. Radiology, Oncology, and other departments
do medical imaging. As these departments acquire new tools, they are adding real-time motion to
the highly detailed medical images, requiring large amounts of bandwidth. All the new servers are
capable of using Gigabit or Gigabit EtherChannel connectivity.
Many servers are currently located in various closets. Many lack uninterruptible power supplies or
proper environmental controls. A staff member has to roll a tape backup cart to each server closet
to back up each server. There are about 40 centrally located servers in one raised floor “server
room,” and 30 other servers distributed around the campus near their users. The server room takes
up part of the first floor of Main Building 1, along with the cafeteria and other non-networked
areas.
Hospital Support Services has been experimenting with workstations on wheels (WoW). Moving
these and plugging them into an Ethernet jack is just not working very well.
The WAN uses 56-kbps links to three of the remote clinics and dialup connectivity to the other
two. The one router uses static routing that was configured by a previous network designer.
The staff members have frequently complained about slow response times. There appears to be
severe congestion of the LAN, especially at peak hours. The staff provided you with a copy of its
recent network diagram, which is shown in Figure 2-29.
Figure 2-29 ACMC Network Diagram Provided by the Customer
[Figure: Main Building #1, Main Building #2, and the Children’s Place are interconnected, each with smaller buildings attached. Three remote clinics connect over 56-kbps links and two connect over dialup.]
You believe that the current situation does not provide for future growth, high reliability, and ease
of management.
Plans and Requirements
The introduction of new applications will result in an additional load on the links to the remote
clinics. The expected tighter integration and growth of remote offices will even further increase
the traffic load on the WAN links. The hospital would like to upgrade the WAN infrastructure to
provide sufficient bandwidth between the remote clinics and headquarters and, at the same time,
find a solution for better convergence during network failures. The hospital is aware of the
drawbacks of its current IP addressing scheme and is seeking a better solution.
The hospital must comply with the U.S. Health Insurance Portability and Accountability Act
(HIPAA).
Case Study Questions
Complete the following steps:
Step 1
Document ACMC’s requirements.
Step 2
Document any information that you think is missing from the case study
scenario and that you consider necessary for the design.
Before beginning the design, you will need this information. Assume that
you have talked to the customer about the missing information, and
document any assumptions you make. You don’t need to assume that all the
missing information is provided by the customer; some might never be
available. However, you do need to assume answers for your critical
questions.
NOTE Further information is provided in the case studies in subsequent chapters, as relevant
for that chapter. Thus, not all the information is provided in these answers.
Step 3
Outline the major design areas that you feel need to be addressed when
designing the solution for this scenario. List the tasks, and provide a brief
comment for each.
NOTE There are many ways, other than those provided in our answer, in which this
customer’s network could be improved. Further information is provided in the case studies in
subsequent chapters, and other options are discussed, as relevant for that chapter.
Review Questions
Answer the following questions, and then refer to Appendix A for the answers.
1.
What features are included in the Cisco vision of an intelligent network?
2.
Describe the three phases of evolving to an intelligent information network.
3.
Describe the three layers of the SONA framework.
4.
Name some of the benefits of using the SONA framework.
5.
Match the PPDIOO network lifecycle phases with their correct descriptions.
Phases:
a.
Prepare phase
b.
Plan phase
c.
Design phase
d.
Implement phase
e.
Operate phase
f.
Optimize phase
1.
The network is built
2.
A network design specification is produced
3.
Includes fault detection and correction and performance monitoring
4.
Network requirements are identified
5.
Business requirements and strategy related to the network are established
6.
Based on proactive management of the network
6.
During which PPDIOO phase is the initial design verification performed?
7.
What are the three basic steps of the design methodology?
8.
What steps are needed to implement a design?
9.
List some determinants of the scope of a design project.
10.
What steps are involved in gathering network requirements?
11.
Corporation X is planning to introduce new systems for its employees, including e-learning,
groupware, videoconferencing, and an alternative telephone service to reduce its operational
costs. Which of the following is a planned application?
a.
E-mail
b.
IP multicast
c.
Cisco Unified MeetingPlace
d.
Quality of service
12.
What are some typical organizational goals?
13.
Corporation X is currently spending $7000 per month for telephony services provided by its
local phone company. The new IP telephony equipment costs $40,000, and the operating costs
are $2000 per month. When will the introduction of IP telephony pay for itself?
a.
After eight months.
b.
After five months.
c.
After one year.
d.
It will not pay for itself.
14.
List some common organizational constraints.
15.
Explain why a schedule might be a design constraint.
16.
Users typically think of network performance in terms of what?
a.
Throughput
b.
Responsiveness
c.
Resource utilization
17.
How might bandwidth be a technical constraint for a network design?
18.
How does traffic analysis help in the characterization of a network?
19.
List some site contact information that would be important for projects involving remote
deployments when equipment delivery and installations must be coordinated.
20.
True or false: The auditing process should never require any changes in the network.
21.
List some tools that can be used in the network assessment process.
22.
Which command can be used to determine memory usage on a Cisco router?
a.
show processes memory
b.
show processes cpu
c.
show memory utilization
d.
show version
23.
Which command displays packet size distribution and activity by protocol on a Cisco router?
a.
show ip nbar protocol-discovery
b.
show ip interface
c.
show version
d.
show ip cache flow
24.
What is the difference between a saturated Ethernet segment and a saturated WAN link?
25.
The network health summary report includes recommendations that __________________.
a.
relate the existing network and the customer requirements
b.
are based on the customer requirements
c.
are used to sell more boxes
26.
True or false: Characterization of a network can typically be completed in a few hours.
27.
With a top-down design: (choose three)
a.
The design adapts the physical infrastructure to the requirements.
b.
The design adapts the requirements to the physical infrastructure.
c.
Network devices are chosen after requirement analysis.
d.
Network devices are selected first.
e.
The risk of having to redesign the network is high.
f.
The risk of having to redesign the network is low.
28.
What are the layers in the three-layer hierarchical structure?
a.
Core, distribution, and desktop
b.
Core, distribution, and access
c.
Core, routing, and access
d.
Backbone, routing, and access
29.
What types of tools can be used during the network design process?
30.
What is the difference between a pilot and a prototype?
31.
What sections are included in a typical final design document?
32.
What items should be included in the documentation for a network design implementation
plan?
33.
Why is the network designer involved in the implementation phase?
34.
What might necessitate a redesign of the network?
This chapter introduces a modular
hierarchical approach to network design,
the Cisco Enterprise Architecture. This
chapter includes the following sections:
■
Network Hierarchy
■
Using a Modular Approach to Network
Design
■
Services Within Modular Networks
■
Network Management Protocols and
Features
■
Summary
■
References
■
Case Study: ACMC Hospital Modularity
■
Review Questions
CHAPTER 3
Structuring and Modularizing the Network
This chapter introduces a modular hierarchical approach to network design, the Cisco
Enterprise Architecture. The chapter begins with a discussion of the hierarchical network
structure. The next section introduces network modularization and discusses the details of the
Cisco Enterprise Architecture. Following that are a detailed description of services within
modular networks, and a discussion of network management protocols and features.
Network Hierarchy
This section explains the hierarchical network model, which is composed of the access,
distribution, and core layers. The functions generally associated with each of these layers are
discussed, as is the most common approach to designing a hierarchical network.
Historically used in the design of enterprise local-area and wide-area data networks, this model
works equally well within the functional modules of the Cisco Enterprise
Architecture. These modules are discussed later in this chapter, in the section “Using a Modular
Approach to Network Design.”
Hierarchical Network Model
The hierarchical network model provides a framework that network designers can use to help
ensure that the network is flexible and easy to implement and troubleshoot.
Hierarchical Network Design Layers
As shown in Figure 3-1, the hierarchical network design model consists of three layers:
■
The access layer provides local and remote workgroup or user access to the network.
■
The distribution layer provides policy-based connectivity.
■
The core (or backbone) layer provides high-speed transport to satisfy the connectivity and
transport needs of the distribution layer devices.
Figure 3-1 Hierarchical Model’s Three Layers
[Figure: Core layer: high-speed switching. Distribution layer: policy-based connectivity. Access layer: local and remote workgroup access.]
Each hierarchical layer focuses on specific functions, thereby allowing the network designer to
choose the right systems and features based on their function within the model. This approach
helps provide more accurate capacity planning and minimize total costs. Figure 3-2 illustrates a
sample network showing the mapping to the hierarchical model’s three layers.
Figure 3-2 Sample Network Designed Using the Hierarchical Model
[Figure: Workstations connect to access layer switches using multilayer or Layer 2 switching; distribution layer multilayer switches connect the access layer to the core and to the WAN, the Internet, and the PSTN; the core layer uses multilayer switching; servers connect directly to the server farm distribution.]
You do not have to implement the hierarchical layers as distinct physical entities; they are defined
to aid successful network design and to represent functionality that must exist within a network.
The actual manner in which you implement the layers depends on the needs of the network you
are designing. Each layer can be implemented in routers or switches, represented by physical
media, or combined in a single device. A particular layer can be omitted, but hierarchy should be
maintained for optimum performance. The following sections detail the functionality of the three
layers and the devices used to implement them.
Access Layer Functionality
This section describes the access layer functions and the interaction of the access layer with the
distribution layer and local or remote users.
The Role of the Access Layer
The access layer is the concentration point at which clients access the network. Access layer
devices control traffic by localizing service requests to the access media.
The purpose of the access layer is to grant user access to network resources. Following are the
access layer’s characteristics:
■
In the campus environment, the access layer typically incorporates switched LAN devices
with ports that provide connectivity for workstations and servers.
■
In the WAN environment, the access layer for teleworkers or remote sites provides access to
the corporate network across some wide-area technology, such as Frame Relay, Multiprotocol
Label Switching (MPLS), Integrated Services Digital Network, leased lines, Digital
Subscriber Line (DSL) over traditional telephone copper lines, or coaxial cable.
■
So as not to compromise network integrity, access is granted only to authenticated users or
devices (such as those with physical address or logical name authentication). For example, the
devices at the access layer must detect whether a telecommuter who is dialing in is legitimate,
yet they must require minimal authentication steps for the telecommuter.
Layer 2 and Multilayer Switching in the Access Layer
Access can be provided to end users as part of either a Layer 2 (L2) switching environment or a
multilayer switching environment.
NOTE In this book, the term multilayer switching denotes a switch’s generic capability to use
information at different protocol layers as part of the switching process; the term Layer 3
switching is a synonym for multilayer switching in this context.
Cisco switches implement the use of protocol information from multiple layers in the switching
process in two different ways. The first way is multilayer switching (MLS) and the second way
is Cisco Express Forwarding (CEF). MLS and CEF are described further in Chapter 4,
“Designing Basic Campus and Data Center Networks.”
Using Layer 2 Switching in the Access Layer
Access to local workstations and servers can be provided using shared or switched media LANs;
VLANs may be used to segment the switched LANs. Each LAN or VLAN is a single broadcast
domain.
The access layer aggregates end-user switched 10/100 ports and provides Fast Ethernet, Fast
EtherChannel, and Gigabit Ethernet uplinks to the distribution layer to satisfy connectivity
requirements and reduce the size of the broadcast domains. You can deploy multiple VLANs, each with its own IP subnet and its own instance of Spanning Tree Protocol (STP) providing alternative paths in case of failure. In this case, Layer 2 trunking (typically using the Institute of Electrical and Electronics Engineers [IEEE] 802.1Q trunking protocol) is used between the access layer switches and the distribution layer switches, with per-VLAN STP on each uplink for load balancing and redundancy, and with a distribution layer multilayer switch providing the inter-VLAN communication for the access layer.
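As a rough illustration (the interface names, VLAN IDs, and exact commands are placeholders and vary by platform and Cisco IOS version), an access-to-distribution uplink carrying two VLANs over an 802.1Q trunk might be configured as follows:

```
! Access switch uplink as an 802.1Q trunk (illustrative values)
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20
!
! On the distribution switches, per-VLAN STP root placement can be
! alternated across uplinks for load balancing and redundancy:
spanning-tree vlan 10 root primary
spanning-tree vlan 20 root secondary
```

Alternating the STP root per VLAN ensures that both uplinks carry traffic rather than leaving one permanently blocked.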
NOTE Chapter 4 discusses STP further.
KEY POINT  A recommended best practice is to implement one VLAN—thus supporting one IP subnet—per access switch and to connect the access switches to the distribution switches with Layer 3 links rather than with trunks.
NOTE In small networks, the access layer is often collapsed into the distribution layer; in
other words, one device might handle all functions of the access and distribution layers.
Network Hierarchy
KEY POINT  Using the Rapid Spanning Tree Protocol (RSTP) is a recommended best practice in the enterprise. RSTP is an evolution of the IEEE 802.1d STP standard and provides faster spanning-tree convergence after a topology change.
When RSTP cannot be implemented, Cisco IOS STP features such as UplinkFast, PortFast, and
BackboneFast can be used to provide equivalent convergence improvements. These features are
described as follows:
■ UplinkFast: Enables faster failover on an access layer switch on which dual uplinks connect to the distribution layer. The failover time is reduced by moving the blocked uplink port directly to the forwarding state immediately after root port failure, without transitioning the port through the listening and learning states.
■ BackboneFast: If a link fails on the way to the root switch but is not directly connected to the local switch, BackboneFast reduces the convergence time from 50 seconds to between 20 and 30 seconds.
■ PortFast: Enables switch ports connected to nonswitch devices (such as workstations) to immediately enter the spanning-tree forwarding state, thereby bypassing the listening and learning states, when they come up. Ports connected only to an end-user device do not have bridging loops, so it is safe to go directly to the forwarding state, significantly reducing the time it takes before the port is usable.
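On Cisco Catalyst switches, these features are typically enabled as shown below; the interface name is illustrative, and the availability of each command depends on the platform and IOS version:

```
! UplinkFast and BackboneFast are enabled globally
spanning-tree uplinkfast
spanning-tree backbonefast
!
! PortFast is enabled on individual end-user ports only
interface FastEthernet0/5
 description End-user workstation
 spanning-tree portfast
```

PortFast should never be enabled on ports connecting to other switches, because it bypasses the loop-prevention states of STP.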
NOTE Chapter 4 discusses other STP features.
Using Multilayer Switching in the Access Layer
The most common design for remote users is to use multilayer switches or routers. A multilayer
switch, or router, is the boundary for broadcast domains and is necessary for communicating
between broadcast domains (including VLANs). Access routers provide access to remote office
environments using various wide-area technologies combined with multilayer features, such as
route propagation, packet filtering, authentication, security, Quality of Service (QoS), and so on.
These technologies allow the network to be optimized to satisfy a particular user’s needs. In a
dialup connection environment, dial-on-demand routing (DDR) and static routing can be used to
control costs.
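A minimal DDR sketch, assuming an ISDN BRI interface (the addresses, dialer string, and timer are placeholders), combines a static route toward the central site with a definition of the "interesting" traffic that brings the link up:

```
! Static route toward the central site via the dial interface
ip route 0.0.0.0 0.0.0.0 BRI0
!
! Any IP traffic is "interesting" and triggers a call
dialer-list 1 protocol ip permit
!
interface BRI0
 ip address 192.168.1.2 255.255.255.0
 encapsulation ppp
 dialer-group 1
 dialer string 5551234
 dialer idle-timeout 120   ! drop the call after 120 idle seconds
```

Static routes keep the link from being dialed merely to exchange routing protocol updates, which is how DDR controls costs.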
Access Layer Example
Figure 3-3 illustrates a sample network in which the campus access layer aggregates end users and
provides uplinks to the distribution layer. The access layer switches are dual-attached to the
distribution layer switches for high availability.
Figure 3-3  Access Layer Connectivity in a Campus LAN
The access layer can support convergence, high availability, security, QoS, and IP multicast. Some
services found at the access layer include establishing a QoS trust boundary, broadcast
suppression, and Internet Group Management Protocol (IGMP) snooping.
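For example (the commands and threshold are illustrative and platform-dependent), an access port might establish the QoS trust boundary at an attached Cisco IP phone and suppress broadcast storms, with IGMP snooping enabled globally:

```
interface FastEthernet0/7
 mls qos trust device cisco-phone    ! trust markings only if a phone is detected
 mls qos trust cos
 storm-control broadcast level 20.00 ! broadcast suppression threshold
!
! IGMP snooping (the default on many Catalyst platforms)
ip igmp snooping
```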
Distribution Layer Functionality
This section describes distribution layer functions and the interaction of the distribution layer with
the core and access layers.
The Role of the Distribution Layer
The distribution layer represents both a separation between the access and core layers and a
connection point between the diverse access sites and the core layer. The distribution layer
determines department or workgroup access and provides policy-based connectivity.
Following are the characteristics of the distribution layer:
■ Distribution layer devices control access to resources that are available at the core layer and must therefore use bandwidth efficiently.
■ In a campus environment, the distribution layer aggregates wiring closet bandwidth by concentrating multiple low-speed access links into a high-speed core link and using switches to segment workgroups and isolate network problems to prevent them from affecting the core layer.
■ Similarly, in a WAN environment, the distribution layer aggregates WAN connections at the edge of the campus and provides policy-based connectivity.
■ This layer provides redundant connections for access devices. Redundant connections also provide the opportunity to load-balance between devices.
■ The distribution layer represents a routing boundary between the access and core layers and is where routing and packet manipulation are performed.
■ The distribution layer allows the core layer to connect diverse sites while maintaining high performance. To maintain good performance in the core, the distribution layer can redistribute between bandwidth-intensive access-layer routing protocols and optimized core routing protocols. Route filtering is also implemented at the distribution layer.
■ The distribution layer can summarize routes from the access layer to improve routing protocol performance. For some networks, the distribution layer offers a default route to access-layer routers and runs dynamic routing protocols only when communicating with core routers.
■ The distribution layer connects network services to the access layer and implements policies for QoS, security, traffic loading, and routing. For example, the distribution layer addresses different protocols’ QoS needs by implementing policy-based traffic control to isolate backbone and local environments. Policy-based traffic control prioritizes traffic to ensure the best performance for the most time-critical and time-dependent applications.
■ The distribution layer is often the layer that terminates access layer VLANs (broadcast domains); however, this can also be done at the access layer.
■ This layer provides any media transitions (for example, between Ethernet and ATM) that must occur.
Policy-Based Connectivity
Policy-based connectivity means implementing the policies of the organization (as described in
Chapter 2, “Applying a Methodology to Network Design”). Methods for implementing policies
include the following:
■ Filtering by source or destination address
■ Filtering based on input or output ports
■ Hiding internal network numbers by route filtering
■ Providing specific static routes rather than using routes from a dynamic routing protocol
■ Security (for example, certain packets might not be allowed into a specific part of the network)
■ QoS mechanisms (for example, the precedence and type of service [ToS] values in IP packet headers can be set in routers to leverage queuing mechanisms to prioritize traffic)
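As a sketch of the first and last methods (the addresses, ACL numbers, and UDP port range are placeholders), an extended ACL can filter by source and destination address, and a policy map can set IP precedence for time-critical traffic:

```
! Filter by source/destination address on an inbound interface
access-list 101 deny   ip 10.1.1.0 0.0.0.255 host 10.2.2.10
access-list 101 permit ip any any
!
! Mark assumed voice-bearer traffic with IP precedence 5
access-list 102 permit udp any any range 16384 32767
class-map match-all VOICE
 match access-group 102
policy-map EDGE-POLICY
 class VOICE
  set ip precedence 5
!
interface FastEthernet0/0
 ip access-group 101 in
 service-policy input EDGE-POLICY
```

Downstream queuing mechanisms can then act on the precedence value to prioritize the marked traffic.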
Distribution Layer Example
Figure 3-4 shows a sample network with various features of the distribution layer highlighted.
Figure 3-4  Example of Distribution Layer Features
Following are the characteristics of the distribution layer in the routed campus network shown in
Figure 3-4:
■ Multilayer switching is used toward the access layer (and, in this case, within the access layer).
■ Multilayer switching is performed in the distribution layer and extended toward the core layer.
■ The distribution layer performs two-way route redistribution to exchange the routes between the Routing Information Protocol version 2 (RIPv2) and Enhanced Interior Gateway Routing Protocol (EIGRP) routing processes.
■ Route filtering is configured on the interfaces toward the access layer.
■ Route summarization is configured on the interfaces toward the core layer.
■ The distribution layer contains highly redundant connectivity, both toward the access layer and toward the core layer.
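A configuration sketch of these distribution layer features (the process number, networks, metrics, and interface names are illustrative only) might look like the following:

```
! Two-way redistribution between RIPv2 and EIGRP
router eigrp 100
 network 10.0.0.0
 redistribute rip metric 10000 100 255 1 1500
!
router rip
 version 2
 network 10.0.0.0
 redistribute eigrp 100 metric 3
 distribute-list 10 out FastEthernet0/1   ! route filtering toward the access layer
!
access-list 10 permit 0.0.0.0             ! advertise only a default route
!
! Route summarization on the uplink toward the core
interface GigabitEthernet0/1
 ip summary-address eigrp 100 10.10.0.0 255.255.0.0
```

The distribute-list offers the access layer only a default route, while the summary address keeps the core routing tables small.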
Core Layer Functionality
This section describes core layer functions and the interaction of the core layer with the
distribution layer.
The Role of the Core Layer
The function of the core layer is to provide fast and efficient data transport. Characteristics of the
core layer include the following:
■ The core layer is a high-speed backbone that should be designed to switch packets as quickly as possible to optimize communication transport within the network.
■ Because the core is critical for connectivity, core layer devices are expected to provide a high level of availability and reliability. A fault-tolerant network design ensures that failures do not have a major impact on network connectivity. The core must be able to accommodate failures by rerouting traffic and responding quickly to changes in network topology. The core must provide a high level of redundancy. A full mesh is strongly suggested, and at least a well-connected partial mesh with multiple paths from each device is required.
■ The core layer should not perform any packet manipulation, such as checking access lists or filtering, which would slow down the switching of packets.
■ The core layer must be manageable.
■ The core devices must be able to implement scalable protocols and technologies, and provide alternative paths and load balancing.
Switching in the Core Layer
Layer 2 switching or multilayer switching (routing) can be used in the core layer. Because core devices are responsible for accommodating failures by rerouting traffic and responding quickly to network topology changes, and because routing on a multilayer switch incurs no performance cost, most implementations use multilayer switching in the core layer. The core layer can then more readily implement scalable protocols and technologies, and provide alternate paths and load balancing.
Figure 3-5 shows an example of Layer 2 switching in the campus core.
Figure 3-5  Layer 2 Switching in the Campus Core
In Figure 3-5, a typical packet between access sites follows these steps:
Step 1  The packet is Layer 2–switched toward a distribution switch.
Step 2  The distribution switch performs multilayer switching toward a core interface.
Step 3  The packet is Layer 2–switched across the LAN core.
Step 4  The receiving distribution switch performs multilayer switching toward an access layer LAN.
Step 5  The packet is Layer 2–switched across the access layer LAN to the destination host.
Figure 3-6 shows an example of multilayer switching in the campus core.
Figure 3-6  Multilayer Switching in the Campus Core
In Figure 3-6, a typical packet between access sites follows these steps:
Step 1  The packet is Layer 2–switched toward a distribution switch.
Step 2  The distribution switch performs multilayer switching toward a core interface.
Step 3  The packet is multilayer-switched across the LAN core.
Step 4  The receiving distribution switch performs multilayer switching toward an access LAN.
Step 5  The packet is Layer 2–switched across the access layer LAN to the destination host.
Hierarchical Routing in the WAN
Figure 3-7 shows an example of hierarchical routing in the WAN portion of a network.
Figure 3-7  Hierarchical Routing in the WAN
In Figure 3-7, a typical packet between access sites follows these steps:
Step 1  The packet is Layer 3–forwarded toward the distribution router.
Step 2  The distribution router forwards the packet toward a core interface.
Step 3  The packet is forwarded across the WAN core.
Step 4  The receiving distribution router forwards the packet toward the appropriate access layer router.
Step 5  The packet is Layer 3–forwarded to the destination host’s access layer LAN.
Using a Modular Approach to Network Design
This section expands on the Cisco Service-Oriented Network Architecture (SONA) framework
described in Chapter 2 and explores the six modules of the Cisco Enterprise Architecture, with an
emphasis on the network infrastructure design considerations.
NOTE The access, distribution, and core layers can appear within each module of the Cisco
Enterprise Architecture.
The modularity built into the architecture allows flexibility in network design and facilitates
implementation and troubleshooting. Before the details of the architecture itself are introduced, an
overview of the evolution of enterprise networks is provided.
Evolution of Enterprise Networks
You do not have to go far back in history to find a time when networks were primarily used for file
and print services. These networks were isolated LANs that were built throughout the enterprise
organization. As organizations interconnected, these isolated LANs and their functions grew from
file and print services to include critical applications; the critical nature and complexity of the
enterprise networks also grew.
As discussed in the previous section, Cisco introduced the hierarchical model to divide the
enterprise network design (separately for both campus and WAN networks) into the access,
distribution, and core layers. This solution has several weaknesses, especially for large networks, which are difficult to implement, manage, and, above all, troubleshoot. As networks became more complex, it was difficult to evaluate a network solution end to end. The hierarchical model alone does not scale well to these large networks.
An efficient method of solving and scaling a complex task is to break it into smaller, more
specialized tasks. Networks can easily be broken into smaller parts because they have natural physical,
logical, and functional boundaries. If they are sufficiently large to require additional design or
operational separation, these specialized functional modules can then be designed hierarchically
with the access, distribution, and core layers.
The Cisco Enterprise Architecture does just that: It divides the enterprise network along further physical, logical, and functional boundaries to scale the hierarchical model. Now, rather than
designing networks using only the hierarchical model, networks can be designed using this Cisco
Enterprise Architecture, with hierarchy (access, distribution, and core) included in the various
modules, as required.
Designing with this Cisco Enterprise Architecture is not much different from what is already used
in practice; it formalizes current practice. There have always been separate hierarchies for the
campus (with access, distribution, and core) and for the WAN (the remote office was the access
layer, the regional office provided the distribution layer, and the headquarters was the core). The
hierarchies tied together at the campus backbone. The Cisco Enterprise Architecture extends the
concept of hierarchy from the original two modules: Campus and WAN.
Cisco SONA Framework
As illustrated in Figure 3-8, the Cisco SONA provides an enterprise-wide framework that
integrates the entire network—campus, data center, enterprise edge, WAN, branches, and
teleworkers—offering staff secure access to the tools, processes, and services they require.
Figure 3-8  Cisco SONA Framework
The modules of the Cisco Enterprise Architecture represent focused views of each of the places in
the network described in the SONA framework. Each module has a distinct network infrastructure
and distinct services; network applications extend between the modules.
Functional Areas of the Cisco Enterprise Architecture
At the first layer of modularity in the Cisco Enterprise Architecture, the entire network is divided
into functional components—functional areas that contain network modules—while still
maintaining the hierarchical concept of the core-distribution-access layers within the network
modules as needed.
NOTE The access, distribution, and core layers can appear in any functional area or module
of the Cisco Enterprise Architecture.
The Cisco Enterprise Architecture comprises the following six major functional areas (also called
modules):
■ Enterprise Campus
■ Enterprise Edge
■ Service Provider
■ Enterprise Branch
■ Enterprise Data Center
■ Enterprise Teleworker
KEY POINT  An enterprise does not implement the modules in the Service Provider functional area; however, they are necessary for enabling communication with other networks.
NOTE The Cisco SONA Enterprise Edge and the WAN and metropolitan-area network
(MAN) modules are represented as one functional area in the Cisco Enterprise Architecture, the
Enterprise Edge.
Figure 3-9 illustrates the modules within the Cisco Enterprise Architecture.
Figure 3-9  Cisco Enterprise Architecture
NOTE Figure 3-9 is reproduced on the inside back cover of this book for your reference.
The Cisco Enterprise Campus Architecture combines a core infrastructure of intelligent switching
and routing with tightly integrated productivity-enhancing technologies, including Cisco Unified
Communications, mobility, and advanced security. The architecture provides the enterprise with
high availability through a resilient multilayer design, redundant hardware and software features,
and automatic procedures for reconfiguring network paths when failures occur. IP multicast
capabilities provide optimized bandwidth consumption, and QoS features ensure that real-time
traffic (such as voice, video, or critical data) is not dropped or delayed. Integrated security protects
against and mitigates the impact of worms, viruses, and other attacks on the network, including at
the switch port level. For example, the Cisco enterprise-wide architecture extends support for
security standards, such as the IEEE 802.1X port-based network access control standard and the
Extensible Authentication Protocol. It also provides the flexibility to add Internet Protocol
Security (IPsec) and MPLS virtual private networks (VPN), identity and access management, and
VLANs to compartmentalize access. These features help improve performance and security while
decreasing costs.
The Cisco Enterprise Edge Architecture offers connectivity to voice, video, and data services
outside the enterprise. This module enables the enterprise to use Internet and partner resources,
and provide resources for its customers. QoS, service levels, and security are the main issues in
the Enterprise Edge.
The Cisco Enterprise WAN and MAN and Site-to-Site VPN module is part of the Enterprise Edge.
It offers the convergence of voice, video, and data services over a single Cisco Unified
Communications network, which enables the enterprise to span large geographic areas in a cost-effective manner. QoS, granular service levels, and comprehensive encryption options help ensure
the secure delivery of high-quality corporate voice, video, and data resources to all corporate sites,
enabling staff to work productively and efficiently wherever they are located. Security is provided
with multiservice VPNs (both IPsec and MPLS) over Layer 2 or Layer 3 WANs, hub-and-spoke,
or full-mesh topologies.
The Cisco Enterprise Data Center Architecture is a cohesive, adaptive network architecture that
supports requirements for consolidation, business continuance, and security while enabling
emerging service-oriented architectures, virtualization, and on-demand computing. Staff,
suppliers, and customers can be provided with secure access to applications and resources,
simplifying and streamlining management and significantly reducing overhead. Redundant data
centers provide backup using synchronous and asynchronous data and application replication. The
network and devices offer server and application load balancing to maximize performance. This
architecture allows the enterprise to scale without major changes to the infrastructure. This module
can be located either at the campus as a server farm or at a remote facility.
The Cisco Enterprise Branch Architecture allows enterprises to extend head-office applications
and services (such as security, Cisco Unified Communications, and advanced application
performance) to thousands of remote locations and users or to a small group of branches. Cisco
integrates security, switching, network analysis, caching, and converged voice and video services
into a series of integrated services routers (ISR) in the branch so that the enterprises can deploy
new services without buying new routers. This architecture provides secure access to voice,
mission-critical data, and video applications—anywhere, anytime. Advanced routing, VPNs,
redundant WAN links, application content caching, and local IP telephony call processing features
are available with high levels of resilience for all the branch offices. An optimized network
leverages the WAN and LAN to reduce traffic and save bandwidth and operational expenses. The
enterprise can easily support branch offices with the capability to centrally configure, monitor, and
manage devices located at remote sites, including tools, such as Cisco AutoQoS and the Cisco
Router and Security Device Manager graphical user interface QoS wizard, which proactively
resolve congestion and bandwidth issues before they affect network performance.
The Cisco Enterprise Teleworker Architecture allows enterprises to securely deliver voice and data
services to remote small or home offices (known as small office, home office [SOHO]) over a
standard broadband access service, providing a business-resiliency solution for the enterprise and
a flexible work environment for employees. Centralized management minimizes the IT support
costs, and robust integrated security mitigates the unique security challenges of this environment.
Integrated security and identity-based networking services enable the enterprise to extend campus
security policies to the teleworker. Staff can securely log in to the network over an always-on VPN
and gain access to authorized applications and services from a single cost-effective platform.
Productivity can be further enhanced by adding an IP phone, thereby providing cost-effective
access to a centralized IP communications system with voice and unified messaging services.
NOTE Each of these modules has specific requirements and performs specific roles in the
network; note that their sizes in Figure 3-9 are not meant to reflect their scale in a real network.
This architecture allows network designers to focus on only a selected module and its functions.
Designers can describe each network application and service on a per-module basis and validate
each as part of the complete enterprise network design. Modules can be added to achieve
scalability if necessary; for example, an organization can add more Enterprise Campus modules if
it has more than one campus.
Guidelines for Creating an Enterprise Network
When creating an Enterprise network, divide the network into appropriate areas, where the
Enterprise Campus includes all devices and connections within the main Campus location; the
Enterprise Edge covers all communications with remote locations and the Internet from the
perspective of the Enterprise Campus; and the remote modules include the remote branches,
teleworkers, and the remote data center. Define clear boundaries between each of the areas.
NOTE Depending on the network, an enterprise can have multiple campus locations. A
location that might be a remote branch from the perspective of a central campus location might
locally use the Cisco Enterprise Campus Architecture.
Figure 3-10 shows an example of dividing a network into an Enterprise Campus area, an
Enterprise Edge area, and some remote areas.
Figure 3-10  Sample Network Divided into Functional Areas
The following sections provide additional details about each of the functional areas and their
modules.
Enterprise Campus Modules
This section introduces the Enterprise Campus functional area and describes the purpose of each
module therein. It also discusses connections with other modules.
An enterprise campus site is a large site that is often the corporate headquarters or a major office.
Regional offices, SOHOs, and mobile workers might have to connect to the central campus for
data and information. As illustrated in Figure 3-11, the Enterprise Campus functional area
includes the Campus Infrastructure module and, typically, a Server Farm module.
Figure 3-11  Enterprise Campus Functional Area
Campus Infrastructure Module
The Campus Infrastructure design consists of several buildings connected across a Campus Core.
The Campus Infrastructure module connects devices within a campus to the Server Farm and
Enterprise Edge modules. A single building in a Campus Infrastructure design contains a Building
Access layer and a Building Distribution layer. When more buildings are added to the Campus
Infrastructure, a backbone or Campus Core layer is added between buildings. The Campus
Infrastructure module includes three layers:
■ The Building Access layer
■ The Building Distribution layer
■ The Campus Core layer
NOTE In the most general model, the Building Access layer uses Layer 2 switching, and the
Building Distribution layer uses multilayer switching.
Building Access Layer
The Building Access layer, located within a campus building, aggregates end users from different
workgroups and provides uplinks to the Building Distribution layer. It contains end-user devices
such as workstations, Cisco IP phones, and networked printers, connected to Layer 2 access
switches; VLANs and STP might also be supported. The Building Access layer provides important
services, such as broadcast suppression, protocol filtering, network access, IP multicast, and QoS.
For high availability, the access switches are dual-attached to the distribution layer switches. The
Building Access layer might also provide Power over Ethernet (PoE) and auxiliary VLANs to
support voice services.
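For example (the interface, VLAN IDs, and PoE syntax are illustrative and vary by platform), a Building Access port supporting an IP phone with a daisy-chained PC might be configured as follows:

```
interface FastEthernet0/3
 description IP phone + workstation
 switchport access vlan 10      ! data VLAN
 switchport voice vlan 110      ! auxiliary (voice) VLAN
 spanning-tree portfast
 power inline auto              ! supply PoE to the phone if it requests it
```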
Building Distribution Layer
The Building Distribution layer aggregates the wiring closets within a building and provides
connectivity to the Campus Core layer. It provides aggregation of the access layer networks using
multilayer switching. The Building Distribution layer performs routing, QoS, and access control.
Requests for data flow into the multilayer switches and onward into the Campus Core layer;
responses follow the reverse path. Redundancy and load balancing with the Building Access and
Campus Core layer are recommended. For example, in Figure 3-11, the Building Distribution
layer has two equal-cost paths into the Campus Core layer, providing fast failure recovery because
each distribution switch maintains two equal-cost paths in its routing table to every destination
network. If one connection to the Campus Core layer fails, all routes immediately switch over to
the remaining path.
Campus Core Layer
The Campus Core layer is the core layer of the Campus Infrastructure module. Within the
Enterprise Campus functional area, this high-performance, switched backbone connects the
buildings and various parts of the campus. Specifically, this layer interconnects the Building
Distribution layer with the Server Farm and the Enterprise Edge modules.
The Campus Core layer of the Campus Infrastructure module provides redundant and fast-converging connectivity between buildings and with the Server Farm and Enterprise Edge
modules. It routes and switches traffic as quickly as possible from one module to another. This
module usually uses multilayer switches for high-throughput functions with added routing, QoS,
and security features.
Server Farm Module
A high-capacity, centralized server farm module provides users with internal server resources. In
addition, it typically supports network management services for the enterprise, including
monitoring, logging, troubleshooting, and other common management features from end to
end.
The Server Farm module typically contains internal e-mail and other corporate servers that
provide internal users with application, file, print, e-mail, and Domain Name System (DNS)
services. As shown in Figure 3-11, because access to these servers is vital, as a best practice, they
are typically connected to two different switches to enable full redundancy or load sharing.
Moreover, the Server Farm module switches are cross-connected with the Campus Core layer
switches, thereby enabling high reliability and availability of all servers in the Server Farm
module.
The network management system performs system logging, network monitoring, and general
configuration management functions. For management purposes, an out-of-band network
connection (a network on which no production traffic travels) to all network components is
recommended. For locations where an out-of-band network is impossible (because of geographic
or system-related issues), the network management system uses the production network.
Network management can provide configuration management for nearly all devices in the
network, using a combination of the following two technologies:
■ Cisco IOS routers can act as terminal servers to provide a dedicated management network segment to the console ports on the Cisco devices throughout the enterprise by using a reverse-Telnet function.
150
Chapter 3: Structuring and Modularizing the Network
■ More extensive management features (software changes, content updates, log and alarm aggregation, and Simple Network Management Protocol [SNMP] management) can be provided through the dedicated out-of-band management network segment.
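The reverse-Telnet approach can be sketched with the common terminal-server convention in which TCP port 2000 plus the async line number reaches the console attached to that line. The line-to-device mapping below is invented for illustration:

```python
# Sketch of reverse-Telnet console access via a terminal server.
# Convention modeled: telnetting to TCP port 2000 + N on the terminal
# server reaches the device console cabled to async line N.
# The line assignments below are hypothetical.
REVERSE_TELNET_BASE = 2000

console_lines = {
    1: "dist-sw-1",
    2: "dist-sw-2",
    3: "core-sw-1",
}

def console_port(line_number):
    """TCP port to telnet to on the terminal server for a given async line."""
    return REVERSE_TELNET_BASE + line_number

def console_target(tcp_port):
    """Which device console a given reverse-Telnet port reaches (None if unused)."""
    return console_lines.get(tcp_port - REVERSE_TELNET_BASE)

# e.g. "telnet term-server 2003" would land on core-sw-1's console
```

The benefit is that every console remains reachable over the management segment even when a device's production interfaces are down.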
NOTE These Server Farm attributes also apply to a remote Data Center module.
Enterprise Campus Guidelines
Follow these guidelines for creating the modules within an Enterprise Campus functional area:
Step 1 Select modules within the campus that act as buildings with access and distribution layers.
Step 2 Determine the locations and the number of access switches and their uplinks to distribution layer switches.
Step 3 Select the appropriate distribution layer switches, taking into account the number of access layer switches and end users. Use at least two distribution layer switches for redundancy.
Step 4 Consider two uplink connections from each access layer switch to the two distribution layer switches.
Step 5 Determine where servers are or will be located, and design the Server Farm module with at least two distribution layer switches that connect all servers for full redundancy. Include out-of-band network management connections to all critical devices in the campus network.
Step 6 Design the Campus Infrastructure module’s Campus Core layer using at least two switches and provide for the expected traffic volume between modules.
Step 7 Interconnect all modules of the Enterprise Campus with the Campus Infrastructure module’s Campus Core layer in a redundant manner.
Enterprise Edge Modules
This section describes the components of the Enterprise Edge and explains the importance of each
module. The Enterprise Edge infrastructure modules aggregate the connectivity from the various
elements outside the campus—using various services and WAN technologies as needed, typically
provisioned from service providers—and route the traffic into the Campus Core layer. The
Enterprise Edge modules perform security functions when enterprise resources connect across
public networks and the Internet. As shown in Figure 3-12 and in the following list, the Enterprise
Edge functional area is composed of four main modules:
■ E-commerce module: The E-commerce module includes the devices and services necessary for an organization to provide e-commerce applications.
■ Internet Connectivity module: The Internet Connectivity module provides enterprise users with Internet access.
■ Remote Access and VPN module: This module terminates VPN traffic and dial-in connections from external users.
■ WAN and MAN and Site-to-Site VPN module: This module provides connectivity between remote sites and the central site over various WAN technologies.
Figure 3-12 Enterprise Edge Functional Area
These modules connect to the Campus Core directly or through an optional Edge Distribution
module. The optional Edge Distribution module aggregates the connectivity from the various
elements at the enterprise edge and routes the traffic into the Campus Core layer. In addition, the
Edge Distribution module acts as a boundary between the Enterprise Campus and the Enterprise
Edge and is the last line of defense against external attacks; its structure is similar to that of the
Building Distribution layer.
The following sections detail each of the four main Enterprise Edge modules.
E-commerce Module
The E-commerce module enables enterprises to successfully deploy e-commerce applications and
take advantage of the opportunities the Internet provides. The majority of traffic is initiated
external to the enterprise. All e-commerce transactions pass through a series of intelligent services
that provide scalability, security, and high availability within the overall e-commerce network
design. To build a successful e-commerce solution, the following network devices might be
included:
■ Web servers: Act as the primary user interface for e-commerce navigation
■ Application servers: Host the various applications
■ Database servers: Contain the application and transaction information that is the heart of the e-commerce business implementation
■ Firewalls or firewall routers: Govern communication and provide security between the system’s various users
■ Network Intrusion Detection System/Network Intrusion Protection System (NIDS/NIPS) appliances: Monitor key network segments in the module to detect and respond to attacks against the network
■ Multilayer switch with Intrusion Detection System/Intrusion Protection System (IDS/IPS) modules: Provide traffic transport and integrated security monitoring
■ Host-Based Intrusion Protection Systems: Deployed on sensitive core application servers and on dedicated appliances to provide real-time reporting and prevention of attacks as an extra layer of defense
Internet Connectivity Module
The Internet Connectivity module provides internal users with connectivity to Internet services,
such as HTTP, FTP, Simple Mail Transfer Protocol (SMTP), and DNS. This module also provides
Internet users with access to information published on an enterprise’s public servers, such as
HTTP and FTP servers. Internet session initiation is typically from inside the enterprise toward
the Internet. Additionally, this module accepts VPN traffic from remote users and remote sites and
forwards it to the Remote Access and VPN module, where VPN termination takes place. The
Internet Connectivity module is not designed to serve e-commerce applications. Major
components used in the Internet Connectivity module include the following:
■ SMTP mail servers: Act as a relay between the Internet and the intranet mail servers.
■ DNS servers: Serve as the authoritative external DNS server for the enterprise and relay internal DNS requests to the Internet.
■ Public servers (for example, FTP and HTTP): Provide public information about the organization. Each server on the public services segment contains host-based intrusion detection systems (HIDS) to monitor against any rogue activity at the operating system level and in common server applications, including HTTP, FTP, and SMTP.
■ Firewalls or firewall routers: Provide network-level protection of resources, provide stateful filtering of traffic, and forward VPN traffic from remote sites and users for termination.
■ Edge routers: Provide basic filtering and multilayer connectivity to the Internet.
Remote Access and VPN Module
The Remote Access and VPN module terminates remote access traffic and VPN traffic that the
Internet Connectivity Module forwards from remote users and remote sites. It also uses the
Internet Connectivity module to initiate VPN connections to remote sites. Furthermore, the
module terminates dial-in connections received through the public switched telephone network
(PSTN) and, after successful authentication, grants dial-in users access to the network. Major
components used in the Remote Access and VPN module include the following:
■ Dial-in access concentrators: Terminate dial-in connections and authenticate individual users
■ Cisco Adaptive Security Appliances (ASA): Terminate IPsec tunnels, authenticate individual remote users, and provide firewall and intrusion prevention services
■ Firewalls: Provide network-level protection of resources and stateful filtering of traffic, provide differentiated security for remote access users, authenticate trusted remote sites, and provide connectivity using IPsec tunnels
■ NIDS appliances: Provide Layer 4 to Layer 7 monitoring of key network segments in the module
WAN and MAN and Site-to-Site VPN Module
The WAN and MAN and Site-to-Site VPN module uses various WAN technologies, including site-to-site VPNs, to route traffic between remote sites and the central site. In addition to traditional
media (such as leased lines) and packet-switched data-link technologies (such as Frame Relay and
ATM), this module can use more recent WAN physical layer technologies, including Synchronous
Optical Network/Synchronous Digital Hierarchy (SONET/SDH), cable, DSL, MPLS, Metro Ethernet,
wireless, and service provider VPNs. This module incorporates all Cisco devices that support
these WAN technologies, and routing, access control, and QoS mechanisms. Although security is
not as critical when all links are owned by the enterprise, it should be considered in the network
design.
KEY POINT The WAN and MAN and Site-to-Site VPN module does not include the WAN connections or links; it provides only the interfaces to the WAN.
Enterprise Edge Guidelines
Follow these guidelines for creating the modules within the Enterprise Edge functional area:
Step 1 Create the E-commerce module (for business-to-business or business-to-customer scenarios) when customers or partners require Internet access to business applications and database servers. Deploy a high-security policy that allows customers to access predefined servers and services yet restricts all other operations.
Step 2 Determine the connections from the corporate network into the Internet, and assign them to the Internet Connectivity module. This module should implement security to prevent any unauthorized access from the Internet to the internal network. Public web servers reside in this module or the E-commerce module.
Step 3 Design the Remote Access and VPN module if the enterprise requires VPN connections or dial-in for accessing the internal network from the outside world. Implement a security policy in this module; users should not be able to access the internal network directly without authentication and authorization. The VPN sessions use connectivity from the Internet Connectivity module.
Step 4 Determine which part of the edge is used exclusively for permanent connections to remote locations (such as branch offices), and assign it to the WAN and MAN and Site-to-Site VPN module. All WAN devices supporting Frame Relay, ATM, cable, MPLS, leased lines, SONET/SDH, and so on, are located here.
Service Provider Modules
Figure 3-13 shows the modules within the Service Provider functional area. The enterprise itself
does not implement these modules; however, they are necessary to enable communication with
other networks, using a variety of WAN technologies, and with Internet service providers (ISP).
The modules within the Service Provider functional area are as follows:
■ Internet Service Provider module
■ PSTN module
■ Frame Relay/ATM module
Figure 3-13 Service Provider Functional Area
The following sections describe each of these modules.
Internet Service Provider Module
The Internet Service Provider module represents enterprise IP connectivity to an ISP network
for basic access to the Internet or for enabling Enterprise Edge services, such as those in the
E-commerce, Remote Access and VPN, and Internet Connectivity modules. Enterprises can
connect to two or more ISPs to provide redundant connections to the Internet. The physical
connection between the ISP and the enterprise can use any of the WAN technologies.
PSTN Module
KEY POINT The PSTN module represents all nonpermanent WAN connections.
The PSTN module represents the dialup infrastructure for accessing the enterprise network using
ISDN, analog, and wireless telephony (cellular) technologies. Enterprises can also use this
infrastructure to back up existing WAN links; WAN backup connections are generally established
on demand and torn down after an idle timeout.
Frame Relay/ATM Module
KEY POINT The Frame Relay/ATM module covers all WAN technologies for permanent connectivity with remote locations.
Traditional Frame Relay and ATM are still used; however, despite the module’s name, it also
represents many modern technologies. The technologies in this module include the following:
■ Frame Relay is a connection-oriented, packet-switching technology designed to efficiently transmit data traffic at data rates up to those of E3 and T3 connections. Its capability to connect multiple remote sites across a single physical connection reduces the number of point-to-point physical connections required to link sites.
NOTE E3 is a European standard with a bandwidth of 34.368 megabits per second (Mbps).
T3 is a North American standard with a bandwidth of 44.736 Mbps.
■ ATM is a higher-speed alternative to Frame Relay. It is a high-performance, cell-oriented switching and multiplexing technology for carrying different types of traffic.
■ Leased lines provide the simplest permanent point-to-point connection between two remote locations. The carrier (service provider) reserves point-to-point links for the customer’s private use. Because the connection does not carry anyone else’s communications, the carrier can ensure a given level of quality. The fee for the connection is typically a fixed monthly rate.
■ SONET/SDH are standards for transmission over optical networks. Europe uses SDH, whereas North America uses SONET.
■ Cable technology uses existing coaxial TV cables. Coupled with cable modems, this technology provides much greater bandwidth than telephone lines and can be used to achieve extremely fast access to the Internet or enterprise network.
■ DSL uses existing twisted-pair telephone lines to transport high-bandwidth traffic, such as voice, data, and video. DSL is sometimes referred to as last-mile technology because it is used only for connections from a telephone switching station (at a service provider) to a home or office, not between switching stations. DSL is used by telecommuters to access enterprise networks; however, more and more companies are migrating from traditional Frame Relay to DSL technology using VPNs because of its cost efficiency.
■ Wireless bridging technology interconnects remote LANs using point-to-point signal transmissions that go through the air over a terrestrial radio or microwave platform, rather than through copper or fiber cables. Wireless bridging requires neither satellite feeds nor local phone service. One of the advantages of bridged wireless is its capability to connect users in remote areas without having to install new cables. However, this technology is limited to shorter distances, and weather conditions can degrade its performance.
■ MPLS combines the advantages of multilayer routing with the benefits of Layer 2 switching. With MPLS, labels are assigned to each packet at the edge of the network. Rather than examining the IP packet header information, MPLS nodes use this label to determine how to process the data, resulting in a faster, more scalable, and more flexible WAN solution.
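That label-based forwarding can be illustrated roughly as follows. The labels and interface names are made up; the point is that a node's forwarding decision is a single lookup on the incoming label rather than a parse of the IP header:

```python
# Toy sketch of MPLS label swapping (hypothetical labels and interfaces).
# A node's label forwarding table maps an incoming label to an outgoing
# label and interface; the IP header is never examined in the core.
lfib = {
    16: (21, "ge-0/1"),   # in-label 16 -> swap to label 21, send out ge-0/1
    17: (30, "ge-0/2"),
}

def forward(packet, table):
    """Swap the top label and choose the outgoing interface."""
    out_label, out_iface = table[packet["label"]]
    return {**packet, "label": out_label, "iface": out_iface}

pkt = {"label": 16, "payload": "ip-datagram"}
forwarded = forward(pkt, lfib)
```

Each node along the path repeats the same swap, so the cost per hop stays constant regardless of how complex the underlying IP routing policy is.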
NOTE Chapter 5, “Designing Remote Connectivity,” discusses WANs in more detail.
Remote Enterprise Modules
The three modules supporting remote enterprise locations are the Enterprise Branch, the
Enterprise Data Center, and the Enterprise Teleworker.
Enterprise Branch Module
The Enterprise Branch module extends the enterprise by providing each location with a resilient
network architecture with integrated security, Cisco Unified Communications, and wireless
mobility.
A branch office generally accommodates employees who have a compelling reason to be located
away from the central site, such as a regional sales office. A branch office is sometimes called a
remote site, remote office, or sales office. Branch office users must be able to connect to the central
site to access company information. Therefore, they benefit from high-speed Internet access, VPN
connectivity to corporate intranets, telecommuting capabilities for work-at-home employees,
videoconferencing, and economical PSTN-quality voice and fax calls over managed IP networks.
The Enterprise Branch module typically uses a simplified version of the Campus Infrastructure
module design.
Enterprise Data Center Module
The Enterprise Data Center module has an architecture that is similar to the campus Server Farm
module discussed earlier. The Enterprise Data Center network architecture allows the network to
evolve into a platform that enhances the application, server, and storage solutions and equips
organizations to manage increased security, cost, and regulatory requirements while providing the
ability to respond quickly to changing business environments. The Enterprise Data Center module
may include the following components:
■ At the networked infrastructure layer: Gigabit Ethernet, 10-Gigabit Ethernet, or InfiniBand connections, with storage switching and optical transport devices

NOTE InfiniBand is a high-speed switched fabric mesh technology.

■ At the interactive services layer: Services include storage fabric services, compute services, security services, and application optimization services
■ At the management layer: Tools include Fabric Manager (for element and network management) and Cisco VFrame (for server and service provisioning)
The remote Enterprise Data Center module also needs highly available WAN connectivity with
business continuance capabilities to integrate it with the rest of the Cisco Enterprise Architecture.
The Server Farm module in the campus can leverage the WAN connectivity of the campus core,
but the remote Enterprise Data Center must implement its own WAN connectivity.
Enterprise Teleworker Module
The Enterprise Teleworker module provides people in geographically dispersed locations, such as
home offices or hotels, with highly secure access to central-site applications and network services.
The Enterprise Teleworker module supports a small office with one to several employees or the
home office of a telecommuter. Telecommuters might also be mobile users—people who need
access while traveling or who do not work at a fixed company site.
Depending on the amount of use and the WAN services available, telecommuters working from
home tend to use broadband or dialup services. Mobile users tend to access the company network
using a broadband Internet service and the VPN client software on their laptops or via an
asynchronous dialup connection through the telephone company. Telecommuters working from
home might also use a VPN tunnel gateway router for encrypted data and voice traffic to and from
the company intranet. These solutions provide simple and safe access for teleworkers to the
corporate network site, according to the needs of the users at the sites.
The Cisco Teleworker solution provides an easy-to-deploy, centrally managed solution that
addresses both the workers’ mobility needs and the enterprise’s needs for lower operational costs,
security, productivity, business resiliency, and business responsiveness. Small Cisco Integrated Services Routers (ISRs) form the
backbone of the Enterprise Teleworker architecture. An optional IP phone can be provided to take
advantage of a centralized Cisco Unified Communications system.
Services Within Modular Networks
Businesses that operate large enterprise networks strive to create an enterprise-wide networked
infrastructure and interactive services to serve as a solid foundation for business and collaborative
applications. This section explores some of the interactive services with respect to the modules
that form the Cisco Enterprise Architecture.
KEY POINT A network service is a supporting and necessary service, but not an ultimate solution. For example, security and QoS are not ultimate goals for a network; they are necessary to enable other services and applications and are therefore classified as network services. However, IP telephony might be an ultimate goal of a network and is therefore a network application (or solution), rather than a service.
Interactive Services
Since the inception of packet-based communications, networks have always offered a forwarding
service. Forwarding is the fundamental activity within an internetwork. In IP, this forwarding
service was built on the assumption that end nodes in the network were intelligent, and that the
network core did not have intelligence. With advances in networking software and hardware, the
network can offer an increasingly rich, intelligent set of mechanisms for forwarding information.
Interactive services add intelligence to the network infrastructure, beyond simply moving a
datagram between two points.
For example, through intelligent network classification, the network distinguishes and identifies
traffic based on application content and context. Advanced network services use the traffic
classification to regulate performance, ensure security, facilitate delivery, and improve
manageability.
Network applications such as IP telephony support the entire enterprise network environment—
from the teleworker to the campus to the data center. These applications are enabled by critical
network services and provide a common set of capabilities to support the application’s
networkwide requirements, including security, high availability, reliability, flexibility,
responsiveness, and compliance.
Recall the layers of the Cisco SONA framework, illustrated in Figure 3-14. The SONA interactive
services layer includes both application networking services and infrastructure services.
Figure 3-14 Cisco SONA Framework
For example, the following infrastructure services (shown earlier in Figure 3-8) enhance classic
network functions to support today’s applications environments by mapping the application’s
requirements to the resources that they require from the network:
■ Security services: Ensure that all aspects of the network are secure, from devices connecting to the network to secured transport to data theft prevention
■ Mobility services: Allow users to access network resources regardless of their physical location
■ Storage services: Provide distributed and virtual storage across the infrastructure
■ Voice and collaboration services: Deliver the foundation by which voice can be carried across the network, such as security and high availability
■ Compute services: Connect and virtualize compute resources based on the application
■ Identity services: Map resources and policies to the user and device
Examples of network services embedded in the infrastructure services include the following:
■ Network management: Includes LAN management for advanced management of multilayer switches; routed WAN management for monitoring, traffic management, and access control to administer the routed infrastructure of multiservice networks; service management for managing and monitoring service level agreements (SLAs); and VPN security management for optimizing VPN performance and security administration.
■ High availability: Ensures end-to-end availability for services, clients, and sessions. Implementation includes reliable, fault-tolerant network devices to automatically identify and overcome failures, and resilient network technologies.
■ QoS: Manages the delay, delay variation (jitter), bandwidth availability, and packet loss parameters of a network to meet the diverse needs of voice, video, and data applications. QoS features provide value-added functionality, such as network-based application recognition for classifying traffic on an application basis, Cisco IOS IP SLAs (previously called the service assurance agent) for end-to-end QoS measurements, Resource Reservation Protocol signaling for admission control and reservation of resources, and a variety of configurable queue insertion and servicing functions.
■ IP multicasting: Provides bandwidth-conserving technology that reduces network traffic by delivering a single stream of information intended for many recipients through the transport network. Multicasting enables distribution of videoconferencing, corporate communications, distance learning, software, and other applications. Multicast packets are replicated only as necessary by Cisco routers enabled with Protocol Independent Multicast and other supporting multicast protocols, resulting in the most efficient delivery of data to multiple receivers.
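The bandwidth saving from multicast is easy to see with a back-of-the-envelope calculation. The stream rate and receiver count below are invented for illustration:

```python
# Back-of-the-envelope comparison of unicast versus multicast delivery
# for a single stream to many receivers (numbers are hypothetical):
# a 1.5-Mbps video stream delivered to 200 receivers.
stream_mbps = 1.5
receivers = 200

# Unicast: the source must transmit one full copy per receiver.
unicast_load_at_source = stream_mbps * receivers   # 300 Mbps at the source

# Multicast: the source sends one stream; routers replicate it only
# where the distribution tree branches.
multicast_load_at_source = stream_mbps             # 1.5 Mbps at the source
```

The source-side load is constant for multicast no matter how many receivers join, which is why it suits videoconferencing and corporate broadcasts.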
To support network applications efficiently, deploy the underlying infrastructure services in some
or all modules of the enterprise network as required. These design elements can be replicated
simply to other enterprise network modules as the network changes. As a result, modularization
into small subsets of the overall network simplifies the network design and often reduces the
network’s cost and complexity.
The following sections explore some of the infrastructure services and application networking
services. Network management services are described in the “Network Management Protocols
and Features” section later in this chapter.
Security Services in a Modular Network Design
KEY POINT Security is an infrastructure service that increases the network’s integrity by protecting network resources and users from internal and external threats.
Without a full understanding of the threats involved, network security deployments tend to be
incorrectly configured, too focused on security devices, or lacking appropriate threat response
options.
Security both in the Enterprise Campus (internal security) and at the Enterprise Edge (from
external threats) is important. An enterprise should include several layers of protection so that a
breach at one layer or in one network module does not mean that other layers or modules are also
compromised; Cisco calls deploying layered security defense-in-depth.
Internal Security
Strongly protecting the internal Enterprise Campus by including security functions in each
individual element is important for the following reasons:
■ If the security established at the Enterprise Edge fails, an unprotected Enterprise Campus is vulnerable. Deploying several layers of security increases the protection of the Enterprise Campus, where the most strategic assets usually reside.
■ Relying on physical security is not enough. For example, as a visitor to the organization, a potential attacker could gain physical access to devices in the Enterprise Campus.
■ Often external access does not stop at the Enterprise Edge; some applications require at least indirect access to the Enterprise Campus resources. Strong security must protect access to these resources.
Figure 3-15 shows how internal security can be designed into the Cisco Enterprise Architecture.
Figure 3-15 Designing Internal Security into the Network
The following are some recommended security practices in each module:
■ At the Building Access layer, access is controlled at the port level using data link layer information. Some examples are filtering based on media access control (MAC) addresses and IEEE 802.1X port authentication.
■ The Building Distribution layer performs filtering to keep unnecessary traffic from the Campus Core. This packet filtering can be considered a security function because it does prevent some undesired access to other modules. Given that switches in the Building Distribution layer are typically multilayer switches (and are therefore Layer 3–aware), this is the first place on the data path in which filtering based on network layer information can be performed.
■ The Campus Core layer is a high-speed switching backbone and should be designed to switch packets as quickly as possible; it should not perform any security functions, because doing so would slow down the switching of packets.
■ The Server Farm module’s primary goal is to provide application services to end users and devices. Enterprises often overlook the Server Farm module from a security perspective. Given the high degree of access that most employees have to these servers, they often become the primary target of internally originated attacks. Simply relying on effective passwords does not provide a comprehensive attack mitigation strategy. Using host-based and network-based IPSs and IDSs, private VLANs, and access control provides a much more comprehensive attack response. For example, onboard IDS within the Server Farm’s multilayer switches inspects traffic flows.

NOTE Private VLANs provide Layer 2 isolation between ports within the same broadcast domain.

■ The Server Farm module typically includes network management systems to securely manage all devices and hosts within the enterprise architecture. For example, syslog provides important information on security violations and configuration changes by logging security-related events (authentication and so on). An authentication, authorization, and accounting (AAA) security server also works with a one-time password (OTP) server to provide a high level of security to all local and remote users. AAA and OTP authentication reduces the likelihood of a successful password attack.
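The port-level control described for the Building Access layer can be modeled as a simple per-port allow list. The port names and MAC addresses below are invented; a real deployment would use switch port security or 802.1X rather than this toy table:

```python
# Sketch of access-layer port security (hypothetical table): each switch
# port admits only the MAC addresses configured or learned for it,
# modeling data-link-layer filtering at the network edge.
allowed_macs = {
    "Fa0/1": {"0000.1111.aaaa"},
    "Fa0/2": {"0000.2222.bbbb", "0000.2222.cccc"},
}

def admit(port, src_mac, table):
    """Permit a frame only if its source MAC is allowed on that port."""
    return src_mac in table.get(port, set())
```

Enforcing this at the first hop means a rogue host is stopped before its traffic ever reaches the distribution or core layers.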
IPS and IDS
IDSs act like an alarm system in the physical world. When an IDS detects something it considers
an attack, it either takes corrective action or notifies a management system so that an administrator
can take action.
HIDSs work by intercepting operating system and application calls on an individual host; they can also operate via after-the-fact analysis of local log files. The former approach allows better attack prevention, whereas the latter plays a more passive attack-response role.
Services Within Modular Networks
Because of their specific role, HIDSs are often more effective at preventing specific attacks than NIDSs, which usually issue an alert only on discovering an attack. However, this specificity comes at the cost of a perspective on the overall network; that is where NIDSs excel.
Intrusion prevention solutions form a core element of a successful security solution because they
detect and block attacks, including worms, network viruses, and other malware through inline
intrusion prevention, innovative technology, and identification of malicious network activity.
Network-based IPS solutions protect the network by helping detect, classify, and stop threats,
including worms, spyware or adware, network viruses, and application abuse. Host-based IPS
solutions protect server and desktop computing systems by identifying threats and preventing
malicious behavior.
This information was derived from the SAFE Blueprint for Small, Midsize, and Remote-User
Networks, available at http://www.cisco.com/go/safe/, and the Cisco Intrusion Prevention System
Introduction, available at http://www.cisco.com/en/US/products/sw/secursw/ps2113/index.html.
Authentication, Authorization, and Accounting
AAA is a crucial aspect of network security that should be considered during the network design.
An AAA server handles the following:
■ Authentication—Who? Authentication checks the user’s identity, typically through a username and password combination.
■ Authorization—What? After the user is authenticated, the AAA server dictates what activity the user is allowed to perform on the network.
■ Accounting—When? The AAA server can record the length of the session, the services accessed during the session, and so forth.
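On a Cisco router, all three functions can be tied to a central server. The following sketch assumes a TACACS+ server; the address and key are hypothetical:

```
aaa new-model
tacacs-server host 10.1.1.10 key s3cretKey             ! hypothetical AAA server
aaa authentication login default group tacacs+ local   ! who?
aaa authorization exec default group tacacs+ local     ! what?
aaa accounting exec default start-stop group tacacs+   ! when?
```

The trailing `local` keyword provides a fallback to the local user database if the server is unreachable.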
User authentication should follow the principles of strong authentication. Strong authentication refers to a two-factor authentication method in which users are authenticated using two of the following factors:
■ Something you know: Such as a password or personal identification number (PIN)
■ Something you have: Such as an access card, bank card, or token
■ Something you are: Such as a biometric, for example a retina print or fingerprint
■ Something you do: Such as your handwriting, including the style, pressure applied, and so forth
As an example, when accessing an automated teller machine, strong authentication is enforced
because a bank card (something you have) and a PIN (something you know) are used.
Chapter 3: Structuring and Modularizing the Network
Tokens are key-chain-sized devices that show OTPs, one at a time, in a predefined order. The OTP
is displayed on the token’s small LCD, typically for 1 minute, before the next password in the
sequence appears. The token is synchronized with a token server, which has the same predefined
list of passcodes for that one user. Therefore, at any given time, only one valid password exists
between the server and a token.
This information was derived from Cisco Press’s Campus Network Design Fundamentals by
Diane Teare and Catherine Paquet, 2006.
External Threats
When designing security in an enterprise network, the Enterprise Edge is the first line of defense
at which potential outside attacks can be stopped. The Enterprise Edge is like a wall with small
doors and strong guards that efficiently control any access. The following four attack methods are
commonly used in attempts to compromise the integrity of the enterprise network from the
outside:
■ IP spoofing: An IP spoofing attack occurs when a hacker uses a trusted computer to launch an attack from inside or outside the network. The hacker uses either an IP address that is in the range of a network’s trusted IP addresses or a trusted external IP address that provides access to specified resources on the network. IP spoofing attacks often lead to other types of attacks. For example, a hacker might launch a denial of service (DoS) attack using spoofed source addresses to hide his identity.
■ Password attacks: Using a packet sniffer to determine usernames and passwords is a simple password attack; however, the term password attack usually refers to repeated brute-force attempts to identify username and password information. Trojan horse programs are another method that can be used to determine this information. A hacker might also use IP spoofing as a first step in a system attack by violating a trust relationship based on source IP addresses. First, however, the system would have to be configured to bypass password authentication so that only a username is required.
■ DoS attacks: DoS attacks focus on making a service unavailable for normal use and are typically accomplished by exhausting some resource limitation on the network or within an operating system or application.
■ Application layer attacks: Application layer attacks typically exploit well-known weaknesses in common software programs to gain access to a computer.
DoS Attacks
DoS attacks are different from most other attacks because they are not generally targeted at
gaining access to a network or its information. Rather, these attacks focus on making a service
unavailable for normal use. They are typically accomplished by exhausting some resource
limitation on the network or within an operating system or application.
When involving specific network server applications, such as a web server or an FTP server, these
attacks focus on acquiring and keeping open all the available connections supported by that server,
thereby effectively locking out valid users of the server or service. DoS attacks are also
implemented using common Internet protocols, such as TCP and Internet Control Message
Protocol (ICMP).
Rather than exploiting a software bug or security hole, most DoS attacks exploit a weakness in the
overall architecture of the system being attacked. However, some attacks compromise a network’s
performance by flooding the network with undesired and often useless network packets and by
providing false information about the status of network resources. This type of attack is often the
most difficult to prevent, because it requires coordinating with the upstream network provider. If
traffic meant to consume the available bandwidth is not stopped there, denying it at the point of
entry into your network does little good, because the available bandwidth has already been
consumed. When this type of attack is launched from many different systems at the same time, it
is often referred to as a distributed denial of service attack.
This information was derived from the SAFE Blueprint for Small, Midsize, and Remote-User
Networks, available at http://www.cisco.com/go/safe/.
Application Layer Attacks
Hackers perform application layer attacks using several different methods. One of the most
common methods is exploiting well-known weaknesses in software commonly found on servers,
such as SMTP, HTTP, and FTP. By exploiting these weaknesses, hackers gain access to a computer
with the permissions of the account that runs the application—usually a privileged system-level
account. These application layer attacks are often widely publicized in an effort to allow
administrators to rectify the problem with a patch. Unfortunately, many hackers also subscribe to
these same informative mailing lists and therefore learn about the attack at the same time (if they
have not discovered it already).
The primary problem with application-layer attacks is that they often use ports that are allowed
through a firewall. For example, a hacker who executes a known vulnerability against a web server
often uses TCP port 80 in the attack. A firewall needs to allow access on that port because the web
server serves pages to users using port 80. From a firewall’s perspective, the attack appears as
merely standard port 80 traffic.
This information was derived from the SAFE Blueprint for Small, Midsize, and Remote-User
Networks, available at http://www.cisco.com/go/safe/.
Figure 3-16 shows these four attack methods and how they relate to the Enterprise Edge modules.
Figure 3-16 External Threats
Because of the complexity of network applications, access control must be extremely granular and flexible yet still provide strong security. Tight borders between outside and inside cannot be defined, because interactions are continuously taking place between the Enterprise Edge and Enterprise Campus. The ease of use of the network applications and resources must be balanced against the security measures imposed on the network users.
NOTE Chapter 10, “Evaluating Security Solutions for the Network,” covers security in the
network in more detail.
High-Availability Services in a Modular Network Design
Most enterprise networks carry mission-critical information. Organizations that run such networks
are usually interested in protecting the integrity of that information. Along with security, these
organizations expect the internetworking platforms to offer a sufficient level of resilience.
This section introduces another network infrastructure service: high availability. To ensure
adequate connectivity for mission-critical applications, high availability is an essential component
of an enterprise environment.
Designing High Availability into a Network
Redundant network designs duplicate network links and devices, eliminating single points of
failure on the network. The goal is to duplicate components whose failure could disable critical
applications.
Because redundancy is expensive to deploy and maintain, redundant topologies should be
implemented with care. Redundancy adds complexity to the network topology and to network
addressing and routing. The level of redundancy should meet the organization’s availability and
affordability requirements.
KEY POINT Before selecting redundant design solutions, analyze the business and technical goals and constraints to establish the required availability and affordability.
Critical applications, systems, internetworking devices, and links must be identified. Analyze the
risk tolerance and the consequences of not implementing redundancy, and ensure that you
consider the trade-offs of redundancy versus cost and simplicity versus complexity. Duplicate any
component whose failure could disable critical applications.
Redundancy is not provided by simply duplicating all links. Unless all devices are completely
fault-tolerant, redundant links should terminate at different devices; otherwise, devices that are
not fault-tolerant become single points of failure.
KEY POINT Because many other modules access the Server Farm and Campus Core modules, they typically require higher availability than other modules.
The following types of redundancy may be used in the modules of an enterprise:
■ Device redundancy, including card and port redundancy
■ Redundant physical connections to critical workstations and servers
■ Route redundancy
■ Link redundancy
■ Power redundancy, including redundant power supplies integral to the network devices and redundant power to the building’s physical plant
KEY POINT The key requirement in redundancy is to provide alternative paths for mission-critical applications. Simply making the backbone fault-tolerant does not ensure high availability. For example, if communication on a local segment is disrupted for any reason, that information will not reach the backbone. End-to-end high availability is possible only when redundancy is deployed throughout the internetwork.
High Availability in the Server Farm
Improving the reliability of critical workstations and servers usually depends on the hardware and
operating system software in use. Some common ways of connecting include the following:
■ Single attachment: When a workstation or server has traffic to send to a station that is not local, it must know the address of a router on its network segment. If that router fails, the workstation or server needs a mechanism to discover an alternative router. If the workstation or server has a single attachment, it needs a Layer 3 mechanism to dynamically find an alternative router; therefore, the single-attachment method is not recommended. The available mechanisms include Address Resolution Protocol (ARP), Router Discovery Protocol (RDP), routing protocols (such as Routing Information Protocol [RIP]), Hot Standby Router Protocol (HSRP), Gateway Load Balancing Protocol (GLBP), and Virtual Router Redundancy Protocol (VRRP). These router discovery methods are described in the “Router Discovery” sidebar later in this section.
■ Attachment through a redundant transceiver: Physical redundancy with a redundant transceiver attachment is suitable in environments where the workstation hardware or software does not support redundant attachment options.
■ Attachment through redundant network interface cards (NIC): Some environments (for example, most UNIX servers) support a redundant attachment through dual NICs (primary and backup); the device driver represents this attachment as a single interface to the operating system.
■ Fast EtherChannel or Gigabit EtherChannel port bundles: Fast EtherChannel and Gigabit EtherChannel port bundles group multiple Fast or Gigabit Ethernet ports into a single logical transmission path between a switch and a router, host, or another switch. STP treats this EtherChannel as one logical link. The switch distributes frames across the ports in an EtherChannel. This load balancing was originally done based only on MAC addresses; however, newer implementations can also load-balance based on IP addresses or Layer 4 port numbers. Source, destination, or source and destination addresses or port numbers can be used. If a port within an EtherChannel fails, traffic previously carried over the failed port reverts to the remaining ports within the EtherChannel.
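On a Catalyst switch, a two-port bundle and its load-balancing method might be configured as follows. This is a sketch; the interface numbers are hypothetical:

```
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode on               ! bundle both ports into EtherChannel 1
!
port-channel load-balance src-dst-ip   ! hash frames on source and destination IP
```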
Router Discovery
When a workstation has traffic to send to a station that is not local, the workstation has many possible
ways of discovering the address of a router on its network segment, including the following:
■ Explicit configuration: Most IP workstations must be configured with a default router’s IP address, called the default gateway. If the workstation’s default router becomes unavailable, the workstation must be reconfigured with a different router’s address. Some IP stacks enable multiple default routers to be configured, but many IP stacks do not support this.
■ ARP: Some IP workstations send an ARP frame to find a remote station. A router running proxy ARP responds with its own data link layer address; Cisco routers run proxy ARP by default.
■ RDP: RFC 1256, ICMP Router Discovery Messages, specifies an extension to ICMP that allows an IP workstation and router to run RDP so that the workstation can learn a router’s address. With RDP, each router periodically multicasts a router advertisement from each of its interfaces, thereby announcing the IP address of that interface. Hosts discover the addresses of their neighboring routers simply by listening for these advertisements. When a host starts up, it multicasts a router solicitation to ask for immediate advertisements rather than waiting for the next periodic one to arrive.
NOTE RFCs are available at http://www.cis.ohio-state.edu/cs/Services/rfc/index.html.
■ Routing protocol: An IP workstation can run RIP in passive, rather than active, mode to learn about routers. (Active mode means that the station sends RIP packets every 30 seconds; passive mode means that the station just listens for RIP packets but does not send any.) Alternatively, some workstations run the Open Shortest Path First (OSPF) protocol.
■ HSRP: The Cisco HSRP provides a way for IP workstations to continue communicating even if their default router becomes unavailable. HSRP works by creating a virtual router that has its own IP and MAC addresses. The workstations use this virtual router as their default router. HSRP routers on a LAN communicate among themselves to designate one router as active and one as standby. The active router sends periodic hello messages. The other HSRP routers listen for the hello messages. If the active router fails and the other HSRP routers stop receiving hello messages, the standby router takes over and becomes the active router. Because the new active router assumes the virtual router’s IP and MAC addresses, end nodes do not see any change; they continue to send packets to the virtual router’s MAC address, and the new active router delivers those packets. HSRP also works with proxy ARP: When an active HSRP router receives an ARP request for a node that is not on the local LAN, the router replies with the virtual router’s MAC address rather than its own. If the router that originally sent the ARP reply later loses its connection, the new active router still delivers the traffic.
■ GLBP: GLBP is similar to HSRP, but it allows packet sharing between redundant routers in a group. GLBP provides load balancing over multiple routers (gateways) using a single virtual IP address and multiple virtual MAC addresses. Each host is configured with the same virtual IP address for its default gateway, and all routers in the virtual router group participate in forwarding packets.
■ VRRP: VRRP is an election protocol that dynamically assigns responsibility for one or more virtual routers to the VRRP routers on a LAN, allowing several routers on a multiaccess link to use the same virtual IP address. A VRRP router is configured to run the VRRP protocol in conjunction with one or more other routers attached to a LAN. In a VRRP configuration, one router is elected as the virtual router master, with the other routers acting as backups in case the virtual router master fails.
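To make the HSRP mechanism concrete, the following sketch shows one side of a two-router HSRP group; the VLAN, addresses, and group number are hypothetical:

```
! Intended active router (priority raised above the default of 100)
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1     ! virtual router address; hosts use it as default gateway
 standby 10 priority 110     ! win the active election
 standby 10 preempt          ! reclaim the active role after recovering from a failure
```

The peer router is configured with its own real address (for example, 10.1.10.3) and the same `standby 10 ip 10.1.10.1` statement, leaving its priority at the default so that it becomes the standby.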
Figure 3-17 shows a server-to-switch connection implemented with a redundant transceiver.
Figure 3-17 Physical Redundancy: Redundant Transceiver
The redundant transceiver has two uplink ports that are usually connected to two access switches.
The transceiver activates the backup port after it detects a link failure (carrier loss) on the primary
port. The redundant transceiver can detect only physical layer failures; it cannot detect failures
inside the switch or failures beyond the first switch. This type of redundancy is most often
implemented on servers.
In Figure 3-18, the installation of an additional interface card in the server provides redundancy.
Figure 3-18 Physical Redundancy: Redundant NICs
In this case, the device driver presents the configured NICs as a single interface (one IP address) to the operating system. If the primary link dies, the backup NIC activates. The two NICs might use a common MAC address, or they might use two distinct MAC addresses and send gratuitous ARP messages to provide proper IP-to-MAC address mapping on the switches when the backup NIC activates. With a redundant NIC, a VLAN shared between the two access switches is required to support the single IP address on the two server links.
NOTE The workstation sends gratuitous ARP messages to update the ARP tables and the
forwarding tables on attached neighboring nodes (in this example, the Layer 2 switches).
Designing Route Redundancy
Redundant routes have two purposes:
■ To minimize the effect of link failures
■ To minimize the effect of an internetworking device failure
Redundant routes might also be used for load balancing when all routes are up.
Load Balancing
By default, Cisco IOS load-balances across a maximum of four equal-cost paths for IP. Using the maximum-paths router configuration command, you can request that up to 16 equally good routes be kept in the routing table (set maximum-paths to 1 to disable load balancing).
When a packet is process-switched, load balancing over equal-cost paths occurs on a per-packet
basis. When packets are fast-switched, load balancing over equal-cost paths is on a per-destination
basis.
To support load balancing, keep the bandwidth consistent within a layer of the hierarchical model
so that all paths have the same metric. Cisco’s EIGRP includes the variance feature to load-balance
traffic across multiple routes that have different metrics.
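Both commands are set under the routing process. The following sketch assumes EIGRP with hypothetical process and network numbers:

```
router eigrp 100
 network 10.0.0.0
 maximum-paths 6   ! keep up to 6 equal-cost routes (default is 4; 1 disables load balancing)
 variance 2        ! also install routes with a metric up to 2x that of the best path
```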
Possible ways to make the connection redundant include the following:
■ Parallel physical links between switches and routers
■ Backup LAN and WAN links (for example, DDR backup for a leased line)
The following are possible ways to make the network redundant:
■ A full mesh to provide complete redundancy and good performance
■ A partial mesh, which is less expensive and more scalable
The common approach when designing route redundancy is to implement partial redundancy by
using a partial mesh instead of a full mesh and backup links to the alternative device. This protects
only the most vital parts of the network, such as the links between the layers and concentration
devices.
A full-mesh design forms any-to-any connectivity and is ideal for connecting a reasonably small number of devices. However, as the network topology grows, the number of links required to maintain a full mesh grows quadratically. (The number of links in a full mesh is n(n–1)/2, where n is the number of routers.) As the number of router peers increases, the bandwidth and CPU resources devoted to processing routing updates and service requests also increase.
A partial-mesh network is similar to the full-mesh network with some of its connections removed.
A partial-mesh backbone might be appropriate for a campus network in which traffic
predominantly goes into one centralized Server Farm module.
Figure 3-19 illustrates an example of route redundancy in a campus. In this example, the access
layer switches are fully meshed with the distribution layer switches. If a link or distribution switch
fails, an access layer switch can still communicate with the distribution layer. The multilayer
switches select the primary and backup paths between the access and distribution layers based on
the link’s metric as computed by the routing protocol algorithm in use. The best path is placed in
the forwarding table, and, in the case of equal-cost paths, load sharing takes place.
Figure 3-19 Campus Infrastructure Redundancy Example
NOTE Chapter 7, “Selecting Routing Protocols for the Network,” discusses routing protocols
in detail.
Designing Link Redundancy
It is often necessary to provision redundant media in locations where mission-critical application
traffic travels. In Layer 2–switched networks, redundant links are permitted as long as STP is
running. STP guarantees one, and only one, active path within a broadcast domain, avoiding
problems such as broadcast storms (when a broadcast continuously loops). The redundant path
automatically activates when the active path goes down.
Because WAN links are often critical pieces of the internetwork, redundant media are often
deployed in WAN environments. As is the case in Figure 3-20, where a Frame Relay circuit is used
in parallel with a backup IPsec connection over the Internet, backup links can use different
technologies. It is important that the backup provide sufficient capacity to meet the critical
requirements if the primary route fails.
Figure 3-20 Example of Enterprise Edge Link Redundancy
Backup links can be always-on or become active when a primary link goes down or becomes
congested.
Backup Links
Backup links often use a different technology. For example, a leased line can be parallel with a
backup IPsec connection over the Internet.
Using a floating static route, you specify that the backup route has a higher administrative distance (used by Cisco routers to select which routing information to prefer) than the primary route learned from the routing protocol in use. Doing so ensures that the backup link is not used unless the primary route goes down.
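For example, if the primary default route is learned via EIGRP (administrative distance 90), a static default route pointing at the backup next hop can be given a distance of 250 so that it "floats" below the routed path and is installed only when the EIGRP route disappears. The next-hop address here is hypothetical:

```
! Static default route with administrative distance 250; installed in the
! routing table only if no better (lower-distance) default route exists
ip route 0.0.0.0 0.0.0.0 192.0.2.1 250
```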
When provisioning backup links, learn as much as possible about the actual physical circuit
routing. Different carriers sometimes use the same facilities, meaning that your backup path is
susceptible to the same failures as your primary path. Do some investigative work to ensure that
the backup really is a backup.
Backup links can be used for load balancing and channel aggregation. Channel aggregation means
that a router can bring up multiple channels (such as ISDN B channels) as bandwidth requirements
increase.
Cisco supports the Multilink Point-to-Point Protocol (MLP), also referred to as MPPP, which is an Internet Engineering Task Force (IETF) standard for ISDN B channel (or asynchronous serial interface) aggregation. MLP does not specify how a router should accomplish the decision-making process to bring up extra channels. Instead, it seeks to ensure that packets arrive in sequence at the receiving router. The data is encapsulated within PPP, and the datagram is given a sequence number. At the receiving router, PPP uses this sequence number to re-create the original data stream. Multiple channels appear as one logical link to upper-layer protocols.
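On an ISDN BRI interface, enabling MLP and a load threshold for bringing up the second B channel might look like the following sketch. The addresses, remote name, and dial string are hypothetical, and the load threshold of 128 (out of 255, roughly 50 percent) is an example value:

```
interface BRI0
 ip address 10.1.1.1 255.255.255.252
 encapsulation ppp
 ppp multilink                        ! bundle the B channels with MLP
 dialer load-threshold 128 outbound   ! bring up the second channel at ~50% load
 dialer map ip 10.1.1.2 name remote 5551234
```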
Voice Services in a Modular Network Design
To ensure successful implementation of voice applications, network designers must consider the
enterprise services and infrastructure, and its configuration. For example, to support VoIP, the
underlying IP infrastructure must be functioning and robust. In other words, don’t even think of
adding voice to a network experiencing other problems such as congestion or network failures.
Two Voice Implementations
Voice transport is a general term that can be divided into the following two implementations:
■ VoIP: VoIP uses voice-enabled routers to convert analog voice into IP packets or packetized digital voice channels and to route those packets between corresponding locations. Users often do not notice that VoIP is implemented in the network—they use their traditional phones, which are connected to a PBX. However, the PBX is not connected to the PSTN or to another PBX, but to a voice-enabled router that is an entry point to VoIP. Voice-enabled routers can also terminate IP phones using Session Initiation Protocol for call control and signaling.
■ IP telephony: For IP telephony, traditional phones are replaced with IP phones. A server for call control and signaling, such as a Cisco Unified Communications Manager, is also used. The IP phone itself performs voice-to-IP conversion, and no voice-enabled routers are required within the enterprise network. However, if a connection to the PSTN is required, a voice-enabled router or other gateway in the Enterprise Edge is added where calls are forwarded to the PSTN.
NOTE Earlier names for the Cisco Unified Communications Manager include Cisco
CallManager and Cisco Unified CallManager.
Both implementations require properly designed networks. Using a modular approach in a voice
transport design is especially important because of the voice sensitivity to delay and the
complexity of troubleshooting voice networks. All Cisco Enterprise Architecture modules are
involved in voice transport design.
IP Telephony Components
An IP telephony network contains four main voice-specific components:
■ IP phones: IP phones are used to place calls in an IP telephony network. They perform voice-to-IP (and vice versa) coding and compression using special hardware. IP phones offer services such as user directory lookups and Internet access. The phones are active network devices that require power to operate; power is supplied through the LAN connection using PoE or with an external power supply.
■ Switches with inline power: Switches with inline power (PoE) enable the modular wiring closet infrastructure to provide centralized power for Cisco IP telephony networks. These switches are similar to traditional switches, with an added option to provide power to the LAN ports where IP phones are connected. The switches also perform some basic QoS tasks, such as packet classification, which is required for prioritizing voice through the network.
■ Call-processing manager: The call-processing manager, such as a Cisco Unified Communications Manager, provides central call control and configuration management for IP phones. It provides the core functionality to initialize IP telephony devices and to perform call setup and call routing throughout the network. Cisco Unified Communications Manager can be clustered to provide a distributed, scalable, and highly available IP telephony model. Adding more servers to a cluster of servers provides more capacity to the system.
■ Voice gateway: Voice gateways, also called voice-enabled routers or voice-enabled switches, provide voice services such as voice-to-IP coding and compression, PSTN access, IP packet routing, and backup call processing. Backup call processing allows voice gateways to take over call processing in case the primary call-processing manager fails. Voice gateways typically support a subset of the call-processing functionality supported by the Cisco Unified Communications Manager.
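On a Catalyst access switch, a port serving an IP phone with a PC daisy-chained behind it typically combines PoE, a voice VLAN, and a QoS trust setting. A sketch, with hypothetical interface and VLAN numbers:

```
interface FastEthernet0/5
 switchport access vlan 10          ! data VLAN for the attached PC
 switchport voice vlan 110          ! auxiliary VLAN for the IP phone
 mls qos trust device cisco-phone   ! trust markings only when a Cisco phone is detected
 power inline auto                  ! supply PoE when a powered device is detected
```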
Other components of an IP telephony network include a robust IP network, voice messaging and
applications, and digital signal processor resources to process voice functions in hardware, which
is much faster than doing it in software. These components are located throughout the enterprise
network, as illustrated in Figure 3-21.
Figure 3-21 IP Telephony Components
Modular Approach in Voice Network Design
Implementing voice requires deploying delay-sensitive services from end to end in all enterprise network modules. Use the modular approach to simplify design, implementation, and especially troubleshooting. Voice implementation requires some modifications to the existing enterprise network infrastructure in terms of performance, capacity, and availability because it is an end-to-end solution. For example, clients (IP phones) are located in the Building Access layer, and the call-processing manager is located in the Server Farm module; therefore, all modules in the enterprise network are involved in voice processing and must be adequately considered. Voice affects the various modules of the network as follows:
■ Building Access layer: IP phones and end-user computers are attached to Layer 2 switches here. Switches provide power to the IP phones and provide QoS packet classification and marking, which is essential for proper voice packet manipulation through the network.
■ Building Distribution layer: This layer performs packet reclassification if the Building Access layer is unable to classify packets or is not within the trusted boundary. It aggregates Building Access layer switches (wiring closets) and provides redundant uplinks to the Campus Core layer.
Chapter 3: Structuring and Modularizing the Network
■
Campus Core layer: The Campus Core layer forms the network’s core. All enterprise
network modules are attached to it; therefore, virtually all traffic between application servers
and clients traverses the Campus Core. With the advent of wire-speed multilayer gigabit
switching devices, LAN backbones have migrated to switched gigabit architectures that
combine all the benefits of routing with wire-speed packet forwarding.
■
Server Farm module: This module includes multilayer switches with redundant connections
to redundant Cisco Unified Communications Managers, which are essential for providing
high availability and reliability.
■
Enterprise Edge: The Enterprise Edge extends IP telephony from the Enterprise Campus to
remote locations via WANs, the PSTN, and the Internet.
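As an informal illustration of the QoS classification and marking mentioned for the Building Access layer, the following Python sketch shows how a 6-bit DSCP code point sits in the IP header's DS byte. The EF and CS3 values are standard DSCP code points commonly associated with voice bearer and signaling traffic; the function itself is a hypothetical helper, not Cisco tooling.

```python
# DSCP occupies the upper 6 bits of the (former) ToS byte;
# the lower 2 bits carry ECN. Standard DSCP code points:
DSCP_EF = 46    # Expedited Forwarding, typically voice bearer traffic
DSCP_CS3 = 24   # Class Selector 3, often used for call signaling

def dscp_to_ds_byte(dscp: int, ecn: int = 0) -> int:
    """Pack a 6-bit DSCP and a 2-bit ECN field into one DS byte."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value")
    return (dscp << 2) | (ecn & 0x3)

print(hex(dscp_to_ds_byte(DSCP_EF)))   # 0xb8
print(hex(dscp_to_ds_byte(DSCP_CS3)))  # 0x60
```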
Figure 3-22 shows the voice network solution in the Cisco Enterprise Architecture. It illustrates
how a call is initiated on an IP phone, how the call setup goes through the Cisco Unified
Communications Manager, and how the end-to-end session between two IP phones is established.
Note that Cisco Unified Communications Manager is involved in only the call setup.
Figure 3-22 Voice Transport Example (the IP phone applies its dialing plan, which covers a local IP phone, a remote phone over the IP WAN, and a remote phone over the PSTN; call setup travels through the Campus Infrastructure to the Cisco Unified Communications Managers in the Server Farm, establishing the IP phone-to-IP phone session; calls leave through the Enterprise Edge to the IP WAN or the PSTN)
Calls destined for remote locations traverse the Enterprise Edge through the WAN and MAN and
Site-to-Site VPN module or through the Remote Access and VPN module. Calls destined for
public phone numbers on the PSTN are routed over the Enterprise Edge through the Remote
Access and VPN module. Calls between IP phones traverse the Building Access, Building
Distribution, and Campus Core layers, and the Server Farm module. Although call setup uses all
these modules, speech employs only the Building Access, Building Distribution, and, in some
cases, the Campus Core layers.
Evaluating the Existing Data Infrastructure for Voice Design
When designing IP telephony, designers must document and evaluate the existing data
infrastructure in each enterprise module to help determine upgrade requirements. Items to
consider include the following:
■
Performance: An enhanced infrastructure might be necessary to provide the additional
bandwidth, consistent performance, or higher availability that the converging environment requires.
Performance evaluation includes analyzing network maps, device inventory information, and
network baseline information. Links and devices such as those with high peak or busy-hour
use might have to be upgraded to provide sufficient capacity for the additional voice traffic.
Devices with high CPU use, high backplane use, high memory use, queuing drops, or buffer
misses might have to be upgraded.
■
Availability: Redundancy in all network modules should be reviewed to ensure that the
network can meet the recommended IP telephony availability goals with the current or new
network design.
■
Features: Examine the router and switch characteristics—including the chassis, module, and
software version—to determine the IP telephony feature capabilities in the existing
environment.
■
Capacity: Evaluate the overall network capacity and the impact of IP telephony on a module-by-module basis to ensure that the network meets capacity requirements and that there is no
adverse impact on the existing network and application requirements.
■
Power: Assess the power requirements of the new network infrastructure, ensuring that the
additional devices will not oversubscribe existing power. Consider taking advantage of PoE
capabilities in devices.
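As a rough sketch of the capacity evaluation described above, the following estimate derives per-call Layer 3 bandwidth from a codec rate and a packetization interval. The codec figures are commonly published values used here as assumptions, and Layer 2 framing overhead is ignored.

```python
def voip_call_bandwidth_bps(codec_bps: int, packetization_ms: int,
                            overhead_bytes: int = 40) -> float:
    """Estimate one call's Layer 3 bandwidth; 40 bytes is the
    uncompressed IP/UDP/RTP header added to every packet."""
    pps = 1000 / packetization_ms                        # packets per second
    payload = codec_bps / 8 * (packetization_ms / 1000)  # payload bytes per packet
    return (payload + overhead_bytes) * 8 * pps

print(voip_call_bandwidth_bps(8_000, 20))    # G.729: 24000.0 bps per call
print(voip_call_bandwidth_bps(64_000, 20))   # G.711: 80000.0 bps per call
```

Multiplying the per-call figure by the expected busy-hour call count gives a first-order check of whether a WAN link needs upgrading.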
NOTE Chapter 8, “Voice Network Design Considerations,” covers voice in detail.
Wireless Services in a Modular Network
A wireless LAN (WLAN) supports mobile clients connecting to the enterprise network. The
mobile clients do not have a physical connection to the network because WLANs replace the
Layer 1 traditional wired network (usually Category 5 cable) with radio frequency (RF)
transmissions through the air. WLANs serve local networks: in-building coverage, line-of-sight
outdoor bridging applications, or a combination of the two.
In a wireless network, many issues can prevent the RF signal from reaching all parts of the
facility, including multipath distortion, hidden node problems, interference from other wireless
sources, and near/far issues. A site survey helps find the regions where these issues occur: it
defines the contours of RF coverage in a particular facility, discovers the regions where multipath
distortion can occur and where RF interference is high, and identifies solutions to eliminate such
issues.
Privacy and security issues must also be considered in a wireless network. Because WLANs are
typically connected to the wired network, all the modules within the enterprise infrastructure must
be considered to ensure the success of a wireless deployment.
Centralized WLAN Components
As illustrated in Figure 3-23, the four main components in a centralized WLAN deployment are
as follows:
■
End-user devices: A PC or other end-user device in the access layer uses a wireless NIC to
connect to an access point (AP) using radio waves.
■
Wireless APs: APs, typically in the access layer, are shared devices that function much like a
hub. Cisco APs can be either lightweight or autonomous.
Lightweight APs are used in centralized WLAN deployments. A lightweight AP receives
control and configuration from a WLAN controller (WLC) with which it is associated,
providing a centralized point of management and reducing the security concern of a stolen
AP. An autonomous AP has a local configuration and requires local management, which
might make consistent configurations difficult and add to the cost of network management.
■
WLC: A WLC provides management and support for wireless services such as roaming. The
WLC is typically in the core layer of an enterprise network.
■
Existing switched and routed wired network: The wireless APs connect to the wired
enterprise network.
Figure 3-23 Centralized WLAN Components (a wireless LAN controller connects through the switched and routed wired network, using LWAPP, to wireless access points that serve PCs or endpoints)
NOTE WLANs are described further in Chapter 9, “Wireless Network Design
Considerations.”
Application Networking Services in a Modular Network Design
Traditional networks handled static web pages, e-mail, and routine client/server traffic. Today,
enterprise networks must handle more sophisticated types of network applications that include
voice and video. Examples include voice transport, videoconferencing, online training, and audio
and video broadcasts. Applications place increasing demands on IT infrastructures as they evolve
into highly visible services that represent the face of the business to internal and external
audiences.
The large amount and variety of data require that the modern network be application-aware—in
other words, be aware of the content carried across it to optimally handle that content. It is no
longer enough simply to add more bandwidth as needs grow. Networks have had to become
smarter. A new role is emerging for the network as a provider of application infrastructure services
that extend the value of applications, either by improving delivery of content to users and other
applications or by offloading infrastructure functions that today burden development and
operations teams. Application Networking Services (ANS) provide this intelligence.
ANS Examples
Table 3-1 illustrates some sample application deployment issues that many IT managers face today
and how ANS resolves these issues.
Table 3-1 Examples of Application Deployment Issues and Solutions

Sample Deployment Issue: Consolidation of data centers results in remote employees having slower access to centrally managed applications.
Sample ANS Solution: Wide-area application services in the branch office that compress, cache, and optimize content for remote users so that they experience LAN-like responsiveness.

Sample Deployment Issue: A new web-based ordering system experiences a high proportion of abandoned orders because of poor responsiveness during the checkout process.
Sample ANS Solution: Optimization of web streams being sent to an e-commerce portal, which reduces latency, suppresses unnecessary reloading of web objects, and offloads low-level tasks from the web server.

Sample Deployment Issue: Business partners need immediate and secure electronic access to information held in back-office applications, such as shipment information.
Sample ANS Solution: Security and remote connectivity services that automatically validate a partner's request, route it to the appropriate back-office application, and encrypt and prioritize the response.

Sample Deployment Issue: A purchasing application needs to log and track orders over a certain value for compliance purposes.
Sample ANS Solution: Application messaging service that intercepts purchase orders, locates the value, and logs large orders to a database according to business policy rules.
ANS Components
Figure 3-24 illustrates an example of ANS deployed in offices connected over a WAN, providing
LAN-like performance to users in the branch, regional, and remote offices. ANS components are
deployed symmetrically in the data center and the distant offices. The ANS components in this
example are as follows:
■
Cisco Wide Area Application Services (WAAS) software: Cisco WAAS software gives
remote offices LAN-like access to centrally hosted applications, servers, storage, and
multimedia.
■
Cisco Wide Area Application Engine (WAE) appliance: Cisco WAE appliances provide
high-performance global LAN-like access to enterprise applications and data. WAEs use
either WAAS or Application and Content Networking System (ACNS) software. WAEs help
consolidate storage, servers, and so forth in the corporate data center, with only low-cost,
easy-to-maintain network appliances in distant offices.
Each Cisco WAE device can be managed using the embedded command-line
interface, the device Web GUI, or the Cisco WAAS Central Manager GUI. The Cisco
WAAS Central Manager runs on Cisco WAE appliances and can be configured for
high availability by deploying a pair of Cisco WAEs as central managers. The two
central manager WAEs automatically share configuration and monitoring data.
■
Cisco 2600/3600/3700 Series Content Engine Module: Content Engine Modules can be
deployed in the data center or branch offices to optimize WAN bandwidth, accelerate
deployment of mission-critical web applications, add web content security, and deliver live
and on-demand business video.
Figure 3-24 ANS Components in a WAN Environment (branch, regional, and remote offices connect over the WAN to the data center; Wide Area Application Engine appliances and a Content Engine network module serve the distant offices, while the data center houses Wide Area Application Engine appliances and primary/standby Cisco WAAS Central Managers)
NOTE Further details on ANS are available at http://www.cisco.com/go/applicationservices/.
Network Management Protocols and Features
Proper network management is a critical component of an efficient network. Network
administrators need tools to monitor the functionality of the network devices, the connections
between them, and the services they provide. SNMP has become the de facto standard for use in
network management solutions and is tightly connected with remote monitoring (RMON) and
Management Information Bases (MIBs). Each managed device in the network has several variables
that quantify the state of the device. You can monitor managed devices by reading the values of
these variables, and you can control managed devices by writing values into these variables.
This section introduces SNMP and describes the differences between SNMP versions 1, 2, and 3.
The role of MIBs in SNMP and RMON monitoring is described, and Cisco’s network discovery
protocol, Cisco Discovery Protocol (CDP), is introduced. The section concludes with a description
of methods for gathering network statistics.
Network Management Architecture
Figure 3-25 illustrates a generic network management architecture.
Figure 3-25 Network Management Architecture (network management platforms run network management applications, which use network management protocols and standards, such as SNMP, MIB, and RMON, to communicate with managed devices)
The network management architecture consists of the following:
■
Network management system (NMS): A system that executes applications that monitor and
control managed devices. NMSs provide the bulk of the processing and memory resources
that are required for network management.
■
Network management protocol: A protocol that facilitates the exchange of management
information between the NMS and managed devices, including SNMP, MIB, and RMON.
■
Managed devices: A device (such as a router) managed by an NMS.
■
Management agents: Software, on managed devices, that collects and stores management
information, including SNMP agents and RMON agents.
■
Management information: Data that is of interest to a device’s management, usually stored
in MIBs.
A variety of network management applications can be used on a network management system; the
choice depends on the network platform (such as the hardware or operating system). The
management information resides on network devices; management agents that reside on the
device collect and store data in a standardized data definition structure known as the MIB.
The network management application uses SNMP or other network management protocols to
retrieve the data that the management agents collect. The retrieved data is typically processed and
prepared for display with a GUI, which allows the operator to use a graphical representation of the
network to control managed devices and program the network management application.
Protocols and Standards
Several protocols are used within the network management architecture.
KEY POINT
SNMP is the simplest network management protocol. SNMP version 1 (SNMPv1) was extended to SNMP version 2 (SNMPv2) with its variants, which were further extended with SNMP version 3 (SNMPv3).
The MIB is a detailed definition of the information on a network device and is accessible through a network management protocol, such as SNMP.
RMON is an extension of the MIB. The MIB typically provides only static information about the managed device; the RMON agent collects specific groups of statistics for long-term trend analysis.
188
Chapter 3: Structuring and Modularizing the Network
NOTE The ISO network management model defines the following five functional areas of
network management (which are abbreviated as FCAPS): fault management, configuration
management, accounting management, performance management, and security management.
The FCAPS model and these functional areas are rarely implemented in a single enterprise-wide
network management system. A typical enterprise uses a variety of network infrastructure and
service elements managed by element-specific network management systems.
NOTE Information on specific management systems for technologies such as voice, security,
and wireless are provided in the relevant chapters in this book.
The following sections discuss SNMP, MIB, and RMON in detail.
SNMP
SNMP has become the de facto standard for network management. SNMP is a simple solution that
requires little code to implement, which enables vendors to easily build SNMP agents for their
products. In addition, SNMP is often the foundation of the network management architecture.
SNMP defines how management information is exchanged between network management
applications and management agents. Figure 3-26 shows the terms used in SNMP; they are
described as follows:
■
Manager: The manager, a network management application in an NMS, periodically polls
the SNMP agents that reside on managed devices for the data, thereby enabling information
to be displayed using a GUI on the NMS. A disadvantage of periodic SNMP polling is the
possible delay between when an event occurs and when it is collected by the NMS; there is a
trade-off between polling frequency and bandwidth usage.
■
Protocol: SNMP is a protocol for message exchange. It uses the User Datagram Protocol
(UDP) transport mechanism to send and retrieve management information, such as MIB
variables.
■
Managed device: A device (such as a router) managed by the manager.
■
Management agents: SNMP management agents reside on managed devices to collect and
store a range of information about the device and its operation, respond to the manager’s
requests, and generate traps to inform the manager about certain events. SNMP traps are sent
by management agents to the NMS when certain events occur. Trap notifications could result
in substantial network and agent resource savings by eliminating the need for some SNMP
polling requests.
■
MIB: The management agent collects data and stores it locally in the MIB, a database of
objects about the device. Community strings, which are similar to passwords, control access
to the MIB. To access or set MIB variables, the user must specify the appropriate read or write
community string; otherwise, access is denied.
Figure 3-26 SNMP Is a Protocol for Management Information Exchange (the manager uses the SNMP protocol for message exchange with the agents on managed devices; each agent maintains a local MIB)
SNMPv1
The initial version of SNMP, SNMPv1 is defined in RFC 1157, Simple Network Management
Protocol (SNMP). The protocol’s simplicity is apparent from the small set of operations that are available.
Figure 3-27 shows the basic SNMP messages, which the manager uses to transfer data from agents
that reside on managed devices. These messages are described as follows:
■
Get Request: Used by the manager to request a specific MIB variable from the agent.
■
Get Next Request: Used after the initial get request to retrieve the next object instance from
a table or list.
■
Set Request: Used to set a MIB variable on an agent.
■
Get Response: Used by an agent to respond to a manager’s Get Request or Get Next Request
message.
■
Trap: Used by an agent to transmit an unsolicited alarm to the manager. A Trap message is
sent when specific conditions occur, such as a change in the state of a device, a device or
component failure, or an agent initialization or restart.
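The Get, Get Next, and Set semantics above can be mimicked with a toy dict-backed agent. This is a simplification for illustration only: the OIDs and values are invented, and no SNMP wire encoding or transport is involved.

```python
# Toy illustration of SNMPv1 request semantics against a dict-backed MIB.
class ToyAgent:
    def __init__(self, mib):
        self.mib = dict(mib)

    def get(self, oid):                 # Get Request
        return self.mib[oid]

    def get_next(self, oid):            # Get Next Request: lexicographic walk
        for candidate in sorted(self.mib):
            if candidate > oid:
                return candidate, self.mib[candidate]
        raise KeyError("end of MIB view")

    def set(self, oid, value):          # Set Request
        self.mib[oid] = value
        return value

agent = ToyAgent({
    (1, 3, 6, 1, 2, 1, 1, 5, 0): "router-1",  # invented sysName.0 value
    (1, 3, 6, 1, 2, 1, 2, 1, 0): 4,           # invented ifNumber.0 value
})
print(agent.get((1, 3, 6, 1, 2, 1, 1, 5, 0)))      # router-1
print(agent.get_next((1, 3, 6, 1, 2, 1, 1, 5, 0)))  # next OID and its value
```

A Trap, by contrast, would be initiated by the agent itself rather than answering a manager's request, so it has no counterpart in this request/response sketch.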
Figure 3-27 SNMPv1 Message Types (manager to agent: Get Request retrieves the value of a specific MIB variable, Get Next Request retrieves the next instance of a MIB variable, and Set Request modifies the value of a MIB variable; agent to manager: Get Response contains the value of the requested variable, and Trap transmits an unsolicited alarm condition)
SNMPv2
SNMPv2 is a revised protocol that includes performance and manager-to-manager
communication improvements to SNMP. SNMPv2 was introduced with RFC 1441, Introduction
to version 2 of the Internet-standard Network Management Framework, but members of the IETF
subcommittee could not agree on several sections of the SNMPv2 specification (primarily the
protocol’s security and administrative needs). Several attempts to achieve acceptance of SNMPv2
have been made by releasing experimental modified versions, commonly known as SNMPv2*,
SNMPv2c, SNMPv2u, SNMPv1+, and SNMPv1.5, which do not contain the disputed parts.
Community-based SNMPv2 (or SNMPv2c), which is defined in RFC 1901, Introduction to
Community-based SNMPv2, is referred to as SNMPv2 because it is the most common
implementation. The “c” stands for community-based security because SNMPv2c uses the same
community strings as SNMPv1 for read and write access. SNMPv2 changes include the
introduction of the following two new message types:
■
GetBulk message type: Used for retrieving large amounts of data, such as tables. This
message reduces repetitive requests and replies, thereby improving performance.
■
InformRequest: Used to alert the SNMP manager of a specific condition. Unlike
unacknowledged trap messages, InformRequest messages are acknowledged. A managed
device sends an InformRequest to the NMS; the NMS acknowledges the receipt of the
message by sending a Response message back to the managed device.
Another improvement of SNMPv2 over SNMPv1 is the addition of new data types with 64-bit
counters because 32-bit counters were quickly overflowed by fast network interfaces.
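To make the counter-size point concrete: a traffic delta computed from two polls of a 32-bit counter must allow for wraparound, and fast interfaces wrap quickly. The following is a small arithmetic sketch, not an SNMP API.

```python
def counter_delta(prev: int, curr: int, bits: int = 32) -> int:
    """Byte-count delta between two polls, tolerating one wraparound."""
    modulus = 1 << bits
    return (curr - prev) % modulus

# The counter wrapped between polls, yet the delta stays correct:
print(counter_delta(4_294_967_290, 100))  # 106
# Seconds for a fully loaded 1-Gbps link to wrap a 32-bit octet counter:
print((2**32) * 8 / 1e9)                  # about 34.4 s
```

If the counter wraps more than once between polls, the delta is silently wrong, which is exactly why 64-bit counters (or shorter polling intervals) are needed on fast interfaces.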
On Cisco routers, Cisco IOS software release 11.3 and later versions implement SNMPv2.
However, neither SNMPv1 nor SNMPv2 offers security features. Specifically, SNMPv1 and
SNMPv2 can neither authenticate the source of a management message nor encrypt the message.
Because of the lack of security features, many SNMPv1 and SNMPv2 implementations are
limited to a read-only capability, reducing their usefulness to that of a network monitor.
SNMPv3
SNMPv3 is the latest SNMP version to become a full standard. Its introduction has moved
SNMPv1 and SNMPv2 to historic status. SNMPv3, which is described in RFCs 3410 through
3415, adds methods to ensure the secure transmission of critical data to and from managed
devices. Table 3-2 lists these RFCs. Note that these RFCs make RFCs 2271 through 2275 and
RFCs 2570 through 2575 obsolete.
Table 3-2 SNMPv3 Proposed Standards Documents

RFC 3410: Introduction and Applicability Statements for Internet-Standard Management Framework
RFC 3411: An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks
RFC 3412: Message Processing and Dispatching for the Simple Network Management Protocol (SNMP)
RFC 3413: Simple Network Management Protocol (SNMP) Applications
RFC 3414: User-based Security Model (USM) for Version 3 of the Simple Network Management Protocol (SNMPv3)
RFC 3415: View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP)
SNMPv3 introduces the following three security levels:
■
NoAuthNoPriv: Without authentication and without privacy (encryption).
■
AuthNoPriv: With authentication but without privacy. Authentication is based on the Hash-based Message Authentication Code-Message Digest 5 (HMAC-MD5) or HMAC-Secure Hash Algorithm (HMAC-SHA)
algorithms.
■
AuthPriv: With authentication as described earlier and privacy using 56-bit Cipher Block
Chaining-Data Encryption Standard (CBC-DES) encryption.
Security levels can be specified per user or per group of users via direct interaction with the
managed device or via SNMP operations. Security levels determine which SNMP objects a user
can access for reading, writing, or creating, and the list of notifications that users can receive. On
Cisco routers, Cisco IOS software release 12.0 and later versions implement SNMPv3.
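For illustration, the HMAC primitive underlying the AuthNoPriv and AuthPriv levels can be exercised with Python's standard library. Note that real SNMPv3 USM additionally localizes the user's password into a per-engine key and embeds the truncated digest in the message (RFC 3414); the key and message below are placeholders.

```python
import hmac
import hashlib

key = b"localized-user-key"      # placeholder, not a real USM localized key
message = b"snmp-message-bytes"  # placeholder message contents

# RFC 3414 truncates both HMAC-MD5 and HMAC-SHA digests to 12 bytes.
digest_md5 = hmac.new(key, message, hashlib.md5).digest()[:12]
digest_sha = hmac.new(key, message, hashlib.sha1).digest()[:12]
print(len(digest_md5), len(digest_sha))  # 12 12
```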
MIB
KEY POINT
A MIB is a collection of managed objects. A MIB stores information, which is collected by the local management agent, on a managed device for later retrieval by a network management protocol.
Each object in a MIB has a unique identifier that network management applications use to identify
and retrieve the value of the specific object. The MIB has a tree-like structure in which similar
objects are grouped under the same branch of the MIB tree. For example, different interface
counters are grouped under the MIB tree’s interfaces branch.
Internet MIB Hierarchy
As shown in Figure 3-28, the MIB structure is logically represented by a tree hierarchy. The root
of the tree is unnamed and splits into three main branches: Consultative Committee for
International Telegraph and Telephone (CCITT), ISO, and joint ISO/CCITT.
These branches and those that fall below each category are identified with short text strings and
integers. Text strings describe object names, whereas integers form object identifiers that allow
software to create compact, encoded representations of the names. The object identifier in the
Internet MIB hierarchy is the sequence of numeric labels on the nodes along a path from the root
to the object. The Internet standard MIB is represented by the object identifier 1.3.6.1.2.1, which
can also be expressed as iso.org.dod.internet.mgmt.mib.
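The correspondence between the dotted numbers and the names can be sketched as a lookup that walks the numeric labels from the root. The table below covers only the branches discussed here and is illustrative, not a complete registry.

```python
# Partial map from numeric OID prefixes to node names in the MIB tree.
NODE_NAMES = {
    (1,): "iso",
    (1, 3): "org",
    (1, 3, 6): "dod",
    (1, 3, 6, 1): "internet",
    (1, 3, 6, 1, 2): "mgmt",
    (1, 3, 6, 1, 2, 1): "mib",
}

def oid_to_name(dotted: str) -> str:
    """Resolve each prefix of the dotted OID to its node name."""
    labels = tuple(int(x) for x in dotted.split("."))
    return ".".join(NODE_NAMES[labels[:i + 1]] for i in range(len(labels)))

print(oid_to_name("1.3.6.1.2.1"))  # iso.org.dod.internet.mgmt.mib
```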
Figure 3-28 Internet MIB Hierarchy (the unnamed root branches into ccitt (0), iso (1), and joint iso/ccitt (2); under iso (1) are org (3) and then dod (6) and internet (1); internet contains directory (1), mgmt (2), experimental (3), and private (4); mgmt contains mib (1))
This information was adapted from the Cisco Management Information Base (MIB) User Quick
Reference, which is available at http://www.cisco.com/univercd/cc/td/doc/product/software/
ios112/mbook/index.htm.
Standard MIBs are defined in various RFCs. For example, RFC 1213, Management Information
Base for Network Management of TCP/IP-based internets: MIB-II, defines the TCP/IP MIB.
In addition to standard MIBs, vendors can obtain their own branch of the MIB subtree and create
custom managed objects under that branch. A Cisco router MIB uses both standard and private
managed objects.
A Cisco router’s MIB tree contains several defined standard managed objects, including objects from the
following groups:
■
Interface group (including interface description, type, physical address, counts of incoming
and outgoing packets, and so forth)
■
IP group (including whether the device is acting as an IP gateway, the number of input
packets, the number of packets discarded because of error, and so forth)
■
ICMP group (including the number of ICMP messages received, the number of messages
with errors, and so forth)
The Cisco private section of the MIB tree contains private managed objects, which were
introduced by Cisco, such as the following objects for routers:
■
Small, medium, large, and huge buffers
■
Primary and secondary memory
■
Proprietary protocols
Private definitions of managed objects must be compiled into the NMS before they can be used;
the result is output that is more descriptive, with variables and events that can be referred to by
name.
MIB-II
MIB-II is an extension of the original MIB (which is now called MIB-I) and is defined by RFC
1213. MIB-II supports a number of new protocols and provides more detailed, structured
information. It remains compatible with the previous version, which is why MIB-II retains the
same object identifier as MIB-I (1.3.6.1.2.1).
The location of MIB-II objects is under the iso.org.dod.internet.mgmt subtree, where the top-level
MIB objects are defined as follows (definitions of these objects can be found in RFC 1213):
■
System (1)
■
Interfaces (2)
■
Address Translation (3)
■
IP (4)
■
ICMP (5)
■
TCP (6)
■
UDP (7)
■
EGP (8)
■
Transmission (10)
■
SNMP (11)
Although the MIB-II definition is an improvement over MIB-I, the following unresolved issues
exist:
■
MIB-II is still a device-centric solution, meaning that its focus is on individual devices, not
the entire network or data flows.
■
MIB-II is poll-based, meaning that data is stored in managed devices and a management
system must request (poll) it via the management protocol; the data is not sent automatically.
Cisco MIB
The Cisco private MIB definitions are under the Cisco MIB subtree (1.3.6.1.4.1.9 or
iso.org.dod.internet.private.enterprise.cisco). Cisco MIB definitions supported on Cisco devices
are available at http://www.cisco.com/public/mibs/.
The Cisco private MIB subtree contains three subtrees: Local (2), Temporary (3), and CiscoMgmt
(9). The Local (2) subtree contains MIB objects defined before Cisco IOS software release 10.2;
these MIB objects are implemented in the SNMPv1 Structure of Management Information (SMI).
The SMI defines the structure of data that resides within MIB-managed objects. Beginning with
Cisco IOS software release 10.2, however, Cisco MIBs are defined according to the SNMPv2 SMI
and are placed in the CiscoMgmt subtree (9). The variables in the temporary subtree are subject
to change for each Cisco IOS software release.
MIB Polling Guidelines
Monitoring networks using SNMP requires that the NMS poll each managed device on a periodic
basis to determine its status. Frequently polling many devices or MIB variables on a device across
a network to a central NMS might result in performance issues, including congestion on slower
links or at the NMS connection, or overwhelming the NMS resources when processing all the
collected data. The following are recommended polling guidelines:
■
Restrict polling to only those MIB variables necessary for analysis.
■
Analyze and use the data collected; do not collect data if it is not analyzed.
■
Increase polling intervals (in other words, reduce the number of polls per period) over low-bandwidth links.
■
For larger networks, consider deploying management domains, a distributed model for
deploying an NMS. Management domains permit polling to be more local to the managed
devices. As a result, they reduce overall management traffic across the network and the
potential for one failed device or link to interrupt management visibility to the remaining
network. Aggregated management data might still be centralized when management domains
are used. This model is particularly appropriate for networks that already have separate
administrative domains or where large campuses or portions of the network are separated by
slower WAN links.
■
Leverage nonpolling mechanisms such as SNMP traps, RMON, and syslog (as described in
later sections of this chapter).
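A back-of-the-envelope estimate shows why these guidelines matter on slow links. The 200-byte figure for a single-variable get/response exchange is an assumption for illustration only.

```python
def polling_load_bps(devices: int, oids_per_device: int,
                     interval_s: float, bytes_per_exchange: int = 200) -> float:
    """Average management traffic generated by periodic SNMP polling."""
    exchanges_per_s = devices * oids_per_device / interval_s
    return exchanges_per_s * bytes_per_exchange * 8

# 500 devices x 10 OIDs polled every 30 seconds:
print(polling_load_bps(500, 10, 30))   # about 267 kbps, significant on a slow WAN link
# The same polling with a 300-second interval:
print(polling_load_bps(500, 10, 300))  # about 27 kbps
```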
MIB Example
Figure 3-29 depicts SNMP MIB variable retrieval in action.
Figure 3-29 SNMP MIB Variable Retrieval (base format to retrieve the number of errors on an interface: iso.org.dod.internet.mgmt.mib.interface.ifTable.ifEntry.ifOutErrors, or 1.3.6.1.2.1.2.2.1.20; the specific format for the first interface appends the instance number, giving 1.3.6.1.2.1.2.2.1.20.0. The manager sends an SNMP Get Request with object identifier 1.3.6.1.2.1.2.2.1.20.0 to the agent on the managed device, and the agent answers with an SNMP Get Response: 1.3.6.1.2.1.2.2.1.20.0 = 11 port errors)
In this example, the network manager wants to retrieve the number of errors on the first interface.
Starting with interface number 0, the valid range for interface numbers is 0 through the maximum
number of ports minus one. The manager creates the SNMP Get Request message with reference
to the MIB variable 1.3.6.1.2.1.2.2.1.20.0, which represents interface outgoing errors on interface
0. The agent creates the SNMP Get Response message in response to the manager’s request. The
response includes the value of the referenced variable. In the example, the agent returned value is
11, indicating that there were 11 outgoing errors on that interface.
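For illustration, the object identifier in this example can be encoded the way SNMP carries it on the wire, using BER: the first two arcs are packed into a single byte (40 times the first arc plus the second), and each remaining arc is written base-128 with a continuation bit. This is general BER/SNMP encoding knowledge, not material from the figure.

```python
def ber_encode_oid(dotted: str) -> bytes:
    """BER-encode an object identifier as tag (0x06), length, value."""
    arcs = [int(a) for a in dotted.split(".")]
    body = bytearray([40 * arcs[0] + arcs[1]])  # first two arcs share a byte
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]                    # low 7 bits last
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)   # continuation bit set
            arc >>= 7
        body.extend(reversed(chunk))
    return bytes([0x06, len(body)]) + bytes(body)

print(ber_encode_oid("1.3.6.1.2.1.2.2.1.20.0").hex())
# 060a2b060102010202011400
```

This helper handles single-byte lengths only, which suffices for OIDs of this size; a full BER encoder would also support long-form lengths.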
RMON
KEY POINT
RMON is a MIB that provides support for proactive management of LAN traffic.
The RMON standard allows packet and traffic patterns on LAN segments to be monitored. RMON
tracks the following items:
■
Number of packets
■
Packet sizes
■
Broadcasts
■
Network utilization
■
Errors and conditions, such as Ethernet collisions
■
Statistics for hosts, including errors generated by hosts, busiest hosts, and which hosts
communicate with each other
RMON features include historical views of RMON statistics based on user-defined sample
intervals, alarms that are based on user-defined thresholds, and packet capture based on user-defined filters.
NOTE RMON is defined as a portion of the MIB II database. RFC 2819, Remote Network
Monitoring Management Information Base, defines the objects for managing remote network
monitoring devices. RFC 1513, Token Ring Extensions to the Remote Network Monitoring MIB,
defines extensions to the RMON MIB for managing IEEE 802.5 Token Ring networks.
KEY POINT: Without RMON, a MIB could be used to check the device’s network performance. However, doing so would consume a large amount of bandwidth for management traffic. By using RMON, the managed device itself (via its RMON agent) collects and stores the data that would otherwise be retrieved from the MIB frequently.
RMON agents can reside in routers, switches, hubs, servers, hosts, or dedicated RMON probes.
Because RMON can collect a lot of data, dedicated RMON probes are often used on routers and switches instead of enabling RMON agents on these devices. Performance thresholds can be set
and reported on if the threshold is breached; this helps reduce management traffic. RMON
provides effective network fault diagnosis, performance tuning, and planning for network
upgrades.
RMON1
KEY POINT: RMON1 works at the data link layer (with MAC addresses) and provides aggregate LAN traffic statistics and analysis for remote LAN segments.
Because RMON agents must look at every frame on the network, they might cause performance
problems on a managed device. The agent’s performance can be classified based on processing
power and memory.
NOTE The RMON MIB is 1.3.6.1.2.1.16 (iso.org.dod.internet.mgmt.mib.rmon).
RMON1 Groups
RMON agents gather nine groups of statistics, ten including Token Ring, which are forwarded to
a manager on request, usually via SNMP. As summarized in Figure 3-30, RMON1 agents can
implement some or all of the following groups:
■ Statistics: Contains statistics such as packets sent, bytes sent, broadcast packets, multicast packets, CRC errors, runts, giants, fragments, jabbers, collisions, and so forth, for each monitored interface on the device.
■ History: Used to store periodic statistical samples for later retrieval.
■ Alarm: Used to set specific thresholds for managed objects and to trigger an event on crossing the threshold (this requires an Events group).
■ Host: Contains statistics associated with each host discovered on the network.
■ Host Top N: Contains statistics for hosts that top a list ordered by one of their observed variables.
■ Matrix: Contains statistics for conversations between sets of two addresses, including the number of packets or bytes exchanged between two hosts.
■ Filters: Contains rules for data packet filters; data packets matched by these rules generate events or are stored locally in a Packet Capture group.
■ Packet Capture: Contains data packets that match rules set in the Filters group.
■ Events: Controls the generation and notification of events from this device.
■ TokenRing: Contains the following Token Ring extensions:
— Ring Station: Detailed statistics on individual stations
— Ring Station Order: Ordered list of stations currently on the ring
— Ring Station Configuration: Configuration information and insertion/removal data on each station
— Source Routing: Statistics on source routing, such as hop counts
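The interaction between the Alarm and Events groups can be sketched as follows. Real RMON pairs a rising and a falling threshold; this simplification fires once when a sampled value crosses a single user-defined rising threshold and rearms only after the value drops back below it. The class and threshold value are illustrative, not part of the RMON specification.

```python
# A rough sketch of the Alarm group: sample a monitored variable at
# intervals and generate an event when a rising threshold is crossed.
class RisingAlarm:
    def __init__(self, threshold):
        self.threshold = threshold
        self.armed = True   # fire once per crossing, not on every high sample
        self.events = []    # stands in for the Events group's notifications

    def sample(self, value):
        if self.armed and value >= self.threshold:
            self.events.append(value)
            self.armed = False
        elif value < self.threshold:
            self.armed = True
```

With a threshold of 50, the sample series 10, 60, 70, 20, 80 generates events for 60 and 80 only; the 70 sample is suppressed because the alarm has not rearmed.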
Figure 3-30 RMON1 Groups
(figure contents)
1 statistics: Real Time, Current Statistics
2 history: Statistics Over Time
3 alarm: Predetermined Threshold Watch
4 host: Tracks Individual Host Statistics
5 hostTopN: “N” Statistically Most Active Hosts
6 matrix: A<->B Conversation Statistics
7 filters: Packet Structure and Content Matching
8 packetCapture: Collection for Subsequent Analysis
9 events: Reaction to Predetermined Conditions
10 tokenRing: Token Ring RMON Extensions
RMON1 and RMON2
RMON1 only provides visibility into the data link and the physical layers; potential problems that
occur at the higher layers still require other capture and decode tools. Because of RMON1’s
limitations, RMON2 was developed to extend functionality to upper-layer protocols. As illustrated
in Figure 3-31, RMON2 provides full network visibility from the network layer through to the
application layer.
200
Chapter 3: Structuring and Modularizing the Network
Figure 3-31 RMON2 Is an Extension of RMON1
(figure: of the OSI layers, RMON2 covers the network, transport, session, presentation, and application layers; RMON1 covers the physical and data link layers, for Ethernet and Token Ring)
KEY POINT: RMON2 is not a replacement for RMON1, but an extension of it. RMON2 extends RMON1 by adding nine more groups that provide visibility to the upper layers.
With visibility into the upper-layer protocols, the network manager can monitor any upper-layer
protocol traffic for any device or subnet in addition to the MAC layer traffic.
RMON2 allows the collection of statistics beyond a specific segment’s MAC layer and provides
an end-to-end view of network conversations per protocol. The network manager can view
conversations at the network and application layers. Therefore, traffic generated by a specific host
or even a specific application (for example, a Telnet client or a web browser) on that host can be
observed.
RMON2 Groups
Figure 3-32 illustrates the RMON groups that were added when RMON2 was introduced. They
include the following:
■ Protocol Directory: Provides the list of protocols that the device supports
■ Protocol Distribution: Contains traffic statistics for each supported protocol
■ Address Mapping: Contains network layer-to-MAC layer address mappings
■ Network Layer Host: Contains statistics for the network layer traffic to or from each host
■ Network Layer Matrix: Contains network layer traffic statistics for conversations between pairs of hosts
■ Application Layer Host: Contains statistics for the application layer traffic to or from each host
■ Application Layer Matrix: Contains application layer traffic statistics for conversations between pairs of hosts
■ User History Collection: Contains periodic samples of user-specified variables
■ Probe Configuration: Provides a standard way of remotely configuring probe parameters, such as trap destination and out-of-band management
Figure 3-32 RMON2 Groups Extend RMON1 Groups
(figure: the MIB tree iso(1).org(3).dod(6).internet(1).mgmt(2).mib-2(1) contains the RMON MIB at .16; the RMON1 groups (RFC 2819) are .1 statistics, .2 history, .3 alarm, .4 hosts, .5 hostTopN, .6 matrix, .7 filter, .8 capture, .9 events, and .10 token ring (RFC 1513); the RMON2 groups (RFC 4502) are .11 protocolDir, .12 protocolDist, .13 addressMap, .14 nlHost, .15 nlMatrix, .16 alHost, .17 alMatrix, .18 usrHistory, and .19 probeConfig)
NOTE See RFC 3577, Introduction to the Remote Monitoring (RMON) Family of MIB
Modules, for a description of RMON1, RMON2, and pointers to many of the RFCs describing
extensions to RMON.
NetFlow
Cisco NetFlow is a measurement technology that characterizes the traffic flows passing through Cisco devices.
NOTE NetFlow was originally implemented only on larger devices; it is now available on
other devices, including ISRs.
NetFlow answers the questions of what, when, where, and how traffic is flowing in the network.
NetFlow data can be exported to network management applications to further process the
information, providing tables and graphs for accounting and billing or as an aid for network
planning. The key components of NetFlow are the NetFlow cache or data source that stores IP flow
information and the NetFlow export or transport mechanism that sends NetFlow data to a network
management collector, such as the NetFlow Collection Engine.
NetFlow-collected data serves as the basis for a set of applications, including network traffic
accounting, usage-based network billing, network planning, and network monitoring. NetFlow
also provides the measurement base for QoS applications: It captures the traffic classification (or
precedence) associated with each flow, thereby enabling differentiated charging based on QoS.
KEY POINT: A network flow is a unidirectional sequence of packets between source and destination endpoints. Network flows are highly granular; both IP address and transport layer application port numbers identify flow endpoints. NetFlow also identifies the flows by IP protocol type, ToS, and the input interface identifier.
Non-NetFlow–enabled switching handles incoming packets independently, with separate serial tasks for switching, security services (access control lists [ACL]), and traffic measurements applied to each packet. With NetFlow-enabled switching, this full processing is applied only to a flow’s first packet; information from the first packet is used to build an entry in the NetFlow cache. Subsequent packets in the flow are handled via a single, streamlined task that performs switching, security services, and data collection concurrently. Multilayer switches support multilayer NetFlow.
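The first-packet/subsequent-packet behavior can be sketched as follows. The FlowKey fields mirror the flow-identifying fields listed in the key point above; the class itself is illustrative, not a Cisco data structure.

```python
# A sketch of the cache behavior described above: the first packet of a flow
# creates a cache entry, and later packets in the same flow only update its
# counters.
from collections import namedtuple

FlowKey = namedtuple(
    "FlowKey", "src_ip dst_ip protocol src_port dst_port tos input_if")

class NetFlowCache:
    def __init__(self):
        self.flows = {}  # FlowKey -> {"packets": ..., "bytes": ...}

    def account(self, key, packet_length):
        if key not in self.flows:
            # "First packet" path: build the cache entry once per flow.
            self.flows[key] = {"packets": 0, "bytes": 0}
        # Streamlined path: every packet, including the first, updates counters.
        entry = self.flows[key]
        entry["packets"] += 1
        entry["bytes"] += packet_length
```

Two packets that share all seven fields land in the same entry, so the cache holds one record per flow rather than one per packet; expired entries are what the export mechanism later groups into export datagrams.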
Therefore, NetFlow services capitalize on the network traffic’s flow nature to provide detailed data
collection with minimal impact on router performance and to efficiently process ACLs for packet
filtering and security services. Figure 3-33 illustrates the NetFlow infrastructure.
Figure 3-33 NetFlow Infrastructure
(figure: NetFlow data export on the device performs data switching, data export, and data aggregation; the NetFlow collector performs data collection, data filtering, data aggregation, data storage, and file system management; the collected data feeds network planning, accounting and billing, and the Network Data Analyzer; an ecosystem collector and a Cisco NAM or RMON probe with an RMON application also appear)
NetFlow can be configured to export data to a flow collector, a device that provides NetFlow export
data filtering and aggregation capabilities, such as the NetFlow Collection Engine. Expired flows
are grouped into NetFlow Export datagrams for export from the NetFlow-enabled device.
The focus of NetFlow used to be on IP flow information; this is changing with the Cisco
implementation of a generic export transport format. NetFlow version 9 (v9) export format is a
flexible and extensible export format that is now on the IETF standards track in the IP Flow
Information Export (IPFIX) working group. IPFIX export is a new generic data transport
capability within Cisco routers. It can be used to transport performance information from a router
or switch, including Layer 2 information, security detection and identification information, IP
version 6 (IPv6), multicast, MPLS, and Border Gateway Protocol (BGP) information, and so forth.
NetFlow enables several key customer applications, including the following:
■ Accounting and billing: Because flow data includes details such as IP addresses, packet and byte counts, time stamps, and application port numbers, NetFlow data provides fine-grained metering for highly flexible and detailed resource utilization accounting. For example, service providers can use this information to migrate from single-fee, flat-rate billing to more flexible charging mechanisms based on time of day, bandwidth usage, application usage, QoS, and so forth. Enterprise customers can use the information for departmental cost recovery or cost allocation for resource utilization.
■ Network planning and analysis: NetFlow data provides key information for sophisticated network architecture tools to optimize both strategic planning (such as whom to peer with, backbone upgrade planning, and routing policy planning) and tactical network engineering decisions (such as adding resources to routers or upgrading link capacity). This has the benefit of minimizing the total cost of network operations while maximizing network performance, capacity, and reliability.
■ Network monitoring: NetFlow data enables extensive near-real-time network monitoring. To provide aggregate traffic- or application-based views, flow-based analysis techniques can be used to visualize the traffic patterns associated with individual routers and switches on a networkwide basis. This analysis provides network managers with proactive problem detection, efficient troubleshooting, and rapid problem resolution.
■ Application monitoring and profiling: NetFlow data enables network managers to gain a detailed, time-based view of application usage over the network. Content and service providers can use this information to plan and allocate network and application resources (such as web server sizing and location) to meet customer demands.
■ User monitoring and profiling: NetFlow data enables network managers to understand customer and user network utilization and resource application. This information can be used to plan efficiently; allocate access, backbone, and application resources; and detect and resolve potential security and policy violations.
■ NetFlow data warehousing and data mining: In support of proactive marketing and customer service programs, NetFlow data or the information derived from it can be warehoused for later retrieval and analysis. For example, you can determine which applications and services are being used by internal and external users and target them for improved service. This is especially useful for service providers, because NetFlow data enables them to create a wider range of offered services. For example, a service provider can easily determine the traffic characteristics of various services and, based on this data, provide new services to the users. An example of such a service is VoIP, which requires QoS adjustment; the service provider might charge users for this service.
NetFlow Versus RMON Information Gathering
NetFlow can be configured on individual interfaces, thereby providing information on traffic that
passes through those interfaces and collecting the following types of information:
■ Source and destination interfaces and IP addresses
■ Input and output interface numbers
■ TCP/UDP source and destination ports
■ Number of bytes and packets in the flow
■ Source and destination autonomous system numbers (for BGP)
■ Time of day
■ IP ToS
Compared to using SNMP with the RMON MIB, NetFlow’s information-gathering benefits include greater detail of collected data, data time-stamping, support for different data per interface, and greater scalability to a large number of interfaces (RMON is also limited by the size of its memory table). NetFlow’s performance impact is much lower than RMON’s, and external probes are not required.
CDP
KEY POINT: CDP is a Cisco-proprietary protocol that operates between Cisco devices at the data link layer. CDP information is sent only between directly connected Cisco devices; a Cisco device never forwards a CDP frame. CDP enables systems that support different network layer protocols to communicate and enables other Cisco devices on the network to be discovered. CDP provides a summary of directly connected switches, routers, and other Cisco devices.
CDP is a media- and protocol-independent protocol that is enabled by default on each supported
interface of Cisco devices (such as routers, access servers, and switches). The physical media must
support Subnetwork Access Protocol encapsulation. Figure 3-34 illustrates the relationship
between CDP and other protocols.
Figure 3-34 CDP Runs at the Data Link Layer and Enables the Discovery of Directly Connected Cisco Devices
(figure: CDP is a Cisco-proprietary data link protocol sitting below the upper-layer addresses of TCP/IP, Novell IPX, AppleTalk, and others, and above media supporting SNAP, such as LANs, Frame Relay, and ATM; CDP discovers and shows information about directly connected Cisco devices)
CDP Information
Information in CDP frames includes the following:
■ Device ID: The name of the neighbor device and either the MAC address or the serial number of the device.
■ Local Interface: The local (on this device) interface connected to the discovered neighbor.
■ Holdtime: The remaining amount of time (in seconds) that the local device holds the CDP advertisement from a sending device before discarding it.
■ Capability List: The type of device discovered (R—Router, T—Trans Bridge, B—Source Route Bridge, S—Switch, H—Host, I—IGMP, r—Repeater).
■ Platform: The device’s product type.
■ Port Identifier (ID): The port (interface) number on the discovered neighbor on which the advertisement is sent. This is the interface on the neighbor device to which the local device is connected.
■ Address List: All network layer protocol addresses configured on the interface (or, in the case of protocols configured globally, on the device). Examples include IP, Internetwork Packet Exchange, and DECnet.
How CDP Works
As illustrated in Figure 3-35, CDP information is sent only between directly connected Cisco
devices. In this figure, the person connected to Switch A can see the router and the two switches
directly attached to Switch A; other devices are not visible via CDP. For example, the person
would have to log in to Switch B to see Router C with CDP.
Figure 3-35 CDP Provides Information About Neighboring Cisco Devices
(figure: Switch A exchanges CDP frames with the router and the two switches directly attached to it, including Switch B; Router C connects to Switch B and is therefore not visible to Switch A via CDP)
KEY POINT: Cisco devices never forward a CDP frame.
CDP is a hello-based protocol, and all Cisco devices that run CDP periodically advertise their
attributes to their neighbors using a multicast address. These frames advertise a time-to-live value
(the holdtime, in seconds) that indicates how long the information must be retained before it can
be discarded. CDP frames are sent with a time-to-live value that is nonzero after an interface is
enabled. A time-to-live value of 0 is sent immediately before an interface is shut down, allowing
other devices to quickly discover lost neighbors.
Cisco devices receive CDP frames and cache the received information; it is then available to be
sent to the NMS via SNMP. If any information changes from the last received frame, the new
information is cached and the previous information is discarded, even if its time-to-live value has
not yet expired.
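The caching rules just described can be sketched as follows; timestamps are passed in explicitly to keep the example deterministic, and the table layout is illustrative rather than an actual IOS structure.

```python
# A sketch of CDP neighbor caching: entries carry the advertised holdtime,
# new information replaces old entries, and a zero time-to-live removes
# the entry immediately.
class CdpNeighborTable:
    def __init__(self):
        self.neighbors = {}  # device_id -> (info, expiry_time)

    def receive(self, device_id, info, holdtime, now):
        if holdtime == 0:
            # A zero time-to-live is sent just before an interface shuts down.
            self.neighbors.pop(device_id, None)
        else:
            # New information replaces the cached entry, even if the old
            # entry's time-to-live has not yet expired.
            self.neighbors[device_id] = (info, now + holdtime)

    def expire(self, now):
        # Drop entries whose advertised holdtime has elapsed.
        self.neighbors = {dev: (info, expiry)
                          for dev, (info, expiry) in self.neighbors.items()
                          if expiry > now}
```

A neighbor advertising a 180-second holdtime survives until the timer lapses without a refresh; a neighbor that sends holdtime 0 disappears at once, which is how devices quickly discover lost neighbors.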
CDP is on by default and operates on any operational interface. However, CDP can be disabled on an interface or globally on a device. Consequently, some caveats are indicated:
■ Do not run CDP on links that you do not want discovered, such as Internet connections.
■ Do not run CDP on links that do not go to Cisco devices.
For security reasons, block SNMP access to CDP data (or any other data) from outside your network and from subnets other than the management station subnet.
Syslog Accounting
A system message and error reporting service is an essential component of any operating system.
The syslog system message service provides a means for the system and its running processes to
report system state information to a network manager.
Cisco devices produce syslog messages as a result of network events. Every syslog message
contains a time stamp (if enabled), severity level, and facility.
Example 3-1 shows samples of syslog messages produced by the Cisco IOS software. The most common messages are those that a device produces upon exiting configuration mode, and the link up and down messages. If ACL logging is configured, the device generates syslog messages when packets match the ACL condition. ACL logging can be useful to detect packets that are denied access based on the security policy that is set by an ACL.
Example 3-1  Syslog Messages
20:11:31: %SYS-5-CONFIG_I: Configured from console by console
20:11:57: %LINK-5-CHANGED: Interface FastEthernet0/0, changed state to administratively down
20:11:58: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0, changed state to down
20:12:04: %LINK-3-UPDOWN: Interface FastEthernet0/0, changed state to up
20:12:06: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0, changed state to up
20:13:53: %SEC-6-IPACCESSLOGP: list internet-inbound denied udp 66.56.16.77(1029) -> 63.78.199.4(161), 1 packet
20:14:26: %MLS-5-MLSENABLED: IP Multilayer switching is enabled
20:14:26: %MLS-5-NDEDISABLED: Netflow Data Export disabled
20:14:26: %SYS-5-MOD_OK: Module 1 is online
20:15:47: %SYS-5-MOD_OK: Module 3 is online
20:15:42: %SYS-5-MOD_OK: Module 6 is online
20:16:27: %PAGP-5-PORTTOSTP: Port 3/1 joined bridge port 3/1
20:16:28: %PAGP-5-PORTTOSTP: Port 3/2 joined bridge port 3/2
Syslog messages contain up to 80 characters; a percent sign (%) follows the optional sequence
number or time-stamp information if configured. Syslog messages are structured as follows:
seq no:timestamp: %facility-severity-MNEMONIC:description
The following parameters are used in the syslog messages:
■ A sequence number appears on the syslog message if the service sequence-numbers global configuration command is configured.
■ The time stamp shows the date and time of the message or event if the service timestamps log [datetime | uptime] global configuration command is configured. The time stamp can have one of three formats:
— mm/dd hh:mm:ss
— hh:mm:ss (for short uptimes)
— d h (for long uptimes)
■ Facility: A code consisting of two or more uppercase letters that indicates the facility to which the message refers. Syslog facilities are service identifiers used to identify and categorize system state data for error and event message reporting. A facility can be a hardware device, a protocol, or a module of the system software. The Cisco IOS software has more than 500 different facilities; the following are the most common:
— IP
— OSPF (OSPF protocol)
— SYS (operating system)
— IPsec (IP Security)
— RSP (Route Switch Processor)
— IF (interface)
— LINK (data link messages)
Other facilities include CDP, QoS, RADIUS, multicast (MCAST), MLS, TCP, VLAN Trunking Protocol (VTP), Telnet, and Trivial File Transfer Protocol (TFTP).
■ Severity: A single-digit code (from 0 to 7) that reflects the severity of the condition; the lower the number, the more serious the situation. Syslog defines the following severity levels:
— Emergency (Level 0, which is the highest severity)
— Alert (Level 1)
— Critical (Level 2)
— Error (Level 3)
— Warning (Level 4)
— Notice (Level 5)
— Informational (Level 6)
— Debugging (Level 7)
■ Mnemonic: A code that uniquely identifies the error message.
■ Description: A text string that describes the condition. This portion of the message sometimes contains detailed information about the event, including port numbers, network addresses, or addresses that correspond to locations in the system memory address space.
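The message structure above lends itself to simple parsing. The following regular expression and field handling are assumptions based on the documented %facility-severity-MNEMONIC:description format, not an official parser.

```python
# A sketch of parsing Cisco-style syslog messages into their parts.
import re

SEVERITY_NAMES = ["emergency", "alert", "critical", "error",
                  "warning", "notice", "informational", "debugging"]

MESSAGE_RE = re.compile(
    r"%(?P<facility>[A-Z][A-Z0-9_]*)-(?P<severity>[0-7])-"
    r"(?P<mnemonic>[A-Z0-9_]+): ?(?P<description>.*)")

def parse_syslog(line):
    """Return the facility, severity, mnemonic, and description, or None."""
    match = MESSAGE_RE.search(line)
    if match is None:
        return None
    fields = match.groupdict()
    fields["severity"] = int(fields["severity"])
    fields["severity_name"] = SEVERITY_NAMES[fields["severity"]]
    return fields
```

Applied to the %LINK-3-UPDOWN line from Example 3-1, this yields facility LINK, severity 3 (error), and mnemonic UPDOWN, with the interface details in the description.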
NOTE For more syslog information, see http://www.cisco.com/univercd/cc/td/doc/product/
software/ios124/124sup/124sms/index.htm.
Syslog Distributed Architecture
Figure 3-36 illustrates the syslog distributed architecture.
Figure 3-36 Syslog Distributed Architecture
(figure: routers and switches send syslog messages to remote syslog daemons, which apply a remote filter and forward the remaining syslog messages to the syslog server on the network management system)
Syslog messages are sent to the console session by default. A device must be configured to send
syslog messages elsewhere; the configuration includes the address of the NMS or another device.
Network devices can be configured to send syslog messages directly to the NMS or to the remote
network host on which a syslog analyzer is installed. A syslog analyzer conserves bandwidth on
WAN links because the analyzer usually applies different filters and sends only the predefined
subset of all syslog messages it receives. The analyzer filters and periodically forwards messages
to the central NMS. For example, the analyzer could filter ACL logging data from other router or
switch syslog entries to ensure that the ACL logging data does not overwhelm the syslog reporting
tool.
The Syslog Analyzer is a CiscoWorks Resource Manager Essentials application that supports a
distributed syslog server architecture for localized collection, filtering, aggregation, and
forwarding of syslog data to a central syslog server for further processing and analysis. The Syslog
Analyzer also supports reporting functions to automatically parse the log data into predefined or
custom formats for ease of use and readability.
When it receives a syslog message, the NMS applies filters to remove unwanted messages. Filters
can also be applied to perform actions based on the received syslog message, such as paging or
e-mailing the network manager.
Syslog data can consume large amounts of network bandwidth and might require a very large
storage capacity based on the number of devices sending syslog messages, the syslog facility and
severity levels set for each, and any error conditions that may trigger excessive log messages.
Therefore, it is important to enable logging only for network facilities of particular interest and to
set the appropriate severity level to provide sufficient, but not excessive, detail.
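The severity-based filtering suggested above can be sketched in one line: a remote syslog daemon forwards only messages at or below a configured severity number, where numerically lower means more serious. The tuple layout is illustrative.

```python
# A sketch of severity filtering before forwarding syslog data over the WAN.
def filter_for_forwarding(messages, max_severity):
    """Keep (severity, text) pairs at least as serious as max_severity."""
    return [msg for msg in messages if msg[0] <= max_severity]
```

Filtering at Notice (level 5), for example, drops Informational (6) and Debugging (7) messages before they cross the WAN to the central server.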
KEY POINT: If the collected data will not be analyzed, do not collect it.
Selectively filter and aggregate syslog data that the distributed or centralized syslog servers receive
based on the requirements.
Summary
In this chapter, you learned about modularizing the network, with a focus on the following topics:
■ The hierarchical network model’s three layers: access, distribution, and core
■ The Cisco SONA framework that integrates the enterprise-wide network
■ The Cisco Enterprise Architecture functional areas:
— Enterprise Campus: Including the Campus Infrastructure module (composed of the Campus Core layer, the Building Distribution layer, and the Building Access layer) and the Server farm module
— Enterprise Edge: Including the E-commerce module, the Internet Connectivity module, the Remote Access and VPN module, and the WAN and MAN and Site-to-Site VPN module
— Service Provider: Including the Internet Service Provider module, the PSTN module, and the Frame Relay/ATM module
— Enterprise Branch
— Enterprise Data Center
— Enterprise Teleworker
■ The infrastructure services and application networking services used within the Cisco Enterprise Architecture modules
■ Security services to protect network resources and users from internal and external threats
■ High-availability services to ensure adequate connectivity for mission-critical applications
■ Voice services to support VoIP and IP telephony
■ Wireless services to support mobile clients connecting to the enterprise network
■ ANS to make the network aware of the content carried across it and to optimally handle that content
■ Network management protocols and features, including SNMP, MIBs, RMON, NetFlow, CDP, and syslog
References
See the following resources for additional information:
■ “Service-Oriented Network Architecture: Introduction,” http://www.cisco.com/go/sona/
■ Top-Down Network Design, Second Edition, Priscilla Oppenheimer, Cisco Press, 2004
■ “Internetworking Design Basics,” Cisco Internetwork Design Guide, http://www.cisco.com/univercd/cc/td/doc/cisintwk/idg4/nd2002.htm
■ “SAFE Blueprint for Small, Midsize, and Remote-User Networks,” http://www.cisco.com/go/safe/
■ “Enterprise Architectures: Introduction,” http://www.cisco.com/en/US/netsol/ns517/networking_solutions_market_segment_solutions_home.html
■ NetFlow Services Solutions Guide, http://www.cisco.com/en/US/products/sw/netmgtsw/ps1964/products_implementation_design_guide09186a00800d6a11.html
Case Study: ACMC Hospital Modularity
This case study is a continuation of the ACMC Hospital case study introduced in Chapter 2.
Case Study General Instructions
Use the scenarios, information, and parameters provided at each task of the ongoing case study. If
you encounter ambiguities, make reasonable assumptions and proceed. For all tasks, use the initial
customer scenario and build on the solutions provided thus far. You can use any and all
documentation, books, white papers, and so on.
In each step, you act as a network design consultant. Make creative proposals to accomplish the
customer’s business needs. Justify your ideas when they differ from the provided solutions. Use
any design strategies you feel are appropriate. The final goal of each case study is a paper solution.
Appendix A, “Answers to Review Questions and Case Studies,” provides a solution for each step
based on assumptions made. There is no claim that the provided solution is the best or only
solution. Your solution might be more appropriate for the assumptions you made. The provided
solution helps you understand the author’s reasoning and allows you to compare and contrast your
solution.
In this case study, you apply the Cisco Enterprise Architecture to the ACMC Hospital network
requirements and develop a high-level view of the planned network hierarchy. Complete the
following steps:
Step 1  Consider each of the functional areas of the Cisco Enterprise Architecture:
• Enterprise Campus: Including the Campus Infrastructure module (composed of the Campus Core layer, the Building Distribution layer, and the Building Access layer) and the Server farm module
• Enterprise Edge: Including the E-commerce module, the Internet Connectivity module, the WAN and MAN and Site-to-Site VPN module, and the Remote Access and VPN module
• Enterprise Branch
• Enterprise Data Center
• Enterprise Teleworker
Mark up the existing network diagram, provided in Figure 3-37, indicating
where each of the modules would be at a high level.
Figure 3-37 Existing ACMC Hospital Network
(figure: Main Building #1, Main Building #2, and the Children’s Place each connect to nearby smaller buildings; remote clinics reach the campus over 56-kbps and dial links)
Step 2  List some key considerations or functions for each of the modules in the Cisco Enterprise Architecture. Indicate whether each module is used in the ACMC Hospital network.
Step 3  Since the initial discussions with ACMC, the following additional requirements have surfaced:
• The staff needs Internet access for purchasing supplies and reviewing research documents and new medical products.
• There has been some discussion about allowing employees to telecommute.
• ACMC has a web server for a patient communications and community relations service called “Text a Nurse.” This for-fee service allows a patient to send a text message to the hospital, requesting medical advice.
How does this new information change the design? Incorporate the changes into your high-level design, and update the list of modules and considerations.
Step 4  Which of the following infrastructure or network services are immediately applicable to your design?
• Security services
• Voice services
• Wireless
• Network management
• High availability
• QoS
• Multicast
Are there specific locations or modules where some of these services are particularly relevant?
Step 5  Indicate where redundancy should be supported in the design.
Review Questions
Answer the following questions, and then refer to Appendix A for the answers.
1. Figure 3-38 presents a sample hierarchically structured network. Some of the devices are marked with letters. Map the marked devices to the access, distribution, and core layers in this figure.
2. Describe the role of each layer in the hierarchical network model.
3. True or false: Each layer in the hierarchical network model must be implemented with distinct physical devices.
4. Which two statements are true?
a. UplinkFast immediately unblocks a blocked port after root port failure.
b. PortFast immediately puts a port into the forwarding state.
c. UplinkFast immediately puts a port into the forwarding state.
d. PortFast immediately unblocks a blocked port after root port failure.
5. What features of a multilayer switch could be used in the access layer?
6. Which layer in the hierarchical model provides media translation?
Figure 3-38 Hierarchical Network
(figure: a sample network with servers, workstations, WAN, and Internet connections; devices marked A through F are to be mapped to the access, distribution, and core layers)
7. Why might the distribution layer need to redistribute between routing protocols?
8. What are three roles of the hierarchical model’s core layer?
a. Provide fast and efficient data transport
b. Provide maximum availability and reliability
c. Provide access to the corporate network via some wide-area technology
d. Implement security policies
e. Delineate broadcast domains
f. Implement scalable routing protocols
9. What is a benefit of using multilayer switching in the core network layer?
10. What are the six major functional areas in the Cisco Enterprise Architecture?
11. What are the modules and layers within the Enterprise Campus functional area?
12. The Enterprise Edge functional area includes which modules?
13. The Service Provider functional area is composed of which modules?
14. Which module of the Cisco Enterprise Architecture includes wireless bridging connectivity to remote locations?
15. What is an advantage of using the Cisco Enterprise Architecture?
16. What is the Campus Core layer’s role?
17. Indicate which types of devices would be found in each of these modules (note that some devices are found in more than one module).
Modules:
■ E-commerce module
■ Internet Connectivity module
■ Remote Access and VPN module
Devices:
■ Web servers
■ SMTP mail servers
■ Firewalls
■ Network Intrusion Detection System (NIDS) appliances
■ DNS servers
■ ASAs
■ Public FTP servers
18. What is the role of the Service Provider functional area?
19. Which other module has a design similar to that of the Enterprise Branch module?
20. Which other module has an architecture similar to that of the Enterprise Data Center module?
21. Which module of the Cisco Enterprise Architecture provides telecommuter connectivity?
22. The SONA interactive services layer includes both ___________ services and ______________ services.
23. How can the Server Farm module be involved in an organization’s internal security?
24. High availability from end to end is possible only when ___________ is deployed throughout the internetwork.
25. What is the purpose of designing route redundancy in a network?
26. A full-mesh design is ideal for connecting a ________ number of devices.
a. small
b. large
27. True or false: Backup links can use different technologies.
28. What components are required for IP telephony?
29. What role does the Building Access layer play in voice transportation?
30. What should you consider when evaluating an existing data infrastructure for IP telephony?
31. What are the main components of a centralized WLAN deployment?
32. What is a Cisco WAE appliance?
33. What is a network management agent?
34. How does an SNMPv1 manager request a list of data?
35. How does an SNMPv2 manager request a list of data?
36. What is the MIB structure?
37. How are private MIB definitions supported?
38. What are the RMON1 groups?
39. What groups are added to the RMON1 groups by RMON2?
40. How does RMON simplify proactive network management?
41. What is a NetFlow network flow?
42. How does NetFlow compare to RMON?
43. At which layer does CDP work?
44. Two routers are connected via Frame Relay, but ping is not working between them. How could CDP help troubleshoot this situation?
45. What are the syslog severity levels?
46. What syslog severity level is indicated by the messages in Example 3-2?
Example 3-2 Sample Messages for Question 46
20:11:58: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0, changed state to down
20:12:04: %LINK-3-UPDOWN: Interface FastEthernet0/0, changed state to up
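As an aside (not part of the original text), the severity in an IOS system message can be read mechanically from the %FACILITY-SEVERITY-MNEMONIC header that both messages in Example 3-2 carry. A minimal Python sketch, assuming the standard message format:

```python
import re

def syslog_severity(message: str):
    """Return the severity digit (0-7) from a Cisco IOS syslog message.

    IOS system messages use the form %FACILITY-SEVERITY-MNEMONIC,
    where 0 = emergencies and 7 = debugging.
    """
    match = re.search(r"%[A-Z0-9_]+-(\d)-[A-Z0-9_]+", message)
    return int(match.group(1)) if match else None

# The two messages from Example 3-2:
print(syslog_severity("20:11:58: %LINEPROTO-5-UPDOWN: Line protocol on "
                      "Interface FastEthernet0/0, changed state to down"))  # 5
print(syslog_severity("20:12:04: %LINK-3-UPDOWN: Interface FastEthernet0/0, "
                      "changed state to up"))  # 3
```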
This chapter introduces general campus switching and data center design considerations. It includes the following sections:
■ Campus Design Considerations
■ Enterprise Campus Design
■ Enterprise Data Center Design Considerations
■ Summary
■ References
■ Case Study: ACMC Hospital Network Campus Design
■ Review Questions
CHAPTER 4
Designing Basic Campus and Data Center Networks
The availability of multigigabit campus switches gives customers the opportunity to build
extremely high-performance, high-reliability networks—if they follow correct network design
approaches. Unfortunately, some alternative network design approaches can result in a network
that has lower performance, reliability, and manageability.
This chapter describes a hierarchical modular design approach called multilayer design. This
chapter examines the designs of the Enterprise Campus and the Enterprise Data Center network
infrastructures. First, it addresses general campus design considerations, followed by a
discussion of the design of each of the modules and layers within the Enterprise Campus. The
chapter concludes with an introduction to design considerations for the Enterprise Data Center.
Campus Design Considerations
The multilayer approach to campus network design combines data link layer and multilayer
switching to achieve robust, highly available campus networks. This section discusses factors to
consider in a Campus LAN design.
Designing an Enterprise Campus
The Enterprise Campus network is the foundation for enabling business applications, enhancing
productivity, and providing a multitude of services to end users. The following three
characteristics should be considered when designing the campus network:
■
Network application characteristics: The organizational requirements, services, and
applications place stringent requirements on a campus network solution—for example, in
terms of bandwidth and delay.
■
Environmental characteristics: The network’s environment includes its geography and
the transmission media used.
— The physical environment of the building or buildings influences the design, as do
the number of, distribution of, and distance between the network nodes (including
end users, hosts, and network devices). Other factors include space, power, and
heating, ventilation, and air conditioning support for the network devices.
— Cabling is one of the biggest long-term investments in network deployment.
Therefore, transmission media selection depends not only on the required bandwidth
and distances, but also on the emerging technologies that might be deployed over the
same infrastructure in the future.
■
Infrastructure device characteristics: The characteristics of the network devices selected
influence the design (for example, they determine the network’s flexibility) and contribute to
the overall delay. Trade-offs between data link layer switching—based on media access
control (MAC) addresses—and multilayer switching—based on network layer addresses,
transport layer, and application awareness—need to be considered.
— High availability and high throughput are requirements that might require
consideration throughout the infrastructure.
— Most Enterprise Campus designs use a combination of data link layer switching in
the access layer and multilayer switching in the distribution and core layers.
The following sections examine these factors.
Network Application Characteristics and Considerations
The network application’s characteristics and requirements influence the design in many ways.
The applications that are critical to the organization, and the network demands of these
applications, determine enterprise traffic patterns inside the Enterprise Campus network, which
influences bandwidth usage, response times, and the selection of the transmission medium.
Different types of application communication result in varying network demands. The following
sections review four types of application communication:
■
Peer-peer
■
Client–local server
■
Client–Server Farm
■
Client–Enterprise Edge server
Peer-Peer Applications
From the network designer’s perspective, peer-peer applications include applications in which the
majority of network traffic passes from one network edge device to another through the
organization’s network, as shown in Figure 4-1. Typical peer-peer applications include the
following:
■
Instant messaging: After the connection is established, the conversation is directly between
two peers.
■
IP phone calls: Two peers establish communication with the help of an IP telephony
manager; however, the conversation occurs directly between the two peers when the
connection is established. The network requirements of IP phone calls are strict because of
the need for quality of service (QoS) treatment to minimize delay and variation in delay
(jitter).
NOTE QoS is discussed in the later section “QoS Considerations in LAN Switches.”
■
File sharing: Some operating systems and applications require direct access to data on other
workstations.
■
Videoconference systems: Videoconferencing is similar to IP telephony; however, the
network requirements are usually higher, particularly related to bandwidth consumption and
QoS.
Figure 4-1 Peer-Peer Applications
Client–Local Server Applications
Historically, clients and servers were attached to a network device on the same LAN segment and
followed the 80/20 workgroup rule for client/server applications. This rule indicates that 80
percent of the traffic is local to the LAN segment and 20 percent leaves the segment.
With increased traffic on the corporate network and a relatively fixed location for users, an
organization might split the network into several isolated segments, as shown in Figure 4-2. Each
of these segments has its own servers, known as local servers, for its application. In this scenario,
servers and users are located in the same VLAN, and department administrators manage and
control the servers. The majority of department traffic occurs in the same segment, but some data
exchange (to a different VLAN) happens over the campus backbone. The bandwidth requirements
for traffic passing to another segment typically are not crucial. For example, traffic to the Internet
goes through a common segment and has lower performance requirements than traffic to the local
segment servers.
Figure 4-2 Client–Local Server Application
Client–Server Farm Applications
Large organizations require their users to have fast, reliable, and controlled access to critical
applications.
Because high-performance multilayer switches have an insignificant switch delay, and because of
the reduced cost of network bandwidth, locating the servers centrally rather than in the workgroup
is technically feasible and reduces support costs.
To fulfill these demands and keep administrative costs down, the servers are located in a common
Server Farm, as shown in Figure 4-3. Using a Server Farm requires a network infrastructure that
is highly resilient (providing security) and redundant (providing high availability) and that
provides adequate throughput. High-end LAN switches with the fastest LAN technologies, such
as Gigabit Ethernet, are typically deployed in such an environment.
Figure 4-3 Client–Server Farm Application
In a large organization, application traffic might have to pass across more than one wiring closet,
LAN, or VLAN to reach servers in a Server Farm. Client–Server Farm applications apply the 20/80 rule, where only 20 percent of the traffic remains on the local LAN segment, and 80 percent
leaves the segment to reach centralized servers, the Internet, and so on. Such applications include
the following:
■
Organizational mail servers (such as Microsoft Exchange)
■
Common file servers (such as Microsoft and Sun)
■
Common database servers for organizational applications (such as Oracle)
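To make the contrast between the 80/20 workgroup rule and the 20/80 Server Farm rule concrete, here is a small back-of-the-envelope Python sketch. The user count and per-user average are illustrative assumptions, not figures from the text:

```python
def uplink_load_mbps(users: int, per_user_mbps: float, leave_ratio: float) -> float:
    """Estimate traffic leaving a segment: users x per-user average x share that leaves."""
    return users * per_user_mbps * leave_ratio

# Hypothetical segment: 100 users averaging 0.5 Mbps of traffic each.
print(uplink_load_mbps(100, 0.5, 0.20))  # 80/20 rule (local servers): 10.0 Mbps leaves
print(uplink_load_mbps(100, 0.5, 0.80))  # 20/80 rule (Server Farm):   40.0 Mbps leaves
```

The same user population generates four times the cross-segment load once the servers are centralized, which is why Server Farm access links are sized with high-speed technologies such as Gigabit Ethernet.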
Client–Enterprise Edge Applications
As shown in Figure 4-4, client–Enterprise Edge applications use servers on the Enterprise Edge to
exchange data between the organization and its public servers. The most important issues between
the Enterprise Campus network and the Enterprise Edge are security and high availability; data
exchange with external entities must be in constant operation. Applications installed on the
Enterprise Edge can be crucial to organizational process flow; therefore, any outages can increase
costs.
Figure 4-4 Client–Enterprise Edge Application
Typical Enterprise Edge applications are based on web technologies. Examples of these
application types—such as external mail and DNS servers and public web servers—can be found
in any organization.
Organizations that support their partnerships through e-commerce applications also place their
e-commerce servers into the Enterprise Edge. Communication with these servers is vital because
of the two-way replication of data. As a result, high redundancy and resiliency of the network,
along with security, are the most important requirements for these applications.
Application Requirements
Table 4-1 lists the types of application communication and compares their requirements with
respect to some important network parameters. The following sections discuss these parameters.
Table 4-1 Network Application Requirements

Parameter                   Peer-Peer        Client–Local     Client–Server    Client–Enterprise
                                             Server           Farm             Edge Servers
Connectivity type           Shared           Switched         Switched         Switched
Total required throughput   Medium to high   Medium           High             Medium
High availability           Low              Low              High             High
Total network cost          Low              Low              High             Medium
Connectivity
The wide use of LAN switching at Layer 2 has revolutionized local-area networking and has
resulted in increased performance and more bandwidth for satisfying the requirements of new
organizational applications. LAN switches provide this performance benefit by increasing
bandwidth and throughput for workgroups and local servers.
NOTE Using shared media for peer-to-peer communication is suitable only in a limited scope,
typically when the number of client workstations is very low (for example, with four or fewer
workstations in small home offices).
Throughput
The required throughput varies from application to application. An application that exchanges data
between users in the workgroup usually does not require a high throughput network infrastructure.
However, organizational-level applications usually require a high-capacity link to the servers,
which are usually located in the Server Farm.
NOTE Peer-peer communication, especially in the case of frequent file transfers, could be
intensive, and the total throughput requirements can be high.
Applications located on servers in the Enterprise Edge are normally not as bandwidth-consuming
as applications in the Server Farm, but they might require high availability and security features.
High Availability
The high availability of an application is a function of the application and the entire network
between a client workstation and a server located in the network. Although the network design
primarily determines the network’s availability, the individual components’ mean time between
failures (MTBF) is a factor. Redundancy in the Building Distribution and Campus Core layers is
recommended.
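The effect of such redundancy can be quantified with the standard steady-state availability formula A = MTBF / (MTBF + MTTR). A simplified Python sketch; the MTBF and MTTR figures below are illustrative assumptions, not vendor data:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a single component."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(*components: float) -> float:
    """Availability of a path through components in series: all must be up."""
    result = 1.0
    for a in components:
        result *= a
    return result

def redundant_pair(a: float) -> float:
    """Two identical components in parallel: the pair is down only if both fail."""
    return 1 - (1 - a) ** 2

# Hypothetical switch: 50,000-hour MTBF, 4-hour MTTR.
switch = availability(mtbf_hours=50_000, mttr_hours=4)
print(f"single switch:    {switch:.6f}")
print(f"redundant pair:   {redundant_pair(switch):.9f}")
print(f"access-dist path: {series(switch, redundant_pair(switch)):.6f}")
```

The sketch shows why redundancy is concentrated in the Building Distribution and Campus Core layers: duplicating a component turns its unavailability into that unavailability squared, while every non-redundant component in the series path caps the end-to-end figure.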
Total Network Cost
Depending on the application and the resulting network infrastructure, the cost varies from low in
a peer-peer environment to high in a network with redundancy in the Building Distribution,
Campus Core, and Server Farm. In addition to the cost of duplicate components for redundancy,
costs include the cables, routers, switches, software, and so forth.
Environmental Characteristics and Considerations
The campus environment, including the location of the network nodes, the distance between the
nodes, and the transmission media used, influences the network topology. This section examines
these considerations.
Network Geography Considerations
The location of Enterprise Campus nodes and the distances between them determine the network’s
geography.
Nodes, including end-user workstations and servers, can be located in one or multiple buildings.
Based on the location of nodes and the distance between them, the network designer decides
which technology should interconnect them based on the required maximum speed, distance, and
so forth.
Consider the following structures with respect to the network geography:
■
Intrabuilding
■
Interbuilding
■
Distant remote building
These geographic structures, described in the following sections, serve as guides to help determine
Enterprise Campus transmission media and the logical modularization of the Enterprise Campus
network.
Intrabuilding Structure
An intrabuilding campus network structure provides connectivity for all end nodes located in the
same building and gives them access to the network resources. The Building Access and Building
Distribution layers are typically located in the same building.
User workstations are usually attached to the Building Access switches in the floor wiring closet
with twisted-pair copper cables. Wireless LANs (WLAN) can also be used to provide
intrabuilding connectivity, enabling users to establish and maintain a wireless network connection
throughout—or between—buildings, without the limitations of wires or cables.
NOTE WLANs are covered in Chapter 9, “Wireless Network Design Considerations.”
Access layer switches usually connect to the Building Distribution switches over optical fiber,
providing better transmission performance and less sensitivity to environmental disturbances than
copper. Depending on the connectivity requirements to resources in other parts of the campus, the
Building Distribution switches may be connected to Campus Core switches.
Interbuilding Structure
As shown in Figure 4-5, an interbuilding network structure provides connectivity between the
individual campus buildings’ central switches (in the Building Distribution and/or Campus Core
layers). These buildings are usually in close proximity, typically only a few hundred meters to a
few kilometers apart.
Figure 4-5 Interbuilding Network Structure (Building A and Building B interconnected at the Building Distribution/Campus Core)
Because the nodes in all campus buildings usually share common devices such as servers, the
demand for high-speed connectivity between the buildings is high. Within a campus, companies
might deploy their own physical transmission media. To provide high throughput without
excessive interference from environmental conditions, optical fiber is the medium of choice
between the buildings.
Depending on the connectivity requirements to resources in other parts of the campus, the
Building Distribution switches might be connected to Campus Core switches.
Distant Remote Building Structure
When connecting buildings at distances that exceed a few kilometers (but still within a
metropolitan area), the most important factor to consider is the physical media. The speed and cost
of the network infrastructure depend heavily on the media selection.
If the bandwidth requirements are higher than the physical connectivity options can support, the
network designer must identify the organization’s critical applications and then select the
equipment that supports intelligent network services—such as QoS and filtering capabilities—that
allow optimal use of the bandwidth.
Some companies might own their media, such as fiber, microwave, or copper lines. However, if
the organization does not own physical transmission media to certain remote locations, the
Enterprise Campus must connect through the Enterprise Edge using connectivity options from
public service providers, such as traditional WAN links or Metro Ethernet.
The risk of downtime and the service level agreements available from the service providers must
also be considered. For example, inexpensive but unreliable and slowly repaired fiber is not
desirable for mission-critical applications.
NOTE Chapter 5, “Designing Remote Connectivity,” includes further discussion of
connecting remote locations.
Transmission Media Considerations
An Enterprise Campus can use various physical media to interconnect devices. The type of cable
is an important consideration when deploying a new network or upgrading an existing one.
Cabling infrastructure represents a long-term investment—it is usually installed to last for ten
years or more. The cost of the medium (including installation costs) and the available budget must
be considered in addition to the technical characteristics such as signal attenuation and
electromagnetic interference.
A network designer must be aware of physical media characteristics, because they influence the
maximum distance permitted between devices and the network’s maximum transmission speed.
Twisted-pair cables (copper), optical cables (fiber), and wireless (satellite, microwave, and
Institute of Electrical and Electronics Engineers [IEEE] 802.11 LANs) are the most common
physical transmission media used in modern networks.
Copper
Twisted-pair cables consist of four pairs of isolated wires that are wrapped together in plastic
cable. With unshielded twisted-pair (UTP), no additional foil or wire is wrapped around the core
wires. This makes these wires less expensive, but also less immune to external electromagnetic
influences than shielded twisted-pair cables. Twisted-pair cabling is widely used to interconnect
workstations, servers, or other devices from their network interface card (NIC) to the network
connector at a wall outlet.
The characteristics of twisted-pair cable depend on the quality of the material from which they are
made. As a result, twisted-pair cables are sorted into categories. Category 5 or greater is
recommended for speeds of 100 megabits per second (Mbps) or higher. Category 6 is
recommended for Gigabit Ethernet. Because of the possibility of signal attenuation in the wires,
the maximum cable length is usually limited to 100 meters. One reason for this length limitation
is collision detection. If one PC starts to transmit and another PC is more than 100 meters away,
the second PC might not detect the signal on the wire and could therefore start to transmit at the
same time, causing a collision on the wire.
One of the main considerations in network cabling design is electromagnetic interference. Due to
high susceptibility to interference, twisted pair is not suitable for use in environments with
electromagnetic influences. Similarly, twisted pair is not appropriate for environments that can be
affected by the interference created by the cable itself.
NOTE Some security issues are also associated with electromagnetic interference. Hackers
with access to the cabling infrastructure might eavesdrop on the traffic carried across UTP,
because these cables emit electromagnetic signals that can be detected.
Distances longer than 100 meters may require Long-Reach Ethernet (LRE). LRE is a Cisco-proprietary technology that runs on voice-grade copper wires; it supports longer distances than traditional Ethernet and is used as an access technology in WANs. Chapter 5 further describes
LRE.
Optical Fiber
Typical requirements that lead to the selection of optical fiber cable as a transmission medium
include distances longer than 100 meters and immunity to electromagnetic interference. Different
types of optical cable exist; the two main types are multimode (MM) and single-mode (SM).
Multimode fiber is optical fiber that carries multiple light waves or modes concurrently, each at a
slightly different reflection angle within the optical fiber core. Because modes tend to disperse
over longer lengths (modal dispersion), MM fiber transmission is used for relatively short
distances. Typically, LEDs are used with MM fiber. The typical diameter of an MM fiber is 50 or
62.5 micrometers.
Single-mode (also known as monomode) fiber is optical fiber that carries a single wave (or laser)
of light. Lasers are typically used with SM fiber. The typical diameter of an SM fiber core is
between 2 and 10 micrometers. Single-mode fiber limits dispersion and loss of light, and therefore
allows for higher transmission speeds, but it is more expensive than multimode fiber.
Both MM and SM cables have lower loss of signal than copper cable. Therefore, optical cables
allow longer distances between devices. Optical fiber cable has precise production and installation
requirements; therefore, it costs more than twisted-pair cable.
Optical fiber requires a precise technique for cable coupling. Even a small deviation from the ideal
position of optical connectors can result in either a loss of signal or a large number of frame losses.
Careful attention during optical fiber installation is imperative because of the traffic’s high
sensitivity to coupling misalignment. In environments where the cable does not consist of a single
fiber from point to point, coupling is required, and loss of signal can easily occur.
Wireless
The inherent nature of wireless is that it does not require wires to carry information across
geographic areas that are otherwise prohibitive to connect. WLANs can either replace a traditional
wired network or extend its reach and capabilities. In-building WLAN equipment includes access
points (AP) that perform functions similar to wired networking hubs, and PC client adapters. APs
are distributed throughout a building to expand range and functionality for wireless clients.
Wireless bridges and APs can also be used for interbuilding connectivity and outdoor wireless
client access.
Wireless clients supporting IEEE 802.11g allow speeds of up to 54 Mbps in the 2.4-GHz band over a range of about 100 feet. The IEEE 802.11b standard supports speeds of up to 11 Mbps in the 2.4-GHz band. The IEEE 802.11a standard supports speeds of up to 54 Mbps in the 5-GHz band.
NOTE Wireless issues are discussed further in Chapter 9.
Transmission Media Comparison
Table 4-2 presents various characteristics of the transmission media types.
Table 4-2 Transmission Media Type Characteristics

Parameter         Copper Twisted Pair   MM Fiber                  SM Fiber                  Wireless
Distance (range)  Up to 100 meters      Up to 2 kilometers (km)   Up to 10 km               Up to 500 m
                                        (Fast Ethernet)           (Fast Ethernet)           at 1 Mbps
                                        Up to 550 m               Up to 5 km
                                        (Gigabit Ethernet)        (Gigabit Ethernet)
                                        Up to 300 m               Up to 80 km
                                        (10 Gigabit Ethernet)     (10 Gigabit Ethernet)
Bandwidth         Up to 10 Gigabits     Up to 10 Gbps             Up to 10 Gbps or higher   Up to 54 Mbps¹
                  per second (Gbps)
Price             Inexpensive           Moderate                  Moderate to expensive     Moderate
Deployment area   Wiring closet         Internode or              Internode or              Internode or
                                        interbuilding             interbuilding             interbuilding

¹Wireless is half-duplex, so effective bandwidth will be no more than half of this rate.
The parameters listed in Table 4-2 are as follows:
■
Distance: The maximum distance between network devices (such as workstations, servers,
printers, and IP phones) and network nodes, and between network nodes. The distances
supported with fiber vary, depending on whether it supports Fast Ethernet or Gigabit Ethernet,
the type of fiber used, and the fiber interface used.
■
Bandwidth: The required bandwidth in a particular segment of the network, or the connection
speed between the nodes inside or outside the building.
NOTE The wireless throughput is significantly less than its maximum data rate due to the
half-duplex nature of radio frequency technology.
■
Price: Along with the price of the medium, the installation cost must be considered. For
example, fiber installation costs are significantly higher than copper installation costs because
of strict requirements for optical cable coupling.
■
Deployment area: Indicates whether wiring is for wiring closet only (where users access the
network), for internode, or for interbuilding connections.
When deploying devices in an area with high electrical or magnetic interference—for
example, in an industrial environment—you must pay special attention to media selection. In
such environments, the disturbances might interfere with data transfer and therefore result in
an increased number of frame errors. Electrical grounding can isolate some external
disturbance, but the additional wiring increases costs. Fiber-optic installation is the only
reasonable solution for such networks.
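The distance and bandwidth limits above lend themselves to a simple lookup when shortlisting media for a link. A rough Python sketch; the reach figures are taken from Table 4-2 and simplified to Gigabit and 10 Gigabit Ethernet, and real reach also depends on the cable category, fiber grade, and optics used:

```python
# medium -> {speed in Gbps: maximum reach in meters}, per Table 4-2 (simplified)
REACH_M = {
    "UTP copper": {1: 100, 10: 100},
    "MM fiber":   {1: 550, 10: 300},
    "SM fiber":   {1: 5_000, 10: 80_000},
}

def media_options(distance_m: float, speed_gbps: int):
    """List the media whose reach covers the link at the requested speed."""
    return [medium for medium, table in REACH_M.items()
            if table.get(speed_gbps, 0) >= distance_m]

print(media_options(90, 1))    # all three media cover 90 m at 1 Gbps
print(media_options(400, 10))  # only SM fiber covers 400 m at 10 Gbps
```

A lookup like this also makes the future-proofing trade-off visible: a 400 m interbuilding run works on MM fiber at 1 Gbps today but forces SM fiber once 10 Gbps is required.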
Cabling Example
Figure 4-6 illustrates a typical campus network structure. End devices such as workstations, IP
phones, and printers are no more than 100 m away from the LAN switch. UTP wiring can easily
handle the required distance and speed; it is also easy to set up, and the price-performance ratio is
reasonable.
Figure 4-6 Campus Networks Use Many Different Types of Cables (UTP cable for Fast Ethernet: <100 m; MM fiber for Gigabit Ethernet: <200 m; SM fiber for Gigabit Ethernet: <5 km)
NOTE The distances shown in the figure are for a sample network; however, the maximum
distance supported varies depending on the fiber interface used.
Optical fiber cables handle the higher speeds and distances that may be required among switch
devices. MM optical cable is usually satisfactory inside the building. Depending on distance,
organizations use MM or SM optical for interbuilding communication cable. If the distances are
short (up to 500 m), MM fiber is a more reasonable solution for speeds up to 1 Gbps.
However, an organization can install SM fiber if its requirements are for longer distances, or if
there are plans for future higher speeds (for example, 10 Gbps).
NOTE Selecting the less expensive type of fiber might satisfy a customer’s current needs, but
this fiber might not meet the needs of future upgrades or equipment replacement. Replacing
cable can be very expensive. Planning with future requirements in mind might result in higher
initial costs but lower costs in the long run.
Infrastructure Device Characteristics and Considerations
Network end-user devices are commonly connected using switched technology rather than using
a shared media segment. Switched technology provides dedicated network bandwidth for each
device on the network. Switched networks can support network infrastructure services, such as
QoS, security, and management; a shared media segment cannot support these features.
In the past, LAN switches were Layer 2–only devices. Data link layer (Layer 2) switching
supports multiple simultaneous frame flows. Multilayer switching performs packet switching and
several functions at Layer 3 and at higher Open Systems Interconnection (OSI) layers and can
effectively replace routers in the LAN switched environment. Deciding whether to deploy pure
data link layer switches or multilayer switches in the enterprise network is not a trivial decision.
It requires a full understanding of the network topology and user demands.
KEY POINT: The difference between data link layer and multilayer switching is the type of information used inside the frame to determine the correct output interface.

Data link layer switching forwards frames based on data link layer information (the MAC address), whereas multilayer switching forwards frames based on network layer information (such as the IP address).

Multilayer switching is hardware-based switching and routing integrated into a single platform. See the upcoming “Multilayer Switching and Cisco Express Forwarding” section for implementation details.
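The key point above can be illustrated with a conceptual sketch (this models the forwarding logic only, not how switch hardware actually implements it): a Layer 2 switch does an exact-match lookup on the destination MAC address, while a multilayer switch performs a longest-prefix match on the destination network address.

```python
import ipaddress

def l2_forward(mac_table: dict, dst_mac: str) -> str:
    """Data link layer switching: exact match on the destination MAC address."""
    return mac_table.get(dst_mac, "flood to all ports in the VLAN")

def l3_forward(routes: dict, dst_ip: str):
    """Multilayer switching: longest-prefix match on the destination IP address."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(prefix), port)
               for prefix, port in routes.items()
               if addr in ipaddress.ip_network(prefix)]
    if not matches:
        return None  # no route: packet is dropped
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(l2_forward({"0000.0c12.3456": "Gi0/1"}, "0000.0c12.3456"))   # Gi0/1
print(l3_forward({"10.0.0.0/8": "Gi0/2", "10.1.0.0/16": "Gi0/3"},
                 "10.1.2.3"))                                      # Gi0/3
```

Note how the Layer 3 lookup prefers the more specific 10.1.0.0/16 route even though 10.0.0.0/8 also matches; a MAC table has no such notion of specificity.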
When deciding on the type of switch to use and the features to be deployed in a network, consider
the following factors:
■ Infrastructure service capabilities: The network services that the organization requires (IP multicast, QoS, and so on).
■ Size of the network segments: How the network is segmented and how many end devices will be connected, based on traffic characteristics.
■ Convergence time: The maximum amount of time the network will be unavailable in the event of network outages.
■ Cost: The budget for the network infrastructure. Note that multilayer switches are typically more expensive than their Layer 2 counterparts; however, multilayer functionality can be obtained by adding cards and software to a modular Layer 2 switch.
The following sections examine these infrastructure characteristics: convergence time, multilayer switching and Cisco Express Forwarding, IP multicast, QoS, and load sharing.
Convergence Time
Loop-prevention mechanisms in a Layer 2 topology cause the Spanning Tree Protocol (STP) to
take between 30 and 50 seconds to converge. To eliminate STP convergence issues in the Campus
Core, all the links connecting core switches should be routed links, not VLAN trunks. This also
limits the broadcast and failure domains.
NOTE STP tools are covered in the section “The Cisco STP Toolkit” later in this chapter.
In the case where multilayer switching is deployed everywhere, convergence is within seconds
(depending on the routing protocol implemented) because all the devices detect their connected
link failure immediately and act on it promptly (sending respective routing updates).
In a mixed Layer 2 and Layer 3 environment, the convergence time depends not only on the Layer
3 factors (including routing protocol timers such as hold-time and neighbor loss detection), but
also on the STP convergence.
Using multilayer switching in a structured design reduces the scope of spanning-tree domains. It
is common to use a routing protocol, such as Enhanced Interior Gateway Routing Protocol
(EIGRP) or Open Shortest Path First (OSPF), to handle load balancing, redundancy, and recovery
in the Campus Core.
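As a sketch of this recommendation, a link between core switches can be configured as a routed interface rather than a VLAN trunk, so that recovery is handled by the routing protocol instead of STP (IOS-style syntax; the interface, addressing, and OSPF process number here are hypothetical and vary by platform):

```
! Core-facing uplink configured as a routed link, not a trunk,
! so link failures are handled by OSPF rather than by STP
interface GigabitEthernet1/1
 description Link to Campus Core switch
 no switchport
 ip address 10.10.1.1 255.255.255.252
!
router ospf 1
 network 10.10.1.0 0.0.0.3 area 0
```

Because no VLAN spans this link, its failure domain is limited to the routed subnet, and convergence is governed by the routing protocol timers rather than by spanning-tree states.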
Campus Design Considerations
Multilayer Switching and Cisco Express Forwarding
As noted in Chapter 3, “Structuring and Modularizing the Network,” in this book the term
multilayer switching denotes a switch’s generic capability to use information at different protocol
layers as part of the switching process; the term Layer 3 switching is a synonym for multilayer
switching in this context.
The use of protocol information from multiple layers in the switching process is implemented in
two different ways within Cisco switches. The first way is called multilayer switching (MLS), and
the second way is called Cisco Express Forwarding.
Multilayer Switching
Multilayer switching, as its name implies, allows switching to take place at different protocol
layers. Switching can be performed only on Layers 2 and 3, or it can also include Layer 4. MLS
is based on network flows.
KEY POINT: A network flow is a unidirectional sequence of packets between a source and a destination. Flows can be very specific. For example, a network flow can be identified by source and destination IP addresses, protocol numbers, and port numbers, as well as the interface on which the packet enters the switch.
The three major components of MLS are as follows:
■ MLS Route Processor (MLS-RP): The MLS-enabled router that performs the traditional function of routing between subnets
■ MLS Switching Engine (MLS-SE): The MLS-enabled switch that can offload some of the packet-switching functionality from the MLS-RP
■ Multilayer Switching Protocol (MLSP): Used by the MLS-RP and the MLS-SE to communicate with each other
KEY POINT: MLS allows communication between two devices that are in different VLANs (on different subnets), that are connected to the same MLS-SE, and that share a common MLS-RP. The communication bypasses the MLS-RP and instead uses the MLS-SE to relay the packets, thus improving overall performance.
MLS History
Pure MLS is an older technique used on the Catalyst 5500 switches with a Route Switch Module
(manually configured as the MLS-RP) and a Supervisor Engine III with a NetFlow Feature Card
(manually configured as the MLS-SE). The first packet of a flow is routed by the MLS-RP,
whereas the MLS-SE records (caches) all flow, or header, information; all subsequent packets in
the identical flow are hardware-switched by the MLS-SE.
Most of Cisco’s modern multilayer switches use Cisco Express Forwarding–based multilayer
switching (as described in the next section), using hardware integrated in the switch platform.
Cisco Express Forwarding
Cisco Express Forwarding, like MLS, aims to speed the data routing and forwarding process in a
network. However, the two methods use different approaches.
Cisco Express Forwarding uses two components to optimize the lookup of the information
required to route packets: the Forwarding Information Base (FIB) for the Layer 3 information and
the adjacency table for the Layer 2 information.
Cisco Express Forwarding creates an FIB by maintaining a copy of the forwarding information
contained in the IP routing table. The information is indexed, so it is quick to search for matching
entries as packets are processed. Whenever the routing table changes, the FIB is also changed so
that it always contains up-to-date paths. A separate routing cache is not required.
The adjacency table contains Layer 2 frame header information, including next-hop addresses, for
all FIB entries. Each FIB entry can point to multiple adjacency table entries—for example, if two
paths exist between devices for load balancing.
After a packet is processed and the route is determined from the FIB, the Layer 2 next-hop and
header information is retrieved from the adjacency table, and the new frame is created to
encapsulate the packet.
Cisco Express Forwarding can be enabled on a router (for example, on a Cisco 7600 Series router)
or on a switch with Layer 3 functionality (such as the Catalyst 6500 Series switch).
NOTE Not all Catalyst switches support Cisco Express Forwarding. See the specific product
documentation on the Cisco website for device support information.
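Where supported, Cisco Express Forwarding is enabled globally, and its two data structures can be inspected with show commands (a hedged sketch; command availability and output format differ across router and Catalyst platforms):

```
! Enable CEF globally (on many platforms it is on by default)
ip cef
!
! Inspect the FIB (Layer 3 prefixes) and the adjacency table
! (Layer 2 rewrite information for each next hop)
show ip cef
show adjacency detail
```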
IP Multicast
A traditional IP network is not efficient when sending the same data to many locations; the data is
sent in unicast packets and therefore is replicated on the network for each destination. For
example, if a CEO’s annual video address is sent out on a company’s network for all employees
to watch, the same data stream must be replicated for each employee. Obviously, this would
consume many resources, including precious WAN bandwidth.
IP multicast technology enables networks to send data to a group of destinations in the most
efficient way. The data is sent from the source as one stream; this single data stream travels as far
as it can in the network. Devices replicate the data only if they need to send it out on multiple
interfaces to reach all members of the destination group.
Multicast groups are identified by Class D IP addresses, which are in the range from 224.0.0.0 to
239.255.255.255. IP multicast involves some new protocols for network devices, including two for
informing network devices which hosts require which multicast data stream and one for
determining the best way to route multicast traffic. These three protocols are described in the
following sections.
Internet Group Management Protocol and Cisco Group Management Protocol
Internet Group Management Protocol (IGMP) is used between hosts and their local routers. Hosts
register with the router to join (and leave) specific multicast groups; the router then knows that it
needs to forward the data stream destined for a specific multicast group to the registered hosts.
In a typical network, hosts are not directly connected to routers but are connected to a Layer 2
switch, which is in turn connected to the router. IGMP is a network layer (Layer 3) protocol.
Consequently, Layer 2 switches do not participate in IGMP and therefore are not aware of which
hosts attached to them might be part of a particular multicast group. By default, Layer 2 switches
flood multicast frames to all ports (except the port from which the frame originated), which means
that all multicast traffic received by a switch would be sent out on all ports, even if only one device
on one port required the data stream. Cisco therefore developed Cisco Group Management
Protocol (CGMP), which is used between switches and routers. The routers tell each of their
directly connected switches about IGMP registrations that were received from hosts through the
switch—in other words, from hosts accessible through the switch. The switch then forwards the
multicast traffic only to ports that those requesting hosts are on, rather than flooding the data to all
ports. Switches, including non-Cisco switches, can alternatively use IGMP snooping to eavesdrop
on the IGMP messages sent between routers and hosts to learn similar information.
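A minimal sketch of these two options follows (IOS-style commands; whether CGMP or IGMP snooping applies depends on the switch platform, and exact syntax varies by software version):

```
! On a Catalyst switch: constrain multicast flooding by snooping
! on IGMP messages (enabled by default on many platforms)
ip igmp snooping
!
! On the router interface facing a CGMP-capable switch: advertise
! IGMP registrations to the switch (CGMP requires PIM to be
! enabled on the interface)
interface FastEthernet0/0
 ip cgmp
```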
Figure 4-7 illustrates the interaction of these two protocols. Hosts A and D register, using IGMP,
to join the multicast group to receive data from the server. The router informs both switches of
these registrations using CGMP. When the router forwards the multicast data to the hosts, the
switches ensure that the data goes out of only the ports on which hosts A and D are connected. The
ports on which hosts B and C are connected do not receive the multicast data.
Figure 4-7 IGMP and CGMP Tell Network Devices Which Hosts Want Which Multicast Data
(The figure shows a server and router connected to two switches with hosts A, B, C, and D. Hosts A and D send IGMP joins; the router relays the registrations to the switches using CGMP, so the multicast data reaches only the ports of hosts A and D.)
Protocol-Independent Multicast Routing Protocol
Protocol-Independent Multicast (PIM) is used by routers that forward multicast packets. The
protocol-independent part of the name indicates that PIM is independent of the unicast routing
protocol (for example, EIGRP or OSPF) running in the network. PIM uses the normal routing
table, populated by the unicast routing protocol, in its multicast routing calculations.
NOTE EIGRP, OSPF, and so forth are called unicast routing protocols because they are used
to create and maintain unicast routing information in the routing table. Recall, though, that they
use multicast packets (or broadcast packets in some protocols) to send their routing update
traffic. Note that a variant of OSPF, called multicast OSPF, supports multicast routing; Cisco
routers do not support multicast OSPF.
Unlike other routing protocols, no routing updates are sent between PIM routers.
When a router forwards a unicast packet, it looks up the destination address in its routing table and
forwards the packet out of the appropriate interface. However, when forwarding a multicast
packet, the router might have to forward the packet out of multiple interfaces, toward all the
receiving hosts. Multicast-enabled routers use PIM to dynamically create distribution trees that
control the path that IP multicast traffic takes through the network to deliver traffic to all receivers.
The following two types of distribution trees exist:
■ Source tree: A source tree is created for each source sending to each multicast group. The source tree has its root at the source and has branches through the network to the receivers.
■ Shared tree: A shared tree is a single tree that is shared between all sources for each multicast group. The shared tree has a single common root, called a rendezvous point (RP).
Multicast routers consider the source address of the multicast packet as well as the destination
address, and they use the distribution tree to forward the packet away from the source and toward
the destination. Forwarding multicast traffic away from the source, rather than to the receiver, is
called Reverse Path Forwarding (RPF). To avoid routing loops, RPF uses the unicast routing table
to determine the upstream (toward the source) and downstream (away from the source) neighbors
and ensures that only one interface on the router is considered to be an incoming interface for data
from a specific source. For example, data received on one router interface and forwarded out
another interface can loop around the network and come back into the same router on a different
interface; RPF ensures that this data is not forwarded again.
PIM operates in one of the following two modes:
■ Sparse mode: This mode uses a “pull” model to send multicast traffic. Sparse mode uses a shared tree and therefore requires an RP to be defined. Sources register with the RP. Routers along the path from active receivers that have explicitly requested to join a specific multicast group register to join that group. These routers calculate, using the unicast routing table, whether they have a better metric to the RP or to the source itself; they forward the join message to the device with the better metric.
■ Dense mode: This mode uses a “push” model that floods multicast traffic to the entire network. Dense mode uses source trees. Routers that have no need for the data (because they are not connected to receivers that want the data or to other routers that want it) request that the tree be pruned so that they no longer receive the data.
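For example, sparse mode with a statically configured RP might be sketched as follows (the addresses and interface names are hypothetical; in practice the RP can also be learned dynamically, such as through Auto-RP):

```
! Enable multicast routing globally
ip multicast-routing
!
! Enable PIM sparse mode on each multicast-enabled interface
interface GigabitEthernet0/1
 ip pim sparse-mode
!
! Statically define the rendezvous point for the shared tree
ip pim rp-address 10.1.1.1
```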
QoS Considerations in LAN Switches
A campus network transports many types of applications and data, which might include high-quality video and delay-sensitive data (such as real-time voice). Bandwidth-intensive applications
Networks must provide secure, predictable, measurable, and sometimes guaranteed services.
Achieving the required QoS by managing delay, delay variation (jitter), bandwidth, and packet
loss parameters on a network can be the key to a successful end-to-end business solution. QoS
mechanisms are techniques used to manage network resources.
The assumption that a high-capacity, nonblocking switch with multigigabit backplanes never
needs QoS is incorrect. Many networks or individual network elements are oversubscribed; it is
easy to create scenarios in which congestion can potentially occur and that therefore require some
form of QoS. The sum of the bandwidths on all ports on a switch where end devices are connected
is usually greater than that of the uplink port; when the access ports are fully used, congestion on
the uplink port is unavoidable. Uplinks from the Building Access layer to the Building Distribution
layer, or from the Building Distribution layer to the Campus Core layer, most often require QoS.
Depending on traffic flow and uplink oversubscription, bandwidth is managed with QoS
mechanisms on the Building Access, Building Distribution, or even Campus Core switches.
QoS Mechanisms
QoS mechanisms or tools implemented on LAN switches include the following:
■ Classification and marking: Packet classification is the process of partitioning traffic into multiple priority levels, or classes of service. Information in the frame or packet header is inspected, and the frame’s priority is determined. Marking is the process of changing the priority or class of service (CoS) setting within a frame or packet to indicate its classification. For IEEE 802.1Q frames, the 3 user priority bits in the Tag field—commonly referred to as the 802.1p bits—are used as CoS bits. However, Layer 2 markings are not useful as end-to-end QoS indicators, because the medium often changes throughout a network (for example, from Ethernet to a Frame Relay WAN). Thus, Layer 3 markings are required to support end-to-end QoS.
For IPv4, Layer 3 marking can be done using the 8-bit type of service (ToS) field in the packet header. Originally, only the first 3 bits were used; these bits are called the IP Precedence bits. Because 3 bits can specify only eight marking values, IP precedence does not allow a granular classification of traffic. Thus, more bits are now used: the first 6 bits in the ToS field are now known as the DiffServ Code Point (DSCP) bits.
NOTE Two models exist for deploying end-to-end QoS in a network for traffic that is not
suitable for best-effort service: Integrated Services (IntServ) and Differentiated Services
(DiffServ). End-to-end QoS means that the network provides the level of service required by
traffic throughout the entire network, from one end to the other.
With IntServ, an application requests services from the network, and the network devices
confirm that they can meet the request, before any data is sent. The data from the application is
considered a flow of packets.
In contrast, with DiffServ, each packet is marked as it enters the network based on the type of
traffic that it contains. The network devices then use this marking to determine how to handle
the packet as it travels through the network. The DSCP bits are used to implement the DiffServ
model.
■ Congestion management, queuing: Queuing separates traffic into various queues or buffers; the marking in the frame or packet can be used to determine which queue traffic goes in. A network interface is often congested (even at high speeds, transient congestion is observed); queuing techniques ensure that traffic from critical applications is forwarded appropriately. For example, real-time applications such as VoIP and stock trading might have to be forwarded with the least latency and jitter.
■ Congestion management, scheduling: Scheduling is the process that determines the order in which queues are serviced.
■ Policing and shaping: Policing and shaping tools identify traffic that violates a threshold level and reduce the stream of data to a predetermined rate or level. Traffic shaping buffers the excess frames for a short time. Policing simply drops or lowers the priority of frames that are out of profile.
NOTE Later chapters in this book describe two other QoS mechanisms: congestion avoidance
and link efficiency techniques.
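The classification, marking, and queuing steps above can be sketched with the Modular QoS CLI (the class and policy names, access list, and bandwidth value are invented for illustration, and LAN-switch QoS syntax differs by platform and software version):

```
! Classify voice traffic by matching its UDP port range
access-list 101 permit udp any any range 16384 32767
!
class-map match-all VOICE
 match access-group 101
!
! Mark voice with DSCP EF and service it from a priority queue;
! everything else gets fair queuing
policy-map MARK-AND-QUEUE
 class VOICE
  set ip dscp ef
  priority 128
 class class-default
  fair-queue
!
interface GigabitEthernet0/2
 service-policy output MARK-AND-QUEUE
```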
QoS in LAN Switches
When configuring QoS features, classify the specific network traffic, prioritize and mark it
according to its relative importance, and use congestion management and policing and shaping
techniques to provide preferential treatment. Implementing QoS in the network makes network
performance more predictable and bandwidth use more effective. Figure 4-8 illustrates where the
various categories of QoS may be implemented in LAN switches.
Figure 4-8 QoS in LAN Switches
(The figure shows data link layer switching with classification, marking, and congestion management at the Building Access layer, and multilayer switching with congestion management, policing, and shaping at the Building Distribution and Campus Core layers.)
Data link layer switches are commonly used in the Building Access layer. Because they do not
have knowledge of Layer 3 or higher information, these switches provide QoS classification and
marking based only on the switch’s input port or MAC address. For example, traffic from a
particular host can be defined as high-priority traffic on the uplink port. Multilayer switches may
be used in the Building Access layer if Layer 3 services are required.
Building Distribution layer, Campus Core layer, and Server Farm switches are typically multilayer switches and can provide QoS selectively—not only on a port basis, but also according to higher-layer parameters, such as IP addresses, port numbers, or QoS bits in the IP packet. These switches make QoS classification more selective by differentiating the traffic based on the application. QoS in distribution and core switches must be provided in both directions of traffic flow. The policing for certain traffic is usually implemented on the distribution layer switches.
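As a hedged illustration, a Catalyst-style distribution switch might trust the markings arriving from the access layer rather than reclassifying (the `mls qos` commands are platform specific, and the interface name is hypothetical):

```
! Enable QoS processing globally on the switch
mls qos
!
! Uplink from the access layer: trust the DSCP values that the
! access switch (or an attached IP phone) has already marked
interface GigabitEthernet0/1
 mls qos trust dscp
```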
Load Sharing in Layer 2 and Layer 3 Switches
Layer 2 and Layer 3 switches handle load sharing differently.
Layer 2 Load Sharing
Because Layer 2 switches are aware of only MAC addresses, they cannot perform any intelligent load sharing. In an environment characterized by multiple VLANs per access switch and more than one connection to the uplink switch, the solution is to put all uplink connections into trunks (Inter-Switch Link [ISL] or IEEE 802.1Q). Each trunk carries all VLANs; however, without additional configuration, STP disables all nonprimary uplink ports. This
configuration might result in a bandwidth shortage, because the traffic for all the VLANs passes
through the same link. To overcome this problem, the STP parameters must be configured to carry
some VLANs across one uplink and the rest of the VLANs across the other uplink. For example,
one uplink could be configured to carry the VLANs with odd numbers, whereas the other uplink
would be configured to carry the VLANs with even numbers.
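This odd/even VLAN split can be sketched by tuning per-VLAN spanning-tree priorities on the two uplink (distribution) switches; the VLAN numbers and priority values below are purely illustrative:

```
! Distribution switch A: preferred root for odd VLANs,
! backup root for even VLANs
spanning-tree vlan 1,3,5 priority 4096
spanning-tree vlan 2,4,6 priority 8192
!
! Distribution switch B: the mirror image
spanning-tree vlan 2,4,6 priority 4096
spanning-tree vlan 1,3,5 priority 8192
```

With this arrangement, each access switch forwards odd-numbered VLANs on one uplink and even-numbered VLANs on the other, so both links carry traffic.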
NOTE Some options related to STP are described in the “Building Access Layer Design
Considerations” section on the next page.
Layer 3 Load Sharing
Layer 3–capable switches can perform load sharing based on IP addresses, either per packet or per
destination-source IP pair.
The advantage of Layer 3 IP load sharing is that links are used more proportionately than with
Layer 2 load sharing, which is based on VLANs only. For example, the traffic in one VLAN can
be very heavy, while the traffic in another VLAN is very low; in this case, per-VLAN load sharing
by using even and odd VLANs is not appropriate. Due to the dynamic nature of organizational
applications, Layer 3 load sharing is more appropriate. Layer 3 allows for dynamic adaptation to
link utilization and depends on the routing protocol design. Layer 3 switches also support Layer
2 load sharing, so they can still apply per-VLAN load sharing while connected to other Layer 2
switches.
Enterprise Campus Design
As discussed in Chapter 3, the Enterprise Campus functional area is divided into the following
modules:
■ Campus Infrastructure—This module includes three layers:
— The Building Access layer
— The Building Distribution layer
— The Campus Core layer
■ Server Farm
■ Edge Distribution (optional)
This section discusses the design of each of the layers and modules within the Enterprise Campus
and identifies best practices related to the design of each.
Enterprise Campus Requirements
As shown in Table 4-3, each Enterprise Campus module has different requirements. For example, the table illustrates how modules located closer to the users require a higher degree of scalability so that the campus network can be expanded in the future without redesigning the complete network; adding new workstations to a network should result in neither high investment cost nor performance degradation.
Table 4-3 Enterprise Campus Design Requirements

Requirement        Building Access      Building Distribution  Campus Core  Server Farm  Edge Distribution
Technology         Data link layer or   Multilayer switched    Multilayer   Multilayer   Multilayer
                   multilayer switched                         switched     switched     switched
Scalability        High                 Medium                 Low          Medium       Low
High availability  Medium               Medium                 High         High         Medium
Performance        Medium               Medium                 High         High         Medium
Cost per port      Low                  Medium                 High         High         Medium
End users (in the Building Access layer) usually do not require high performance or high
availability, but these features are crucial to the Campus Core layer and the Server Farm module.
The price per port increases with increased performance and availability. The Campus Core and
Server Farm require a guarantee of higher throughput so they can handle all traffic flows and not
introduce additional delays or drops to the network traffic.
The Edge Distribution module does not require the same performance as the Campus Core. However, it can require other features and functionality that increase the overall cost.
Building Access Layer Design Considerations
When implementing the campus infrastructure’s Building Access layer, consider the following
questions:
■ How many users or host ports are currently required in the wiring closet, and how many will it require in the future? Should the switches have a fixed or modular configuration?
■ How many ports are available for end-user connectivity at the walls of the buildings?
■ How many access switches are not located in wiring closets?
■ What cabling is currently available in the wiring closet, and what cabling options exist for uplink connectivity?
■ What data link layer performance does the node need?
■ What level of redundancy is needed?
■ What is the required link capacity to the Building Distribution layer switches?
■ How will VLANs and STP be deployed? Will there be a single VLAN or several VLANs per access switch? Will the VLANs on the switch be unique or spread across multiple switches? The latter design was common a few years ago, but today end-to-end VLANs (also called campuswide VLANs) are not desirable.
■ Are additional features, such as port security, multicast traffic management, and QoS (such as traffic classification based on ports), required?
Based on the answers to these questions, select the devices that satisfy the Building Access layer’s
requirements. The Building Access layer should maintain the simplicity of traditional LAN
switching, with the support of basic network intelligent services and business applications.
KEY POINT: The following are best-practice recommendations for optimal Building Access layer design:
■ Manage VLANs and STP
■ Manage trunks between switches
■ Manage default Port Aggregation Protocol (PAgP) settings
■ Consider implementing routing
These recommendations are described in the following sections.
Managing VLANs and STP
This section details best-practice recommendations related to VLANs and STP.
Limit VLANs to a Single Wiring Closet Whenever Possible
As a best practice, limit VLANs to a single wiring closet whenever possible.
NOTE Cisco (and other vendors) use the term local VLAN to refer to a VLAN that is limited
to a single wiring closet.
Avoid Using STP if Possible
STP is defined in IEEE 802.1D. For the most deterministic, highly available network topology—one that is predictable and bounded and has reliably tuned convergence—design the network so that it does not require any type of STP (including Rapid STP [RSTP]).
For example, the behavior of Layer 2 environments (using STP) and Layer 3 environments (using
a routing protocol) are different under “soft failure” conditions, when keepalive messages are lost.
In an STP environment, if bridge protocol data units (BPDU) are lost, the network fails in an
“open” state, forwarding traffic with unknown destinations on all ports, potentially causing
broadcast storms.
In contrast, routing environments fail “closed,” dropping routing neighbor relationships, breaking
connectivity, and isolating the soft failed devices.
Another reason to avoid using STP is for load balancing: If there are two redundant links, STP by
default uses only one of the links, while routing protocols by default use both.
If STP Is Required, Use RSTP with Per-VLAN Spanning Tree Plus
Cisco developed Per-VLAN Spanning Tree (PVST) so that switches can have one instance of STP
running per VLAN, allowing redundant physical links within the network to be used for different
VLANs and thus reducing the load on individual links. PVST works only over ISL trunks.
However, Cisco extended this functionality for 802.1Q trunks with the Per-VLAN Spanning Tree
Plus (PVST+) protocol. Before this became available, 802.1Q trunks supported only Common
Spanning Tree (CST), with one instance of STP running for all VLANs.
Multiple Spanning Tree (MST), defined in the IEEE 802.1s standard, uses RSTP and allows several VLANs to be grouped into a single spanning-tree instance. Each instance is independent of the other instances, so a link can forward for one group of VLANs while blocking for other VLANs. MST therefore allows traffic to be shared across all the links in the network while reducing the number of STP instances that would be required if PVST/PVST+ were implemented.
RSTP is defined by IEEE 802.1w. RPVST+ is a Cisco enhancement of RSTP. As a best practice,
if STP must be used, use RPVST+.
NOTE When Cisco documentation refers to implementing RSTP, it is referring to RPVST+.
The Cisco RPVST+ implementation is far superior to 802.1d STP and even PVST+ from a
convergence perspective. It greatly improves the convergence times for any VLAN on which a link
comes up, and it greatly improves the convergence time compared to BackboneFast (as described
in the next section) for any indirect link failures.
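On platforms that support it, enabling the recommended mode is typically a single global command (a sketch; the available spanning-tree modes vary by platform and software version):

```
! Replace the default per-VLAN 802.1D STP with Rapid PVST+
spanning-tree mode rapid-pvst
```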
Two other STP-related recommendations are as follows:
■ If a network includes non-Cisco switches, isolate the different STP domains with Layer 3 routing to avoid STP compatibility issues.
■ Even if the recommended design does not depend on STP to resolve link or node failure events, use STP in Layer 2 designs to protect against user-side loops. A loop can be introduced on the user-facing access layer ports in many ways, such as wiring mistakes, misconfigured end stations, or malicious users. STP is required to ensure a loop-free topology and to protect the rest of the network from problems created in the access layer.
NOTE Some security personnel have recommended disabling STP at the network edge. Cisco
does not recommend this practice, however, because the risk of lost connectivity without STP is
far greater than any STP information that might be revealed.
The Cisco STP Toolkit
The Cisco STP toolkit provides tools to better manage STP when RPVST+ is not available:
■ PortFast: Used for ports to which end-user stations or servers are directly connected. When PortFast is enabled, there is no delay in passing traffic, because the switch immediately puts the port in the STP forwarding state, skipping the listening and learning states. Two additional measures that prevent potential STP loops are associated with the PortFast feature:
— BPDU Guard: PortFast transitions the port into the STP forwarding state immediately on linkup. Because the port still participates in STP, the potential for an STP loop exists if some device attached to that port also runs STP. The BPDU Guard feature enforces the STP domain borders and keeps the active topology predictable. If the port receives a BPDU, the port is transitioned into the errdisable state (meaning that it was disabled due to an error), and an error message is reported.
NOTE Additional information on the errdisable state is available in Recovering from
errDisable Port State on the CatOS Platforms, at http://www.cisco.com/en/US/tech/tk389/
tk214/technologies_tech_note09186a0080093dcb.shtml.
— BPDU Filtering: This feature blocks PortFast-enabled, nontrunk ports from transmitting BPDUs. STP does not run on these ports. BPDU filtering is not recommended, because it effectively disables STP at the edge and can lead to STP loops.
■
UplinkFast: If the link on a switch to the root switch goes down and the blocked link is
directly connected to the same switch, UplinkFast enables the switch to put a redundant port
(path) into the forwarding state immediately, typically resulting in convergence of 3 to 5
seconds after a link failure.
■
BackboneFast: If a link on the way to the root switch fails but is not directly connected to the
same switch (in other words, it is an indirect failure), BackboneFast reduces the convergence
time by max_age (which is 20 seconds by default), from 50 seconds to approximately 30
seconds. When this feature is used, it must be enabled on all switches in the STP domain.
■
STP Loop Guard: When one of the blocking ports in a physically redundant topology stops
receiving BPDUs, usually STP creates a potential loop by moving the port to forwarding state.
With the STP Loop Guard feature enabled, and if a blocking port no longer receives BPDUs,
that port is moved into the STP loop-inconsistent blocking state instead of the listening/
learning/forwarding state. This feature avoids loops in the network that result from
unidirectional or other software failures.
■
RootGuard: The RootGuard feature prevents external switches from becoming the root.
RootGuard should be enabled on all ports where the root bridge should not appear; this
feature ensures that the port on which RootGuard is enabled is the designated port. If a
superior BPDU (a BPDU with a lower bridge ID than that of the current root bridge) is
received on a RootGuard-enabled port, the port is placed in a root-inconsistent state—the
equivalent of the listening state.
■
BPDU Skew Detection: This feature allows the switch to keep track of late-arriving BPDUs
(by default, BPDUs are sent every 2 seconds) and notify the administrator via Syslog
messages. Skew detection generates a report for every port on which a BPDU has ever arrived
late (this is known as skewed arrival). Report messages are rate-limited (one message every
60 seconds) to protect the CPU.
■
Unidirectional Link Detection (UDLD): A unidirectional link occurs whenever traffic
transmitted by the local switch over a link is received by the neighbor but traffic transmitted
from the neighbor is not received by the local device. If the STP process that runs on the
switch with a blocking port stops receiving BPDUs from its upstream (designated) switch on
that port, STP eventually ages out the STP information for this port and moves it to the
forwarding state. If the link is unidirectional, this action would create an STP loop. UDLD is
a Layer 2 protocol that works with the Layer 1 mechanisms to determine a link’s physical
status. If the port does not see its own device/port ID in the incoming UDLD packets for a
specific duration, the link is considered unidirectional from the Layer 2 perspective. After
UDLD detects the unidirectional link, the respective port is disabled, and an error message is
generated.
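On Cisco IOS switches, the features in this list are typically enabled with commands along these lines; the syntax is illustrative and varies by platform and software version:

```
! UplinkFast on the access switch with the redundant uplink;
! BackboneFast on every switch in the STP domain (802.1D PVST+ only;
! RPVST+ builds equivalent behavior into the protocol)
spanning-tree uplinkfast
spanning-tree backbonefast
!
! Loop Guard globally on all point-to-point links
spanning-tree loopguard default
!
! RootGuard on designated ports facing switches that must never become root
interface GigabitEthernet0/2
 spanning-tree guard root
!
! UDLD globally on fiber interfaces, or aggressive mode per interface
udld enable
interface GigabitEthernet0/3
 udld port aggressive
```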
NOTE PortFast, Loop Guard, RootGuard, and BPDU Guard are also supported for RPVST+.
Enterprise Campus Design
251
As an example of the use of these features, consider when a switch running a version of STP is
introduced into an operating network. This might not always cause a problem, such as when the
switch is connected in a conference room to temporarily provide additional ports for connectivity.
However, sometimes this is undesirable, such as when the switch that is added has been configured
to become the STP root for the VLANs to which it is attached. BPDU Guard and RootGuard are
tools that can protect against these situations. BPDU Guard requires operator intervention if an
unauthorized switch is connected to the network, and RootGuard protects against a switch
configured in a way that would cause STP to reconverge when it is being connected to the network.
Managing Trunks Between Switches
Trunks are typically deployed on the interconnection between the Building Access and Building
Distribution layers. There are several best practices to implement with regard to trunks.
Trunk Mode and Encapsulation
As a best practice when configuring trunks, set Dynamic Trunking Protocol (DTP) to desirable
on one side and desirable (with the negotiate option) on the other side to support DTP protocol
(encapsulation) negotiation.
NOTE Although setting DTP to on on both sides with the nonegotiate option could save seconds
of outage when restoring a failed link or node, with this configuration DTP does not actively
monitor the state of the trunk, and a misconfigured trunk is not easily identified.
NOTE The specific commands used to configure trunking vary; refer to your switch's
documentation for details.
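For example, on a Cisco IOS switch the desirable/desirable recommendation might look like the following sketch (the interface name is illustrative):

```
! Uplink between Building Access and Building Distribution switches
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode dynamic desirable
```

With both ends set to dynamic desirable, DTP actively negotiates and monitors the trunk, so a misconfigured peer is detected rather than silently ignored.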
Manually Pruning VLANs
Another best practice is to manually prune unused VLANs from trunked interfaces to avoid
broadcast propagation. Cisco recommends not using automatic VLAN pruning; manual pruning
provides stricter control. As mentioned, campuswide or access layer–wide VLANs are no longer
recommended, so VLAN pruning is less of an issue than it used to be.
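Manual pruning is typically a one-line change per trunk; for example, using the data and voice VLANs shown in Figure 4-9 (syntax varies by platform):

```
! Allow only the VLANs actually used by this access switch on the trunk
interface GigabitEthernet0/1
 switchport trunk allowed vlan 20,120
```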
VTP Transparent Mode
VTP transparent mode should be used as a best practice because hierarchical networks have little
need for a shared common VLAN database. Using VTP transparent mode decreases the potential
for operational error.
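Setting transparent mode is a single global command on Cisco IOS switches:

```
! The switch forwards VTP advertisements but does not act on them,
! and its VLAN database is configured locally
vtp mode transparent
```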
Trunking on Ports
Trunking should be disabled on ports to which hosts will be attached so that host devices do not
need to negotiate trunk status. This practice speeds up PortFast and is a security measure to prevent
VLAN hopping.
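A host-facing port configured per this recommendation might look like the following sketch (the interface name is illustrative):

```
! Static access port: no DTP negotiation, PortFast for fast link-up
interface FastEthernet0/2
 switchport mode access
 switchport nonegotiate
 spanning-tree portfast
```

Some Catalyst platforms also provide the switchport host macro, which applies an equivalent access-port configuration with one command.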
Managing Default PAgP Settings
Fast EtherChannel and Gigabit EtherChannel solutions group several parallel links between LAN
switches into a channel that is seen as a single link from the Layer 2 perspective. Two protocols
handle automatic EtherChannel formation: PAgP, which is Cisco-proprietary, and the Link
Aggregation Control Protocol (LACP), which is standardized and defined in IEEE 802.3ad.
When connecting a Cisco IOS software device to a Catalyst operating system device using PAgP,
make sure that the PAgP settings used to establish EtherChannels are coordinated; the defaults are
different for a Cisco IOS software device and a Catalyst operating system device. As a best
practice, Catalyst operating system devices should have PAgP set to off when connecting to a
Cisco IOS software device if EtherChannels are not configured. If EtherChannel/PAgP is used, set
both sides of the interconnection to desirable.
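For example, a two-link PAgP EtherChannel with both sides set to desirable might be configured as follows (interface names and channel number are illustrative):

```
! Bundle two parallel uplinks into one logical Layer 2 link
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode desirable
```

Because desirable mode actively negotiates the bundle, a cabling or configuration mismatch prevents the channel from forming rather than creating an inconsistent link.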
Implementing Routing in the Building Access Layer
Although not as widely deployed in the Building Access layer, a routing protocol, such as EIGRP,
when properly tuned, can achieve better convergence results than Layer 2 and Layer 3 boundary
hierarchical designs that rely on STP. However, adding routing does result in some additional
complexities, including uplink IP addressing and subnetting, and loss of flexibility.
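A routed-access uplink might be sketched as follows; the addresses, interface name, and EIGRP autonomous system number are illustrative assumptions, not values from the design:

```
! Layer 3 (routed) uplink from the Building Access switch
interface GigabitEthernet0/1
 no switchport
 ip address 10.1.200.1 255.255.255.252
!
router eigrp 100
 network 10.0.0.0
 ! Peer only on the uplinks, not on host-facing VLAN interfaces
 passive-interface default
 no passive-interface GigabitEthernet0/1
```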
Figure 4-9 illustrates a sample network with Layer 3 routing in both the Building Access and
Building Distribution layers. In this figure, equal-cost Layer 3 load balancing is performed on all
links (although EIGRP could perform unequal-cost load balancing). STP is not run, and a first-hop
redundancy protocol (such as Hot Standby Router Protocol [HSRP]) is not required. VLANs
cannot span across the multilayer switch.
Figure 4-9  Layer 3 Access-to-Distribution Layer Interconnection
[Figure: a routed model in which each Building Access switch connects to both Building Distribution switches over Layer 3 equal-cost links. One access switch carries VLAN 20 (data, 10.1.20.0) and VLAN 120 (voice, 10.1.120.0); the other carries VLAN 40 (data, 10.1.40.0) and VLAN 140 (voice, 10.1.140.0).]
NOTE HSRP and other first-hop redundancy protocols are discussed in the “Using First-Hop
Redundancy Protocols” section.
Building Distribution Layer Design Considerations
The Building Distribution layer aggregates the Building Access layer, segments workgroups, and
isolates segments from failures and broadcast storms. This layer implements many policies based
on access lists and QoS settings. The Building Distribution layer can protect the Campus Core
network from any impact of Building Access layer problems by implementing all the
organization’s policies.
When implementing the Building Distribution layer, consider the following questions:
■
How many devices will each Building Distribution switch handle?
■
What type and level of redundancy are required?
■
How many uplinks are needed?
■
What speed do the uplinks need to be to the building core switches?
■
What cabling is currently available in the wiring closet, and what cabling options exist for
uplink connectivity?
■
As network services are introduced, can the network continue to deliver high performance for
all its applications, such as video on demand, IP multicast, or IP telephony?
The network designer must pay special attention to the following network characteristics:
■
Performance: Building Distribution switches should provide wire-speed performance on all
ports. This feature is important because of Building Access layer aggregation on one side and
high-speed connectivity of the Campus Core module on the other side. Future expansions
with additional ports or modules can result in an overloaded switch if it is not selected
properly.
■
Redundancy: Redundant Building Distribution layer switches and redundant connections to
the Campus Core should be implemented. Using equal-cost redundant connections to the core
supports fast convergence and avoids routing black holes. Network bandwidth and capacity
should be engineered to withstand node or link failure.
When redundant switches cannot be implemented in the Campus Core and Building
Distribution layers, redundant supervisors and the Stateful Switchover (SSO) and Nonstop
Forwarding (NSF) technologies can provide significant resiliency improvements. These
technologies result in 1 to 3 seconds of outage in a failover, which is less than the time needed
to replace a supervisor and recover its configuration. Depending on the switch platform, full-image In Service Software Upgrade (ISSU) technology might be available such that the
complete Cisco IOS software image can be upgraded without taking the switch or network
out of service, maximizing network availability.
■
Infrastructure services: Building Distribution switches should not only support fast
multilayer switching, but should also incorporate network services such as high availability,
QoS, security, and policy enforcement.
Expanding and/or reconfiguring distribution layer devices must be easy and efficient. These
devices must support the required management features.
With the correct selection of Building Distribution layer switches, the network designer can easily
add new Building Access modules.
KEY POINT  Multilayer switches are usually preferred as the Building Distribution layer switches, because this layer must usually support network services, such as QoS and traffic filtering.
KEY POINT  The following are best-practice recommendations for optimal Building Distribution layer design:
■
Use first-hop redundancy protocols.
■
Deploy Layer 3 routing protocols between the Building Distribution switches and
Campus Core switches.
■
If required, Building Distribution switches should support VLANs that span
multiple Building Access layer switches.
The following sections describe these recommendations.
Using First-Hop Redundancy Protocols
If Layer 2 is used between the Building Access switch and the Building Distribution switch,
convergence time when a link or node fails depends on default gateway redundancy and failover
time. Building Distribution switches typically provide first-hop redundancy (default gateway
redundancy) using HSRP, Gateway Load-Balancing Protocol (GLBP), or Virtual Router
Redundancy Protocol (VRRP).
This redundancy allows a network to recover from the failure of the device acting as the default
gateway for end nodes on a physical segment. Uplink tracking should also be implemented with
the first-hop redundancy protocol.
HSRP or GLBP timers can be reliably tuned to achieve subsecond (800 to 900 ms) convergence
for link or node failure in the boundary between Layer 2 and Layer 3 in the Building Distribution
layer.
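As a sketch, subsecond HSRP timers for the VLAN 20 data subnet shown in Figure 4-9 might be configured as follows (the .1 and .2 addresses and the group number are illustrative assumptions):

```
interface Vlan20
 ip address 10.1.20.2 255.255.255.0
 standby 1 ip 10.1.20.1
 standby 1 priority 110
 standby 1 preempt
 ! Hello 250 ms, hold 800 ms, for subsecond failover
 standby 1 timers msec 250 msec 800
```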
In Cisco deployments, HSRP is typically used as the default gateway redundancy protocol. VRRP
is an Internet Engineering Task Force (IETF) standards-based method of providing default
gateway redundancy. More deployments are starting to use GLBP because it supports load
balancing on the uplinks from the access layer to the distribution layer, as well as first-hop
redundancy and failure protection.
As shown in Figure 4-10, this model supports a recommended Layer 3 point-to-point
interconnection between distribution switches.
Figure 4-10  Layer 3 Distribution Switch Interconnection
[Figure: the two Building Distribution switches are interconnected by a Layer 3 link and run HSRP; one is HSRP active for VLANs 20 and 140, the other for VLANs 40 and 120. Each Building Access switch connects to both distribution switches over Layer 2 links; one carries VLAN 20 (data, 10.1.20.0) and VLAN 120 (voice, 10.1.120.0), the other VLAN 40 (data, 10.1.40.0) and VLAN 140 (voice, 10.1.140.0).]
No VLANs span the Building Access layer switches across the distribution switches, so from an
STP perspective, both access layer uplinks are forwarding, and no STP convergence is required if
uplink failure occurs. The only convergence dependencies are the default gateway and return path
route selection across the Layer 3 distribution-to-distribution link.
NOTE Notice in Figure 4-10 that the Layer 2 VLAN number is mapped to the Layer 3 subnet
for ease of management.
If Layer 3 is used to the Building Access switch, the default gateway is at the multilayer Building
Access switch, and a first-hop redundancy protocol is not needed.
Deploying Layer 3 Routing Protocols Between Building Distribution and Campus Core
Switches
Routing protocols between the Building Distribution switches and the Campus Core switches
support fast, deterministic convergence for the distribution layer across redundant links.
Convergence based on the up or down state of a point-to-point physical link is faster than timer-based nondeterministic convergence. Instead of indirect neighbor or route loss detection using
hellos and dead timers, physical link loss indicates that a path is unusable; all traffic is rerouted to
the alternative equal-cost path.
For optimum distribution-to-core layer convergence, build redundant triangles, not squares, to
take advantage of equal-cost redundant paths for the best deterministic convergence. Figure 4-11
illustrates the difference.
Figure 4-11  Redundant Triangles Versus Redundant Squares
[Figure: on the left, triangle redundancy, in which each multilayer switch connects to both upstream multilayer switches; on the right, square redundancy, in which each switch connects to only one upstream switch, with a single link between the upstream pair.]
On the left of Figure 4-11, the multilayer switches are connected redundantly with a triangle of
links that have Layer 3 equal costs. Because the links have equal costs, they appear in the routing
table (and by default will be used for load balancing). If one of the links or distribution layer
devices fails, convergence is extremely fast, because the failure is detected in hardware and there
is no need for the routing protocol to recalculate a new path; it just continues to use one of the
paths already in its routing table. In contrast, on the right of Figure 4-11, only one path is active
by default, and link or device failure requires the routing protocol to recalculate a new route to
converge.
Other related recommended practices are as follows:
■
Establish routing protocol peer relationships only on links that you want to use as transit links.
■
Summarize routes from the Building Distribution layer into the Campus Core layer.
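As an illustrative EIGRP example of the summarization recommendation, assuming the distribution block's subnets all fall under 10.1.0.0/16 (the autonomous system number and interface are assumptions):

```
! Advertise one summary toward the Campus Core instead of per-VLAN routes
interface TenGigabitEthernet1/1
 ip summary-address eigrp 100 10.1.0.0 255.255.0.0
```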
Supporting VLANs That Span Multiple Building Access Layer Switches
In a less-than-optimal design where VLANs span multiple Building Access layer switches, the
Building Distribution switches must be linked by a Layer 2 connection, or the Building Access
layer switches must be connected via trunks.
This design is more complex than when the Building Distribution switches are interconnected
with Layer 3. STP convergence is required if an uplink failure occurs.
As shown in Figure 4-12, the following are recommendations for use in this (suboptimal) design:
■
Use RPVST+ as the version of STP.
■
Provide a Layer 2 link between the two Building Distribution switches to avoid unexpected
traffic paths and multiple convergence events.
■
If you choose to load-balance VLANs across uplinks, be sure to place the HSRP primary and
the RPVST+ root on the same Building Distribution layer switch to avoid using the
interdistribution switch link for transit.
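Aligning the RPVST+ root and the HSRP primary on the same distribution switch might be sketched as follows for VLAN 20 (addresses, priorities, and group number are illustrative assumptions):

```
! On the distribution switch chosen as primary for VLAN 20
spanning-tree mode rapid-pvst
spanning-tree vlan 20 root primary
interface Vlan20
 ip address 10.1.20.2 255.255.255.0
 standby 1 ip 10.1.20.1
 standby 1 priority 110
 standby 1 preempt
!
! On the peer distribution switch
spanning-tree vlan 20 root secondary
```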
Figure 4-12  Layer 2 Building Distribution Switch Interconnection
[Figure: the two Building Distribution switches are interconnected by a Layer 2 trunk and run RSTP. One is HSRP active and RSTP root for VLANs 20 and 140; the other is HSRP standby and RSTP secondary root for the same VLANs. Each Building Access switch connects to both distribution switches over Layer 2 links, and both carry VLAN 20 (data, 10.1.20.0) and VLAN 140 (voice, 10.1.140.0).]
Campus Core Design Considerations
Low price per port and high port density can govern switch choice for wiring closet environments,
but high-performance wire-rate multilayer switching drives the Campus Core design.
Using Campus Core switches reduces the number of connections between the Building
Distribution layer switches and simplifies the integration of the Server Farm module and
Enterprise Edge modules. Campus Core switches are primarily focused on wire-speed forwarding
on all interfaces and are differentiated by the level of performance achieved per port rather than
by high port densities.
KEY POINT  As a recommended practice, deploy a dedicated Campus Core layer to connect three or more buildings in the Enterprise Campus, or four or more pairs of Building Distribution switches in a very large campus.
Campus Core switches are typically multilayer switches.
Using a Campus Core makes scaling the network easier. For example, with a Campus Core, new
Building Distribution switches only need connectivity to the core rather than full-mesh
connectivity to all other Building Distribution switches.
NOTE Not all campus implementations need a Campus Core. As discussed in the upcoming
“Small and Medium Campus Design Options” section, the Campus Core and Building
Distribution layers can be combined at the Building Distribution layer in a smaller campus.
Issues to consider in a Campus Core layer design include the following:
■
The performance needed in the Campus Core network.
■
The number of high-capacity ports for Building Distribution layer aggregation and
connection to the Server Farm module or Enterprise Edge modules.
■
High availability and redundancy requirements. To provide adequate redundancy, at least two
separate switches (ideally located in different buildings) should be deployed.
Another Campus Core consideration is Enterprise Edge and WAN connectivity. For many
organizations, the Campus Core provides Enterprise Edge and WAN connectivity through Edge
Distribution switches connected to the core. However, for large enterprises with a data center, the
Enterprise Edge and WAN connectivity are aggregated at the data center module.
Typically, the Campus Core switches should deliver high-performance, multilayer switching
solutions for the Enterprise Campus and should address requirements for the following:
■
Gigabit density
■
Data and voice integration
■
LAN, WAN, and metropolitan area network (MAN) convergence
■
Scalability
■
High availability
■
Intelligent multilayer switching in the Campus Core, and to the Building Distribution and
Server Farm environments
Large Campus Design
For a large campus, the most flexible and scalable Campus Core layer consists of dual multilayer
switches, as illustrated in Figure 4-13.
Figure 4-13  Large Campus Multilayer Switched Campus Core Design
[Figure: a three-tier design with Building Access switches at the top, pairs of Building Distribution multilayer switches in the middle, and dual multilayer Campus Core switches at the bottom, interconnected in redundant triangles.]
Multilayer-switched Campus Core layers have several best-practice features:
■
Reduced multilayer switch peering (routing adjacencies): Each multilayer Building
Distribution switch connects to only two multilayer Campus Core switches, using a redundant
triangle configuration. This implementation simplifies any-to-any connectivity between
Building Distribution and Campus Core switches and is scalable to an arbitrarily large size.
It also supports redundancy and load sharing.
■
Topology with no spanning-tree loops: No STP activity exists in the Campus Core or on the
Building Distribution links to the Campus Core layer, because all the links are Layer 3
(routed) links. Arbitrary topologies are supported by the routing protocol used in the Campus
Core layer. Because the core is routed, it also provides multicast and broadcast control.
■
Improved network infrastructure services support: Multilayer Campus Core switches
provide better support for intelligent network services than data link layer core switches could
support.
This design maintains two equal-cost paths to every destination network. Thus, recovery from any
link failure is fast and load sharing is possible, resulting in higher throughput in the Campus Core
layer.
One of the main considerations when using multilayer switches in the Campus Core is switching
performance. Multilayer switching requires more sophisticated devices for high-speed packet
routing. Modern Layer 3 switches support routing in the hardware, even though the hardware
might not support all the features. If the hardware does not support a selected feature, it must be
performed in software; this can dramatically reduce throughput. For example, access lists
might not be processed in the hardware if they have too many entries, resulting in switch
performance degradation.
Small and Medium Campus Design Options
A small campus (or large branch) network might have fewer than 200 end devices, and the network
servers and workstations might be connected to the same wiring closet. Because switches in a
small campus network design may not require high-end switching performance or much scaling
capability, in many cases, the Campus Core and Building Distribution layers can be combined into
a single layer, as illustrated on the left of Figure 4-14. This design can scale to only a few Building
Access layer switches. A low-end multilayer switch provides routing services closer to the end
user when multiple VLANs exist. For a very small office, one low-end multilayer switch may
support the LAN access requirements for the entire office.
Figure 4-14  Small and Medium Campus Design Options
[Figure: on the left, a small campus network in which Building Access switches connect to a single combined Building Distribution/Campus Core multilayer switch; on the right, a medium campus network in which Building Access switches connect redundantly to a pair of Building Distribution/Campus Core multilayer switches.]
For a medium-sized campus with 200 to 1000 end devices, the network infrastructure typically
consists of Building Access layer switches with uplinks to Building Distribution/Campus Core
multilayer switches that can support the performance requirements of a medium-sized campus
network. If redundancy is required, redundant multilayer switches connect to the Building Access
switches, providing full link redundancy, as illustrated on the right of Figure 4-14.
NOTE Branch and teleworker infrastructure considerations are described further in Chapter 5.
Edge Distribution at the Campus Core
As mentioned in Chapter 3, the Enterprise Edge modules connect to the Campus Core directly or
through an optional Edge Distribution module, as illustrated in Figure 4-15.
Figure 4-15  Edge Distribution Design
[Figure: the Campus Core connects through an Edge Distribution module to the Enterprise Edge modules: E-Commerce, Internet Connectivity, WAN and MAN and Site-to-Site VPN, and Remote Access and VPN.]
The Edge Distribution multilayer switches filter and route traffic into the Campus Core, aggregate
Enterprise Edge connectivity, and provide advanced services.
Switching speed is not as important as security in the Edge Distribution module, which isolates
and controls access to devices that are located in the Enterprise Edge modules (for example,
servers in an E-commerce module or public servers in an Internet Connectivity module). These
servers are closer to the external users and therefore introduce a higher risk to the internal campus.
To protect the Campus Core from threats, the switches in the Edge Distribution module must
protect the campus from the following attacks:
■
Unauthorized access: All connections from the Edge Distribution module that pass through
the Campus Core must be verified against the user and the user’s rights. Filtering mechanisms
must provide granular control over specific edge subnets and their capability to reach areas
within the campus.
■
IP spoofing: IP spoofing is a hacker technique for impersonating the identity of another user
by using that user’s IP address. Denial of service (DoS) attacks use IP spoofing to generate
requests to servers, using the stolen IP address as a source. The server therefore does not
respond to the original source, but it does respond to the stolen IP address. A significant
amount of this type of traffic causes the attacked server to be unavailable, thereby interrupting
business. DoS attacks are a problem because they are difficult to detect and defend against;
attackers can use a valid internal IP address for the source address of IP packets that produce
the attack.
■
Network reconnaissance: Network reconnaissance (or discovery) sends packets into the
network and collects responses from the network devices. These responses provide basic
information about the internal network topology. Network intruders use this approach to find
out about network devices and the services that run on them.
Therefore, filtering traffic from network reconnaissance mechanisms before it enters the
enterprise network can be crucial. Traffic that is not essential must be limited to prevent a
hacker from performing network reconnaissance.
■
Packet sniffers: Packet sniffers are devices that monitor and capture the traffic in the network
and might be used by hackers. Packets belonging to the same broadcast domain are vulnerable
to capture by packet sniffers, especially if the packets are broadcast or multicast. Because
most of the traffic to and from the Edge Distribution module is business-critical, corporations
cannot afford this type of security lapse. Multilayer switches can prevent such an occurrence.
The Edge Distribution devices provide the last line of defense for all external traffic that is destined
for the Campus Infrastructure module. In terms of overall functionality, the Edge Distribution
switches are similar to the Building Distribution layer switches. Both use access control to filter
traffic, although the Edge Distribution switches can rely on the Enterprise Edge modules to
provide additional security. Both modules use multilayer switching to achieve high performance,
but the Edge Distribution module can provide additional security functions because its
performance requirements might not be as high.
When the enterprise includes a significant data center rather than a simple server farm, remote
connectivity and performance requirements are more stringent. Edge Distribution switches can be
located in the data center, giving remote users easier access to corporate resources. Appropriate
security concerns need to be addressed in this module.
Server Placement
Within a campus network, servers may be placed locally in the Building Access or Building
Distribution layer, or attached directly to the Campus Core. Centralized servers are typically
grouped into a server farm located in the Enterprise Campus or in a separate data center.
Servers Directly Attached to Building Access or Building Distribution Layer Switches
If a server is local to a certain workgroup that corresponds to one VLAN, and all workgroup
members and the server are attached to a Building Access layer switch, most of the traffic to the
server is local to the workgroup. If required, an access list at the Building Distribution layer switch
could hide these servers from the enterprise.
In some midsize networks, building-level servers that communicate with clients in different
VLANs, but that are still within the same physical building, can be connected to Building
Distribution layer switches.
Servers Directly Attached to the Campus Core
The Campus Core generally transports traffic quickly, without any limitations. Servers in a
medium-sized campus can be connected directly to Campus Core switches, making the servers
closer to the users than if the servers were in a Server Farm, as illustrated in Figure 4-16. However,
ports are typically limited in the Campus Core switches. Policy-based control (QoS and access
control lists [ACL]) for accessing the servers is implemented in the Building Distribution layer,
rather than in the Campus Core.
Figure 4-16  Servers Directly Attached to the Campus Core in a Medium-Sized Network
[Figure: Building Access switches connect through Building Distribution switches to the Campus Core; servers attach directly to the Campus Core switches.]
Servers in a Server Farm Module
Larger enterprises may have moderate or large server deployments. For enterprises with moderate
server requirements, common servers are located in a separate Server Farm module connected to
the Campus Core layer using multilayer server distribution switches, as illustrated in Figure 4-17.
Because of high traffic load, the servers are usually Gigabit Ethernet–attached to the Server Farm
switches. Access lists at the Server Farm module’s multilayer distribution switches implement the
controlled access to these servers. Redundant distribution switches in a Server Farm module and
solutions such as the HSRP and GLBP provide fast failover. The Server Farm module distribution
switches also keep all server-to-server traffic off the Campus Core.
Figure 4-17  Server Farm in a Large Network
[Figure: the Campus Core connects to redundant multilayer server distribution switches, which connect to data link layer server access switches; servers attach to the server access layer.]
Rather than being installed on only one server, modern applications are distributed among several
servers. This approach improves application availability and responsiveness. Therefore, placing
servers in a common group (in the Server Farm module) and using intelligent multilayer switches
provide the applications and servers with the required scalability, availability, responsiveness,
throughput, and security.
For a large enterprise with a significant number of servers, a separate data center, possibly in a
remote location, is often implemented. Design considerations for an Enterprise Data Center are
discussed in the later “Enterprise Data Center Design Considerations” section.
Server Farm Design Guidelines
As shown in Figure 4-18, the Server Farm can be implemented as a high-capacity building block
attached to the Campus Core using a modular design approach. One of the main concerns with the
Server Farm module is that it receives the majority of the traffic from the entire campus. Random
frame drops can result because the uplink ports on switches are frequently oversubscribed. To
guarantee that no random frame drops occur for business-critical applications, the network
designer should apply QoS mechanisms to the server links.
NOTE Switch oversubscription occurs when a switch allows more ports (bandwidth) in the
chassis than the switch’s hardware can transfer through its internal structure.
Figure 4-18  Sample Server Farm Design
[Figure: the Server Farm module attaches to the Campus Core as a modular building block; the Campus Core also connects to the Enterprise Edge.]
The Server Farm design should ensure that the Server Farm uplink ports are not as oversubscribed
as the uplink ports on the switches in the Building Access or Building Distribution layers. For
example, if the campus consists of a few Building Distribution layers connected to the Campus
Core layer with Gigabit Ethernet, attach the Server Farm module to the Campus Core layer with
either a 10-Gigabit Ethernet or multiple Gigabit Ethernet links.
The switch performance and the bandwidth of the links from the Server Farm to the Campus Core
are not the only considerations. You must also evaluate the server’s capabilities. Although server
manufacturers support a variety of NIC connection rates (such as Gigabit Ethernet), the underlying
network operating system might not be able to transmit at the maximum line capacity. As such,
oversubscription ratios can be raised, reducing the Server Farm’s overall cost.
Server Connectivity Options
Servers can be connected in several different ways. For example, a server can attach by one or two
Fast Ethernet connections. If the server is dual-attached (dual-NIC redundancy), one interface can
be active while the other is in hot standby. Installing multiple single-port NICs or multiport NICs
in the servers extends dual homing past the Server Farm module switches to the server itself.
Servers needing redundancy can be connected with dual-NIC homing in the access layer or a NIC
that supports EtherChannel. With the dual-homing NIC, a VLAN or trunk is needed between the
two access switches to support the single IP address on the two server links to two separate
switches.
Within the Server Farm module, multiple VLANs can be used to create multiple policy domains
as required. If one particular server has a unique access policy, a unique VLAN and subnet can be
created for that server. If a group of servers has a common access policy, the entire group can be
placed in a common VLAN and subnet. ACLs can be applied on the interfaces of the multilayer
switches.
Several other solutions are available to improve server responsiveness and evenly distribute the
load to them. For example, Figure 4-18 includes content switches that provide a robust front end
for the Server Farm by performing functions such as load balancing of user requests across the
Server Farm to achieve optimal performance, scalability, and content availability.
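To illustrate the kind of decision a content switch makes when distributing user requests, here is a generic least-connections selection sketch. This is a textbook algorithm, not Cisco's implementation, and the server names and counts are invented:

```python
# Generic least-connections load-balancing decision: send the next request
# to the real server with the fewest active connections.

def least_connections(servers):
    """servers: dict mapping server name -> active connection count."""
    return min(servers, key=servers.get)

pool = {"srv-a": 12, "srv-b": 7, "srv-c": 9}
print(least_connections(pool))  # srv-b
```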
The Effect of Applications on Switch Performance
Server Farm design requires that you consider the average frequency at which packets are generated
and the packets’ average size. These parameters are based on the enterprise applications’ traffic
patterns and number of users of the applications.
Interactive applications, such as conferencing, tend to generate high packet rates with small packet
sizes. In terms of application bandwidth, the packets-per-second limitation of the multilayer
switches might be more critical than the throughput (in Mbps). In contrast, applications that
involve large movements of data, such as file repositories, transmit a high percentage of full-length
(large) packets. For these applications, uplink bandwidth and oversubscription ratios become key
factors in the overall design. Actual switching capacities and bandwidths vary based on the mix of
applications.
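The relationship between packet rate and bandwidth can be sketched numerically. In this hypothetical comparison (packet sizes and rates are illustrative assumptions), the same packets-per-second load represents very different bandwidths depending on packet size:

```python
# Why pps can matter more than Mbps for small-packet applications:
# the same switching load in pps maps to very different offered loads.

def throughput_mbps(pps, packet_bytes):
    """Offered load in Mbps for a given packet rate and packet size."""
    return pps * packet_bytes * 8 / 1_000_000

small = throughput_mbps(100_000, 64)    # interactive traffic, small packets
large = throughput_mbps(100_000, 1500)  # bulk transfer, full-length packets
print(f"64-byte packets:   {small:.1f} Mbps")   # 51.2 Mbps
print(f"1500-byte packets: {large:.1f} Mbps")   # 1200.0 Mbps
```

A switch sized only by Mbps could be saturated by the interactive traffic long before its rated throughput is reached.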
Chapter 4: Designing Basic Campus and Data Center Networks
Enterprise Data Center Design Considerations
This section describes general Enterprise Data Center design considerations and provides an
overview of the general technologies and models used in an Enterprise Data Center.
The Enterprise Data Center
This section describes technology and trends influencing the Enterprise Data Center. For large
enterprises with a significant number of servers, a dedicated Enterprise Data Center provides
employees, partners, and customers with access to data and resources to effectively work,
collaborate, and interact. Historically, most Enterprise Data Centers grew rapidly as organizational
requirements expanded. Applications were implemented as needed, often resulting in
underutilized, isolated infrastructure silos. Each silo was designed based on the specific
application being deployed, so a typical data center supported a broad assortment of operating
systems, computing platforms, and storage systems, resulting in various application “islands” that
were difficult to change or expand and expensive to manage, integrate, secure, and back up.
This server-centric data center model is evolving to a service-centric model, as illustrated in
Figure 4-19. This evolution includes the following:
■ The deployment of virtual machine software, such as VMware and Xen, which breaks the one-to-one relationship between applications and the server hardware and operating system on which they run. Virtual machine software allows multiple applications to run on a single server, independent of each other and of the underlying operating system.

NOTE VMware information is available at http://www.vmware.com/. Xen information is available at http://www.xensource.com/.

■ The removal of storage from the server, consolidating it in storage pools. Networked storage (such as storage area networks [SAN]) allows easier management and provisioning, improved utilization, and consistent recovery practices.

■ The creation of pools of one-way, two-way, or four-way servers that can be provisioned on demand.

NOTE One-way servers have a single processor, two-way servers have two processors, and four-way servers have four processors.

■ The consolidation of I/O resources so that the I/O can be pooled and provisioned on demand for connectivity to other servers, storage, and LAN pools.
Figure 4-19 Evolution from Server-Centric to Service-Centric Data Center

(Figure: On the left, the server-centric model shows monolithic, proprietary compute silos, with application silos connected through the user access network. On the right, the service-centric model shows a data center network joining pooled storage resources, pooled compute resources, and shared application services: storage aggregated into SANs, a prevalence of 1-RU and blade servers with consolidated I/O, and "pools" of standardized resources assembled on demand to create a "virtual infrastructure.")
The resulting service-centric data center has pooled compute, storage, and I/O resources that are
provisioned to support applications over the data center network. Because the network touches and
can control all the components, the network can be used to integrate all the applications and
services; network technology actively participates in the delivery of applications to end users.
The Cisco Enterprise Data Center Architecture Framework
The consolidation and virtualization of data center resources requires a highly scalable, resilient,
secure data center network foundation.
As described in Chapter 2, “Applying a Methodology to Network Design,” the Cisco Service-Oriented Network Architecture (SONA) framework defines how enterprises can evolve toward
intelligence in the network that optimizes applications, business processes, and resources. The
Cisco Enterprise Data Center Architecture, based on SONA, provides organizations with a
framework to address immediate data center demands for consolidation and business continuance
while enabling emerging service-oriented architectures (SOA), virtualization, and on-demand
computing technologies in the data center.
The Cisco Enterprise Data Center Architecture, as illustrated in Figure 4-20, aligns data center
resources with business applications and provides multiple resources to end users in an enterprise.
The Cisco Enterprise Data Center Architecture has the following layers:
■ Networked Infrastructure layer: Meets all the bandwidth, latency, and protocol requirements for user-to-server, server-to-server, and server-to-storage connectivity and communications in a modular, hierarchical infrastructure.

■ Interactive Services layer: Provides the infrastructure services that ensure the fast and secure alignment of resources with application requirements, and Cisco Application Networking Services that optimize application integration and the delivery of applications to end users.
Figure 4-20 Cisco Enterprise Data Center Network Architecture Framework

(Figure: Business and collaboration applications, built on traditional or service-oriented architectures, sit above the Interactive Services layer. The Interactive Services layer includes Application Networking Services (WAAS, application acceleration, optimization, security, and server offload), infrastructure-enhancing services (firewalls, intrusion prevention, security agents, RDMA, low-latency clustering, virtualization, replication, and virtual fabrics), and adaptive management services (advanced analytics and decision support, network infrastructure virtualization, infrastructure management, and services management). The Networked Infrastructure layer below it comprises server fabric switching (InfiniBand, the SFS family), server and storage switching for modular, rack, blade, director, and fabric platforms (the Catalyst and MDS families), and data center interconnect (DWDM, SONET, SDH, and FCIP on the ONS family).)

WAAS = Wide-Area Application Services; RDMA = Remote Direct Memory Access; SFS = Server Fabric Switching; MDS = Multilayer Directors and Fabric Switches; ONS = Optical Networking Solutions; DWDM = Dense Wavelength Division Multiplexing; SONET = Synchronous Optical Network; SDH = Synchronous Digital Hierarchy; FCIP = Fibre Channel over IP
The Cisco Enterprise Data Center Architecture provides a scalable foundation that allows data
centers to host a variety of legacy and emerging systems and technologies, including the
following:
■ N-tier applications: Secure network zones support two-, three-, or n-tier application environments with techniques that optimize application availability and server and storage utilization.

■ Web applications: Application acceleration and server optimization technologies provide improved scalability and delivery of web applications to end users, wherever they are.

■ Blade servers: As self-contained servers, blade servers, housed in a blade enclosure, have all the functional components required to be considered computers but have reduced physical components, so they require less space, power, and so forth. The Cisco Enterprise Data Center Architecture provides an intelligent network foundation using integrated Ethernet and InfiniBand switching technology that helps optimize blade server availability, security, and performance.

■ Clustering, high-performance computing, and grid: The Cisco high-performance data, server, and storage switching solutions, whether based on Ethernet, InfiniBand, or Fibre Channel, enable the deployment of data- and I/O-intensive applications that make use of these distributed compute and storage architectures.

■ SOA and web services: The Cisco Enterprise Data Center Architecture facilitates the reliable, secure, and rapid deployment of an SOA by enabling dynamic deployment and scaling of secure infrastructures and by enhancing application integration with message-based services.

■ Mainframe computing: Cisco offers a comprehensive set of technologies supporting Systems Network Architecture (SNA), SNA-to-IP migration, fiber connection, and native IP mainframe services.
The Cisco Enterprise Data Center Architecture is supported by networking technologies and
solutions that allow organizations to evolve their data center infrastructures through the following
phases:
■ Consolidation: Integration of network, server, application, and storage services into a shared infrastructure enhances scalability and manageability while reducing cost and complexity.

■ Virtualization: Network-enabled virtualization of computing and storage resources and virtual network services increase utilization and adaptability while reducing overall costs.

■ Automation: Dynamic monitoring, provisioning, and orchestration of data center infrastructure resources in response to changing loads, disruptions, or attacks increase overall IT agility while minimizing operational requirements.
Figure 4-21 illustrates a sample high-performance data center network topology that requires
many technologies and connectivity options among applications and data centers. This network
topology provides connectivity services for networked elements within the data center, such as
servers and storage, as well as to external users or other data centers.
Figure 4-21 Sample Data Center Network Topology

(Figure: The primary data center contains a front-end network (resilient IP, security, VPN, firewall, SSL, IDS, content switching, anomaly detection/guard, and application delivery with a GSS), clustered application servers connected over GE/10GE and InfiniBand, and a storage network (SAN with disk, tape, and NAS, plus WAFS file caching). The data center connects to users and to a remote or backup data center through the Internet (MPLS VPN, IPsec/SSL VPN), a metro optical/Ethernet network (ONS 15000), and an access network (MDS 9216), supporting integrated business applications and application-oriented networking.)

NAS = Network Attached Storage; WAFS = Wide-Area File Services; GE = Gigabit Ethernet; VPN = Virtual Private Network; IDS = intrusion detection system; GSS = Global Site Selector; SSL = Secure Sockets Layer
Enterprise Data Center Infrastructure
Figure 4-22 shows a typical large Enterprise Data Center infrastructure design. The design follows
the Cisco multilayer infrastructure architecture, including core, aggregation, and access layers.
NOTE In the Enterprise Data Center, the distribution layer is known as the aggregation layer.
Figure 4-22 Sample Data Center Infrastructure

(Figure: The Data Center Core connects to the Campus Core and to the Data Center Aggregation layer, which hosts services modules. The Data Center Access layer below includes Layer 2 access with clustering and NIC teaming, blade chassis with pass-through or integrated switches, Layer 3 access, and a mainframe with OSA connectivity.)

OSA = Open Systems Adapter
The data center infrastructure must provide port density and Layer 2 and Layer 3 connectivity for
servers at the access layer, while supporting security services provided by ACLs, firewalls, and
intrusion detection systems (IDS) at the data center aggregation layer. It must support Server Farm
services, such as content switching, caching, and Secure Sockets Layer (SSL) offloading while
integrating with multitier Server Farms, mainframes, and mainframe services (such as TN3270,
load balancing, and SSL offloading). Network devices are often deployed in redundant pairs to
avoid a single point of failure.
The following sections describe the three layers of the Enterprise Data Center infrastructure.
Data Center Access Layer
The Data Center Access layer provides Layer 2, Layer 3, and mainframe connectivity. The design
of the Data Center Access layer varies depending on whether Layer 2 or Layer 3 access switches
are used; it is typically built with high-performance, low-latency Layer 2 switches, allowing better
sharing of service devices across multiple servers and allowing the use of Layer 2 clustering,
which requires the servers to be Layer 2–adjacent. With Layer 2 access switches, the default
gateway for the servers can be configured at the access or aggregation layer.
Servers can be single- or dual-attached; with dual-attached NICs in the servers, a VLAN or trunk
is required between the two redundant access layer switches to support having a single IP address
on the two server links to two separate switches. The default gateway is implemented at the access
layer.
A mix of both Layer 2 and Layer 3 access switches using one rack unit (1RU) and modular
platforms results in a flexible solution and allows application environments to be optimally
positioned.
Data Center Aggregation Layer
The Data Center Aggregation (distribution) layer aggregates the uplinks from the access layer to
the Data Center Core layer and is the critical point for control and application services.
Security and application service devices (such as load-balancing devices, SSL offloading devices,
firewalls, and IDS devices) provide Layer 4 through Layer 7 services and are often deployed as a
module in the aggregation layer. This highly flexible design takes advantage of economies of scale
by lowering the total cost of ownership (TCO) and reducing complexity by reducing the number
of components to configure and manage. Service devices deployed at the aggregation layer are
shared among all the servers, whereas service devices deployed at the access layer benefit only the
servers that are directly attached to the specific access switch.
Although Layer 2 at the aggregation (distribution) layer is tolerated for legacy designs, new
designs should have Layer 2 only at the Data Center Access layer. With Layer 2 at the Data Center
Aggregation layer, physical loops in the topology would have to be managed by STP; in this case,
as for other designs, RPVST+ is a recommended best practice to ensure a logically loop-free
topology over the physical topology.
The Data Center Aggregation layer typically provides Layer 3 connectivity from the data center
to the core and maintains the connection and session state for redundancy. Depending on the
requirements and the design, the boundary between Layer 2 and Layer 3 at the Data Center
Aggregation layer can be in the multilayer switches, the firewalls, or the content-switching devices
in the aggregation layer. Depending on the data center applications, the aggregation layer might
also need to support a large STP processing load.
Data Center Core Layer
Implementing a Data Center Core layer is a best practice for large data centers. The following
should be taken into consideration when determining whether a core is appropriate:
■ 10-Gigabit Ethernet density: Without a Data Center Core, will there be enough 10-Gigabit Ethernet ports on the Campus Core switch pair to support both the campus Building Distribution layer and the Data Center Aggregation layer?

■ Administrative domains and policies: Separate campus and data center cores help isolate the campus Building Distribution layers from the Data Center Aggregation layers for troubleshooting, maintenance, administration, and implementation of policies (using QoS and ACLs).

■ Anticipation of future development: The impact that could result from implementing a separate Data Center Core layer at a later date might make it worthwhile to install it at the beginning.
The data center typically connects to the Campus Core using Layer 3 links. The data center
network addresses are summarized into the Campus Core, and the Campus Core injects a default
route into the data center network. Key Data Center Core layer characteristics include the
following:
■ A distributed forwarding architecture

■ Low-latency switching

■ 10-Gigabit Ethernet scalability

■ Scalable IP multicast support
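The summarization described above can be sketched with Python's standard ipaddress module. The 10.10.x.0/24 data center subnets below are invented for illustration; the point is that many specific prefixes collapse into the single summary route the data center advertises to the Campus Core:

```python
# Summarize contiguous data center prefixes into one route toward the
# Campus Core. The subnets are hypothetical examples.
import ipaddress

dc_subnets = [ipaddress.ip_network(f"10.10.{i}.0/24") for i in range(4)]
summary = list(ipaddress.collapse_addresses(dc_subnets))
print(summary)  # [IPv4Network('10.10.0.0/22')]
```

In the other direction, the Campus Core injects only a default route into the data center, so neither side carries the other's full routing detail.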
Density and Scalability of Servers
Some scaling issues in the data center relate to the physical environment.
The most common access layer in enterprises today is based on the modular chassis Cisco Catalyst
6500 or 4500 Series switches. This topology has also proven to be a very scalable method of
building Server Farms that provide high-density, high-speed uplinks and redundant power and
processors. Although this approach has been very successful, it results in challenges when used in
Enterprise Data Center environments. The typical Enterprise Data Center experiences high growth
in the sheer number of servers; at the same time, server density has been improved with 1RU and
blade server solutions. Three particular challenges result from this trend:
■ Cable bulk: Typically, three to four interfaces are connected on a server. With a higher density of servers per rack, cable routing and management can become quite difficult.

■ Power: The increased density of components in a rack is driving a need for a larger power feed to the rack. Many data centers do not have the power capacity at the server rows to support this increase.

■ Cooling: The number of cables lying under the raised floor and the cable bulk at the cabinet base entry block the airflow required to cool equipment in the racks. At the same time, the servers in the rack require more cooling volume because of their higher density.
These challenges have forced customers to find alternative solutions by spacing cabinets,
modifying cable routes, or other means, including not deploying high-density server solutions.
Another way that customers seek to solve some of these problems is by using a rack-based
switching solution. Using 1RU top-of-rack switches keeps the server interface cables in the
cabinet, reducing the amount of cabling in the floor and thus reducing the cabling and cooling
issues. Another option is to place Cisco Catalyst 6500 Series switches like bookends near the ends
of the row of racks so that there are fewer switches to manage.
Summary
In this chapter you learned about campus and data center network design, with a focus on the
following topics:
■ The effects of the characteristics of the following on the campus network design:

— Application: Including peer-peer, client–local server, client–Server Farm, and client–Enterprise Edge server

— Environment: Including the location of the network nodes, the distance between the nodes, and the transmission media used

— Infrastructure devices: Including Layer 2 or multilayer switching, convergence time, type of multilayer switching, IP multicast, QoS, and load sharing

■ The design considerations and recommended practices for the Building Access layer, the Building Distribution layer, the Campus Core layer, the optional Edge Distribution module, and the Server Farm module.

■ Enterprise Data Center module design considerations, including an introduction to the general technologies and models used in Enterprise Data Center design.
References
For additional information, refer to the following resources:
■ Cisco Systems, Inc., Introduction to Gigabit Ethernet, http://www.cisco.com/en/US/tech/tk389/tk214/tech_brief09186a0080091a8a.html

■ Cisco Systems, Inc., Ethernet Introduction, http://www.cisco.com/en/US/tech/tk389/tk214/tsd_technology_support_protocol_home.html

■ Cisco Systems, Inc., SAFE Blueprint Introduction, http://www.cisco.com/go/safe

■ Cisco Systems, Inc., Designing a Campus Network for High Availability, http://www.cisco.com/application/pdf/en/us/guest/netsol/ns432/c649/cdccont_0900aecd801a8a2d.pdf

■ Cisco Systems, Inc., Enterprise Data Center: Introduction, http://www.cisco.com/en/US/netsol/ns340/ns394/ns224/networking_solutions_packages_list.html

■ Cisco Systems, Inc., Cisco Data Center Network Architecture and Solutions Overview, http://www.cisco.com/application/pdf/en/us/guest/netsol/ns377/c643/cdccont_0900aecd802c9a4f.pdf

■ Cisco Systems, Inc., Switches: Compare Products and Solutions, http://www.cisco.com/en/US/products/hw/switches/products_category_buyers_guide.html

■ Szigeti and Hattingh, End-to-End QoS Network Design: Quality of Service in LANs, WANs, and VPNs, Indianapolis, Cisco Press, 2004.

■ Cisco Systems, Inc., Spanning Tree Protocol: Introduction, http://www.cisco.com/en/US/tech/tk389/tk621/tsd_technology_support_protocol_home.html
Case Study: ACMC Hospital Network Campus Design
This case study is a continuation of the ACMC Hospital case study introduced in Chapter 2.
Case Study General Instructions
Use the scenarios, information, and parameters provided at each task of the ongoing case study. If
you encounter ambiguities, make reasonable assumptions and proceed. For all tasks, use the initial
customer scenario and build on the solutions provided thus far. You can use any and all
documentation, books, white papers, and so on.
In each step, you act as a network design consultant. Make creative proposals to accomplish the
customer’s business needs. Justify your ideas when they differ from the provided solutions. Use
any design strategies you feel are appropriate. The final goal of each case study is a paper solution.
Appendix A, “Answers to Review Questions and Case Studies,” provides a solution for each step
based on assumptions made. There is no claim that the provided solution is the best or only
solution. Your solution might be more appropriate for the assumptions you made. The provided
solution helps you understand the author’s reasoning and allows you to compare and contrast your
solution.
In this case study you create a high-level design for the Cisco Enterprise Campus Architecture of
the ACMC Hospital network.
Case Study Additional Information
Figure 4-23 identifies the device counts throughout the ACMC campus.
Assume that each building needs as many spare ports as there are people. Each patient room or
staff position has two jacks, and spare server ports should be provided to allow for migration of
all servers to the Server Farm.
The hospital has 500 staff members and 1000 patients.
Each floor of the main buildings has about 75 people, except for the first floor of Main Building
1, which has only the Server Farm with 40 servers. Each floor of the Children’s Place has 60
people. Buildings A through D have 10 people each, buildings E through J have 20 people each,
and buildings K and L have 40 people each.
Assume that the hospital has structured cabling with plenty of MM fiber in the risers and plenty
of fiber between buildings. If there is not enough fiber, either the hospital will have to install the
fiber or the design will have to be modified for the existing cabling; produce an ideal design before
making any adjustments.
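The port-count arithmetic implied by these parameters (two jacks per person, spare ports equal to the number of people) can be sketched as follows. Treat this as one reasonable reading of the case study text, not the official answer key:

```python
# Sketch of the Table 4-4 port-count arithmetic for the ACMC case study.
# Interpretation: each person accounts for two jacks, and spares equal
# the number of people in the building or floor.

def port_counts(people, jacks_per_person=2):
    ports = people * jacks_per_person
    return ports, ports + people  # (port count, port count with spares)

locations = {
    "Main building 1, per floor": 75,
    "Main building 2, per floor": 75,
    "Children's Place, per floor": 60,
    "Buildings A-D (each)": 10,
    "Buildings E-J (each)": 20,
    "Buildings K-L (each)": 40,
}
for name, people in locations.items():
    ports, with_spares = port_counts(people)
    print(f"{name}: {ports} ports, {with_spares} with spares")
```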
Figure 4-23 Case Study: ACMC Campus Device Counts

(Figure: Main Building 1 has 6 floors plus the Server Farm, with 4 wiring closets per floor and 40 servers in the Server Farm + 30. Main Building 2 has 7 floors with 4 wiring closets per floor. The Children’s Place has 3 floors with 3 wiring closets per floor. There are 12 smaller buildings with 1 or 2 wiring closets per building.)
Case Study Questions
Complete the following steps:
Step 1 Determine the location, quantity, and size of the required Campus Core switch or switches and what connections are required within the core and to the distribution layer.

Step 2 Determine the location of required Building Distribution layer switches or whether a collapsed core/distribution approach makes more sense. In a design with distribution layer switches, determine their location and size, how they connect to the Campus Core, and the use of VLANs versus Layer 3 switching.
Step 3 Determine the location and size of the required Building Access layer switches, and complete Table 4-4.

Table 4-4 Building Access Layer Port Counts by Location

Location | Port Counts | Port Counts with Spares | Comments
Main building 1, per floor | | |
Main building Server Farm | | |
Main building 2, per floor | | |
Children’s Place, per floor | | |
Buildings A–D | | |
Buildings E–J | | |
Buildings K–L | | |
Step 4 Determine how the Building Access layer switches will connect to the Building Distribution layer switches (or to the combined distribution/core switches).

Step 5 Determine how the Server Farm should be connected. If Server Farm access or distribution switches are used, determine how they will connect to each other and to the core.

Step 6 Does any other information need to be included in your final design?

Step 7 Determine appropriate Cisco switch models for each part of your campus design. The following links might be helpful (note that these links were correct at the time this book was published):

• The Cisco switch main page at http://www.cisco.com/en/US/products/hw/switches/index.html

• The Cisco switch comparison page at http://www.cisco.com/en/US/products/hw/switches/products_category_buyers_guide.html

• The Cisco Product Quick Reference Guide at http://www.cisco.com/warp/public/752/qrg/index.shtml

• The Cisco Catalyst Switch Solution Finder at http://www.cisco.com/en/US/partner/products/hw/switches/products_promotion0900aecd8050364f.html
Step 8 (Optional) Use the Cisco Dynamic Configuration Tool to configure one or more of the switches in your design. The Cisco Dynamic Configuration Tool is available at the following link: http://www.cisco.com/en/US/ordering/or13/or8/ordering_ordering_help_dynamic_configuration_tool_launch.html. (Note that a valid username and password on www.cisco.com are required to access this tool.)

Figure 4-24 displays a screen shot showing the options available for a Catalyst 6506 switch.

Figure 4-24 Cisco Dynamic Configuration Tool Screen Output

Selecting the options for devices is easier and faster if you use only a few switch models repeatedly in your design, possibly with different numbers of blades in them.
NOTE There are not many options for the smaller switches in the Cisco Dynamic
Configuration Tool.
Step 9 (Optional) Develop a bill of materials (BOM) listing switch models, numbers, prices, and total price. Creating a BOM can be time-consuming; you might want to use the Sample Price List provided in Table 4-5 for this exercise. Note that the prices shown in this table are not actual equipment prices; they are loosely derived from Cisco list prices at the time of publication and are provided for your convenience.
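A minimal BOM calculation can be sketched as below, using a few entries from the Table 4-5 sample price list. The quantities are invented for illustration; substitute the parts and counts from your own design:

```python
# Minimal bill-of-materials total using fictional prices from the sample
# price list. Quantities are illustrative assumptions only.

bom = [
    # (part number, description, unit price, quantity)
    ("WS-C2960-48TT-L", "Catalyst 2960 48 10/100 + 2 1000BT", 2500, 4),
    ("WS-C3750G-24TS-S1U", "Catalyst 3750 24 10/100/1000 + 4 SFP", 7000, 2),
    ("Generic SFP", "SFP transceiver", 400, 8),
]
total = sum(price * qty for _, _, price, qty in bom)
for part, desc, price, qty in bom:
    print(f"{qty} x {part} @ ${price} = ${price * qty}")
print(f"Total: ${total}")  # Total: $27200
```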
Table 4-5 Case Study: Sample Price List (Part Number | Description | Fictional Price)

Port Transceiver Modules
Generic SFP | $400
Generic GBIC | $400
Generic LR Xenpack | $4000

Cisco Catalyst 2960 Series Workgroup Switches
WS-C2960-24TC-L | Catalyst 2960 24 10/100 + 2T/SFP LAN Base Image | $2500
WS-C2960-24TT-L | Catalyst 2960 24 10/100 + 2 1000BT LAN Base Image | $1300
WS-C2960-48TC-L | Catalyst 2960 48 10/100 + 2T/SFP LAN Base Image | $4500
WS-C2960-48TT-L | Catalyst 2960 48 10/100 Ports + 2 1000BT LAN Base Image | $2500
WS-C2960G-24TC-L | Catalyst 2960 24 10/100/1000, 4T/SFP LAN Base Image | $3300
WS-C2960G-48TC-L | Catalyst 2960 48 10/100/1000, 4T/SFP LAN Base Image | $6000

Cisco Catalyst 3560 Series
WS-C3560G-48TS-S | Catalyst 3560 48 10/100/1000T + 4 SFP Standard Image | $8000
WS-C3560G-24TS-S | Catalyst 3560 24 10/100/1000T + 4 SFP Standard Image | $4800
WS-C3560-48TS-S | Catalyst 3560 48 10/100 + 4 SFP Standard Image | $5000
WS-C3560-24TS-S | Catalyst 3560 24 10/100 + 2 SFP Standard Image | $3000

Cisco IOS Upgrades for the Catalyst 3560 (EMI = Layer 3 image)
CD-3560-EMI= | Enhanced Multilayer Image upgrade for 3560 10/100 models | $2000
CD-3560G-EMI= | Enhanced Multilayer Image upgrade for 3560 GE models | $4000

Cisco Catalyst 3750 Series 10/100/1000, GE, 10GE Workgroup Switches
WS-C3750G-24T-S | Catalyst 3750 24 10/100/1000T Standard Multilayer Image | $6000
WS-C3750G-24TS-S1U | Catalyst 3750 24 10/100/1000 + 4 SFP Standard Multilayer; 1RU | $7000
WS-C3750G-48TS-S | Catalyst 3750 48 10/100/1000T + 4 SFP Standard Multilayer | $14,000
WS-C3750G-16TD-S | Catalyst 3750 16 10/100/1000BT + 10GE (requires XENPAK) Standard Image | $12,000
WS-C3750G-12S-S | Catalyst 3750 12 SFP Standard Multilayer Image | $8000

Cisco Catalyst 3750 Series 10/100 Workgroup Switches
WS-C3750-24TS-S | Catalyst 3750 24 10/100 + 2 SFP Standard Multilayer Image | $4000
WS-C3750-48TS-S | Catalyst 3750 48 10/100 + 4 SFP Standard Multilayer Image | $7000

Cisco IOS Upgrades for the Catalyst 3750
CD-3750-EMI= | Enhanced Multilayer Image upgrade for 3750 FE models | $2000
CD-3750G-EMI= | Enhanced Multilayer Image upgrade for 24-port 3750 GE models | $4000
CD-3750G-48EMI= | Enhanced Multilayer Image upgrade for 48-port 3750 GE models | $8000
3750-AISK9-LIC-B= | Advanced IP Services upgrade for 3750 FE models running SMI | $5000
3750G-AISK9-LIC-B= | Advanced IP Services upgrade for 3750 GE models running SMI | $7000
3750G48-AISK9LC-B= | Advanced IP Services upgrade for 3750G-48 models running SMI | $11,000

Cisco Catalyst 4948 Switches
WS-C4948-S | Catalyst 4948, IPB software, 48-port 10/100/1000+4 SFP, 1 AC power supply | $10,500
WS-C4948-E | Catalyst 4948, ES software, 48-port 10/100/1000+4 SFP, 1 AC power supply | $14,500
WS-C4948-10GE-S | Catalyst 4948, IPB software, 48*10/100/1000+2*10GE(X2), 1 AC power supply | $17,500
WS-C4948-10GE-E | Catalyst 4948, ES Image, 48*10/100/1000+2*10GE(X2), 1 AC power supply | $21,500

Cisco Catalyst 4948 Software
S49L3K9-12220EWA | Cisco Catalyst 4948 IOS Standard Layer 3 3DES (RIP, St. Routes, IPX, AT) | $0
S49L3EK9-12220EWA | Cisco Catalyst 4948 IOS Enhanced Layer 3 3DES (OSPF, EIGRP, IS-IS, BGP) | $4000
S49ESK9-12225SG | Cisco Catalyst 4900 IOS Enterprise Services SSH | $4000

Cisco Catalyst 4500—Chassis
WS-C4510R | Catalyst 4500 Chassis (10-slot), fan, no power supply, Redundant Supervisor Capable | $12,500
WS-C4507R | Catalyst 4500 Chassis (7-slot), fan, no power supply, Redundant Supervisor Capable | $10,000
WS-C4506 | Catalyst 4500 Chassis (6-slot), fan, no power supply | $5000
WS-C4503 | Catalyst 4500 Chassis (3-slot), fan, no power supply | $1000
WS-C4506-S2+96 | Catalyst 4506 Bundle, 1x 1000AC, 1x S2+, 2x WS-X4148-RJ | $16,800
WS-C4503-S2+48 | Catalyst 4503 Bundle, 1x 1000AC, 1x S2+, 1x WS-X4148-RJ | $10,000

Cisco Catalyst 4500 Non-PoE Power Supplies
PWR-C45-1400AC | Catalyst 4500 1400W AC Power Supply (Data Only) | $1500
PWR-C45-1000AC | Catalyst 4500 1000W AC Power Supply (Data Only) | $1000

Cisco Catalyst 4500 Supervisor Engines
WS-X4516-10GE | Catalyst 4500 Supervisor V-10GE, 2x10GE (X2) and 4x1GE (SFP) | $20,000
WS-X4516-10GE/2 | Catalyst 45xxR Supervisor V-10GE, 2x10GE (X2) or 4x1GE (SFP) | $20,000
WS-X4516 | Catalyst 4500 Supervisor V (2 GE), Console (RJ-45) | $16,500
WS-X4515 | Catalyst 4500 Supervisor IV (2 GE), Console (RJ-45) | $12,000
WS-X4013+10GE | Catalyst 4500 Supervisor II+10GE, 2x10GE (X2), and 4x1GE (SFP) | $12,000
WS-X4013+ | Catalyst 4500 Supervisor II-Plus (IOS), 2GE, Console (RJ-45) | $6000
WS-X4013+TS | Catalyst 4503 Supervisor II-Plus-TS, 12 10/100/1000 PoE+8 SFP slots | $6000

Cisco Catalyst 4500 10/100 Linecards
WS-X4148-RJ | Catalyst 4500 10/100 Auto Module, 48-Ports (RJ-45) | $4500
WS-X4124-RJ45 | Catalyst 4500 10/100 Module, 24-Ports (RJ-45) | $2500
WS-X4148-RJ21 | Catalyst 4500 10/100 Module, 48-Ports Telco (4xRJ21) | $4500
WS-X4232-GB-RJ | Catalyst 4500 32-10/100 (RJ-45), 2-GE (GBIC) | $4500
WS-X4232-RJ-XX | Catalyst 4500 10/100 Module, 32-ports (RJ-45) + Modular uplinks | $3500

Cisco Catalyst 4500 10/100/1000 Linecards
WS-X4548-GB-RJ45 | Catalyst 4500 Enhanced 48-Port 10BASE-T, 100BASE-T, 1000BASE-T (RJ-45) | $500
WS-X4506-GB-T | Catalyst 4500 6-Port 10/100/1000 PoE or SFP (Optional) | $3500
WS-X4448-GB-RJ45 | Catalyst 4500 48-Port 10/100/1000 Module (RJ-45) | $6000
WS-X4424-GB-RJ45 | Catalyst 4500 24-port 10/100/1000 Module (RJ-45) | $3500

Cisco Catalyst 4500 1000BASE-X GE Linecards
WS-X4306-GB | Catalyst 4500 Gigabit Ethernet Module, 6-Ports (GBIC) |
$3000
WS-X4506-GB-T
Catalyst 4500 6-Port 10/100/1000 PoE or SFP
(Optional)
$3500
WS-X4302-GB
Catalyst 4500 Gigabit Ethernet Module, 2Ports (GBIC)
$1000
WS-X4418-GB
Catalyst 4500 GE Module, Server Switching
18-Ports (GBIC)
$10,000
WS-X4448-GB-SFP
Catalyst 4500 48-Port 1000BASE-X (SFPs
Optional)
$16,500
Cisco Catalyst 4500 Series Supervisor IOS Software Options
S4KL3-12220EWA
Cisco IOS Basic Layer 3 Catalyst 4500
Supervisor 2+/4/5 (RIP, St. Routes, IPX, AT)
$0
S4KL3E-12220EWA
Cisco IOS Enhanced Layer 3 Catalyst 4500
Supervisor 4/5 (OSPF, EIGRP, IS-IS)
$10,000
Case Study: ACMC Hospital Network Campus Design
Table 4-5
287
Case Study: Sample Price List (Continued)
Category
Part Number
Description
Fictional
Price
Cisco Catalyst 6500 Series Supervisor 32-GE Bundles—Top Sellers
WS-C6503E-S32-GE
Cisco Catalyst 6503E, WS-SUP32-GE-3B, Fan
Tray (requires power supply)
$13,000
WS-C6504E-S32-GE
6504-E Chassis + Fan Tray + Supervisor
32-GE
$13,000
WS-C6506E-S32-GE
Cisco Catalyst 6506E, WS-Supervisor 32-GE3B, Fan Tray (requires power supply)
$16,000
WS-C6509E-S32-GE
Cisco Catalyst 6509E, WS-Supervisor
32-GE-3B, Fan Tray (requires power supply)
$20,000
WS-C6513-S32-GE
Cisco Catalyst 6513, WS-Supervisor
32-GE-3B, Fan Tray (requires power supply)
$26,000
Cisco Catalyst 6500 Series Supervisor 32-10GE Bundles—Top Sellers
WS-C6503E-S32-10GE
Cat6503E chassis, WS-Supervisor
32-10GE-3B, Fan Tray (requires power supply)
$23,000
WS-C6504E-S32-10GE
6504-E Chassis + Fan Tray + Supervisor
32-10GE
$23,000
WS-C6506E-S32-10GE
Cat6506E chassis, WS-Supervisor
32-10GE-3B, Fan Tray (requires power supply)
$26,000
WS-C6509E-S32-10GE
Cat6509E chassis, WS-Supervisor
32-10GE-3B, Fan Tray (requires power supply)
$30,000
Cisco Catalyst 6500 Series AC Power Supplies—Top Sellers
PWR-2700-AC/4
2700W AC power supply for Cisco
7604/6504-E
$3000
WS-CAC-3000W
Catalyst 6500 3000W AC power supply
$3000
WS-CAC-6000W
Cat6500 6000W AC power supply
$5000
Cisco Catalyst 6500 Series 10 Gigabit Ethernet—Top Sellers
WS-X6704-10GE
Cat6500 4-port 10 Gigabit Ethernet Module
(requires XENPAKs)
$20,000
S-67-10GE-C2
Cat6500, 1x6704-10 GE, 1xWS-F6700DFC3B, 2xXENPAK-10GB-SR=
$33,500
continues
288
Chapter 4: Designing Basic Campus and Data Center Networks
Table 4-5
Case Study: Sample Price List (Continued)
Category
Part Number
Description
Fictional
Price
Cisco Catalyst 6500 Series Gigabit Ethernet—Top Sellers
WS-X6408A-GBIC
Catalyst 6000 8-port GE, Enhanced QoS
(requires GBICs)
$10,000
WS-X6516A-GBIC
Catalyst 6500 16-port Gigabit Ethernet
Module, fabric-enabled (requires GBICs)
$15,000
WS-X6724-SFP
Catalyst 6500 24-port Gigabit Ethernet
Module, fabric-enabled (requires SFPs)
$15,000
WS-X6748-SFP
Catalyst 6500 48-port Gigabit Ethernet
Module, fabric-enabled (requires SFPs)
$25,000
Cisco Catalyst 6500 Series 10/100/1000—Top Sellers
WS-X6148A-GE-TX
Catalyst 6500 48-port 10/100/1000 with Jumbo
Frame, RJ-45
$7000
WS-X6548-GE-TX
Catalyst 6500 48-port fabric-enabled 10/100/
1000 Module
$12,000
WS-X6748-GE-TX
Catalyst 6500 48-port 10/100/1000 GE
Module: fabric-enabled, RJ-45
$15,000
Cisco Catalyst 6500 Series 10/100—Top Sellers
WS-X6148A-RJ-45
Catalyst 6500 48-port 10/100 with TDR,
upgradable to PoE 802.3af
$6000
WS-X6148-RJ-21
Catalyst 6500 48-port 10/100 upgradable to
voice, RJ-21
$6000
WS-X6196-RJ-21
Catalyst 6500 96-port 10/100 upgradable to
PoE 802.3af
$10,500
Cisco Catalyst 6500 Series Supervisor 32 Cisco IOS—Top Sellers
S323IBK9-12218SXF
Cisco Catalyst 6000 IP Base SSH
$0
S323ESK9-12218SXF
Cisco Catalyst 6000 Enterprise Services SSH
$10,000
S323AEK9-12218SXF
Cisco Catalyst 6000 Advanced Enterprise
Services SSH
$15,000
NOTE For other options not listed in Table 4-5, assume a 5 to 10 percent upgrade charge from
components shown. For example, if PoE is desired on upgradeable modules, include an upgrade
charge of 10 percent per module.
Review Questions
Answer the following questions, and then refer to Appendix A for the answers.
1.  What characteristics must you consider when designing a campus network?

2.  What are the most important network requirements for client–Enterprise Edge application communication?

3.  List examples of applications that would be appropriate to reside in a Server Farm.

4.  A company keeps all its servers and workstations within one building. What geographic design structure should you choose?

5.  Describe how interbuilding and distant remote network geographic structures are different.

6.  What is the difference between the 80/20 rule and the 20/80 rule?

7.  What type of cable would you recommend for connecting two switches that are 115 m apart?

8.  Compare the range and bandwidth specifications of copper twisted pair, MM fiber, SM fiber, and wireless.

9.  Fill in Table 4-6 for the IEEE 802.11 wireless standards.

Table 4-6   IEEE 802.11 Wireless Standards

Standard    Frequency Band    Maximum Bandwidth
802.11a
802.11b
802.11g

10. What is the difference between data link layer and multilayer switching?

11. What is a network flow?

12. What applications might require the network to handle multicast traffic?

13. A company is using video on demand, which uses IP multicast as part of its distance-learning program. The routers are configured for IP multicast. Taking into account that the majority of the LAN switches are Layer 2 switches, which protocol should be enabled on the LAN switches to reduce flooding?

14. What is PIM?

15. Why might QoS mechanisms be required on a LAN switch?

16. Which parts of the Enterprise Campus typically have both high availability and high performance requirements?

17. A link between the Building Distribution and Campus Core is oversubscribed, but it carries mission-critical data along with Internet traffic. How would you ensure that the mission-critical applications are not adversely affected by the bandwidth limitations?

18. A corporate network is spread over four floors. Each floor has a Layer 2 switch and more than one VLAN. One connection from each floor leads to the basement, where all WAN connections are terminated and all servers are located. Traffic between VLANs is essential. What type of device should be used in the basement?

19. What are the recommended best practices related to managing VLANs and STP in the Building Access layer?

20. What functions does the Building Distribution layer provide?

21. As a recommended practice, when should a dedicated Campus Core layer be deployed?

22. An organization requires a highly available core network and uses IP telephony for all its voice communication, both internal and external. Which devices and topology would you recommend for the Campus Core design?

23. What is the function of the Edge Distribution module?

24. A company has mission-critical applications hosted on common servers that are accessible to selected employees throughout the company's multiple buildings. Where and how would you recommend that these servers be placed within the network?

25. Describe how the Enterprise Data Center has evolved from a server-centric model to a service-centric model.

26. An organization evolves its data center infrastructure; put the following phases of evolution in the correct order:

■ Virtualization
■ Consolidation
■ Automation

27. What is the purpose of the Data Center aggregation layer?

28. When determining whether to implement a Core layer within a Data Center design, what factors should you consider?
CHAPTER 5

Designing Remote Connectivity

This chapter discusses wide-area network technologies and design, and includes the following sections:

■ Enterprise Edge WAN Technologies
■ WAN Design
■ Using WAN Technologies
■ Enterprise Edge WAN and MAN Architecture
■ Selecting Enterprise Edge Components
■ Enterprise Branch and Teleworker Design
■ Summary
■ References
■ Case Study: ACMC Hospital Network WAN Design
■ Review Questions
This chapter discusses the WAN function that provides access to remote sites and the outside
world. It details WAN technologies and WAN design considerations. The chapter explores how
these technologies are used, including for remote access, with virtual private networks (VPN),
for backup, and how the Internet is used as a backup WAN.
This chapter describes the Enterprise WAN and metropolitan-area network (MAN) architecture,
and the Enterprise Branch and Teleworker architectures. The selection of WAN hardware and
software components is also discussed.
Enterprise Edge WAN Technologies
This section introduces the concept of the WAN, beginning with the definition of a WAN and
the types of WAN interconnections. Various WAN technologies are described. The section
concludes with a discussion of WAN pricing and contract considerations.
Introduction to WANs
This section defines a WAN and describes its primary design objectives.
KEY POINT: A WAN is a data communications network that covers a relatively broad geographic area. A WAN typically uses the transmission facilities provided by service providers (SP), also called carriers, such as telephone companies.
Switches, or concentrators, connect the WAN links, relay information through the WAN, and
enable the services it provides. A network provider often charges users a fee, called a tariff, for
the services provided by the WAN. Therefore, WAN communication is often known as a service.
Recall that the purpose of the Cisco Enterprise architecture is to modularize the enterprise
network. All WAN connections are concentrated in a single functional area: the Enterprise Edge.
A WAN provides the Enterprise Edge with access to remote sites and the outside world. Using
various Layer 2 and Layer 3 technologies, WANs operate between the Enterprise Edge and the
Service Provider Edge.
Designing a WAN is a challenging task. The first design step is to understand the WAN’s
networking requirements, which are driven by two primary goals:
■ Service level agreement (SLA): Networks carry application information between computers. If the applications are not available to network users, the network fails to achieve its design objectives. Organizations need to define the level of service, such as bandwidth, allowed latency, packet loss, and so forth, that is acceptable for the applications running across the WAN.

■ Cost of investment and usage: WAN designs are always subject to budget limitations. Selecting the right type of WAN technology is critical to providing reliable services for end-user applications in a cost-effective and efficient manner.
Flowing from these goals are the following objectives of an effective WAN design:

■ A well-designed WAN must reflect the goals, characteristics, and policies of an organization.

■ The selected WAN technology should be sufficient for current and, to some extent, future application requirements.

■ The associated costs of investment and usage should stay within the budget limits.
WAN Interconnections
Figure 5-1 illustrates the three ways that WAN technologies connect the Enterprise Edge modules
with the outside world, represented by the service provider network. Typically, the intent is to
provide the following connections:
■
Connectivity between the Enterprise Edge modules and the Internet Service Provider (ISP)
Edge module
■
Connectivity between Enterprise sites across the ISP network
■
Connectivity between Enterprise sites across the SP or public switched telephone network
(PSTN) carrier network
WAN connections can be point-to-point between two locations or connections to a multipoint
WAN service offering, such as a Frame Relay or Multiprotocol Label Switching (MPLS) network.
NOTE The available service provider offerings often limit designers and thus directly affect
the WAN selection process. Review the availability of offerings from multiple service providers
to support your WAN design.
Figure 5-1   Different Types of WAN Connections Are Appropriate for Different Uses

(Figure 5-1 shows the Enterprise Campus modules—Building Access, Building Distribution, Campus Core, Server Farm and Data Center, and Network Management—connecting through the Enterprise Edge modules (E-Commerce, Internet Connectivity, Remote Access and VPN, and WAN and MAN Site-to-Site VPN) to Service Provider networks (ISP A, ISP B, PSTN, and Frame Relay/ATM), which in turn reach the Enterprise Branch, Enterprise Data Center, and Enterprise Teleworker.)
One of the main issues in WAN connections is selecting the appropriate physical WAN
technology. The following sections discuss WAN technologies, starting with traditional WAN
technologies.
Traditional WAN Technologies
Traditional WAN technologies include the following:

■ Leased lines: Point-to-point connections indefinitely reserved for transmissions, rather than used only when transmission is required. The carrier establishes the connection either by dedicating a physical wire or by delegating a channel using frequency-division multiplexing or time-division multiplexing (TDM). Leased-line connections usually use synchronous transmission.

■ Circuit-switched networks: A type of network that, for the duration of the connection, obtains and dedicates a physical path for a single connection between two network endpoints. Ordinary voice phone service over the PSTN is circuit-switched; the telephone company reserves a specific physical path to the number being called for the call's duration. During that time, no one else can use the physical lines involved. Other circuit-switched examples include asynchronous serial transmission and ISDN.

■ Packet-switched and cell-switched networks: A carrier creates permanent virtual circuits (PVC) or switched virtual circuits (SVC) that deliver packets of data among customer sites. Users share common carrier resources and can use different paths through the WAN (for example, when congestion or delay is encountered). This allows the carrier to use its infrastructure more efficiently than it can with leased point-to-point links. Examples of packet-switched networks include X.25, Frame Relay, and Switched Multimegabit Data Service.
Leased lines and circuit-switched networks offer users dedicated bandwidth that other users
cannot take. In contrast, packet-switched networks have traditionally offered more flexibility and
used network bandwidth more efficiently than circuit-switched networks. Cell switching
combines some aspects of circuit switching and packet switching to produce networks with low
latency and high throughput.
Circuit-/Packet-/Cell-Switched Versus the Open Systems Interconnection Model
Circuit-switched technologies properly fit into Layer 1 of the Open Systems Interconnection (OSI)
model—the physical layer. Layer 1 OSI protocols describe methods for binary encoding on
physical transmission media. PSTN networks, however, use analog methods to encode data on a
phone line. For a network device such as a router to interface with this analog network, a means
of converting binary-encoded data to analog is required. This function is provided by a modulator/
demodulator (modem). ISDN networks, on the other hand, are digital (the “D” in ISDN stands for
“digital”). There is no need to convert from digital to analog, so devices adapt to an ISDN network
using not a modem, but a terminal adapter.
In contrast, packet- and cell-switched networks operate at the data link layer (Layer 2) of the OSI
model. As such, they use protocols that define methods to control access to the physical layer,
allowing many conversations to multiplex over the same physical transmission medium. This is
achieved by framing the binary transmission at Layer 2 and providing addressing to identify the
endpoints of the data link. Virtual circuits (either permanent or switched) provide logical paths
between the endpoints in the same way that circuit-switched technologies create a physical path.
Packet-Switched Network Topologies
As shown in Figure 5-2, packet-switched networks use three basic topologies: star, full mesh, and
partial mesh.
Figure 5-2   Three Topologies for Packet-Switched Networks

(Figure 5-2 depicts the three packet-switched topologies: a star, or hub-and-spoke, topology; a fully meshed topology; and a partially meshed topology.)
Star Topology
A star topology (also called a hub-and-spoke topology) features a single internetworking hub (for
example, a central router) that provides access from remote networks into the core router.
Communication between remote networks is possible only through the core router. The
advantages of a star approach are simplified management and minimized tariff costs, which result
from the low number of circuits. However, the disadvantages are significant, including the
following:
■ The central router (the hub) is a single point of failure.

■ The central router limits overall performance for access to centralized resources because all traffic intended for the centralized resources or for the other regional routers goes through this single device.

■ The topology is not scalable.
Fully Meshed Topology
In a fully meshed topology, each routing node on the periphery of a given packet-switching
network has a direct path to every other node, providing any-to-any connectivity. The key rationale
for creating a fully meshed environment is to provide a high level of redundancy; however, a fully meshed topology is not scalable to large packet-switched networks. Key issues include the following:

■ The large number of virtual circuits required—one for every connection between routers. The number of circuits required in a fully meshed topology is n(n–1)/2, where n is the number of routers.

■ The problems associated with the requirement for large numbers of packet and broadcast replications.

■ The configuration complexity of routers that must handle the absence of multicast support in nonbroadcast environments.
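The circuit counts behind these scaling concerns are easy to verify. The short calculation below (Python, purely illustrative) contrasts a hub-and-spoke star, which needs only one circuit per spoke, with a full mesh using the n(n–1)/2 formula from the list above:

```python
def full_mesh_circuits(n: int) -> int:
    """Virtual circuits needed to fully mesh n routers: n(n-1)/2."""
    return n * (n - 1) // 2

def hub_and_spoke_circuits(n: int) -> int:
    """Virtual circuits needed in a star topology: one per spoke router."""
    return n - 1

# The quadratic growth of the full mesh is why it does not scale:
for n in (5, 10, 50):
    print(n, hub_and_spoke_circuits(n), full_mesh_circuits(n))
# 5 routers:  4 vs. 10 circuits
# 10 routers: 9 vs. 45 circuits
# 50 routers: 49 vs. 1225 circuits
```

At 50 routers the full mesh already requires 1225 virtual circuits, which illustrates why partial meshing (discussed next) is usually the practical compromise.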
Partially Meshed Topology
A partially meshed topology reduces, within a region, the number of routers that have direct
connections to all other nodes within that region. Not all nodes are connected to all other nodes;
for a nonmeshed node to communicate with another nonmeshed node, it must send traffic through
one of the fully connected routers.
There are many forms of partially meshed topologies. In general, partially meshed approaches
provide the best balance for regional topologies in terms of the number of virtual circuits,
redundancy, and performance.
WAN Transport Technologies
Table 5-1 compares various WAN technologies, based on the main factors that influence
technology selection. This table provides typical baseline characteristics to help you compare the
performance and features offered by different technologies. Often, the offerings of the service
provider limit your technology decisions.
NOTE Some WAN technology characteristics differ between service providers; Table 5-1 is
meant to illustrate typical characteristics.
Table 5-1   WAN Transport Technology Comparison

Technology¹          Bandwidth   Latency and Jitter   Connect Time   Tariff   Initial Cost   Reliability
TDM (leased line)    M           L                    L              M        M              M
ISDN                 L           M/H                  M              M        L              M
Frame Relay          L           L                    L              M        M              M
ATM                  M/H         L                    L              M        M              H
MPLS                 M/H         L                    L              M        M              H
Metro Ethernet       M/H         L                    L              M        M              H
DSL                  L/M²        M/H                  L              L        L              M
Cable modem          L/M²        M/H                  L              L        M              L
Wireless             L/M         M/H                  L              L        M              L
SONET/SDH            H           L                    L              M        H              H
DWDM                 H           L                    L              M        H              H
Dark fiber           H           L                    L              M        H              H

L = low, M = medium, H = high
¹ Nonstandard acronyms are expanded within the text of the chapter.
² Unbalanced (asymmetric) transmit and receive.
These technologies are introduced in the following sections.
TDM (Leased Lines)
KEY POINT: TDM is a type of digital multiplexing in which pulses representing bits from two or more channels are interleaved on a time basis. Rather than using bandwidth only as required, TDM indefinitely reserves point-to-point connection bandwidth for transmissions.
The base channel bandwidth is 64 kilobits per second (kbps), also known as digital signal level 0
(DS0). 64 kbps is the bandwidth required for an uncompressed digitized phone conversation.
DS0 Rate
Standard speech is typically below 4000 hertz (Hz); analog speech is therefore filtered at 4000 Hz
before being sampled. The Nyquist theorem states that a signal should be sampled at a rate at least
two times the input frequency to obtain a quality representation of the signal. Therefore, the input
analog signal is sampled at 8000 times per second.
Each of the samples is encoded into 8-bit octets. The DS0 rate is therefore 8000 samples per
second times 8 bits per sample, which results in 64,000 bits per second, or 64 kbps.
For example, a North American T1 circuit is made up of 24 channels, each at 64 kbps, resulting in
a bandwidth of 1.544 megabits per second (Mbps). A T3 circuit has 672 channels and runs at
44.736 Mbps. Corresponding European standards are the E1 standard, supporting 30 channels at
2.048 Mbps, and the E3 standard, supporting 480 channels at 34.368 Mbps.
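The rates quoted above follow from simple arithmetic. One detail the text omits: 24 × 64 kbps is only 1.536 Mbps; the remaining 8 kbps of a T1 is framing overhead (one framing bit per 193-bit frame, sent 8000 times per second). The sketch below (Python, illustrative only) reproduces the DS0, T1, and E1 rates:

```python
# Nyquist: sample at at least twice the highest input frequency. Voice is
# filtered at 4000 Hz, so it is sampled 8000 times per second, 8 bits each.
SAMPLES_PER_SECOND = 2 * 4000       # 8000 samples per second
BITS_PER_SAMPLE = 8

ds0_bps = SAMPLES_PER_SECOND * BITS_PER_SAMPLE
print(ds0_bps)                      # 64000 bps = 64 kbps, one DS0 channel

# A T1 frame carries one octet from each of 24 channels plus 1 framing bit
# (193 bits total), transmitted 8000 times per second:
t1_bps = (24 * BITS_PER_SAMPLE + 1) * SAMPLES_PER_SECOND
print(t1_bps)                       # 1544000 bps = 1.544 Mbps

# An E1 carries 32 64-kbps timeslots (30 for traffic, 2 for framing/signaling):
e1_bps = 32 * ds0_bps
print(e1_bps)                       # 2048000 bps = 2.048 Mbps
```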
A carrier establishes a connection in a TDM network by dedicating a channel for a specific
connection. In contrast, packet-switched networks traditionally offer the service provider more
flexibility and use network bandwidth more efficiently than TDM networks because the network
resources are shared dynamically and subscribers are charged on the basis of their network use.
ISDN
ISDN is a system of digital phone connections that has been available as a communications
standard since 1984. This system allows voice and data to be transmitted simultaneously across
the world using end-to-end digital connectivity.
KEY POINT: ISDN connectivity offers increased bandwidth, reduced call setup time, reduced latency, and higher signal-to-noise ratios, compared to analog dialup.
However, because the industry is moving toward using broadband technologies (such as Digital Subscriber Line [DSL], cable, and public wireless) to access IP Security (IPsec) VPNs, ISDN is an effective solution only for remote-user applications where broadband technologies are not available.
NOTE Analog modem dialup, also called plain old telephone service (POTS), provides data
connectivity over the PSTN using analog modems. Dialup supports relatively low-speed
connections, compared to broadband technologies. Dialup point-to-point service is typically no
longer a cost-effective solution for WAN connectivity. It might be cost-effective only as a
backup access solution for Internet connectivity in teleworker environments.
Frame Relay
Frame Relay is an example of a packet-switched technology for connecting devices on a WAN that
has been deployed since the late 1980s. Frame Relay is an industry-standard networking protocol
that handles multiple virtual circuits (VC) using a derivation of High-Level Data Link Control
(HDLC) encapsulation between connected devices.
KEY POINT: Frame Relay networks transfer data using one of two connection types:

■ PVCs, which are permanent connections.

■ SVCs, which are temporary connections created for each data transfer and then terminated when the data transfer is complete. SVCs are not widely used.
Asynchronous Transfer Mode
KEY POINT: ATM uses cell-switching technology to transmit fixed-size (53-byte) cells.
Each ATM cell can be processed asynchronously (relative to other related cells), queued, and
multiplexed over the transmission path. ATM provides support for multiple Quality of Service
(QoS) classes to meet delay and loss requirements.
MPLS
MPLS is an Internet Engineering Task Force (IETF) standard architecture that combines the
advantages of Layer 3 routing with the benefits of Layer 2 switching.
KEY POINT: With MPLS, short fixed-length labels are assigned to each packet at the edge of the network. Rather than examining the IP packet header information, MPLS nodes use this label to determine how to process the data.
This process results in a faster, more scalable, and more flexible WAN solution. The MPLS
standards evolved from the efforts of many companies, including Cisco’s tag-switching technology.
MPLS enables scalable VPNs, end-to-end QoS, and other IP services that allow efficient
utilization of existing networks with simpler configuration and management and quicker fault
correction.
MPLS Operation
MPLS is a connection-oriented technology whose operation is based on a label attached to each
packet as it enters the MPLS network. A label identifies a flow of packets (for example, voice
traffic between two nodes), also called a Forwarding Equivalence Class (FEC). An FEC is a
grouping of packets; packets belonging to the same FEC receive the same treatment in the
network. The FEC can be determined by various parameters, including source or destination IP
address or port numbers, IP protocol, IP precedence, or Layer 2 circuit identifier. Therefore, the
FEC can define the flow’s QoS requirements. In addition, appropriate queuing and discard policies
can be applied for FECs.
The MPLS network nodes, called Label-Switched Routers (LSR), use the label to determine the
packet’s next hop. The LSRs do not need to examine the packet’s IP header; rather, they forward
it based on the label.
After a path has been established, packets destined for the same endpoint with the same
requirements can be forwarded based on these labels without a routing decision at every hop.
Labels usually correspond to Layer 3 destination addresses, which makes MPLS equivalent to
destination-based routing.
A Label-Switched Path (LSP) must be defined for each FEC before packets can be sent. It is
important to note that labels are locally significant to each MPLS node only; therefore, the nodes
must communicate what label to use for each FEC. One of two protocols is used for this
communication: the Label Distribution Protocol or an enhanced version of the Resource
Reservation Protocol. An interior routing protocol, such as Open Shortest Path First (OSPF) or
Enhanced Interior Gateway Routing Protocol (EIGRP), is also used within the MPLS network to
exchange routing information.
A unique feature of MPLS is its capability to perform label stacking, in which multiple labels can
be carried in a packet. The top label, which is the last one in, is always processed first. Label
stacking enables multiple LSPs to be aggregated, thereby creating tunnels through multiple levels
of an MPLS network.
An MPLS label is a 32-bit field placed between a packet’s data link layer header and its IP header.
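The text states only that the label is a 32-bit field; per RFC 3032, those 32 bits break down as a 20-bit label value, 3 experimental (class-of-service) bits, a 1-bit bottom-of-stack flag, and an 8-bit TTL. A minimal sketch of packing and unpacking such a label stack entry (Python, for illustration only):

```python
def encode_label_entry(label: int, exp: int, bottom_of_stack: bool, ttl: int) -> int:
    """Pack one 32-bit MPLS label stack entry (RFC 3032 layout):
    label (20 bits) | EXP (3 bits) | S, bottom-of-stack (1 bit) | TTL (8 bits)."""
    assert 0 <= label < 1 << 20 and 0 <= exp < 8 and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (int(bottom_of_stack) << 8) | ttl

def decode_label(entry: int) -> int:
    """Recover the 20-bit label value an LSR looks up in its forwarding table."""
    return entry >> 12

# Label 17 (as used for Packet A in Figure 5-3), sole entry on the stack:
entry = encode_label_entry(label=17, exp=0, bottom_of_stack=True, ttl=64)
print(hex(entry), decode_label(entry))  # 0x11140 17
```

With label stacking, several such 32-bit entries sit back to back between the Layer 2 header and the IP header, and the top (most recently pushed) entry is processed first.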
Figure 5-3 illustrates the flow of two packets through an MPLS network.
Figure 5-3   Labels Are Used to Assign a Path for a Packet Flow Through an MPLS Network

(Figure 5-3 shows two packet flows crossing an MPLS network of LSRs labeled P, Q, V, W, X, Y, and Z. Packet A is carried along one label-switched path using labels 17, 19, and 21; Packet B follows a different path using labels 18 and 22.)

NOTE The links shown in Figure 5-3 are meant to be generic; thus, they do not represent any particular type of interface.
In Figure 5-3, each of the MPLS nodes has previously communicated the labels it uses for each of
the defined FECs to its neighboring nodes. Packet A and Packet B represent different flows; for
example, Packet A might be from an FTP session, whereas Packet B is from a voice conversation.
Without MPLS, these packets would take the same route through the network.
For Packets A and B, Router V is the ingress edge LSR—that is, the point at which the packets
enter the network. Router V examines each packet and determines the appropriate FEC. Packet A
is assigned label 17 and is sent to Router X; Packet B is assigned label 18 and is sent to Router W.
As each LSR receives a labeled packet, it removes the label, locates the label in its table, applies
the appropriate outgoing label, and forwards the packet to the next LSR in the LSP. When the
packets reach Router Z (the egress edge LSR, or the point at which the packets leave the MPLS
network), Router Z removes the label and forwards the packets appropriately, based on its IP
routing table.
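The remove/look-up/reapply cycle described above can be sketched with per-router label tables. The router names and labels follow Figure 5-3, but the exact hop sequence and the table layout here are illustrative assumptions, not Cisco's implementation:

```python
# Hypothetical label tables for Packet A's LSP (router names from Figure 5-3;
# the assumed hop sequence is V -> X -> Y -> Z):
label_tables = {
    "V": {"FEC-A": ("X", 17)},   # ingress edge LSR: classify packet, push label 17
    "X": {17: ("Y", 19)},        # core LSRs: swap the incoming label for a new one
    "Y": {19: ("Z", 21)},
    "Z": {21: (None, None)},     # egress edge LSR: pop label, IP-route the packet
}

def trace_lsp(fec: str, ingress: str) -> list:
    """Follow one label-switched path, swapping labels hop by hop."""
    hops = [ingress]
    next_hop, label = label_tables[ingress][fec]
    while next_hop is not None:
        hops.append(next_hop)
        next_hop, label = label_tables[next_hop][label]
    return hops

print(trace_lsp("FEC-A", "V"))   # ['V', 'X', 'Y', 'Z']
```

Note that only the ingress LSR ever classifies the packet into an FEC; every subsequent hop forwards purely on the label, which is the point of the technology.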
KEY POINT: Packets sent between the same endpoints might belong to different MPLS FECs, and therefore might flow through different paths in the network.
MPLS Services
The following are some of the most common services provided by MPLS:

■ Traffic engineering: MPLS allows traffic to be directed through a specific path, which might be different from the least-cost path determined by the IP routing protocol. This ability to define routes and resource utilization is known as traffic engineering.

■ QoS support: MPLS creates a connection-oriented network for IP traffic, thereby providing the foundation for QoS traffic controls. For example, it might provide guaranteed bandwidth to specific traffic between two locations.

■ Fast reroute (FRR): Because FRR allows extremely quick recovery from node or link failure, it prevents applications from timing out and losing data.

■ MPLS VPNs: MPLS VPNs are much easier to deploy than traditional VPNs. They scale easily with increasing numbers of routes and customers and provide the same level of privacy as Layer 2 technologies. MPLS VPNs can also support nonunique IP addresses in various locations; for example, two organizations that use the 10.0.0.0 private address space can be supported simultaneously. MPLS VPNs are described in the later "Peer-to-Peer VPNs" section.

■ Multiprotocol support: MPLS can be used in an ATM network, a Frame Relay network, or a pure IP-based Internet. MPLS can be used to carry many kinds of traffic, including IP packets, and native ATM, SONET, and Ethernet frames.
The key for the designer of an MPLS WAN is to minimize routing decisions and maximize MPLS
switching use.
Metro Ethernet
KEY POINT: Metro Ethernet uses Ethernet technology to deliver cost-effective, high-speed connectivity for MAN and WAN applications.
Service providers offer Metro Ethernet services to deliver converged voice, video, and data
networking. Metro Ethernet provides a data-optimized connectivity solution for the MAN and
WAN based on Ethernet technology widely deployed within the enterprise LAN. It also supports
high-performance networks in the metropolitan area, meeting the increasing need for faster data
speeds and more stringent QoS requirements.
Where traditional TDM access is rigid, complex, and costly to provision, Metro Ethernet services provide scalable bandwidth in flexible increments, simplified management, and faster, lower-cost provisioning. This simple, easy-to-use technology appeals to customers who are already using Ethernet on their LANs.
DSL Technologies
DSL delivers high bandwidth over traditional telephone copper lines. It works by way of two
modems at either end of the wire. Like dialup, cable, wireless, and T1, DSL is a transmission
technology that enables SPs to deliver a wide variety of services to their customers. These can
include premium, high-speed Internet and intranet access, voice, VPNs, videoconferencing, and
video on demand.
Basic DSL Implementations
The term xDSL covers a variety of similar forms of DSL. The two basic DSL categories are
Asymmetric DSL (ADSL) and Symmetric DSL (SDSL). ADSL can be used only over short
distances (typically less than 2 km).
KEY POINT: ADSL is the most common variety of DSL. Because ADSL operates at frequencies (from 100 kilohertz [kHz] to 1.1 megahertz [MHz]) that are above the voice channel (300 to 3400 Hz), ADSL allows PSTN telephony services concurrently on the same line.
With ADSL, traffic moves upstream and downstream at different speeds. For example, data that travels from the Internet to the end-user computer (downstream) could be moving at 1.5 Mbps, while data traveling from the end-user computer to the Internet (upstream) could be traveling at 384 kbps. ADSL can also be provisioned for symmetric operation, making it a viable residential and home office solution.
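The asymmetry is easy to quantify: transfer time is size in bits divided by line rate. A quick sketch using the example rates above (the 10-MB file size is illustrative):

```python
# Transfer-time arithmetic for asymmetric ADSL rates (1.5 Mbps down,
# 384 kbps up): the same file takes roughly four times longer to upload.
def transfer_seconds(size_bytes, rate_bps):
    return size_bytes * 8 / rate_bps

size = 10 * 1024 * 1024                    # 10-MB file (illustrative)
down = transfer_seconds(size, 1_500_000)   # ~56 s downstream
up = transfer_seconds(size, 384_000)       # ~218 s upstream
print(f"download: {down:.0f} s, upload: {up:.0f} s")
```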
KEY POINT: Downstream refers to data that travels from the Internet to the end-user computer. Upstream refers to data that travels from the end-user computer to the Internet.
KEY POINT: With SDSL, traffic in either direction travels at the same speed over a single copper twisted pair.
The use of a single twisted pair limits the operating range of SDSL to 10,000 feet (3048 meters).
Unlike ADSL, SDSL does not allow concurrent PSTN telephony services on the same line. SDSL
is a viable business solution and an excellent choice for running applications such as web and
e-mail servers.
NOTE SDSL is sometimes referred to as single-pair DSL.
Other Implementations of DSL
Other forms of DSL include the following:
■ ISDN DSL (IDSL) is similar to ISDN.
■ High-data-rate DSL (HDSL) delivers 1.544 Mbps of bandwidth each way (symmetric) over two pairs of copper twisted wire (data travels over two pairs of wires instead of one). HDSL does not support PSTN. Because HDSL provides T1 speed, telephone companies use it to provision local access to T1 services whenever possible. The operating range of HDSL is limited to 12,000 feet (3658 meters).
■ HDSL-2 (second generation of HDSL) is a full-rate-only symmetric service that differs from HDSL in that it runs over a single twisted-pair wire. HDSL-2 was conceived specifically to provide spectral compatibility with ADSL.
■ G.SHDSL combines the best of SDSL and HDSL-2. The standard defines multiple rates, like SDSL, but provides the spectral compatibility of HDSL-2.
■ Very-high-data-rate DSL (VDSL) is an extremely fast asymmetric DSL technology that delivers 13 to 52 Mbps downstream and 1.5 to 2.3 Mbps upstream, along with PSTN services, over a single twisted copper pair of wires. The operating range of VDSL is limited to 1,000 to 4,500 feet (304.8 to 1,372 meters).
The next section walks through an example of ADSL architecture and design.
ADSL Architecture and Design
Figure 5-4 illustrates a typical ADSL service architecture. The network consists of Customer
Premises Equipment (CPE), the Network Access Provider (NAP), and the Network Service
Provider (NSP), as follows:
■ CPE refers to an end-user workstation, such as a PC, together with an ADSL modem or an ADSL transceiver unit remote terminal (ATU-R).
■ The NAP provides ADSL line termination by using DSL access multiplexers (DSLAM).
■ The DSLAM forwards traffic to the NSP, where the local access concentrator provides Layer 3 termination.
Figure 5-4  Sample ADSL Architecture (CPE with ATU-R, CO splitters, DSLAM at the NAP, ATM network, Layer 3 concentrator at the NSP, and voice switch)
An ADSL circuit connects an ADSL modem on each end of a twisted-pair telephone line. This
creates three information channels:
■ Medium-speed downstream channel
■ Low-speed upstream channel
■ Basic telephone service channel
Filters, or splitters, split off the basic telephone service channel from the digital modem, guaranteeing uninterrupted basic telephone service even if ADSL fails. Figure 5-5 illustrates a typical ADSL network, including (from left to right) customer workstations and PCs on a LAN, CPE (ADSL routers), a DSLAM on an ATM transport network, an NSP concentrator, and both packet and ATM core networks. Two very popular Point-to-Point Protocol (PPP) implementations exist in ADSL designs: PPP over ATM (PPPoA) and PPP over Ethernet (PPPoE).
Figure 5-5  ADSL Point-to-Point Protocol Implementations (ADSL CPE/ATU-R, DSLAM at the NAP, ATM network, Layer 3 concentrator at the NSP)
In the PPPoA architecture, the CPE acts as an Ethernet-to-WAN router, and the PPP session is
established between the CPE and the Layer 3 access concentrator (the NSP). A PPPoA
implementation involves configuring the CPE with PPP authentication information (login and
password).
In the PPPoE architecture, the CPE acts as an Ethernet-to-WAN bridge, and the PPP session is
established between the end user’s PC or PPPoE router and the Layer 3 access concentrator (the
NSP). The client initiates a PPP session by encapsulating PPP frames into an Ethernet frame and
then bridging the frame (over ATM/DSL) to the gateway router (the NSP). From this point, the
PPP sessions are established, authenticated, and addressed. The client receives its IP address using
PPP negotiation from the termination point (the NSP).
Long Reach Ethernet Technology
KEY POINT: Long Reach Ethernet (LRE) is a Cisco-proprietary WAN access technology that allows greater distances than traditional Ethernet. LRE enables the use of Ethernet over existing, unconditioned, telephone-grade wire (copper twisted pair) using DSL coding and digital modulation techniques.
LRE technology allows Ethernet LAN transmissions to coexist with POTS, ISDN, or advanced PBX signaling services over the same pair of ordinary copper wires. LRE technology uses coding and digital modulation techniques from the DSL world in conjunction with Ethernet, the most popular LAN protocol.
An LRE system provides a point-to-point transmission that can deliver a symmetrical, full-duplex,
raw data rate of up to 15 Mbps over distances of up to 1 mile (1.6 km). The channel’s speed
decreases with distance.
Cable Technology
KEY POINT: The cable technology for data transport uses coaxial cable media over cable distribution systems. The cable network is a high-speed copper platform that supports analog and digital video services over coaxial cables.
This technology is a good option for environments where cable television is widely deployed.
Cable service providers support both residential and commercial customers.
Figure 5-6 illustrates some of the components used to transmit data and voice on a cable network.
The Universal Broadband Router (uBR), also referred to as the Cable Modem Termination System
(CMTS), provides high-speed data connectivity and is deployed at the cable company’s headend.
The uBR forwards data upstream to connect with either the PSTN or the Internet. The cable
modem, also referred to as the cable access router, at the customer location offers support for
transmission of voice, modem, and fax calls over the TCP/IP cable network.
Figure 5-6  Data and Voice over IP over Cable (cable modem/cable access router at the customer site, CMTS/Universal Broadband Router at the headend, with connections to the PSTN and the WAN core)
Cable modems are installed at the customer premises to support small businesses, branch offices,
and corporate telecommuters.
The uBR is designed to be installed at a cable operator’s headend facility or distribution hub and
to function as the CMTS for subscriber-end devices.
The Data over Cable Service Interface Specification (DOCSIS) Radio Frequency (RF) Interface Specification defines the interface between the cable modem and the CMTS, and the data-over-cable procedures that the equipment must support.
Upstream and Downstream Data Flow
A data service is delivered to a subscriber through channels in a coaxial or optical fiber cable to a
cable modem installed externally or internally to a subscriber’s computer or television set. One
television channel is used for upstream signals from the cable modem to the CMTS, and another
channel is used for downstream signals from the CMTS to the cable modem.
When a CMTS receives signals from a cable modem, it converts these signals into IP packets that
are then sent to an IP router for transmission across the Internet. When a CMTS sends signals to
a cable modem, it modulates the downstream signals for transmission across the cable, or across
the optical fiber and cable, to the cable modem. All cable modems can communicate with the
CMTS, but not with other cable modems on the line.
The actual bandwidth for Internet service over a cable TV line is shared: about 27 Mbps on the download path to the subscriber, with about 2.5 Mbps of shared bandwidth for interactive responses in the other direction.
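Because that bandwidth is shared by all active modems on a segment, a rough equal-split estimate of per-subscriber throughput is easy to compute. This is a simplification for illustration: DOCSIS scheduling does not divide capacity into equal static shares.

```python
# Back-of-the-envelope per-subscriber share of the 27 Mbps shared
# downstream channel, assuming (unrealistically) an equal split among
# all simultaneously active modems.
def per_subscriber_mbps(shared_mbps, active_modems):
    return shared_mbps / active_modems

for n in (1, 10, 50):
    print(n, per_subscriber_mbps(27.0, n))   # 27.0, 2.7, 0.54 Mbps
```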
CATV Transmission
Before converting to their respective channel assignments in the downstream frequency domain,
signals from broadcasters and satellite services are descrambled. Video signals are converted from
optical signals to electrical signals and then are amplified and forwarded downstream over coaxial
cable for distribution to the cable operator’s customers.
Wireless Technologies
KEY POINT: With wireless technologies, networks do not have the limitations of wires or cables; instead, electromagnetic waves carry the RF signals.
Common examples of wireless equipment include cellular phones and pagers, global positioning
systems, cordless computer peripherals, satellite television, and wireless LANs (WLAN). As
shown in Figure 5-7, wireless implementations include the following:
■ Bridged wireless: Designed to connect two or more networks, typically located in different buildings, at high data rates for data-intensive, line-of-sight applications. A series of wireless bridges or routers connect discrete, distant sites into a single LAN, interconnecting hard-to-wire sites, noncontiguous floors, satellite offices, school or corporate campus settings, temporary networks, and warehouses.
■ Mobile wireless: Includes cellular voice and data applications. Wireless technology usage increased with the introduction of digital services on wireless. Second- and third-generation mobile phones offer better connectivity and higher speeds. Mobile wireless technologies include the following:
— Global System for Mobile (GSM): GSM is a digital mobile radio standard that uses time division multiple access (TDMA) technology. It allows eight simultaneous calls on the same frequency, in three different bands: 900, 1800, and 1900 MHz. The transfer data rate is 9.6 kbps. One of the unique benefits of the GSM service is its international roaming capability, a result of roaming agreements established among the various operators.
— General Packet Radio Service (GPRS): GPRS extends the capability of GSM and supports intermittent and bursty data transfer. Speeds offered to the client are in the range of ISDN speeds (64 kbps to 128 kbps).
— Universal Mobile Telephone Service (UMTS): UMTS is a so-called third-generation (3G) technology for broadband, packet-based transmission of text, digitized voice, video, and multimedia at data rates up to 2 Mbps. UMTS offers a consistent set of services to mobile computer and phone users, regardless of their location in the world.
— Code Division Multiple Access (CDMA): CDMA is a spread-spectrum technology that assigns a code to each conversation; individual conversations are encoded in a pseudo-random digital sequence.
■ WLAN: Developed because of demand for LAN connections over the air and often used for intrabuilding communication. WLAN technology can replace a traditional wired network or extend its reach and capabilities. WLANs cover a growing range of applications, such as guest access and voice, and support services, such as advanced security and location of wireless devices.
The IEEE 802.11g standard supports speeds of up to 54 Mbps in the 2.4-GHz band.
The IEEE 802.11b standard supports speeds of up to 11 Mbps in the 2.4-GHz band.
The IEEE 802.11a standard supports speeds of up to 54 Mbps in the 5-GHz band.
Figure 5-7  Three Wireless Implementations (bridged wireless, mobile wireless mesh, and a campus wireless LAN)
NOTE Wireless networks are discussed further in Chapter 9, “Wireless Network Design
Considerations.”
Synchronous Optical Network and Synchronous Digital Hierarchy
Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH) is a circuit-based, bandwidth-efficient technology. SONET/SDH establishes high-speed circuits using TDM frames in ring topologies over an optical infrastructure, as illustrated in Figure 5-8. It results in guaranteed bandwidth, regardless of actual usage. Common bit rates are 155 Mbps and 622 Mbps, with a current maximum of 10 gigabits per second (Gbps).
Figure 5-8  SONET/SDH (Cisco ONS 15454 nodes on an OC-192 SONET ring carrying transparent line-rate Gigabit Ethernet, an 802.1Q trunk, and an EoMPLS trunk to MPLS at the network layer)
KEY POINT: SONET is an ANSI specification. SDH is the SONET-equivalent specification proposed by the ITU. Whereas European carriers use SDH widely, North American, Asian, and Pacific Rim carriers use SONET more frequently.
SONET/SDH rings support two IP encapsulations for user interfaces: ATM and Packet over SONET/SDH (POS), which sends native IP packets directly over SONET/SDH frames. SONET/SDH rings provide major innovations for transport and have important capabilities, such as
proactive performance monitoring and automatic recovery (self-healing) via an automatic
protection switching mechanism. These capabilities increase their reliability to cope with system
faults. Failure of a single SONET/SDH link or a network element does not lead to failure of the
entire network.
Optical carrier (OC) rates are the digital hierarchies of the SONET standard, supporting the following speeds:
■ OC-1 = 51.84 Mbps
■ OC-3 = 155.52 Mbps
■ OC-12 = 622.08 Mbps
■ OC-24 = 1.244 Gbps
■ OC-48 = 2.488 Gbps
■ OC-192 = 9.953 Gbps
■ OC-255 = 13.21 Gbps
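These rates follow a single rule: OC-n is n multiples of the 51.84-Mbps STS-1 base rate. A quick check, noting that the printed list rounds the largest values slightly:

```python
# OC-n rate rule: OC-n = n x 51.84 Mbps (the STS-1 base rate).
STS1_MBPS = 51.84

def oc_rate_mbps(n):
    return n * STS1_MBPS

for n in (1, 3, 12, 24, 48, 192):
    print(f"OC-{n}: {oc_rate_mbps(n):.2f} Mbps")
# OC-3 -> 155.52, OC-12 -> 622.08, and OC-48 -> 2488.32 Mbps (2.488 Gbps),
# matching the hierarchy listed above.
```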
Dense Wavelength Division Multiplexing
Dense Wavelength Division Multiplexing (DWDM), illustrated in Figure 5-9, increases
bandwidth on an optical medium.
KEY POINT: DWDM increases the available bandwidth on a single strand of fiber by using multichannel signaling.
Figure 5-9  DWDM (Cisco ONS 15201 devices feeding Gigabit Ethernet channels to a Cisco ONS 15252, which multiplexes N x Gigabit Ethernet onto one fiber)
DWDM is a crucial component of optical networks. It maximizes the use of installed fiber cable
and allows new services to be provisioned efficiently over existing infrastructure. Flexible add and
drop modules allow individual channels to be dropped and inserted along a route. An open
architecture system allows the connection of a variety of devices, including SONET terminals,
ATM switches, and IP routers. DWDM is also used inside the SONET/SDH ring.
Dark Fiber
KEY POINT: Dark fiber refers to fiber-optic cables leased from an SP and connected to a company’s own infrastructure.
Dark fiber use is illustrated in Figure 5-10. The framing for the dark fiber is provided by the
company’s devices and does not have to be SONET/SDH. As a result, the dark-fiber connection
eliminates the need for SONET/SDH multiplexers, which are required in SONET/SDH rings. The
edge devices connect directly over the site-to-site dark fiber using a Layer 2 encapsulation such as
Gigabit Ethernet. When such connectivity is used to transmit data over significantly long
distances, regenerators or DWDM concentrators are inserted into the link to maintain signal
integrity and provide appropriate jitter control.
Figure 5-10  Dark Fiber (site-to-site dark fiber with regenerators along the link)
Depending on the carrier and location, dark fiber might be available for sale on the wholesale
market for both metro and wide-area links at prices previously associated with leased-line rental.
WAN Transport Technology Pricing and Contract Considerations
This section discusses pricing and contract considerations for WAN technologies.
NOTE The pricing, time frame, and contract details provided here are examples from the
United States market. Organizations in other countries might have different experiences.
However, the items in this section should be considered when implementing a WAN.
Service and pricing options between carriers should be compared and negotiated, depending on
competition in the area.
Historically, WAN transport costs include an access circuit charge and, for TDM, a distance-sensitive rate. Some carriers have dropped or reduced distance-based factors as TDM circuits have become a commodity.
A service provider might need 60 days or more to provision access circuits. The higher the
bandwidth, the more lead time it might take to install.
Metro Ethernet might not be available everywhere, and the lead times could be long. Construction
and associated fees might be required when provisioning the fiber access.
For Frame Relay and ATM, typical charges include a combination of an access circuit charge (per-PVC) and possibly per-bandwidth (committed information rate [CIR] or minimum information rate [MIR]) charges. Some carriers have simplified these rates by charging based on the access circuit and then setting CIR or MIR to half that speed; this technique allows bursts to two times the guaranteed rate.
Frame Relay might be generally available up to T3 speeds. However, in some cases, the trunks between Frame Relay switches are T3 speed, and the service providers do not want to offer T3 access circuits because all the bandwidth would be utilized.
For MPLS VPNs, pricing generally is set to compete with Frame Relay and ATM. Some providers
encourage customers to move to MPLS VPN by offering lower prices than for Frame Relay and
ATM. Other service providers price MPLS VPNs somewhat higher than Frame Relay or ATM
because they include a routing service.
Tariffed commercial services typically are available at published rates and are subject to certain
restrictions. Some carriers are moving toward unpublished rates, allowing more flexibility in
options and charges.
In general, the time needed to contract a WAN circuit in a standard carrier package is on the order
of one month or so. If you choose to negotiate an SLA, expect six months or more of discussions
with the service provider, and include your legal department. You might not be able to influence
many changes in the SLA unless you represent a very large customer.
Contract periods are usually in the range of one to five years. Because the telecommunications
industry is changing quickly, enterprises generally do not want to get locked into a long-term
contract. Escape clauses in case of merger or poor performance might help mitigate the business
risks of long-term contracts.
For dark fiber, contract periods are generally for 20 years. One option to consider is the right of
nonreversion, meaning that no matter what happens to the provider, the fiber is yours to use for
the full 20 years, protecting the enterprise in case of a service provider merger, bankruptcy, and so
on. The process and responsibility to repair the fiber when necessary should also be defined in the
contract.
WAN Design
This section describes the WAN design methodology and the application and technical
requirement aspects of WAN design. The different possibilities for WAN ownership are discussed.
WAN bandwidth optimization techniques are described.
The methodology espoused here follows the guidelines of the Prepare-Plan-Design-Implement-Operate-Optimize (PPDIOO) methodology introduced in Chapter 2, “Applying a Methodology to Network Design.” The network designer should follow these steps when planning and designing the Enterprise Edge based on the PPDIOO methodology:
Step 1  Analyzing customer requirements: The initial step in the design methodology is to analyze the requirements of the network and its users, including the type of applications, the traffic volume, and traffic patterns. User needs continually change in response to changing business conditions and changing technology. For example, as more voice and video-based network applications become available, there is pressure to increase network bandwidth.
Step 2  Characterizing the existing network and sites: The second step is to analyze the existing networking infrastructure and sites, including the technology used and the location of hosts, servers, terminals, and other end nodes. Together with the network’s physical description, the analysis should evaluate the possibility of extending the network to support new sites, new features, or the reallocation of existing nodes. For example, the future integration of data and telephone systems requires considerable changes in the network’s configuration. In this case, a detailed evaluation of current options is important.
Step 3  Designing the network topology and solutions: The final step in the design methodology is to develop the overall network topology and its appropriate services, based on the availability of technology, and taking into account the projected traffic pattern, technology performance constraints, and network reliability. The design document describes a set of discrete functions performed by the Enterprise Edge modules and the expected level of service provided by each selected technology, as dictated by the SP.
Planning and designing WAN networks involves a number of trade-offs, including the following:
■ Application aspects of the requirements driven by the performance analysis
■ Technical aspects of the requirements dealing with the geographic regulations and the effectiveness of the selected technology
■ Cost aspects of the requirements; costs include those of the equipment and of the owned or leased media or communication channel
NOTE WAN connections are typically characterized by the cost of leasing WAN
infrastructure and transmission media from an SP. WAN designs must therefore trade off
between the cost of bandwidth and the bandwidth efficiency.
The network’s design should also be adaptable for the inclusion of future technologies and should
not include any design elements that limit the adoption of new technologies as they become
available. There might be trade-offs between these considerations and cost throughout the network
design and implementation. For example, many new internetworks are rapidly adopting VoIP
technology. Network designs should be able to support this technology without requiring a
substantial upgrade by provisioning hardware and software that have options for expansion and
upgradeability.
Application Requirements of WAN Design
Just as application requirements drive the Enterprise Campus design (as illustrated in Chapter 4,
“Designing Basic Campus and Data Center Networks”), they also affect the Enterprise Edge WAN
design. Application availability is a key user requirement; the chief components of application
availability are response time, throughput, packet loss, and reliability. Table 5-2 analyzes these
components, which are discussed in the following sections.
Table 5-2  Application Requirements on the WAN

Requirement | Data File Transfer | Data Interactive Application | Real-Time Voice | Real-Time Video
Response time | Reasonable | Within a second | 150 ms of one-way delay with low jitter | Minimum delay and jitter
Throughput | High | Low | Low | High
Packet loss tolerance | Medium | Low | Low | Medium
Downtime (high reliability has low downtime) | Reasonable | Low | Low | Minimum
(Mission-critical applications require zero downtime in all categories.)
Response Time
KEY POINT: Response time is the time between a user request (such as the entry of a command or keystroke) and the host system’s command execution or response delivery.
Users accept response times up to some limit, at which point user satisfaction declines.
Applications for which fast response time is considered critical include interactive online services,
such as point-of-sale machines.
NOTE Voice and video applications use the terms delay and jitter, respectively, to express the
responsiveness of the line and the variation in the delays.
Throughput
KEY POINT: In data transmission, throughput is the amount of data moved successfully from one place to another in a given time period.
Applications that put high-volume traffic onto the network have more effect on throughput than interactive end-to-end connections. Throughput-intensive applications typically involve file-transfer activities that usually have low response-time requirements and can often be scheduled at times when response-time-sensitive traffic is low (such as after normal work hours). This could be accomplished via time-based access lists, for example.
Packet Loss
KEY POINT: In telecommunication transmission, packet loss is expressed as a bit error rate (BER), which is the percentage of bits that have errors, relative to the total number of bits received in a transmission.
BER is usually expressed as 10 to a negative power. For example, a transmission might have a BER of 10 to the minus 6 (10^-6), meaning that 1 bit out of 1,000,000 bits transmitted was in error.
The BER indicates how frequently a packet or other data unit must be retransmitted because of an
error. A BER that is too high might indicate that a slower data rate could improve the overall
transmission time for a given amount of transmitted data; in other words, a slower data rate can
reduce the BER, thereby lowering the number of packets that must be re-sent.
Reliability
Although reliability is always important, some applications have requirements that exceed typical
needs. Financial services, securities exchanges, and emergency, police, and military operations are
examples of organizations that require nearly 100 percent uptime for critical applications. These situations imply a requirement for a high level of hardware and topological redundancy. Determining the cost of any downtime is essential for assessing the relative importance of the network’s reliability.
Technical Requirements: Maximum Offered Traffic
The goal of every WAN design should be to optimize link performance in terms of offered traffic,
link utilization, and response time. To optimize link performance, the designer must balance
between end-user and network manager requirements, which are usually diametrically opposed.
End users usually require minimum application response times over a WAN link, whereas the
network manager’s goal is to maximize the link utilization; WAN resources have finite capacity.
Response time problems typically affect only users. For example, it probably does not matter to
the network manager if query results are returned 120 ms sooner rather than later. Response time
is a thermometer of usability for users. Users perceive the data processing experience in terms of
how quickly they can get their screen to update. They view the data processing world in terms of
response time and do not usually care about link utilization. The graphs in Figure 5-11 illustrate
the response time and link utilization relative to the offered traffic. The response time increases
with the offered traffic, until it reaches an unacceptable point for the end user. Similarly, the link
utilization increases with the offered traffic to the point that the link becomes saturated. The
designer’s goal is to determine the maximum offered traffic that is acceptable to both the end user
and the network manager.
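The shape of the response-time curve can be approximated with a simple M/M/1 queueing model. The model choice and the 10-ms service time are my assumptions for illustration; the book does not prescribe a model:

```python
# M/M/1 approximation of response time versus link utilization:
# mean response time = service_time / (1 - utilization), which grows
# without bound as the link approaches saturation.
def mm1_response_time(service_time_ms, utilization):
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_ms / (1 - utilization)

for rho in (0.1, 0.5, 0.75, 0.9):
    print(rho, round(mm1_response_time(10.0, rho), 1))
# Response time roughly doubles at 50% utilization and is 10x the
# unloaded value at 90% -- the knee the designer must stay below.
```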
Figure 5-11  Determining the Maximum Offered Traffic (two curves against offered traffic: response time from the user’s view, and link utilization from the network manager’s view, annotated with the points where additional WAN capacity planning, purchasing, and increases become critical)
However, planning for additional WAN capacity should occur much earlier than the critical
point—usually at about 50% link utilization. Additional bandwidth purchasing should start at
about 60% utilization; if the link utilization reaches 75%, increasing the capacity is critical.
Technical Requirements: Bandwidth
KEY POINT: Bandwidth is the amount of data transmitted or received per unit time, such as 100 Mbps.
In a qualitative sense, the required bandwidth is proportional to the data’s complexity for a given
level of system performance. For example, downloading a photograph in 1 second takes more
bandwidth than downloading a page of text in 1 second. Large sound files, computer programs,
and animated videos require even more bandwidth for acceptable system performance. One of the
main issues involved in WAN connections is the selection of appropriate technologies that provide
sufficient bandwidth. Table 5-3 illustrates the ranges of bandwidths commonly supported by the
given technologies.
Table 5-3  WAN Physical Media Bandwidths

WAN Media Type | <= 1.5/2 Mbps (Low) | From 1.5/2 Mbps to 45/34 Mbps (Medium) | From 45/34 Mbps to 100 Mbps (High) | From 100 Mbps to 10 Gbps (Higher)
Copper | Serial or asynchronous serial, ISDN, TDM (DS0, E1/T1), X.25, Frame Relay, ADSL | Ethernet, TDM (T3/E3), LRE (up to 15 Mbps), ADSL (8 Mbps downstream) | — | —
Coaxial | Shared bandwidth: 27 Mbps downstream, 2.5 Mbps upstream | | — | —
WAN wireless (2.4 or 5 GHz) | Varies, based on distance and RF quality | | — | —
Fiber | — | — | Fast Ethernet, ATM over SONET/SDH, POS¹ | Gigabit Ethernet, 10 Gigabit Ethernet, ATM over SONET/SDH, POS

¹ POS = Packet over SONET/SDH
Bandwidth is inexpensive in the LAN, where connectivity is typically limited only by hardware,
implementation, and ongoing maintenance costs. In the WAN, bandwidth has typically been the
overriding cost, and delay-sensitive traffic such as voice has remained separate from data.
However, new applications and the economics of supporting them are forcing these conventions
to change.
Evaluating the Cost-Effectiveness of WAN Ownership
In the WAN environment, the following usually represent fixed costs:
■ Equipment purchases, such as modems, channel service unit/data service units, and router interfaces
■ Circuit and service provisioning
■ Network-management tools and platforms
Recurring costs include the monthly circuit fees from the SP and the WAN’s support and maintenance, including any network management center personnel.
From an ownership perspective, WAN links can be thought of in the following three categories:
■ Private: A private WAN uses private transmission systems to connect distant LANs. The owner of a private WAN must buy, configure, and maintain the physical layer connectivity (such as copper, fiber, wireless, and coaxial) and the terminal equipment required to connect locations. This makes private WANs expensive to build, labor-intensive to maintain, and difficult to reconfigure for constantly changing business needs. The advantages of using a private WAN might include higher levels of security and transmission quality.
NOTE When the WAN media and devices are privately owned, transmission quality is not
necessarily improved, nor is reliability necessarily higher.
■ Leased: A leased WAN uses dedicated bandwidth from a carrier company, with either private or leased terminal equipment. The provider provisions the circuit and provides the maintenance. However, the company pays for the allocated bandwidth whether or not it is used, and operating costs tend to be high. Some examples include TDM and SONET circuits.
■ Shared: A shared WAN shares the physical resources with many users. Carriers offer a variety of circuit- or packet-switching transport networks, such as MPLS and Frame Relay. The provider provisions the circuit and provides the maintenance. Linking LANs and private WANs into shared network services is a trade-off among cost, performance, and security. An ideal design optimizes the cost advantages of shared network services with a company’s performance and security requirements.
NOTE Circuits often span regional or national boundaries, meaning that several SPs handle a
connection in the toll network. In these cases, devices the subscriber owns (private) and devices
the carrier leases to or shares with the subscriber determine the path.
Optimizing Bandwidth in a WAN
It is expensive to transmit data over a WAN. Therefore, one of many different techniques—such
as data compression, bandwidth combination, tuning window size, congestion management
(queuing and scheduling), congestion avoidance, and traffic shaping and policing—can be used to
optimize bandwidth usage and improve overall performance. The following sections describe
these techniques.
Data Compression
KEY POINT  Compression is the reduction of data size to save transmission time.
Compression enables more efficient use of the available WAN bandwidth, which is often limited
and is generally a bottleneck. Compression allows higher throughput because it squeezes packet
size and therefore increases the amount of data that can be sent through a transmission resource in
a given time period. Compression can be of an entire packet, of the header only, or of the payload
only. Payload compression is performed on a Layer 2 frame’s payload and therefore compresses
the entire Layer 3 packet.
You can easily measure the success of these solutions using compression ratio and platform
latency. However, although compression might seem like a viable WAN bandwidth optimization
feature, it might not always be appropriate. Cisco IOS software compression support includes the
following data software compression types:
■ FRF.9 Frame Relay payload compression
■ Link Access Procedure, Balanced (LAPB) payload compression, using either the Lempel-Ziv Stack (LZS) algorithm, commonly referred to as Stacker (STAC), or the Predictor algorithm
■ HDLC using LZS
■ X.25 payload compression of encapsulated traffic
■ PPP using Predictor
■ Van Jacobson header compression for TCP/IP (conforms to RFC 1144)
■ Microsoft Point-to-Point Compression
Compression Techniques
The basic function of data compression is to reduce the size of a frame of data to be transmitted
over a network link. Data compression algorithms use two types of encoding techniques, statistical
and dictionary:
■ Statistical compression, which uses a fixed, usually nonadaptive encoding method, is best applied to a single application where the data is relatively consistent and predictable. Because the traffic on internetworks is neither consistent nor predictable, statistical algorithms are usually not suitable for data compression implementations on routers.
■ Dictionary compression, exemplified by the Lempel-Ziv algorithm, is based on a dynamically encoded dictionary that replaces a continuous stream of characters with codes. The symbols represented by the codes are stored in memory in a dictionary-style list. This approach is more responsive to variations in data than statistical compression.
Cisco internetworking devices use the Stacker (abbreviated as STAC) and Predictor data
compression algorithms. Developed by STAC Electronics, STAC is based on the Lempel-Ziv
algorithm. The Cisco IOS software uses an optimized version of STAC that provides good
compression ratios but requires many CPU cycles to perform compression.
The Predictor compression algorithm tries to predict the next sequence of characters in the data
stream by using an index to look up a sequence in the compression dictionary. It then examines
the next sequence in the data stream to see whether it matches. If so, that sequence replaces the
looked-up sequence in the dictionary. If not, the algorithm locates the next character sequence in
the index, and the process begins again. The index updates itself by hashing a few of the most
recent character sequences from the input stream.
The Predictor data compression algorithm was obtained from the public domain and optimized by
Cisco engineers. It uses CPU cycles more efficiently than STAC does, but it also requires more
memory.
Real-Time Transport Protocol and Compression
Real-Time Transport Protocol (RTP) is used for carrying packetized audio and video traffic over
an IP network. RTP is not intended for data traffic, which uses TCP or User Datagram Protocol.
RTP provides end-to-end network transport functions intended for applications that have real-time
transmission requirements such as audio, video, or simulation data over multicast or unicast
network services. Because RTP header compression (cRTP) compresses the combined 40-byte IP/UDP/RTP header of a voice packet to 2 or 4 bytes, it offers significant bandwidth savings. cRTP is also referred to as Compressed Real-Time Transport Protocol.
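The arithmetic behind that saving is easy to sketch. The short Python calculation below is illustrative only; the 50-packets-per-second rate and 20-byte payload are assumptions for a typical G.729 stream, not figures from the text:

```python
# Per-stream Layer 3 voice bandwidth with and without cRTP (sketch).
PACKETS_PER_SECOND = 50     # assumed G.729 stream: one packet per 20 ms
PAYLOAD_BYTES = 20          # assumed G.729 payload (20 ms of audio)
IP_UDP_RTP_BYTES = 40       # 20 IP + 8 UDP + 12 RTP, the header cRTP compresses
CRTP_BYTES = 2              # compressed header (2 bytes; 4 with UDP checksums)

def stream_kbps(header_bytes):
    """Layer 3 bandwidth of one voice stream, in kbps."""
    return (header_bytes + PAYLOAD_BYTES) * 8 * PACKETS_PER_SECOND / 1000

print(stream_kbps(IP_UDP_RTP_BYTES))   # 24.0 kbps with uncompressed headers
print(stream_kbps(CRTP_BYTES))         # 8.8 kbps with cRTP
```

Saving roughly 15 kbps per call matters most on low-speed WAN links, which is where cRTP is typically deployed.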
Hardware-assisted data compression achieves the same goal as software-based data compression,
except that it accelerates compression rates by offloading the task from the main CPU to
specialized compression circuits. Compression is implemented in compression hardware that is
installed in a system slot.
Impact of Compression and Encryption on Router Performance
System performance can be affected when compression or encryption is performed in software
rather than hardware. Perform the following operations to determine whether these services are
stressing a router’s CPU:
■ Use the show processes Cisco IOS software command to obtain a baseline reading before enabling encryption or compression.
■ Enable the service, and use the show processes command again to assess the difference.
Cisco recommends that you disable compression or encryption if the router CPU load exceeds 40
percent, and that you disable compression if encryption is enabled. Also, do not enable
compression on your routers if the files being sent across the network are already compressed
(such as zip files).
Bandwidth Combination
PPP is commonly used to establish a direct connection between two devices; PPP is a Layer 2
protocol for connection over synchronous and asynchronous circuits. For example, PPP is used
when connecting computers using serial cables, phone lines, trunk lines, cellular telephones,
specialized radio links, or fiber-optic links. As mentioned earlier, ISPs use PPP for customer dialup access to the Internet. An encapsulated form of PPP (PPPoE or PPPoA) is commonly used in
a similar role with DSL Internet service.
Multilink PPP (MLP) logically connects multiple links between two systems, as needed, to
provide extra bandwidth. The bandwidths of two or more physical communication links, such as
analog modems, ISDN, and other analog or digital links, are logically aggregated, resulting in an
increase in overall throughput. MLP is based on the IETF standard RFC 1990, The PPP Multilink
Protocol (MP).
Window Size
KEY POINT  Window size is the maximum number of frames (or amount of data) the sender can transmit before it must wait for an acknowledgment. The current window is defined as the number of frames (or amount of data) that can be sent at the current time; this is always less than or equal to the window size.
Window size is an important tuning factor for achieving high throughput on a WAN link. The
acknowledgment procedure confirms the correct delivery of the data to the recipient.
Acknowledgment procedures can be implemented at any protocol layer. They are particularly
important in a protocol layer that provides reliability, such as hop-by-hop acknowledgment in a
reliable link protocol or end-to-end acknowledgment in a transport protocol (for example, TCP).
This form of data acknowledgment provides a means of self-clocking the network, such that a
steady-state flow of data between the connection’s two endpoints is possible.
For example, if the TCP window size is set to 8192 octets, the sender must stop after sending 8192
octets in the event that the receiver does not send an acknowledgment. This might be unacceptable
for long (high-latency) WAN links with significant delays, in which the transmitter would waste
the majority of its time waiting. The more acknowledgments (because of a smaller window size)
and the longer the distance, the lower the throughput. Therefore, on highly reliable WAN links that
do not require many acknowledgments, the window size should be adjusted to a higher value to
enable maximum throughput. However, the risk is frequent retransmissions in the case of poor-quality links, which can dramatically reduce the throughput. Adjustable windows and equipment that can adapt to line conditions are strongly recommended.
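The throughput ceiling described here can be sketched as a one-line calculation: at most one full window of data is delivered per round trip. The function and round-trip times below are illustrative assumptions:

```python
# Upper bound on throughput when exactly one window is acknowledged per
# round trip (a simplification; real TCP ramps the window dynamically).
def max_throughput_bps(window_bytes, rtt_ms):
    """Window size divided by round-trip time, in bits per second."""
    return window_bytes * 8 * 1000 / rtt_ms

# An 8192-octet window over a 100-ms WAN round trip:
print(max_throughput_bps(8192, 100))   # 655360.0 bps, about 0.66 Mbps
# The same window over a 500-ms satellite-like round trip:
print(max_throughput_bps(8192, 500))   # 131072.0 bps, about 0.13 Mbps
```

The numbers show why high-latency links waste capacity unless the window is enlarged: the sender idles while waiting for acknowledgments.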
Selective ACK
The TCP selective acknowledgment mechanism, which is defined in RFC 2018, TCP Selective
Acknowledgment Options, helps overcome the limitations of the TCP acknowledgments. TCP
performance can be affected if multiple packets are lost from one window of data; a TCP sender
learns about only one lost packet per round trip. With selective acknowledgment enabled (using
the ip tcp selective-ack global configuration command in Cisco IOS), the receiver returns
selective acknowledgment packets to the sender, informing the sender about data that has been
received. The sender can then resend only the missing data segments.
This feature is used only when multiple packets drop from a TCP window. Performance is not
affected when the feature is enabled but not used.
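A toy model makes the difference concrete. This is purely illustrative (not the real TCP state machine); it counts only the round trips spent repairing losses after they are detected:

```python
# Without SACK the sender discovers one hole per round trip; with SACK
# the receiver reports every hole at once and all gaps are resent together.
def recovery_round_trips(lost_segments, sack_enabled):
    """Round trips spent learning about and resending the lost segments."""
    if lost_segments == 0:
        return 0
    return 1 if sack_enabled else lost_segments

print(recovery_round_trips(3, sack_enabled=False))   # 3 round trips
print(recovery_round_trips(3, sack_enabled=True))    # 1 round trip
```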
Queuing to Improve Link Utilization
To improve link utilization, Cisco has developed QoS techniques to avoid temporary congestion
and to provide preferential treatment for critical applications. QoS mechanisms such as queuing
and scheduling, policing (limiting) the access rate, and traffic shaping enable network operators to
deploy and operate large-scale networks that efficiently handle both bandwidth-hungry applications (such as multimedia and web traffic) and mission-critical applications (such as host-based applications).
KEY POINT  QoS does not create bandwidth; rather, QoS optimizes the use of existing resources, including bandwidth.
If WAN links are constantly congested, either the network requires greater bandwidth or compression should be used.
QoS queuing strategies are unnecessary if WAN links are never congested.
Congestion management includes two separate processes: queuing, which separates traffic into
various queues or buffers, and scheduling, which decides from which queue traffic is to be sent
next.
Queuing allows network administrators to manage the varying demands of applications on
networks and routers. When positioning the role of queuing in networks, the primary issue is the
duration of congestion.
KEY POINT  Queuing is configured on outbound interfaces and is appropriate for cases in which WAN links are congested from time to time.
Following are the two types of queues:
■ Hardware queue: Uses a FIFO strategy, which is necessary for the interface drivers to transmit packets one by one. The hardware queue is sometimes referred to as the transmit queue or TxQ.
■ Software queue: Schedules packets into the hardware queue based on the QoS requirements.
The following sections discuss these types of queuing: weighted fair queuing (WFQ), priority queuing (PQ), custom queuing (CQ), class-based WFQ (CBWFQ), and low-latency queuing (LLQ).
WFQ
WFQ addresses the problems inherent in FIFO queuing schemes. WFQ assesses the size
of each message and ensures that high-volume senders do not force low-volume senders out of the
queue. WFQ sorts different traffic flows into separate streams, or conversation sessions, and
alternately dispatches them. The algorithm also solves the problem of round-trip delay variability.
When high-volume conversations are active, their transfer rates and interarrival periods are quite
predictable.
WFQ is enabled by default on most low-speed serial interfaces (with speeds at or below 2.048
Mbps) on Cisco routers. (Faster links use a FIFO queue by default.) This makes it very easy to
configure (there are few adjustable parameters) but does not allow much control over which traffic
takes priority.
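The idea behind fair queuing can be sketched with virtual finish times: each packet is stamped with its flow's running byte total, and packets are dispatched in stamp order, so a small interactive packet is not stuck behind a long burst from a bulk flow. This illustrates the concept only; it is not Cisco's WFQ implementation, and the flow names are hypothetical:

```python
# Minimal fair-queuing sketch using per-flow virtual finish times.
def wfq_order(packets):
    """packets: (flow_id, size_bytes) tuples in arrival order.
    Returns flow_ids in dispatch order (smallest finish time first)."""
    finish = {}                      # running virtual finish time per flow
    stamped = []
    for seq, (flow, size) in enumerate(packets):
        finish[flow] = finish.get(flow, 0) + size
        stamped.append((finish[flow], seq, flow))   # seq breaks ties fairly
    return [flow for _, _, flow in sorted(stamped)]

# A bulk flow ("ftp") and a low-volume interactive flow ("telnet"):
order = wfq_order([("ftp", 1500), ("ftp", 1500), ("ftp", 1500), ("telnet", 60)])
print(order)   # ['telnet', 'ftp', 'ftp', 'ftp']: the small packet is not starved
```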
PQ
PQ is useful for time-sensitive, mission-critical protocols (such as IBM Systems Network
Architecture traffic). PQ works by establishing four interface output queues (high, medium,
normal, and low), each serving a different level of priority; queues are configurable for queue type,
traffic assignment, and size. The dispatching algorithm begins servicing a queue only when all
higher-priority queues are empty. This way, PQ ensures that the most important traffic placed in
the higher-level queues gets through first, at the expense of all other traffic types. As shown in
Figure 5-12, the high-priority queue is always emptied before the lower-priority queues are
serviced. Traffic can be assigned to the various queues based on protocol, port number, or other
criteria. Because priority queuing requires extra processing, you should not recommend it unless
it is necessary.
Figure 5-12
Priority Queuing Has Four Queues; the High-Priority Queue Is Always Emptied First
(The flowchart shows an incoming packet being classified into the high, medium, normal, or low queue; if the selected queue is full, the packet is dropped, and the dispatcher services a queue only when all higher-priority queues are empty.)
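The dispatching rule in Figure 5-12 can be sketched in a few lines; the queue names follow the four PQ levels, and the packet labels are hypothetical:

```python
# Strict-priority dispatch: a queue is served only when every
# higher-priority queue is empty.
from collections import deque

QUEUES = ["high", "medium", "normal", "low"]   # PQ's four output queues

def dispatch(queues):
    """Return the next packet to transmit, or None if all queues are empty."""
    for name in QUEUES:
        if queues[name]:
            return queues[name].popleft()
    return None

q = {name: deque() for name in QUEUES}
q["low"].append("bulk-1")
q["high"].extend(["sna-1", "sna-2"])
print(dispatch(q), dispatch(q), dispatch(q))   # sna-1 sna-2 bulk-1
```

The sketch also makes the hazard visible: as long as "high" keeps receiving packets, "low" is never serviced at all.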
CQ
CQ is a different approach for prioritizing traffic. Like PQ, traffic can be assigned to various
queues based on protocol, port number, or other criteria. However, CQ handles the queues in a
round-robin fashion.
CQ works by establishing up to 16 interface output queues that are configurable in terms of type,
traffic assignment, and size. CQ specifies the transmission window size of each queue in bytes.
When the appropriate number of frames is transmitted from a queue, the transmission window size
is reached, and the next queue is checked. CQ is a less drastic solution for mission-critical
applications than PQ because it guarantees some level of service to all traffic.
CQ is fairer than PQ, but PQ is more powerful for prioritizing a mission-critical protocol. For
example, with CQ, you can prioritize a particular protocol by assigning it more queue space;
however, it will never monopolize the bandwidth. Figure 5-13 illustrates the custom queuing
process.
Figure 5-13
Custom Queuing Services Each Queue in a Round-Robin Fashion
(The flowchart shows packets being dispatched from the current queue until its transmission window size is reached or the queue has no more packets, at which point the scheduler moves to the next queue.)
Like PQ, CQ causes the router to perform extra processing. Do not recommend CQ unless you
have determined that one or more protocols need special processing.
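The byte-count round robin described above can be sketched as follows; the queue names and byte counts are hypothetical, and the model glosses over details of the real IOS implementation:

```python
# Custom-queuing sketch: each queue gets a byte count (its transmission
# window); the scheduler drains packets until the window is used up,
# then moves to the next queue in round-robin order.
from collections import deque

def cq_dispatch(queues, byte_counts, rounds=1):
    """queues: {name: deque of (pkt, size_bytes)}. Returns dispatch order."""
    sent = []
    for _ in range(rounds):
        for name, limit in byte_counts.items():
            budget = limit
            while queues[name] and budget > 0:
                pkt, size = queues[name].popleft()
                sent.append(pkt)
                budget -= size       # keep sending until the window is reached
    return sent

qs = {"sna": deque([("s1", 1000), ("s2", 1000), ("s3", 1000)]),
      "web": deque([("w1", 1000), ("w2", 1000)])}
# SNA gets twice the byte count of web traffic, but web is never starved:
print(cq_dispatch(qs, {"sna": 2000, "web": 1000}))   # ['s1', 's2', 'w1']
```

Assigning a queue a larger byte count gives its protocol a bigger share, yet every queue still gets a turn each cycle, which is exactly the contrast with PQ.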
CBWFQ
CBWFQ allows you to define a traffic class and then assign characteristics to it. For example, you
can designate the minimum bandwidth delivered to the class during congestion. CBWFQ extends
the standard WFQ functionality to provide support for user-defined traffic classes. With CBWFQ,
traffic classes are defined based on match criteria, including protocols, access control lists (ACL),
and input interfaces. Packets that satisfy the match criteria for a class constitute the traffic for that
class. A queue is reserved for each class, and traffic that belongs to a class is directed to the queue
for that class.
After a class has been defined according to its match criteria, you can assign it characteristics,
including bandwidth, weight, and maximum queue packet limit. The bandwidth assigned to a class
is the guaranteed bandwidth delivered to the class during times of congestion.
The queue packet limit is the maximum number of packets allowed to accumulate in the queue for
the class. Packets that belong to a class are subject to the bandwidth and queue limits that
characterize the class.
For CBWFQ, the weight for a packet that belongs to a specific class derives from the bandwidth
assigned to the class during configuration. Therefore, the bandwidth assigned to the packets of a
class determines the order in which packets are sent. All packets are serviced fairly, based on
weight; no class of packets may be granted strict priority. This scheme poses problems for voice
traffic, which is largely intolerant of delay and variation in delay.
LLQ
LLQ brings strict PQ to CBWFQ; it is a combination of PQ and CBWFQ. Strict PQ allows delay-sensitive data such as voice to be dequeued and sent first (before packets in other queues are dequeued), giving delay-sensitive data preferential treatment over other traffic. Without LLQ, CBWFQ provides WFQ based on defined classes with no strict priority queue available for real-time traffic.
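A combined sketch shows the relationship: a strict-priority queue sits in front of weighted class queues. The per-pass packet shares stand in, loosely, for CBWFQ bandwidth guarantees; real LLQ derives weights from configured bandwidth and also polices the priority queue so it cannot starve the data classes. Names and numbers here are hypothetical:

```python
# LLQ sketch: drain the priority queue first, then serve the
# CBWFQ-style classes according to their weights.
from collections import deque

def llq_dispatch(voice, classes, weights, n):
    """Dequeue up to n packets. voice: strict-priority deque;
    classes: {name: deque}; weights: {name: packets per pass}."""
    out = []
    while len(out) < n:
        if voice:                            # delay-sensitive traffic always first
            out.append(voice.popleft())
            continue
        progress = False
        for name, share in weights.items():
            for _ in range(share):
                if classes[name] and len(out) < n:
                    out.append(classes[name].popleft())
                    progress = True
        if not progress:                     # all queues are empty
            break
    return out

voice = deque(["v1", "v2"])
classes = {"critical": deque(["c1", "c2", "c3"]), "best-effort": deque(["b1"])}
print(llq_dispatch(voice, classes, {"critical": 2, "best-effort": 1}, 5))
# ['v1', 'v2', 'c1', 'c2', 'b1']: voice drains first, then weighted classes
```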
Congestion Avoidance
Congestion-avoidance techniques monitor network traffic loads so that congestion can be
anticipated and avoided before it becomes problematic. If congestion-avoidance techniques are
not used and interface queues become full, packets trying to enter the queue are discarded,
regardless of what traffic they hold. This is known as tail drop—packets arriving after the tail of
the queue are dropped.
KEY POINT  Congestion-avoidance techniques allow packets from streams identified as being eligible for early discard (those with lower priority) to be dropped when the queue is getting full.
Congestion avoidance works well with TCP-based traffic. TCP has a built-in flow control
mechanism so that when a source detects a dropped packet, the source slows its transmission.
Weighted random early detection (WRED) is the Cisco implementation of the random early
detection (RED) mechanism. RED randomly drops packets when the queue gets to a specified
level (when it is nearing full). RED is designed to work with TCP traffic: When TCP packets are
dropped, TCP’s flow-control mechanism slows the transmission rate and then progressively begins
to increase it again. Therefore, RED results in sources slowing down and hopefully avoiding
congestion.
WRED extends RED by using the IP precedence bits in the IP packet header to determine which
traffic should be dropped; the drop-selection process is weighted by the IP precedence. Similarly,
Differentiated Services Code Point (DSCP)–based WRED uses the DSCP value in the IP packet
header in the drop-selection process. WRED selectively discards lower-priority traffic when the
interface begins to get congested.
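The RED drop curve, and the weighting WRED adds to it, can be sketched as follows; the thresholds and mark probability are illustrative values, not IOS defaults:

```python
# RED drop probability ramps linearly between a minimum and a maximum
# average-queue threshold; beyond the maximum, everything is dropped.
def wred_drop_probability(avg_queue, min_th, max_th, max_prob=0.1):
    """Drop probability for the current average queue depth."""
    if avg_queue < min_th:
        return 0.0                  # below the minimum threshold: never drop
    if avg_queue >= max_th:
        return 1.0                  # above the maximum threshold: tail drop
    return max_prob * (avg_queue - min_th) / (max_th - min_th)

# The "weighted" part: lower-precedence traffic gets a lower minimum
# threshold, so it is discarded earlier as the queue fills.
thresholds = {0: (20, 40), 5: (32, 40)}     # IP precedence -> (min, max)
for prec, (lo, hi) in thresholds.items():
    print(prec, wred_drop_probability(30, lo, hi))
# At an average depth of 30, precedence 0 already sees drops (0.05)
# while precedence 5 sees none (0.0).
```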
Starting in IOS Release 12.2(8)T, Cisco implemented an extension to WRED called explicit
congestion notification (ECN). ECN is defined in RFC 3168, The Addition of Explicit Congestion
Notification (ECN) to IP, and it uses the lower 2 bits in the ToS byte. Devices use these two ECN
bits to communicate that they are experiencing congestion. When ECN is in use, it marks packets
as experiencing congestion (rather than dropping them) if the senders are ECN-capable and the
queue has not yet reached its maximum threshold. If the queue does reach the maximum, packets
are dropped, as they would be without ECN.
Traffic Shaping and Policing to Rate-Limit Traffic Classes
Traffic shaping and traffic policing (the latter also referred to as committed access rate), illustrated in Figure 5-14, are similar mechanisms in that both inspect traffic and take action based on the various characteristics of that traffic. These characteristics can be based on whether the traffic is over or under a given rate, or on some bits in the IP packet header, such as the DSCP or IP precedence.
Figure 5-14
Policing Drops Excess Traffic, Whereas Shaping Delays It
(The graphs show traffic exceeding the maximum rate over time: policing clips the excess at the maximum rate, whereas shaping buffers the excess and releases a smoothed flow.)
KEY POINT  Policing either discards the packet or modifies some aspect of it, such as its IP precedence, when the policing agent determines that the packet meets a given criterion.
For example, an enterprise’s policy management scheme could deem the traffic generated by a
particular resource (such as the first 100 kbps) as first-class traffic, so it receives a top priority
marking. Traffic above the first 100 kbps generated by that same resource could drop to a lower
priority class or be discarded altogether. Similarly, all incoming streaming Moving Picture Experts
Group (MPEG)-1 Audio Layer 3 (MP3) traffic could be limited to, for example, 10 percent of all
available bandwidth so that it does not starve other applications.
KEY POINT  By comparison, traffic shaping attempts to adjust the transmission rate of packets that match a certain criterion.
Topologies that have high-speed links (such as a central site) feeding into lower-speed links (such
as a remote or branch site) often experience bottlenecks at the remote end because of the speed
mismatch. Traffic shaping helps eliminate the bottleneck situation by throttling back traffic
volume at the source end. It reduces the flow of outbound traffic from a router interface by holding
packets in a buffer and releasing them at a preconfigured rate; routers can be configured to transmit
at a lower bit rate than the interface bit rate.
One common use of traffic shaping in the enterprise is to smooth the flow of traffic across a single
link toward a service provider transport network to ensure compliance with the traffic contract,
avoiding service provider policing at the receiving end. Traffic shaping reduces the bursty nature
of the transmitted data and is most useful when the contract rate is less than the line rate. Traffic
shaping can also respond to signaled congestion from the transport network when the traffic rates
exceed the contract rate.
Token Bucket
A term you might encounter related to traffic shaping and policing is a token bucket. In the token
bucket analogy, tokens are put into the bucket at a certain rate, and the bucket itself has a specified
capacity. If the bucket fills to capacity, newly arriving tokens are discarded. Each token is
permission for the source to send a certain number of bits into the network. To send a packet, the
shaper or policer must remove from the bucket a number of tokens equal in representation to the
packet size.
If not enough tokens are in the bucket to send a packet, the packet either waits until the bucket has
enough tokens or the packet is discarded or marked down. If the bucket is already full of tokens,
incoming tokens overflow and are not available to future packets. Consequently, at any time, the
largest burst a source can send into the network is roughly proportional to the size of the bucket.
Note that the token bucket mechanism used for traffic shaping has both a token bucket and a data
buffer, or queue; if it did not have a data buffer, it would be a policer. For traffic shaping, arriving
packets that cannot be sent immediately are delayed in the data buffer.
The information in this sidebar was derived from the Cisco IOS Quality of Service Solutions
Configuration Guide, Release 12.2, available at http://www.cisco.com/en/US/products/sw/
iosswrel/ps1835/products_configuration_guide_book09186a00800c5e31.html.
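The token bucket analogy translates almost directly into code. The following sketch, with a hypothetical rate and bucket size, behaves as a policer; a shaper would queue the nonconforming packet instead of rejecting it:

```python
# Token-bucket sketch: tokens accumulate at the contract rate up to the
# bucket capacity; a conforming packet removes tokens equal to its size.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # token fill rate in bytes per second
        self.capacity = burst_bytes       # bucket size bounds the largest burst
        self.tokens = float(burst_bytes)  # bucket starts full
        self.last = 0.0

    def conforms(self, size_bytes, now):
        """True if the packet can be sent; consumes tokens on success."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return True
        return False                      # a policer drops; a shaper would delay

tb = TokenBucket(rate_bps=8000, burst_bytes=1500)   # 1000 bytes/s, 1500-byte bucket
print(tb.conforms(1500, now=0.0))   # True: the full bucket absorbs the burst
print(tb.conforms(1500, now=0.5))   # False: only 500 tokens have refilled
print(tb.conforms(1500, now=1.5))   # True: 1500 tokens after another second
```

The example shows the key property from the sidebar: the largest burst a source can send is roughly proportional to the bucket size, while sustained throughput is bounded by the token fill rate.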
Using WAN Technologies
Numerous WAN technologies exist today, and new technologies are constantly emerging. The
most appropriate WAN selection usually results in high efficiency and leads to customer
satisfaction. The network designer must be aware of all possible WAN design choices while taking
into account customer requirements. This section describes the use of various WAN technologies,
including the following:
■ Remote access
■ VPNs
■ WAN backup
■ The Internet as a backup WAN
Remote Access Network Design
When you’re designing remote-access networks for teleworkers and traveling employees, the type
of connection drives the technology selection, such as whether to choose a data link or a network
layer connection. By analyzing the application requirements and service provider offerings, you
can choose the most suitable of a wide range of remote-access technologies. Typical remote-access requirements include the following:
■ Data link layer WAN technologies from remote sites to the Enterprise Edge network. Investment and operating costs are the main issues.
■ Low- to medium-volume data file transfer and interactive traffic.
■ Increasing need to support voice services.
Remote access to the Enterprise Edge network is typically provided over permanent connections
for remote teleworkers through a dedicated circuit or a provisioned service, or on-demand
connections for traveling workers.
Remote-access technology selections include dialup (both analog and digital), DSL, cable, and
hot-spot wireless service.
Dial-on-Demand Routing
Dial-on-demand routing (DDR) is a technique whereby a router can dynamically initiate and close a circuit-switched session as transmitting end stations demand. A router is configured to
consider certain traffic interesting (such as traffic from a particular protocol) and other traffic
uninteresting. When the router receives interesting traffic destined for a remote network, a circuit
is established, and the traffic is transmitted normally. If the router receives uninteresting traffic and
a circuit is already established, that traffic is also transmitted normally. The router maintains an
idle timer that is reset only when it receives interesting traffic. If the router does not receive any
interesting traffic before the idle timer expires, the circuit is terminated. Likewise, if the router
receives uninteresting traffic and no circuit exists, the router drops the traffic.
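The decision logic just described can be sketched as follows; the timeout value, method names, and packet outcomes are illustrative, not IOS behavior in detail:

```python
# Dial-on-demand sketch: interesting traffic brings the circuit up and
# resets the idle timer; uninteresting traffic rides an existing circuit
# but never dials or sustains one.
IDLE_TIMEOUT = 120.0   # seconds; a typical configurable value, not a default

class DdrLink:
    def __init__(self):
        self.up = False
        self.last_interesting = None

    def packet(self, interesting, now):
        """Return 'sent' or 'dropped' and manage the circuit state."""
        self._check_idle(now)
        if interesting:
            self.up = True                  # dial the circuit if needed
            self.last_interesting = now     # reset the idle timer
            return "sent"
        return "sent" if self.up else "dropped"

    def _check_idle(self, now):
        if self.up and now - self.last_interesting >= IDLE_TIMEOUT:
            self.up = False                 # idle timer expired; tear down

link = DdrLink()
print(link.packet(interesting=False, now=0))    # dropped: no circuit exists
print(link.packet(interesting=True, now=1))     # sent: circuit dialed
print(link.packet(interesting=False, now=60))   # sent: circuit still up
print(link.packet(interesting=False, now=200))  # dropped: idle timer expired
```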
VPN Design
KEY POINT  A VPN is connectivity deployed on a shared infrastructure with the same policies, security, and performance as a private network, but typically with lower total cost of ownership.
The infrastructure used can be the Internet, an IP infrastructure, or any WAN infrastructure, such
as a Frame Relay network or an ATM WAN.
The following sections discuss these topics:
■ VPN applications
■ VPN connectivity options
■ VPN benefits
VPN Applications
VPNs can be grouped according to their applications:
■ Access VPN: Access VPNs provide access to a corporate intranet (or extranet) over a shared infrastructure and have the same policies as a private network. Remote-access connectivity is through dial-up, ISDN, DSL, wireless, or cable technologies. Access VPNs enable businesses to outsource their dial or other broadband remote access connections without compromising their security policy.
The two access VPN architectures are client-initiated and Network Access Server (NAS)–initiated connections. With client-initiated VPNs, users establish an encrypted IP tunnel from their PCs across an SP’s shared network to their corporate network. With NAS-initiated VPNs, the tunnel is initiated from the NAS; in this scenario, remote users dial into the local SP point of presence (POP), and the SP initiates a secure, encrypted tunnel to the corporate network.
■ Intranet VPN: Intranet VPNs link remote offices by extending the corporate network across a shared infrastructure. The intranet VPN services are typically based on extending the basic remote-access VPN to other corporate offices across the Internet or across the SP’s IP backbone. Note that there are no performance guarantees with VPNs across the Internet—no one organization is responsible for the performance of the Internet. The main benefits of intranet VPNs are reduced WAN infrastructure needs, which result in lower ongoing leased-line, Frame Relay, or other WAN charges, and operational savings.
■ Extranet VPN: Extranet VPNs extend the connectivity to business partners, suppliers, and customers across the Internet or an SP’s network. The security policy becomes very important at this point; for example, the company does not want a hacker to spoof any orders from a business partner. The main benefits of an extranet VPN are the ease of securely connecting a business partner as needed, and the ease of severing the connection with the business partner (partner today, competitor tomorrow), which becomes as simple as shutting down the VPN tunnel. Very granular rules can be created for what traffic is shared with the peer network in the extranet.
VPN Connectivity Options
The following sections describe three connectivity options that provide IP access through VPNs:
■ Overlay VPNs
■ Virtual private dial-up networks (VPDN)
■ Peer-to-peer VPNs
Overlay VPNs
With overlay VPNs, the provider’s infrastructure provides virtual point-to-point links between
customer sites. Overlay VPNs are implemented with a number of technologies, including
traditional Layer 1 and Layer 2 technologies (such as ISDN, SONET/SDH, Frame Relay, and
ATM) overlaid with modern Layer 3 IP-based solutions (such as Generic Routing Encapsulation
[GRE] and IPsec).
From the Layer 3 perspective, the provider network is invisible: The customer routers are linked
with emulated point-to-point links. The routing protocol runs directly between routers that
establish routing adjacencies and exchange routing information. The provider is not aware of
customer routing and does not have any information about customer routes. The provider’s only
responsibility is the point-to-point data transport between customer sites. Although they are well
known and easy to implement, overlay VPNs are more difficult to operate and have higher
maintenance costs for the following reasons:
■ Every individual virtual circuit must be provisioned.
■ Optimum routing between customer sites requires a full mesh of virtual circuits between sites.
■ Bandwidth must be provisioned on a site-to-site basis.
The concept of VPNs was introduced early in the emergence of data communications with
technologies such as X.25 and Frame Relay. These technologies use virtual circuits to establish
the end-to-end connection over a shared SP infrastructure. In the case of overlay VPNs, emulated
point-to-point links replace the dedicated links, and the provider infrastructure is statistically
shared. Overlay VPNs enable the provider to offer the connectivity for a lower price and result in
lower operational costs.
Figure 5-15 illustrates an overlay VPN. The router on the left (in the Enterprise Edge module) has
one physical connection to the SP, with two virtual circuits provisioned. Virtual Circuit 1 (VC #1)
provides connectivity to the router on the top right. Virtual Circuit 2 (VC #2) provides connectivity
to the branch office router on the bottom right.
Figure 5-15
Overlay VPNs Extend the Enterprise Network
(The figure shows an Enterprise Edge router with one physical connection to a provider edge device, a Frame Relay switch. Virtual Circuit #1 crosses the provider core to a remote-access site, and Virtual Circuit #2 reaches a branch office. Overlay VPNs extend the enterprise IP network across a shared WAN, replacing dedicated point-to-point links with emulated point-to-point links over a common infrastructure.)
VPDNs
VPDNs enable an enterprise to configure secure networks that rely on an ISP for connectivity.
With VPDNs, the customers use a provider’s dial-in (or other type of connectivity) infrastructure
for their private connections. A VPDN can be used with any available access technology. Ubiquity
is important, meaning that VPDNs should work with any access technology, including modem, ISDN,
xDSL, or cable connections.
Chapter 5: Designing Remote Connectivity
The ISP agrees to forward the company’s traffic from the ISP’s POP to a company-run home
gateway. Network configuration and security remain in the client’s control. The SP supplies a
virtual tunnel between the company’s sites using Cisco Layer 2 Forwarding (L2F), Point-to-Point
Tunneling Protocol (PPTP), or IETF Layer 2 Tunneling Protocol (L2TP) tunnels.
Figure 5-16 illustrates a VPDN. In this figure, the ISP terminates the dialup connections at the
L2TP Access Concentrator (LAC) and forwards traffic through dynamically established tunnels to
a remote access server called the L2TP Network Server (LNS). A VPDN provides potential
operations and infrastructure cost savings because a company can outsource its dialup equipment,
thereby avoiding the costs of being in the remote access server business.
Figure 5-16 VPDN for Remote Access
(The figure shows a remote user or branch office using universal access, such as modem, ISDN, xDSL, or cable, to reach the ISP's LAC, with a VPDN tunnel across the WAN to the LNS in the Enterprise Edge DMZ, alongside AAA and CA servers.)
Access VPN connectivity involves the configuration of VPDN tunnels. Following are the two
types of tunnels:
■ The client PC initiates voluntary tunnels. The client dials into the SP network, a PPP session is established, and the user logs on to the SP network. The client then runs the VPN software to establish a tunnel to the network server.
■ Compulsory tunnels require SP participation and awareness, giving the client no influence over tunnel selection. The client still dials in and establishes a PPP session, but the SP (not the client) establishes the tunnel to the network server.
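As an illustration of the compulsory model, an enterprise-run LNS might be configured along the following lines. This is a minimal sketch; the group number, hostnames, and virtual template are placeholders, not values from the text.

```
! Hypothetical L2TP Network Server (LNS) configuration sketch
vpdn enable
vpdn-group 1
 ! Accept dial-in tunnels initiated by the SP's LAC
 accept-dialin
  protocol l2tp
  virtual-template 1
 ! Only build tunnels from the expected access concentrator
 terminate-from hostname LAC1
 local name LNS1
```

The corresponding LAC-side configuration, owned by the SP, would use a request-dialin group pointing at the LNS.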
Peer-to-Peer VPNs
In a peer-to-peer VPN, the provider actively participates in customer routing.
Traditional peer-to-peer VPNs are implemented with packet filters on shared provider edge (PE)
routers, or with dedicated per-customer PE routers. In addition to high maintenance costs for the
packet filter approach or equipment costs for the dedicated per-customer PE-router approach, both
methods require the customer to accept the provider-assigned address space or to use public IP
addresses in the private customer network.
Modern MPLS VPNs provide all the benefits of peer-to-peer VPNs and alleviate most of the peer-to-peer VPN drawbacks, such as the need for common customer addresses. Overlapping addresses,
which are usually the result of companies using private addressing, are one of the major obstacles
to successful peer-to-peer VPN implementations. MPLS VPNs solve this problem by giving each
VPN its own routing and forwarding table in the router, thus effectively creating virtual routers for
each customer.
NOTE RFC 4364, BGP/MPLS IP Virtual Private Networks (VPNs), defines MPLS VPNs.
With MPLS VPNs, networks are learned via static route configuration or with a routing protocol
such as OSPF, EIGRP, Routing Information Protocol (RIP) version 2 (RIPv2), or Border Gateway
Protocol (BGP) from other internal routers. As described in the earlier “MPLS” section, MPLS
uses a label to identify a flow of packets. MPLS VPNs use an additional label to specify the VPN
and the corresponding VPN destination network, allowing for overlapping addresses between VPNs.
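On a Cisco PE router, the per-customer virtual routing and forwarding table described above is configured as a VRF. The following sketch is illustrative only; the VRF name, route distinguisher, and addresses are hypothetical.

```
! Hypothetical PE-router sketch: one VRF (virtual routing table) per customer
ip vrf CUSTOMER_A
 ! Route distinguisher keeps this customer's routes unique,
 ! even if addresses overlap with another VPN
 rd 65000:1
 route-target export 65000:1
 route-target import 65000:1
!
interface Serial0/0
 ! Attach the customer-facing interface to the customer's VRF
 ip vrf forwarding CUSTOMER_A
 ip address 10.1.1.1 255.255.255.252
```

Routes learned on this interface populate only the CUSTOMER_A table, so another customer can use the same 10.1.1.0/30 range without conflict.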
Benefits of VPNs
The benefits of using VPNs include the following:
■ Flexibility: VPNs offer flexibility because site-to-site and remote-access connections can be set up quickly and over existing infrastructure to extend the network to remote users. Extranet connectivity for business partners is also a possibility. A variety of security policies can be provisioned in a VPN, thereby enabling flexible interconnection of different security domains.
■ Scalability: VPNs allow an organization to leverage and extend the classic WAN to more remote and external users. VPNs offer scalability over large areas because IP transport is universally available. This arrangement reduces the number of physical connections and simplifies the underlying structure of a customer’s WAN.
■ Lower network communication cost: Lower cost is a primary reason for migrating from traditional connectivity options to a VPN connection. Reduced dialup and dedicated bandwidth infrastructure and service provider costs make VPNs attractive. Customers can reuse existing links and take advantage of the statistical packet multiplexing features.
WAN Backup Strategies
This section describes various backup options for providing alternative paths for remote access.
WAN links are relatively unreliable compared to LAN links and often are much slower than the
LANs to which they connect. This combination of uncertain reliability, lack of speed, and high
importance makes WAN links good candidates for redundancy to achieve high availability.
Branch offices should experience minimum downtime in case of primary link failure. A backup
connection can be established, either via dialup or by using permanent connections. The main
WAN backup options are as follows:
■ Dial backup routing
■ Permanent secondary WAN link
■ Shadow PVC
The following sections describe these options.
Dial Backup Routing
Dial backup routing is a way of using a dialup service for backup purposes. In this scenario, the
switched circuit provides the backup service for another type of circuit, such as point-to-point or
Frame Relay. The router initiates the dial backup line when it detects a failure on the primary
circuit. The dial backup line provides WAN connectivity until the primary circuit is restored, at
which time the dial backup connection terminates.
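The classic IOS mechanism for this behavior is the backup interface feature. The following sketch assumes an ISDN BRI backing up a Frame Relay primary circuit; the interface names and timers are illustrative.

```
! Hypothetical sketch: ISDN BRI backs up a Frame Relay primary circuit
interface Serial0/0
 encapsulation frame-relay
 ! Use BRI0/0 as the dial backup line for this circuit
 backup interface BRI0/0
 ! Bring the backup up 5 seconds after the primary fails;
 ! tear it down 10 seconds after the primary is restored
 backup delay 5 10
```

Note that the backup interface activates on loss of the physical circuit; detecting an end-to-end failure across a Frame Relay cloud may require other techniques, such as dialer watch or floating static routes.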
Permanent Secondary WAN Link
Deploying an additional permanent WAN link between each remote office and the central office
makes the network more fault-tolerant. This solution offers the following two advantages:
■ Provides a backup link: The backup link is used if a primary link that connects any remote office with the central office fails. Routers automatically route around failed WAN links by using floating static routes and routing protocols, such as EIGRP and OSPF. If one link fails, the router recalculates and sends all traffic through another link, allowing applications to continue and thereby improving application availability.
NOTE A floating static route is one that appears in the routing table only when the primary
route goes away. The administrative distance of the static route is configured to be higher than
the administrative distance of the primary route, and it “floats” above the primary route until the
primary route is no longer available.
■ Increased bandwidth: Both the primary and secondary links can be used simultaneously because they are permanent. The routing protocol automatically performs load balancing between two parallel links with equal costs (or unequal costs if EIGRP is used). The resulting increased bandwidth decreases response times.
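The two mechanisms described above, floating static routes and EIGRP unequal-cost load balancing, can be sketched as follows. The networks, next hop, and autonomous system number are hypothetical.

```
! Hypothetical sketch: floating static route as a backup path
! EIGRP internal routes have an administrative distance of 90;
! a static distance of 250 keeps this route out of the routing
! table until the EIGRP-learned route is lost
ip route 10.2.0.0 255.255.0.0 192.168.100.2 250
!
! Hypothetical sketch: EIGRP unequal-cost load balancing
! over parallel permanent links
router eigrp 100
 network 10.0.0.0
 ! Also use feasible paths with up to twice the best path's metric
 variance 2
```

With variance 1 (the default), EIGRP load-balances only across equal-cost paths, so the secondary link would sit idle until the primary failed.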
Cost is the primary disadvantage of duplicating WAN links to each remote office. For example, in
addition to new equipment, including new WAN router interfaces, a large star network with 20
remote sites might need 20 new virtual circuits.
In Figure 5-17, the connections between the Enterprise Edge and remote sites use permanent
primary and secondary WAN links for redundancy. A routing protocol, such as EIGRP, that
supports load balancing over unequal paths on either a per-packet or per-destination basis is used
to increase the utilization of the backup link.
Figure 5-17 Permanent Secondary WAN Link
(The figure shows the Enterprise Edge central site connected through the service provider to branch offices A and B, with a primary PVC and a backup PVC for each office, all with 512-kbps CIRs, carried on separate WAN interfaces of Routers A and B.)
If the WAN connections are relatively slow (less than 56 kbps), per-packet load balancing should
be used. Load balancing occurs on a per-destination basis when fast switching is enabled, which
is appropriate on WAN connections faster than 56 kbps.
Switching Modes: Process, Fast, and Other Modes
During process switching, the router examines the incoming packet and looks up the Layer 3
address in the routing table, which is located in main memory, to associate this address with a
destination network or subnet. Process switching is a scheduled process performed by the system
processor. Compared to other switching modes, process switching is slow because of the latency
caused by scheduling and the latency within the process itself.
With fast switching, an incoming packet matches an entry in the fast-switching cache (also called
the route cache), which is located in main memory. This cache is populated when the first packet
to the destination is process-switched. Fast switching is done via asynchronous interrupts, which
are handled in real time and result in higher throughput.
Other switching modes are available on some routers (including Autonomous Switching, Silicon
Switching, Optimum Switching, Distributed Switching, and NetFlow Switching). Cisco Express
Forwarding (CEF) technology, described in Chapter 4, is the latest advance in Cisco IOS
switching capabilities for IP.
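The per-packet versus per-destination behavior described above is controlled per interface by the route cache. The interface names below are illustrative.

```
! Hypothetical sketch: selecting the switching mode per interface
interface Serial0/0
 ! Disable fast switching: the slow (<56-kbps) link is
 ! process-switched, giving per-packet load balancing
 no ip route-cache
!
interface Serial0/1
 ! Fast switching (the default on most interfaces): traffic is
 ! load-balanced per destination, appropriate above 56 kbps
 ip route-cache
```

On CEF-enabled platforms, per-packet or per-destination sharing is instead selected with CEF load-sharing options rather than by disabling the route cache.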
Shadow PVC
With shadow PVCs, as long as the maximum load on the shadow PVC does not exceed a certain
rate (such as one-fourth of the primary speed) while the primary PVC is available, the SP provides
a secondary PVC without any additional charge. If the traffic limit on the shadow PVC is exceeded
while the primary PVC is up, the SP charges for the excess load on the shadow PVC.
Figure 5-18 illustrates redundant connections between remote sites and the Enterprise Edge using
the shadow PVCs offered by the SP. Because of the potential for additional costs, the routers must
avoid sending any unnecessary data (except, for example, routing traffic) over the shadow PVC.
Figure 5-18 Shadow PVC
(The figure shows primary PVCs with 512-kbps CIRs between the Enterprise Edge central site and branch offices A and B, with SP-provided shadow PVCs alongside them, such as a 256-kbps CIR shadow PVC for Office A.)
The Internet as a WAN Backup Technology
This section describes the Internet as an alternative option for a failed WAN connection. This type
of connection is considered best-effort and does not guarantee any bandwidth. Common methods
for connecting noncontiguous private networks over a public IP network include the following:
■ IP routing without constraints
■ GRE tunnels
■ IPsec tunnels
The following sections describe these methods.
IP Routing Without Constraints
When relying on the Internet to provide a backup for branch offices, a company must fully
cooperate with the ISP and announce its networks. The backup network—the Internet—therefore
becomes aware of the company’s data, because it is sent unencrypted.
Layer 3 Tunneling with GRE and IPsec
Layer 3 tunneling uses a Layer 3 protocol to transport over another Layer 3 network. Typically,
Layer 3 tunneling is used either to connect two noncontiguous parts of a non-IP network over an
IP network or to connect two IP networks over a backbone IP network, possibly hiding the IP
addressing details of the two networks from the backbone IP network. Following are the two Layer
3 tunneling methods for connecting noncontiguous private networks over a public IP network:
■ GRE: A protocol developed by Cisco that encapsulates a wide variety of packet types inside
IP tunnels. GRE is designed for generic tunneling of protocols. In the Cisco IOS, GRE tunnels
IP over IP, which can be useful when building a small-scale IP VPN network that does not
require substantial security.
GRE enables simple and flexible deployment of basic IP VPNs. Deployment is easy; however,
tunnel provisioning is not very scalable in a full-mesh network because every point-to-point
association must be defined separately. The packet payload is not protected against sniffing
and unauthorized changes (no encryption is used), and no sender authentication occurs.
Using GRE tunnels as a mechanism for backup links has several drawbacks, including
administrative overhead, scaling to large numbers of tunnels, and processing overhead of the
GRE encapsulation.
■ IPsec: IPsec is both a tunnel encapsulation protocol and a security protocol. IPsec provides
security for the transmission of sensitive information over unprotected networks (such as the
Internet) by encrypting the tunnel’s data. IPsec acts as the network layer in tunneling or
transport mode and protects and authenticates IP packets between participating IPsec devices.
Following are some features of IPsec:
— Data confidentiality: An IPsec sender can encrypt packets before transmitting them
across a network.
— Data integrity: An IPsec receiver can authenticate packets sent by an IPsec sender
to ensure that the data has not been altered during transmission.
— Data origin authentication: An IPsec receiver can authenticate the source of the
sent IPsec packets. This service depends on the data integrity service.
— Anti-replay: An IPsec receiver can detect and reject replay by rejecting old or
duplicate packets.
— Easy deployment: IPsec can be deployed with no change to the intermediate
systems (the ISP backbone) and no change to existing applications (it is transparent
to applications).
— Internet Key Exchange (IKE): IPsec uses IKE for automated key management.
— Public Key Infrastructure (PKI): IPsec is interoperable with PKI.
IPsec can be combined with GRE tunnels to provide security in GRE tunnels; for example, the
GRE payload (the IP packet) would be encrypted.
NOTE Routing protocols cannot be run over IPsec tunnels because there is no standard for
IPsec to encrypt the broadcast or multicast packets used by IP routing protocols. Instead of using
IPsec tunnels with routing protocols, use GRE tunnels with IPsec as the security protocol. GRE
tunnels encapsulate the original IP packet—whether it is unicast, multicast, or broadcast—
within a unicast packet, destined for the other end of the GRE tunnel.
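A GRE tunnel protected by IPsec, as the NOTE recommends, might be sketched as follows. All addresses, names, and the pre-shared key are placeholders; on current IOS releases this uses an IPsec profile applied with tunnel protection, an alternative to older crypto-map configurations.

```
! Hypothetical sketch: GRE tunnel with IPsec protection
crypto isakmp policy 10
 authentication pre-share
crypto isakmp key MYSECRET address 203.0.113.2
!
crypto ipsec transform-set T1 esp-3des esp-sha-hmac
crypto ipsec profile GRE-PROT
 set transform-set T1
!
interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 tunnel source 203.0.113.1
 tunnel destination 203.0.113.2
 ! Encrypt the GRE payload, including the routing-protocol
 ! multicasts that GRE has already wrapped in unicast packets
 tunnel protection ipsec profile GRE-PROT
```

A routing protocol such as EIGRP or OSPF can then run over the Tunnel0 interface as if it were a point-to-point link.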
IPsec and IKE
Because it is standards-based, IPsec allows Cisco devices to interoperate with other non-Cisco
IPsec-compliant networking devices, including PCs and servers. IPsec also allows the use of
digital certificates using the IKE protocol and certification authorities (CA). A digital certificate
contains information to identify a user or device, such as the name, serial number, company, or IP
address. It also contains a copy of the device’s public key. The CA, which is a third party that the
receiver explicitly trusts to validate identities and create digital certificates, signs the certificate.
When using digital certificates, each device is enrolled with a CA. When two devices want to
communicate, they exchange certificates and digitally sign data to authenticate each other. Manual
exchange and verification of keys are not required.
When a new device is added to the network, it must simply enroll with a CA; none of the other
devices need modification. When the new device attempts an IPsec connection, certificates are
automatically exchanged, and the device can be authenticated.
Figure 5-19 illustrates two noncontiguous networks connected over a point-to-point logical link
with a backup implemented over an IP network using a GRE IP tunnel. Such tunnels are
configured between a source (ingress) router and a destination (egress) router and are visible as
interfaces on each router.
Figure 5-19 Backup GRE Tunnel over a Public IP Network
(The figure shows a branch office connected to the central office over an SP's Frame Relay network, with a backup GRE tunnel running through an ISP.)
Data to be forwarded across the tunnel is already formatted in a packet, encapsulated in the
standard IP packet header. This packet is further encapsulated with a new GRE header and placed
into the tunnel with the destination IP address set to the tunnel endpoint, the new next hop. When
the GRE packet reaches the tunnel endpoint, the GRE header is stripped away and the packet
continues to be forwarded to the destination with the original IP packet header.
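A backup tunnel of the kind shown in Figure 5-19 might be configured along these lines; the tunnel addressing, Internet-facing endpoints, and static-route distance are hypothetical.

```
! Hypothetical sketch: backup GRE tunnel over the Internet
interface Tunnel0
 ip address 172.16.1.1 255.255.255.252
 ! Tunnel endpoints are the routers' Internet-facing addresses
 tunnel source 198.51.100.1
 tunnel destination 198.51.100.2
!
! Floating static route steers branch traffic into the tunnel
! only if the primary Frame Relay route is lost (distance 240 is
! higher than any dynamically learned route to this network)
ip route 10.2.0.0 255.255.0.0 Tunnel0 240
```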
Enterprise Edge WAN and MAN Architecture
Recall from Chapter 3, “Structuring and Modularizing the Network,” that the Cisco Service-Oriented Network Architecture (SONA) Enterprise Edge and the WAN and MAN modules are
represented as the Enterprise Edge functional area of the Cisco Enterprise Architectures. This
section describes the Enterprise Edge WAN and MAN architectures and technologies.
Enterprise Edge WAN and MAN Considerations
When selecting Enterprise Edge technologies, consider the following factors:
■ Support for network growth: Enterprises that anticipate significant growth should choose a
technology that allows the network to grow with their business. WAN technologies with high
support for network growth make it possible to add new branches or remote offices with
minimal configuration at existing sites, thus minimizing the costs and IT staff requirements
for such changes. WAN technologies with lower support for network growth require
significantly more time, effort, and cost to expand the network.
■ Appropriate availability: Businesses heavily affected by even the smallest disruption in
network communications should consider high availability an important characteristic when
choosing a connectivity technology. Highly available technologies provide inherent
redundancy where no single point of failure exists in the network. Lower-availability
technologies can still dynamically recover from a network disruption in a short time period,
but this minor disruption might be too costly for some businesses. Technologies that do not
inherently provide high availability can be made more available through redundancy in
design, by using products with redundant characteristics such as multiple WAN connections,
and by using backup power supplies.
■ Operational expenses: Some WAN technologies result in higher costs than others. A private-line technology such as Frame Relay or ATM, for example, typically results in higher carrier
fees than a technology such as an IPsec-based IP VPN, which takes advantage of the public
Internet to help reduce costs. It is important to note, however, that migrating to a particular
technology for the sole purpose of reducing carrier fees, without considering network
performance and QoS, can limit support for some advanced technologies such as voice and
video.
■ Operational complexity: Cisco MAN and WAN technologies have varying levels of inherent
technical complexity, so the level of technical expertise required within the enterprise also
varies. In most cases, businesses can upgrade their MAN or WAN and take advantage of the
expertise of the existing IT staff, requiring minimal training. When an enterprise wants to
maintain greater control over its network by taking on responsibilities usually borne by an SP,
extensive IT training could be required to successfully deploy and manage a particular WAN
technology.
■ Voice and video support: Most Cisco MAN and WAN technologies support QoS, which
helps enable advanced applications such as voice and video over the network. In cases where
a WAN technology uses an SP with a Cisco QoS-certified multiservice IP VPN, an adequate
level of QoS is assured to support voice and video traffic. In cases where the public Internet
is used as the WAN connection, however, QoS cannot always be guaranteed, and a high-bandwidth broadband connection might be required for small offices, teleworkers, and remote
contact center agents using voice and video communications.
■ Effort and equipment cost to migrate from private connectivity: When an enterprise is
migrating from private connectivity to another technology, it is important to evaluate the
short- and long-term costs and benefits of this migration. In many cases, this is accomplished
with minimal investment in equipment, time, and IT staffing. In some instances, however, this
migration requires a significant short-term investment, not only in new equipment, but also in
IT training. Such an investment might also provide long-term increased cost savings, lower
operational expenditures, and increased productivity.
■ Network segmentation support: Network segmentation means supporting a single network
that is logically segmented. One advantage of network segmentation is that it reduces
expenditures associated with equipment and maintenance, network administration, and
network carrier charges as compared to separate physical networks. Another advantage is
increased security; segmentation can help isolate departments or limit partners’ access to the
corporate network.
Cisco Enterprise MAN and WAN Architecture Technologies
The Cisco Enterprise MAN and WAN architecture employs a number of MAN and WAN
technologies engineered and optimized to interoperate as a contiguous system, providing the
integrated QoS, network security, reliability, and manageability required to support a variety of
advanced business applications and services. These technologies include a number of secure
alternatives to traditional private WAN connectivity and help increase network scalability and
reduce monthly carrier fees. The Cisco Enterprise MAN and WAN architecture includes the
following technologies, as summarized in Table 5-4:
■ Private WAN: Private connectivity takes advantage of existing Frame Relay, ATM, or other
connections. To provide an additional level of security when connecting sites, strong
encryption (using Digital Encryption Standard [DES], Triple DES [3DES], and Advanced
Encryption Standard [AES]) can be added. A private WAN is ideally suited for an enterprise
with moderate growth expectations, where relatively few new branches or remote offices will
be deployed over the coming years. Businesses that require secure, dedicated, and reliable
connectivity for compliance with information privacy standards, and that also require support
for advanced applications such as voice and video, benefit from encrypted private
connectivity. However, this technology can result in relatively high recurring monthly carrier
fees and is not the preferred technology for extending connectivity to teleworkers and remote
call agents. An enterprise might choose encrypted private connectivity to network its larger
branch offices, but opt for other technologies, such as a VPN, to connect remote users and
smaller sites.
■ ISP service (site-to-site and remote-access IPsec VPN): These technologies take advantage
of the ubiquity of public and private IP networks. The use of strong encryption standards
(DES, 3DES, and AES) makes this WAN option more secure than traditional private
connectivity and makes it compliant with the many new information security regulations
imposed on government and industry groups (such as healthcare and finance). When
implemented over the public Internet, IPsec VPNs are best suited for businesses that require
basic data connectivity. However, if support for delay-sensitive, advanced applications such
as voice and video is required, an IPsec VPN should be implemented over an SP’s private
network where an adequate level of QoS is assured to support voice and video traffic.
Relatively low monthly carrier fees make this technology appropriate for businesses seeking
to connect a high number of teleworkers, remote contact center agents, or small remote offices
over a geographically dispersed area.
■ SP MPLS and IP VPN: A network-based IP VPN is similar in many ways to private connectivity, but with added flexibility, scalability, and reach. The any-to-any nature of an MPLS-enabled IP VPN (any branch can be networked to any branch), combined with its comprehensive QoS for voice and video traffic, suits the needs of many enterprises, especially those with high growth expectations, where many new branches and remote offices will be added over the next few years. The secure, reliable connectivity and relatively lower carrier fees that are inherent in this technology make a network-based IP VPN a good choice for businesses looking to use a managed service solution to connect branches, remote offices, teleworkers, and remote call agents.
■ Self-deployed MPLS: Self-deployed MPLS is a network segmentation technique that allows enterprises to logically segment the network. Self-deployed MPLS is typically reserved for very large enterprises or an SP willing to make a significant investment in network equipment and training, and for those that have an IT staff that is comfortable with a high degree of technical complexity.
Table 5-4 Cisco Enterprise WAN and MAN Architecture Comparison
(Column order in each row: Private WAN; ISP Service (Site-to-Site and Remote-Access IPsec VPN); SP MPLS and IP VPN; Self-Deployed MPLS)

Secure transport: IPsec (optional); IPsec (mandatory); IPsec (mandatory); IPsec (mandatory)
High availability: Excellent; Good; Excellent; Excellent
Multicast: Good; Good; Good; Excellent
Voice and video support: Excellent; Low; Excellent; Excellent
Scalable network growth: Moderate; Good; Excellent; Excellent
Easily shared WAN links: Moderate; Moderate; Moderate; Excellent
Operational costs: High; Low; Moderate (depends on transport); Moderate to high
Network control: High; Moderate; Moderate; High
Effort to migrate from private WAN: Low; Moderate; Moderate; High
Enterprises can use a combination of these technologies to support their remote connectivity
requirements. Figure 5-20 shows a sample implementation of a combination of three technologies
in a healthcare environment.
Figure 5-20 Sample Cisco WAN Architectures in a Healthcare Environment
(The figure shows hospitals interconnected through encrypted private connectivity and SP-managed MPLS connections to a data center, with remote clinics and remote users connecting over IPsec VPN connections through the Internet.)
Selecting Enterprise Edge Components
After identifying the remote connectivity requirements and architecture, you are ready to select
the individual WAN components.
Hardware Selection
When selecting hardware, use the vendor documentation to evaluate the WAN hardware
components. The selection process typically considers the function and features of the particular
devices, including their port densities, packet throughput, expandability capabilities, and
readiness to provide redundant connections.
Software Selection
The next step is to select the appropriate software features; when using Cisco equipment, the
software is the Cisco IOS. As illustrated in Figure 5-21, the Cisco IOS Software has been
optimized for different markets, network roles, and platforms. Cisco IOS Software meets the
requirements of various markets (enterprise, service provider, and commercial) and places in the
network (access, core and distribution, and edge).
KEY POINT: Suited for access layer devices, Cisco IOS Software T releases support advanced technology business solutions. Suited for the enterprise core and service provider edge, Cisco IOS Software S releases support voice transport, video, multicast, MPLS VPN, and advanced technologies, such as Layer 2 and Layer 3 VPN and integrated services architecture. Suited for large-scale networks, Cisco IOS Software XR releases provide high availability with in-service software upgrades.
Cisco IOS software product lines share a common base of technologies. Most of the features
available in the T releases for a given technology are also available in the S and XR releases.
Cisco IOS Software Packaging
Cisco is migrating to Cisco IOS Packaging to simplify the image-selection process by
consolidating the total number of packages and using consistent package names across all
hardware products. Figure 5-22 illustrates the various packages available with Cisco IOS
packaging.
Figure 5-21 Cisco IOS Software in the Network
(The figure maps Cisco IOS Software T releases to enterprise access, providing IP services and ease of deployment: broadband access, mobility and wireless, data center, security, and IP communications. Cisco IOS Software S releases map to the enterprise core and service provider edge, providing IP services and infrastructure: high-end enterprise core, service provider edge, Virtual Private Networks (MPLS, Layer 2, and Layer 3), and video and content multicast. Cisco IOS Software XR releases map to the service provider core, providing scale and availability: large-scale networks, high availability, and in-service software upgrade.)

Figure 5-22 Cisco IOS Packaging
(The figure shows the packages and their feature inheritance, from the entry-level IP Base image up through IP Voice (VoIP, VoFR, and IP telephony); Advanced Security (Cisco IOS Firewall, VPN, IPsec, 3DES, IDS, and SSH); Enterprise Base (enterprise Layer 3 routed protocols and IBM support); SP Services (MPLS, NetFlow, SSH, ATM, and VoATM); Advanced IP Services (IPv6, advanced security, and SP services); Enterprise Services (Enterprise Base, full IBM support, and SP services); and Advanced Enterprise Services (the full Cisco IOS Software feature set).)
Four packages have been designed to satisfy the requirements in base service categories; they are
as follows:
■ IP Base: Supports IP data
■ IP Voice: Supports converged voice and data
■ Advanced Security: Provides security and VPN
■ Enterprise Base: Provides enterprise Layer 3 protocols and IBM support
NOTE The features of the lower-tier packages are included in the higher-tier packages.
Three additional premium packages offer new Cisco IOS Software feature combinations that
address more complex network requirements:
■ SP Services: Adds SP features, including MPLS, ATM, Secure Shell (SSH), and NetFlow, to the IP Voice package
■ Advanced IP Services: Adds advanced SP services to the Advanced Security package
■ Enterprise Services: Adds advanced SP services to the Enterprise Base package
Advanced Enterprise Services, which integrates support for all routing protocols with voice,
security, and VPN capabilities, includes all the features of the other packages.
NOTE Cisco IOS Packaging is available for Cisco IOS Release 12.3 on some Cisco Integrated
Services Routers (ISR). Most Cisco access, distribution or aggregation, and core routers, and
other hardware that runs Cisco IOS software, will support Cisco IOS Packaging in the future.
After a feature is introduced, it is also included in the more comprehensive packages. Cisco calls
this the feature inheritance principle of Cisco IOS Packaging; it provides clear migration,
clarifying the feature content of the various packages and how they relate to one another.
Cisco IOS Packaging Technology Segmentation
Table 5-5 illustrates some of the technologies supported in the various Cisco IOS packages.
Table 5-5 Cisco IOS Packaging Technology Segmentation

IP Base: Data connectivity
IP Voice: Data connectivity; VoIP and VoFR
Advanced Security: Data connectivity; Firewall, IDS, VPN
Enterprise Base: Data connectivity; AppleTalk, IPX, IBM protocols
SP Services: Data connectivity; VoIP and VoFR; ATM, VoATM, MPLS
Advanced IP Services: Data connectivity; VoIP and VoFR; ATM, VoATM, MPLS; Firewall, IDS, VPN
Enterprise Services: Data connectivity; VoIP and VoFR; ATM, VoATM, MPLS; AppleTalk, IPX, IBM protocols
Advanced Enterprise Services: Data connectivity; VoIP and VoFR; ATM, VoATM, MPLS; AppleTalk, IPX, IBM protocols; Firewall, IDS, VPN

VoFR = Voice over Frame Relay; VoATM = Voice over ATM; IPX = Internetwork Packet Exchange; IDS = Intrusion Detection System
KEY POINT: Use the Cisco Feature Navigator at http://www.cisco.com/go/fn/ to quickly find the right Cisco IOS and Catalyst operating system software release for the features that you want to run on your network.
Comparing the Functions of Cisco Router Platforms and Software Families
Table 5-6 compares the functions of the Cisco router platforms and the software families that
support them.
NOTE The specific router platforms and software releases available will change over time;
refer to http://www.cisco.com/ for the latest information.
Chapter 5: Designing Remote Connectivity
Table 5-6   Comparing Cisco Router Platforms and Software Features

Hardware             Software                    Function
800, 1800, 2800,     Cisco IOS T Releases        Supports access routing platforms, providing fast,
3800, 7200           12.3, 12.4, 12.3T, 12.4T    scalable delivery of mission-critical enterprise
                                                 applications
7200, 7301, 7304,    Cisco IOS S Release         Delivers midrange broadband and leased-line
7500, 10000          12.2SB                      aggregation for Enterprise and SP Edge networks
7600                 Cisco IOS S Release         Delivers high-end Ethernet LAN switching for
                     12.2SR                      Enterprise access, distribution, core, and data center
                                                 deployments, and high-end Metro Ethernet for the
                                                 SP Edge
12000, CRS-1         Cisco IOS XR                Provides massive scale, continuous system
                                                 availability, and service flexibility for SP core and
                                                 edge (takes advantage of the massively distributed
                                                 processing capabilities of the Cisco CRS-1 and the
                                                 Cisco 12000)
Comparing the Functions of Multilayer Switch Platforms and Software Families
Table 5-7 compares the functions of the Cisco multilayer switch platforms and the software
families that support them.
NOTE The specific multilayer switch platforms and software releases available will change
over time; refer to http://www.cisco.com/ for the latest information.
Table 5-7   Comparing Cisco Multilayer Switch Platforms and Software Features

Hardware        Software               Function
2960, 3560,     Cisco IOS S Release    Provides low-end to midrange Ethernet LAN switching for
3750            12.2SE                 Enterprise access and distribution deployments
4500, 4900      Cisco IOS S Release    Provides midrange Ethernet LAN switching for Enterprise
                12.2SG                 access and distribution deployments in the campus, and
                                       supports Metro Ethernet
6500            Cisco IOS S Release    Delivers high-end Ethernet LAN switching for Enterprise
                12.2SX                 access, distribution, core, and data center deployments, and
                                       high-end Metro Ethernet for the SP Edge
Enterprise Branch and Teleworker Design
This section describes design considerations for the Enterprise Branch and Enterprise Teleworker
architectures.
Enterprise Branch Architecture
Recall that the Cisco Enterprise Architecture, based on the Cisco SONA, includes branch modules
that focus on the remote places in the network. Enterprises are seeking opportunities to protect,
optimize, and grow their businesses by increasing security; consolidating voice, video, and data
onto a single IP network; and investing in applications that will improve productivity and
operating efficiencies. These services provide enterprises with new opportunities to reduce costs,
improve productivity, and safeguard information assets in all their locations.
The Cisco Enterprise Branch architecture takes into account the services that enterprises want to
deploy at their endpoints, no matter how far away the endpoints are or how they are connected.
Figure 5-23 illustrates how branch services relate to the other parts of the Cisco Enterprise
architectures.
NOTE Teleworker architecture is covered in the later “Enterprise Teleworker (Branch of One)
Design” section.
Figure 5-23   Enterprise Branch Services
The Cisco Enterprise Branch Architecture, illustrated in Figure 5-24, is an integrated, flexible, and
secure framework for extending headquarters applications in real time to remote sites. The Cisco
Enterprise Branch Architecture applies the SONA framework to the smaller scale of a branch
location.
Figure 5-24   Enterprise Branch Architecture
Common network components that might be implemented in the Enterprise Branch include the
following:

■ Routers providing WAN edge connectivity

■ Switches providing the LAN infrastructure

■ Security appliances defending the branch devices

■ Wireless access points for device mobility

■ Call-processing and video equipment for IP telephony and video support

■ End-user devices, including IP phones and computers
Enterprise Branch Design
Requirements differ with the size of the branch offices. Consider the following questions when
designing the Enterprise Branch:

■ How many branch locations need to be supported?

■ How many existing devices (including end users, hosts, and network infrastructure) are to be supported at each location? The number of devices supported is limited by the physical number of ports available.

■ How much growth is expected at each location, and therefore what level of scalability is required?

■ What are the high availability requirements at each location?

■ What level of security is required in the design? Should security be managed locally or through the central location?

■ Are there any requirements for local server farms or networks between the internal network and the external network (for example, in a demilitarized zone [DMZ])?

■ Should network management be supported locally or via the central location?

■ What wireless services are needed, and how will they be used by the clients? What effect will the network and the environment have on the wireless devices?

■ What is the approximate budget available?
KEY POINT  Branch offices can be categorized based on the number of users:

■ Small office: Up to 50 users, using a single-tier design

■ Medium office: Between 50 and 100 users, using a dual-tier design

■ Large office: Between 100 and 200 users, using a three-tier design
Requirements for the number of devices, high availability, scalability, and migration to advanced
services also influence the model adopted. The design models for each of these types of branches
are described in the following sections.
Each of the designs in the following sections suggests using an ISR (such as the 2800 series
routers) at the WAN edge, which provides various voice, security, and data services that are
integrated with the LAN infrastructure. Depending on the specific ISR edge router chosen, the
interfaces and modules available include the following:

■ Integrated LAN interfaces (10/100/1000 Mbps)

■ High-speed WAN interface card (HWIC) slots

■ Network modules

■ Embedded security
Alternatively, Cisco multiservice routers (such as the 2600 series routers) can be used.
Small Branch Office Design
Small branch office designs combine an ISR access router with Layer 2 switching and end-user
devices, phones, printers, and so forth; a typical design is illustrated in Figure 5-25.
Figure 5-25   Typical Small Branch Office Design
ISR and Switch Connections
The ISR connects with Layer 2 switch ports in one of the following three ways:

■ Integrated switching within the ISR (or multiservice router): This option has a lower port density that supports from 16 to 48 client devices on either a Cisco EtherSwitch network module or a Cisco EtherSwitch service module. It provides a one-box solution that offers ease of management. Depending on the module, the integrated switch ports might provide power to end devices using Power over Ethernet (PoE).

■ Trunked network interface on the ISR to external access switches: In this case, there is no link redundancy between the access switches and the ISR. The access switches might provide power to end devices using PoE.

■ Logical EtherChannel interface between the ISR and access switches: This approach uses an EtherSwitch module in the ISR configured as an EtherChannel. Link redundancy is provided to the access layer switches over the EtherChannel. The access switches might provide power to end devices using PoE.
If redundant access layer links and higher-bandwidth uplinks are required, only the third option,
with higher-performance devices, can be used. The choice of the edge router also depends on the
voice and VPN support needed.
The access switch provides Layer 2 services, and the Cisco ISR provides Layer 3 services such as
Dynamic Host Configuration Protocol (DHCP), firewall, and Network Address Translation.
A Cisco 2811 or larger ISR is suggested. Both the Cisco 2821 and 2851 ISRs support two
integrated 10/100/1000 routed (Layer 3) interfaces and have one slot for a network module. The
Cisco 2821 ISR supports the 16-port EtherSwitch network module and the 24-port EtherSwitch
service module. The Cisco 2851, 3825, and 3845 ISRs can support the high-density 48-port
EtherSwitch service module.
Typical access switches include the Cisco Catalyst 2960, 3560, and 3750 Series switches.
To keep manageability simple, the topology has no loops; however, spanning tree must be enabled
and configured to protect the network from any accidental loops. As is the case in the Enterprise
Campus, the recommended spanning-tree protocol is Rapid Per-VLAN Spanning Tree Plus for all
Layer 2 deployments in a branch office environment.
The ISR is the default gateway for each VLAN configured in the topology, and all Layer 3
configurations are done on the ISR. The access switches must be configured with an IP address for
management purposes.
WAN Services
WAN services are typically provided by a T1 primary link. The Internet is used as a WAN backup,
accessed by an ADSL connection.
Network Services
The EIGRP routing protocol is used. High availability across the WAN is provided by a floating
static route across the ADSL Internet connection.
QoS mechanisms used include traffic shaping and policing, and the implementation of a scavenger
class of traffic (applied on both the switch and the ISR).
QoS Classes
As mentioned in Chapter 4, end-to-end QoS is provided for IP version 4 using Layer 3 QoS
marking in the 8-bit Type of Service (ToS) field in the packet header. Originally, only the first 3
bits were used; these bits are called the IP Precedence bits. Because 3 bits can specify only eight
marking values, IP precedence does not allow a granular classification of traffic. Thus, more bits
are now used: The first 6 bits in the ToS field are now known as the Differentiated Services Code Point (DSCP) bits.
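As an illustrative sketch (not part of the original text), the relationship between the ToS byte, the IP precedence bits, and the DSCP bits can be shown with simple bit arithmetic; the Expedited Forwarding (EF) value used in the example is a standard marking, chosen here only for demonstration:

```python
def ip_precedence(tos: int) -> int:
    """IP precedence is the first (most significant) 3 bits of the ToS byte."""
    return tos >> 5

def dscp(tos: int) -> int:
    """DSCP is the first 6 bits of the ToS byte."""
    return tos >> 2

# A ToS byte of 0xB8 (binary 10111000) carries the EF marking.
tos_byte = 0xB8
print(ip_precedence(tos_byte))  # 5 -- only 8 values are possible with 3 bits
print(dscp(tos_byte))           # 46 (EF) -- 6 bits allow 64 values
```

The 3-bit shift makes the granularity limitation concrete: precedence can express only 8 classes, whereas DSCP can express 64.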
Cisco has created a QoS Baseline that provides recommendations to ensure that its products, and
the designs and deployments that use them, are consistent in terms of QoS. Although the QoS
Baseline document itself is internal to Cisco, it includes an 11-class classification scheme that can
be used for enterprises. The classes of traffic in the QoS Baseline are defined as follows:
■ IP Routing class: This class is for IP routing protocol traffic such as EIGRP, OSPF, and so forth.

■ Voice class: This class is for VoIP bearer traffic (the conversation traffic), not for the associated signaling traffic, which would go in the Call Signaling class.

■ Interactive Video class: This class is for IP videoconferencing traffic.

■ Streaming Video class: This class is for either unicast or multicast unidirectional video.

■ Mission-Critical Data class: This class is intended for a subset of the Transactional Data applications that are most significant to the business. The applications in this class are different for every organization.

■ Call Signaling class: This class is intended for voice and video-signaling traffic.

■ Transactional Data class: This class is intended for user-interactive applications such as database access, transactions, and interactive messaging.

■ Network Management class: This class is intended for traffic from network management protocols, such as Simple Network Management Protocol.

■ Bulk Data class: This class is intended for background, noninteractive traffic, such as large file transfers, content distribution, database synchronization, backup operations, and e-mail.

■ Scavenger class: This class is based on an Internet 2 draft that defines a “less-than-best-effort” service. If a link becomes congested, this class is dropped the most aggressively. Any nonbusiness-related traffic (for example, downloading music in most organizations) could be put into this class.

■ Best Effort class: This class is the default class. Unless an application has been assigned to another class, it remains in this default class. Most enterprises have hundreds, if not thousands, of applications on their networks; the majority of these applications remain in the Best Effort class.
The QoS Baseline does not mandate that these 11 classes be used; rather, this classification scheme
is an example of well-designed traffic classes. Enterprises can have fewer classes, depending on
their specific requirements, and can evolve to using more classes as they grow.
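As a purely hypothetical sketch, a classification policy along these lines can be represented as a lookup table that defaults to Best Effort. The application names below are taken only from the example applications mentioned in the class descriptions above; any real mapping is organization-specific:

```python
# Hypothetical, illustrative mapping of applications to QoS Baseline classes.
# The entries come from the examples in the class descriptions; this is not
# a Cisco specification and is deliberately incomplete.
BASELINE_CLASS = {
    "eigrp": "IP Routing",
    "ospf": "IP Routing",
    "snmp": "Network Management",
    "email": "Bulk Data",
    "backup": "Bulk Data",
    "music-download": "Scavenger",
}

def classify(application: str) -> str:
    """Unclassified applications fall into the default Best Effort class."""
    return BASELINE_CLASS.get(application.lower(), "Best Effort")

print(classify("EIGRP"))  # IP Routing
print(classify("CRM"))    # Best Effort (no explicit assignment)
```

The default branch mirrors the Best Effort rule above: the majority of applications are never explicitly assigned and therefore stay in the default class.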
Medium Branch Office Design
A typical medium branch office topology, illustrated in Figure 5-26, is similar to the small office
topology, except that the WAN edge devices are larger, typically two Cisco 2821 or Cisco 2851
ISRs, and the access switches for LAN connectivity are external stackable switches.
Figure 5-26   Typical Medium Branch Office Design
ISR and Switch Connections
To scale up to 100 users, the following options are available:

■ Use a higher port density external access switch

■ Use an ISR module that supports switched access ports; use EtherChannel to provide a redundant connection to the access switches
This design uses the integrated 10/100/1000 interfaces on the ISRs as Layer 3 trunks, providing
the flexibility to use various access switches. The stackable Cisco Catalyst 3750 Series switch with
an IP Base image or an IP Services image can be used as the access switch, supporting 24 or 48
users per switch. The IP Base image feature set includes advanced QoS, rate limiting, ACLs, and
basic static and RIP routing capability. The IP Services image provides a richer set of enterprise-class
features, including advanced hardware-based IP unicast and multicast routing. An additional
Advanced IP Services license is also available (for example, this license is required for IPv6
routing).
With Cisco StackWise technology, a single 32-Gbps switching unit can be created, using up to
nine Cisco Catalyst 3750 Series switches. Cisco StackWise technology uses special stack-interconnect
cables and stacking software. The stack behaves as a single switching unit that is
managed by a master switch elected from one of the member switches. The master switch
automatically creates and updates all the switching and optional routing tables. The number of
PoE ports supported depends on the specific access switch selected.
WAN Services
WAN services are typically provided by a private WAN—for example, with dual Frame Relay
links.
Network Services
The EIGRP routing protocol is used. High availability across the WAN is provided by dual routers
running router redundancy protocols such as Hot Standby Router Protocol (HSRP), Virtual Router
Redundancy Protocol (VRRP), and Gateway Load-Balancing Protocol (GLBP) (as described in
Chapter 4). QoS mechanisms used include traffic shaping and policing, and the implementation of
a scavenger class of traffic (applied on both the switch and the ISR).
Large Branch Office Design
In a typical large branch office design, illustrated in Figure 5-27, dual ISRs are used for
redundancy at the WAN edge. Firewall functionality is provided by dual adaptive security
appliances (ASA), and dual multilayer switches (stackable or modular) are deployed at the
distribution layer.
Figure 5-27   Typical Large Branch Office Design
ISR and Switch Connections
In addition to supporting more users, a large office might need higher LAN switching capability
if it supports a server farm or DMZ. Some of these services require dedicated appliance devices
when higher throughput is needed. To meet these requirements, a distribution layer
is added to the topology by introducing a multilayer switch that provides the required LAN
switching capabilities, port density, and flexibility to support additional appliances.
Either a stackable switch (for example, a Cisco Catalyst 3750 Series switch) or a Cisco Catalyst
4500 Series switch could be used at the distribution layer. This LAN topology is highly available,
scalable, and manageable. High-availability requirements are met by the link redundancy and
device redundancy built into the design. For example, redundant links are used to provide high
availability between the distribution and edge layers.
The port density of the stacked switches allows a number of access switches to be connected
without compromising high availability. The distribution switches typically run the advanced IOS
images, which support more features, including various routing protocols and advanced features
such as policy-based routing.
If Cisco Catalyst 3560 or 3750 Series switches are used at the access layer, other Layer 2 security
features, such as DHCP snooping, Dynamic Address Resolution Protocol (ARP) Inspection
(DAI), and IP Source Guard, can be enabled, providing additional security measures.
The default gateways for all the VLANs at the access layer are on the distribution layer.
WAN Services
WAN services in this typical design are provided by an MPLS network with dual connections.
Network Services
The EIGRP routing protocol is used. High availability across the WAN is provided by dual routers
running router redundancy protocols (such as HSRP, VRRP, and GLBP), ASA failover functionality, and object tracking.
QoS mechanisms used include traffic shaping and policing, and the implementation of a scavenger
class of traffic (applied on both the switch and the ISR).
Enterprise Teleworker (Branch of One) Design
Organizations are constantly striving to reduce costs, improve employee productivity, and retain
valued employees. These goals can be furthered by allowing employees to work from home with
quality, function, performance, convenience, and security similar to that available in the office.
With a work environment in the residence, employees can optimally manage their work schedules,
allowing for higher productivity (less affected by office distractions) and greater job satisfaction
(flexibility in schedule). This transparent extension of the enterprise to employee homes is the
objective of the Cisco Enterprise Teleworker (or Branch of One) architecture.
Occasional remote users have much lighter application requirements than part-time and full-time
teleworkers. They can connect through a wireless hotspot or a guest network at a hotel and have
little control over network resiliency and availability.
In contrast, Enterprise teleworkers can be differentiated from other forms of work-at-home or
telecommuting scenarios in that the emphasis is on delivering seamless, managed accessibility to
the full range of applications and services critical to the operational effectiveness of enterprises,
as illustrated in Figure 5-28. The Cisco Enterprise Teleworker architecture is part of the overall
secure Cisco Enterprise architecture infrastructure. It gives companies the capability to integrate
and securely manage their remote workers within the corporate network while simultaneously
providing a high-quality end-user experience that supports a full range of enterprise applications
for the enterprise teleworker.
Figure 5-28   Comparison of Teleworking Options

                                       Occasional Remote Worker   Branch of One (Part-Time or
                                       (Occasional Users)         Full-Time and Day Extenders)
E-mail                                 Yes                        Yes
Web-based applications                 Yes                        Yes
Mission-critical applications          Best effort                Prioritized
Real-time collaboration                Best effort                Prioritized
Voice over IP                          Best effort                High quality
Video on demand, Cisco IP/TV           Unlikely                   High quality
Videoconferencing                      Unlikely                   High quality
Remote configuration and management    No                         Yes
Integrated security                    Basic                      Full
Resiliency and availability            No                         Yes
The enterprise teleworker typically connects to an ISP through a DSL or cable modem and might
use an analog dialup session to back up this connection. The enterprise teleworker solution is
implemented with a small ISR, such as the Cisco 871, 876, 877, or 878 ISR, with integrated switch
ports, connected behind a broadband modem, as shown in Figure 5-29. This solution uses a
transparent, always-on VPN tunnel back to the enterprise.
Figure 5-29   Teleworker (Branch of One) Architecture
Within this architecture, centralized management means that the enterprise applies security
policies, pushes configurations, and periodically tests the connection through the broadband cloud
and back to the corporate office to determine the latency, jitter, and packet loss experienced at any
time. This solution supports advanced applications such as voice and video as part of the full suite
of enterprise services for the end user. For example, a teleworker can access the central-office IP
telephone system from home with comparable voice quality and can thereby take advantage of the
higher-function IP telephony capabilities instead of using the PSTN.
An alternative solution is an unmanaged VPN approach in which the end user implements a
software VPN from a PC across a generic broadband router, access point, or hub appliance. This
solution typically cannot support the level of feature integration, QoS, and managed support
needed to reliably deliver voice, video, multimedia, and traditional data to the end user, but it
might be appropriate for occasional remote users with lighter application requirements.
Summary
In this chapter, you learned about remote connectivity network design with a focus on the
following topics:
■ Definition of a WAN and the types of WAN interconnections

■ Various WAN technologies, including TDM, ISDN, Frame Relay, ATM, MPLS, Metro Ethernet, DSL, cable, wireless, SONET/SDH, DWDM, and dark fiber

■ WAN pricing and contract considerations

■ WAN design methodology, including the application and technical requirement aspects of WAN design

■ WAN bandwidth optimization techniques

■ Use of various WAN technologies for remote access, VPNs, WAN backup, and connecting to the Internet as a backup WAN

■ Enterprise Edge WAN and MAN architectures and technologies

■ Selection of WAN components, including hardware and software

■ Enterprise Branch and Enterprise Teleworker design considerations
References
For additional information, refer to the following resources:
■ Cisco Systems, Inc., Product Documentation, http://www.cisco.com/univercd/home/home.htm

■ Cisco Systems, Inc., Solution Reference Network Design Guides home page, http://www.cisco.com/go/srnd/

■ Cisco Systems, Inc., Enterprise QoS Solution Reference Network Design Guide, http://www.cisco.com/application/pdf/en/us/guest/netsol/ns432/c649/ccmigration_09186a008049b062.pdf

■ Cisco Systems, Inc., Frame Relay technical overview, http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/frame.htm

■ Cisco Systems, Inc., MPLS and Tag Switching technical overview, http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/mpls_tsw.htm

■ Cisco Systems, Inc., Enterprise Architectures, http://www.cisco.com/en/US/netsol/ns517/networking_solutions_market_segment_solutions_home.html

■ Cisco Systems, Inc., Cisco product index for routers, http://www.cisco.com/en/US/products/hw/routers/index.html

■ Cisco Systems, Inc., Cisco product index for switches, http://www.cisco.com/en/US/products/hw/switches/index.html

■ Cisco Systems, Inc., Cisco Feature Navigator, http://www.cisco.com/go/fn

■ Cisco Systems, Inc., Cisco IOS Packaging: Introduction, http://www.cisco.com/en/US/products/sw/iosswrel/ps5460/index.html

■ Cisco Systems, Inc., Business Ready Branch Solutions for Enterprise and Small Offices—Reference Design Guide, http://www.cisco.com/application/pdf/en/us/guest/netsol/ns656/c649/cdccont_0900aecd80488134.pdf

■ Cisco Systems, Inc., LAN Baseline Architecture Branch Office Network Reference Design Guide, http://www.cisco.com/univercd/cc/td/doc/solution/designex.pdf

■ Cisco Systems, Inc., LAN Baseline Architecture Overview—Branch Office Network, http://www.cisco.com/univercd/cc/td/doc/solution/lanovext.pdf

■ Cisco Systems, Inc., Cisco Business Ready Teleworker Architecture, http://www.cisco.com/application/pdf/en/us/guest/netsol/ns430/c654/cdccont_0900aecd800df177.pdf
Case Study: ACMC Hospital Network WAN Design
This case study is a continuation of the ACMC Hospital case study introduced in Chapter 2,
“Applying a Methodology to Network Design.”
Case Study General Instructions
Use the scenarios, information, and parameters provided at each task of the ongoing case study. If
you encounter ambiguities, make reasonable assumptions and proceed. For all tasks, use the initial
customer scenario and build on the solutions provided thus far. You can use any and all
documentation, books, white papers, and so on.
In each step, you act as a network design consultant. Make creative proposals to accomplish the
customer’s business needs. Justify your ideas when they differ from the provided solutions. Use
any design strategies you feel are appropriate. The final goal of each case study is a paper solution.
Appendix A, “Answers to Review Questions and Case Studies,” provides a solution for each step
based on assumptions made. There is no claim that the provided solution is the best or only
solution. Your solution might be more appropriate for the assumptions you made. The provided
solution helps you understand the author’s reasoning and allows you to compare and contrast your
solution.
In this case study, you create a high-level design for the WAN portions of the ACMC Hospital
network.
Case Study Additional Information
Figure 5-30 shows the existing WAN links and the planned campus infrastructure.
Figure 5-30   Case Study ACMC Hospital WAN Links and Planned Campus Infrastructure
Business Factors
The ACMC Hospital CIO realizes that WAN performance to the remote clinics is poor and that
some new applications will require more bandwidth. These applications include programs that
allow doctors at the central site to access medical images, such as digital X-rays, stored locally at
the clinics. The CIO wants all the remote sites to have the same type of access.
The CIO wants to implement a long-term, cost-effective solution that allows high-bandwidth
application deployment on the network and that allows for growth for the next two to five years.
The CIO also wants to simplify planning, pricing, and deployment of future applications.
Technical Factors
There is no data about the bandwidth requirements of the new applications. Lab testing would
provide better data, but ACMC does not have the time or money for testing. The CIO knows that
because TCP adjusts to use the available bandwidth, backing off when congestion occurs, there is
no way to know how much bandwidth the present applications could ideally use unless extensive
lab testing is done.
You discover that your site contact initially supplied you with an out-of-date network diagram. The
hospital upgraded the 56 kbps links to 128 kbps a year ago and upgraded the WAN bandwidth at
the largest clinic to 256 kbps last month. Therefore, the following is the current state of the WAN
links:
■ The connection to the largest remote clinic now runs at 256 kbps.

■ The connections to two other remote clinics were upgraded from 56 kbps to 128 kbps.

■ The two remaining remote clinics have 56-kbps dialup connectivity.
The increased WAN bandwidth you recommend should last for two to five years.
For situations in which you cannot really determine how much WAN bandwidth is needed, one
way to proceed is to multiply current traffic levels by a value of 1.5 or 2 per year. However, if the
customer does not want to be concerned with needing even more bandwidth in the near future,
multiply by bigger numbers. If you expect unknown applications to be added to the network,
multiply by even bigger numbers. In this case study, assume that all clinics are to be upgraded to
at least T1 access speed. (Pricing structures in many areas might even favor a full T1 over
fractional T1 links.)
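The sizing rule of thumb above can be expressed as a quick calculation. The starting bandwidth and growth factors below are illustrative assumptions, not figures from the case study solution:

```python
def projected_bandwidth_kbps(current_kbps: float, growth_factor: float, years: int) -> float:
    """Multiply current traffic by a per-year growth factor (for example, 1.5 or 2)."""
    return current_kbps * growth_factor ** years

# Example: the largest clinic's 256-kbps link, projected over the planning horizon.
print(projected_bandwidth_kbps(256, 2, 3))    # 2048.0 kbps -- already beyond a T1 (1544 kbps)
print(projected_bandwidth_kbps(256, 1.5, 5))  # 1944.0 kbps -- near a full T1 even at the lower factor
```

Either growth factor lands at or above T1 rates within the two-to-five-year window, which is consistent with upgrading all clinics to at least T1 access.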
Case Study Questions
Complete the following steps:
Step 1  Develop a list of relevant information that should be provided in the ACMC WAN Request for Proposal (RFP).

Step 2  ACMC put out an RFP specifying that it requires at least T1 bandwidth at the remote clinics. The responses to the RFP, indicating the technologies currently available to ACMC, are shown in Table 5-8. Calculate the monthly cost of using each of the technologies shown in Table 5-8 by completing the Monthly Cost column in this table.
Table 5-8   ACMC RFP Results

Option   Technology                      Speed                 Price per Month             Monthly Cost
1        Leased line: T1 at clinics      T1 or T3              $400 for each T1,
         into T3 at central                                    $8000 for T3
2        Frame Relay: T1 access at       T1 or T3              $350 for T1 access,
         clinics, T3 access at central                         $7000 for T3 access
                                                               circuit, plus CIR in
                                                               5-Mbps increments
                                                               times $75, plus $5
                                                               per PVC
3        MPLS VPN: T1 access at          T1 or T3              $500 for T1 access,
         clinics, T3 access at central                         $8500 for T3 access
4        High-speed business cable       6 Mbps downstream,    $90
         service at clinics              768 kbps upstream
         T3 Internet at central site     T3                    $4000

Step 3  Which technology do you recommend that ACMC use? (Multilink PPP over multiple T1s and multilink Frame Relay over multiple T1s are also options.)

NOTE  To simplify this step, budgetary costs are not included for the routers. Make your
choice based on capabilities needed, with the understanding that there is an increasing cost for
increasing capabilities and options.
Step 4  ACMC mentions that its images might be 100 MB. Transferring a 100-MB image over a T1 connection takes more than 8 minutes (because 100 MB * 8 bits per byte / 1.544 Mbps = 518 seconds = 8.6 minutes). Does this information change your recommendation? Why or why not?
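The arithmetic in Step 4 generalizes to any file size and link speed; a minimal sketch (the function name is ours, not from the text):

```python
def transfer_time_seconds(size_mb: float, link_mbps: float) -> float:
    """Time to move size_mb megabytes over a link of link_mbps megabits per second."""
    return size_mb * 8 / link_mbps  # convert megabytes to megabits, then divide by rate

# A 100-MB image over a single T1 (1.544 Mbps):
print(round(transfer_time_seconds(100, 1.544)))  # → 518 seconds, about 8.6 minutes
```

Running the same calculation for two bonded T1s (multilink PPP) roughly halves the time, which is relevant to the recommendation in Step 3.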
Step 5  The CIO indicates that remote site availability is critical to avoid having servers in the remote clinics. What redundancy or backup WAN strategy do you recommend?

Step 6  Assume that the CIO has chosen to deploy multilink PPP over two T1s for simple, reliable service at each remote clinic, with the 6-Mbps cable service as backup. Select an appropriate Cisco router model to use at the central site and at each remote location. Select appropriate switching hardware for each site, remembering that the ISR routers can use integrated switches. Table 5-9 provides the number of switch ports needed at each of the remote clinics. Tables 5-10 and 5-11 provide a condensed version of the product and module information from http://www.cisco.com/.
Table 5-9  Remote Clinic Switch Port Requirements

Remote Clinic Site | Number of Switch Ports Needed
1 | 48
2, 3 | 24
4, 5 | 16
370
Chapter 5: Designing Remote Connectivity
Table 5-10  ISR Routers and Port Capabilities

Cisco ISR Model | Approx. Mbps of Layer 3 Fast Ethernet or CEF Switching with 64-Byte Packets | LAN Ports | WAN Ports
851 | 5.12 | 10/100 four-port switch | 10/100 Fast Ethernet
857 | 5.12 | 10/100 four-port switch | ADSL
871 | 12.8 | 10/100 four-port switch | 10/100 Fast Ethernet
876 | 12.8 | 10/100 four-port switch | ADSL over ISDN
877 | 12.8 | 10/100 four-port switch | ADSL
878 | 12.8 | 10/100 four-port switch | G.SHDSL
1801 | 35.84 | 10/100 eight-port switch | One Fast Ethernet, ADSL over POTS
1802 | 35.84 | 10/100 eight-port switch | One Fast Ethernet, ADSL over ISDN
1803 | 35.84 | 10/100 eight-port switch | One Fast Ethernet, G.SHDSL
1811 | 35.84 | 10/100 eight-port switch | Two Fast Ethernet
1812 | 35.84 | 10/100 eight-port switch | Two Fast Ethernet
1841 | 38.40 | Two Fast Ethernet; can add four-port switch with HWIC-4ESW | Can add two HWIC modules: ADSL WAN interface card (WIC), G.SHDSL WIC, cable WIC, WIC-1T (one T1), WIC-2T (two T1)
2801 | 46.08 | Two 10/100 Fast Ethernet | Four slots: two slots support HWIC-, WIC-, VIC-, or VWIC-type modules; one slot supports WIC-, VIC-, or VWIC-type modules; one slot supports VIC- or VWIC-type modules. Can add WIC modules listed for 1841, also HWIC-4T (four T1 HWIC)
2811 | 61.44 | Two 10/100 Fast Ethernet | Four slots: each slot can support HWIC-, WIC-, VIC-, or VWIC-type modules. Can add WIC modules listed for 1841, also HWIC-4T (four T1 HWIC). Plus, one slot supports NM- and NME-type modules; can use NM-1HSSI (T3)
2821 | 87.04 | Two 10/100 Fast Ethernet | Four slots: each slot can support HWIC-, WIC-, VIC-, or VWIC-type modules; can add WIC modules listed for 1841, also HWIC-4T (four T1 HWIC). Plus, one slot supports NM-, NME-, and NME-X-type modules; can use NM-1HSSI
2851 | 112.64 | Two 10/100 Fast Ethernet | Four slots: each slot can support HWIC-, WIC-, VIC-, or VWIC-type modules. Can add WIC modules listed for 1841, also HWIC-4T (four T1 HWIC). Plus, one slot supports NM-, NME-, NME-X-, NMD-, and NME-XD-type modules; can use NM-1HSSI
3825 | 179.20 | Two Gigabit Ethernet (10/100/1000) | Two NM/NME/NME-X modules or one NMD/NME-XD; four HWIC/WIC/VIC/VWIC slots. For relevant NMs and WICs/HWICs, see modules listed for 2851
3845 | 256 | Two Gigabit Ethernet (10/100/1000) | Four NM/NME/NME-X modules or two NMD/NME-XDs; four HWIC/WIC/VIC/VWIC slots. For relevant NMs and WICs/HWICs, see modules listed for 2851
Table 5-11  Switch Network Modules for Cisco 2800 and 3800 Series Integrated Services Routers

Module | Limitations | Ports | Powered Switch Ports | IEEE 802.3af PoE Support
NME-16ES-1G | 2811 and up only; any 3800 | 10/100: 16; 10/100/1000: 1; Small Form-Factor Pluggable (SFP): 0 | 0 | No
NME-16ES-1G-P | 2811 and up only; any 3800 | 10/100: 16; 10/100/1000: 1; SFP: 0 | 16 | Yes
NME-X-23ES-1G | 2821 and 2851 only; any 3800 | 10/100: 23; 10/100/1000: 1; SFP: 0 | 0 | No
NME-X-23ES-1G-P | 2821 and 2851 only; any 3800 | 10/100: 23; 10/100/1000: 1; SFP: 0 | 24 | Yes
NME-XD-24ES-1S-P | 2851 only; any 3800 | 10/100: 24; 10/100/1000: 0; SFP: 1 | 24 | Yes
NME-XD-48ES-2S-P | 2851 only; any 3800 | 10/100: 48; 10/100/1000: 0; SFP: 2 | 48 | Yes
Up to two of the four-port HWICs can be used for switch HWICs in the Cisco 1841 ISR.
Nine-port switch HWICs are also available for Cisco 2800 and 3800 Series ISRs; two of them can
be used per 2800 or 3800 router.
Step 7  What design changes would you suggest if the CIO decided that a second router should be used for the backup link at each site?
Review Questions
Answer the following questions, and then refer to Appendix A for the answers.
1. What is the definition of a WAN?
2. What are some typical WAN design objectives?
3. Why are fully meshed networks not always appropriate?
4. What comprises a T1 circuit?
5. Why is ISDN better than analog dialup for data connections?
6. What is an MPLS Forwarding Equivalence Class?
7. How many bits are in the MPLS label field?
8. True or false: Packets sent from Device A to Device B through an MPLS network always take the same path through the network.
9. What is the difference between ADSL and SDSL?
10. Define downstream and upstream.
11. Identify the following key ADSL devices shown in Figure 5-31:
    Layer 3 concentrator
    Layer 2 concentrator or DSLAM
    Splitter
    ADSL CPE

    Figure 5-31  ADSL Devices (figure: devices labeled A through D connect over a virtual circuit through an ATM network to the backbone)

12. What type of cable is used for each of the following?
    ADSL
    VDSL
    Cable
    LRE
13. Which of the following two statements do not describe the operation of cable networks?
    a. The CMTS enables the coax users to connect with either the PSTN or the Internet.
    b. The actual bandwidth for Internet service over a cable TV line is shared 2.5 Mbps on the download path to the subscriber, with about 27 Mbps of shared bandwidth for interactive responses in the other direction.
    c. All cable modems can receive from and send signals to the CMTS and other cable modems on the line.
    d. DOCSIS defines the interface between the cable modem and the CMTS.
14. For what purpose is bridged wireless used?
15. Indicate the frequency and maximum speeds of the following WLAN standards:
    802.11a
    802.11b
    802.11g
16. What is the difference between SONET and SDH?
17. What is Packet over SONET/SDH (POS)?
18. Compare the response time and throughput requirements of a file transfer and an interactive application.
19. Which technologies are suitable for WAN connections over 50 Mbps?
20. Match the terms with their definitions:
    Terms: Compression, Bandwidth, Response time, Window size, Throughput
    Definitions:
    Amount of data transmitted or received per unit time
    Maximum number of frames (or amount of data) the sender can transmit before it must wait for an acknowledgment
    Amount of data successfully moved from one place to another in a given time period
    Reduction of data size for the purpose of saving transmission time
    Time between a user request (such as the entry of a command or keystroke) and the host system's command execution or response delivery
21. What is multilink PPP?
22. What can be done if WAN links are constantly congested?
23. Match each of the following queuing mechanisms with its definition:
    Queuing mechanisms: WFQ, PQ, CQ, CBWFQ, LLQ
    Definitions:
    Allows sensitive data such as voice to be sent first
    Enabled by default on most low-speed serial interfaces (with speeds at or below 2.048 Mbps) on Cisco routers
    Establishes up to 16 configurable interface output queues
    Reserves a queue for each class
    Establishes four interface output queues (high, medium, normal, and low)
24. What is the difference between an overlay VPN and VPDN?
25. What is IPsec?
26. What is the difference between an SP MPLS IP VPN and a self-deployed MPLS network?
27. Describe the four base packages in Cisco IOS Packaging.
28. What is the typical number of users in small, medium, and large branch offices?
29. What is Cisco StackWise technology?
30. Which models of Cisco ISRs would be appropriate for a teleworker?
This chapter discusses IP addressing design and includes the following sections:

■ Designing an IP Addressing Plan
■ Introduction to IPv6
■ Summary
■ References
■ Case Study: ACMC Hospital IP Addressing Design
■ Review Questions

CHAPTER 6

Designing IP Addressing in the Network
This chapter begins with a discussion of the design of an Internet Protocol (IP) version 4 (IPv4)
addressing scheme. It continues with an introduction to IP version 6 (IPv6) and a discussion of
IPv4-to-IPv6 migration strategies.
NOTE In this chapter, the term IP refers to IPv4.
Designing an IP Addressing Plan
This section explores private and public address types, how to determine the size of the network
in relation to the addressing plan, and how to plan an IP addressing hierarchy. The section
concludes with a discussion of various IP address assignment and name resolution methods.
NOTE Appendix B, “IPv4 Supplement,” and Chapter 1, “Network Fundamentals Review,”
include detailed information about IPv4 addressing. You are encouraged to review any of the
material in Appendix B and Chapter 1 that you are not familiar with before reading the rest
of this chapter.
Private and Public IPv4 Addresses
Recall from Chapter 1 that the IP address space is divided into public and private spaces. Private
addresses are reserved IP addresses that are to be used only internally within a company’s
network, not on the Internet. Private addresses must therefore be mapped to a company’s
external registered address when sending anything on the Internet. Public IP addresses are
provided for external communication. Figure 6-1 illustrates the use of private and public
addresses in a network.
378
Chapter 6: Designing IP Addressing in the Network
Figure 6-1  Private and Public Addresses Can Be Used in a Network (figure: an isolated network uses private addresses, another network uses public addresses, and a third uses private addresses internally with a private/public IP address translation point at its boundary with the Internet)
RFC 1918, Address Allocation for Private Internets, defines the private IP addresses as follows:
■ 10.0.0.0 to 10.255.255.255
■ 172.16.0.0 to 172.31.255.255
■ 192.168.0.0 to 192.168.255.255
The remaining addresses are public addresses.
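The three RFC 1918 ranges correspond to the prefixes 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. A quick membership check with Python's standard ipaddress module (the function name is ours, for illustration):

```python
import ipaddress

# The three RFC 1918 private blocks, expressed in prefix notation.
RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),      # 10.0.0.0 to 10.255.255.255
    ipaddress.ip_network("172.16.0.0/12"),   # 172.16.0.0 to 172.31.255.255
    ipaddress.ip_network("192.168.0.0/16"),  # 192.168.0.0 to 192.168.255.255
]

def is_rfc1918(address: str) -> bool:
    """True if the address falls in one of the RFC 1918 private ranges."""
    addr = ipaddress.ip_address(address)
    return any(addr in block for block in RFC1918_BLOCKS)

print(is_rfc1918("172.31.255.1"))  # → True (inside 172.16.0.0/12)
print(is_rfc1918("172.32.0.1"))    # → False (just outside it)
```

Note that the 172 block spans only 172.16 through 172.31, a common source of addressing mistakes.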
Private Versus Public Address Selection Criteria
Very few public IP addresses are currently available, so Internet service providers (ISPs) can
assign only a subset of Class C addresses to their customers. Therefore, in most cases, the number
of public IP addresses assigned to an organization is inadequate for addressing their entire
network.
The solution to this problem is to use private IP addresses within a network and to translate these
private addresses to public addresses when Internet connectivity is required. When selecting
addresses, the network designer should consider the following questions:
■ Are private, public, or both IP address types required?
■ How many end systems need only access to the public network? This is the number of end systems that need a limited set of external services (such as e-mail, file transfer, or web browsing) but do not need unrestricted external access. These end systems do not have to be visible to the public network.
■ How many end systems must have access to and be visible to the public network? This is the number of Internet connections and various servers that must be visible to the public network (such as public servers and servers used for e-commerce, such as web servers, database servers, and application servers) and defines the number of required public IP addresses. These end systems require globally unambiguous IP addresses.
■ Where will the boundaries between the private and public IP addresses be, and how will they be implemented?
Interconnecting Private and Public Addresses
According to its needs, an organization can use both public and private addresses. A router or
firewall acts as the interface between the network’s private and public sections.
When private addresses are used for addressing in a network and this network must be connected
to the Internet, Network Address Translation (NAT) or Port Address Translation (PAT) must be
used to translate from private to public addresses and vice versa. NAT or PAT is required if
accessibility to the public Internet or public visibility is required.
Static NAT is a one-to-one mapping of an unregistered IP address to a registered IP address.
Dynamic NAT maps an unregistered IP address to a registered IP address from a group of
registered IP addresses. NAT overloading, or PAT, is a form of dynamic NAT that maps multiple
unregistered IP addresses to a single registered IP address by using different port numbers. As
shown in Figure 6-2, NAT or PAT can be used to translate the following:
■ One private address to one public address: Used in cases when servers on the internal network with private IP addresses must be visible from the public network. The translation from the server's private IP address to the public IP address is defined statically.
■ Many private addresses to one public address: Used for end systems that require access to the public network but do not have to be visible to the outside world.
■ Combination: It is common to see a combination of the previous two techniques deployed throughout networks.
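The many-to-one (PAT) case can be modeled as a table keyed by private address and port. This is a toy sketch of the concept, not Cisco's implementation; the addresses and the sequential port-allocation scheme are invented for the example:

```python
import itertools

class PatTable:
    """Toy PAT model: maps (private_ip, private_port) to a unique public port."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = itertools.count(1024)  # assumption: allocate ports sequentially
        self.mappings: dict[tuple[str, int], int] = {}

    def translate(self, private_ip: str, private_port: int) -> tuple[str, int]:
        """Return the (public_ip, public_port) pair used on the outside of the NAT."""
        key = (private_ip, private_port)
        if key not in self.mappings:
            self.mappings[key] = next(self.next_port)
        return (self.public_ip, self.mappings[key])

pat = PatTable("203.0.113.1")
print(pat.translate("192.168.1.10", 5000))  # → ('203.0.113.1', 1024)
print(pat.translate("192.168.1.11", 5000))  # same private port, different host → ('203.0.113.1', 1025)
```

Because the public port disambiguates the flows, many private hosts share one registered address, which is exactly why PAT conserves public address space.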
Figure 6-2  Private to Public Address Translation (figure: a NAT or PAT device sits between the internal network and the Internet; it performs many-to-one translation for private hosts that need access to the public network, one-to-one translation for private hosts that need public visibility, and no translation for hosts that already have public addresses)
NOTE As mentioned, the addresses typically used on internal networks are private addresses,
and they are translated to public addresses. However, NAT and PAT can be used to translate
between any two addresses.
For additional details about NAT and PAT, see Appendix D, “Network Address Translation.”
Guidelines for the Use of Private and Public Addresses in an Enterprise Network
As shown in Figure 6-3, the typical enterprise network uses both private and public IP addresses.
Private IP addresses are used throughout the Enterprise Campus, Enterprise Branch, and
Enterprise Teleworker modules. The following modules include public addresses:
■ The Internet Connectivity module, where public IP addresses are used for Internet connections and publicly accessible servers.
■ The E-commerce module, where public IP addresses are used for the database, application, and web servers.
■ The Remote Access and virtual private network (VPN) module, the Enterprise Data Center module, and the WAN and metropolitan-area network (MAN) and Site-to-Site VPN module, where public IP addresses are used for certain connections.
Figure 6-3  Private and Public IP Addresses Are Used in the Enterprise Network (figure: within the Enterprise Campus, the Building Access, Building Distribution, Campus Core, Server Farm, and Network Management modules use private addresses; in the Enterprise Edge, the E-Commerce module's database, application, and web servers and the Internet Connectivity module's public servers use public addresses, while the Remote Access/VPN and WAN/MAN and Site-to-Site VPN modules use a mix of public and private addresses; the Enterprise Data Center, Enterprise Branch, and Enterprise Teleworker modules also mix public and private addresses, with public/private translation at the boundary)
Determining the Size of the Network
The first step in designing an IP addressing plan is determining the size of the network to establish
how many IP subnets and how many IP addresses are needed on each subnet. To gather this
information, answer the following questions:
■ How many locations does the network consist of?: The designer must determine the number and type of locations.
■ How many devices in each location need addresses?: The network designer must determine the number of devices that need to be addressed, including end systems, router interfaces, switches, firewall interfaces, and any other devices.
■ What are the IP addressing requirements for individual locations?: The designer must collect information about which systems will use dynamic addressing, which will use static addresses, and which systems can use private instead of public addresses.
■ What subnet size is appropriate?: Based on the collected information about the number of networks and planned switch deployment, the designer estimates the appropriate subnet size. For example, deploying 48-port switches would mean that subnets with 64 host addresses would be appropriate, assuming one device per port.
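The subnet-sizing arithmetic behind the 48-port example can be sketched as follows. This is our illustration, not a formula from the text: it finds the smallest power-of-two block that leaves room for the hosts plus the network and broadcast addresses:

```python
def smallest_prefix(hosts_needed: int) -> int:
    """Return the longest IPv4 prefix whose subnet holds hosts_needed usable addresses."""
    prefix = 32
    size = 1
    while size - 2 < hosts_needed:  # reserve the network and broadcast addresses
        size *= 2
        prefix -= 1
    return prefix

# A 48-port switch, one device per port, plus a default gateway:
print(smallest_prefix(49))  # → 26, i.e., a /26 with 64 addresses
```

A /26 yields the 64-address subnet the text mentions; a point-to-point link with two hosts comes out as a /30.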
Determining the Network Topology
Initially, the designer should acquire a general picture of the network topology; this will help
determine the correct information to gather about network size and its relation to the IP addressing
plan.
With this general network topology information, the designer determines the number of locations,
location types, and their correlations. For example, the network location information for the
topology shown in Figure 6-4 is shown in Table 6-1.
Figure 6-4  Sample Network Topology (figure: the San Francisco main office connects to the Denver and Houston regional/international offices; Remote Offices 1 and 2 connect through Denver, and Remote Office 3 connects through Houston)

Table 6-1  Network Location Information for the Topology in Figure 6-4

Location | Type | Comments
San Francisco | Main office | The central location where the majority of users are located
Denver | Regional office | Connects to the San Francisco main office
Houston | Regional office | Connects to the San Francisco main office
Remote Office 1 | Remote office | Connects to the Denver regional office
Remote Office 2 | Remote office | Connects to the Denver regional office
Remote Office 3 | Remote office | Connects to the Houston regional office
Size of Individual Locations
The network size, in terms of the IP addressing plan, relates to the number of devices and
interfaces that need an IP address. To establish the overall network size in a simplistic way, the
designer determines the approximate number of workstations, servers, Cisco IP phones, router
interfaces, switch management and Layer 3 interfaces, firewall interfaces, and other network
devices at each location. This estimate provides the minimum overall number of IP addresses that
are needed for the network. Table 6-2 provides the IP address requirements by location for the
topology shown in Figure 6-4.
Table 6-2  IP Addressing Requirements by Location for the Topology in Figure 6-4

Office | Type | Workstations | Servers | IP Phones | Router Interfaces | Switches | Firewall and Other Device Interfaces | Reserve | Location Total
San Francisco | Main | 600 | 35 | 600 | 17 | 26 | 12 | 20% | 1290
Denver | Regional | 210 | 7 | 210 | 10 | 4 | 0 | 20% | 441
Houston | Regional | 155 | 5 | 155 | 10 | 4 | 0 | 20% | 329
Remote Office 1 | Remote | 12 | 1 | 12 | 2 | 1 | 0 | 10% | 28
Remote Office 2 | Remote | 15 | 1 | 15 | 3 | 1 | 0 | 10% | 35
Remote Office 3 | Remote | 8 | 1 | 8 | 3 | 1 | 0 | 10% | 21
Total | | 1000 | 50 | 1000 | 45 | 37 | 12 | | 2144
Some additional addresses should be reserved to allow for seamless potential network growth. The
commonly suggested reserve is 20 percent for main and regional offices, and 10 percent for remote
offices; however, this can vary from case to case. The designer should carefully discuss future
network growth with the organization’s representative to obtain a more precise estimate of the
required resources.
Planning the IP Addressing Hierarchy
The IP addressing hierarchy influences network routing. This section describes IP addressing
hierarchy and how it reduces routing overhead. This section discusses the issues that influence the
IP addressing plan and the routing protocol choice, including summarization, fixed-length subnet
masking, variable-length subnet masking, and classful and classless routing protocols.
NOTE Chapter 7, “Selecting Routing Protocols for the Network,” discusses routing protocols
in detail.
Hierarchical Addressing
The telephone numbering system is a hierarchical system. For example, the North American
Numbering Plan includes the country code, the area code, the local exchange, and the line number.
The telephone architecture has handled prefix routing, or routing based only on the prefix part of
the address, for many years. For example, a telephone switch in Detroit, Michigan does not have
to know how to reach a specific line in Portland, Oregon. It must simply recognize that the call is
not local. A long-distance carrier must recognize that area code 503 is for Oregon, but it does not
have to know the details of how to reach the specific line in Oregon.
The IP addressing scheme is also hierarchical, and prefix routing is not new in the IP environment
either. As in the telephone example, IP routers make hierarchical decisions. Recall that an IP
address comprises a prefix part and a host part. A router has to know only how to reach the next
hop; it does not have to know the details of how to reach an end node that is not local. Routers use
the prefix to determine the path for a destination address that is not local. The host part is used to
reach local hosts.
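Prefix routing can be sketched as a longest-prefix match over a routing table: the most specific matching prefix wins, and a default route catches everything else. The table entries and next-hop addresses below are invented for illustration:

```python
import ipaddress

# Toy routing table: prefix → next hop (addresses invented for the example).
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "192.0.2.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.0.2.2",
    ipaddress.ip_network("0.0.0.0/0"): "192.0.2.254",  # default route
}

def next_hop(destination: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("10.1.2.3"))      # → 192.0.2.2 (the /16 is more specific than the /8)
print(next_hop("10.9.9.9"))      # → 192.0.2.1
print(next_hop("198.51.100.7"))  # → 192.0.2.254 (falls through to the default)
```

Like the telephone switch in the analogy, the router needs only the prefix to forward; the host part matters only on the destination's local network.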
Route Summarization
With route summarization, also referred to as route aggregation or supernetting, one route in the
routing table represents many other routes. Summarizing routes reduces the routing update traffic
(which can be important on low-speed links) and reduces the number of routes in the routing table
and overall router overhead in the router receiving the routes. In a hierarchical network design,
effective use of route summarization can limit the impact of topology changes to the routers in one
section of the network.
If the Internet had not adopted route summarization by standardizing on classless interdomain routing (CIDR), it would not have survived.
CIDR
CIDR is a mechanism developed to help alleviate the problem of IP address exhaustion and growth
of routing tables. The idea behind CIDR is that blocks of multiple addresses (for example, blocks
of Class C addresses) can be combined, or aggregated, to create a larger (that is, more hosts allowed),
classless set of IP addresses. Blocks of Class C network numbers are allocated to each network
service provider; organizations using the network service provider for Internet connectivity are
allocated subsets of the service provider’s address space as required. These multiple Class C
addresses can then be summarized in routing tables, resulting in fewer route advertisements. (Note
that the CIDR mechanism can be applied to blocks of Class A, B, and C addresses; it is not
restricted to Class C.) CIDR is described in RFC 1519, Classless Inter-Domain Routing (CIDR):
An Address Assignment and Aggregation Strategy.
For summarization to work correctly, the following requirements must be met:
■ Multiple IP addresses must share the same leftmost bits.
■ Routers must base their routing decisions on a 32-bit IP address and a prefix length of up to 32 bits.
■ Routing protocols must carry the prefix length with the 32-bit IP address.
For example, assume that a router has the following networks behind it:
192.168.168.0/24
192.168.169.0/24
192.168.170.0/24
192.168.171.0/24
192.168.172.0/24
192.168.173.0/24
192.168.174.0/24
192.168.175.0/24
Each of these networks could be advertised separately; however, this would mean advertising
eight routes. Instead, this router can summarize the eight routes into one route and advertise
192.168.168.0/21. By advertising this one route, the router is saying, “Route packets to me if
the destination has the first 21 bits the same as the first 21 bits of 192.168.168.0.”
Figure 6-5 illustrates how this summary route is determined. The addresses all have the first 21
bits in common and include all the combinations of the other 3 bits in the network portion of the
address; therefore, only the first 21 bits are needed to determine whether the router can route to
one of these specific addresses.
Figure 6-5  Find the Common Bits to Summarize Routes

192.168.168.0 = 11000000 10101000 10101000 00000000
192.168.169.0 = 11000000 10101000 10101001 00000000
192.168.170.0 = 11000000 10101000 10101010 00000000
192.168.171.0 = 11000000 10101000 10101011 00000000
192.168.172.0 = 11000000 10101000 10101100 00000000
192.168.173.0 = 11000000 10101000 10101101 00000000
192.168.174.0 = 11000000 10101000 10101110 00000000
192.168.175.0 = 11000000 10101000 10101111 00000000

Number of common bits = 21
Number of noncommon network bits = 3
Number of host bits = 8
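Python's standard ipaddress module performs the same aggregation, which makes a quick check of the example possible:

```python
import ipaddress

# The eight /24 networks behind the router in the example.
networks = [ipaddress.ip_network(f"192.168.{third}.0/24") for third in range(168, 176)]

# collapse_addresses merges contiguous, aligned blocks into the shortest prefix list.
summary = list(ipaddress.collapse_addresses(networks))
print(summary)  # → [IPv4Network('192.168.168.0/21')]
```

The eight routes collapse into the single 192.168.168.0/21 advertisement because they share their first 21 bits and cover every combination of the remaining 3 network bits.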
IP Addressing Hierarchy Criteria
IP addressing hierarchy has an important impact on the routing protocol choice, and vice versa.
The decision about how to implement the IP addressing hierarchy is usually based on the
following questions:
■ Is hierarchy needed within the IP addressing plan?
■ What are the criteria for dividing the network into route summarization groups?
■ How is route summarization performed, and what is the correlation with routing?
■ Is a hierarchy of route summarization groups required?
■ How many end systems does each route summarization group or subgroup contain?
Benefits of Hierarchical Addressing
A network designer decides how to implement the IP addressing hierarchy based on the network’s
size, geography, and topology. In large networks, hierarchy within the IP addressing plan is
mandatory for a stable network (including stable routing tables). For the following reasons, a
planned, hierarchical IP addressing structure, with room for growth, is recommended for networks
of all sizes:
■ Influence of IP addressing on routing: An IP addressing plan influences the network's overall routing. Before allocating blocks of IP addresses to various parts of the network and assigning IP addresses to devices, consider the criteria for an appropriate and effective IP addressing scheme. Routing stability, service availability, network scalability, and modularity are some crucial and preferred network characteristics and are directly affected by IP address allocation and deployment.
■ Modular design and scalable solutions: Whether building a new network or adding a new service on top of an existing infrastructure, a modular design helps to deliver a long-term, scalable solution. IP addressing modularity allows the aggregation of routing information on a hierarchical basis.
■ Route aggregation: Route aggregation is used to reduce routing overhead and improve routing stability and scalability. However, to implement route aggregation, a designer must be able to divide a network into contiguous IP address areas and must have a solid understanding of IP address assignment, route aggregation, and hierarchical routing.
Summarization Groups
To reduce the routing overhead in a large network, a multilevel hierarchy might be required. The
depth of hierarchy depends on the network size and the size of the highest-level summarization
group. Figure 6-6 shows an example of a network hierarchy.
Figure 6-6  IP Addressing Hierarchy (figure: routes are aggregated at each level of a tree; sections within buildings aggregate into Building 1 and Building 2, the buildings aggregate into Location 1 and Location 2, and the locations aggregate into the organization's address space)
A typical organization has up to three levels of hierarchy:
■ First level: Network locations typically represent the first level of hierarchy in enterprise networks. Each location typically represents a group of summarized subnets, known as a summarization group.
■ Second level: A second level of hierarchy can be done within first-level summarization groups. For example, a large location can be divided into smaller summarization groups that represent the buildings or cities within that location. Not all first-level summarization groups require a second level of hierarchy.
■ Third level: To further minimize the potential routing overhead and instability, a third level of hierarchy can exist within the second-level summarization group. For example, sections or floors within individual buildings can represent the third-level summarization group.
Impact of Poorly Designed IP Addressing
A poorly designed IP addressing scheme usually results in IP addresses that are randomly assigned
on an as-needed basis. In this case, the IP addresses are most likely dispersed through the network
with no thought as to whether they can be grouped or summarized. A poor design provides no
opportunity for dividing the network into contiguous address areas, and therefore no means of
implementing route summarization.
Figure 6-7 is a sample network with poorly designed IP addressing; it uses a dynamic routing
protocol. Suppose that a link in the network is flapping (changing its state from UP to DOWN, and
vice versa) ten times per minute. Because dynamic routing is used, the routers that detect the
change send routing updates to their neighbors, those neighbors send it to their neighbors, and so
on. Because aggregation is not possible, the routing update is propagated throughout the entire
network, even if there is no need for a distant router to have detailed knowledge of that link.
Figure 6-7  A Poorly Designed IP Addressing Scheme Results in Excess Routing Traffic (figure: an update about a flapping link on subnet 10.1.1.0/24 is propagated to every router in the network because no aggregation is possible)
Impacts of poorly designed IP addressing include the following:
■ Excess routing traffic consumes bandwidth: When any route changes, routers send routing updates. Without summarization, more updates are sent, and the routing traffic consumes more bandwidth.
■ Increased routing table recalculation: Routing updates require routing table recalculation, which affects the router's performance and ability to forward traffic.
■ Possibility of routing loops: When too many routing changes prevent routers from converging with their neighbors, routing loops might occur, which might have global consequences for an organization.
Benefits of Route Aggregation
Implementing route aggregation on border routers between contiguously addressed areas controls
routing table size. Figure 6-8 shows an example of implementing route summarization
(aggregation) on the area borders in a sample network. If a link within an area fails, routing
updates are not propagated to the rest of the network, because only the summarized route is sent
to the rest of the network, and it has not changed; the route information about the failed link stays
within the area. This reduces bandwidth consumption related to routing overhead and relieves
routers from unnecessary routing table recalculation.
Figure 6-8  A Hierarchical IP Addressing Plan Results in Reduced Routing Traffic (figure: a border router summarizes the area containing 10.1.1.0/24 as 10.1.0.0/16 at the summarization point; only the summarized route 10.1.0.0 is propagated to the rest of the network, so changes to the link inside the area stay within the area)
Efficient aggregation of routing advertisements narrows the scope of routing update propagation
and significantly decreases the cumulative frequency of routing updates.
Fixed- and Variable-Length Subnet Masks
Another consideration when designing the IP addressing hierarchy is the subnet mask to use—
either the same mask for the entire major network or different masks for different parts of the
major network.
KEY POINT  A major network is a Class A, B, or C network.

Fixed-Length Subnet Masking (FLSM) is when all subnet masks in a major network must be the same.

Variable-Length Subnet Masking (VLSM) is when subnet masks within a major network can be different. In modern networks, VLSM should be used to conserve IP addresses.

Some routing protocols require FLSM; others allow VLSM.
FLSM requires that all subnets of a major network have the same subnet mask, which therefore
results in less efficient address space allocation. For example, in the top network shown in Figure
6-9, network 172.16.0.0/16 is subnetted using FLSM. Each subnet is given a /24 mask. The
network is composed of multiple LANs that are connected by point-to-point WAN links. Because
FLSM is used, all subnets have the same subnet mask. This is inefficient, because even though
only two addresses are needed on the point-to-point links, a /24 subnet mask with 254 available
host addresses is used.
Figure 6-9  Fixed-Length Versus Variable-Length Subnet Mask (figure: in the FLSM network, the LANs at Routers A, B, and C and the point-to-point links between them all use /24 subnets of 172.16.0.0; in the VLSM network, the LANs keep /24 subnets such as 172.16.0.0/24 and 172.16.2.0/24, while the point-to-point link uses 172.16.1.0/30)
VLSM makes it possible to subnet with different subnet masks and therefore results in more
efficient address space allocation. VLSM also provides a greater capability to perform route
summarization, because it allows more hierarchical levels within an addressing plan. VLSM
requires prefix length information to be explicitly sent with each address advertised in a routing
update.
For example, in the lower network shown in Figure 6-9, network 172.16.0.0/16 is subnetted using
VLSM. The network is composed of multiple LANs that are connected by point-to-point WAN
links. The point-to-point links have a subnet mask of /30, providing only two available host
addresses, which is all that is needed on these links. The LANs have a subnet mask of /24 because
they have more hosts that require addresses.
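The address savings can be checked with a short sketch using Python's ipaddress module (illustrative only; the subnet values follow the VLSM example in Figure 6-9):

```python
import ipaddress

def usable_hosts(net):
    # Usable addresses = total addresses minus the network and broadcast addresses.
    return net.num_addresses - 2

lan = ipaddress.ip_network("172.16.2.0/24")  # LAN subnet under VLSM
p2p = ipaddress.ip_network("172.16.1.0/30")  # point-to-point link under VLSM

print(usable_hosts(lan))  # 254
print(usable_hosts(p2p))  # 2

# Under FLSM the point-to-point link would also receive a /24,
# wasting 252 of its 254 usable addresses on a two-router link.
wasted = usable_hosts(ipaddress.ip_network("172.16.1.0/24")) - 2
print(wasted)  # 252
```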
Routing Protocol Considerations
To use VLSM, the routing protocol in use must be classless. Classful routing protocols permit only
FLSM.
KEY POINT: With classful routing, routing updates do not carry the subnet mask. With classless routing, routing updates do carry the subnet mask.
Classful Routing Protocols
As illustrated at the top of Figure 6-10, the following rules apply when classful routing protocols
are used:
■
The routing updates do not include subnet masks.
■
When a routing update is received and the routing information is about one of the following:
— Routes within the same major network as configured on the receiving interface, the
subnet mask configured on the receiving interface is assumed to apply to the received
routes also. Therefore, the mask must be the same for all subnets of a major network.
In other words, subnetting must be done with FLSM.
— Routes in a different major network than configured on the receiving interface, the
default major network mask is assumed to apply to the received routes. Therefore,
automatic route summarization is performed across major network (Class A, B, or
C) boundaries, and subnetted networks must be contiguous.
Figure 6-10  Classful Versus Classless Routing Protocols (figure: subnets 172.16.1.0/24 and 172.16.2.0/24 are advertised toward Router C through Router B across the 192.168.2.0/24 boundary; with the classful protocol, automatic route summarization on the network boundary leaves Router C with the single routing table entry 172.16.0.0 via next hop B; with the classless protocol, no automatic summarization on the network boundary is necessary, and Router C's routing table holds 172.16.1.0/24 and 172.16.2.0/24, each via next hop B)
Figure 6-11 illustrates a sample network with a discontiguous 172.16.0.0 network that runs a
classful routing protocol. Routers A and C automatically summarize across the major network
boundary, so both send routing information about 172.16.0.0 rather than the individual subnets
(172.16.1.0/24 and 172.16.2.0/24). Consequently, Router B receives two entries for the major
network 172.16.0.0, and it puts both entries into its routing table. Router B therefore might make
incorrect routing decisions.
Figure 6-11  Classful Routing Protocols Do Not Send the Subnet Mask in the Routing Update (figure: Routers A and C connect subnets 172.16.1.0/24 and 172.16.2.0/24 to Router B over the 192.168.1.0/24 and 192.168.2.0/24 links; both advertise the summary 172.16.0.0, so Router B's routing table contains two entries for destination 172.16.0.0, one with next hop A and one with next hop C)
Because of these constraints, classful routing is not often used in modern networks. Routing
Information Protocol (RIP) version 1 (RIPv1) is an example of a classful routing protocol.
Classless Routing Protocols
As illustrated in the lower portion of Figure 6-10, the following rules apply when classless routing
protocols are used:
■
The routing updates include subnet masks.
■
VLSM is supported.
■
Automatic route summarization at the major network boundary is not required, and route
summarization can be manually configured.
■
Subnetted networks can be discontiguous.
Consequently, all modern networks should use classless routing. Examples of classless routing
protocols include RIP version 2 (RIPv2), Enhanced Interior Gateway Routing Protocol (EIGRP),
OSPF, IS-IS, and Border Gateway Protocol (BGP).
NOTE The classless routing protocols do not all behave the same regarding summarization.
For example, RIPv2 and EIGRP automatically summarize at the network boundary by default,
but they can be configured not to, and they can be configured to summarize at other address
boundaries. Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System
(IS-IS) do not summarize at the network boundary by default; they can be configured to
summarize at other address boundaries.
Figure 6-12 illustrates how discontiguous networks are handled by a classless routing protocol.
This figure shows the same network as in Figure 6-11, but running a classless routing protocol that
does not automatically summarize at the network boundary. In this example, Router B learns about
both subnetworks 172.16.1.0/24 and 172.16.2.0/24, one from each interface; routing is performed
correctly.
NOTE Although using discontiguous subnets with classless routing protocols does not pose
the routing issues demonstrated in Figure 6-11, contiguous blocks of IP networks should be used
whenever possible to promote more efficient summarization.
Figure 6-12  Classless Routing Protocols Send the Subnet Mask in the Routing Update (figure: in the same topology as Figure 6-11, Router A advertises 172.16.1.0/24 and Router C advertises 172.16.2.0/24, so Router B's routing table lists destination 172.16.1.0/24 with next hop A and destination 172.16.2.0/24 with next hop C)
Hierarchical IP Addressing and Summarization Plan Example
Recall that the number of available host addresses on a subnet is calculated by the formula 2^h – 2, where h is the number of host bits (the number of bits set to 0 in the subnet mask).
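The formula can be applied directly; the following sketch finds the smallest h satisfying 2^h – 2 ≥ the required address count, using values taken from Table 6-3. (Note that the smaller remote offices in the table are deliberately over-allocated to 6 host bits, as the table's footnote explains.)

```python
def host_bits_needed(addresses):
    """Smallest h such that 2**h - 2 >= addresses."""
    h = 1
    while 2 ** h - 2 < addresses:
        h += 1
    return h

# San Francisco needs 1290 addresses, Denver Campus 441, Remote Office 2 35.
for n in (1290, 441, 35):
    print(n, host_bits_needed(n), 2 ** host_bits_needed(n))
# 1290 -> 11 host bits (2048-address block)
# 441  ->  9 host bits (512-address block)
# 35   ->  6 host bits (64-address block)
```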
The first two columns in Table 6-3 show the location and number of IP addresses required at each
location for the sample network shown in Figure 6-4. The third column in this table is the next
highest power of 2 from the required number of addresses; this value is used to calculate the
required number of host bits, as shown in the fourth column. Assuming that the Class B address
172.16.0.0/16 is used to address this network, the fifth column illustrates sample address blocks
allocated to each location.
Table 6-3  Address Blocks by Location for the Topology in Figure 6-4

Location          | Number of IP Addresses Required | Rounded Power of 2 | Number of Host Bits(1) | Address Block Assigned
San Francisco     | 1290 | 2048 | 11 | 172.16.0.0–172.16.7.255/21
Denver Region     |      | 1024 | 10 | 172.16.8.0–172.16.11.255/22
  Denver Campus   | 441  | 512  | 9  | 172.16.8.0–172.16.9.255/23
  Remote Office 1 | 28   | 64   | 6  | 172.16.10.0/26
  Remote Office 2 | 35   | 64   | 6  | 172.16.10.64/26
Houston Region    |      | 1024 | 10 | 172.16.12.0–172.16.15.255/22
  Houston Campus  | 329  | 512  | 9  | 172.16.12.0–172.16.13.255/23
  Remote Office 3 | 21   | 64   | 6  | 172.16.14.0/26

(1) Note that because the largest remote office needs 35 addresses and there is plenty of address space, 64 addresses are allocated to each remote office.
For the main campus, 2048 addresses are allocated; 11 host bits are required. This subnet is further
divided into smaller subnets supporting floors or wiring closets. For the Denver region, 1024
addresses are allocated; 10 host bits are required. This address block is further divided into smaller
subnets supporting buildings, floors, or wiring closets. Similarly, for the Houston region, 1024
addresses are also allocated and further subdivided, as shown in Table 6-3.
Figure 6-13 illustrates one of the links in the Denver region going down and how summarization
is performed to reduce routing update traffic.
Figure 6-13  Hierarchical IP Addressing Plan Example (figure: a 172.16.11.248/30 link in the Denver region fails; at the summarization point, only the summarized route 172.16.8.0/22 is propagated to the San Francisco campus and the Houston region, so the details of routing remain within the area, simplifying routing tables and reducing processing time)
Methods of Assigning IP Addresses
This section discusses methods of assigning IP addresses to end systems and explains their
influence on administrative overhead. Address assignment includes assigning an IP address, a
default gateway, one or more domain name servers that resolve names to IP addresses, time
servers, and so forth. Before selecting the desired IP address assignment method, the following
questions should be answered:
■
How many devices need an IP address?
■
Which devices require static IP address assignment?
■
Is IP address renumbering expected in the future?
■
Is the administrator required to track devices and their IP addresses?
■
Do additional parameters (default gateway, name server, and so forth) have to be configured?
■
Are there any availability issues?
■
Are there any security issues?
Static Versus Dynamic IP Address Assignment Methods
Following are the two basic IP address assignment strategies:
■
Static: An IP address is statically assigned to a system. The network administrator configures
the IP address, default gateway, and name servers manually by entering them into a special
file or files on the end system with either a graphical or text interface. Static address
assignment is an extra burden for the administrator—especially on large-scale networks—
who must configure the address on every end system in the network.
■
Dynamic: IP addresses are dynamically assigned to the end systems. Dynamic address
assignment relieves the administrator of manually assigning an address to every network
device. Instead, the administrator must set up a server to assign the addresses. On that server,
the administrator defines the address pools and additional parameters that should be sent to
the host (default gateway, name servers, time servers, and so forth). On the host, the
administrator enables the host to acquire the address dynamically; this is often the default.
When IP address reconfiguration is needed, the administrator reconfigures the server, which
then performs the host-renumbering task. Examples of available address assignment
protocols include Reverse Address Resolution Protocol (RARP), Bootstrap Protocol (BOOTP), and DHCP. DHCP is the newest and provides the most features.
When to Use Static or Dynamic Address Assignment
To select either a static or dynamic end system IP address assignment method or a combination of
the two, consider the following:
■
Node type: Network devices such as routers and switches typically have static addresses.
End-user devices such as PCs typically have dynamic addresses.
■
The number of end systems: If there are more than 30 end systems, dynamic address
assignment is preferred. Static assignment can be used for smaller networks.
■
Renumbering: If renumbering is likely to happen and there are many end systems, dynamic
address assignment is the best choice. With DHCP, only DHCP server reconfiguration is
needed; with static assignment, all hosts must be reconfigured.
■
Address tracking: If the network policy requires address tracking, the static address
assignment method might be easier to implement than the dynamic address assignment
method. However, address tracking is also possible with dynamic address assignment with
additional DHCP server configuration.
■
Additional parameters: DHCP is the easiest solution when additional parameters must be
configured. The parameters have to be entered only on the DHCP server, which then sends the
address and those parameters to the clients.
■
High availability: Statically assigned IP addresses are always available. Dynamically
assigned IP addresses must be acquired from the server; if the server fails, the addresses
cannot be acquired. To ensure reliability, a redundant DHCP server is required.
■
Security: With dynamic IP address assignment, anyone who connects to the network can
acquire a valid IP address, in most cases. This might be a security risk. Static IP address
assignment poses only a minor security risk.
The use of one address assignment method does not exclude the use of another in a different part
of the network.
Guidelines for Assigning IP Addresses in the Enterprise Network
The typical enterprise network uses both static and dynamic address assignment methods. As
shown in Figure 6-14, the static IP address assignment method is typically used for campus
network infrastructure devices, in the Server Farm and Enterprise Data Center modules, and in the
modules of the Enterprise Edge (the E-Commerce, Internet Connectivity, Remote Access and
VPN, and WAN and MAN and Site-to-Site VPN modules). Static addresses are required for
systems such as servers or network devices, in which the IP address must be known at all times
for connectivity, general access, or management.
Figure 6-14  IP Address Assignment in an Enterprise Network (figure: in the Enterprise Campus, the Building Access module uses dynamic assignment, while the Building Distribution, Campus Core, Server Farm, and Network Management areas use static assignment; the Enterprise Edge modules (E-Commerce with its database, application, and web servers; Internet Connectivity; Remote Access/VPN; and WAN/MAN and Site-to-Site VPN) use static assignment; the Enterprise Data Center uses static assignment, and the Enterprise Branch and Enterprise Teleworker use a mix of static assignment for infrastructure and dynamic assignment for end-user devices)
Dynamic IP address assignment is used for assigning IP addresses to end-user devices, including
workstations, Cisco IP phones, and mobile devices.
Using DHCP to Assign IP Addresses
DHCP is used to provide dynamic IP address allocation to hosts. DHCP uses a client/server model;
the DHCP server can be a Windows server, a UNIX-based server, or a Cisco IOS device. Cisco
IOS devices can also be DHCP relay agents and DHCP clients. Figure 6-15 shows the steps that
occur when a DHCP client requests an IP address from a DHCP server.
Step 1  The host sends a DHCPDISCOVER broadcast message to locate a DHCP server.

Step 2  A DHCP server offers configuration parameters, such as an IP address, a subnet mask, a domain name, a default gateway, and a lease for the IP address, to the client in a DHCPOFFER unicast message.

Step 3  The client returns a formal request for the offered IP address to the DHCP server in a DHCPREQUEST broadcast message.

Step 4  The DHCP server confirms that the IP address has been allocated to the client by returning a DHCPACK unicast message to the client.
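The four-step exchange can be sketched as a toy server model (a simplified simulation, not a real DHCP implementation; relays, options, and lease timers are omitted):

```python
class DhcpServer:
    def __init__(self, pool):
        self.free = list(pool)   # addresses available to offer
        self.offered = {}        # client -> tentatively reserved address
        self.bound = {}          # client -> confirmed address

    def handle(self, msg, client):
        if msg == "DHCPDISCOVER":
            # Step 2: reserve an address and offer it (not yet committed).
            addr = self.free.pop(0)
            self.offered[client] = addr
            return ("DHCPOFFER", addr)
        if msg == "DHCPREQUEST":
            # Step 4: commit the reservation and acknowledge.
            addr = self.offered.pop(client)
            self.bound[client] = addr
            return ("DHCPACK", addr)

server = DhcpServer(["172.16.1.10", "172.16.1.11"])
print(server.handle("DHCPDISCOVER", "client-a"))  # ('DHCPOFFER', '172.16.1.10')
print(server.handle("DHCPREQUEST", "client-a"))   # ('DHCPACK', '172.16.1.10')
```

Note how the offer only reserves the address; it is committed to the client only when the request arrives, mirroring the reservation behavior described later in this section.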
Figure 6-15  DHCP Operation (figure: the DHCP client broadcasts a Discover message; the DHCP server responds with an Offer message; the client responds with a Request message; the DHCP server seals the deal with an Acknowledgment message)
DHCP Relay
A DHCP relay agent is required to forward DHCP messages between clients and servers when they are on different broadcast domains (IP subnets). For example, the DHCP relay agent receives the DHCPDISCOVER message, which is sent as a broadcast, and forwards it to the DHCP server on another subnet.
A DHCP client might receive offers from multiple DHCP servers and can accept any one of the
offers; the client usually accepts the first offer it receives. An offer from the DHCP server is not a
guarantee that the IP address will be allocated to the client; however, the server usually reserves
the address until the client has had a chance to formally accept the address.
DHCP supports three possible address allocation mechanisms:
■
Manual: The network administrator assigns an IP address to a specific MAC address. DHCP
is used to dispatch the assigned address to the host.
■
Automatic: DHCP permanently assigns the IP address to a host.
■
Dynamic: DHCP assigns the IP address to a host for a limited time (called a lease) or until
the host explicitly releases the address. This mechanism supports automatic address reuse
when the host to which the address has been assigned no longer needs the address.
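The three mechanisms can be contrasted in a small pool sketch (a hypothetical class and names for illustration; real DHCP servers track far more state):

```python
import time

class AddressPool:
    def __init__(self, addresses, lease_seconds=3600):
        self.free = list(addresses)
        self.reservations = {}   # manual: MAC -> administrator-assigned IP
        self.leases = {}         # MAC -> (IP, expiry); expiry None = permanent
        self.lease_seconds = lease_seconds

    def reserve(self, mac, ip):
        """Manual allocation: the administrator binds an IP to a MAC."""
        self.reservations[mac] = ip

    def allocate(self, mac, permanent=False):
        if mac in self.reservations:                 # manual mechanism
            return self.reservations[mac]
        ip = self.free.pop(0)
        # Automatic: no expiry. Dynamic: a lease that eventually times out.
        expiry = None if permanent else time.time() + self.lease_seconds
        self.leases[mac] = (ip, expiry)
        return ip

pool = AddressPool(["172.16.1.20", "172.16.1.21"])
pool.reserve("00:00:0c:aa:bb:01", "172.16.1.5")
print(pool.allocate("00:00:0c:aa:bb:01"))                  # 172.16.1.5
print(pool.allocate("00:00:0c:aa:bb:02", permanent=True))  # 172.16.1.20
```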
Name Resolution
Names are used to identify different hosts and resources on the network and to provide user-friendly interaction with computers; a name is much easier to remember than an IP address. This
section covers the purpose of name resolution, provides information about different available
name resolution strategies, and discusses Domain Name System (DNS) name resolution.
Hosts (computers, servers, printers, and so forth) identify themselves to each other using various
naming schemes. Each computer on the network can have an assigned name to provide easier
communication between devices and among users. Because the IP network layer protocol uses IP
addresses to transport datagrams, a name that is used to identify a host must be mapped or resolved
into an IP address; this is known as name resolution. To select the desired name resolution method,
the following questions should be answered:
■
How many hosts require name resolution?
■
Are applications that depend on name resolution present?
■
Is the network isolated, or is it connected to the Internet?
■
If the network is isolated, how frequently are new hosts added, and how frequently do names
change?
Static Versus Dynamic Name Resolution
The process of resolving a hostname to an IP address can be either static or dynamic. Following
are the differences between these two methods:
■
Static: With static name-to-IP-address resolution, both the administrative overhead and the
configuration are very similar to those of a static address assignment strategy. The network
administrator manually defines name-to-IP-address resolutions by entering the name and IP
address pairs into the local database (HOSTS file) using either a graphical or text interface.
Manual entries create additional work for the administrator; they must be entered on every
host and are prone to errors and omissions.
■
Dynamic: The dynamic name-to-IP-address resolution is similar to the dynamic address assignment strategy. The administrator has to enter the name-to-IP-address resolutions only on a local DNS server rather than on every host. The DNS server then performs the name-to-IP-address resolution. Renumbering and renaming are easier with the dynamic name-to-IP-address resolution method.
When to Use Static or Dynamic Name Resolution
The selection of either a static or dynamic end-system name resolution method depends on the
following criteria:
■
The number of hosts: If there are more than 30 end systems, dynamic name resolution is
preferred. Static name resolution is manageable for fewer hosts.
■
Isolated network: If the network is isolated (it does not have any connections to the Internet)
and the number of hosts is small, static name resolution might be appropriate. The dynamic
method is also possible; the choice is an administrative decision.
■
Internet connectivity: When Internet connectivity is available for end users, static name
resolution is not an option, and dynamic name resolution using DNS is mandatory.
■
Frequent changes and adding of names: When dealing with frequent changes and adding
names to a network, dynamic name resolution is recommended.
■
Applications depending on name resolution: If applications that depend on name resolution
are used, dynamic name resolution is recommended.
Using DNS for Name Resolution
To resolve symbolic names to actual network addresses, applications use resolver or name resolver
programs, which are usually part of the host operating system. An application sends a query to a
name resolver that resolves the request with either the local database (HOSTS file) or the DNS
server.
When numerous hosts or names must be resolved to IP addresses, statically defined resolutions in
HOSTS files are unwieldy to maintain. To ease this process, DNS is used for name resolution.
DNS is a client/server mechanism used to access a distributed database providing address-to-name
resolution. A DNS server is special software that usually resides on dedicated hardware. DNS
servers are organized in a hierarchical structure. A DNS server can query other DNS servers to
retrieve partial resolutions for a certain name; for example, one DNS server could resolve
cisco.com, and another could resolve www.
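The delegation just described can be sketched as a walk down a chain of zone tables (a toy model with made-up server names; real DNS involves many record types, caching, and recursive and iterative query options):

```python
# Each "server" knows only its own zone and either answers or refers onward.
ZONES = {
    "root":         {"com": "com-server"},
    "com-server":   {"cisco.com": "cisco-server"},
    "cisco-server": {"www.cisco.com": "192.168.1.1"},
}

def resolve(name):
    server = "root"
    labels = name.split(".")
    # Walk from the top-level domain down to the full hostname.
    for i in range(len(labels) - 1, -1, -1):
        answer = ZONES[server].get(".".join(labels[i:]))
        if answer is None:
            continue
        if answer in ZONES:   # a referral to a more specific server
            server = answer
        else:                 # the final answer: an IP address
            return answer

print(resolve("www.cisco.com"))  # 192.168.1.1
```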
To enable DNS name resolution, the network administrator sets up the DNS server, enters
information about hostnames and corresponding IP addresses, and configures the hosts to use the
DNS server for name resolution.
A recommended practice is to use a DNS server for internal name resolution when there are more
than 30 hosts, services, or fully qualified domain names (FQDN) to resolve to IP addresses. An
external DNS server is required to provide access to hosts outside the organization.
NOTE An FQDN is a complete domain name—for a specific host on the Internet—that
contains enough information for it to be converted into a specific IP address. The FQDN consists
of a hostname and a domain name. For example, www.cisco.com is the FQDN on the web for
the Cisco web server. The host is www, the domain is cisco, and the top-level domain name is
com.
Figure 6-16 illustrates the process of resolving an IP address using a DNS server:
Step 1
A user wants to browse www.cisco.com. Because the host does not know
that site’s IP address, it queries the DNS server.
Step 2
The DNS server responds with the appropriate IP address for
www.cisco.com.
Step 3
The host establishes a connection to the appropriate IP address (the
www.cisco.com site).
NOTE RFC 2136, Dynamic Updates in the Domain Name System (DNS UPDATE), specifies
a technology that helps reduce the administrative overhead of maintaining address-to-name
mappings.
Figure 6-16  Name Resolution with DNS (figure: (1) the user application passes the question "What is the IP address of www.cisco.com?" to its name resolver, which queries the DNS server; (2) the DNS server answers that the IP address is 192.168.1.1; (3) the host establishes a connection across the Internet to www.cisco.com at 192.168.1.1)
NOTE Recall that the IP addresses shown in the examples in this book are private addresses.
In practice, public addresses would be used on the Internet.
DHCP and DNS Server Location in a Network
As illustrated in Figure 6-17, DHCP and DNS servers can be located at multiple places in the
network, depending on the service they support.
Figure 6-17  Example of Locating DHCP and DNS Servers in a Network (figure: in the Enterprise Campus, DHCP and internal DNS servers sit in the Server Farm; the Enterprise Data Center hosts internal and external DNS; external DNS servers are also located in the Enterprise Edge E-Commerce and Internet Connectivity modules and at the service provider premises; the Enterprise Branch is served by DHCP and internal DNS, and the Enterprise Teleworker uses external DNS)
For the Enterprise Campus, DHCP and internal DNS servers should be located in the Server Farm;
these servers should be redundant. For remote locations, Cisco routers can provide DHCP and
DNS at the Enterprise Edge. External DNS servers should be redundant—for example, at two
service provider facilities, or one at a service provider facility and one in a demilitarized zone at
the Enterprise Campus or remote data center.
Introduction to IPv6
IPv6 is a technology developed to overcome the limitations of the current standard, IPv4, which
allows end systems to communicate and forms the foundation of the Internet as we know it today.
This section on IPv6-specific design considerations provides an overview of IPv6 features and
addressing and explains the various IPv6 address types. The address assignment and name
resolution strategies for IPv6 are explored. The transition from IPv4 to IPv6 is discussed, and the
section concludes with a brief description of the IPv6 routing protocols.
NOTE RFC 2460, Internet Protocol, Version 6 (IPv6), defines the IPv6 standard.
Information on IPv6 features supported in specific Cisco IOS releases can be found in Cisco IOS
Software Release Specifics for IPv6 Features, at http://www.cisco.com/univercd/cc/td/doc/
product/software/ios123/123cgcr/ipv6_c/ftipv6s.htm.
IPv6 Features
The ability to scale networks for future demands requires a limitless supply of IP addresses and
improved mobility; IPv6 combines expanded addressing with a more efficient and feature-rich
header to meet these demands. IPv6 satisfies the increasingly complex requirements of
hierarchical addressing that IPv4 does not support.
The Cisco IOS supports IPv6 in Release 12.2(2)T and later. The main benefits of IPv6 include the
following:
■
Larger address space: IPv6 addresses are 128 bits, compared to IPv4’s 32 bits. This larger
addressing space allows more support for addressing hierarchy levels, a much greater number
of addressable nodes, and simpler autoconfiguration of addresses.
■
Globally unique IP addresses: Every node can have a unique global IPv6 address, which
eliminates the need for NAT.
■
Site multihoming: IPv6 allows hosts to have multiple IPv6 addresses and allows networks to
have multiple IPv6 prefixes. Consequently, sites can have connections to multiple ISPs
without breaking the global routing table.
■
Header format efficiency: A simplified header with a fixed header size makes processing
more efficient.
■
Improved privacy and security: IPsec is the IETF standard for IP network security, available
for both IPv4 and IPv6. Although the functions are essentially identical in both environments,
IPsec is mandatory in IPv6. IPv6 also has optional security headers.
■
Flow labeling capability: A new capability enables the labeling of packets belonging to
particular traffic flows for which the sender requests special handling, such as nondefault
quality of service (QoS) or real-time service.
■
Increased mobility and multicast capabilities: Mobile IPv6 allows an IPv6 node to change
its location on an IPv6 network and still maintain its existing connections. With Mobile IPv6,
the mobile node is always reachable through one permanent address. A connection is
established with a specific permanent address assigned to the mobile node, and the node
remains connected no matter how many times it changes locations and addresses.
IPv6 Address Format
Rather than using dotted-decimal format, IPv6 addresses are written as hexadecimal numbers with
colons between each set of four hexadecimal digits (which is 16 bits); we like to call this the
“coloned hex” format. The format is x:x:x:x:x:x:x:x, where x is a 16-bit hexadecimal field. A
sample address is as follows:
2035:0001:2BC5:0000:0000:087C:0000:000A
KEY POINT: Fortunately, you can shorten the written form of IPv6 addresses. Leading 0s within each set of four hexadecimal digits can be omitted, and a pair of colons (::) can be used, once within an address, to represent any number of successive 0s.
For example, the previous address can be shortened to the following:
2035:1:2BC5::87C:0:A
An all-0s address can be written as ::.
KEY POINT: A pair of colons (::) can be used only once within an IPv6 address. This is because an address parser identifies the number of missing 0s by separating the two parts and entering 0 until the 128 bits are complete. If two :: notations were to be placed in the address, there would be no way to identify the size of each block of 0s.
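Python's standard ipaddress module applies exactly these two shortening rules, which makes it convenient for checking a compression by hand:

```python
import ipaddress

addr = ipaddress.ip_address("2035:0001:2BC5:0000:0000:087C:0000:000A")
# Leading zeros are dropped, and the run of zero groups becomes "::".
print(addr.compressed)  # 2035:1:2bc5::87c:0:a

# The all-zeros address compresses all the way down to "::".
print(ipaddress.ip_address("0:0:0:0:0:0:0:0").compressed)  # ::
```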
Similar to how IPv4 subnet masks can be written as a prefix (for example, /24), IPv6 uses prefixes
to indicate the number of bits of network or subnet information.
IPv6 Packet Header
The IPv6 header has 40 octets, in contrast to the 20 octets in the IPv4 header. IPv6 has fewer fields,
and the header is 64-bit-aligned to enable fast, efficient, hardware-based processing. The IPv6
address fields are four times larger than in IPv4.
The IPv4 header contains 12 basic header fields, followed by an options field and a data portion
(which usually includes a transport layer segment). The basic IPv4 header has a fixed size of 20
octets; the variable-length options field increases the size of the total IPv4 header.
IPv6 contains fields similar to 7 of the 12 IPv4 basic header fields (5 plus the source and
destination address fields) but does not require the other fields. The IPv6 header contains the
following fields:
■
Version: A 4-bit field, the same as in IPv4. For IPv6, this field contains the number 6; for
IPv4, this field contains the number 4.
■
Traffic class: An 8-bit field similar to the type of service (ToS) field in IPv4. This field tags
the packet with a traffic class that it uses in differentiated services (DiffServ) QoS. These
functions are the same for IPv6 and IPv4.
■
Flow label: This 20-bit field is new in IPv6. It can be used by the source of the packet to tag
the packet as being part of a specific flow, allowing multilayer switches and routers to handle
traffic on a per-flow basis rather than per-packet, for faster packet-switching performance.
This field can also be used to provide QoS.
■
Payload length: This 16-bit field is similar to the IPv4 total length field.
■
Next header: The value of this 8-bit field determines the type of information that follows the
basic IPv6 header. It can be transport-layer information, such as Transmission Control
Protocol (TCP) or User Datagram Protocol (UDP), or it can be an extension header. The next
header field is similar to the protocol field of IPv4.
■
Hop limit: This 8-bit field specifies the maximum number of hops that an IPv6 packet can
traverse. Similar to the time to live (TTL) field in IPv4, each router decreases this field by 1.
Because there is no checksum in the IPv6 header, an IPv6 router can decrease the field without
recomputing the checksum; in IPv4 routers, the recomputation costs processing time. If this
field ever reaches 0, a message is sent back to the source of the packet, and the packet is
discarded.
■
Source address: This field has 16 octets (128 bits). It identifies the source of the packet.
■
Destination address: This field has 16 octets (128 bits). It identifies the destination of the
packet.
■
Extension headers: The extension headers, if any, and the data portion of the packet follow
the other eight fields. The number of extension headers is not fixed, so the total length of the
extension header chain is variable.
Notice that the IPv6 header does not have a header checksum field. Because link-layer
technologies perform checksum and error control and are considered relatively reliable, an IPv6
header checksum is considered redundant. Without the IPv6 header checksum, upper-layer
checksums, such as within UDP, are mandatory with IPv6.
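The fixed 40-octet layout described above can be assembled with Python's struct module (an illustrative sketch; the addresses used are arbitrary documentation prefixes, and no extension headers are built):

```python
import struct
import ipaddress

def build_ipv6_header(src, dst, payload_len, next_header,
                      hop_limit=64, traffic_class=0, flow_label=0):
    # First 32 bits: version (4) | traffic class (8) | flow label (20).
    first_word = (6 << 28) | (traffic_class << 20) | flow_label
    header = struct.pack("!IHBB", first_word, payload_len,
                         next_header, hop_limit)
    header += ipaddress.ip_address(src).packed  # 16-octet source address
    header += ipaddress.ip_address(dst).packed  # 16-octet destination address
    return header

hdr = build_ipv6_header("2001:db8::1", "2001:db8::2",
                        payload_len=20, next_header=6)  # 6 = TCP
print(len(hdr))     # 40 -- the fixed header size, with no checksum field
print(hdr[0] >> 4)  # 6  -- the version field
```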
IPv6 Address Types
This section covers the various IPv6 address types and their scopes.
IPv6 Address Scope Types
Similar to IPv4, a single source can address datagrams to either one or many destinations at the
same time in IPv6.
NOTE RFC 4291, IPv6 Addressing Architecture, defines the IPv6 addressing architecture.
Following are the types of IPv6 addresses:
■
Unicast (one-to-one): Similar to an IPv4 unicast address, an IPv6 unicast address is for a
single source to send data to a single destination. A packet sent to a unicast IPv6 address goes
to the interface identified by that address. The IPv6 unicast address space encompasses the
entire IPv6 address range, with the exception of the FF00::/8 range (addresses starting with
binary 1111 1111), which is used for multicast addresses. The “IPv6 Unicast Addresses”
section discusses the different types of IPv6 unicast addresses.
■
Anycast (one-to-nearest): An IPv6 anycast address is a new type of address that is assigned
to a set of interfaces on different devices; an anycast address identifies multiple interfaces. A
packet that is sent to an anycast address goes to the closest interface (as determined by the
routing protocol being used) identified by the anycast address. Therefore, all nodes with the
same anycast address should provide uniform service.
Anycast addresses are syntactically indistinguishable from global unicast addresses because
anycast addresses are allocated from the global unicast address space. Nodes to which the
anycast address is assigned must be explicitly configured to recognize the anycast address.
Anycast addresses must not be used as the source address of an IPv6 packet.
Examples of when anycast addresses could be used are load balancing, content delivery
services, and service location. For example, an anycast address could be assigned to a set of
replicated FTP servers. A user in China who wants to retrieve a file would be directed to the
Chinese server, whereas a user in Europe would be directed to the European server.
■
Multicast (one-to-many): Similar to IPv4 multicast, an IPv6 multicast address identifies a
set of interfaces (in a given scope), typically on different devices. A packet sent to a multicast
address is delivered to all interfaces identified by the multicast address (in a given scope).
IPv6 multicast addresses have a 4-bit scope identifier (ID) to specify how far the multicast
packet may travel.
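The 4-bit scope ID sits in the second nibble of a multicast address, immediately after the FF00::/8 prefix, so it can be extracted with a shift and mask. A small sketch using Python's standard ipaddress module; the scope names are the well-known values from RFC 4291:

```python
import ipaddress

# Well-known multicast scope values (RFC 4291)
SCOPES = {1: "interface-local", 2: "link-local", 4: "admin-local",
          5: "site-local", 8: "organization-local", 14: "global"}

def multicast_scope(addr: str) -> str:
    a = ipaddress.IPv6Address(addr)
    assert a.is_multicast                 # must start with binary 1111 1111
    scope_id = (int(a) >> 112) & 0xF      # second nibble of the 128-bit value
    return SCOPES.get(scope_id, "unassigned")

print(multicast_scope("FF02::9"))  # link-local (the all-RIP-routers address)
print(multicast_scope("FF05::2"))  # site-local
```

The same group ID with a different scope nibble is a different address, which is how a multicast packet's reach is bounded.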
Introduction to IPv6
KEY POINT: IPv6 has no concept of broadcast addresses; multicast addresses are used instead.
An IPv6 address is valid for a specific scope, which defines the types of applications the address
is suitable for.
KEY POINT: A single interface may be assigned multiple IPv6 addresses of any type (unicast, anycast, and multicast).
Interface Identifiers in IPv6 Addresses
In IPv6, a link is a network medium over which network nodes communicate using the link layer.
Interface IDs in IPv6 addresses are used to identify a unique interface on a link. They can also be
thought of as the “host portion” of an IPv6 address. Interface IDs are required to be unique on a
link and can also be unique over a broader scope. When the interface identifier is derived directly
from the data link layer address of the interface, the scope of that identifier is assumed to be
universal (global). Interface identifiers are always 64 bits and are dynamically created based on
the data link layer.
KEY POINT: For Ethernet, the interface ID used is based on the MAC address of the interface and is in an extended unique identifier 64-bit (EUI-64) format. The EUI-64 format interface ID is derived from the 48-bit link-layer MAC address by inserting the hexadecimal number FFFE between the upper 3 bytes (the organizationally unique identifier [OUI] field) and the lower 3 bytes (the vendor code or serial number field) of the link-layer address. The seventh bit in the high-order byte (the IEEE universal/local, or U/L, bit) is set to 1 to indicate that the 48-bit address is universally unique.
This process is illustrated in Figure 6-18.
Figure 6-18  EUI-64 Format IPv6 Interface Identifier
[Figure content: the 48-bit Ethernet MAC address 00:90:27:17:FC:0F becomes the modified EUI-64 interface ID 02:90:27:FF:FE:17:FC:0F. FF FE is inserted between the upper and lower 3 bytes, and the U/L bit in the first byte is set (U = 1: universally unique; 0: locally unique).]
The seventh bit in an IPv6 interface identifier is referred to as the Universal/Local (U/L) bit. This
bit identifies whether this interface identifier is locally unique on the link or whether it is
universally unique. When the interface identifier is created from an Ethernet MAC address, it is
assumed that the MAC address is universally unique and, therefore, that the interface identifier is
universally unique. The U/L bit is for future use by upper-layer protocols to uniquely identify a
connection, even in the context of a change in the leftmost part of the address. However, this
feature is not yet used. The eighth bit in an IPv6 interface identifier, also known as the “G” bit, is
the group/individual bit for managing groups.
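The EUI-64 derivation just described (insert FFFE in the middle, invert the U/L bit) can be sketched in a few lines of Python. The MAC address here is the one from Figure 6-18:

```python
def mac_to_modified_eui64(mac: str) -> str:
    """Build the 64-bit interface ID from a 48-bit MAC address: insert FFFE
    between the upper and lower 3 bytes, and flip the U/L bit (bit 7 of the
    first byte)."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                      # invert the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Group the 8 bytes into four 16-bit colon-separated hextets
    return ":".join(f"{eui64[i] << 8 | eui64[i + 1]:x}" for i in range(0, 8, 2))

print(mac_to_modified_eui64("00:90:27:17:FC:0F"))  # 290:27ff:fe17:fc0f
```

Note that for a universally administered MAC address (U/L bit 0), the inversion yields 1, matching the 02 in the first byte of Figure 6-18's result.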
IPv6 Unicast Addresses
Following are the different unicast addresses that IPv6 supports:
■
Global aggregatable address (also called global unicast address)
■
Link-local address
■
IPv4-compatible IPv6 address
Global aggregatable addresses and link-local addresses are discussed in the next two sections,
respectively. IPv4-compatible IPv6 addresses are described in the later “IPv4-to-IPv6 Transition
Strategies and Deployments” section.
NOTE Site-local unicast addresses are another type of IPv6 unicast address; however, the use
of site-local addresses was deprecated in September 2004 by RFC 3879, Deprecating Site Local
Addresses, and future systems must not implement any support for this type of address.
Site-local unicast addresses were similar to private addresses in IPv4 and were used to address
a site without having a global prefix. Site-local addresses used the prefix FEC0::/10 (binary 1111
1110 11) with a subnet identifier (a 16-bit field) and an interface identifier (a 64-bit field)
concatenated after the prefix. Site-local addresses were considered private addresses to be used
to restrict communication to a limited domain.
IPv6 routers must not advertise routes or forward packets that have site-local source or
destination addresses, outside the site.
KEY POINT: Every IPv6-enabled interface must contain at least one loopback (::1/128) and one link-local address. Optionally, an interface may have multiple unique local and global addresses.
Global Aggregatable Unicast Addresses
KEY POINT: IPv6 global aggregatable unicast addresses are equivalent to IPv4 unicast addresses.
The structure of global aggregatable unicast addresses enables summarization (aggregation) of
routing prefixes so that the number of routing table entries in the global routing table can be
reduced. Global unicast addresses used on links are aggregated upward, through organizations,
and then to intermediate-level ISPs, and eventually to top-level ISPs. A global unicast address
typically consists of a 48-bit global routing prefix, a 16-bit subnet ID, and a 64-bit interface ID
(typically in EUI-64 bit format), as illustrated in Figure 6-19.
Figure 6-19  IPv6 Global Aggregatable Unicast Address Structure
[Figure content: a 48-bit network portion beginning with binary 001 (3 bits), followed by a 16-bit subnet ID and a 64-bit interface ID (the host portion).]
The subnet ID can be used by individual organizations to create their own local addressing
hierarchy using subnets. This field allows an organization to use up to 65,536 individual subnets.
A fixed prefix of 2000::/3 (binary 001 in the top 3 bits) indicates a global aggregatable IPv6 address; this is
the current range of IPv6 global unicast addresses assigned by the Internet Assigned Numbers
Authority (IANA). Assignments from this block are registered in the IANA registry, which is
available at http://www.iana.org/assignments/ipv6-unicast-address-assignments.
The 64-bit Interface ID field identifies interfaces on a link and therefore must be unique on the
link.
NOTE RFC 3587, IPv6 Global Unicast Address Format, defines the global unicast address
format.
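The 48/16/64 split can be demonstrated by masking the three fields out of a sample address. The address below is an illustrative value built from the 2001:db8::/32 documentation range and the Figure 6-18 interface ID, not a real assignment:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:db8:acad:1:290:27ff:fe17:fc0f")
value = int(addr)

global_routing_prefix = value >> 80          # top 48 bits (assigned by the ISP)
subnet_id = (value >> 64) & 0xFFFF           # next 16 bits (local hierarchy)
interface_id = value & ((1 << 64) - 1)       # low 64 bits (EUI-64 format)

print(f"{global_routing_prefix:012x}")  # 20010db8acad
print(subnet_id)                        # 1
print(f"{interface_id:016x}")           # 029027fffe17fc0f
```

Because the 16-bit subnet ID is entirely under the organization's control, incrementing it (1, 2, 3, ...) enumerates up to 65,536 subnets under one /48.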
Link-Local Unicast Addresses
A link-local address is useful only in the context of the local link network; its scope limits its
relevance to only one link. A link-local address is an IPv6 unicast address that can be automatically
configured on any interface by using the link-local prefix FE80::/10 (1111 1110 10) and the 64-bit
interface identifier, as shown in Figure 6-20. Link-local addresses are used in the neighbor
discovery protocol and the dynamic address assignment process. Dynamic address assignment is
discussed in more detail in the next section.
Figure 6-20  IPv6 Link-Local Unicast Address Structure
[Figure content: within the 128-bit address, the 10-bit prefix 1111 1110 10 (FE80::/10) is followed by zeros and then the 64-bit interface ID.]
KEY POINT: A link-local unicast address connects devices on the same local network without requiring globally unique addresses.
Many routing protocols also use link-local addresses.
When communicating with a link-local address, the outgoing interface must be specified, because
the same FE80::/10 prefix is present on every interface.
An IPv6 router must not forward packets that have either link-local source or destination addresses
to other links.
IPv6 Address Assignment Strategies
As with IPv4, IPv6 allows two address assignment strategies: static and dynamic.
Static IPv6 Address Assignment
Static address assignment in IPv6 is the same as in IPv4—the administrator must enter the IPv6
address configuration manually on every device in the network.
Dynamic IPv6 Address Assignment
IPv6 dynamic address assignment strategies allow dynamic assignment of IPv6 addresses, as
follows:
■
Link-local address: The host configures its own link-local address autonomously, using the
link-local prefix FE80::/10 and a 64-bit identifier for the interface, in an EUI-64 format.
■
Stateless autoconfiguration: A router on the link advertises—either periodically or at the
host’s request—network information, such as the 64-bit prefix of the local network and its
willingness to function as a default router for the link. Hosts can automatically generate their
global IPv6 addresses by using the prefix in these router messages; the hosts do not need
manual configuration or the help of a device such as a DHCP server. For example, Figure
6-21 shows a host using the prefix advertised by the router as the top 64 bits of its address;
the remaining 64 bits contain the host’s 48-bit MAC address in an EUI-64 format.
Figure 6-21  IPv6 Stateless Autoconfiguration Allows a Host to Automatically Configure Its IPv6 Address
[Figure content: the router advertises the subnet prefix; the host forms its address as subnet prefix + MAC address (single-subnet scope, formed from the advertised prefix and the link-layer address).]
■
Stateful using DHCP for IPv6 (DHCPv6): DHCPv6 is an updated version of DHCP for
IPv4. DHCPv6 gives the network administrator more control than stateless autoconfiguration
and can be used to distribute other information, including the address of the DNS server.
DHCPv6 can also be used for automatic domain name registration of hosts using a dynamic
DNS server. DHCPv6 uses multicast addresses.
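Stateless autoconfiguration can be sketched as a bitwise OR of the advertised /64 prefix with the host's own interface ID. The prefix below is an assumed documentation-range value standing in for a real router advertisement, and the interface ID is the modified EUI-64 value for the MAC address used in Figure 6-18:

```python
import ipaddress

# /64 prefix learned from the router advertisement (assumed example value)
prefix = ipaddress.IPv6Network("2001:db8:acad:1::/64")

# Modified EUI-64 interface ID for MAC 00:90:27:17:FC:0F
interface_id = 0x029027FFFE17FC0F

# The host's global address: advertised prefix | its own interface ID
address = ipaddress.IPv6Address(int(prefix.network_address) | interface_id)
print(address)  # 2001:db8:acad:1:290:27ff:fe17:fc0f
```

No server keeps state about this assignment, which is precisely what distinguishes stateless autoconfiguration from DHCPv6.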
IPv6 Name Resolution
This section discusses IPv6 name resolution strategies and name resolution on a dual-stack (IPv4
and IPv6) host.
Static and Dynamic IPv6 Name Resolution
IPv6 and IPv4 name resolutions are similar. The following two name resolutions are available with
IPv6:
■
Static name resolution: Accomplished by manual entries in the host’s local configuration
files.
■
Dynamic name resolution: Accomplished using a DNS server that supports IPv6, usually
along with IPv4 support. As shown in Figure 6-22, an IPv6-aware application requests the
destination hostname’s IPv6 address from the DNS server using a request for an A6 record;
an A6 record is a new DNS feature that contains an address record for an IPv6 host. The task
of querying for the address is done with the name resolver, which is usually part of the
operating system. The network administrator must set up the appropriate DNS server with
IPv6 support and connect it to the IPv6 network with a valid IPv6 address. The hosts must
also have IPv6 addresses.
Figure 6-22  IPv6 Name Resolution
[Figure content: an IPv6 host sends the query "www.cisco.com = A6?" to a DNS server over IPv6 and receives the answer 3ffe:b00::1.]
IPv4- and IPv6-Aware Applications and Name Resolution
A dual-stack host has both IPv4 and IPv6 protocol stacks and has a new application program
interface (API) defined to support both IPv4 and IPv6 addresses and DNS requests. An application
can use both IPv4 and IPv6. An application can be converted to the new API while still using only
IPv4.
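On most operating systems this dual-stack API is getaddrinfo(), which returns IPv4 and/or IPv6 results from a single call. The sketch below passes numeric literals from the documentation ranges with AI_NUMERICHOST so that it runs without a DNS server; a real application would pass a hostname such as www.cisco.com and let the resolver query DNS:

```python
import socket

for host in ("192.0.2.1", "2001:db8::1"):  # documentation-range literals
    results = socket.getaddrinfo(host, 80, socket.AF_UNSPEC,
                                 socket.SOCK_STREAM,
                                 flags=socket.AI_NUMERICHOST)
    family, _, _, _, sockaddr = results[0]
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```

AF_UNSPEC is what makes the application protocol-agnostic: the same code path yields AF_INET or AF_INET6 results, and the application simply connects to whichever the resolver returns first.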
As shown in Figure 6-23, an IPv6- and IPv4-enabled application chooses which stack to use (the
typical default is IPv6) and asks the DNS server for the destination host’s address; in this example,
it requests the host’s IPv6 address. After receiving the response from the DNS server, the
application asks the source host to connect to the destination host using IPv6.
Figure 6-23  Dual-Stack Name Resolution
[Figure content: an IPv4- and IPv6-aware host queries the DNS server (10.1.1.1) for "www.cisco.com = A6?" over IPv4, receives the IPv6 address 3ffe:b00::1, and then connects to www.cisco.com over IPv6.]
NOTE Microsoft Windows XP and Windows Server 2003 fully support most aspects of IPv6
with the appropriate service packs installed; Windows Vista supports IPv6.
IPv4-to-IPv6 Transition Strategies and Deployments
IPv4-to-IPv6 migration does not happen automatically. The following sections first explore the
differences between IPv4 and IPv6 and then discuss possible transition strategies and
deployments.
Differences Between IPv4 and IPv6
Regardless of which protocol is used, the communication between IPv4 and IPv6 domains must
be transparent to end users. The major differences to consider between IPv4 and IPv6 include the
following:
■
IPv4 addresses are 32 bits long, whereas IPv6 addresses are 128 bits long.
■
An IPv6 packet header is different from an IPv4 packet header. The IPv6 header is longer and
simpler (new fields were added to the IPv6 header, and some old fields were removed).
■
IPv6 has no concept of broadcast addresses; instead, it uses multicast addresses.
■
Routing protocols must be changed to support native IPv6 routing.
IPv4-to-IPv6 Transition
The transition from IPv4 to IPv6 will take several years because of the high cost of upgrading
equipment. In the meantime, IPv4 and IPv6 must coexist. The following are three primary
mechanisms for the transition from IPv4 to IPv6:
■
Dual-stack: Both the IPv4 and the IPv6 stacks run on a system that can communicate with
both IPv6 and IPv4 devices.
■
Tunneling: Uses encapsulation of IPv6 packets to traverse IPv4 networks, and vice versa.
■
Translation: A mechanism that translates one protocol to the other to facilitate
communication between the two networks.
The following sections describe these mechanisms.
In addition, Cisco has designed the IPv6 on the Multiprotocol Label Switching (MPLS) Provider
Edge (PE) routers (6PE) feature, which supports smooth integration of IPv6 into MPLS networks.
Because the MPLS routers switch packets based on labels rather than address lookups, organizations
with an MPLS backbone can scale IPv6 traffic easily and do not need to make costly hardware
upgrades.
Dual-Stack Transition Mechanism
As shown in Figure 6-24, a dual-stack node enables both IPv4 and IPv6 stacks. Applications
communicate with both IPv4 and IPv6 stacks; the IP version choice is based on name lookup and
application preference. This is the most appropriate method for campus and access networks
during the transition period, and it is the preferred technique for transitioning to IPv6. A dual-stack
approach supports the maximum number of applications. Operating systems that support the IPv6
stack include FreeBSD, Linux, Sun Solaris, and Windows 2000, XP, and Vista.
Figure 6-24  A Dual-Stack Node Has Both IPv4 and IPv6 Stacks
[Figure content: an application runs over TCP and UDP on both the IPv4 and IPv6 stacks; the data link (Ethernet) layer selects the stack by frame protocol ID (0x0800 for IPv4, 0x86dd for IPv6).]
Tunneling Transition Mechanism
The purpose of tunneling is to encapsulate packets of one type in packets of another type. When
transitioning to IPv6, tunneling encapsulates IPv6 packets in IPv4 packets, as shown in Figure 6-25.
Figure 6-25  Tunneling IPv6 Packets Within IPv4 Packets
[Figure content: IPv6 hosts on two IPv6 networks communicate across an IPv4 network; the dual-stack routers (or a dual-stack host) at each tunnel endpoint prepend an IPv4 header to the IPv6 packet (IPv4 header | IPv6 header | IPv6 data) to carry IPv6 inside IPv4.]
By using overlay tunnels, isolated IPv6 networks can communicate without having to upgrade the
IPv4 infrastructure between them. Both routers and hosts can use tunneling. The following
different techniques are available for establishing a tunnel:
■
Manually configured: For a manually configured tunnel, the tunnel source and tunnel
destination are manually configured with static IPv4 and IPv6 addresses. Manual tunnels can
be configured between border routers or between a border router and a host.
■
Semi-automated: Semi-automation is achieved by using a tunnel broker that uses a web-based service to create a tunnel. A tunnel broker is a server on the IPv4 network that receives
tunnel requests from dual-stack clients, configures the tunnel on the tunnel server or router,
and associates the tunnel from the client to one of the tunnel servers or routers. A simpler
model combines the tunnel broker and server onto one device.
■
Automatic: Various automatic mechanisms accomplish tunneling, including the following:
— IPv4-compatible: The tunnel is constructed dynamically using an IPv4-compatible
IPv6 address (an IPv6 address that consists of 0s in the upper bits and an embedded
IPv4 address in the lower 32 bits). Because it does not scale, this mechanism is
appropriate only for testing.
NOTE The format of an IPv4-compatible IPv6 address is 0:0:0:0:0:0:A.B.C.D, or ::A.B.C.D,
where A.B.C.D is the IPv4 address in dotted-decimal notation. The entire 128-bit IPv4-compatible IPv6 address is used as a node’s IPv6 address, and the IPv4 address that is embedded
in the low-order 32 bits is used as the node’s IPv4 address. For example, the IPv4 address
192.168.30.1 would convert to the IPv4-compatible IPv6 address 0:0:0:0:0:0:192.168.30.1.
Other acceptable representations for this address are ::192.168.30.1 and ::C0A8:1E01.
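The equivalence of the notations in the note above can be checked with Python's standard ipaddress module, which accepts both the embedded dotted-decimal and the pure hexadecimal forms:

```python
import ipaddress

dotted = ipaddress.IPv6Address("::192.168.30.1")   # embedded dotted-decimal form
hexform = ipaddress.IPv6Address("::c0a8:1e01")     # pure hexadecimal form
print(dotted == hexform)                            # True: the same 128 bits

# The low-order 32 bits are exactly the embedded IPv4 address
embedded = ipaddress.IPv4Address(int(dotted) & 0xFFFFFFFF)
print(embedded)                                     # 192.168.30.1
```

The mask on the last line is how a node recovers its IPv4 identity from an IPv4-compatible IPv6 address.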
— IPv6-to-IPv4 (6-to-4): The 6-to-4 tunneling method automatically connects IPv6
islands through an IPv4 network. Each 6-to-4 edge router has an IPv6 address with
a /48 prefix that is the concatenation of 2002::/16 and the IPv4 address of the edge
router; 2002::/16 is a specially assigned address range for the purpose of 6-to-4. The
edge routers automatically build the tunnel using the IPv4 addresses embedded in
the IPv6 addresses. For example, if the IPv4 address of an edge router is
192.168.99.1, the prefix of its IPv6 address is 2002:C0A8:6301::/48 because
0xC0A86301 is the hexadecimal representation of 192.168.99.1.
When an edge router receives an IPv6 packet with a destination address in the
range of 2002::/16, it determines from its routing table that the packet must
traverse the tunnel. The router extracts the IPv4 address embedded in the third
to sixth octets, inclusive, in the IPv6 next-hop address. This IPv4 address is the
IPv4 address of the 6-to-4 router at the destination site—the router at the other
end of the tunnel. The router encapsulates the IPv6 packet in an IPv4 packet
with the destination edge router’s extracted IPv4 address.
The packet passes through the IPv4 network. The destination edge router
decapsulates the IPv6 packet from the received IPv4 packet and forwards the
IPv6 packet to its final destination. A 6-to-4 relay router, which offers traffic
forwarding to the IPv6 Internet, is required for reaching a native IPv6 Internet.
— 6over4: A router connected to a native IPv6 network and with a 6over4-enabled
interface can be used to forward IPv6 traffic between 6over4 hosts and native IPv6
hosts. IPv6 multicast addresses are mapped into the IPv4 multicast addresses. The
IPv4 network becomes a virtual Ethernet for the IPv6 network; to achieve this, an
IPv4 multicast-enabled network is required.
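The 6-to-4 prefix computation described above (edge-router IPv4 address 192.168.99.1 yielding 2002:C0A8:6301::/48) is a straightforward bit concatenation, sketched here in Python:

```python
import ipaddress

def sixtofour_prefix(router_ipv4: str) -> str:
    """Concatenate 2002::/16 with the router's 32-bit IPv4 address -> /48."""
    v4 = int(ipaddress.IPv4Address(router_ipv4))
    return f"2002:{v4 >> 16:x}:{v4 & 0xFFFF:x}::/48"

def embedded_ipv4(addr6: str) -> str:
    """Recover the tunnel endpoint's IPv4 address from octets 3-6 of a
    2002::/16 destination, as the encapsulating edge router does."""
    v6 = int(ipaddress.IPv6Address(addr6))
    return str(ipaddress.IPv4Address((v6 >> 80) & 0xFFFFFFFF))

print(sixtofour_prefix("192.168.99.1"))    # 2002:c0a8:6301::/48
print(embedded_ipv4("2002:c0a8:6301::1"))  # 192.168.99.1
```

The second function mirrors what an edge router does on the forwarding path: extract the embedded IPv4 address and use it as the IPv4 destination of the encapsulating packet.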
Translation Transition Mechanism
Dual-stack and tunneling techniques manage the interconnection of IPv6 domains. For legacy
equipment that will not be upgraded to IPv6 and for some deployment scenarios, techniques are
available for connecting IPv4-only nodes to IPv6-only nodes, using translation, an extension of
NAT techniques.
As shown in Figure 6-26, an IPv6 node behind a translation device has full connectivity to other
IPv6 nodes and uses NAT functionality to communicate with IPv4 devices.
Figure 6-26  Translation Mechanism
[Figure content: an IPv6 host (2001:0420:1987:0:2E0:B0FF:FE6A:412C) on an IPv6-only network communicates through a translation point with an IPv4 host (172.16.1.1) on an IPv4-only network.]
Translation techniques are available for translating IPv4 addresses to IPv6 addresses and vice
versa. Similar to current NAT devices, translation is done at either the transport layer or the
network layer. NAT-Protocol Translation (NAT-PT) is the main translation technique; the Dual-Stack Transition Mechanism (DSTM) might also be available.
The NAT-PT translation mechanism translates at the network layer between IPv4 and IPv6
addresses and allows native IPv6 hosts and applications to communicate with native IPv4 hosts
and applications. An application-level gateway (ALG) translates between the IPv4 and IPv6 DNS
requests and responses. NAT-PT is defined in RFC 2766, Network Address Translation-Protocol
Translation (NAT-PT).
NOTE ALGs use a dual-stack approach and enable a host in one domain to send data to
another host in the other domain. This method requires that all application servers be converted
to IPv6.
The DSTM translation mechanism may be used for dual-stack hosts in an IPv6 domain that have
not yet had an IPv4 address assigned to the IPv4 side but that must communicate with IPv4
systems or allow IPv4 applications to run on top of their IPv6 protocol stack. This mechanism
requires a dedicated server that dynamically provides a temporary global IPv4 address for the
duration of the communication (using DHCPv6) and uses dynamic tunnels to carry the IPv4 traffic
within an IPv6 packet through the IPv6 domain.
IPv6 Routing Protocols
The routing protocols available in IPv6 include interior gateway protocols (IGP) for use within an
autonomous system and exterior gateway protocols (EGP) for use between autonomous systems.
As with IPv4 CIDR, IPv6 uses longest-prefix-match routing. Updates to the existing IPv4
routing protocols were necessary for handling longer IPv6 addresses and different header
structures. Currently, the following updated routing protocols or draft proposals are available:
■
IGPs:
— RIP new generation (RIPng)
— EIGRP for IPv6
— OSPF version 3 (OSPFv3)
— Integrated IS-IS version 6 (IS-ISv6)
■
EGP: Multiprotocol extensions to BGP version 4 (BGP4+)
RIPng
RIPng is a distance-vector protocol with a limit of 15 hops that uses split-horizon and poison
reverse to prevent routing loops. RIPng features include the following:
■
RIPng is based on the IPv4 RIPv2 and is similar to RIPv2.
■
RIPng uses an IPv6 prefix and a next-hop IPv6 address.
■
RIPng uses the multicast address FF02::9, the all-RIP-routers multicast address, as the
destination address for RIP updates.
■
RIPng uses IPv6 for transport.
■
RIPng uses link-local addresses as source addresses.
■
RIPng updates are sent on UDP port 521.
NOTE RIPng is defined in RFC 2080, RIPng for IPv6.
EIGRP for IPv6
EIGRP for IPv6 is available in Cisco IOS Release 12.4(6)T and later. EIGRP for IPv4 and EIGRP
for IPv6 are configured and managed separately; however, the configuration and operation of
EIGRP for IPv4 and IPv6 is similar. EIGRP for IPv6 features include the following:
■
EIGRP for IPv6 is configured directly on the interfaces over which it runs.
■
EIGRP for IPv6 can be configured without the use of a global IPv6 address.
■
No network commands are used when configuring EIGRP for IPv6.
■
EIGRP for IPv6 routes IPv6 prefixes.
NOTE EIGRP IPv6 is not currently supported on the Cisco 7600 routers or Catalyst 6500
switches.
For more information on this protocol, refer to “Implementing EIGRP for IPv6,” available at
http://www.cisco.com/.
OSPFv3
OSPFv3 is a new OSPF implementation for IPv6; it has the following features:
■
OSPFv3 is similar to OSPF version 2 (OSPFv2) for IPv4; it uses the same mechanisms as
OSPFv2, but the internals of the protocols are different.
■
OSPFv3 carries IPv6 addresses.
■
OSPFv3 uses link-local unicast addresses as source addresses.
■
OSPFv3 uses IPv6 for transport.
NOTE OSPFv3 is defined in RFC 2740, OSPF for IPv6.
Integrated IS-IS Version 6
The large-address support in Integrated IS-IS readily accommodates the IPv6 address family. IS-ISv6 is the
same as IS-IS for IPv4, with the following extensions added for IPv6:
■
Two new type-length-values (TLV):
— IPv6 Reachability
— IPv6 Interface Address
■
New protocol identifier
BGP4+
Multiprotocol extensions for BGP4 enable other protocols to be routed besides IPv4, including
IPv6. Additional IPv6-specific extensions incorporated into BGP4+ include the definition of a new
identifier for the IPv6 address family.
NOTE RFC 4760, Multiprotocol Extensions for BGP-4, defines multiprotocol extensions to
BGP. RFC 2545, Use of BGP-4 Multiprotocol Extensions for IPv6 Inter-Domain Routing,
defines BGP4+ for IPv6.
Summary
In this chapter, you learned about IPv4 and IPv6 addressing. The following topics were explored:
■
Private and public IP addresses, and when to use each
■
Determining the network size, including the number and type of locations and the number and
type of devices at each location
■
Hierarchical addressing, route summarization, and the role of classful and classless routing
protocols and fixed-length and variable-length subnet masks
■
Static and dynamic (DHCP) address assignment
■
Static and dynamic (DNS) name resolution
■
Features of IPv6, including its 128-bit addresses
■
Types of IPv6 addresses: unicast (one-to-one), anycast (one-to-nearest), and multicast (one-to-many)
■
Types of IPv6 unicast addresses: global aggregatable, link-local, and IPv4-compatible
■
Types of IPv6 address assignment: static or dynamic, which includes using link-local
addresses, stateless autoconfiguration, and stateful using DHCPv6
■
Types of IPv6 name resolution: static or dynamic using DNS servers that have IPv6 protocol
stack support
■
IPv4-to-IPv6 transition strategies, including dual-stack use, tunneling mechanisms, and
translation mechanisms
■
IPv6 routing protocols, including RIPng, EIGRP for IPv6, OSPFv3, IS-ISv6, and BGP4+
Case Study: ACMC Hospital IP Addressing Design
References
For additional information, refer to the following resources:
■ Comer, Douglas E. and David L. Stevens. Internetworking with TCP/IP Volume 1: Principles, Protocols, and Architecture, Fifth Edition. Englewood Cliffs, New Jersey: Prentice-Hall, 2005.
■ Designing Large-Scale IP Internetworks, http://www.cisco.com/univercd/cc/td/doc/cisintwk/idg4/nd2003.htm
■ Subnetting an IP Address Space, http://www.cisco.com/univercd/cc/td/doc/cisintwk/idg4/nd20a.htm
■ DHCP, http://www.cisco.com/univercd/cc/td/doc/product/software/ios124/124cg/hiad_c/ch10/index.htm
■ DNS Server Support for NS Records, http://www.cisco.com/en/US/products/ps6350/products_configuration_guide_chapter09186a008045597e.html
■ Cisco IOS IPv6 Introduction, http://www.cisco.com/en/US/products/ps6553/products_ios_technology_home.html
■ Cisco IP Version 6 Solutions, http://www.cisco.com/univercd/cc/td/doc/cisintwk/intsolns/ipv6_sol/index.htm
■ Cisco IPv6 Solutions, http://www.cisco.com/en/US/tech/tk872/technologies_white_paper09186a00802219bc.shtml
■ IPv6 Address Space, http://www.iana.org/assignments/ipv6-address-space
Case Study: ACMC Hospital IP Addressing Design
This case study is a continuation of the ACMC Hospital case study introduced in Chapter 2,
“Applying a Methodology to Network Design.”
Case Study General Instructions
Use the scenarios, information, and parameters provided at each task of the ongoing case study. If
you encounter ambiguities, make reasonable assumptions and proceed. For all tasks, use the initial
customer scenario and build on the solutions provided thus far. You can use any and all
documentation, books, white papers, and so on.
In each step, you act as a network design consultant. Make creative proposals to accomplish the
customer’s business needs. Justify your ideas when they differ from the provided solutions. Use
any design strategies you feel are appropriate. The final goal of each case study is a paper solution.
Appendix A, “Answers to Review Questions and Case Studies,” provides a solution for each step
based on assumptions made. There is no claim that the provided solution is the best or only
solution. Your solution might be more appropriate for the assumptions you made. The provided
solution helps you understand the author’s reasoning and allows you to compare and contrast your
solution.
In this case study, you create an IP addressing design for the ACMC hospital network. Table 6-4
is a review of the switch port counts by location, as derived in the case study for Chapter 4,
“Designing Basic Campus and Data Center Networks.”
Table 6-4  Port Counts by Location

Location                      Port Counts   Port Counts with Spares   Comments
Main building 1, per floor    75            150                       Six floors
Main building server farm     70            140                       Servers will connect with dual network interface cards; this number allows for planned migration of all servers to the server farm
Main building 2, per floor    75            150                       Six floors
Children’s Place, per floor   60            120                       Three floors
Buildings A–D                 10 each       20 each
Buildings E–J                 20 each       40 each
Buildings K–L                 40 each       80 each
Figure 6-27 reviews the planned campus and WAN infrastructure, as determined in the previous
case studies.
Figure 6-27  ACMC Planned Campus and WAN Design
[Figure content: a campus with core, distribution, and access layers. Main Building 1 has 6 × 4 access switches, 5 smaller buildings, and 40 + 30 server switches; Main Building 2 has 7 × 4 access switches and 5 smaller buildings; Children’s Place has 3 × 3 access switches and 2 smaller buildings. The WAN uses Frame Relay or leased lines to the remote clinic, with IPsec over the Internet as backup.]
Complete the following steps:
Step 1  Propose a suitable IP addressing plan that takes advantage of good summarization techniques for the ACMC network, including the campus, WAN and backup WAN links, and the remote clinics.
Step 2  Propose possible methods for IP address assignment.
Review Questions
Answer the following questions, and then refer to Appendix A for the answers.
1. Which of the following IPv4 addresses cannot be used in public networks?
a. 172.167.20.1/24
b. 192.168.1.200/28
c. 172.30.100.33/24
d. 172.32.1.1/16
2. In what situation would both private and public IPv4 addresses be required?
3. For the address 172.17.7.245/28:
■ What is the mask?
■ What class is the address?
■ What is the host part?
■ What is the network part?
■ How many hosts can reside on this subnet?
4. What information must be collected to determine the size of the network?
5. Approximately how much reserve in the number of network device addresses should be included for future growth purposes?
6. What type of routing protocol can support VLSM?
7. Assume that a router has the following subnets behind it:
■ 10.5.16.0/24
■ 10.5.17.0/24
■ 10.5.18.0/24
■ 10.5.19.0/24
What summary route could the router advertise to represent these four subnets?
8. What are some disadvantages of a poorly designed IP addressing scheme?
9. What are some advantages of a hierarchical IP addressing scheme?
10. What is the difference between classless and classful routing protocols?
11. What are the advantages of using DHCP versus static address assignment?
12. What are the three DHCP address allocation mechanisms?
13. What is the advantage of using dynamic name resolution versus static name resolution?
14. Describe the process used when DNS resolves a URL, such as www.cisco.com.
15. How many bits are in an IPv6 address?
16. How long is the IPv6 packet header?
17. Which are valid ways of writing the following IPv6 address: 2035:0000:134B:0000:0000:088C:0001:004B?
a. 2035::134B::088C:1:004B
b. 2035::134B::88C:1:4B
c. 2035:0:134B::088C:1:004B
d. 2035::134B:0:0:88C:1:4B
e. 2035:0:134B:88C:1:4B
18. When a packet is sent to an IPv6 anycast address, where does it go?
19. One-to-many IPv6 addresses are called ________________.
20. True or false: Packets with link-local IPv6 source or destination addresses must not be forwarded to the Internet by a router.
21. How many bits are used for the interface ID in an IPv6 unicast address?
22. What IPv6 prefix is used by devices on the same network to communicate?
23. What address assignment strategies are available in IPv6?
24. How does IPv6 stateless autoconfiguration work?
25. What feature allows DNS to support IPv6?
26. Can a host support IPv4 and IPv6 simultaneously?
27. What are three mechanisms for transitioning from IPv4 to IPv6?
28. Describe how 6-to-4 tunneling works.
29. Which IPv6 routing protocols are supported on Cisco routers?
30. What multicast address is used for RIPng?
427
This chapter discusses IP routing
protocols and contains the following
sections:
■ Routing Protocol Features
■ Routing Protocols for the Enterprise
■ Routing Protocol Deployment
■ Summary
■ References
■ Case Study: ACMC Hospital Routing Protocol Design
■ Review Questions
CHAPTER 7

Selecting Routing Protocols for the Network
This chapter describes considerations for selecting the most appropriate network routing
protocol. First, routing protocol features are discussed, followed by a description of various
routing protocols appropriate for enterprise use. The chapter discusses why certain protocols are
suitable for specific modules in the Enterprise Architecture. It concludes with a description of
some advanced routing protocol deployment features, including redistribution, filtering, and
summarization.
NOTE Chapter 1, “Network Fundamentals Review,” includes introductory information
about routers and routing protocols.
For more details about IP routing protocols, see Authorized Self-Study Guide: Building
Scalable Cisco Internetworks (BSCI), Third Edition, by Diane Teare and Catherine Paquet,
Cisco Press, 2006.
NOTE In this chapter, the term IP refers to IP version 4 (IPv4).
Routing Protocol Features
There are many ways to characterize routing protocols, including the following:
■ Static versus dynamic routing
■ Interior versus exterior routing protocols
■ Distance vector versus link-state versus hybrid protocols
■ Routing protocol metrics
■ Routing protocol convergence
■ Flat versus hierarchical routing protocols
The following sections discuss these methods in detail.
Static Versus Dynamic Routing
Whereas static routes are typically configured manually, routing protocols generate dynamic
routes. Each method has advantages and disadvantages in specific network scenarios, as discussed
in the following sections.
Static Routing
The term static routing denotes the use of manually configured or injected static routes for traffic
forwarding purposes. Using a static route might be appropriate in the following circumstances:
■ When it is undesirable to have dynamic routing updates forwarded across slow bandwidth links, such as a dialup link
■ When the administrator needs total control over the routes used by the router
■ When a backup to a dynamically learned route is necessary
■ When it is necessary to reach a network that is accessible by only one path (a stub network)
Configuring and maintaining static routes is time-consuming. Properly implementing static routes
requires complete knowledge of the entire network.
Figure 7-1 illustrates a stub network scenario in which the use of static routes is favored over a
dynamic routing protocol. The right side of Figure 7-1 shows a stub network with a single entry/
exit point over the S0 interface of Router A. On the stub network router (Router A), a static default
route is configured so that the S0 link forwards all traffic toward destinations outside the stub
network. On Router B, a static route is installed toward the stub network and then is redistributed
into the routing protocol so that reachability information for the stub network is available
throughout the rest of the network.
Figure 7-1 Use Static Routes with a Stub Network
[Figure: Router B, in the network running a routing protocol, connects over a serial (S0) link to Router A in the stub network 192.168.1.0. Router B has a static route to 192.168.1.0; Router A has a static default route. The serial link uses addresses 172.16.2.1 and 172.16.2.2.]
NOTE Static routes are unidirectional. A static route configured in one direction via one router
must have a corresponding static route configured on the adjacent router, in the opposite
direction, for the return path. Figure 7-1 includes these two routes.
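These paired routes can be sketched in Cisco IOS configuration. The next-hop addresses and the OSPF process used for redistribution are illustrative assumptions based on Figure 7-1, not configuration given in the text:

```
! Router A (stub network router) -- default route toward Router B
! (assumes Router B's serial address is 172.16.2.1)
ip route 0.0.0.0 0.0.0.0 172.16.2.1

! Router B -- static route to the stub network via Router A
! (assumes Router A's serial address is 172.16.2.2), redistributed
! so the rest of the network learns reachability to the stub
ip route 192.168.1.0 255.255.255.0 172.16.2.2
router ospf 1
 redistribute static subnets
```

With this in place, neither router needs to run a routing protocol across the serial link itself.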
By using static and default static routes in this scenario, no traffic from a dynamic routing protocol
is present on the serial link or in the stub network. In addition, the processor and memory
requirements for both routers are lower; in the stub network, a low-end router would suffice. Static
routes are therefore appropriate in situations such as with stub networks, hub-and-spoke
connections (also called star connections), and dialup environments.
Dynamic Routing
Dynamic routing allows the network to adjust to changes in the topology automatically, without
administrator involvement. A static route cannot dynamically respond to changes in the network.
If a link fails, the static route is no longer valid if it is configured to use that failed link, so a new
static route must be configured. If a new router or new link is added, that information must also be
configured on every router in the network. In a very large or unstable network, these changes can
lead to considerable work for network administrators. It can also take a long time for every router
in the network to receive the correct information. In situations such as these, it might be better to
have the routers receive information about networks and links from each other using a dynamic
routing protocol. Dynamic routing protocols must do the following:
■ Find sources from which routing information can be received (usually neighboring routers)
■ Select the best paths toward all reachable destinations, based on received information
■ Maintain this routing information
■ Have a means of verifying routing information (periodic updates or refreshes)
When using a dynamic routing protocol, the administrator configures the routing protocol on each
router. The routers then exchange information about the reachable networks and the state of each
network. Routers exchange information only with other routers running the same routing protocol.
When the network topology changes, the new information is dynamically propagated throughout
the network, and each router updates its routing table to reflect the changes.
Interior Versus Exterior Routing Protocols
An autonomous system (AS), also known as a domain, is a collection of routers that are under a
common administration, such as a company’s internal network or an Internet service provider’s
(ISP’s) network.
KEY POINT Because the Internet is based on the AS concept, two types of routing protocols are required:
■ Interior gateway protocols (IGP) are intra-AS (inside an AS) routing protocols. Examples of IGPs include Routing Information Protocol (RIP) version 1 (RIPv1), RIP version 2 (RIPv2), Open Shortest Path First (OSPF), Integrated Intermediate System-to-Intermediate System (IS-IS), and Enhanced Interior Gateway Routing Protocol (EIGRP).
■ Exterior gateway protocols (EGP) are inter-AS (between autonomous systems) routing protocols. Border Gateway Protocol (BGP) is the only EGP in wide use on the Internet. BGP version 4 (BGP-4) is the accepted version of BGP on the Internet. It is discussed in the “Border Gateway Protocol” section.
Different types of protocols are required for the following reasons:
■ Inter-AS connections require more options for manual selection of routing characteristics. EGPs should be able to implement various policies.
■ The speed of convergence (distribution of routing information) and finding the best path to the destination are crucial for intra-AS routing protocols.
Therefore, EGP routing protocol metrics (used to measure paths to a destination) include more
parameters to allow the administrator to influence the selection of certain routing paths. EGPs are
slower to converge and more complex to configure. IGPs use less-complicated metrics to ease
configuration and speed up the decisions about best routing paths for faster convergence. The
“Routing Protocol Metrics” section later in this chapter defines and explains routing protocol
metrics.
IGP and EGP Example
Figure 7-2 shows three interconnected autonomous systems (domains). Each AS uses an IGP for
intra-AS (intra-domain) routing.
Figure 7-2 Interior Protocols Are Used Inside and Exterior Protocols Are Used Between Autonomous Systems
[Figure: Enterprise AS 65000 multihomes via BGP (an EGP) to two ISPs, AS 65100 and AS 65200; an IGP such as RIP or OSPF runs inside each AS.]
The autonomous systems require some form of interdomain routing to communicate with each
other. Static routes are used in simple cases; typically, an EGP is used.
BGP-4 is the dominant EGP currently in use; BGP-4 and its extensions are the only acceptable
version of BGP available for use on the public Internet.
Multihoming is when an AS has more than one connection to the Internet (for redundancy or to
increase performance). BGP is particularly useful when an AS multihomes to the Internet via
multiple ISPs, as illustrated in Figure 7-2. To comply with the contractual requirements from
specific ISPs, an administrator uses BGP to apply specific policies—for example, to define traffic
exit points, return traffic paths, and levels of quality of service (QoS).
Distance Vector Versus Link-State Versus Hybrid Protocols
There are two main types of routing protocols:
■ Distance vector protocol: In a distance vector protocol, routing decisions are made on a hop-by-hop basis. Each router relies on its neighbor routers to make the correct routing decisions. The router passes only the results of this decision (its routing table) to its neighbors. Distance vector protocols are typically slower to converge and do not scale well; however, they are easy to implement and maintain. Examples of distance vector protocols include RIPv1, RIPv2, and Interior Gateway Routing Protocol (IGRP).

NOTE Although they are all distance vector protocols, RIPv1 uses broadcast packets to advertise routes, whereas RIPv2 uses multicast packets.
NOTE IGRP is no longer supported as of Cisco IOS Release 12.3.
NOTE A network is converged when routing tables on all routers in the network are
synchronized and contain a route to all destination networks. Convergence is discussed in detail
in the “Routing Protocol Convergence” section later in this chapter.
■ Link-state protocol: Each router floods information about itself (its link states) either to all other routers in the network or to a part of the network (area). Each router makes its own routing decision based on all received information and using the shortest path first (SPF) algorithm (also called the Dijkstra algorithm), which calculates the shortest path to any destination. Link-state protocols are fast to converge, have less routing traffic overhead, and scale well. However, because of their complexity, link-state protocols are more difficult to implement and maintain. The IP link-state protocols are OSPF and Integrated IS-IS.
NOTE In the name link-state, link refers to the interface, and state refers to the link’s
characteristics, such as whether it is up or down.
A third type of protocol also exists: the hybrid interior gateway protocol, which is the Cisco
EIGRP. EIGRP has characteristics of both distance vector and link-state protocols; it combines
distance vector behavior with some link-state characteristics and some proprietary features.
EIGRP is a fast-converging and scalable routing protocol.
NOTE Cisco uses a variety of terms to characterize EIGRP, including hybrid, balanced
hybrid, and advanced distance vector routing protocol.
Routers running link-state and hybrid protocols use multicast packets to communicate with each
other.
KEY POINT When a network is using a distance vector routing protocol, all the routers periodically send their routing tables, or a portion of their tables, to only their neighboring routers.
In contrast, when a network is using a link-state routing protocol, each of the routers sends the state of its own interfaces (its links) to all other routers, or to all routers in a part of the network known as an area, only when there is a change.
Routers running a hybrid protocol send changed information only when there is a change (similar to link-state protocols), but only to neighboring routers (similar to distance vector protocols).
Table 7-1 summarizes the IP routing protocol types.
Table 7-1 IP Routing Protocols

Category           Routing Protocol
Distance vector    RIPv1, RIPv2, IGRP
Link-state         OSPF, Integrated IS-IS
Hybrid             EIGRP
Distance Vector Example
A distance vector router’s understanding of the network is based on its neighbor’s perspective of
the topology; consequently, the distance vector approach is sometimes referred to as routing by
rumor. Routers running traditional distance vector protocols periodically send their complete
routing tables to all connected neighbors. Convergence might be slow because triggered updates
are not typically used (RIPv2 is an exception) and loop detection timers are long. In large
networks, running a distance vector protocol might cause routing tables to become enormous and
result in a lot of traffic on the links.
NOTE A distance vector routing protocol’s routing-by-rumor behavior and periodic updates
might result in inconsistent routing information on routers within a network, which in turn might
result in routing loops. Loop-avoidance mechanisms (including hold-down timers, route
poisoning, poison reverse, and split horizon) are incorporated into modern distance vector
protocols to prevent routing loops; however, these mechanisms result in slower convergence
times compared to link-state or hybrid protocols.
NOTE Triggered updates (also called flash updates or gratuitous updates) are sent only when
a change occurs (the link goes down or comes up or link parameters that affect routing, such as
bandwidth, change).
Although, as stated, most traditional distance vector protocols do not send triggered updates, the
Cisco implementations of all IP distance vector protocols do send triggered updates.
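The routing-by-rumor behavior described above can be illustrated with a toy Bellman-Ford-style exchange, in which each router repeatedly adopts the best hop count a neighbor advertises. This is a teaching sketch with a hypothetical four-router topology and a pure hop-count metric, not a model of any protocol's actual packets or timers:

```python
# Toy distance vector convergence: each router knows only its neighbors
# and repeatedly adopts any better (neighbor's distance + 1 hop) route
# it hears, until no router's table changes.
NEIGHBORS = {  # hypothetical topology: A - B - C - D in a line
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C"],
}

def converge(neighbors, origin):
    """Return each router's hop count to the network behind `origin`."""
    dist = {r: (0 if r == origin else float("inf")) for r in neighbors}
    changed = True
    while changed:  # one pass == one round of neighbors exchanging tables
        changed = False
        for router, peers in neighbors.items():
            best = min(dist[p] + 1 for p in peers)  # best advertised path
            if best < dist[router]:
                dist[router] = best
                changed = True
    return dist

print(converge(NEIGHBORS, "A"))  # {'A': 0, 'B': 1, 'C': 2, 'D': 3}
```

Because each router sees only its neighbors' results, information about a distant change ripples outward one exchange at a time, which is why real distance vector protocols converge slowly.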
Figure 7-3 shows a sample network that runs a distance vector protocol. In this network, the
routing updates are periodic and include the entire routing table.
Figure 7-3 Distance Vector Routing Periodically Sends the Entire Routing Table
[Figure: Routers A, B, C, and D, each with its own routing table. Router B periodically sends all its routes to its neighbors; incoming routes from A and D are merged into Routing Table B and passed on in the next periodic update.]
RIPv2, which is a standardized protocol developed from the RIPv1 protocol, is an example of a
distance vector protocol. The characteristics of RIPv2 include the following:
■ The hop count is used as the metric for path selection.
■ The maximum allowable hop count is 15.
■ By default, routing updates are sent every 30 seconds (RIPv1 uses broadcast, and RIPv2 uses multicast).
■ RIPv2 supports variable-length subnet masking (VLSM); RIPv1 does not. Chapter 6, “Designing IP Addressing in the Network,” describes VLSM.
Link-State Example
Both OSPF and Integrated IS-IS use the Hello protocol for establishing neighbor relationships.
Those relationships are stored in a neighbor table (also called an adjacencies database). Each
router learns a complete network topology from information shared through these neighbor
relationships. That topology is stored in the router’s link-state database (LSDB), also called the
topology table or topology database. Each router uses this topology and the SPF algorithm to
create a shortest-path tree for all reachable destinations. Each router selects the best routes from
its SPF tree and places them in its routing table (also called the forwarding database).
Link-State Routing Analogy
You can think of the LSDB as being like a map in a shopping mall. Every map in the mall is the
same, just as the LSDB is the same in all routers within an area. The one difference between all
the maps in a shopping mall is the “you are here” dot. By looking at this dot, you can determine
the best way to get to every store from your current location; the best path to a specific store is
different from each location in the mall. Link-state routers function similarly: They each calculate
the best way to every network within the area, from their own perspective, using the LSDB.
Figure 7-4 shows a network that uses a link-state protocol. Triggered updates, which include data
on the state of only links that have changed, are sent in this network.
Figure 7-4 Link-State Routing Sends Changed Data Only When There Is a Change
[Figure: Routers A, B, C, and D, each with its own link-state table. When Router A's link state changes, A's link update is flooded from router to router until every link-state table has received it.]
In link-state protocols, the information about connected links (including the subnets on those
links) on all routers is flooded throughout the network or to a specific area of the network.
Therefore, all routers in the network have detailed knowledge of the entire network. In contrast,
routers running a distance vector routing protocol receive knowledge about only the best routes
from their neighbors.
After the initial exchange of all link states and on reaching the full (converged) state of operation,
almost no periodic updates are sent through the network. (In OSPF, periodic updates are sent every
30 minutes for each specific route, but not at the same time for all routes, reducing the routing
traffic volume.) Triggered updates are flooded through the network only when a change in a link
state occurs (the link goes down, comes up, or link parameters that affect routing—such as
bandwidth—are changed). Only periodic hello messages are sent between neighbors to maintain
and verify neighbor relationships.
Most of the control packets used in link-state operations are sent as multicast packets, which might
cause problems when deploying link-state protocols in nonbroadcast multiaccess (NBMA)
networks, such as with Frame Relay or ATM topologies.
Routing Protocol Metrics
This section introduces routing protocol metrics and compares the metrics used by different
routing protocols.
What Is a Routing Metric?
KEY POINT A metric is a value (such as path length) that routing protocols use to measure paths to a destination.
Different routing protocols base their metric on different measurements, including hop count,
interface speed, or more-complex metrics. Most routing protocols maintain databases containing
all the networks that the routing protocol recognizes and all the paths to each network. If a routing
protocol recognizes more than one way to reach a network, it compares the metric for each
different path and chooses the path with the lowest metric. If multiple paths have the same metric,
a maximum of 16 can be installed in the routing table, and the router can perform load balancing
among them. EIGRP can also perform load balancing between unequal-cost paths.
NOTE Before Cisco IOS Release 12.3(2)T, the maximum number of parallel routes (equal-cost paths) supported by IP routing protocols was 6; that maximum was changed to 16 in Cisco IOS Release 12.3(2)T.
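A hedged IOS sketch of both behaviors follows; the EIGRP process number and variance multiplier are arbitrary examples, not values from the text:

```
router eigrp 100
 ! install up to 16 equal-cost paths in the routing table
 maximum-paths 16
 ! EIGRP only: also install feasible unequal-cost paths whose
 ! metric is within 2x the best path's metric
 variance 2
```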
Figure 7-5 shows network 172.16.1.0, which is connected to Router A. The parameters for route
metric calculation are forwarded in routing protocol updates.
Figure 7-5 Routing Protocol Metrics Are Passed in Updates
[Figure: Network 172.16.1.0 connects to Router A over a 10,000-kbps link with 5 ms delay; Routers A, B, and C are connected in series by 64-kbps links, each with 20 ms delay. The parameters advertised for 172.16.1.0 at each router are as follows.

Router    Hop Count    Min BW    Acc Delay
A         0            10,000    5
B         1            64        25
C         2            64        45]
In this case, the EIGRP style of route metric parameters is used, and the minimum bandwidth and cumulative delay influence best-path selection (the path with the highest minimum bandwidth and lowest delay is preferred). Figure 7-5 shows the following steps:

Step 1 Router A, which is the originator of the route 172.16.1.0, sends the initial metric values to Router B.

Step 2 Router B takes into account the parameters of its link toward Router A, adjusts the parameters (bandwidth, delay, hop count) appropriately, calculates its metric toward the 172.16.1.0 network, and sends the routing update to Router C.

Step 3 Router C adjusts the parameters again and calculates its metric toward the destination network 172.16.1.0 from those parameters.
Metrics Used by Routing Protocols
Different routing protocols calculate their routing metrics from different parameters and with
different formulas. Some use simple metrics (such as RIPv1 and RIPv2), and some use complex
metrics (such as EIGRP).
RIPv1 and RIPv2 use only the hop count to determine the best path (the path with the smallest hop
count is preferred). Because they do not consider bandwidth, RIPv1 and RIPv2 are not suitable for
networks that have significantly different transmission speeds on redundant paths. For networks
that use diverse media on redundant paths, routing protocols must account for bandwidth and
possibly the delay of the links.
By default EIGRP uses the minimum bandwidth and accumulated delay of the path toward the
destination network in its metric calculation. Other parameters (reliability and load) can also be
used, but they should be configured only if the consequences are fully understood. If
misconfigured, they might affect convergence and cause routing loops.
NOTE On Cisco routers, the bandwidth and delay metrics can be manually configured and do
not necessarily reflect the link’s true speed.
These bandwidth and delay metrics should be changed only if the consequences are well
understood. For example, a bandwidth change might affect the QoS provided to data. As another
example, EIGRP limits the amount of routing protocol traffic it sends to a percentage of the
bandwidth value; changing the value could result in either too much bandwidth being used for
routing protocol updates or updates not being sent in a timely manner.
EIGRP’s minimum bandwidth is the minimum (slowest) bandwidth along the path. An interface’s
bandwidth is either the default value of the interface or as specified by the bandwidth command—
this command is usually used on serial interfaces.
NOTE In earlier Cisco IOS releases, the default bandwidth on all serial ports was T1, or 1.544
megabits per second (Mbps). In the latest Cisco IOS releases, the default bandwidth varies with
interface type.
EIGRP Metric Calculation
EIGRP calculates the metric by adding weighted values of different link characteristics to a
destination network. The formula used is as follows:
Metric = (K1 * bandwidth) + (K2 * bandwidth)/(256 – load) + (K3 * delay)
If K5 does not equal 0, an additional operation is performed:
Metric = Metric * [K5/(reliability + K4)]
The K values in the previous formulas are constants with default values of K1 = K3 = 1 and K2 =
K4 = K5 = 0. Therefore, by default, the formula is the following:
Metric = Bandwidth + Delay
The bandwidth used in this formula is calculated using the smallest (slowest) bandwidth along the path between the source and the destination, in kilobits per second (kbps). 10^7 is divided by that value, and the result is multiplied by 256.
The delay used in this formula is the sum of the delays in the path from the source to the
destination, in tens of microseconds, multiplied by 256. Figure 7-6 presents a sample network to
illustrate the EIGRP metric calculation.
Figure 7-6 Network for EIGRP Metric Calculation Example
[Figure: Router A's S0 interface (10.2.2.1) connects to Router B's S0 interface (10.2.2.2) over a 128-kbps link with 20,000 microseconds of delay (network 10.2.2.0/24). Router B's S2 interface (10.1.1.1) connects to Router C's S0 interface (10.1.1.2) over a 1544-kbps link with 20,000 microseconds of delay (network 10.1.1.0/24).]
In Figure 7-6, Router B advertises network 10.1.1.0 to Router A. The metric that Router B advertises for 10.1.1.0 is calculated as follows:
■ Bandwidth = (10,000,000/1544) * 256 = 6476 * 256 = 1,657,856
■ Delay = (20,000/10) * 256 = 2000 * 256 = 512,000
■ Metric = Bandwidth + Delay = 2,169,856
Router A calculates the metric it puts in its routing table for 10.1.1.0 as follows:
■ Bandwidth = (10,000,000/128) * 256 = 20,000,000 (using the minimum bandwidth in the path—in this case, 128 kbps)
■ Delay = ((20,000 + 20,000)/10) * 256 = 1,024,000
■ Metric = Bandwidth + Delay = 21,024,000
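The default metric computation from the two worked examples can be reproduced in a few lines. This is a minimal sketch that assumes integer truncation of 10^7/bandwidth before scaling, so results may differ slightly from figures computed without truncation:

```python
# Default EIGRP composite metric (K1 = K3 = 1, K2 = K4 = K5 = 0):
# scaled slowest bandwidth plus scaled cumulative delay.
def eigrp_metric(min_bw_kbps, total_delay_usec):
    """EIGRP default metric from minimum path bandwidth (kbps) and total delay (usec)."""
    bw = (10_000_000 // min_bw_kbps) * 256   # 10^7 / slowest bandwidth, times 256
    delay = (total_delay_usec // 10) * 256   # delay in tens of microseconds, times 256
    return bw + delay

# Router B's advertised metric for 10.1.1.0: 1544 kbps, 20,000 usec delay
print(eigrp_metric(1544, 20_000))   # 2,169,856
# Router A's installed metric: minimum bandwidth 128 kbps, 40,000 usec total delay
print(eigrp_metric(128, 40_000))    # 21,024,000
```

Note how the 128-kbps hop dominates Router A's metric: the minimum-bandwidth term jumps from about 1.7 million to 20 million once the slow link is in the path.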
The IGRP metric is the EIGRP metric divided by 256 because the metric for EIGRP is a 32-bit
number versus the IGRP 24-bit metric; accordingly, EIGRP has additional granularity for route
selection.
In the case of link-state protocols (OSPF and IS-IS), a cumulative cost or metric is used (the lowest
cost or metric path is selected). OSPF uses cost for path calculation, usually reflecting the link’s
bandwidth. As a result, the highest accumulated bandwidth (lowest cost) is used to select the best
path. The IS-IS interface metric defaults to 10 on Cisco routers; this value can be changed, to
reflect different bandwidths, for example.
NOTE The IS-IS metric is known as the metric; the IS-IS specification defines four different
types of metrics. All routers support Cost, the default metric. Delay, Expense, and Error are
optional metrics. The default Cisco implementation of IS-IS uses Cost only, but the Cisco IOS
does allow all four metrics to be set with optional parameters in the isis metric command.
BGP uses the AS-path attribute as part of its metric. The length of this attribute is the number of
autonomous systems that must be traversed to reach a destination and is usually a factor that
influences the path selection. BGP incorporates additional path attributes that can influence
routing decisions; these can be manually configured.
Routing Protocol Convergence
Whenever a change occurs in a network’s topology, all the routers in that network must learn the
new topology. This process is both collaborative and independent; the routers share information
with each other, but they must calculate the impact of the topology change independently. Because
they must mutually develop an independent agreement on the new topology, they are said to
converge on this consensus.
Convergence properties include the speed of propagation of routing information and the
calculation of optimal paths. The quicker the convergence, the more optimal the routing protocol
is said to be.
KEY POINT Recall that a network is converged when all routing tables are synchronized and each contains a usable route to each destination network.
Convergence time is the time it takes for all routers in a network to agree on the current topology. The size of the network, the routing protocol in use, the network design, and numerous configurable timers can affect convergence time. For example, the use of hierarchical addressing and summarization helps localize topology changes, which speeds convergence.
Network convergence must occur whenever a new routing protocol starts and whenever a change takes place in the network; it occurs both in new networks and in networks that are already operational.
A network is not completely operable until it has converged. Therefore, short convergence times
are required for routing protocols.
RIPv2 Convergence Example
RIPv2 is a distance vector protocol that periodically propagates its routing information. Distance vector protocols use the principle of hold-down to prevent routing loops. Putting a route in hold-down after the route has failed (perhaps due to a link failure) means that if a routing update arrives with the same or a worse metric, the new route is not installed until the hold-down timer expires. Even though the destination might no longer be reachable, a route in hold-down is still used to forward traffic during the entire hold-down period.
Figure 7-7 shows a network running RIPv2; the Ethernet link (Network N) between Routers A and
C has failed. The following are the RIPv2 convergence steps:
Step 1 Router C detects the link failure and sends a triggered update to Routers D and B. A triggered update is sent because something happened. In contrast, a periodic update is sent periodically—every 30 seconds, in the case of RIPv1 and RIPv2. The route is poisoned (sent with an infinite metric indicating that the route is unreachable) to B and D and is removed from Router C's routing table.

Step 2 Router C sends a request to its neighbors for an alternative path to network N. A broadcast request is used for RIPv1, and a multicast request is used for RIPv2.

Step 3 Router D does not report an alternative path; Router B reports a route with a worse metric. The route via B is immediately placed in Router C's routing table. Note that Router C does not put Network N in hold-down because Router C knows that the link failed and has already removed the entry from its routing table.

Step 4 Router C advertises the route via B in a periodic update to D. There is no change to Router D's table because Router D has the route in hold-down.
Step 5 When Router D's hold-down timer expires, the route is added to the table and is propagated to Router E in a periodic update.

Figure 7-7 RIPv2 Convergence Example
[Figure: Routers A, B, C, D, and E. The failed Ethernet link (Network N) is between Routers A and C; Router B provides an alternative path, and Routers D and E sit beyond Router C.]
Therefore, the convergence time at Router E is the hold-down time plus one or two update
intervals.
NOTE The default hold-down time is 180 seconds for RIPv1 and RIPv2. This value can be
adjusted manually, but this should be done only if necessary and in the entire network to ensure
consistency.
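With the default timers, this estimate is simple arithmetic (a back-of-the-envelope calculation, not a simulation of RIP's actual behavior):

```python
# Worst-case RIPv2 convergence estimate at Router E in Figure 7-7:
# Router D's hold-down must expire, then one or two periodic updates
# must carry the route onward.
HOLD_DOWN_SEC = 180   # default RIPv1/RIPv2 hold-down timer
UPDATE_SEC = 30       # default periodic update interval

def convergence_estimate(update_intervals):
    """Hold-down time plus the given number of update intervals, in seconds."""
    return HOLD_DOWN_SEC + update_intervals * UPDATE_SEC

print(convergence_estimate(1), convergence_estimate(2))  # 210 240
```

That is, Router E may wait three and a half to four minutes before learning the new route, which illustrates why hold-down-based protocols are considered slow to converge.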
Comparison of Routing Protocol Convergence
As shown in Figure 7-8, different routing protocols need different amounts of time to converge in
a given network. Although the convergence depends on the network’s topology and structure, pure
distance vector protocols are slower to converge than link-state protocols. The use of periodic
updates and the hold-down mechanism are the main reasons for slow convergence. As a result, the
fast-converging protocols should be used when the network’s convergence time is crucial.
Figure 7-8 Routing Protocol Convergence Comparison for the Network Shown in Figure 7-7

Protocol    Convergence Time to Router E
RIP         Hold-down + 1 or 2 update intervals
IGRP        Hold-down + 1 or 2 update intervals
EIGRP       A matter of seconds
OSPF        A matter of seconds
Link-state protocols usually converge much more quickly because they instantly propagate
routing updates. Whenever a change occurs in a link’s state, a link-state update is flooded through
the entire network. There is no need to wait for the hold-down timer to expire or for the next
periodic update, as with distance vector protocols.
EIGRP is a special case because it incorporates the distance vector principle of metric propagation
(it sends only the best routes to the neighbors). However, it does not have periodic updates, nor
does it implement the principle of hold-downs. The most distinct feature of EIGRP is that it stores
all feasible backup routes in its topology table. When a backup route exists for a lost destination,
the switchover to the best backup route is almost immediate and involves no action from other
routers. Therefore, very fast convergence can be achieved with proper EIGRP deployment.
Flat Versus Hierarchical Routing Protocols
KEY POINT Flat routing protocols propagate all routing information throughout the network, whereas hierarchical routing protocols divide large networks into smaller areas.
This section discusses these two types of routing protocols.
Flat Routing Protocols
Flat routing protocols have no means of limiting route propagation in a major network (within a
Class A, B, or C network) environment. These protocols are typically classful distance vector
protocols.
Routing Protocol Features
Recall from Chapter 6 that classful means that routing updates do not include subnet masks and
that the protocol performs automatic route summarization on major network (class) boundaries.
Summarization cannot be done within a major network. These protocols support only fixed-length
subnet masking (FLSM); they do not support VLSM.
Recall also that distance vector protocols periodically send entire routing tables to neighbors.
Distance vector protocols do not scale well because, in a large network, they produce significant
volumes of routing information that consume too many network resources (CPU, bandwidth,
memory). These resources should be available to the routed traffic (application data and user
traffic) instead.
Two examples of flat routing protocols are RIPv1 and RIPv2. Note, however, that RIPv2 is a
classless protocol. Figure 7-9 illustrates a flat network and a hierarchical network.
Figure 7-9 Flat and Hierarchical Networks (the figure contrasts a flat topology with a hierarchical topology that uses route summarization)
Hierarchical Routing Protocols
To solve the problems associated with flat routing protocols, additional features are implemented
in hierarchical routing protocols to support large networks—for example, some support an area-based design.
Hierarchical routing protocols are typically classless link-state protocols. Recall from Chapter 6
that classless means that routing updates include subnet masks in their routing updates; therefore,
the routing protocol supports VLSM.
Hierarchy is part of the implementation of link-state protocols with the concept of backbone and
nonbackbone areas. With link-state protocols such as OSPF and IS-IS, large networks are divided
into multiple areas.
Route summarization can be performed manually in hierarchical protocols and is required in most
cases. With the help of route summarization, smaller routing updates propagate among areas,
resulting in higher scalability. Instabilities in one part of the network are isolated, and convergence
is greatly improved. Summarization can be performed on an arbitrary bit boundary within an IP address. Note, however, that OSPF supports summarization only on specific routers, called area border routers and autonomous system boundary routers.
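Summarization on an arbitrary bit boundary can be illustrated with Python's standard ipaddress module; the prefixes here are hypothetical and not taken from the text:

```python
import ipaddress

# Four contiguous /24 subnets assigned within one area.
subnets = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]

# collapse_addresses merges contiguous prefixes into the shortest
# covering set -- here a single /22, a bit boundary inside the classful
# 10.0.0.0/8 network that a classful protocol could not advertise.
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('10.1.0.0/22')]
```

A hierarchical protocol would advertise only this one /22 out of the area instead of four /24 routes.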
Although it is a classless hybrid protocol, EIGRP is considered a flat routing protocol because it
is not area-based. Because EIGRP also supports manual summarization, EIGRP can be used in a
hierarchical network design by dividing the network into areas. A hierarchical design is not
necessary in EIGRP, but one is recommended for large networks.
NOTE Although it too is classless and supports manual summarization, RIPv2 is considered
a flat protocol. RIPv2 is not recommended for large networks because it is a distance vector
protocol.
Routing Protocols for the Enterprise
Routing protocols vary in their support for many features, including VLSM, summarization,
scalability, and fast convergence. There is no best protocol—the choice depends on many factors.
This section discusses the most common routing protocols for use within the enterprise and
evaluates their suitability for given network requirements.
First, the interior routing protocols EIGRP, OSPF, and Integrated IS-IS are discussed, followed by
a description of BGP.
NOTE Integrated IS-IS is not a recommended enterprise protocol for reasons described in this
section.
EIGRP
EIGRP is a Cisco-proprietary protocol for routing IPv4; EIGRP can also be configured for routing
IP version 6 (IPv6), Internetwork Packet Exchange (IPX), and AppleTalk traffic. EIGRP is an
enhanced version of IGRP, which is a pure distance vector protocol. EIGRP, however, is a hybrid
routing protocol—it is a distance vector protocol with additional link-state protocol features.
EIGRP features include the following:
■ Uses triggered updates (EIGRP has no periodic updates).
■ Uses a topology table to keep all routes received from its neighbors, not only the best routes.
■ Establishes adjacencies with neighboring routers using the Hello protocol.
■ Uses multicast, rather than broadcast, for communication.
■ Supports VLSM.
■ Supports manual route summarization. EIGRP summarizes on major network boundaries by default, but this feature can be turned off, and summarization can be configured at any point in the network.
■ Can be used to create hierarchically structured, large networks.
■ Supports unequal-cost load balancing.
Routes are propagated in EIGRP in a distance vector manner, from neighbor to neighbor, and only the best routes are sent onward. A router that runs EIGRP does not have a complete view of the network because it sees only the routes it receives from its neighbors. In contrast, with a pure link-state protocol (OSPF or IS-IS), all routers in the same area have identical information and therefore have a complete view of the area and its link states.
Recall that the default EIGRP metric calculation uses the minimum bandwidth and cumulative
delay of the path. Other parameters can also be used in this calculation, including worst reliability
between source and destination and worst loading on a link between source and destination. Note
that the maximum transmission unit and hop count are carried in the EIGRP routing updates but
are not used in the metric calculation.
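The default metric calculation just described can be sketched numerically. This assumes the commonly documented default composite formula (only the bandwidth and delay terms, with K1 = K3 = 1 and the other K values zero); the sample link values are illustrative:

```python
def eigrp_default_metric(min_bandwidth_kbps, cumulative_delay_usec):
    """Default EIGRP composite metric: the scaled inverse of the
    minimum (slowest) bandwidth on the path, plus the cumulative
    delay expressed in tens of microseconds, all multiplied by 256."""
    bw_term = 10_000_000 // min_bandwidth_kbps   # 10^7 / slowest link in kbps
    delay_term = cumulative_delay_usec // 10     # delay in tens of usec
    return 256 * (bw_term + delay_term)

# A path whose slowest link is a T1 (1544 kbps) with 40,000 usec of
# total delay along the path:
print(eigrp_default_metric(1544, 40_000))  # 2681856
```

Note how the slowest link dominates: adding a faster hop changes only the delay term, not the bandwidth term.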
EIGRP Terminology
Some EIGRP-related terms include the following:
■ Neighbor table: EIGRP routers use hello packets to discover neighbors. When a router discovers and forms an adjacency with a new neighbor, it includes the neighbor's address and the interface through which it can be reached in an entry in the neighbor table. This table is comparable to OSPF's neighbor table (adjacency database); it serves the same purpose, which is to ensure bidirectional communication between each of the directly connected neighbors. EIGRP keeps a neighbor table for each supported network protocol.
■ Topology table: When a router dynamically discovers a new neighbor, it sends an update about the routes it knows to its new neighbor and receives the same from the new neighbor. These updates populate the topology table. The topology table contains all destinations advertised by neighboring routers; in other words, each router stores its neighbors' routing tables in its EIGRP topology table. If a neighbor is advertising a destination, it must be using that route to forward packets; this rule must be strictly followed by all distance vector protocols. An EIGRP router maintains a topology table for each network protocol configured.
■ Advertised distance (AD) and feasible distance (FD): EIGRP uses the Diffusing Update Algorithm (DUAL). DUAL uses distance information, known as a metric or cost, to select efficient loop-free paths. The lowest-cost route is calculated by adding the cost between the next-hop router and the destination—referred to as the advertised distance—to the cost between the local router and the next-hop router. The sum of these costs is referred to as the feasible distance.
■ Successor: A successor, also called a current successor, is a neighboring router that has a least-cost path to a destination (the lowest FD) guaranteed not to be part of a routing loop. Successors are offered to the routing table to be used to forward packets. Multiple successors can exist if they have the same FD.
■ Routing table: The routing table holds the best routes to each destination and is used to forward packets. Successor routes are offered to the routing table. The router maintains one routing table for each network protocol.
■ Feasible successor: Along with keeping least-cost paths, DUAL keeps backup paths to each destination. The next-hop router for a backup path is called the feasible successor. To qualify as a feasible successor, a next-hop router must have an AD less than the FD of the current successor route. In other words, a feasible successor is a neighbor that is closer to the destination but is not in the least-cost path and, therefore, is not used to forward data. Feasible successors are selected at the same time as successors but are kept only in the topology table. The topology table can maintain multiple feasible successors for a destination.
If the route via the successor becomes invalid because of a topology change or if a neighbor
changes the metric, DUAL checks for feasible successors to the destination. If a feasible successor
is found, DUAL uses it, thereby avoiding a recomputation of the route. If no suitable feasible
successor exists, a recomputation must occur to determine the new successor. Although
recomputation is not processor-intensive, it affects convergence time, so it is advantageous to
avoid unnecessary recomputations.
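The feasibility check just described can be sketched in Python; the neighbors, distances, and costs are hypothetical, and real DUAL maintains far more state than this:

```python
def classify_paths(paths):
    """Pick DUAL successor(s) and feasible successors from a list of
    (neighbor, advertised_distance, cost_to_neighbor) tuples."""
    # Feasible distance (FD) = neighbor's advertised distance (AD)
    # plus the cost to reach that neighbor.
    fds = {n: ad + cost for n, ad, cost in paths}
    best_fd = min(fds.values())
    successors = [n for n, fd in fds.items() if fd == best_fd]
    # Feasibility condition: a backup neighbor qualifies only if its AD
    # is strictly less than the successor route's FD, which guarantees
    # the backup path cannot loop back through this router.
    feasible = [n for n, ad, _ in paths
                if n not in successors and ad < best_fd]
    return successors, feasible

# Router B offers FD 15 (successor); C's AD of 12 beats that FD, so C
# is a feasible successor; D's AD of 20 does not, so D is ignored.
print(classify_paths([("B", 10, 5), ("C", 12, 10), ("D", 20, 2)]))
```

If B fails, the switchover to C happens locally from the topology table, with no recomputation and no action by other routers.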
EIGRP Characteristics
The characteristics that make EIGRP suitable for deployment in enterprise networks include the
following:
■ Fast convergence: One advantage of EIGRP is its fast-converging DUAL route calculation mechanism. This mechanism allows backup routes (the feasible successors) to be kept in the topology table for use if the primary route fails. Because this process occurs locally on the router, the switchover to a backup route (if one exists) is immediate and does not involve action in any other routers.
■ Improved scalability: Along with fast convergence, the ability to manually summarize also improves scalability. EIGRP summarizes routes on classful network boundaries by default. Automatic summarization can be turned off, and manual summarization can be configured at any point in the network, improving scalability and network performance because the routing protocol uses fewer resources.
■ Use of VLSM: Because EIGRP is a classless routing protocol, it sends subnet mask information in its routing updates and therefore supports VLSM.
■ Reduced bandwidth usage: Because EIGRP does not send periodic routing updates as other distance vector protocols do, it uses less bandwidth—particularly in large networks that have a large number of routes. On the other hand, EIGRP uses the Hello protocol to establish and maintain adjacencies with its neighbors. If many neighbors are reachable over the same physical link, as might be the case in NBMA networks, the Hello protocol might create significant routing traffic overhead. Therefore, the network must be designed appropriately to take advantage of EIGRP's benefits.
■ Multiple network layer protocol support: EIGRP supports multiple network layer protocols through Protocol-Dependent Modules (PDM). PDMs include support for IPv4, IPv6, IPX, and AppleTalk.
NOTE EIGRP is a Cisco-proprietary protocol that can pass protocol information only with
licensed devices.
OSPF
OSPF is a standardized protocol for routing IPv4, developed in 1988 by the Internet Engineering
Task Force to replace RIP in larger, more diverse media networks. In 1998, minor changes in
OSPF version 2 (OSPFv2) addressed some of OSPF version 1’s problems while maintaining full
backward compatibility.
NOTE OSPFv2 is described in RFC 2328, OSPF Version 2.
OSPF was developed for use in large scalable networks in which RIP’s inherent limitations failed
to satisfy requirements. OSPF is superior to RIP in all aspects, including the following:
■ It converges much faster.
■ It supports VLSM, manual summarization, and hierarchical structures.
■ It has improved metric calculation for best path selection.
■ It does not have hop-count limitations.
At its inception, OSPF supported the largest networks.
OSPF Hierarchical Design
Although OSPF was developed for large networks, its implementation requires proper design and
planning; this is especially important for networks with 50 or more routers. The concept of
multiple separate areas inside one domain (or AS) was implemented in OSPF to reduce the amount
of routing traffic and make networks more scalable.
In OSPF, there must always be one backbone area—area 0—to which all other nonbackbone areas
must be directly attached. A router is a member of an OSPF area when at least one of its interfaces
operates in that area. Routers that reside on boundaries between the backbone and a nonbackbone
area are called Area Border Routers (ABR) and have at least one interface in each area. The
boundary between the areas is within the ABR itself.
If external routes are propagated into the OSPF AS, the router that redistributes those routes is
called an Autonomous System Boundary Router (ASBR). Careful design and correct mapping of
areas to the network topology are important because manual summarization of routes can only be
performed on ABRs and ASBRs.
Traffic sent from one nonbackbone area to another always crosses the backbone. For example, in
Figure 7-10, the Area 1 ABR must forward traffic from Area 1 destined for Area 2 into the
backbone. The Area 2 ABR receives the traffic from the backbone and forwards it to the
appropriate destination inside Area 2.
NOTE You might encounter different terminology for the various OSPF tables:
■ OSPF neighbor table = adjacency database
■ OSPF topology table = OSPF topology database = LSDB
■ Routing table = forwarding database
Figure 7-10 Traffic from OSPF Area 1 to Area 2 Must Go Through the Backbone, Area 0 (the figure shows an autonomous system in which backbone Area 0 connects to nonbackbone Areas 1 and 2 through Area Border Routers)
OSPF Characteristics
OSPF is a link-state protocol that has the following characteristics for deployment in enterprise
networks:
■ Fast convergence: OSPF achieves fast convergence times using triggered link-state updates that include one or more link-state advertisements (LSA). LSAs describe the state of links on specific routers and are propagated unchanged within an area. Therefore, all routers in the same area have identical topology tables; each router has a complete view of all links and devices in the area. Depending on their type, LSAs are usually changed by ABRs when they cross into another area.
When the OSPF topology table is fully populated, the SPF algorithm calculates the shortest paths to the destination networks. Triggered updates and metric calculation based on the cost of a specific link ensure quick selection of the shortest path toward the destination.
By default, the OSPF link cost value is inversely proportional to the link's bandwidth.
NOTE By default, the cost in Cisco routers is calculated using the formula 100 Mbps / bandwidth in Mbps. For example, a 64-kbps link has a cost of 1562, and a T1 link has a cost of 64. However, this formula is based on a maximum bandwidth of 100 Mbps, which results in a cost of 1. If the network includes faster links, the cost formula can be recalibrated.
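The note's arithmetic can be reproduced directly. This sketch assumes the default 100-Mbps reference bandwidth expressed in kbps, with the usual integer truncation and a floor of 1:

```python
def ospf_cost(link_bandwidth_kbps, reference_bandwidth_kbps=100_000):
    """Default OSPF cost: reference bandwidth divided by link
    bandwidth, truncated to an integer, never less than 1."""
    return max(1, reference_bandwidth_kbps // link_bandwidth_kbps)

print(ospf_cost(64))         # 64-kbps link -> 1562
print(ospf_cost(1544))       # T1 -> 64
print(ospf_cost(1_000_000))  # Gigabit link still costs 1 until the
                             # reference bandwidth is recalibrated
```

The last line shows why recalibration matters: with the default reference, every link of 100 Mbps or faster gets the same cost of 1.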
■ Very good scalability: OSPF's multiple area structure provides good scalability. However, OSPF's strict area implementation rules require proper design to support other scalability features such as manual summarization of routes on ABRs and ASBRs, stub areas, totally stubby areas, and not-so-stubby areas (NSSA). The stub, totally stubby, and NSSA features for nonbackbone areas decrease the amount of LSA traffic from the backbone (area 0) into nonbackbone areas (and they are described further in the following sidebar). This allows low-end routers to run in the network's peripheral areas, because fewer LSAs mean smaller OSPF topology tables, less OSPF memory usage, and lower CPU usage in stub area routers.
■ Reduced bandwidth usage: Along with the area structure, the use of triggered (not periodic) updates and manual summarization reduces the bandwidth used by OSPF by limiting the volume of link-state update propagation. Recall, though, that OSPF does send updates every 30 minutes.
■ VLSM support: Because OSPF is a classless routing protocol, it supports VLSM to achieve better use of IP address space.
OSPF Area Types
A variety of possible area types exist. The most commonly used are as follows:
■ Standard area: This default area accepts link updates, route summaries, and external routes.
■ Backbone area (transit area): The backbone area, area 0, is the central entity to which all other areas connect to exchange and route information. The OSPF backbone has all the properties of a standard OSPF area.
■ Stub area: This area does not accept information about routes external to the autonomous system, such as routes from non-OSPF sources. If routers need to route to networks outside the autonomous system, they use a default route, indicated as 0.0.0.0. Stub areas cannot contain ASBRs (except that the ABRs may also be ASBRs). Using a stub area reduces the size of the routing tables inside the area.
■ Totally stubby area: A totally stubby area is a Cisco-specific feature that further reduces the number of routes in the routing tables inside the area. This area does not accept external autonomous system routes or summary routes from other areas internal to the autonomous system. If a router has to send a packet to a network external to the area, it sends the packet using a default route. Totally stubby areas cannot contain ASBRs (except that the ABRs may also be ASBRs).
■ NSSA: This area offers benefits similar to those of a stub area. However, NSSAs allow ASBRs, which is against the rules in a stub area. A totally stubby NSSA (although it sounds like a very strange name) is a Cisco-specific feature that further reduces the size of the routing tables inside the NSSA.
Integrated IS-IS
IS-IS was developed by Digital Equipment Corporation (DEC) as the dynamic link-state routing
protocol for the Open Systems Interconnection (OSI) protocol suite. The OSI suite uses
Connectionless Network Service (CLNS) to provide connectionless delivery of data, and the
actual Layer 3 protocol is Connectionless Network Protocol (CLNP). CLNP is the OSI suite
solution for connectionless delivery of data, similar to IP in the TCP/IP suite. IS-IS uses CLNS
addresses to identify the routers and build the LSDB.
IS-IS was adapted to the IP environment because IP is used on the Internet; this extended version
of IS-IS for mixed OSI and IPv4 environments is called Integrated IS-IS. Integrated IS-IS tags
CLNP routes with information on IP networks and subnets.
NOTE One version of Integrated IS-IS supports IPv6, as described in Chapter 6.
KEY POINT Even if Integrated IS-IS is used only for routing IP (and not CLNP), OSI protocols are used to form the neighbor relationships between the routers; therefore, for Integrated IS-IS to work, CLNS addresses must still be assigned to areas. This proves to be a major disadvantage when implementing Integrated IS-IS, because OSI and CLNS knowledge is not widespread in the enterprise networking community. Therefore, Integrated IS-IS is not recommended as an enterprise routing protocol; it is included here for completeness.
Integrated IS-IS Terminology
ISO specifications call routers intermediate systems. Thus, IS-IS is a router-to-router protocol,
allowing routers to communicate with other routers.
IS-IS routing takes place at two levels within an AS: Level 1 (L1) and Level 2 (L2). L1 routing
occurs within an IS-IS area and is responsible for routing inside an area. All devices in an L1
routing area have the same area address. Routing within an area is accomplished by looking at the
locally significant address portion, known as the system ID, and choosing the lowest-cost path.
L2 routing occurs between IS-IS areas. L2 routers learn the locations of L1 routing areas and build
an interarea routing table. L2 routers use the destination area address to route traffic using the
lowest-cost path. Therefore, IS-IS supports two routing levels:
■ L1 builds a common topology of system IDs in the local area and routes traffic within the area using the lowest-cost path.
■ L2 exchanges prefix information (area addresses) between areas and routes traffic to an area using the lowest-cost path.
To support the two routing levels, IS-IS defines three types of routers:
■ L1 routers use link-state packets (LSP) to learn about paths within the areas to which they connect (intra-area).
■ L2 routers use LSPs to learn about paths among areas (interarea).
■ Level 1/Level 2 (L1/L2) routers learn about paths both within and between areas. L1/L2 routers are equivalent to ABRs in OSPF.
The three types of IS-IS routers are shown in Figure 7-11.
Figure 7-11 Three Types of IS-IS Routers (the figure shows L1, L2, and L1/L2 routers distributed across Areas 1 through 4)
The path of connected L2 and L1/L2 routers is called the backbone. All areas and the backbone
must be contiguous.
IS-IS area boundaries fall on the links, not within the routers. Each IS-IS router belongs to exactly
one area. Neighboring routers learn that they are in the same or different areas and negotiate
appropriate adjacencies—L1, L2, or both.
Changing Level 1 routers into Level 1/Level 2 or Level 2 routers can easily expand the Integrated
IS-IS backbone. In comparison, in OSPF, entire areas must be renumbered to achieve this.
Integrated IS-IS Characteristics
IS-IS is a popular IP routing protocol in the ISP industry. The simplicity and stability of IS-IS
make it robust in large internetworks. Integrated IS-IS characteristics include the following:
■ VLSM support: As a classless routing protocol, Integrated IS-IS supports VLSM.
■ Fast convergence: Similar to OSPF, Integrated IS-IS owes its fast convergence characteristics to its link-state operation (including flooding of triggered link-state updates). Another feature that guarantees fast convergence and less CPU usage is the partial route calculation (PRC). Although Integrated IS-IS uses the same algorithm as OSPF for best path calculation, the full SPF calculation is initially performed on network startup only. When IP subnet information changes, only a PRC for the subnet in question runs on routers. This saves router resources and enables faster calculation. A full SPF calculation must be run for each OSPF change.
NOTE Introduced in Cisco IOS Release 12.0(24)S, the OSPF incremental SPF feature is more efficient than the full SPF algorithm and allows OSPF to converge on a new routing topology more quickly. Information on this feature is available in OSPF Incremental SPF, at http://www.cisco.com/en/US/products/ps6350/products_configuration_guide_chapter09186a00804556a5.html.
■ Excellent scalability: Integrated IS-IS is more scalable and flexible than OSPF; IS-IS backbone area design is not as strict as OSPF's, thereby allowing for easy backbone extension.
■ Reduced bandwidth usage: Triggered updates and the absence of periodic updates ensure that less bandwidth is used for routing information.
Integrated IS-IS offers inherent support for LAN and point-to-point environments only; NBMA point-to-multipoint environments are not supported. In NBMA environments, point-to-point links (subinterfaces) must be established for correct Integrated IS-IS operation.
As mentioned, one disadvantage of Integrated IS-IS is its close association with the OSI world.
Because few network administrators have adequate knowledge of OSI addressing and operation,
implementation of Integrated IS-IS might be difficult.
Summary of Interior Routing Protocol Features
There is no best or worst routing protocol. The decision about which routing protocol to
implement (or whether multiple routing protocols should indeed be implemented in a network)
can be made only after you carefully consider the design goals and examine the network’s physical
topology in detail.
Table 7-2 summarizes some characteristics of IP routing protocols discussed in this chapter.
Although they are no longer recommended enterprise protocols, RIPv1, RIPv2, and IGRP are also
included in this table for completeness.
Table 7-2 IP Routing Protocol Comparison

Feature                                    RIPv1  RIPv2  IGRP1  EIGRP2  OSPF  IS-IS
Distance vector                            X      X      X      X
Link-state                                                              X     X
Hierarchical topology required                                          X     X
Hierarchical topology support                                   X       X     X
Flat topology support                      X      X      X      X
Classless (and therefore VLSM support)            X             X       X     X
Classful (and therefore no VLSM support)3  X             X
Performs automatic route summarization     X      X      X      X
Manual route summarization support                X             X       X     X
Multiaccess (LAN) support                  X      X      X      X       X     X
Point-to-point support                     X      X      X      X       X     X
NBMA point-to-multipoint support                                X       X

1 IGRP is no longer supported as of Cisco IOS Release 12.3. It is included in this table for completeness.
2 EIGRP is an advanced distance vector protocol with some characteristics also found in link-state protocols.
3 Only FLSM is supported, not VLSM, because consistency of the mask is assumed within a classful network.
Selecting an Appropriate Interior Routing Protocol
The selection of a routing protocol is based on the design goals and the physical topology of the
network. Both EIGRP and OSPF are recommended as enterprise routing protocols.
When choosing routing protocols, you can use Table 7-3 as a decision table template. Decision
tables are discussed in Chapter 2, “Applying a Methodology to Network Design.” Additional rows
can be added to specify other parameters that might be important in the specific network.
Table 7-3 Routing Protocol Selection Decision Table Template

Parameters                                          OSPF   EIGRP      Required Network Parameters
Size of network (small/medium/large/very large)     Large  Large
Speed of convergence (very high/high/medium/low)    High   Very high
Very good scalability (yes/no)                      Yes    Yes
Support for VLSM (yes/no)                           Yes    Yes
Support for mixed vendor devices (yes/no)           Yes    No
Multiple network layer protocol support (yes/no)    No     Yes
When to Choose EIGRP
EIGRP is a Cisco-proprietary hybrid protocol that incorporates the best aspects of distance vector and link-state features. EIGRP keeps a topology table and performs triggered updates rather than periodic route updates. It is well suited to almost all environments, including LAN, point-to-point, and NBMA. The EIGRP split-horizon functionality can be disabled in NBMA environments. EIGRP is not suitable for dialup environments because it must maintain its neighbor relationships using periodic hello packets; sending these packets would mean that the dialup connections would have to stay up all the time.
When to Choose OSPF
OSPF is a standards-based link-state protocol that is based on the SPF algorithm for best path
calculation. OSPF was initially designed for networks of point-to-point links and was later
adapted for operation in LAN and NBMA environments. OSPF can be used on dialup links with
the OSPF Demand Circuit feature, which suppresses the Hello protocol.
The OSPF hierarchical area requirements impose design constraints in larger networks. One
backbone area is required, and all nonbackbone areas must be directly attached to that backbone
area. Expansion of the backbone area can cause design issues because the backbone area must
remain contiguous.
Border Gateway Protocol
BGP is an EGP that is primarily used to interconnect autonomous systems. BGP is a successor to
EGP, the Exterior Gateway Protocol (note the dual use of the EGP acronym). Because EGP is
obsolete, BGP is currently the only EGP in use.
BGP-4 is the latest version of BGP. It is defined in RFC 4271, A Border Gateway Protocol 4 (BGP-4). As noted in this RFC, the classic definition of an AS is “a set of routers under a single technical
administration, using an Interior Gateway Protocol (IGP) and common metrics to determine how
to route packets within the AS, and using an inter-AS routing protocol to determine how to route
packets to other [autonomous systems].”
NOTE Extensions to BGP-4, known as BGP4+, have been defined to support multiple
protocols, including IPv6. These multiprotocol extensions to BGP are defined in RFC 2858,
Multiprotocol Extensions for BGP-4.
KEY POINT The main goal of BGP is to provide an interdomain routing system that guarantees the loop-free exchange of routing information between autonomous systems. BGP routers exchange information about paths to destination networks.
BGP does not look at speed to determine the best path. Rather, BGP is a policy-based routing
protocol that allows an AS to control traffic flow by using multiple BGP attributes.
Routers running BGP exchange network reachability information, called path vectors or
attributes, including a list of the full path of BGP AS numbers that a router should take to reach a
destination network. BGP is therefore also called a path vector routing protocol. BGP allows a
provider to fully use all its bandwidth by manipulating these path attributes. This AS path
information is useful in constructing a graph of loop-free autonomous systems. It is used to
identify routing policies so that restrictions on routing behavior can be enforced based on the AS
path.
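The loop-free exchange described above comes from the AS path itself: a BGP speaker rejects any route whose AS path already contains its own AS number. A minimal sketch, using the private AS numbers from this chapter's examples:

```python
def accept_ebgp_route(local_as, as_path):
    """A BGP speaker detects interdomain routing loops by discarding
    any route whose AS_PATH already lists its own AS number."""
    return local_as not in as_path

# AS 65000 accepts a route that traversed AS 65500 and AS 65250...
print(accept_ebgp_route(65000, [65500, 65250]))  # True
# ...but rejects one that has already passed through AS 65000 itself,
# because accepting it would create a loop.
print(accept_ebgp_route(65000, [65500, 65000]))  # False
```

This is why the AS path information is sufficient to build a loop-free graph of autonomous systems.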
NOTE Attributes can be used within an AS to influence the path that a packet takes within the
AS and how the packet gets to a neighboring AS. Some attributes can be used to attempt to
influence how a neighboring AS routes its traffic. However, an AS cannot mandate how a
neighboring AS routes its traffic.
BGP use in an AS is most appropriate when the effects of BGP are well understood and at least
one of the following conditions exists:
■ The AS has multiple connections to other autonomous systems.
■ The AS allows packets to transit through it to reach other autonomous systems (for example, it is an ISP).
■ Routing policy and route selection for traffic entering or leaving the AS must be manipulated.
The use of static routes is recommended for inter-AS routing if none of these requirements exists.
If an enterprise has a policy that requires it to differentiate between its traffic and traffic from its
ISP, the enterprise must connect to its ISP using BGP. If, instead, an enterprise is connected to its
ISP with a static route, traffic from that enterprise is indistinguishable from traffic from the ISP for
policy decision-making purposes.
NOTE BGP implementation requires considerable knowledge. Improper implementations
can cause immense damage, especially when neighbors exchange complete BGP Internet tables
(which can have more than 190,000 routes and are growing).
BGP Implementation Example
In Figure 7-12, BGP is used to interconnect multiple autonomous systems. Because of the multiple
connections between autonomous systems and the need for path manipulation, the use of static
routing is excluded. AS 65000 is multihomed to three ISPs: AS 65500, AS 65250, and AS 64600.
Figure 7-12 BGP Is Used to Interconnect Autonomous Systems (the figure shows BGP connections among Routers A through F, with AS 65000 multihomed to the ISP autonomous systems 65500, 65250, and 64600)
NOTE The AS designator is a 16-bit number with a range of 1 to 65535. RFC 1930,
Guidelines for Creation, Selection, and Registration of an Autonomous System (AS), provides
guidelines for the use of AS numbers. A range of AS numbers, 64512 to 65535, is reserved for
private use, much like the private IP addresses. All the examples in this book use private AS
numbers to avoid publishing AS numbers belonging to an organization.
An organization must use an Internet Assigned Numbers Authority–assigned AS number rather
than a private AS number only if it plans to use an EGP, such as BGP, to connect to a public
network such as the Internet. On the Internet, ISPs use public AS numbers.
460
Chapter 7: Selecting Routing Protocols for the Network
External and Internal BGP
BGP uses TCP to communicate. Any two routers that have formed a TCP connection to exchange
BGP routing information—in other words, a BGP connection—are called peers or neighbors.
BGP peers can be either internal or external to the AS.
When BGP is running between routers within one AS, it is called internal BGP (IBGP). IBGP is
run within an AS to exchange BGP information so that all internal BGP speakers have the same
BGP routing information about outside autonomous systems, and so that this information can be
passed to other autonomous systems. As long as they can reach each other, routers that run IBGP
do not have to be directly connected to each other; static routes or routes learned from an IGP
running within the AS provide reachability.
When BGP runs between routers in different autonomous systems, it is called external BGP
(EBGP). Routers that run EBGP are usually connected directly to each other. Figure 7-13
illustrates IBGP and EBGP neighbors.
Figure 7-13  Routers That Have Formed a BGP Connection Are BGP Peers or Neighbors, Either External or Internal
[Figure: Routers B and C within AS 65500 are IBGP neighbors; router A in AS 65000 is an EBGP neighbor of AS 65500.]
The primary use for IBGP is to carry EBGP (inter-AS) routes through an AS. IBGP can be run on
all routers or on specific routers inside the AS.
KEY POINT  All routers in the path between IBGP neighbors within an AS, known as the transit path, must also be running BGP. These IBGP sessions must be fully meshed.
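As an illustration of an IBGP peering (the loopback addresses here are hypothetical), two routers in AS 65000 might be configured as follows. Note that a full mesh of n IBGP speakers requires n(n-1)/2 sessions; for example, five routers need ten sessions.

```
router bgp 65000
 ! IBGP peer in the same AS; peering between loopbacks keeps the
 ! session up as long as any IGP path between the routers exists
 neighbor 10.0.0.2 remote-as 65000
 neighbor 10.0.0.2 update-source Loopback0
```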
IBGP is usually not the only protocol running in the AS; there is usually an IGP running also.
Instead of redistributing the entire Internet routing table (learned via EBGP) into the IGP, IBGP
carries the EBGP routes across the AS. This is necessary because in most cases the EBGP tables
are too large for an IGP to handle. Even if EBGP has a small table, the loss of external routes
triggering extensive computations in the IGP should be prevented. Other IBGP uses include the
following:
■ Applying policy-based routing within an AS using BGP path attributes.
■ QoS Policy Propagation on BGP, which uses IBGP to send common QoS parameters (such as Type of Service [ToS]) between routers in a network and results in a synchronized QoS policy.
■ Multiprotocol Label Switching (MPLS) virtual private networks (VPN), where the multiprotocol version of BGP is used to carry MPLS VPN information.
Routing Protocol Deployment
This section first describes why certain protocols are suitable for specific modules in the
Enterprise Architecture. After that is a discussion of the following advanced routing features:
redistribution, filtering, and summarization.
Routing Protocols in the Enterprise Architecture
Recall from Chapter 3, “Structuring and Modularizing the Network,” that the modules in the Cisco
Enterprise Architecture correspond to places in the network. The choice of routing protocols
depends on the network design goals. Therefore, the routing protocol decision should be made
only after the network goals and topology are determined. Running multiple routing protocols
might be necessary in large enterprise networks, for example, when a network upgrade is
performed; the old routing protocol usually coexists with the new one during the transition period.
As discussed in previous sections of this chapter, routing protocols differ in many ways. For
example, how routing information is exchanged, convergence times, metrics used for optimal
route determination, required amount of processing power and memory, and availability of a
routing protocol on various platforms can determine whether a routing protocol is more or less
suitable for a network or parts of a network. The following sections explain why certain protocols
are suitable for specific modules in the Enterprise Architecture, and the advantages and
disadvantages of individual protocols.
Routing in the Campus Core
The Campus Core provides high-speed data transmission between Building Distribution devices.
The Campus Core is critical for connectivity and, therefore, incorporates a high level of
redundancy using redundant links and load sharing between equal-cost paths. In the event of a link
failure, it must immediately converge, adapting quickly to change to provide a seamless transport
service.
KEY POINT  EIGRP and OSPF both adapt quickly to changes and have short convergence times. Therefore, they are suitable for use in the Campus Core.
The decision of whether to use EIGRP or OSPF should be based on the underlying physical
topology, IP addressing, equipment used, and possible issues related to the routing protocol in a
particular situation. Figure 7-14 illustrates routing protocols in the Enterprise Architecture,
including those recommended for the Campus Core.
Figure 7-14  Routing Protocols in the Enterprise Architecture
[Figure: In the Enterprise Campus, EIGRP or OSPF runs in the Building Access, Building Distribution, Campus Core, and Server Farm (including the database, application, and web servers and network management). In the Enterprise Edge, the E-Commerce and Internet Connectivity modules (with their public servers) use BGP or static routes toward the Internet; the Remote Access/VPN module uses IPsec VPN; and the WAN/MAN and site-to-site VPN module uses EIGRP or OSPF, or IPsec VPN. The Enterprise Branch and Enterprise Data Center run EIGRP or OSPF, and the Enterprise Teleworker connects over IPsec VPN.]
The following are considerations for routing protocol use in the Campus Core:
■ OSPF imposes a strict hierarchical design. OSPF areas should map to the IP addressing plan, which cannot always be achieved.
■ EIGRP restricts vendor selection because it is a Cisco-proprietary protocol. One way to overcome this restriction is to use EIGRP in the Campus Core and other routing protocols in the non-Cisco parts of the network, and redistribute between the protocols.
The following are reasons that other routing protocols are not considered for the Campus Core:
■ Even if routing only IP, IS-IS requires detailed knowledge of the OSI protocol suite for proper configuration, and that knowledge is not widely available.
■ RIP is not recommended as a Campus Core routing protocol because of its periodic transmission of the entire routing table, which results in relatively slow convergence, and because the RIP metric is based on hop count.
■ Using static routing in the Campus Core is not an option because static routing requires administrative intervention for changes and on link failures.
Routing in the Building Distribution Layer
The Building Distribution layer is the intermediate point between the Campus Core and the
Building Access layers. In addition to other issues (such as physical media and IP addressing), the
choice of routing protocol depends on the routing protocols used in the Campus Core and Building
Access.
KEY POINT  As a recommended practice, the same routing protocol should be used in all three layers of the Enterprise Campus. If multiple routing protocols must be used, the Building Distribution layer redistributes among them.
Recommended routing protocols in the Building Distribution layer include EIGRP and OSPF.
For example, if EIGRP is the Campus Core routing protocol and RIPv1 is the Building Access
layer routing protocol (to support legacy equipment), both routing protocols are used in the
Building Distribution devices, with redistribution and filtering.
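A minimal sketch of such a Building Distribution configuration (the process number and seed metric values are hypothetical): RIPv1 routes are redistributed into EIGRP with a seed metric, and only a default route is advertised back down toward the access layer.

```
router eigrp 100
 network 10.0.0.0
 ! Seed metric: bandwidth, delay, reliability, load, MTU
 redistribute rip metric 10000 100 255 1 1500
!
router rip
 network 10.0.0.0
 ! Advertise a default route down to the access layer
 default-information originate
```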
Routing in the Building Access Layer
The Building Access layer provides local users with access to network resources. The underlying
physical topology, IP addressing, and the available processing power and memory in the Building
Access layer equipment influence the routing protocol choice. The recommended routing
protocols for the Building Access layer are OSPF and EIGRP. Using static routing in the access
layer is also a possibility.
Routing in the Enterprise Edge Modules
In the Enterprise Edge modules, the underlying physical topology, IP addressing, and the deployed
equipment also drive the choice of routing protocol.
KEY POINT  The routing protocols in the Enterprise Edge modules are typically OSPF, EIGRP, BGP, and static routing.
NOTE Routing protocols running in the enterprise edge module are referred to as edge routing
protocols.
EIGRP gives an administrator more influence over routing and is suitable for NBMA environments that have split-horizon issues, because EIGRP split horizon can be disabled on a per-interface basis. When equipment from multiple vendors is part of the overall design, the use of EIGRP is restricted to only where Cisco devices exist.
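For example, on a Frame Relay hub interface, split horizon could be disabled for a particular EIGRP process (the interface and autonomous system number here are hypothetical):

```
interface Serial0/0
 encapsulation frame-relay
 ! Allow routes learned from one spoke to be advertised to the others
 no ip split-horizon eigrp 100
```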
The limitations of using OSPF as an Enterprise Edge routing protocol are related to its high
memory and processing power requirements, which might preclude its use on older routers, and
its strict hierarchical design. The high memory and processing power requirements can be reduced
using summarization and careful area planning.
OSPF also requires significant configuration expertise. OSPF is appropriate in environments such
as LAN, NBMA, and dialup.
The Remote Access and VPN module provides connectivity to corporate networks for remote
users via dialup connections and dedicated IPsec VPNs across the Internet. In a dialup
environment, static routing is typically used.
Depending on whether multiple exit points exist and on redundancy requirements, either static
routes or BGP are used for Internet connectivity. Static routes are used when only one exit point
exists; they use less overhead than BGP routing. BGP is used when there are multiple exit points
and when multihoming is desired.
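With a single exit point, the configuration can be as simple as one static default route on the enterprise border router (the next-hop address is a hypothetical documentation address), typically matched by a static route for the enterprise prefix on the ISP side:

```
! Send all traffic without a more specific match toward the ISP
ip route 0.0.0.0 0.0.0.0 192.0.2.1
```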
Route Redistribution
This section introduces route redistribution and discusses administrative distance and the process
used to select the best route. The specifics of route redistribution deployment are described.
Using Route Redistribution
The following are possible reasons why you might need multiple routing protocols running at the
same time within your network:
■ You are migrating from an older IGP to a new IGP. Multiple redistribution boundaries might exist until the new protocol has displaced the old protocol completely. Running multiple routing protocols during a migration is effectively the same as a network that has multiple routing protocols running as part of its design.
■ You want to use another protocol but have to keep the old routing protocol because of the host system's needs. For example, UNIX host-based routers might run only RIP.
■ Some departments might not want to upgrade their routers to support a new routing protocol.
■ If you have a mixed-vendor environment, you can use the Cisco-proprietary EIGRP routing protocol in the Cisco portion of the network and then use a common standards-based routing protocol, such as OSPF, to communicate with non-Cisco devices.
KEY POINT  When any of these situations arises, Cisco routers allow internetworks using different routing protocols (referred to as routing domains or autonomous systems) to exchange routing information through a feature called route redistribution. This allows, for example, hosts in one part of the network to reach hosts in another part that is running a different routing protocol.
In some cases, the same protocol may be used in multiple different domains or autonomous
systems within a network. Multiple instances of the protocol are treated no differently than if they
were distinct protocols; redistribution is required to exchange routes between them. Accordingly,
redistribution of routes is required when one or both of the following occur:
■ Multiple routing protocols are used in the network—for example, RIPv2, EIGRP, and OSPF.
■ Multiple routing domains are used in the network—for example, two EIGRP routing processes.
Redistribution occurs on the boundaries between routing protocols and between domains. As
shown in Figure 7-15, redistribution occurs on a router with interfaces that participate in multiple
routing protocols or routing domains.
Figure 7-15  Redistribution Occurs on the Boundaries Between Protocols or Domains
[Figure: A boundary router with one interface in routing protocol/domain 1 and another interface in routing protocol/domain 2 performs the redistribution.]
Administrative Distance
Most routing protocols have metric structures and algorithms that are incompatible with other
protocols. It is critical that a network using multiple routing protocols be able to seamlessly
exchange route information and be able to select the best path across multiple protocols. Cisco
routers use a value called administrative distance to select the best path when they learn of two or
more routes to the same destination from different routing protocols.
Administrative distance rates a routing protocol’s believability. Cisco has assigned a default
administrative distance value to each routing protocol supported on its routers. Each routing
protocol is prioritized in order, from most to least believable.
KEY POINT  Administrative distance is a value between 0 and 255. The lower the administrative distance value, the higher the protocol's believability.
Table 7-4 lists the default administrative distance of the protocols supported by Cisco routers.

Table 7-4  Administrative Distance of Routing Protocols

Route Source                          Default Distance
Connected interface                   0
Static route out an interface         0
Static route to a next-hop address    1
EIGRP summary route                   5
External BGP                          20
Internal EIGRP                        90
IGRP(1)                               100
OSPF                                  110
Integrated IS-IS                      115
RIPv1, RIPv2                          120
EGP                                   140
On-demand routing                     160
External EIGRP                        170
Internal BGP                          200
Unknown                               255

(1) IGRP is no longer supported as of Cisco IOS Release 12.3. It is included in this table for completeness.
Selecting the Best Route
Cisco routers use the following two parameters to select the best path when they learn two or more
routes to the same destination from different routing protocols:
■ Administrative distance: As described in the previous section, the administrative distance is used to rate a routing protocol's believability. This criterion is the first thing a router uses to determine which routing protocol to believe if more than one protocol provides route information for the same destination.
■ Routing metric: The routing metric is a value representing the path between the local router and the destination network, according to the routing protocol being used. This metric is used to determine the routing protocol's "best" path to the destination.
Route Redistribution Direction
Redistribution is often applied between the Campus Core and Enterprise Edge protocols. As
shown in Figure 7-16, redistribution is possible in two ways:
■ One-way route redistribution: Routing information is redistributed from one routing protocol or domain to another, but not vice versa. Static or default routes are required in the opposite direction to provide connectivity.
■ Two-way route redistribution: Routing information is redistributed from one routing protocol or domain to another, and vice versa. Static or default routes are not required because all routing information is passed between the two entities.
Figure 7-16  Route Redistribution Can Be One-Way or Two-Way
[Figure: A Campus Core protocol (OSPF or EIGRP) and an Enterprise Edge protocol (OSPF, EIGRP, BGP, or static routes) exchange routes through one-way or two-way redistribution, with filtering applied at the redistribution point.]
Specific routes can be filtered, and the administrative distance of redistributed routes can be
changed in either of these cases to reduce the possibility of routing loops and ensure that traffic is
routed optimally.
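For instance, the administrative distance of routes redistributed into OSPF (external routes) could be raised above that of internal routes, so that a path learned natively is always preferred over one re-learned through a second redistribution point; the value 200 here is only illustrative:

```
router ospf 1
 ! Prefer intra- and inter-area routes (110) over redistributed externals
 distance ospf external 200
```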
Route Redistribution Planning
When deciding where and how to use route redistribution, determine the following:
■ The routing protocols and domains that will be used in the network
■ The routing protocol and domain boundaries (the boundary routers)
■ The direction of route redistribution (one-way or two-way)
If route redistribution is not carefully designed, suboptimal routing and routing loops can be
introduced into the network when routes are redistributed in a network with redundant paths
between dissimilar routing protocols or domains. Route filtering (as described in the “Route
Filtering” section of this chapter) helps solve this problem.
Route Redistribution in the Enterprise Architecture
Redistribution is needed in the Building Distribution layer when different routing protocols or
domains exist in the Building Access layer and Campus Core. Redistribution might also be needed
between the Campus Core and the Enterprise Edge, including to and from WAN module routers,
from static or BGP routes in the Internet Connectivity module, and from static routes in the
Remote Access and VPN module. Figure 7-17 shows a sample enterprise network with
redistribution points throughout.
Figure 7-17  Route Redistribution in the Enterprise Architecture
[Figure: Redistribution points connect the Building Access and Building Distribution layers, the Campus Core, the Edge Distribution with its servers, and the Enterprise Edge, including the WAN module (to the WAN) and the Internet and Remote Access modules (to the Internet and PSTN).]
In this example, some remote sites require connectivity to the Server Farm; therefore, one-way
redistribution is performed to inject routes from these remote sites into the Campus Core. Some
remote sites require connectivity to the entire network; this is provided by two-way redistribution
(otherwise, static routes would have to be configured in the Campus Core). The Building
Distribution layer propagates only a default route down to the Building Access layer, whereas the
Building Access layer advertises its own subnets to the Building Distribution layer.
Redistribution might also be necessary in the Remote Access and VPN and Internet Connectivity
modules. For a Remote Access and VPN module with static routing, static routes are injected into
the Campus Core routing protocol. In the opposite direction, default routes provide connectivity
for remote users.
In an Internet Connectivity module with only one exit point, that exit point is the default route for
traffic destined for the Internet and is propagated through the core routing protocol. If multiple exit
points toward multiple ISPs exist, BGP provides Internet connectivity, and redistribution can be
used.
KEY POINT  Redistribution with BGP requires careful planning. For more details, see Authorized Self-Study Guide: Building Scalable Cisco Internetworks (BSCI), Third Edition, by Diane Teare and Catherine Paquet, Cisco Press, 2006.
Route Filtering
As mentioned, route filtering might be required when redistributing routes. Route filtering
prevents the advertisement or acceptance of certain routes through the routing domain. Filtering
can be configured as follows:
■ On a routing domain boundary where redistribution occurs
■ Within the routing domain to isolate some parts of the network from other parts
■ To limit routing traffic from untrusted external domains
Filtering is used with route redistribution, primarily to prevent suboptimal routing and routing
loops that might occur when routes are redistributed at multiple redistribution points. Route
filtering is also used to prevent routes about certain networks, such as a private IP address space,
from being sent to or received from remote sites.
Redistributing and Filtering with BGP
An enterprise border router running BGP typically announces only the major network (the prefix
assigned to the enterprise network) to the external domains, excluding any details about subnets.
This is done using the BGP network router configuration command, which allows BGP to
advertise a network that is already part of its IP routing table.
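A sketch of this approach (the prefix is a hypothetical assigned block from the documentation range, and the AS number is private per this book's convention):

```
router bgp 65000
 ! Announce only the enterprise's assigned prefix; the matching
 ! route must already exist in the IP routing table
 network 203.0.113.0 mask 255.255.255.0
```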
Alternatively, internal networks could be summarized into one major subnet that covers the
assigned public address space and redistributed into BGP. However, redistributing from an IGP
into BGP is not recommended, because any change in the IGP routes—for example, if a link goes
down—can cause a BGP update, which might result in unstable BGP tables.
If IGP routes are redistributed into BGP, make sure that only local routes—those that originate
within the AS—are redistributed. For example, routes learned from other autonomous systems
(that were learned by redistributing BGP into the IGP) must not be sent out from the IGP again,
because routing loops could result, or the AS could inadvertently become a transit AS. Private IP
addresses must not be redistributed, so they should also be filtered. Configuring this filtering can
be complex.
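One common shape for this filtering (the list names, process numbers, and the permitted prefix are hypothetical) is a prefix list applied through a route map at the redistribution point:

```
! Deny RFC 1918 space; permit only the locally originated public prefix
ip prefix-list LOCAL-ONLY seq 5 deny 10.0.0.0/8 le 32
ip prefix-list LOCAL-ONLY seq 10 deny 172.16.0.0/12 le 32
ip prefix-list LOCAL-ONLY seq 15 deny 192.168.0.0/16 le 32
ip prefix-list LOCAL-ONLY seq 20 permit 203.0.113.0/24
!
route-map IGP-TO-BGP permit 10
 match ip address prefix-list LOCAL-ONLY
!
router bgp 65000
 redistribute ospf 1 route-map IGP-TO-BGP
```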
In the other direction, either a default route or a default route plus a few other specific routes is
passed into an enterprise AS. These can then be redistributed into the IGP running in the AS.
Redistributing all BGP routes into an IGP is not advised, because non-BGP participating routers
do not require full Internet routing tables, and IGP protocols are unable to process large numbers
of advertised routes. Unnecessary routes should be filtered.
Route Summarization
Chapter 6 explains route summarization (which is also called route aggregation or supernetting).
In route summarization, a single summary address in the routing table represents a set of routes.
Summarization reduces the routing update traffic, the number of routes in the routing table, and
the overall router overhead in the router receiving the routes.
The Benefits of Route Summarization
A large flat network is not scalable because routing traffic consumes considerable network
resources. When a network change occurs, it is propagated throughout the network, which requires
processing time for route recomputation and bandwidth to propagate routing updates.
A network hierarchy can reduce both routing traffic and unnecessary route recomputation. To
accomplish this, the network must be divided into areas that enable route summarization. With
summarization in place, a route flap (a route that goes down and up continuously) that occurs in
one network area does not influence routing in other areas. Instabilities are isolated and
convergence is improved, thereby reducing the amount of routing traffic, the size of the routing
tables, and the required memory and processing power for routing. Summarization is configured
manually, or occurs automatically at the major network boundary in some routing protocols.
KEY POINT  Recall from Chapter 6 that being able to summarize requires a well-planned underlying IP addressing design.
Recommended Practice: Summarize at the Distribution Layer
It is a recommended practice to configure summarization in a large network from the distribution
layers toward the core, as illustrated in Figure 7-18. The distribution layer should summarize all
networks on all interfaces toward the Campus Core. WAN connectivity and remote access points
should be summarized toward the core. For example, remote subnets could be summarized into
major networks, and only those major networks would be advertised to the core.
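As a sketch using the 10.1.0.0/16 summary from this example (the interface name and process numbers are hypothetical), EIGRP summarizes on the core-facing interface, whereas OSPF summarizes at the ABR with an area range:

```
! EIGRP: summarize on the interface toward the Campus Core
interface TenGigabitEthernet1/1
 ip summary-address eigrp 100 10.1.0.0 255.255.0.0
!
! OSPF alternative: summarize area 1's subnets at the ABR
router ospf 1
 area 1 range 10.1.0.0 255.255.0.0
```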
Figure 7-18  Summarizing at the Distribution Layer Reduces Routing Traffic
[Figure: Access layer subnets 10.1.1.0/24 and 10.1.2.0/24 connect to a pair of distribution switches, which advertise only the summary 10.1.0.0/16 toward the core; the rest of the network receives summary routes only and sends no queries for individual subnets.]
Implementing summarization at the distribution layer optimizes the convergence process. For
example, if a link to an access layer device goes down, return traffic to that device is dropped at
the distribution layer until the routing protocol converges. Summarizing also limits the number of
peers that an EIGRP router must query or the number of LSAs that an OSPF router must process,
which also reduces the convergence time.
Core routers that receive two routes for a network install the more-specific route in the routing
table. Therefore, summary routes for primary links must use a longer subnet mask.
Recommended Practice: Passive Interfaces for IGP at the Access Layer
Another recommended practice is to limit unnecessary peering across the access layer. In Figure
7-19, the distribution multilayer switches are directly connected to each other and are also
interconnected with three access layer switches, each having four VLANs. By default, the
distribution layer devices send routing updates and attempt to peer with the remote distribution
layer devices across the links from the access switches on every VLAN. Having the distribution
switches form neighbor relationships over these 12 access layer connections provides no benefit
and wastes resources (including CPU processing time and memory). Therefore, the interfaces on
the distribution layer devices toward the access layer devices are configured as passive interfaces
under the routing protocol configuration. This suppresses the advertisement of routing updates for
that routing protocol on those interfaces.
Figure 7-19  Limit Unnecessary Peering Across the Access Layer
[Figure: Two interconnected distribution multilayer switches send routing updates down to the access layer switches; without passive interfaces, they would attempt to peer with each other across every access layer VLAN.]
Configuring Passive Interfaces
In a network that has many VLAN interfaces on distribution layer routers, configuring each
interface as a passive interface could result in many commands. To ease the configuration burden,
Cisco IOS Release 12.0 introduced the passive-interface default command, which makes all interfaces passive. This command can be used in conjunction with the no passive-interface interface command to run the routing protocol on only the interfaces from the distribution layer devices to the core layer devices, minimizing the configuration required.
Summary
In this chapter, you learned about selecting routing protocols for enterprise networks. The
following topics were explored:
■ Static versus dynamic routing
■ Interior versus exterior gateway routing protocols
■ Distance vector versus link-state versus hybrid routing protocols
■ Routing protocol metrics
■ Routing protocol convergence
■ Flat versus hierarchical protocols
■ EIGRP, a Cisco-proprietary routing protocol that includes a topology table for maintaining all routes received from its neighbors. The best of these routes are put in the routing table.
■ OSPF, an open-standard protocol that was developed to overcome the limitations of RIP
■ Integrated IS-IS, a routing protocol designed for the OSI protocol suite and adapted for IP
■ BGP, an exterior routing protocol primarily used for inter-AS routing
■ Route redistribution use in a network running multiple routing protocols
■ Route filtering to prevent the advertisement of certain routes through the routing domain
■ Route summarization to represent a series of routes by a single summary address
References
For additional information, refer to these resources:
■ Cisco Systems, Inc., Designing Large-Scale IP Internetworks, http://www.cisco.com/univercd/cc/td/doc/cisintwk/idg4/nd2003.htm.
■ Cisco Systems, Inc., Designing a Campus Network for High Availability, http://www.cisco.com/application/pdf/en/us/guest/netsol/ns432/c649/cdccont_0900aecd801a8a2d.pdf.
■ Teare, Diane and Catherine Paquet. Authorized Self-Study Guide: Building Scalable Cisco Internetworks (BSCI), Third Edition. Cisco Press, 2006.
■ Comer, Douglas E. and D. L. Stevens. Internetworking with TCP/IP, Volume 1: Principles, Protocols, and Architecture, Fifth Edition. Englewood Cliffs, New Jersey: Prentice-Hall, 2005.
Case Study: ACMC Hospital Routing Protocol Design
This case study is a continuation of the ACMC Hospital case study introduced in Chapter 2.
Case Study General Instructions
Use the scenarios, information, and parameters provided at each task of the ongoing case study. If
you encounter ambiguities, make reasonable assumptions and proceed. For all tasks, use the initial
customer scenario and build on the solutions provided thus far. You can use any and all
documentation, books, white papers, and so on.
In each step, you act as a network design consultant. Make creative proposals to accomplish the
customer’s business needs. Justify your ideas when they differ from the provided solutions. Use
any design strategies you feel are appropriate. The final goal of each case study is a paper solution.
Appendix A, “Answers to Review Questions and Case Studies,” provides a solution for each step
based on assumptions made. There is no claim that the provided solution is the best or only
solution. Your solution might be more appropriate for the assumptions you made. The provided
solution helps you understand the author’s reasoning and allows you to compare and contrast your
solution.
In this case study you determine the routing protocol design for the ACMC hospital network.
Complete the following steps:
Step 1  Determine a suitable routing protocol or protocols for the ACMC network, and design the protocol hierarchy.
Step 2  What summary routes could be configured in this network?
Review Questions
Answer the following questions, and then see Appendix A for the answers.
1. In what situations could static routing be preferred over dynamic routing?
2. What do dynamic routing protocols do?
3. Which type of routing protocol is used for interconnecting autonomous systems?
4. Do IGPs or EGPs typically converge faster?
5. What is BGP multihoming?
6. How do distance vector and link-state routing protocols differ?
7. What are triggered updates?
8. What parameters do the following routing protocols use in their metric calculation by default?
   ■ RIP
   ■ EIGRP
   ■ OSPF
   ■ BGP
   ■ IS-IS
9. What is convergence?
10. How does the speed of convergence affect the network?
11. What is an advantage of a hierarchical network versus a flat network?
12. A large organization has decided to connect its branch offices to the appropriate regional offices. Each regional office has a minimum of two and a maximum of five branch offices with which it will connect. Each branch office uses low-end routers that will directly connect to their regional office router via a Frame Relay permanent virtual circuit link, effectively creating a hub-and-spoke topology (star network). No physical connections exist between the branch office routers. OSPF is run in the rest of the network, but the routing protocol that runs between the regional office and the branch offices does not need to be OSPF. Select the two best options for use between the regional and branch offices:
    a. Deploy EIGRP in both directions.
    b. Deploy IS-IS in both directions.
    c. Deploy OSPF in both directions.
    d. Use static routes in both directions, with a default static route from each branch to the regional office, and static routes on each regional router toward the branch networks.
13. A network consists of links with varying bandwidths. Would RIPv2 be a good routing protocol choice in this network? Why or why not?
14. What are some features of EIGRP that make it an appropriate choice for an enterprise routing protocol?
15. What is an EIGRP feasible successor?
16. Does OSPF support manual route summarization on all routers?
17. What is an OSPF LSA?
18. What is the OSPF metric?
19. For what network layer protocols does Integrated IS-IS provide support?
20. What is the difference between an Integrated IS-IS backbone and an OSPF backbone?
21. Why might Integrated IS-IS be better than OSPF in a very large network?
22. What is the main use of BGP?
23. Which routing protocols are likely to be used in an enterprise Campus Core?
24. Is IS-IS typically a good choice of routing protocol for the Building Distribution layer?
25. What is route redistribution?
26. Which parts of the Enterprise Architecture are likely to implement redistribution?
27. What is route filtering?
28. When is route filtering required?
29. Why does redistributing from an IGP into BGP require caution?
30. What is route summarization, and why would a network need it?
31. What is a passive interface?
CHAPTER 8

Voice Network Design Considerations

This chapter introduces voice design principles and contains the following sections:

■ Traditional Voice Architectures and Features
■ Integrating Voice Architectures
■ Voice Issues and Requirements
■ Introduction to Voice Traffic Engineering
■ Summary
■ References
■ Case Study: ACMC Hospital Network Voice Design
■ Review Questions
This chapter introduces voice design principles. It begins with an overview of traditional voice
architectures and features and continues with a discussion of integrated voice architectures.
This chapter describes how converged voice networks can run the same applications as a
telephony network, but in a more cost-effective and scalable manner. It describes voice and data
networking concepts and introduces VoIP and IP telephony.
This chapter discusses voice quality issues, coding and compression standards, and bandwidth
considerations and requirements when voice traffic is present on a network. Quality of service
(QoS) mechanisms available for voice are described, and voice traffic engineering concepts are
examined.
Traditional Voice Architectures and Features
This section introduces the traditional telephony infrastructure and explains its major
components. It describes analog and digital signaling and the process to convert between the
two. PBX and Public Switched Telephone Network (PSTN) switches are described and
contrasted. The telephone infrastructure and connections between telephony devices are
examined. Telephony signaling mechanisms are described, and PSTN numbering plans are
explained.
NOTE We examine traditional telephony in this section to better understand the features
and services that must be provided on a converged network.
Analog and Digital Signaling
The human voice generates sound waves; a telephone converts the sound waves into analog
signals. However, analog transmission is not particularly efficient. Analog signals must be
amplified when they become weak from transmission loss as they travel. However, amplification
of analog signals also amplifies noise.
The PSTN is a collection of interconnected voice-oriented public telephone networks, both
commercial and government-owned. The PSTN today consists almost entirely of digital
technology, except for the final link from the central (local) telephone office to the user. To obtain
clear voice connections, the PSTN switches convert analog speech to a digital format and send it
over the digital network. At the other end of the connection, the digital signal is converted back to
analog and to the normal sound waves that the ear picks up. Digital signals are more immune to
noise, and the digital network does not induce any additional noise when amplifying signals.
Signals in digital networks are transmitted over great distances and are coded, regenerated, and
decoded without degradation of quality. Repeaters amplify the signal, restore it to its original
condition, and send this clean signal to the next network destination.
The Analog-to-Digital Process
Pulse code modulation (PCM) is the process of digitizing analog voice signals. Several steps are
involved in converting an analog signal into PCM digital format, as shown in Figure 8-1 and
described here:
■ Filtering: Filters out the signal's nonspeech frequency components. Most of the energy of spoken language ranges from approximately 300 hertz (Hz) to 3400 Hz; this is the 3100-Hz bandwidth, or range, for standard speech. Analog waveforms are put through a voice frequency filter to filter out anything greater than 4000 Hz.

■ Sampling: Samples the filtered input signal at a constant frequency, using a process called pulse amplitude modulation (PAM). This step uses the original analog signal to modulate the amplitude of a pulse train that has a constant amplitude and frequency. The filtered analog signal is sampled at twice the highest frequency of the analog input signal (4000 Hz); therefore, the signal is sampled 8000 times per second, or every 125 microseconds (µs).
Sampling Frequency
Analog speech is filtered at 4000 Hz before being sampled. The Nyquist theorem states that a
signal should be sampled at a rate at least two times the input frequency to obtain a quality
representation of the signal. Therefore, the input analog signal is sampled at a rate of 8000 times
per second.
■ Digitizing: Digitizes the samples in preparation for transmission over a telephony network; this is the PCM process. PCM takes the PAM process one step further by encoding each analog sample using binary code words. An analog-to-digital converter is required on the source side, and a digital-to-analog converter is required on the destination side.
Figure 8-1 Analog-to-Digital Conversion Process (a filtered analog audio signal passes through the sampling stage and the digitizing stage to produce a PCM bit stream)
The digitizing process is further divided into the following steps:
■ Quantization and coding: A process that converts each analog sample value into a discrete value to which a unique digital code word can be assigned. As the input signal sample enters the quantization phase, it is assigned to a quantization interval. All quantization intervals are equally spaced throughout the dynamic range of the input analog signal. Each quantization interval is assigned a discrete binary code word value. The standard word size used is 8 bits, enabling 256 possible quantization intervals.
KEY POINT: Because the input analog signal is sampled 8000 times per second and each sample is assigned an 8-bit code word, the maximum transmission bit rate for telephony systems using PCM is 8000 samples per second * 8 bits per sample, which results in 64,000 bits per second, or 64 kilobits per second (kbps).
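The sampling and digitizing steps can be sketched numerically. The following Python fragment is an illustration only: it uses simple uniform quantization (real telephony codecs use the companded quantization described next) to sample a test tone 8000 times per second and assign each sample an 8-bit code word.

```python
import math

SAMPLE_RATE = 8000      # Nyquist: twice the 4000-Hz filtered bandwidth
BITS_PER_SAMPLE = 8     # 256 possible quantization intervals

def pcm_encode(signal_fn, duration_s):
    """Sample an analog signal function and quantize each sample
    to an 8-bit code word (uniform quantization sketch)."""
    samples = []
    for i in range(int(SAMPLE_RATE * duration_s)):
        t = i / SAMPLE_RATE                  # one sample every 125 microseconds
        amplitude = signal_fn(t)             # analog value in [-1.0, 1.0]
        # Map [-1.0, 1.0] onto 256 equally spaced intervals
        code = min(255, int((amplitude + 1.0) / 2.0 * 256))
        samples.append(code)
    return samples

# A 1-kHz test tone, well inside the 300-3400 Hz voice band
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
codes = pcm_encode(tone, duration_s=1.0)

bit_rate = SAMPLE_RATE * BITS_PER_SAMPLE
print(len(codes), bit_rate)   # prints: 8000 64000
```

One second of speech therefore always produces 8000 code words, giving the 64-kbps rate of a single telephony channel.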
■ Companding: The process of first compressing an analog signal at the source and then expanding (decompressing) this signal back to its original size when it reaches its destination. (Combining the terms compressing and expanding creates the term companding.) During the companding process, input analog signal samples are compressed into logarithmic segments, and each segment is quantized and coded using uniform quantization. The compression process is logarithmic, meaning that the compression increases as the sample signals increase. In other words, larger sample signals are compressed more than smaller sample signals, thereby causing the quantization noise to increase as the sample signal increases. This results in a more accurate value for smaller-amplitude signals and a uniform signal-to-noise ratio across the input range.
Two basic variations of logarithmic companding are commonly used: The a-law
companding standard is used in Europe, and Mu-law is used in North America and
Japan. The methods are similar—they both use logarithmic compression to achieve
linear approximations in 8-bit words—but they are not compatible.
A-law and Mu-law Companding
Following are the similarities between a-law and Mu-law companding:

■ Both are linear approximations of a logarithmic input/output relationship.

■ Both are implemented using 8-bit code words (256 levels, one for each quantization interval), resulting in a bit rate of 64 kbps.

■ Both break a dynamic range into 16 segments: eight positive and eight negative segments. Each segment is twice the length of the preceding one and uses uniform quantization within each segment.

■ Both use a similar approach to coding the 8-bit word. The first bit (the most significant bit) identifies polarity; bits 2, 3, and 4 identify the segment; and the final 4 bits quantize the segment.
The differences between a-law and Mu-law include the following:

■ Different linear approximations lead to different lengths and slopes.

■ The numerical assignment of the bit positions in the 8-bit code word to segments and quantization levels within segments is different.

■ A-law provides a greater dynamic range than Mu-law.

■ Mu-law provides better signal/distortion performance for low-level signals than a-law.

■ A-law requires 13 bits for a uniform PCM equivalent; Mu-law requires 14 bits for a uniform PCM equivalent.

■ An international connection should use a-law; Mu-law-to-a-law conversion is the responsibility of the Mu-law country.
This information was adapted from Cisco’s Waveform Coding Techniques document, available at
http://www.cisco.com/warp/public/788/signalling/waveform_coding.html#subrstsix.
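The logarithmic compression described above can be sketched with the continuous Mu-law curve. This is only the underlying mathematical relationship; real G.711 codecs use the segmented 8-bit linear approximation described in the sidebar rather than evaluating logarithms directly.

```python
import math

MU = 255  # Mu-law parameter used in North America and Japan

def mu_law_compress(x):
    """Continuous Mu-law compression of a sample x in [-1.0, 1.0]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse (expansion) of Mu-law compression."""
    return math.copysign(((1 + MU) ** abs(y) - 1) / MU, y)

# Small signals gain resolution: low amplitudes are boosted strongly...
print(round(mu_law_compress(0.01), 3))   # a 1% sample maps to roughly 0.23
print(round(mu_law_compress(0.5), 3))    # a 50% sample maps to roughly 0.88
# ...and expansion at the far end recovers the original value
print(mu_law_expand(mu_law_compress(0.25)))
```

The steep gain at low amplitudes is exactly what gives Mu-law its improved signal/distortion performance for quiet talkers.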
Time-Division Multiplexing in PSTN
Time-division multiplexing (TDM) is used in networks that are commonly deployed by telephone companies, including the PSTN. As illustrated in Figure 8-2, TDM is a digital transmission technique for simultaneously carrying multiple signals over a single trunk line by interleaving octets from each signal into different time slots.
Figure 8-2 Circuit-Switched Networks Use Time-Division Multiplexing (octets from n conversations, whether carrying voice or silence, are multiplexed into fixed time slots on the trunk)
The PSTN allocates a dedicated 64-kbps digital channel for each call. Although TDM cannot
allocate bandwidth on demand as packet switching can, TDM’s fixed-bandwidth allocation
ensures that a channel is never blocked because of competition for bandwidth resources on another
channel, and that performance does not degrade because of network congestion.
With time slot allocation, the number of simultaneous calls cannot exceed the number of TDM
slots in the trunk. One call always allocates one TDM slot, regardless of whether silence or speech
is transmitted. Time slot allocation ensures that connections always have access to a trunk, thereby
resulting in low delay. However, because of the allocation method, the overall trunk utilization,
also known as trunk efficiency, becomes relatively low.
The low trunk efficiency of circuit-switched networks is a major driver for the migration to unified
packet-switched networks in which bandwidth is consumed only when there is traffic.
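The interleaving and its inefficiency can be shown in a minimal sketch. The channel contents below are hypothetical octet streams; the point is that a silent channel consumes its time slot in every frame exactly as an active one does.

```python
def tdm_interleave(channels):
    """Interleave one octet from each channel into successive TDM frames.
    Every frame carries exactly one time slot per channel, whether the
    channel holds speech or silence (fixed allocation)."""
    frames = []
    n_octets = len(channels[0])
    for i in range(n_octets):
        frame = [ch[i] for ch in channels]  # one slot per channel
        frames.append(frame)
    return frames

# Hypothetical 4-channel trunk: two active calls, two carrying silence (0x00)
voice_a = [0x51, 0x52, 0x53]
silence = [0x00, 0x00, 0x00]
voice_b = [0x71, 0x72, 0x73]
trunk = tdm_interleave([voice_a, silence, voice_b, silence])
print(trunk[0])   # first frame: [81, 0, 113, 0] -- silent slots are still sent
```

Half of every frame here carries silence, which is the low trunk efficiency that packet-switched voice avoids.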
PBXs and the PSTN
This section introduces PBX and PSTN switches and networks.
Differences Between a PBX and a PSTN Switch
As shown in Table 8-1, PBXs and PSTN switches share many similarities, but they also have many
differences.
Table 8-1 PBX and PSTN Switch Comparison

PBX                                                   | PSTN Switch
Used in the private sector                            | Used in the public sector
Scales to thousands of phones                         | Scales to hundreds of thousands of phones
Mostly digital                                        | Mostly digital
Uses 64-kbps circuits                                 | Uses 64-kbps circuits
Uses proprietary protocols to control telephones      | Uses open-standard protocols between switches and telephones
Interconnects remote branch subsystems and telephones | Interconnects with other PSTN switches, PBXs, and telephones
Both the PBX and PSTN switch systems use 64-kbps circuits; however, the scale is very different.
A PSTN switch can support hundreds of thousands of telephones, whereas a PBX can support only
several thousand.
KEY POINT: A PSTN switch's primary task is to provide residential telephony. A PBX supports user telephones within a company.
PBX vendors often create proprietary protocols to enable their PBXs to intercommunicate and transparently carry additional features through their voice network. In addition, only the vendor's telephones can be connected to its PBX. This forces enterprises to standardize on one brand of PBX, locking the business customer into a single vendor.
NOTE Many vendors are implementing standards-based signaling protocols that enable
interoperability between different vendors’ PBXs. The two standards are Q Signaling (QSIG)
and Digital Private Network Signaling System (DPNSS), as described in the “Digital Telephony
Signaling” section later in this chapter.
Figure 8-3 illustrates the location of and communication between the PSTN and PBXs. PSTN
switches connect residential and business users, but PBXs are mainly used for business purposes.
PBXs are typically found at corporate locations, whereas PSTN switches are used to build the
PSTN network and are located in central offices (CO).
Figure 8-3 PBXs and the PSTN Interconnect to Facilitate Communication (PBXs at corporate locations connect to switches within the PSTN)
PBX Features
A PBX is a business telephone system that provides business features such as call hold, call
transfer, call forward, follow-me, call park, conference calls, music on hold, call history, and voice
mail. Most of these features are not available in traditional PSTN switches.
A PBX switch often connects to the PSTN through one or more T1 or E1 digital circuits. A PBX
supports end-to-end digital transmission, employs PCM switching technology, and supports both
analog and digital proprietary telephones.
Recall from Chapter 5, “Designing Remote Connectivity,” that the United States, Canada, and
Japan use T1. A T1 trunk can carry 24 fixed 64-kbps channels for either voice or data, using PCM
signals and TDM, plus additional bits for framing, resulting in an aggregate carrying capacity of
1.544 megabits per second (Mbps). T1 lines originally used copper wire but now also include
optical and wireless media.
In Europe, the trunk used to carry a digital transmission is an E1. An E1 trunk can carry up to 31
fixed 64-kbps channels for data and signaling, with another 64-kbps channel reserved for framing,
giving an aggregate carrying capacity of 2.048 Mbps.
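The T1 and E1 aggregate rates quoted above follow directly from the channel arithmetic:

```python
CHANNEL_BPS = 64_000   # one 64-kbps PCM channel

# T1: 24 channels of 8 bits plus 1 framing bit per frame, 8000 frames per second
t1_frame_bits = 24 * 8 + 1
t1_bps = t1_frame_bits * 8000
print(t1_bps)          # 1544000 -> 1.544 Mbps

# E1: 32 time slots of 64 kbps (31 for data and signaling, 1 for framing)
e1_bps = 32 * CHANNEL_BPS
print(e1_bps)          # 2048000 -> 2.048 Mbps
```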
PBXs support end-to-end digital transmission, use PCM switching technology, and support both
analog and digital proprietary telephones. A local PBX provides several advantages for an
enterprise:
■ Local calls between telephones within the PBX or group of PBXs are free of charge.

■ Most PBX telephone system users do not call externally, through the T1 or E1 circuits, at the same time. Therefore, a company with a PBX needs only enough external lines to the PSTN to carry the maximum expected number of simultaneous calls, resulting in PSTN cost savings.

■ When adding a new user, changing a voice feature, or moving a user to a different location, there is no need to contact the PSTN carrier; the local administrator can reconfigure the PBX.
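The trunk-sizing decision in the second advantage is normally made with voice traffic engineering, introduced later in this chapter. As a rough sketch, the classic Erlang B recurrence estimates how many trunks a given busy-hour load needs; the 5-erlang load and 1% blocking target below are made-up example figures.

```python
def erlang_b(offered_erlangs, trunks):
    """Blocking probability for an offered load (in erlangs) on a trunk
    group, computed with the standard iterative Erlang B recurrence."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = (offered_erlangs * b) / (n + offered_erlangs * b)
    return b

def trunks_needed(offered_erlangs, grade_of_service=0.01):
    """Smallest trunk count whose blocking meets the target grade of service."""
    n = 1
    while erlang_b(offered_erlangs, n) > grade_of_service:
        n += 1
    return n

# Hypothetical office: 5 erlangs of busy-hour PSTN traffic, 1% blocking target
print(trunks_needed(5.0))   # 11 trunks -- far fewer than the telephone count
```

Even a site with hundreds of telephones may need only a handful of trunks, which is exactly the saving the PBX provides.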
However, the PBX adds another level of complexity: The enterprise customer must configure and
maintain the PBX. Figure 8-4 illustrates a typical enterprise telephone network that has
proprietary telephones connected to the PBX and a trunk between the PBX and the PSTN network.
Figure 8-4 A PBX Can Reduce the Number of Trunks to the PSTN (at the corporate location, the number of telephones is greater than the number of trunks to the PSTN switch; a proprietary protocol runs between the phones and the PBX)
PSTN Switches
The PSTN appears to be a single large network with telephone lines connected. In reality, the
PSTN is composed of circuits, switches, signaling devices, and telephones. Many different
companies own and operate different systems within the PSTN.
PSTN Features
A PSTN switch’s primary role is to connect the calling and called parties. If the two parties are
physically connected to the same PSTN switch, the call remains local; otherwise, the PSTN switch
forwards the call to the destination switch that owns the called party.
PSTN switches interconnect business PBXs and public and private telephones. Large PSTN
switches are located at COs, which provide circuits throughout the telephony network. PSTN
switches are deployed in hierarchies to provide resiliency and redundancy to the PSTN network
and avoid a single point of failure.
PSTN signaling traditionally supported only basic features such as caller ID and direct inward
dialing. Modern PSTN switches now support, on a fee basis, many traditional PBX services,
including conferencing, forwarding, call holding, and voice mail.
PSTN Services
Modern PSTN service providers offer competitive services to differentiate themselves and
generate additional revenue. These PSTN services include the following:
■ Centrex: Centrex is a set of specialized business solutions (primarily, but not exclusively, for voice service) in which the service provider owns and operates the equipment that provides both call control and service logic functions; therefore, the equipment is located on the service provider's premises.

■ Voice virtual private networks (VPN): Voice VPNs interconnect corporate voice traffic among multiple locations over the PSTN. PBXs are connected to the PSTN instead of directly over tie trunks. The PSTN service provider provides call routing among locations, and all PBX features are carried transparently across the PSTN.

■ Voice mail: Voice mail is an optional service that lets PSTN customers divert their incoming PSTN calls to a voice mailbox when they are unable to answer their telephones, such as when the line is busy or they are unavailable. Alternatively, all calls can be diverted to the voice mailbox.

■ Call center: A call center is a place for doing business by telephone, combined with a centralized database that uses an automatic call distribution (ACD) system. Call centers require live agents to accept and handle calls.

■ Interactive voice response: Interactive voice response (IVR) systems allow callers to exchange information over the telephone without an intermediary live agent. The caller and the IVR system interact using a combination of spoken messages and dual-tone multifrequency (DTMF) touch-tone telephone pad buttons.
Local Loops, Trunks, and Interswitch Communications
Figure 8-5 illustrates a typical telephone infrastructure and connections between telephony
devices.
Figure 8-5 Local Loops, Trunks, and Interswitch Communication (station lines connect telephones to a PBX; local loops, or telephone lines, connect subscribers to the CO switch; PBX tie trunks, CO trunks, and PSTN switch trunks interconnect the switching systems)
The telephone infrastructure starts with a simple pair of copper wires running to the end user’s
home or business. This physical cabling is known as a local loop or telephone line; the local loop
physically connects the home telephone to the CO PSTN switch. Similarly, the connection
between an enterprise PBX and its telephones is called the station line.
A trunk is a communication path between two telephony systems. Available trunk types, shown in Figure 8-5, include the following:

■ Tie trunk: Connects enterprise PBXs without connecting to the PSTN (in other words, not connecting to a phone company's CO). Tie trunks are used, for example, to connect PBXs in different cities so that the enterprise can use the PBX rather than the PSTN for intercity calls between offices and, as a result, save on long-distance toll charges. A connection to the PSTN, via a CO trunk, is still required for off-net calls (to nonoffice numbers).

■ CO trunk: Connects CO switches to enterprise PBXs. Enterprises connect their PBXs to the PSTN with PBX-to-CO trunks. The telephone service provider is responsible for running CO-to-PBX trunks between its CO and enterprise PBXs; from a service provider point of view, these are lines or business lines.

■ PSTN switch trunk: Interconnects CO switches; also called interoffice trunks.
As shown in Figure 8-6, another type of trunk, the foreign exchange (FX) trunk, is an analog interface used to interconnect a PBX to telephones, other PBXs, or the PSTN. FX trunks save on long-distance toll calls; the dial tone from a different toll region is produced via the FX trunk at a reduced tariff.
Figure 8-6 Foreign Exchange Trunks (the FXO interface connects toward the PSTN or a PBX station port; the FXS interface connects toward the end device)
Two types of FX trunk interfaces exist:

■ Foreign Exchange Office (FXO): This interface emulates a telephone. It creates an analog connection to a PSTN CO or to a station interface on a PBX. The FXO interface sits on the PSTN or PBX end of the connection and plugs directly into the line side of the PSTN or PBX so that the PSTN or PBX thinks the FXO interface is a telephone. The FXO interface provides either pulse or DTMF digits for outbound dialing. The PBX or PSTN notifies the FXO of an incoming call by sending ringing voltage to the FXO. Likewise, the FXO answers a call by closing the loop to allow current flow. After current is flowing, the FXO interface transports the signal to the Foreign Exchange Station (FXS).

■ FXS: This interface emulates a PBX. It connects directly to a standard telephone, fax machine, or similar device and supplies line power, ring voltage, and dial tone to the end device. An example of where an FXS is used to emulate a PBX is in locations where there are not physical lines for every telephone.
Telephony Signaling
In a telephony system, a signaling mechanism is required for establishing and disconnecting
telephone communications.
Telephony Signaling Types
The following forms of signaling are used when a telephone call is placed via a PBX:

■ Between the telephone and PBX
■ Between the PBX and PSTN switch
■ Between the PSTN switches
■ Between two PBXs
At a high level, there are two signaling realms, as shown in Figure 8-7:

■ Local-loop signaling: Between a PSTN or PBX switch and a subscriber (telephone)
■ Trunk signaling: Between PSTN switches, between a PSTN switch and a PBX, or between PBX switches
Figure 8-7 Telephony Signaling Includes Local-Loop and Trunk Signaling (local-loop signaling runs between telephones and a PBX or PSTN switch; trunk signaling runs between PBXs and PSTN switches)
Simple signaling examples include the ringing of the telephone, a dial tone, and a ring-back tone.
Following are the three basic categories of signals commonly used in telephone networks:
■ Supervision signaling: Typically characterized as on-hook, off-hook, and ringing, supervision signaling alerts the CO switch to the state of the telephone on each local loop. Supervision signaling is used, for example, to initiate a telephone call request on a line or trunk and to hold or release an established connection.

■ Address signaling: Used to pass dialed digits (pulse or DTMF) to a PBX or PSTN switch. These dialed digits provide the switch with a connection path to another telephone or customer premises equipment.

■ Informational signaling: Includes dial tone, busy tone, reorder tone, and tones indicating that a receiver is off-hook or that no such number exists, such as those used with call progress indicators.
For a telephone call to take place, all three types of signaling occur.
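As a concrete example of address signaling, each DTMF digit is the sum of one low-frequency (row) tone and one high-frequency (column) tone from the standard keypad grid. The lookup below is an illustrative sketch, not production tone-generation code:

```python
# DTMF address signaling: each key is identified by one row and one column tone
ROW_HZ = (697, 770, 852, 941)
COL_HZ = (1209, 1336, 1477, 1633)
KEYPAD = ("123A", "456B", "789C", "*0#D")

def dtmf_tones(digit):
    """Return the (row, column) frequency pair in Hz for a dialed digit."""
    for r, row in enumerate(KEYPAD):
        c = row.find(digit)
        if c != -1:
            return ROW_HZ[r], COL_HZ[c]
    raise ValueError(f"not a DTMF digit: {digit!r}")

print(dtmf_tones("5"))   # (770, 1336)
print(dtmf_tones("#"))   # (941, 1477)
```

Because each key maps to a unique pair of simultaneous tones, the receiving switch can decode dialed digits reliably even over noisy analog loops.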
Analog Telephony Signaling
The most common methods of analog local-loop signaling are loop start and ground start. The most common analog trunk signaling method is E&M (derived from a combination of recEive and transMit, and sometimes known as Ear and Mouth). These methods are described as follows:

■ Loop start: Loop start is the simplest and least intelligent signaling protocol, and the most common form of local-loop signaling. It provides a way to indicate on-hook and off-hook conditions in a voice network. The creation of the electrical loop initiates a call (off-hook), and the opening of the loop terminates the call (on-hook). This type of signaling is not common for PBX signaling because it has a significant drawback in which glare, what the telephone industry calls collisions, can occur. Glare occurs when two endpoints try to seize the line at the same time, resulting in the two callers connecting unexpectedly. Because business callers use telephones regularly and the possibility of glare is high, loop-start signaling is acceptable only for residential use.

■ Ground start: Also called reverse battery, ground start is a modification of loop start that provides positive recognition of connects and disconnects (off-hook and on-hook). It uses current-detection mechanisms at each end of the trunk, thereby enabling PBXs to agree which end will seize the trunk before actually doing so, minimizing the effect of glare. Ground start is preferred when there is a high volume of calls; therefore, PBXs typically use this type of signaling.

■ E&M: E&M is a common trunk signaling technique used between PBXs. In E&M, voice is transmitted over either two- or four-wire circuits, with five types of E&M signaling (Types I, II, III, IV, and V). E&M uses separate paths (or leads) for voice and signaling. The M (Mouth) lead sends the signal, and the E (Ear) lead receives the signal.
Digital Telephony Signaling
On PSTN switches, analog signaling is usually provided through current flow in closed electrical
circuits, and digital signaling is provided through channel associated signaling (CAS) or common
channel signaling (CCS).
CAS
Many varieties of CAS exist, and they operate over various analog and digital facilities.
KEY POINT: CAS uses defined bits within T1 or E1 channels for signaling; this is in-band signaling. Therefore, the signal for call setup and so forth is in the same channel as the voice call.
Examples of CAS signaling include the following:

■ R1 signaling (on T1 facilities): Used in North America.
■ R2 signaling (on E1 facilities): Used in Europe, Latin America, Australia, and Asia.
■ DTMF signals: DTMF signals are the "pulses" used within the call path.
CCS
Modern telecommunication networks require more efficient means of signaling, so they are
moving toward CCS systems. CCS can have faster connect times than CAS, and it offers the
possibility of a number of additional services.
KEY POINT: CCS uses a common link to carry signaling information for several trunks. It differs from CAS because it uses a separate channel for call setup; this is out-of-band signaling.
Examples of CCS signaling include the following:

■ DPNSS
■ Integrated Services Digital Network (ISDN)
■ QSIG
■ Signaling System 7 (SS7)
The following sections further describe these types of CCS signaling.
DPNSS
DPNSS is an industry-standard interface defined between a PBX and an access network. DPNSS
expands the facilities normally available only between extensions on a single PBX to all
extensions on PBXs connected in a private network.
ISDN
ISDN provides digital telephony and data transport services. ISDN involves the digitalization of
the telephone network, permitting voice, data, text, graphics, music, video, and other source
material to be transmitted on the same facility. For example, ISDN enables PBXs to connect over
the PSTN and to create voice VPNs by delivering PBX signaling over the network to distant PBXs.
Following are the two ISDN access methods, as illustrated in Figure 8-8:

■ ISDN Basic Rate Interface (BRI): Offers two bearer (B) channels and one delta (D) channel (2B+D). The BRI B channel operates at 64 kbps and carries user data and voice. The BRI D channel operates at 16 kbps and carries both control and signaling information. BRI is typically used for residential and small office/home office applications.

■ ISDN Primary Rate Interface (PRI): Designed to use T1 or E1 circuits, PRI offers 23 B channels and one D channel (23B+D) in North America and 30 B channels and one D channel (30B+D) in Europe. The PRI B channels operate at 64 kbps and carry user data and voice. The PRI D channel also operates at 64 kbps and carries both control and signaling information. PRI is typically used for enterprise business and voice applications.
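The BRI and PRI figures above reduce to simple channel arithmetic:

```python
B_KBPS = 64   # each bearer channel carries 64 kbps

def isdn_bandwidth(b_channels, d_kbps):
    """Total bearer-plus-signaling bandwidth (kbps) of an ISDN interface."""
    return b_channels * B_KBPS + d_kbps

print(isdn_bandwidth(2, 16))    # BRI (2B+D): 144 kbps
print(isdn_bandwidth(23, 64))   # North American PRI (23B+D): 1536 kbps on a T1
print(isdn_bandwidth(30, 64))   # European PRI (30B+D): 1984 kbps on an E1
```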
Figure 8-8 ISDN Digital Signaling

Channel | Capacity                         | Used For
B       | 64 kbps                          | Circuit-switched data
D       | 16 kbps for BRI; 64 kbps for PRI | Signaling information

(A BRI connects through an NT1 and delivers 2B+D; a PRI connects through a CSU/DSU and delivers 23B or 30B plus a D channel toward the service provider network.)
QSIG Digital Signaling
Figure 8-9 illustrates QSIG. QSIG is a peer-to-peer signaling system used in corporate voice
networking to provide standardized inter-PBX communications. It is a standards-based
mechanism that provides transparent transportation of PBX features across a network.
Figure 8-9 QSIG (QSIG basic call, QSIG generic functional procedures, and QSIG procedures for supplementary services operate as end-to-end, network-transparent protocols above interface-dependent link-layer and physical-layer protocols)
QSIG features include the following:

■ Standards-based protocol that enables interconnection of multivendor equipment
■ Enables inter-PBX basic services, generic feature transparency between PBXs, and supplementary services
■ Interoperability with public and private ISDN
■ Operable in any network configuration and compatible with many PBX-type interfaces
■ No restrictions on private numbering plans
SS7 Digital Signaling
SS7 is an international signaling standard within the PSTN. SS7 defines the architecture, network
elements, interfaces, protocols, and management procedures for a network that transports control
information between PSTN switches. SS7 works between PSTN switches and replaces per-trunk
in-band signaling.
As shown in Figure 8-10, a separate data network within the PSTN implements SS7. SS7 provides
call setup and teardown, network management, fault resolution, and traffic management services.
The SS7 network is solely for network control. Out-of-band signaling via SS7 provides numerous
benefits for internetworking design, including reduced call setup time, bearer capability, and other
progress indicators.
Figure 8-10 SS7 Signaling Is Used Between PSTN Switches (the SS7 network carries signaling separately from the voice transmission network; a PBX connects to its PSTN switch using QSIG)
KEY POINT: When using SS7, all trunk channels are available for voice and data, and the SS7 network carries the associated signaling separately.
PSTN Numbering Plans
PSTN numbering plans are the foundation for routing voice calls through the PSTN network.
International Numbering Plans
For any telephone network to function, a unique address must identify each telephone. Voice
addressing relies on a combination of international and national standards, local telephone
company practices, and internal customer-specific codes. The International Telecommunications
Union Telecommunication Standardization Sector (ITU-T) recommendation E.164 defines the
international numbering plan. Each country’s national numbering plan must conform to the E.164
recommendation and work in conjunction with the international numbering plan in a hierarchical
fashion. PSTN service providers must ensure that their numbering plan aligns with the E.164
recommendation and that their customers’ networks conform.
Call Routing
Call routing is closely related to the numbering plan and signaling. Basic routing allows the source
telephone to establish a call to the destination telephone. However, most routing is more
sophisticated: It enables subscribers to select services or divert calls from one subscriber to
another. Routing results from establishing a set of tables or rules within each switch. As each call
arrives, the path to the desired destination and the type of services available derive from these
tables or rules.
Numbering Plans
Specific numbers within the dialed digits indicate special codes. An international prefix is the code
dialed before an international number. In most nations, the international prefix is 00. In some
nations in Asia, it is 001 (in some cases, alternative codes are available to select a particular
international carrier). In North America, the international prefix is 011 (or 01 for special call
processing—collect, person-to-person, calling card, and so on).
A country code is used to reach a particular telephone system (or special service) for each nation.
The initial digit in the country code is a zone, which usually relates to a general geographic region
(for example, zone 5 is South America and Latin America). Table 8-2 provides examples of
country codes and zones.
Table 8-2  Country Code Examples

Country Code   Zone   Country
1              1      Canada, United States
1242           1      Bahamas
1787           1      Puerto Rico
1876           1      Jamaica
20             2      Egypt
212            2      Morocco
213            2      Algeria
30             3      Greece
34             3      Spain
386            3      Slovenia
44             4      United Kingdom
45             4      Denmark
51             5      Peru
52             5      Mexico
61             6      Australia
63             6      Philippines
679            6      Fiji Islands
7              7      Kazakhstan, Russia
81             8      Japan
86             8      China
886            8      Taiwan
91             9      India
966            9      Saudi Arabia
995            9      Georgia
A trunk prefix is the initial digit or digits dialed before the area code (if necessary) and the
subscriber number when making a domestic call. The trunk prefix in North America is 1; it is 0 in
most other places.
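The prefixes described above determine how a switch interprets a dialed string. As an illustrative sketch only (the function name and return labels are hypothetical, not from the text), a NANP-style classification by leading digits might look like this:

```python
def classify_dialed_digits(digits: str) -> str:
    """Classify a NANP-dialed string by its prefix (illustrative sketch).

    011  -> international prefix (direct-dialed)
    01   -> international with special call processing (collect, card, etc.)
    1    -> trunk prefix; a ten-digit national number follows
    else -> local subscriber number
    """
    if digits.startswith("011"):        # check longer prefixes first
        return "international"
    if digits.startswith("01"):
        return "international-operator"
    if digits.startswith("1"):
        return "national"
    return "local"

print(classify_dialed_digits("01144201234567"))  # international
print(classify_dialed_digits("13175551234"))     # national
```

Note that the order of the checks matters: the more specific 011 prefix must be tested before 01 and 1.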
North American Numbering Plan
The North American Numbering Plan (NANP), as illustrated in Figure 8-11, is an example of a
PSTN numbering plan. It conforms to the ITU-T recommendation E.164. NANP numbers are ten
digits in length and occur in the following format: NXX-NXX-XXXX, where N is any digit 2–9
and X is any digit 0–9. The first three digits identify the numbering plan area and are commonly
called the area code. The next three digits are called the CO code; other names for these three digits
are prefix, exchange, or simply NXX. The final four digits are called the line number. NANP is
also referred to as 1+10 because when a 1 (the trunk prefix) is the first number dialed, a ten-digit
number follows to reach another NANP number. This enables the end-office switch to determine
whether it should expect a seven- or ten-digit telephone number (although many local calls now
require ten-digit, rather than seven-digit, dialing).
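The NXX-NXX-XXXX format just described maps directly to a pattern check. A minimal sketch (the function name is illustrative, not from the text):

```python
import re

# NANP format: NXX-NXX-XXXX, where N is any digit 2-9 and X is any digit 0-9.
NANP_PATTERN = re.compile(r"^[2-9]\d{2}-[2-9]\d{2}-\d{4}$")

def is_valid_nanp(number: str) -> bool:
    """Return True if number matches the ten-digit NANP format."""
    return bool(NANP_PATTERN.match(number))

print(is_valid_nanp("317-555-0123"))  # True
print(is_valid_nanp("117-555-0123"))  # False: area code cannot start with 0 or 1
```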
Figure 8-11  North American Numbering Plan Has Ten-Digit Numbers
[Figure: a country divided into regions and towns, with seven-digit subscriber numbers such as 2XX-XXXX through 6XX-XXXX assigned within each area.]
NOTE As telephone numbers in existing area codes are depleted, new area codes are required.
One way to add area codes is to split the area covered by an existing area code into two or more
areas; one area keeps the existing area code, and the other areas get new area codes.
Another way to add area codes is with overlay area codes, in which the new area code overlays
the existing area code, so people within the same geographic area might have different area
codes. Existing customers retain their existing area codes and numbers; new customers get the
new area code. Overlay area codes can result in two different people living in the same
geographic area having the same seven-digit local number, but with two different area codes.
In cities in which overlay area codes are used, everyone must dial ten digits (the area code plus
the local number) for local calls.
NOTE A closed numbering plan refers to a telephone numbering scheme that has a fixed
number of digits, not counting special service codes. The NANP 1+10 is an example, because
ten digits are always associated with each national number—three digits of area code followed
by seven digits of subscriber number. Australia’s numbering plan (with country code 61) is
another example of a closed numbering plan.
Figure 8-12 illustrates how the NANP routes telephone calls. In this example, the lower telephone
is dialing 212-4321, which is the telephone number of the top-right phone. A PSTN switch
forwards the signal as soon as it receives enough digits to send the call to the next switch. The last
switch in the path receives all the digits and rings the destination telephone (in this case, the
telephone at the top right).
NOTE SS7 first determines, through out-of-band signaling, that there is a path to the destination and that the end station can accept the call; only then does it allocate the trunks.
Figure 8-12  Routing Calls Based on the NANP
[Figure: the lower telephone dials 212-4321; PSTN switches serving 211-XXXX, 212-XXXX, 21X-XXXX, 251-XXXX, and 252-XXXX forward the call hop by hop toward the switch that serves 212-XXXX.]
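The hop-by-hop forwarding in Figure 8-12 behaves like a longest-prefix lookup on the collected digits. The sketch below illustrates this; the routing table entries and switch names are hypothetical, patterned after the figure rather than taken from any real configuration:

```python
from typing import Optional

# Hypothetical per-switch routing table: each entry maps a digit prefix to
# the next-hop switch, patterned after Figure 8-12.
ROUTES = {
    "212": "switch-212",   # serves 212-XXXX subscribers
    "211": "switch-211",
    "21":  "switch-21X",   # aggregate route toward the 21X-XXXX switch
    "25":  "switch-25X",
}

def next_hop(dialed: str) -> Optional[str]:
    """Longest-prefix match: the switch forwards the call as soon as the
    collected digits match a route; more specific routes win."""
    for length in range(len(dialed), 0, -1):
        hop = ROUTES.get(dialed[:length])
        if hop is not None:
            return hop
    return None

print(next_hop("2124321"))  # switch-212 (most specific match)
print(next_hop("2511234"))  # switch-25X
```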
United Kingdom National Numbering Plan
The U.K. national numbering plan is another example of a national PSTN numbering plan
conforming to the ITU-T recommendation E.164. Figure 8-13 shows a portion of the U.K. national
numbering plan. It supports several geographic numbering options, depending on the population
density of the city or area. It also reserves some number ranges for corporate uses.
Figure 8-13  A Portion of the U.K. National Numbering Plan

Number Range         Description
0                    Trunk prefix (national long-distance calling prefix)
(01xxx) xxx xxx      Geographic numbering options—area code and subscriber number
(01xxx) xxx xxxx
(01x1) xxx xxxx
(011x) xxx xxxx
(02x) xxxx xxxx
(01xxx[x]) xxxx[x]
(07xxx) xxxxxx       Mobile phones, pagers, and personal numbering
(05x) xxxx xxxx      Reserved for corporate numbering
(0800) xxx xxx       Freephone (except for mobile phone)
(0800) xxx xxxx
(0808) xxx xxxx
999                  Free emergency number
112
Integrating Voice Architectures
This section discusses integrated voice architecture concepts, components, mechanisms, and
issues. Integrated networks are described, and the H.323 standard is introduced. IP telephony is
presented and call control and transport protocols are discussed.
Introduction to Integrated Networks
Figure 8-14 illustrates a typical enterprise WAN with separate data and voice networks.
Integrating data, voice, and video in a network enables vendors to introduce new features. The
unified communications network model enables distributed call routing, control, and application
functions based on industry standards. Enterprises can mix and match equipment from multiple
vendors and geographically deploy these systems wherever they are needed.
One means of creating an integrated network is to replace the PBXs’ voice tie trunks with IP
connections by connecting the PBXs to voice-enabled routers. The voice-enabled routers convert
voice traffic to IP packets and direct them over IP data networks. This implementation is called
VoIP. Figure 8-15 illustrates an integrated network using VoIP over an IP WAN link that carries
voice and data at the same time.
Figure 8-14  Traditional Separate Voice and Data Networks
[Figure: PBXs at two remote locations connect to the central-location PBX over voice tie trunks and to the PSTN, while a separate IP WAN link carries the data traffic.]
Figure 8-15  Integrated Voice and Data Traffic in a Converged Network
[Figure: PBXs at the remote and central locations connect to voice-enabled routers, which carry voice and data together over an integrated IP WAN link; the central-location PBX also connects to the PSTN.]
IP telephony, a superset of VoIP, is another implementation. IP phones are used, and the phones
themselves convert the voice into IP packets. A dedicated network server that runs specialized call
processing software replaces the PBX; in Cisco networks, this is the Cisco Unified Communications Manager. IP phones are not connected with telephone cabling. Instead, they send all signals
over standard Ethernet. The “Introduction to IP Telephony” section later in this chapter provides
details of this solution.
NOTE Earlier names for the Cisco Unified Communications Manager include Cisco
CallManager and Cisco Unified CallManager.
Drivers for Integrating Voice and Data Networks
Although the PSTN is effective for carrying voice signals, several business drivers are creating the need for a new type of network:
■ Data has overtaken voice as the primary traffic on many voice networks.
■ Companies want to reduce WAN costs by migrating to integrated networks that can efficiently carry any type of data.
■ The PSTN architecture was designed and built for voice and is not flexible enough to optimally carry data.
■ The PSTN cannot create and deploy features quickly enough.
■ Data, voice, and video cannot be integrated on the current PSTN structure.
IP telephony is cost-effective because of the reduced number of tie trunks and higher link
efficiency, and because both voice and data networks use the same WAN infrastructure. It is much
easier to manage a single network than two separate networks, because doing so requires fewer
administrators, a simplified management infrastructure, and lower administrator training costs.
KEY POINT: Whether or not either caller is talking, circuit-switched (classical voice) calls require a dedicated duplex 64-kbps circuit between the two telephones. During the call, no other party can use the 64-kbps connection, and the company cannot use it for any other purpose.
Packet-switched networking uses bandwidth only when it is required. This difference is an important benefit of packet-based voice networking.
On an IP network, voice servers and application servers can be located virtually anywhere. The
rationale for enterprises to maintain voice servers, as with data application servers, is diminishing
over time. As voice moves to IP networks (using the public Internet for inter-enterprise traffic and
private intranets for intra-enterprise traffic), service providers might host voice and application
servers.
H.323
KEY POINT: H.323 is an ITU-T standard for packet-based audio, video, and data communications across IP-based networks.
Introduction to H.323
The ITU-T H.323 standard is a foundation for audio, video, and data communications across IP-based networks, including the Internet. By complying with the H.323 standard, multimedia
products and applications from multiple vendors can interoperate, thereby allowing users to
communicate without concern for compatibility.
The H.323 standard is broad in scope and includes standalone devices (such as IP telephones and
voice gateways), embedded personal computer technology (such as PCs with Microsoft’s
NetMeeting), and point-to-point and multipoint conferencing. H.323 includes call control
(including session setup, monitoring, and termination), multimedia management, bandwidth
management, and multicast support in multipoint conferences.
Communications under H.323 are a mix of audio, video, data, and control signals. To establish a
voice call, H.323 refers to other standards, including H.225 and H.245. The H.225 standard is
based on the Q.931 protocol. It describes call signaling and the Registration, Admission, and
Status (RAS) signaling used for H.323 session establishment and packetization between two
H.323 devices. For example, the H.225 setup message has information elements that include the
calling party number and the called party number. H.245 is a control standard for multimedia
communication that describes the messages and procedures used for opening and closing logical
channels for audio, video, and data, capability exchange, control, and indications.
An H.323 conference can include endpoints with different capabilities. For example, a terminal
with audio-only capabilities can participate in a conference with terminals that have video and data
capabilities. An H.323 multimedia terminal can share the data portion of a videoconference with
a data-only terminal while sharing voice, video, and data with other H.323 terminals.
H.323 Components
H.323 defines four major components for a network-based communications system: terminals,
gateways, gatekeepers, and multipoint control units (MCU).
Terminals
Terminals are client endpoints that provide real-time two-way H.323 communications with other
endpoints, such as H.323 terminals, gateways, or MCUs. All terminals must support standard 64-kbps PCM-encoded voice communications; video and data are optional. Examples of H.323
terminals are IP telephones and PCs with Microsoft NetMeeting software.
Gateways
An H.323 gateway is an optional element in the voice network; it can be a voice-enabled router or
switch. Gateways provide many services, such as translation between H.323 endpoints and non-H.323 devices, which allows H.323 endpoints and non-H.323 devices to communicate. In
addition, the gateway also translates between audio, video, and data formats; converts call setup
signals and procedures; and converts communication control signals and procedures.
KEY POINT: Gateways are not required between two H.323 terminals because these endpoints can communicate with each other directly.
Terminals use the H.245 and Q.931 protocols to communicate with H.323 gateways. An example
of a gateway is a voice-enabled router providing a connection to the PSTN, a PBX, or an analog
phone. An interface on a voice gateway that carries voice data is a voice port. A voice port is a
physical port on a voice module; this is what makes a router voice-enabled.
A voice module enables connectivity with traditional circuit-switched voice devices and networks.
It converts voice into IP packets and vice versa. Specialized processors called digital signal
processors (DSP) are located on the voice module and perform the coding and compressing of
voice data. The following are some of the voice modules available on Cisco voice gateways:
■ ISDN PRI on an E1 or T1 voice module
■ E1-R2 signaling on an E1 voice module
■ T1-CAS signaling on a T1 voice module
■ FXS on a low-capacity voice module
■ FXO on a low-capacity voice module
■ ISDN BRI on a low-capacity voice module
Gatekeepers
KEY POINT: An H.323 gatekeeper is another optional element that manages H.323 endpoints. The terminals, gateways, and MCUs managed by a single gatekeeper are known as an H.323 zone; there is a one-to-one relationship between a zone and a gatekeeper.
A gatekeeper is typically used in larger, more complex networks; the gatekeeper function can be
performed by a Cisco IOS router or by third-party software. A gatekeeper serves as the central
point for all calls within its zone and provides call control services to registered H.323 endpoints.
All H.323 devices in the zone register with the gatekeeper so that the gatekeeper can perform its
basic functions, such as H.323 address translation, admission control, bandwidth control, and zone
management. Optionally the gatekeeper provides call control signaling, call authorization,
bandwidth management, and call management.
The gatekeeper can balance calls among multiple gateways, either by integrating their addressing
into the Domain Name System or via Cisco IOS configuration options. For instance, if a call is
routed through a gatekeeper, that gatekeeper can forward the call to a corresponding gateway
based on some routing logic. When an H.323 gatekeeper acts as a virtual voice switch, its function
is known as gatekeeper-routed call signaling.
NOTE The Cisco Unified Communications Manager does not support the gatekeeper-routed
call signaling capability.
The Importance of a Gatekeeper
Figure 8-16 illustrates some different voice design options and emphasizes the importance of a
gatekeeper, especially in large voice network designs.
Voice network design depends primarily on the number of voice gateways and, consequently,
the number of logical connections between them. The maximum number of logical connections
between voice gateways, and, as a result, the network’s complexity, can be calculated by the
formula (N * (N–1))/2, where N is the number of voice gateways in the system. For example, the
maximum number of logical connections between three voice gateways is three, between five
voice gateways is ten, and between eight voice gateways is 28. The complexity of the network
grows quickly with the number of gateways; adding one more voice gateway to an existing
network means reconfiguring all other voice gateways, making network maintenance quite
difficult. A solution for this issue is the use of a gatekeeper.
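The (N * (N–1))/2 formula above can be verified directly; this small sketch reproduces the figures quoted in the text:

```python
def logical_connections(n: int) -> int:
    """Maximum logical connections in a full mesh of n voice gateways:
    n * (n - 1) / 2."""
    return n * (n - 1) // 2

for n in (3, 5, 8):
    print(n, "gateways ->", logical_connections(n), "logical connections")
# 3 gateways -> 3, 5 gateways -> 10, 8 gateways -> 28, matching the text
```

The quadratic growth is the point: doubling the number of gateways roughly quadruples the number of connections to configure, which is why a gatekeeper that centralizes the dial plan scales so much better.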
Figure 8-16  The Importance of a Gatekeeper in Voice Networks
[Figure: three scenarios. Small voice network: two gateways (2XXX, 3XXX) with the complete dialing plan on every gateway. Medium voice network: five gateways (1XXX through 5XXX) fully meshed with logical connections. Large voice network: eight gateways (1XXX through 8XXX) with only a simple dialing plan on every gateway and a gatekeeper holding the complete dialing plan.]
KEY POINT: The gatekeeper stores the dialing plan of the entire zone. Gateways only have to register with the gatekeeper; the gatekeeper provides all call control services to the gateways. Therefore, the configuration of a voice gateway becomes simpler and does not require modification when a new voice gateway is added to the system.
Multipoint Control Units
An MCU is an H.323 endpoint that enables three or more endpoints to participate in a multipoint
H.323 conference. An MCU incorporates a multipoint controller (MC) and optionally one or more
multipoint processors (MP).
The MC is the conference controller that handles H.245 capability negotiations between the
endpoints and controls conference resources. An MC is not a standalone unit and can be located
within an endpoint, terminal, gateway, gatekeeper, or MCU.
The MP handles the conference’s data streams. It receives multiple streams of multimedia input,
switches and mixes the streams, and retransmits the result to the conference members. An MP
resides in an MCU.
H.323 Example
Figure 8-17 illustrates the components typically involved in an H.323 call and the interactions
between them.
Figure 8-17  Interactions of H.323 Components
[Figure: traditional phones attach to PBXs, which connect over voice trunks to voice gateways that perform voice-to-IP and IP-to-voice conversions across the IP network; H.323 terminals attach to the IP network directly, with no conversion required for H.323-capable devices.]
If traditional telephones are used and an IP network must transport calls, a voice gateway is
required on both sides of the IP network. In this example, the gateway is a voice-enabled router
that performs voice-to-IP and IP-to-voice conversions in DSPs. After the gateway router converts
voice into IP packets, it transmits the packets across the IP network. The receiving router performs
the same function in the reverse order: It converts IP packets to voice signals and forwards them
through the PBX to the destination telephone.
KEY POINT: A voice gateway is not required when H.323-capable devices (terminals) communicate over an IP network; the router forwards IP packets it receives from an H.323 device to the appropriate outgoing interface. A voice gateway is required, however, to convert between an IP network and the PSTN.
Introduction to IP Telephony
IP telephony refers to cost-effective communication services, including voice, fax, and voice-messaging applications, transported via the packet-switched IP network rather than the circuit-switched PSTN.
KEY POINT: VoIP uses voice-enabled routers to convert voice into IP packets and route those packets between corresponding locations. Users do not often notice the implementation of VoIP in the network; they use their traditional phones, connected to a PBX. However, the PBX is not connected to the PSTN or to another PBX, but to a voice-enabled router that is an entry point to VoIP.
IP telephony replaces traditional phones with IP phones and uses the Cisco Unified Communications Manager, a server for call control and signaling, in place of PBXs. The IP phone itself performs voice-to-IP conversion, and voice-enabled routers are not required within the enterprise network. If connection to the PSTN is required, a voice-enabled router or other gateway must be added where calls are forwarded to the PSTN.
The basic steps for placing an IP telephone call include converting the analog voice signal into a
digital format, and compressing and translating the digital signal into IP packets for transmission
across the IP network. The process is reversed at the receiving end.
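The digitize, compress, and packetize steps above determine the per-call bandwidth. The sketch below works through the arithmetic; the 64-kbps rate is the PCM rate mentioned in the text (G.711), while the 20-ms packetization interval and 40-byte IP/UDP/RTP header are assumed typical values, not figures from this chapter:

```python
def voip_bandwidth_bps(codec_bps: int = 64_000, packetization_ms: int = 20,
                       header_bytes: int = 40) -> int:
    """Per-call layer 3 bandwidth for a VoIP stream (layer 2 overhead excluded).

    codec_bps=64000 models G.711 PCM (8000 samples/s x 8 bits);
    header_bytes=40 is the combined IP (20) + UDP (8) + RTP (12) header.
    """
    payload_bytes = codec_bps // 8 * packetization_ms // 1000  # bytes per packet
    packets_per_second = 1000 // packetization_ms
    return (payload_bytes + header_bytes) * 8 * packets_per_second

print(voip_bandwidth_bps())  # 80000 -> 80 kbps per G.711 call at layer 3
```

The same arithmetic with a low-bit-rate codec (for example, 8 kbps) shows why header overhead dominates small voice packets: the 8-kbps stream still costs 24 kbps on the wire.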
The IP telephony architecture, illustrated in Figure 8-18, includes four distinct components:
infrastructure, call processing, applications, and client devices. These components are described
as follows:
■ Infrastructure: The infrastructure is based on data link layer and multilayer switches and voice-enabled routers that interconnect endpoints with the IP and PSTN networks. Endpoints attach to the network using switched 10/100 Ethernet ports. Switches may include Power over Ethernet (PoE) ports that sense the presence of IP devices that require inline power, such as Cisco IP phones and wireless access points, and provide that power. Voice-enabled routers perform conversions between the circuit-switched PSTN and IP networks.
■ Call processing: Cisco Unified Communications Manager is the software-based call-processing component of the Cisco enterprise IP telephony solution. Cisco Unified Communications Manager provides a scalable, distributable, and highly available enterprise IP telephony call processing solution and performs much like the PBX in a traditional telephone network, including providing call setup and processing functions.
The Cisco Unified Communications Manager can be installed on Cisco MCS 7800 Series server platforms and selected third-party servers.
■ Applications: Applications provide additional features to the IP telephony infrastructure. Cisco Unity unified messaging (integrating e-mail and voice mail), Cisco Unified MeetingPlace (multimedia conferencing), Cisco Unified IP IVR, and Cisco Unified Contact Center products (including intelligent contact routing, call treatment, network-to-desktop computer telephony integration, and multichannel automatic call distribution) are among the Cisco applications available for IP telephony. The open-source application layer allows third-party companies to develop software that interoperates with Cisco Unified Communications Manager.
■ Client devices: Client devices are IP telephones and software applications that allow communication across the IP network. Cisco Unified Communications Manager centrally manages the IP telephones through Ethernet connections in the Building Access Layer switches.
Figure 8-18  IP Telephony Components
[Figure: IP phones and endpoints connect through a QoS-enabled WAN infrastructure to the IP WAN; Cisco Unified Communications Manager provides the call-processing engine; voice messaging and applications, voice mail, and DSP resources for conferencing attach to the network; a PSTN gateway or router connects the IP network to the PSTN.]
IP Telephony Design Goals
Typical design goals of an IP telephony network are as follows:
■ End-to-end IP telephony: Using end-to-end IP telephony between sites where IP connectivity is already established. IP telephony can be deployed as an overlaid service that runs on the existing infrastructure.
■ Widely usable IP telephony: To make IP telephony widely usable, voice quality should be at the same level as in traditional telephony; this is known as toll quality voice.
■ Reduced long-distance costs: Long-distance costs should be lower than with traditional telephony. This can be accomplished by using private IP networks, or possibly the public Internet, to route telephone calls.
■ Cost-effective: Making IP telephony cost-effective depends on using the existing WAN capacity more efficiently and on the cost of upgrading the existing IP network infrastructure to support IP telephony. In some cases, this goal can be accomplished by using the public Internet or private IP networks to route telephone calls.
■ High availability: To provide high availability, redundant network components can be used and backup power can be provided to all network infrastructure components, including routers, switches, and IP phones.
■ Lower total cost of ownership: IP telephony should offer lower total cost of ownership and greater flexibility than traditional telephony. Installation costs and operational costs for unified systems are lower than the costs to implement and operate two infrastructures.
■ Enable new applications on top of IP telephony via third-party software: For example, an intelligent phone used for database information access as an alternative to a PC is likely to be easier to use and less costly to own, operate, and maintain.
■ Improved productivity: IP telephony should improve the productivity of remote workers, agents, and stay-at-home staff by extending the productivity-enhancing enterprise telephony features such as voice mail and voice conferencing to the remote teleworker.
■ Facilitate data and telephony network consolidation: Such consolidation can contribute to operational and equipment savings.
The following sections illustrate some sample IP telephony designs.
Single-Site IP Telephony Design
Figure 8-19 illustrates a design model for an IP telephony network within a single campus or site.
Figure 8-19  Single-Site IP Telephony Design
[Figure: IP phones attach to a LAN switch with inline power; Cisco Unified Communications Manager and voice mail reside on the same LAN; a voice-enabled router connects to the PSTN over a voice trunk.]
A single-site IP telephony design consists of Cisco Unified Communications Manager, IP
telephones, LAN switches with inline power (PoE), applications such as voice mail, and a voice-enabled router, all at the same physical location. The IP telephones are powered through their
Ethernet interface via the LAN switch. Gateway trunks are connected to the PSTN so that users
can make external calls.
Single-site deployment allows each site to be completely self-contained. All calls to the outside
world and remote locations are placed across the PSTN. If an IP WAN is incorporated into the
single-site model, it is for data traffic only; no telephony services are provided over the WAN.
Therefore, there is no loss of the call processing service or functionality if an IP WAN failure
occurs or if the WAN has insufficient bandwidth. The only external requirements are a PSTN
carrier and route diversity within the PSTN network. As a recommended practice, use this model
for a single campus or a site with fewer than 30,000 lines.
Multisite WAN with Centralized Call Processing Design
Figure 8-20 presents a multisite WAN design model with centralized call processing; Cisco
Unified Communications Manager at the central site connects to remote locations through the IP
WAN. Remote IP telephones rely on the centralized Cisco Unified Communications Manager to
handle their call processing. The IP WAN transports voice traffic between sites and carries call
control signaling between the central site and the remote sites. Applications such as voice mail and
IVR systems are also centralized, thereby reducing the overall cost of ownership and centralizing administration and maintenance.
Figure 8-20  Multisite WAN with Centralized Call Processing Design
[Figure: the enterprise campus hosts Cisco Unified Communications Manager, voice mail, and a voice-enabled router with a voice trunk to the PSTN; IP phones at a remote location connect through a local voice-enabled router, with its own PSTN voice trunk, across the IP WAN and are managed by the central Cisco Unified Communications Manager.]
The remote locations require IP connectivity with the Enterprise Campus. IP telephones, powered
by a local LAN switch, convert voice into IP packets and send them to the local LAN. The local
router forwards the packets to the appropriate destination based on its routing table. In the event
of a WAN failure, the voice-enabled routers at the remote sites can provide backup call processing
functionality with Cisco Unified Survivable Remote Site Telephony (SRST) services. Cisco
Unified SRST extends high-availability IP telephony to branch offices by providing backup call
processing functionality on voice-enabled routers.
If an enterprise requires high-quality voice communication over the WAN, the service provider
must implement QoS mechanisms. Enterprises and service providers usually sign a service level
agreement (SLA) that guarantees bandwidth and latency levels suitable for voice transport.
NOTE The routers are voice-capable to enable voice communication with the outside world
through the PSTN.
As a recommended practice, use this model for a main site with many smaller remote sites that
connect via a QoS-enabled WAN but that do not require full features and functionality during a
WAN outage.
Mu