642-891
CCNP/CCDP 642-891 (Composite)
Study Guide: Composite (BSCI and BCMSN)
Version 1
Leading the way in IT testing and certification tools, www.testking.com
TABLE OF CONTENTS
List of Tables
List of Acronyms
Introduction
1. The Campus Network
1.1 The Traditional Shared Campus Network
1.1.1 Collisions
1.1.2 Bandwidth
1.1.3 Broadcasts and Multicasts
1.2 The New Campus Network
1.3 The 80/20 Rule and the New 20/80 Rule
1.4 Characterizing Scalable Internetworks
1.4.1 Reliability and Availability
1.4.2 Responsiveness
1.4.3 Efficiency
1.4.4 Adaptability and Serviceability
1.4.5 Accessibility and Security
1.5 Network Congestion
1.5.1 Problems Created by Network Congestion
1.5.1.1 Excessive Traffic
1.5.1.2 Dropped Packets
1.5.1.3 Retransmission of Packets
1.5.1.4 Incomplete Routing Tables
1.5.1.5 Incomplete Server Lists
1.5.1.6 The Spanning-Tree Protocol Breaks
1.5.1.7 Runaway Congestion
1.5.2 Symptoms of Network Congestion
1.5.2.1 Application Time Outs
1.5.2.2 Clients Cannot Connect to Network Resources
1.5.2.3 Network Death Results
1.6 Designing Scalable Networks
1.6.1 Open Systems Interconnection Model
1.6.1.1 Data Encapsulation
1.6.1.2 Layer 2 Switching
1.6.1.3 Layer 3 Switching
1.6.1.4 Layer 4 Switching
1.6.1.5 Multi-Layer Switching (MLS)
1.6.2 The Cisco Hierarchical Model
1.6.2.1 Core Layer
1.6.2.2 Distribution Layer
1.6.2.3 Access Layer
1.6.3 Modular Network Design
1.6.3.1 The Switch Block
1.6.3.2 The Core Block
1.6.3.2.1 The Collapsed Core
1.6.3.2.2 Dual Core
1.6.3.2.3 Core Size
1.6.3.2.4 Core Scalability
1.6.3.2.5 Layer 3 Core
1.6.3.3 Additional Building Blocks
1.7 Alleviating Congestion
1.7.1 Access Lists
1.7.2 Extended Access Lists
1.7.3 Distribution Lists
1.7.4 Other Solutions to Traffic Control
1.7.5 Prioritization
1.7.5.1 First In, First Out (FIFO)
1.7.5.2 Weighted Fair Queuing (WFQ)
1.7.5.3 Priority Queuing (PQ)
1.7.5.4 Custom Queuing
1.7.5.5 Class-Based Weighted Fair Queuing (CBWFQ)
1.7.5.6 Low-Latency Queuing (LLQ)
1.7.6 Null Interface
1.7.7 Fast, Autonomous, and Silicon Switching
1.7.8 Cisco Express Forwarding (CEF)
1.7.9 Enhanced Interior Gateway Routing Protocol (EIGRP)
2. TCP/IP
2.1 The IP Address
2.1.1 IP Address Classes
2.1.2 Classless Interdomain Routing (CIDR) Notation
2.1.3 Subnetting
2.1.4 Variable-Length Subnet Masks
2.2 Summarization
2.2.1 Automatic Summarization
2.2.2 Manual Summarization
2.3 Implementing Private IP Addresses
2.3.1 Private IP Addressing
2.3.2 Network Address Translation
2.4 The Logical AND Operation
2.5 IP Routing
2.5.1 Routing Protocols
2.5.2 The show ip route Command
2.5.3 The clear ip route Command
3. Basic Switching and Network Technologies
3.1 Network Technologies
3.1.1 Ethernet
3.1.1.1 Ethernet Switches
3.1.1.2 Ethernet Media
3.1.2 Cisco Long Reach Ethernet (LRE)
3.1.3 Fast Ethernet
3.1.4 Gigabit Ethernet
3.1.5 10 Gigabit Ethernet
3.1.6 Token Ring
3.2 Connecting Switches
3.2.1 Console Port Cables and Connectors
3.2.2 Ethernet Port Cables and Connectors
3.2.3 Gigabit Ethernet Port Cables and Connectors
3.2.4 Token Ring Port Cables and Connectors
3.3 Switch Management
3.3.1 Switch Naming
3.3.2 Password Protection
3.3.3 Remote Access
3.3.4 Inter-Switch Communication
3.3.5 Switch Clustering and Stacking
3.4 Switch File Management
3.4.1 OS Image Files
3.4.2 Configuration Files
3.4.3 More Catalyst Switch Files
3.4.4 Moving Catalyst Switch Files
3.5 Switch Port Configuration
3.5.1 Port Description
3.5.2 Port Speed
3.5.3 Ethernet Port Mode
3.5.4 Token Ring Port Mode
4. Routing
4.1 Routing Tables
4.1.1 Static Routing
4.1.2 Dynamic Routing
4.1.3 Routing Updates
4.1.4 Verifying Routing Tables
4.2 Routing Protocols
4.2.1 Distance-Vector Routing
4.2.2 Link-State Routing
4.2.3 Classful Routing
4.2.4 Classless Routing
4.2.5 Multipath Routing
4.3 Basic Switching Functions
4.4 Convergence
4.4.1 Distance-Vector Routing Convergence
4.4.1.1 RIP and IGRP Convergence
4.4.1.2 EIGRP Convergence
4.4.2 Link-State Convergence
4.5 Routing and Switching in a Cisco Router
4.6 The Structure of a Routing Table
4.7 Testing and Troubleshooting Routes
4.7.1 The ping Command
4.7.2 The traceroute Command
5. OSPF in a Single Area Network
5.1 OSPF Neighbors
5.1.1 Adjacent OSPF Neighbors
5.2 The Designated Router (DR) and the Backup Designated Router (BDR)
5.3 The OSPF Routing Table
5.3.1 Building the Routing Table on a New OSPF Router
5.3.2 The Topology Database
5.3.3 The Shortest Path First
5.4 OSPF Across Nonbroadcast Multiaccess Networks (NBMA)
5.5 Problems with OSPF in a Single Area
5.6 Configuring OSPF in a Single Area
5.6.1 Configuring OSPF on an Internal Router
5.6.1.1 The router ospf Command
5.6.1.2 The network Command
5.6.2 Configuring OSPF on the External Router
5.6.2.1 The interface loopback Command
5.6.2.2 The cost Command
5.6.2.3 The auto-cost Command
5.6.2.4 The priority Command
5.6.3 Configuring OSPF over an NBMA Topology
5.6.3.1 Configuring OSPF in NBMA Mode
5.6.3.2 Configuring OSPF in Point-to-Multipoint Mode
5.6.3.3 Configuring OSPF in Broadcast Mode
5.6.3.4 Configuring OSPF in Point-to-Point Mode on a Frame Relay Subinterface
5.7 Verifying the OSPF Configuration on a Single Router
5.8 Differences between OSPF and RIP Routing Protocols
6. OSPF in a Multiple Area Network
6.1 Different Router Types
6.2 The Link-State Advertisements
6.3 OSPF Path Selection Between Areas
6.3.1 The Path to Another Area
6.3.2 The Path to Another AS
6.4 Different Types of Areas
6.5 Design Considerations in Multiple Area OSPF
6.5.1 Cisco Design Guidelines
6.5.2 Summarization
6.5.3 The Virtual Link
6.5.4 OSPF over an NBMA Network
6.6 Configuring OSPF in a Multiple Area Network
6.6.1 The network Command
6.6.2 The area range Command for an ABR
6.6.3 The summary-address Command for an ASBR
6.6.4 The area Command
6.6.5 Configuring a Virtual Link
6.7 Verifying the OSPF Configuration in a Multiple Area Network
7. EIGRP in Enterprise Networks
7.1 Operation of EIGRP
7.1.1 The Neighbor Table
7.1.2 The Topology Table
7.1.3 EIGRP Metrics
7.2 Updating the Routing Table
7.2.1 Updating the Routing Table in Passive Mode
7.2.2 Updating the Routing Table in Active Mode
7.2.3 Adding a Network to the Topology Table
7.2.4 Removing a Path or Router from the Topology Table
7.3 Scaling EIGRP
7.4 Configuring EIGRP
7.5 Verifying the EIGRP Operation
8. Using BGP-4 to Communicate with Other Autonomous Systems
8.1 BGP-4 Overview
8.1.1 The BGP-4 Operation
8.1.2 Types of BGP-4
8.1.3 BGP-4 Synchronization
8.1.4 BGP-4 Policy-Based Routing
8.1.5 BGP-4 Attributes
8.2 Basic BGP-4 Configuration Commands
8.2.1 Starting the Routing Process
8.2.2 Defining the Networks to Be Advertised
8.2.3 Identifying Neighbors and Defining Peer Groups
8.2.4 Forcing the Next-Hop Address
8.2.5 Disabling Synchronization
8.2.6 Aggregating Routes
8.3 Effecting BGP-4 Configuration Changes
8.4 Verifying the Basic BGP-4 Configuration
8.5 Advanced BGP-4 Configuration
8.5.1 Configuring Route Reflectors
8.5.2 Controlling BGP-4 Traffic
8.5.3 Redundant Connections into the Internet
8.5.4 Determining the BGP-4 Path by Configuring the Attributes
8.6 Verifying the Advanced BGP-4 Configuration
9. Using Integrated IS-IS in Connectionless Networks
9.1 IS-IS Overview
9.1.1 The OSI Connectionless Network Service (CLNS)
9.1.2 Integrated IS-IS
9.2 IS-IS Operations
9.2.1 IS-IS Data-Flow Diagram
9.2.2 Adjacency Building
9.2.3 The Link-State Database and Reliable Flooding
9.2.4 DIS and Pseudonodes
9.2.5 IS-IS Metrics
9.3 IS-IS Routing
9.3.1 IP Routing with IS-IS
9.4 Security
9.5 Configuring Integrated IS-IS
9.5.1 Enabling IS-IS and Assigning Areas
9.5.2 Enabling IP Routing for an Area on an Interface
9.5.3 Configuring Optional Interface Parameters
9.5.4 Configuring IS-IS Authentication Passwords
9.5.5 Monitoring IS-IS
10. Controlling Routing Updates Across the Network
10.1 Features of Redistribution
10.2 Problems of Configuring Multiple Routing Protocols
10.2.1 Path Selection
10.2.2 Routing Loops
10.2.3 Redistribution and Network Convergence
10.3 Configuring Redistribution
10.4 The Default or Seed Metric
10.4.1 Configuring the Default Metric for OSPF, RIP, EGP or BGP-4
10.4.2 Configuration for EIGRP or IGRP
10.5 Configure the Administrative Distance
10.5.1 Configuring the Administrative Distance in EIGRP
10.5.2 Configuring the Administrative Distance in Other Protocols
10.6 The Passive Interface
10.7 Static Routes
10.8 Controlling Routing Updates with Filtering
10.9 Policy-Based Routing Using Route Maps
10.10 Managing the Redistribution
10.10.1 Troubleshooting Redistribution
10.10.2 Monitoring Policy-Routing Configurations
11. Virtual LANs (VLANs) and Trunking
11.1 VLAN Membership
11.2 Extent of VLANs
11.3 VLAN Trunks
11.3.1 VLAN Frame Identification
11.3.2 Dynamic Trunking Protocol
11.3.3 VLAN Trunk Configuration
11.4 Service Provider Tunneling
11.4.1 IEEE 802.1Q Tunnels
11.4.2 Layer 2 Protocol Tunnels
11.4.3 Ethernet Over Multiprotocol Label Switching (MPLS) Tunneling
11.5 VLAN Trunking Protocol (VTP)
11.5.1 VTP Modes
11.5.1.1 Server Mode
11.5.1.2 Client Mode
11.5.1.3 Transparent Mode
11.5.2 VTP Advertisements
11.5.2.1 Summary Advertisements
11.5.2.2 Subset Advertisements
11.5.2.3 Client Request Advertisements
11.5.3 VTP Configuration
11.5.3.1 Configuring a VTP Management Domain
11.5.3.2 Configuring the VTP Mode
11.5.3.3 Configuring the VTP Version
11.5.4 VTP Pruning
11.6 Token Ring VLANs
11.6.1 TrBRF
11.6.2 TrCRF
11.6.3 VTP and Token Ring VLANs
11.6.4 Duplicate Ring Protocol (DRiP)
12. Redundant Switch Links
12.1 Switch Port Aggregation with EtherChannel
12.1.1 Bundling Ports with EtherChannel
12.1.2 Distributing Traffic in EtherChannel
12.1.3 Port Aggregation Protocol (PAgP)
12.1.4 Link Aggregation Control Protocol (LACP)
12.1.5 EtherChannel Configuration
12.2 Spanning-Tree Protocol (STP)
12.3 Spanning-Tree Communication
12.3.1 Root Bridge Election
12.3.2 Root Ports Election
12.3.3 Designated Ports Election
12.4 STP States
12.5 STP Timers
12.6 Convergence
12.6.1 PortFast: Access Layer Nodes
12.6.2 UplinkFast: Access Layer Uplinks
12.6.3 BackboneFast: Redundant Backbone Paths
12.7 Spanning-Tree Design
12.8 STP Types
12.8.1 Common Spanning Tree (CST)
12.8.2 Per-VLAN Spanning Tree (PVST)
12.8.3 Per-VLAN Spanning Tree Plus (PVST+)
12.9 Protecting Against Unforeseen Bridge Protocol Data Units (BPDU)
12.9.1 The Root Guard Feature
12.9.2 The BPDU Guard Feature
12.10 Protecting Against the Sudden Loss of BPDUs
12.10.1 BPDU Skew Detection Feature
12.10.2 Loop Guard Feature
12.10.3 Unidirectional Link Detection (UDLD) STP Feature
12.11 Advanced Spanning-Tree Protocol
12.11.1 Rapid Spanning Tree Protocol (RSTP)
12.11.1.1 RSTP Port Performance
12.11.1.2 BPDUs and RSTP
12.11.1.3 RSTP Convergence
12.11.1.4 RSTP and Topology Changes
12.11.1.5 Configuring RSTP
12.11.2 The Multiple Spanning Tree Protocol (MSTP or MST)
12.11.2.1 MST Regions
12.11.2.2 Spanning Tree Instances in MST
12.11.2.3 Configuring MST
13. Trunking with ATM LAN Emulation (LANE)
13.1 ATM
13.1.1 The ATM Model
13.1.2 Virtual Circuits
13.1.3 ATM Addressing
13.1.3.1 VPI/VCI Addresses
13.1.3.2 NSAP Addresses
13.1.4 ATM Protocols
13.2 LAN Emulation (LANE)
13.2.1 LANE Components
13.2.2 LANE Operation
13.2.3 Address Resolution
13.2.4 LANE Component Placement
13.2.5 LANE Component Redundancy (SSRP)
13.3 LANE Configuration
13.3.1 Configuring the LES and BUS
13.3.2 Configuring the LECS
13.3.3 Configuring Each LEC
13.3.4 Viewing the LANE Configuration
14. InterVLAN Routing
14.1 InterVLAN Routing Design
14.1.1 Routing with Multiple Physical Links
14.1.2 Routing over Trunk Links
14.1.2.1 802.1Q and ISL Trunks
14.1.2.2 ATM LANE
14.2 Routing with an Integrated Router
14.3 InterVLAN Routing Configuration
14.3.1 Accessing the Route Processor
14.3.2 Establishing VLAN Connectivity
14.3.2.1 Establishing VLAN Connectivity with Physical Interfaces
14.3.2.2 Establishing VLAN Connectivity with Trunk Links
14.3.2.3 Establishing VLAN Connectivity with LANE
14.3.2.4 Establishing VLAN Connectivity with Integrated Routing Processors
14.3.3 Configure Routing Processes
14.3.4 Additional InterVLAN Routing Configurations
15. Multilayer Switching (MLS)
15.1 Multilayer Switching Components
15.2 MLS-RP Advertisements
15.3 Configuring Multilayer Switching
15.4 Flow Masks
15.5 Configuring the MLS-SE
15.5.1 MLS Caching
15.5.2 Verifying MLS Configurations
15.5.3 External Router Support
15.5.4 Switch Inclusion Lists
15.5.5 Displaying MLS Cache Entries
16. Cisco Express Forwarding (CEF)
16.1 CEF Components
16.1.1 Forwarding Information Base (FIB)
16.1.2 Adjacency Tables
16.2 CEF Operation Modes
16.3 Configuring Cisco Express Forwarding
16.3.1 Configuring Load Balancing for CEF
16.3.1.1 Per-Destination Load Balancing
16.3.1.2 Per-Packet Load Balancing
16.3.2 Configuring Network Accounting for CEF
17. The Hot Standby Router Protocol (HSRP)
17.1 Traditional Redundancy Methods
17.1.1 Default Gateways
17.1.2 Proxy ARP
17.1.3 Routing Information Protocol (RIP)
17.1.4 ICMP Router Discovery Protocol (IRDP)
17.2 Hot Standby Router Protocol
17.2.1 HSRP Group Members
17.2.2 Addressing HSRP Groups Across ISL Links
17.3 HSRP Operations
17.3.1 The Active Router
17.3.2 Locating the Virtual Router MAC Address
17.3.3 Standby Router Behavior
17.3.4 HSRP Messages
17.3.5 HSRP States
17.4 Configuring HSRP
17.4.1 Configuring an HSRP Standby Interface
17.4.2 Configuring HSRP Standby Priority
17.4.3 Configuring HSRP Standby Preempt
17.4.4 Configuring the Hello Message Timers
17.4.5 HSRP Interface Tracking
17.4.6 Configuring HSRP Tracking
17.4.7 HSRP Status
17.5 Troubleshooting HSRP
18. Multicasts
18.1 Unicast Traffic
18.2 Broadcast Traffic
18.3 Multicast Traffic
18.4 Multicast Addressing
18.4.1 Multicast Address Structure
18.4.2 Mapping IP Multicast Addresses to Ethernet
18.4.3 Managing Multicast Traffic
18.4.4 Subscribing and Maintaining Groups
18.4.4.1 IGMP Version 1
18.4.4.2 IGMP Version 2
18.4.5 Switching Multicast Traffic
18.5 Routing Multicast Traffic
18.5.1 Distribution Trees
18.5.2 Multicast Routing Protocols
18.5.2.1 Dense Mode Routing Protocols
18.5.2.2 Sparse Mode Routing Protocols
18.6 Configuring IP Multicast
18.6.1 Enabling IP Multicast Routing
18.6.2 Enabling PIM on an Interface
18.6.2.1 Enabling PIM in Dense Mode
18.6.2.2 Enabling PIM in Sparse Mode
18.6.2.3 Enabling PIM in Sparse-Dense Mode
18.6.2.4 Selecting a Designated Router
18.6.3 Configuring a Rendezvous Point
18.6.4 Configuring Time-To-Live
18.6.5 Debugging Multicast
18.6.6 Configuring Internet Group Management Protocol (IGMP)
18.6.7 Configuring Cisco Group Management Protocol (CGMP)
19. Quality of Service
19.1 Understanding the Need for Quality of Service
19.2 QoS Types
19.2.1 Best-Effort Delivery
19.2.2 Integrated Services Model
19.2.3 Differentiated Services Model
19.3 Differentiated Services QoS
19.3.1 IEEE 802.1p
19.3.2 Using the QoS Model
19.3.3 Prioritizing the Traffic Classes
19.3.4 Queuing Methods
19.3.4.1 Auto-QoS
19.4 Configuring QoS
19.4.1 Per-interface QoS Trust
19.4.2 Defining a QoS Policy
19.4.3 Configuring and Tuning Egress Scheduling
19.4.4 Congestion Prevention
20. IP Telephony
20.1 Inline Power
20.1.1 Inline Power Configuration and Verification
20.2 Voice VLANs
20.2.1 Voice VLAN Configuration and Verification
20.3 Voice QoS
20.3.1 QoS Trust
20.3.1.1 QoS Trust Configuration and Verification
20.3.2 Voice Packet Classification
21. Controlling Access in the Campus Environment
21.1 Access Policies
21.2 Managing Network Devices
21.2.1 Physical Access
21.2.2 Passwords
21.2.3 Privilege Levels
21.2.4 Virtual Terminal Access
21.3 Access Layer Policy
21.4 Distribution Layer Policy
21.4.1 Filtering Traffic at the Distribution Layer
21.4.2 Controlling Routing Update Traffic
21.4.3 Configuring Route Filtering
21.5 Core Layer Policy
22. Monitoring and Troubleshooting
22.1 Monitoring Cisco Switches
22.1.1 Out-of-Band Management
22.1.1.1 Console Port Connection
22.1.1.2 Serial Line Internet Protocol (SLIP)
22.1.2 In-Band Management
22.1.2.1 SNMP
22.1.2.2 Telnet Client Access
22.1.2.3 Cisco Discovery Protocol (CDP)
22.1.3 Embedded Remote Monitoring
22.1.4 Switched Port Analyzer
22.1.5 CiscoWorks 2000
22.2 General Troubleshooting Model
22.2.1 Troubleshooting with show Commands
22.2.2 Physical Layer Troubleshooting
22.2.3 Troubleshooting Ethernet
22.2.3.1 Network Testing
22.2.3.2 The Traceroute Command
22.2.3.3 Network Media Test Equipment
LIST OF TABLES
TABLE 1.1: Network Service Types
TABLE 1.2: OSI Encapsulation
TABLE 1.3: Differences between Switches and Bridges
TABLE 1.4: Parameters for the Extended access-list Command
TABLE 2.1: Private IP Address Ranges
TABLE 2.2: The Metrics used by Different Routing Protocols
TABLE 3.1: Coaxial Cable for Ethernet
TABLE 3.2: Twisted-Pair and Fiber Optic Cable for Ethernet
TABLE 3.3: Fast Ethernet Cabling and Distance Limitations
TABLE 3.4: Gigabit Ethernet Cabling and Distance Limitations
TABLE 3.5: Catalyst Switch File Locations
TABLE 3.6: File Management Commands
TABLE 4.1: Parameters for the ping Command
TABLE 4.2: Parameters for the traceroute Command
TABLE 5.1: OSPF Terminology
TABLE 5.2: The Default Hello and Dead Time Intervals
TABLE 5.3: Default Costs in OSPF
TABLE 7.1: EIGRP Terminology
TABLE 8.1: The Categories of BGP-4 Attributes
TABLE 8.2: The BGP-4 Attributes supported by Cisco
TABLE 10.1: The Default Administrative Distance
TABLE 12.1: MST Configuration Commands
TABLE 13.1: Automatic NSAP Address Generation for LANE Components
TABLE 15.1: Displaying Specific MLS Cache Entries
TABLE 16.1: Adjacency Types for Exception Processing
TABLE 18.1: Well-Known Class D Addresses
TABLE 19.1: Differentiated Services Types of Traffic
TABLE 21.1: Access Policy Guidelines
TABLE 22.1: Keywords and Arguments for the set snmp trap Command
TABLE 22.2: CiscoWorks 2000 LAN Management Features
TABLE 22.3: Ethernet Media Problems
TABLE 22.4: Parameters for the ping Command
TABLE 22.5: Parameters for the traceroute Command
LIST OF ACRONYMS
AAA - Authentication, Authorization, and Accounting
ABR - Area Border Router
ACF - Advanced Communications Function
ACK - Acknowledgment bit (in a TCP segment)
ACL - Access Control List
ACS - Access Control Server
AD - Advertised Distance
ADSL - Asymmetric Digital Subscriber Line
ANSI - American National Standards Institute
API - Application Programming Interface
APPC - Advanced Program-to-Program Communications
ARAP - AppleTalk Remote Access Protocol
ARE - All Routes Explorer
ARP - Address Resolution Protocol
ARPA - Advanced Research Projects Agency
ARPANET - Advanced Research Projects Agency Network
AS - Autonomous System
ASA - Adaptive Security Algorithm
ASBR - Autonomous System Boundary Router
ASCII - American Standard Code for Information Interchange
ASIC - Application-Specific Integrated Circuit
ATM - Asynchronous Transfer Mode
AUI - Attachment Unit Interface
Bc - Committed Burst (Frame Relay)
B channel - Bearer channel (ISDN)
BDR - Backup Designated Router
Be - Excess Burst (Frame Relay)
BECN - Backward Explicit Congestion Notification (Frame Relay)
BGP - Border Gateway Protocol
BGP-4 - BGP version 4
BIA - Burned-In Address (another name for a MAC address)
BOD - Bandwidth on Demand
BPDU - Bridge Protocol Data Unit
BRF - Bridge Relay Function
BRI - Basic Rate Interface (ISDN)
BSD - Berkeley Software Distribution (UNIX)
CBT - Core-Based Trees
CBWFQ - Class-Based Weighted Fair Queuing
CCITT - Consultative Committee for International Telegraph and Telephone
CCO - Cisco Connection Online
CDDI - Copper Distributed Data Interface
CDP - Cisco Discovery Protocol
CEF - Cisco Express Forwarding
CHAP - Challenge Handshake Authentication Protocol
CIDR - Classless Interdomain Routing
CIR - Committed Information Rate (Frame Relay)
CLI - Command-Line Interface
CLNP - Connectionless Network Protocol (IS-IS)
CLNS - Connectionless Network Service (IS-IS)
CPE - Customer Premises Equipment
CPU - Central Processing Unit
CR - Carriage Return
CRC - Cyclic Redundancy Check (error)
CRF - Concentrator Relay Function
CSNP - Complete Sequence Number PDU (IS-IS)
CST - Common Spanning Tree
CSU - Channel Service Unit
DB - Data Bus (connector)
DCE - Data Circuit-Terminating Equipment
dCEF - Distributed CEF
DDR - Dial-on-Demand Routing
DE - Discard Eligible (indicator)
DECnet - Digital Equipment Corporation protocols
DES - Data Encryption Standard
DHCP - Dynamic Host Configuration Protocol
DIS - Designated Intermediate System (IS-IS)
DLCI - Data-Link Connection Identifier
DNIC - Data Network Identification Code (X.121 addressing)
DNS - Domain Name System
DoD - Department of Defense (US)
DRiP - Duplicate Ring Protocol
DR - Designated Router
DS - Digital Signal
DS0 - Digital Signal level 0
DS1 - Digital Signal level 1
DS3 - Digital Signal level 3
DSL - Digital Subscriber Line
DSU - Data Service Unit
DTE - Data Terminal Equipment
DTP - Dynamic Trunking Protocol
DUAL - Diffusing Update Algorithm
DVMRP - Distance Vector Multicast Routing Protocol
EBC - Ethernet Bundling Controller
EGP - Exterior Gateway Protocol
EIA/TIA - Electronic Industries Association/Telecommunications Industry Association
EIGRP - Enhanced IGRP
ES - End System (IS-IS)
ES-IS - End System-to-Intermediate System Protocol (IS-IS)
ESH - End System Hello message (IS-IS)
FCC - Federal Communications Commission
FCS - Frame Check Sequence
FC - Feasibility Condition (routing)
FD - Feasible Distance (routing)
FDDI - Fiber Distributed Data Interface
FEC - Fast EtherChannel
FECN - Forward Explicit Congestion Notification
FIB - Forwarding Information Base
FIFO - First-In, First-Out (queuing)
FLSM - Fixed-Length Subnet Mask
FR - Frame Relay
FS - Feasible Successor (routing)
FSSRP - Fast Simple Server Redundancy Protocol
FTP - File Transfer Protocol
GBIC - Gigabit Interface Converter
GBPT - Generic Bridge PDU Tunneling
GEC - Gigabit EtherChannel
GSR - Gigabit Switch Router
HDLC - High-Level Data Link Control
HDSL - High-data-rate Digital Subscriber Line
HSRP - Hot Standby Router Protocol
HSSI - High-Speed Serial Interface
HTTP - Hypertext Transfer Protocol
I/O - Input/Output
IANA - Internet Assigned Numbers Authority
ICMP - Internet Control Message Protocol
IDRP - Interdomain Routing Protocol
IDN - International Data Number
IEEE - Institute of Electrical and Electronics Engineers
IETF - Internet Engineering Task Force
IGP - Interior Gateway Protocol
IGRP - Interior Gateway Routing Protocol
ILMI - Integrated Local Management Interface
IOS - Internetwork Operating System
IP - Internet Protocol
IPSec - IP Security
IPv6 - IP version 6
IPX - Internetwork Packet Exchange (Novell)
IRDP - ICMP Router Discovery Protocol
IS - Intermediate System (IS-IS); also Information Systems
IS-IS - Intermediate System-to-Intermediate System
ISDN - Integrated Services Digital Network
ISH - Intermediate System Hello message (IS-IS)
ISO - International Organization for Standardization
ISOC - Internet Society
ISP - Internet Service Provider
IST - Internal Spanning Tree
ITU-T - International Telecommunication Union–Telecommunication Standardization Sector
kbps - kilobits per second (bandwidth)
LACP - Link Aggregation Control Protocol
LAN - Local Area Network
LANE - LAN Emulation
LAPB - Link Access Procedure, Balanced
LAPD - Link Access Procedure on the D channel
LEC - LAN Emulation Client
LECS - LAN Emulation Configuration Server
LED - Light Emitting Diode
LES - LAN Emulation Server
LLC - Logical Link Control (OSI Layer 2 sublayer)
LLQ - Low-Latency Queuing
LMI - Local Management Interface
LSA - Link-State Advertisement
LSP - Link-State PDU
MAC - Media Access Control (OSI Layer 2 sublayer)
MAN - Metropolitan-Area Network
MD5 - Message Digest Algorithm 5
MLS - Multilayer Switching
MLS-RP - Multilayer Switching Route Processor
MLS-SE - Multilayer Switching Switch Engine
MLSP - Multilayer Switching Protocol
MOSPF - Multicast Open Shortest Path First
MPLS - Multiprotocol Label Switching
MSAU - Multistation Access Unit
MSFC - Multilayer Switch Feature Card
MST - Multiple Spanning Tree
MTU - Maximum Transmission Unit
NAK - Negative Acknowledgment
NAS - Network Access Server
NAT - Network Address Translation
NBMA - Nonbroadcast Multiaccess
NetBEUI - NetBIOS Extended User Interface
NetBIOS - Network Basic Input/Output System
NFFC - NetFlow Feature Card
NMS - Network Management System
NNI - Network-to-Network Interface
NPDU - Network Protocol Data Unit
NVRAM - Nonvolatile Random Access Memory
OC - Optical Carrier
ODBC - Open Database Connectivity
OLE - Object Linking and Embedding
OSI - Open Systems Interconnection (model)
OSPF - Open Shortest Path First
OTDR - Optical Time Domain Reflectometer
OUI - Organizationally Unique Identifier
PAgP - Port Aggregation Protocol
PAP - Password Authentication Protocol
PAT - Port Address Translation
PDN - Public Data Network
PDU - Protocol Data Unit (i.e., a data packet)
PIM - Protocol Independent Multicast
PIM-SM - Protocol Independent Multicast, Sparse Mode
PIM-DM - Protocol Independent Multicast, Dense Mode
PIX - Private Internet Exchange (Cisco firewall)
PNNI - Private Network-to-Network Interface
POP - Point of Presence
POTS - Plain Old Telephone Service
PPP - Point-to-Point Protocol
PQ - Priority Queuing
PRI - Primary Rate Interface (ISDN)
PSNP - Partial Sequence Number PDU (IS-IS)
PSTN - Public Switched Telephone Network
PTT - Post, Telephone, and Telegraph
PVC - Permanent Virtual Circuit (ATM)
PVST - Per-VLAN Spanning Tree
PVST+ - Per-VLAN Spanning Tree Plus
QoS - Quality of Service
RADIUS - Remote Authentication Dial-In User Service
RAS - Remote Access Service
RIF - Routing Information Field
RIP - Routing Information Protocol
RJ - Registered Jack (connector)
RMON - Remote Monitoring
RP - Rendezvous Point
RPF - Reverse Path Forwarding
RSFC - Route Switch Feature Card
RSM - Route Switch Module
RSP - Route Switch Processor
RSTP - Rapid Spanning Tree Protocol
RTP - Reliable Transport Protocol
RTO - Retransmission Timeout
SA - Source Address
SAID - Security Association Identifier
SAP - Service Access Point; also Service Advertising Protocol (Novell)
SAPI - Service Access Point Identifier
SAR - Segmentation and Reassembly
SDLC - Synchronous Data Link Control (SNA)
SIA - Stuck in Active (EIGRP)
SIN - Ships-in-the-Night (routing)
SLIP - Serial Line Internet Protocol
SMDS - Switched Multimegabit Data Service
SMTP - Simple Mail Transfer Protocol
SNA - Systems Network Architecture (IBM)
SNAP - Subnetwork Access Protocol
SNMP - Simple Network Management Protocol
SOF - Start of Frame
SOHO - Small Office, Home Office
SONET - Synchronous Optical Network
SONET/SDH - Synchronous Optical Network/Synchronous Digital Hierarchy
SPAN - Switched Port Analyzer
SPF - Shortest Path First
SPID - Service Profile Identifier
SPP - Sequenced Packet Protocol (VINES)
SPT - Shortest Path Tree (IS-IS)
SPX - Sequenced Packet Exchange (Novell)
SQL - Structured Query Language
SRAM - Static RAM
SRB - Source-Route Bridging
SRT - Source-Route Transparent (bridging)
SRTT - Smoothed Round-Trip Time (EIGRP)
SS7 - Signaling System 7
SSAP - Source Service Access Point (LLC)
SSE - Silicon Switching Engine
SSP - Silicon Switch Processor
SSRP - Simple Server Redundancy Protocol
STA - Spanning-Tree Algorithm
STP - Spanning-Tree Protocol; also Shielded Twisted-Pair (cable)
SVC - Switched Virtual Circuit (ATM)
SYN - Synchronize (TCP segment)
TA - Terminal Adapter (ISDN)
TAC - Technical Assistance Center (Cisco)
TACACS - Terminal Access Controller Access Control System
TCI - Tag Control Information
TCP - Transmission Control Protocol
TCP/IP - Transmission Control Protocol/Internet Protocol
TCN - Topology Change Notification
TDM - Time-Division Multiplexing
TDR - Time Domain Reflectometer
TFTP - Trivial File Transfer Protocol
TIA - Telecommunications Industry Association
TLV - Type-Length-Value
ToS - Type of Service
TPID - Tag Protocol Identifier
TrBRF - Token Ring Bridge Relay Function
TrCRF - Token Ring Concentrator Relay Function
TTL - Time To Live
UDLD - Unidirectional Link Detection
UDP - User Datagram Protocol
UNC - Universal Naming Convention or Uniform Naming Convention
UNI - User-Network Interface
URL - Uniform Resource Locator
UTC - Coordinated Universal Time (same as Greenwich Mean Time)
UTL - Utilization
UTP - Unshielded Twisted-Pair (cable)
VBR - Variable Bit Rate
VC - Virtual Circuit (ATM)
VID - VLAN Identifier
VIP - Versatile Interface Processor
VLAN - Virtual LAN
VLSM - Variable-Length Subnet Mask
VMPS - VLAN Membership Policy Server
VPN - Virtual Private Network
VTP - VLAN Trunking Protocol
vty - Virtual terminal line
WAIS - Wide Area Information Server
WAN - Wide Area Network
WFQ - Weighted Fair Queuing
WWW - World Wide Web
XNS - Xerox Network Systems
XOR - Exclusive OR
XOT - X.25 over TCP
ZIP - Zone Information Protocol (AppleTalk)
Building Scalable Cisco Internetworks (BSCI) and Building Cisco Multilayer Switched Networks (BCMSN)
Exam Code: 642-891
Certifications:
Cisco Certified Internetwork Professional (CCIP) - Core
Cisco Certified Network Professional (CCNP) - Core and Recertification
Cisco Certified Design Professional (CCDP) - Core and Recertification
Prerequisites:
Cisco CCNA 640-801 – Cisco Certified Network Associate, or
Cisco CCNA 640-811 – Interconnecting Cisco Network Devices (ICND) AND
Cisco CCNA 640-821 – Introduction to Cisco Networking Technologies (INTRO), or
Cisco CCDA 640-861 – Designing Cisco Internetwork Solutions.
About This Study Guide
This Study Guide is based on the current pool of exam questions for the 642-891 exam. As such, it provides all the information required to pass the Cisco 642-891 exam and is organized around the specific skills that are tested in that exam. Thus, the information contained in this Study Guide is specific to the 642-891 exam and does not represent a complete reference work on Building Scalable Cisco Internetworks or Building Cisco Multilayer Switched Networks.
Topics covered in this Study Guide includes: List the key information routers needs to route data; Describe
classful and classless routing protocols; Describe link-state router protocol operation; Compare classful and
classless routing protocols; Compare distance vector and link state routing protocols; Describe concepts
relating to extending IP addresses and the use of VLSMs to extend IP addresses; Describe the features and
operation of EIGRP; Describe the features and operation of single area OSPF; Describe the features and
operation of multi-area OSPF; Explain basic OSI terminology and network layer protocols used in OSI;
Identify similarities and differences between Integrated IS-IS and OSPF; List the types of IS-IS routers and
their role in IS-IS area design; Describe the hierarchical structure of IS-IS areas; Describe the concept of
establishing adjacencies; Describe the features and operation of BGP; Explain how BGP policy-based
routing functions within an autonomous system; Explain the use of redistribution between BGP and Interior
Gateway Protocols (IGPs); Implementation and Configuration of OSPF in a single-area and in a multi-area
network, Enhanced IGRP, and BGP and verifying their proper operation; Identifying the steps to select and
configure the different ways to control routing update traffic; Identifying the steps to configure router
redistribution in a network; Identifying the steps to configure policy-based routing using route maps;
Identifying the steps to configure a router for Network Address Translation with overload, static translations,
and route maps; Describing the three-layer hierarchical design model and explain the function of each layer;
Choosing the correct routing protocol to meet the requirements; Identifying the correct IP addressing scheme;
Describing the concepts relating to route summarization and apply them to hypothetical scenarios;
Troubleshooting the OSPF operation in a single area, the OSPF operation in multiple areas, the Enhanced
IGRP operation, and the BGP operation; Identifying verification methods which ensure proper operation of
Integrated IS-IS on Cisco routers; Identifying the steps to verify route redistribution; Describing the
scalability problems associated with internal BGP; Interpreting the output of various show and debug
commands to determine the cause of route selection errors and configuration problems; Describing the
functionality of CGMP, Enabling CGMP on the distribution layer devices, Identifying the correct Cisco
Systems product solution given a set of network switching requirements; Describing how switches facilitate
Multicast Traffic; Translating Multicast Addresses into MAC addresses; Identifying the components
necessary to effect multilayer switching; Applying flow masks to influence the type of MLS cache;
Describing layer 2, 3, 4 and multilayer switching; Verifying existing flow entries in the MLS cache;
Describing how MLS functions on a switch; Configuring a switch to participate in multilayer switching;
Describing Spanning Tree; Configuring the switch devices to improve Spanning Tree Convergence in the
network; Identifying Cisco Enhancements that improve Spanning Tree Convergence; Configuring a switch
device to Distribute Traffic on Parallel Links; Providing physical connectivity between two devices within a
switch block; Providing connectivity from an end user station to an access layer device; Providing
connectivity between two network devices; Configuring a switch for initial operation; Applying IOS
command set to diagnose and troubleshoot switched network problems; Describing the different Trunking
Protocols; Configuring Trunking on a switch; Maintaining VLAN configuration consistency in a switched
network; Configuring the VLAN Trunking Protocol; Describing the VTP Trunking Protocol; Describing
LAN segmentation using switches; Configuring a VLAN; Ensuring broadcast domain integrity by
establishing VLANs; Facilitating InterVLAN Routing in a network containing both switches and routers;
and Identify the network devices required to effect InterVLAN routing; Quality of Service; and IP
Telephony.
Intended Audience
This Study Guide is targeted specifically at people who wish to take the Cisco 642-891 exam. The
information in this Study Guide is specific to the exam. It is not a complete reference work. Although our
Study Guides are aimed at newcomers to the world of IT, the concepts dealt with in this Study Guide are
complex and require an understanding of material provided for the Cisco CCNA 640-801 - Routing and
Switching Certification Exam or the Cisco CCDA 640-861 - Designing for Cisco Internetwork Solutions
Exam. Knowledge of CompTIA's Network+ course would also be advantageous.
Note: There is a fair amount of overlap between this Study Guide and the
640-801, 642-801, 642-811 and 642-831 Study Guides. We would,
however, not advise skimming over the information that seems familiar, as
this Study Guide expands on the information in the 642-831 and 640-801
Study Guides.
How To Use This Study Guide
To benefit from this Study Guide we recommend that you:
•   Although there is a fair amount of overlap between this Study Guide and the 640-801 Study Guide, the
    642-801 Study Guide, the 642-811 Study Guide and the 642-831 Study Guide, the relevant information
    from those Study Guides is included in this Study Guide. This is thus the only Study Guide you will
    require to pass the 642-891 exam.
•   Study each chapter carefully until you fully understand the information. This will require regular and
    disciplined work. Where possible, attempt to implement the information in a lab setup.
•   Be sure that you have studied and understand the entire Study Guide before you take the exam.
Note: Remember to pay special attention to these note boxes as they contain
important additional information that is specific to the exam.
Note: A large portion of the 642-891 exam is based on IP addressing. For this
reason, you must thoroughly understand IP addressing. IP addressing is
discussed in detail in Section 2 of this Study Guide.
Good luck!
1. The Campus Network
A campus network is a building or group of buildings connected into one network that is typically owned
by one company. This local area network (LAN) typically uses Ethernet, 802.11 wireless LANs, Fast
Ethernet, Fast EtherChannel, Gigabit Ethernet, Token Ring, Fiber Distributed Data Interface (FDDI),
or Asynchronous Transfer Mode (ATM) technologies. The task for network administrators is to ensure that
the campus network runs effectively and efficiently. This requires an understanding of current and
emerging campus networks and of equipment, such as Cisco switches, which can be used to maximize network
performance. Understanding how to design for the emerging campus networks is critical for implementing
production networks.
1.1 The Traditional Shared Campus Network
In the 1990s, the traditional campus network started as one LAN and grew until segmentation needed to take
place to keep the network up and running. In this era of rapid expansion, response time was secondary to
ensuring network functionality. Typical campus networks ran on 10BaseT or 10Base2, which were prone to
collisions and were, in effect, single collision domains. Ethernet was used because it was scalable, effective, and
comparatively inexpensive. Because a campus network can easily span many buildings, bridges were used to
connect the buildings together. As more users were attached to the hubs used in the Ethernet network,
performance of the network became extremely slow.

Availability and performance are the major problems with traditional campus networks, and bandwidth
limitations compound these problems. The three performance problems in traditional campus networks were:
1.1.1 Collisions
Because all devices could see each other, they could also collide with each other. If a host had to broadcast,
then all other devices had to listen, even though they themselves were trying to transmit. And if a device
were to malfunction, it could bring the entire network down. Bridges were used to break these networks into
subnetworks, but broadcast problems remained. Bridges also solved distance-limitation problems because
they usually had repeater functions built into the electronics.
1.1.2 Bandwidth
The bandwidth of a segment is measured by the amount of data that can be transmitted at any given time.
However, the amount of data that can be transmitted at any given time is dependent on the medium, i.e. its
carrier line: on its quality and length. All lines suffer from attenuation, which is the progressive degradation
of the signal as it travels along the line and is due to energy loss and energy absorption. For the remote end to
understand digital signaling, the signal must stay above a critical value. If it drops below this critical value, the
remote end will not be able to receive the data. The solution to bandwidth issues is maintaining the distance
limitations and designing the network with proper segmentation using switches and routers.
Another problem is congestion, which happens on a segment when too many devices are trying to use the
same bandwidth. By properly segmenting the network, you can eliminate some of these bandwidth issues.
1.1.3 Broadcasts and Multicasts
All protocols have broadcasts built in as a feature, but some protocols, such as Internet Protocol (IP),
Address Resolution Protocol (ARP), Network Basic Input Output System (NetBIOS), Internetworking
Packet eXchange (IPX), Service Advertising Protocol (SAP), and Routing Information Protocol (RIP), need
to be configured correctly. However, there are features, such as packet filtering and queuing, that are built
into the Cisco router Internetwork Operating System (IOS) that, if correctly designed and implemented,
can alleviate these problems.
Multicasts are broadcasts that are destined for a specific or defined group of users. If you have large
multicast groups or a bandwidth-intensive application, such as Cisco's IPTV application, multicast traffic
can consume most of the network bandwidth and resources.
To solve broadcast issues, create network segmentation with bridges, routers, and switches. Another solution
is Virtual LANs (VLANs). A VLAN is a group of devices on different network segments defined as a
broadcast domain by the network administrator. The benefit of VLANs is that physical location is no longer
a factor in determining the port into which you plug a device. You can plug a
device into any switch port, and the network administrator gives that port a VLAN assignment. However,
routers or layer 3 switches must be used for different VLANs to communicate. VLANs are discussed in
more detail in Section 11.
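As a minimal illustrative sketch only (the VLAN IDs, interface numbers, and IP addresses below are hypothetical and not taken from this Study Guide), a switch port can be assigned to a VLAN, and a router subinterface can provide the layer 3 path between VLANs:

```
! Hypothetical switch configuration: place port FastEthernet0/1 in VLAN 10
vlan 10
 name Engineering
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10

! Hypothetical "router-on-a-stick" configuration: route between VLAN 10 and VLAN 20
interface FastEthernet0/0.10
 encapsulation dot1q 10
 ip address 10.1.10.1 255.255.255.0
interface FastEthernet0/0.20
 encapsulation dot1q 20
 ip address 10.1.20.1 255.255.255.0
```

With this kind of setup, hosts in VLAN 10 and VLAN 20 remain in separate broadcast domains and can reach each other only through the router subinterfaces, which is exactly why a router or layer 3 switch is required for inter-VLAN communication.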
1.2 The New Campus Network
The problems with collisions, bandwidth, and broadcasts, together with the changes in customer network
requirements, have necessitated a new campus network design. Higher user demands and complex
applications force network designers to think more about traffic patterns instead of solving a typical
isolated department issue. Network administrators now need to create a network that makes everyone
capable of reaching all network services easily. They must therefore pay attention to traffic patterns
and how to solve bandwidth issues. This can be accomplished with higher-end routing and switching
techniques. Because of the new bandwidth-intensive applications, video and audio to the desktop, as well as
more and more work being performed on the Internet, the new campus model must provide:
•   Fast Convergence, i.e., when a network change takes place, the network must be able to adapt very
    quickly to new changes and keep data moving quickly.
•   Deterministic paths, i.e., users must be able to gain access to a certain area of the network without fail.
•   Deterministic failover, i.e., the network design must have provisions which ensure that the network
    stays up and running even if a link fails.
•   Scalable size and throughput, i.e., the network infrastructure must be able to handle the new increase
    in traffic as users and new devices are added to the network.
•   Centralized applications, i.e., enterprise applications accessed by all users must be available to support
    all users on the internetwork.
•   The new 20/80 rule, i.e., instead of 80 percent of the users' traffic staying on the local network, 80
    percent of the traffic will now cross the backbone and only 20 percent will stay on the local network.
    (The new 20/80 rule is discussed below in Section 1.3.)
•   Multiprotocol support, i.e., networks must support multiple protocols, some of which are routed
    protocols used to send user data through the internetwork, such as IP or IPX; and some of which are
    routing protocols used to send network updates between routers, such as RIP, Enhanced Interior
    Gateway Routing Protocol (EIGRP), and Open Shortest Path First (OSPF).
•   Multicasting, which is sending a broadcast to a defined subnet or group of users who can be placed in
    multicast groups.
1.3 The 80/20 Rule and the New 20/80 Rule
The traditional campus network followed what is called the 80/20 rule because 80% of the users' traffic was
supposed to remain on the local network segment and only 20% or less was supposed to cross the routers or
bridges to the other network segments. If more than 20% of the traffic crossed the network segmentation
devices, performance was compromised. Because of this, users and groups were placed in the same physical
location. In other words, users who required a connection to one physical network segment in order to share
network resources, such as network servers, printers, shared directories, software programs, and applications,
had to be placed in the same physical location. Therefore, network administrators designed and implemented
networks to ensure that all of the network resources for the users were contained within their own network
segment, thus ensuring acceptable performance levels.
With new Web-based applications and computing, any computer can be a subscriber or a publisher at any
time. Furthermore, because businesses are pulling servers from remote locations and creating server farms to
centralize network services for security, reduced cost, and administration, the old 80/20 rule cannot work in
this environment and, hence, is obsolete. All traffic must now traverse the campus backbone, effectively
replacing the 80/20 rule with a 20/80 rule. Approximately 20% of user activity is performed on the local
network segment while up to 80% of user traffic crosses the network segmentation points to access
network services.
The problem with the 20/80 rule is that the routers must be able to handle an enormous amount of
network traffic quickly and efficiently. More and more users need to cross broadcast domains, which are
also called Virtual LANs (VLANs). This puts the burden on routing, or layer 3 switching. By using VLANs
within the new campus model, you can control traffic patterns and control user access more easily than in the
traditional campus network. VLANs break up the network by using either a router or a switch that can
perform layer 3 functions. VLANs are discussed in more detail in Section 11.
The network should be designed around traffic flow and not a specific type of traffic. Each network service
type is determined by the situation of the network service in relation to the user. The three types of traffic
flow within a campus network are illustrated below.
TABLE 1.1: Network Service Types
Service Type    Service Location                Traffic Flow
Local           Same sector (VLAN user)         Access Layer Access
Remote          Different sector (VLAN user)    Distribution Layer Access
Enterprise      Central (Campus users)          Core Layer Access
1.4 Characterizing Scalable Internetworks
The key requirements for scalable internetworks are:
•   They must be reliable and available. This includes being dependable and available 24 hours a day, 7 days
    a week. In addition, failures need to be isolated and recovery must be invisible to the end user.
•   They must be responsive. This includes managing the quality of service (QoS) needs for the different
    protocols being used without affecting response at the desktop.
•   They must be efficient. Large internetworks must optimize the use of resources, especially bandwidth.
    Reducing the amount of overhead traffic, such as unnecessary broadcasts, service location traffic, and
    routing updates, results in an increase in data throughput without increasing the cost of hardware or the
    need for additional WAN services.
•   They must be adaptable and serviceable. This includes being able to accommodate disparate networks
    and interconnect independent network clusters, as well as to integrate legacy technologies, such as those
    running Systems Network Architecture (SNA).
•   They must be accessible and secure. This includes the ability to enable connections into the
    internetwork using dedicated, dialup, and switched services while maintaining network integrity.
1.4.1 Reliability and Availability
The internetwork should be reliable and available at all layers, especially at the core layer. Core routers are
reliable when they can accommodate failures by rerouting traffic and respond quickly to changes in the
network topology. The protocols that enhance network reliability and availability that the Cisco IOS
supports are scalable protocols such as Open Shortest Path First (OSPF) and Enhanced IGRP (EIGRP).
These protocols provide reachability and fast convergence times.
•   Scalable networks, including those using a hierarchical design, can have a large number of
    subnetworks. These networks can be subject to reachability problems due to metric limitations of
    distance vector routing protocols. Scalable routing protocols such as OSPF and EIGRP use metrics that
    expand the reachability potential for routing updates because they use cost, rather than hop count, as a
    metric.
•   Scalable protocols can converge quickly because of the router's ability to detect failure rapidly and
    because each router maintains a network topology map.
Route Metrics
In a routed network, the routing process relies on the routing protocol to locate the best path to the
destination network. Different routing protocols in the TCP/IP environment use different measuring
mechanisms, or metrics, to locate the best path to a destination network. In addition, routers advertise the
path to a network in terms of a metric value. Some examples of metrics are hop count and cost. If the
destination network is not local to the router, then the path is represented by the total of the metric values
defined for all of the links that must be traversed to reach the destination network.
Once the routing process knows the metric values associated with the different paths, the routing decision
can be made. The routing process will select the path that has the smallest metric value.
1.4.2 Responsiveness
In addition to improving network reachability and reliability, scalable protocols also improve responsiveness
because they support alternate paths and load balancing.
•   Scalable protocols enable a router to maintain a map of the entire network topology. When a failure is
    detected the router can reroute traffic by looking at the network topology and finding an alternate path.
    Enhanced IGRP also keeps a record of alternate routes in case the preferred route goes down.
•   Because scalable protocols have a map of the entire network topology, and because of how they
    maintain their routing tables, they are able to transport data across multiple paths simultaneously to a
    given location.
In addition, you can configure backup links on WAN connections when you need to make the primary WAN
connection more reliable; and when you need to increase availability by configuring the backup connections
to be used when a primary connection is experiencing congestion.
1.4.3 Efficiency
Optimizing your network at all layers of an internetwork hierarchy is critical because it can reduce potential
costs in additional WAN services. Bandwidth optimization is normally done by reducing the amount of
update traffic over a WAN connection, without dropping essential routing information, to increase data
traffic throughput. The Cisco IOS has a number of features that help optimize bandwidth use. These are:
•   Access lists, which can be used to allow or prevent protocol update traffic, data traffic, and broadcast
    traffic. Access lists are available for IP and other protocols and can be tailored to meet the needs of each
    protocol.
•   Route summarization and incremental updates, which reduce the number of routing table entries and, in
    turn, the number of router processing cycles.
•   Dial-on-demand routing (DDR), which can be used to create connections as required for infrequent
    traffic flow after interesting traffic is detected by the router.
•   Switched access, through packet-switched networks such as X.25 and Frame Relay, which offer the
    advantage of providing global connectivity through a large number of service providers with established
    circuits to most major cities.
•   Snapshot routing, which allows peer routers to exchange full distance vector routing information upon
    initial connection, and then on a predefined interval. This is usually used on ISDN connections and can
    reduce WAN costs when using distance vector protocols because routing information is exchanged at an
    interval you define. Between update exchanges, the routing tables for the distance vector protocols are
    kept frozen.
•   Compression, which can be used to reduce traffic that is crossing a WAN connection. Cisco supports
    TCP/IP header compression and payload compression.
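To make the access list and route summarization ideas concrete, here is a minimal IOS-style sketch. The network numbers, interface name, access-list number, and EIGRP autonomous system number are hypothetical, chosen only for illustration:

```
! Hypothetical: permit traffic sourced from 10.1.0.0/16 outbound; all other
! traffic is dropped by the implicit deny at the end of the access list
access-list 10 permit 10.1.0.0 0.0.255.255
interface Serial0/0
 ip access-group 10 out

! Hypothetical: advertise a single summary for 10.1.0.0/16 out this interface
! instead of every individual subnet, shrinking neighbors' routing tables
interface Serial0/0
 ip summary-address eigrp 100 10.1.0.0 255.255.0.0
```

The design trade-off is the one the bullets describe: filtering and summarizing reduce update and broadcast overhead on the WAN link at the cost of less granular routing information downstream.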
1.4.4 Adaptability and Serviceability
Because scalable internetworks experience change frequently, they must be able to adapt to possible changes.
It is difficult to anticipate every change that your company may make in terms of mergers and organizational
structure. Therefore, building an adaptable network protects capital investment. It also increases the
reliability of the network. It is essential that attention be given to the interoperability of both products and
applications when designing the network. Serviceability is related to adaptability, but it is more focused
toward being able to make changes to production systems without disrupting normal operations.
1.4.5 Accessibility and Security
Security is a major consideration, particularly as more companies connect to the Internet and thereby
increase the chance of access to the network. You must weigh the needs of users to access the network,
particularly when remote access is required, against the need to secure the company's network resources. It
is important to consider security as part of the initial design because it is very difficult to address this issue
as an afterthought.
1.5 Network Congestion
The consequence of having a network that is incapable of scaling is that as it grows it becomes constricted,
resulting in network congestion.
1.5.1 Problems Created by Network Congestion
Network congestion results when too many packets are competing for limited bandwidth. The problems
caused by network congestion can be easily identified using network-monitoring tools, such as Cisco's
TrafficDirector or a standard protocol analyzer. An understanding of the traffic volumes within the network
can also be gained by issuing commands, such as show interface, show buffers, and show queueing, at
the Cisco router.
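As a sketch of how these commands might be used from the CLI (the interface name is hypothetical), each one highlights a different congestion indicator:

```
Router# show interface serial0/0   ! look for input/output drops, queue depth, and load
Router# show buffers               ! look for buffer misses and failures
Router# show queueing              ! shows the queuing strategy applied per interface
```

Rising drop and buffer-failure counters over successive samples, rather than any single reading, are what suggest sustained congestion on a link.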
Problems created by network congestion are:
•   Excessive traffic;
•   Dropped packets;
•   Retransmission of packets;
•   Incomplete routing tables;
•   Incomplete server lists;
•   The Spanning-Tree Protocol breaks; and
•   Runaway congestion.
1.5.1.1 Excessive Traffic
If the traffic volume outgrows the network, the result is congestion. When this occurs on a single segment, it
results in the dropping of packets.
Ethernet has strict rules about accessing the medium. Physical problems, such as extraneous noise or
electromagnetic interference, can result in excessive traffic and can cause collisions. A collision requires all
transmitting devices to stop sending data and to wait a random amount of time before attempting to send the
original packet. Only the nodes involved in the collision are required to wait during the backoff period.
Other nodes must wait until the end of the jam signal and the interframe gap. If after 16 attempts the device
fails to transmit, it reports an error to the calling process. If for this or any other reason the device fails to
transmit and drops the packet from its buffer, the application typically retransmits the original packet. This
may result in increased congestion that grows exponentially. The latter is referred to as runaway congestion.
1.5.1.2 Dropped Packets
When congestion occurs, not all the packets can get through the network. The queues and buffers in the
intermediate forwarding devices, such as routers, overflow and must drop packets, causing an OSI higher-layer process on either end device to time out. Typically, the transport or application layers have the
responsibility to ensure the arrival of every piece of data. Maintaining the integrity of the transmission
requires the communication to be connection oriented, giving the end devices the mechanisms to perform
error detection and correction through windowing, sequencing, and acknowledgments.
1.5.1.3 Retransmission of Packets
If packets are dropped, the layer responsible for the integrity of the transmission will retransmit the lost
packets. If the session or application layer does not receive the packets that were resent in time, the result
will be either incomplete information or timeouts.
1.5.1.4 Incomplete Routing Tables
If a connection is congested, packets may be dropped, possibly resulting in the receipt of partial routing
updates. If the routing table of an intermediate forwarding device, such as a router, is incomplete, it may
make inaccurate forwarding decisions, resulting in loss of connectivity or even the dreaded routing loop.
1.5.1.5 Incomplete Server Lists
Congestion results in the random loss of packets. Under extreme circumstances, packet loss may result in
incomplete routing tables and server lists. Entries may ghost in and out of these tables. Users may find that
their favorite service is sometimes unavailable. The intermittent nature of this type of network problem
makes it difficult to troubleshoot.
1.5.1.6 The Spanning-Tree Protocol Breaks
The Spanning-Tree Protocol is maintained in each Layer 2 device, a switch or a bridge, allowing the device
to ensure that it has only one path back to the root bridge. Any redundant paths will be blocked, as long as
the Layer 2 device continues to see the primary path. The health of this primary path is ensured by the
receipt of spanning-tree updates. As soon as the Layer 2 device fails to see the updates, the device removes
the block on the redundant path. The block is removed only after several updates have been
missed, that is, after the MaxAge timer has been exceeded. This ensures some stability in the network. However, if
this problem occurs, in a short time, spanning-tree loops and broadcast storms will cause the network to
seize up and die.
1.5.1.7 Runaway Congestion
When packets are dropped, requiring retransmission, the congestion will inevitably increase. In some
instances, this may increase the traffic exponentially; this is often called runaway congestion. In relatively
unsophisticated protocols, such as Spanning-Tree Protocol, it is almost unavoidable, although others may
have methods of tracking the delays in the network and throttling back on transmission. Both TCP and
AppleTalk's DDP use flow control to prevent runaway congestion.
1.5.2 Symptoms of Network Congestion
The symptoms of congestion are intermittent. However, some of these symptoms can also be due to other
underlying problems within the network. Furthermore, the symptoms of network congestion are difficult to
troubleshoot because some protocols are more sensitive than others and will time out after very short delays
are experienced. The three symptoms of network congestion are: application time outs; clients cannot
connect to network resources; and network death results.
1.5.2.1 Application Time Outs
The session layer of the OSI model (Layer 5) is responsible for maintaining the communication flow
between the two end devices. This includes assigning resources to incoming requests to connect to an
application. To allocate resources adequately, idle timers disconnect sessions after a set time, releasing those
resources for other requests. Although the OSI model assigns these duties to the session layer, many
protocol stacks, such as TCP/IP, include the upper layers of the stack in the application.
1.5.2.2 Clients Cannot Connect to Network Resources
In a client/server environment, the available resources are communicated throughout the network. The
dynamic nature of the resource tables gives an up-to-date and accurate picture of the network. NetWare,
AppleTalk, Vines, and Windows NT all work on this principle. If these tables are inaccurate as a result of
the loss of packets in your network, errors will be introduced because decisions were made with incorrect
information. Some network systems are moving more toward a peer-to-peer system in which the end user
requests a service identified not by the network, but by the administrator.
1.5.2.3 Network Death Results
The most common problems arising from network congestion are intermittent connectivity and excessive
delays: users are disconnected from applications, print jobs fail, and errors result when trying to write files to
remote servers. If the response of the applications is to retransmit, congestion could reach a point of no
recovery. Likewise, if routing or spanning-tree loops are introduced as a result of packet loss, the excessive
looping traffic could bring your network down.
1.6 Designing Scalable Networks
It is important to know how to reduce network congestion when it occurs, but it is more important that you
build a network that is scalable and can accommodate future growth. When used properly in network design,
a hierarchical model makes networks more predictable. It helps define the levels of the hierarchy at which
certain functions should be performed. The hierarchy requires that you use tools like access lists at certain
levels in hierarchical networks and avoid them at others. In short, a hierarchical model helps us to
summarize a complex collection of details into an understandable model. Then, as specific configurations
are needed, the model dictates the appropriate manner in which they are to be applied.

Switching technologies are crucial to the new network design. To understand switching

FIGURE 1.1: The Open Systems Interconnection (OSI) Model
technologies and how routers and switches work together, you must understand the Open Systems
Interconnection (OSI) model.
1.6.1 Open Systems Interconnection Model
The OSI model has seven layers (see Figure 1.1), each of which specifies functions that allow data to be
transmitted from one host to another on an internetwork. The OSI model is the cornerstone that enables
application developers to create networked applications that run on an internetwork. What is important to
network engineers and technicians is the encapsulation of data as it is transmitted on a network.
1.6.1.1 Data Encapsulation
Data encapsulation is the process by which the information in a protocol is wrapped in the data section of
another protocol. In the OSI reference model, each layer encapsulates the layer immediately above it as the
data flows down the protocol stack. The logical communication that happens at each layer of the OSI
reference model does not involve many physical connections because the information each protocol needs to
send is encapsulated in the layer of protocol information beneath it. This encapsulation produces a set of
data called a packet.
Each layer communicates only with its peer layer on the receiving host; the peers exchange Protocol Data
Units (PDUs). The PDUs are attached to the data at each layer as it traverses down the model, and each PDU
is read only by its peer layer on the receiving side.
TABLE 1.2: OSI Encapsulation

OSI Layer     Name of Protocol Data Units (PDUs)     Device to Process PDUs
Transport     TCP Segment                            TCP Port
Network       Packet                                 Router
Data Link     Frame                                  Bridge/switch
Starting at the Application layer, data is converted for transmission on the network, and then encapsulated in
Presentation layer information. The Presentation layer receives this information, and hands the data to the
Session layer, which is responsible for synchronizing the session with the destination host. The Session layer
then passes this data to the Transport layer, which transports the data from the source host to the destination
host. However, before this happens, the Network layer adds routing information to the packet. It then passes
the packet on to the Data Link layer for framing and for connection to the Physical layer. The Physical layer
sends the data as bits (1s and 0s) to the destination host across fiber or copper wiring. When the destination
host receives the bits, the data passes back up through the model, one layer at a time. The data is deencapsulated at each of the OSI model's peer layers.
The Network layer of the OSI model defines a logical network address. Hosts and routers use these
addresses to send information from host to host within an internetwork. Every network interface must have a
logical address, typically an IP address.
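The layering described above can be sketched in a few lines of code: each layer wraps the PDU from the
layer above inside its own header on the way down, and strips it again on the way up. This is only an
illustration of the wrapping order; the header strings are hypothetical stand-ins, not real protocol headers.

```python
# Minimal sketch of OSI-style encapsulation: each layer wraps the PDU
# from the layer above inside its own header. The header strings are
# illustrative placeholders, not a real protocol implementation.

def encapsulate(data: bytes) -> bytes:
    segment = b"TCP-HDR|" + data             # Transport layer: segment
    packet = b"IP-HDR|" + segment            # Network layer: packet
    frame = b"ETH-HDR|" + packet + b"|FCS"   # Data Link layer: frame
    return frame                             # Physical layer sends the bits

def decapsulate(frame: bytes) -> bytes:
    # The receiving host peels the headers off in reverse order.
    packet = frame.removeprefix(b"ETH-HDR|").removesuffix(b"|FCS")
    segment = packet.removeprefix(b"IP-HDR|")
    return segment.removeprefix(b"TCP-HDR|")

frame = encapsulate(b"hello")
print(frame)               # b'ETH-HDR|IP-HDR|TCP-HDR|hello|FCS'
print(decapsulate(frame))  # b'hello'
```

Note how the innermost data is untouched: each layer reads and removes only its own header, which is the
"peer communicates only with peer" behavior described above.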
1.6.1.2 Layer 2 Switching
Layer 2 (Data Link) switching is hardware based, which means it uses the Media Access Control (MAC)
address from the host's network interface cards (NICs) to filter the network. Switches use Application-Specific
Integrated Circuits (ASICs) to build and maintain filter tables. Layer 2 switching provides
hardware-based bridging; wire speed; high speed; low latency; and low cost. It is efficient because there is
no modification to the data packet, only to the frame encapsulation of the packet, and only when the data
packet is passing through dissimilar media, such as from Ethernet to FDDI.
Layer 2 switching has helped develop new components in the network infrastructure. These are:
•  Server farms - servers are no longer distributed to physical locations because virtual LANs can be used
   to create broadcast domains in a switched internetwork. This means that all servers can be placed in a
   central location, yet a certain server can still be part of a workgroup in a remote branch.
•  Intranets, which allow organization-wide client/server communications based on Web technology.
However, these new components allow more data to flow off of local subnets and onto a routed network,
where a router's performance can become the bottleneck.
Layer 2 switches have the same limitations as bridge networks. They cannot break up broadcast domains,
which can cause performance issues and limit the size of the network. Thus, broadcasts and multicasts,
along with the slow convergence of spanning tree, can cause major problems as the network grows. Table
1.3 briefly summarizes the differences between Layer 2 switches and bridges.
TABLE 1.3: Differences between Switches and Bridges

Operation / Occurrences     Switches             Bridges
Ports                       Numerous             Maximum of 16
Filters                     Hardware based       Software based
Spanning Tree instances     Many occurrences     One occurrence
Because of these problems, layer 2 switches cannot completely replace routers in the internetwork. They can,
however, be used for workgroup connectivity and network segmentation. When used in this way, layer 2
switches allow you to create a flatter network design with more network segments than traditional 10BaseT
shared networks.
Address learning occurs when Layer 2 switches and bridges learn the hardware addresses of all devices on
an internetwork and enter them into a MAC database. A switch is in essence a multiport transparent bridge.
Frame forwarding is based on the MAC address that each frame carries: a switch forwards a frame when it
knows the destination device's location.
The MAC filtering table is empty when a switch is first powered on. Once a frame is received from a device,
the switch notes the interface on which the device is located and inserts the source address into the MAC
filter table. Since the destination device's location is unknown at this stage, the frame is flooded out all other
ports.
When a device replies and returns a frame, the switch gets that frame’s source address and inserts the MAC
address in the MAC database. This source address is connected with the interface on which the frame was
initially received. At this point, the switch has two MAC addresses in the MAC filtering table and the
devices can create a point-to-point connection. Frames are transmitted just between the two devices.
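The address-learning and forwarding behavior described above can be sketched as a simple table keyed by
MAC address. This is only a software illustration of the logic; real switches implement it in ASIC hardware
using CAM tables, and the class and port numbering here are hypothetical.

```python
# Minimal software sketch of Layer 2 address learning and forwarding.
# Real switches do this in hardware (CAM tables); this only models the
# learn / forward / flood logic described above.

class Layer2Switch:
    def __init__(self, num_ports: int):
        self.ports = list(range(num_ports))
        self.mac_table: dict[str, int] = {}   # MAC address -> port

    def receive(self, src_mac: str, dst_mac: str, in_port: int) -> list[int]:
        """Return the list of ports the frame is forwarded out of."""
        # Address learning: remember which port the source lives on.
        self.mac_table[src_mac] = in_port
        # Forwarding/filtering decision: known destination -> one port;
        # unknown destination -> flood out every port except the ingress.
        if dst_mac in self.mac_table:
            out = self.mac_table[dst_mac]
            return [] if out == in_port else [out]
        return [p for p in self.ports if p != in_port]

sw = Layer2Switch(num_ports=4)
print(sw.receive("AA", "BB", in_port=1))  # BB unknown: flood -> [0, 2, 3]
print(sw.receive("BB", "AA", in_port=2))  # AA learned: forward -> [1]
```

After the second frame, both addresses are in the table and traffic between the two devices flows only
between their two ports, which is the point-to-point behavior described above.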
Forwarding and Filtering Decisions is the procedure that a switch uses to establish which ports to forward
a frame out of. In addition, the Layer 2 switch uses the MAC filter table to filter received frames. When a
switch port receives a frame, it places the frame into one of its ingress queues. The switch then has to decide
on the forwarding policies as well as find the egress switch port. These decision processes are outlined below.
•  L2 Forwarding Table: The destination hardware address is utilized as an input key into the Content
   Addressable Memory (CAM). If the address is listed there, the egress switch port and its fitting VLAN ID
   are obtained from the address table and the frame is transmitted out on the correct exit interface.
•  Security Access Control Lists (ACLs): The Ternary Content Addressable Memory (TCAM) holds
   ACLs that can be used to single out frames. Frames are identified on their MAC addresses, IP addresses,
   Layer 4 port numbers and, when the frame is not an IP frame, protocol types.
•  QoS ACLs: These ACLs can be utilized to categorize received frames in relation to quality of service
   (QoS) parameters. In this manner, the extent of traffic flows can be controlled and QoS parameters can
   be marked in outbound frames.
Another function that Layer 2 switching is responsible for is Loop Avoidance. Network loops take place
when there are multiple links between switches that were established for redundancy. Although redundancy
can help to prevent network failures, redundant links can cause severe problems. These are noted below.
•  Broadcast Storms occur when switches continuously flood broadcasts all through the network. Loop
   avoidance helps to prevent this situation.
•  Multiple Frame Copies occur when copies of a frame turn up from different links concurrently, so that
   the switch cannot determine the location of the device. Thrashing the MAC table happens when a switch
   cannot forward a frame because it is continuously updating the MAC table.

1.6.1.3 Layer 3 Switching
Routers and layer 3 switches are similar in concept but not design. Like bridges, routers break up collision
domains, but they also break up broadcast/multicast domains. The benefits of routing include:
•  Break up of broadcast domains;
•  Multicast control;
•  Optimal path determination;
•  Traffic management;
•  Logical (layer 3) addressing; and
•  Security.
Routers provide optimal path determination because the router examines every packet that enters an
interface, and they improve network segmentation by forwarding data packets only to a known destination
network. If a router does not know about a remote network to which a packet is destined, it will drop the
packet. Because of this packet examination, traffic management is obtained. Security can be obtained by a
router reading the packet header information and applying filters defined by the network administrator.
The difference between a layer 3 (Network) switch and a router is the way the administrator creates the
physical implementation. In addition, traditional routers use microprocessors to make forwarding decisions,
whereas the layer 3 switch performs only hardware-based packet switching. Layer 3 switches can be placed
anywhere in the network because they handle high-performance LAN traffic and can cost-effectively replace
routers. Layer 3 switching is all hardware-based packet forwarding, and all packet forwarding is handled by
hardware ASICs. Furthermore, Layer 3 switches provide the same functionality as the traditional router.
These functions are:
•  Determine paths based on logical addressing;
•  Run layer 3 checksums on the header only;
•  Use Time to Live (TTL);
•  Process and respond to any option information;
•  Can update Simple Network Management Protocol (SNMP) managers with Management Information
   Base (MIB) information; and
•  Provide security.
The benefits of Layer 3 switching include:
•  Hardware-based packet forwarding;
•  High-performance packet switching;
•  High-speed scalability;
•  Low latency;
•  Lower per-port cost;
•  Flow accounting;
•  Security; and
•  Quality of service (QoS).
1.6.1.4 Layer 4 Switching
Layer 4 (Transport) switching is considered a hardware-based layer 3 switching technology. It provides
additional routing above layer 3 by using the port numbers found in the Transport layer header to make
routing decisions. These port numbers are found in Request for Comments (RFC) 1700 and reference the
upper-layer protocol, program, or application.
The largest benefit of layer 4 switching is that the network administrator can configure a layer 4 switch to
prioritize data traffic by application, which means a QoS can be defined for each user. However, because
users can be part of many groups and run many applications, the layer 4 switches must be able to provide a
huge filter table or response time would suffer. This filter table must be much larger than any layer 2 or 3
switch. A layer 2 switch might have a filter table only as large as the number of users connected to the
network while a layer 4 switch might have five or six entries for each and every device connected to the
network. If the layer 4 switch does not have a filter table that includes all the information, the switch will not
be able to produce wire-speed results.
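The per-application prioritization described above amounts to classifying traffic by its Layer 4 flow. A
rough sketch of the idea follows; the port-to-priority mapping is a hypothetical illustration, not a vendor
implementation or a recommended policy.

```python
# Rough sketch of Layer 4 classification: a flow is identified by the
# 5-tuple (src IP, dst IP, protocol, src port, dst port), and the
# well-known destination port maps the flow to an application priority.
# The priority assignments below are illustrative only.

APP_PRIORITY = {
    23: "high",    # Telnet (interactive, latency-sensitive)
    80: "medium",  # HTTP
    21: "low",     # FTP control (bulk transfer)
}

def classify(src_ip, dst_ip, protocol, src_port, dst_port):
    flow = (src_ip, dst_ip, protocol, src_port, dst_port)
    priority = APP_PRIORITY.get(dst_port, "best-effort")
    return flow, priority

_, prio = classify("10.1.1.5", "10.2.2.9", "tcp", 40112, 80)
print(prio)  # medium
```

Because every active flow gets its own entry, a layer 4 filter table grows with the number of flows per host
rather than the number of hosts, which is the scaling concern noted above.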
1.6.1.5 Multi-Layer Switching (MLS)
Multi-layer switching combines layer 2 switching, layer 3 switching, and layer 4 switching technologies and
provides high-speed scalability with low latency. It accomplishes this by using huge filter tables based on
the criteria designed by the network administrator. Multi-layer switching can move traffic at wire speed
while also providing layer 3 routing. This can remove the bottleneck from the network routers. Multi-layer
switching can make routing/switching decisions based on:
•  The MAC source/destination address in a Data Link frame;
•  The IP source/destination address in the Network layer header;
•  The Protocol field in the Network layer header; and
•  The Port source/destination numbers in the Transport layer header.
Two types of MLS are supported by Catalyst switches:
•  Route caching requires a switch engine (SE) and a route processor (RP). The RP processes a traffic
   flow's first packet in order to establish the destination, while the SE inserts an entry in its MLS cache to
   store the relevant destination. The SE uses these entries when forwarding the subsequent packets in the
   same traffic flow. Route caching is also known as demand-based switching, NetFlow LAN switching and
   flow-based switching.
•  With Topology-based switching, also known as Cisco Express Forwarding (CEF), Layer 3 routing
   data creates and preloads a database of the whole network topology. This database is checked when
   forwarding packets.
Packets entering the switch port are placed in the ingress queue, and are then extracted and examined for
their Layer 2 and Layer 3 destinations. A decision process is performed to determine the destination for the
packet and the forwarding policies:
•  L2 Forwarding Table: The destination MAC address is utilized as an input key to the CAM table.
   When the frame contains a Layer 3 packet, the only action taken is to process the packet at that layer.
•  L3 Forwarding Table: The destination IP address is utilized as an input key and checked against the
   FIB table. The FIB table also holds the egress switch port with its fitting VLAN ID, and each entry's
   Layer 2 MAC address.
•  Security Access Control Lists (ACLs): The Ternary Content Addressable Memory (TCAM) holds
   ACLs that can be used to single out frames. A decision on whether to forward a packet is made as a
   single table lookup.
•  QoS ACLs: These ACLs can be utilized to categorize received frames in relation to quality of service
   (QoS) parameters. Packet categorization and marking can be done as a single table lookup in the QoS
   TCAM.
The following are excluded from MLS because they cannot be directly forwarded by CEF:
•  Cisco Discovery Protocol packets
•  ARP requests and replies
•  IP packets that need a reply from a router
•  Routing protocol updates
•  IPX routing protocol packets
•  Packets that are not IP or IPX protocol packets
•  IP broadcasts to be passed on as unicasts
•  Packets that trigger Network Address Translation (NAT)
•  Packets that require encryption
1.6.2 The Cisco Hierarchical Model
When used properly in network design, a hierarchical model makes networks more predictable. It helps to
define and expect at which levels of the hierarchy we should perform certain functions. The hierarchy
requires that you use tools like access lists at certain levels in hierarchical networks and must avoid them at
others.
The Cisco hierarchical model is used to design a scalable, reliable, cost-effective hierarchical internetwork.
Cisco defines three layers of hierarchy: the core layer; the distribution layer; and the access layer. These
three layers are logical and not necessarily physical; they are thus not necessarily represented by three
separate devices. Each layer has specific responsibilities, allowing only certain traffic to be forwarded
through to the upper levels. A filtering operation restricts unnecessary traffic from traversing the network.
Thus, the network is more adaptable, scalable, and reliable.
1.6.2.1 Core Layer
At the top of the hierarchy is the core layer. It is literally the core of the network and is responsible for
switching traffic as quickly as possible. The traffic transported across the core is common to a majority of
users. However, user data is processed at the distribution layer, and the distribution layer forwards the
requests to the core, if needed. If there is a failure in the core, every user can be affected; therefore, fault
tolerance at this layer is critical.
As the core transports large amounts of traffic, you should design the core for high reliability and speed.
You should thus consider using data-link technologies that facilitate both speed and redundancy, such as
FDDI, FastEthernet (with redundant links), or even ATM. You should use routing protocols with low
convergence times. You should avoid using access lists, routing between virtual LANs (VLANs), and packet
filtering. You should also not use the core layer to support workgroup access. If performance becomes an
issue in the core, upgrade rather than expand the core layer.
The following Cisco switches are recommended for use in the core:
•  The 5000/5500 Series. The 5000 is a great distribution layer switch, and the 5500 is a great core layer
   switch. The Catalyst 5000 series of switches includes the 5000, 5002, 5500, 5505, and 5509. All of the
   5000 series switches use the same cards and modules, which makes them cost effective and provides
   protection for your investment.
•  The Catalyst 6500 Series, which is designed to address the need for gigabit port density, high
   availability, and multi-layer switching for the core layer backbone and server-aggregation environments.
   These switches use the Cisco IOS to utilize the high speeds of the ASICs, which allows the delivery of
   wire-speed traffic management services end to end.
•  The Catalyst 8500, which provides high-performance switching. It uses Application-Specific Integrated
   Circuits (ASICs) to provide multiple-layer protocol support including Internet Protocol (IP), IP multicast,
   bridging, Asynchronous Transfer Mode (ATM) switching, and Cisco Assure policy-enabled Quality of
   Service (QoS). All of these switches provide wire-speed multicast forwarding, routing, and Protocol
   Independent Multicast (PIM) for scalable multicast routing. These switches are perfect for providing the
   high bandwidth and performance needed for a core router. The 6500 and 8500 switches can aggregate
   multiprotocol traffic from multiple remote wiring closets and workgroup switches.
1.6.2.2 Distribution Layer
The distribution layer is the communication point between the access layer and the core. The primary
function of the distribution layer is to provide routing, filtering, and WAN access and to determine how
packets can access the core, if needed. The distribution layer must determine the fastest way that user
requests are serviced. After the distribution layer determines the best path, it forwards the request to the core
layer. The core layer is then responsible for quickly transporting the request to the correct service. You can
implement policies for the network at the distribution layer. You can exercise considerable flexibility in
defining network operation at this level.
Generally, you should:
•  Implement tools such as access lists, packet filtering, and queuing;
•  Implement security and network policies, including address translation and firewalls;
•  Redistribute between routing protocols, including static routing;
•  Route between VLANs and other workgroup support functions; and
•  Define broadcast and multicast domains.
The distribution layer switches must also be able to participate in multi-layer switching (MLS) and be able
to handle a route processor. The Cisco switches that provide these functions are:
•  The 2926G, which is a robust switch that uses an external route processor such as a 4000 or 7000 series
   router.
•  The 5000/5500 Series, which is the most effective distribution layer switch. It can support a large
   number of connections as well as an internal route processor module called the Route Switch Module
   (RSM), and can switch up to 176,000 packets per second.
•  The Catalyst 6000, which can provide up to 384 10/100 Ethernet connections, 192 100FX FastEthernet
   connections, and 130 Gigabit Ethernet ports.
1.6.2.3 Access Layer
The access layer controls user and workgroup access to internetwork resources. The network resources that
most users need will be available locally. Any traffic for remote services is handled by the distribution layer.
At this layer, access control and policies from the distribution layer should be continued, and network
segmentation should be implemented. Technologies such as dial-on-demand routing (DDR) and Ethernet
switching are frequently used in the access layer.
The switches deployed at this layer must be able to handle connecting individual desktop devices to the
internetwork. The Cisco solutions that meet these requirements include:
•  The 1900/2820 Series, which provides switched 10 Mbps to the desktop or to 10BaseT hubs in small to
   medium campus networks.
•  The 2900 Series, which provides 10/100 Mbps switched access for up to 50 users and gigabit speeds for
   servers and uplinks.
•  The 4000 Series, which provides a 10/100/1000 Mbps advanced high-performance enterprise solution
   for up to 96 users and up to 36 Gigabit Ethernet ports for servers.
•  The 5000/5500 Series, which provides 10/100/1000 Mbps Ethernet switched access for more than 250
   users.
1.6.3 Modular Network Design
Cisco promotes a campus network design based on a modular approach. In this design approach, each layer
of the hierarchical network model can be broken down into basic functional modules or blocks. These
modules can then be sized appropriately and connected together, while allowing for future scalability and
expansion. Campus networks based on this building-block approach can be divided into basic elements.
These are:
•  Switch blocks, which are access layer switches connected to the distribution layer devices; and
•  Core blocks, which are multiple switch blocks connected together, possibly with 5500, 6500, or 8500
   switches.
Within these fundamental campus elements, there are other contributing blocks that can be added to the
network. These are:
•  Server Farm blocks, which are groups of network servers on a single subnet;
•  Enterprise Edge blocks, which are centralized services to which the enterprise network is responsible
   for providing complete access, together with their related access and distribution switches;
•  Network Management blocks, which are a set of network management resources with their
   accompanying access and distribution switches; and
•  Service Provider Edge blocks, which are multiple connections to an ISP or multiple ISPs.
1.6.3.1 The Switch Block
The switch block is a combination of layer 2 switches and layer 3 routers. The layer 2 switches connect
users in the wiring closet into the access layer and provide 10/100 Mbps dedicated connections. 1900/2820
and 2900 Catalyst switches can be used in the switch block. From here, the access layer switches will
connect into one or more distribution layer switches, which will be the central connection point for all
switches coming from the wiring closets. The distribution layer device is either a switch with an external
router or a multi-layer switch. The distribution layer switch will then provide layer 3 routing functions, if
needed.
The distribution layer router will prevent broadcast storms that could happen on an access layer switch from
propagating throughout the entire internetwork. Thus, the broadcast storm would be isolated to only the
access layer switch in which the problem exists.
Switch block sizing at the access layer is based on the quantity of users or the port density. Distribution layer
sizing is based on the quantity of access layer switches that feed into a distribution device. When sizing the
distribution layer, the following should be considered:
•  Traffic types and behaviors;
•  Quantity of users connected to access layer switches;
•  Layer 3 switching abilities at the distribution layer;
•  The size of Spanning Tree domains; and
•  The physical confines of VLANs.
Designing a switch block should be based essentially on traffic types and behaviors, and on the quantity and
extent of workgroups. Because a switch block can be too large or too small, the ability to break up or
downsize a switch block should be catered for. A switch block is too large when multicast or broadcast
traffic reduces the speed of the switch block switches, or when the distribution layer multilayer switches
become traffic bottlenecks.
Access switches can have one or more redundant links to distribution layer devices. This enables traffic to
be load balanced across the redundant links using redundant gateways.
1.6.3.2 The Core Block
If you have two or more switch blocks, you need a core block which will be responsible for transferring data
to and from the switch blocks as quickly as possible. You can build a fast core with a frame, packet, or cell
(ATM) network technology. Typically, two or more subnets are configured on the core network for
redundancy and load balancing.
Switches can trunk on a certain port or ports. This means that a port on a switch can be a member of more
than one VLAN at the same time. However, the distribution layer will handle the routing and trunking for
VLANs, and the core is only a pass-through once the routing has been performed. Because of this, core links
will not carry multiple subnets per link. A Cisco 6500 or 8500 switch is recommended at the core. Even
though one switch might be sufficient to handle the traffic, Cisco recommends two switches for redundancy
and load balancing purposes.
1.6.3.2.1 The Collapsed Core
A collapsed core is defined as one switch device performing both core and distribution layer functions. The
collapsed core is typically found in smaller campus networks where a separate core layer is not warranted.
Although the distribution and core layer functions are performed in the same device, keeping these functions
distinct and properly designed remains important. In the collapsed core design, each access layer switch
has a redundant link to each distribution/core layer switch, and each access layer switch may support more
than one VLAN. The distribution layer routing is the termination for all ports. In a collapsed core network,
Spanning-Tree Protocol (STP) blocks the redundant links to prevent loops. Hot Standby Routing Protocol
(HSRP) can provide redundancy in the distribution layer routing. It can keep core connectivity if the primary
routing process fails.
1.6.3.2.2 Dual Core
A dual core connects two or more switch blocks in a redundant fashion. Each connection would be a
separate subnet. Redundant links connect the distribution layer portion of each switch block to each of the
dual core switches. In the dual core, each distribution switch has two equal-cost paths to the core, providing
twice the available bandwidth. The distribution layer routers would have links to each subnet in the routing
tables, provided by the layer 3 routing protocols. If a failure on a core switch takes place, convergence time
will not be an issue. HSRP can be used to provide quick cutover between the cores.
1.6.3.2.3 Core Size
The dual core is made up of redundant switches, and is bounded and isolated by Layer 3 devices. Routing
protocols determine paths and maintain the operation of the core. You must pay attention to the overall
design of the routers and routing protocols in the network. As routing protocols propagate updates
throughout the network, network topologies might be undergoing change. The size of the network, i.e., the
number of routers, then affects routing protocol performance, as updates are exchanged and network
convergence takes place. Large campus networks can have many switch blocks connected into the core
block. Layer 2 devices are used in the core with usually only a single VLAN or subnet across the core.
Therefore, all route processors connect into a single broadcast domain at the core.
Each route processor must communicate with and keep information about each of its directly connected
peers. Thus, most routing protocols have practical limits on the number of peer routers that can be supported.
Because there are two equal-cost paths from each distribution switch into the core, each router forms two
peer relationships with every other router. Therefore, the actual maximum number of switch blocks that can
be supported is half the number of distribution layer routers. In the case of a dual core design, if a routing
protocol supports two equal-cost paths, those equal-cost paths must lead to isolated VLANs or subnets. Thus,
two equal-cost paths are used in a dual core design with two Layer 2 switches. Likewise, a routing protocol
that supports six equal-cost paths requires that the six distribution switch links be connected to exactly six
Layer 2 devices in the core. This gives six times the redundancy and six times the available bandwidth into
the core.
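The peering arithmetic above can be illustrated with a small calculation. The peer limit used here is an
assumed figure chosen for illustration; actual practical limits depend on the routing protocol and platform.

```python
# Illustration of the core-sizing arithmetic described above. With a
# Layer 2 core on a single subnet, every distribution router peers with
# every other one, so the routing protocol's practical peer limit caps
# the number of distribution routers; each switch block contributes two
# redundant distribution routers, halving the number of blocks.
# The peer limit of 30 below is an assumed figure, not a protocol spec.

def max_switch_blocks(peer_limit: int, dist_routers_per_block: int = 2) -> int:
    return peer_limit // dist_routers_per_block

def peer_relationships(dist_routers: int) -> int:
    # Full mesh of peerings among n routers on one broadcast domain.
    return dist_routers * (dist_routers - 1) // 2

print(max_switch_blocks(peer_limit=30))   # 15 switch blocks
print(peer_relationships(30))             # 435 peerings to maintain
```

The quadratic growth of the full mesh is the reason a Layer 3 core (Section 1.6.3.2.5) scales better: each
distribution device then peers only with the core switches it connects to, not with every other distribution
device.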
1.6.3.2.4 Core Scalability
As the number of switch blocks increases, the core block must also be capable of scaling without needing to
be redesigned. Traditionally, hierarchical network designs have used Layer 2 switches at the access layer,
Layer 3 devices at the distribution layer, and Layer 2 switches at the core. This design, called a Layer 2
core, has been very cost effective and has provided high-performance connectivity between switch blocks in
the campus. As the network grows, more switch blocks must be added to the network, which in turn requires
more distribution switches with redundant paths into the core. The core must then be scaled to support the
redundancy and the additional campus traffic load.
Providing redundant paths from the distribution switches into the core block allows the Layer 3 distribution
switches to identify several equal-cost paths across the core. If the number of core switches must be
increased for scalability, the number of equal-cost paths can become too much for the routing protocols to
handle. Because the core block is formed with Layer 2 switches, the Spanning-Tree Protocol (STP) is used
to prevent bridging loops. If the core is running STP, then it can compromise the high-performance
connectivity between switch blocks. The best design on the core is to have two switches without STP
running. You can do this only by having a core without links between the core switches.
1.6.3.2.5 Layer 3 Core
Layer 3 switching can also be used in the core to fully scale the core block for large campus networks. This
approach overcomes the problems of slow convergence, load balancing limitations, and router peering
limitations. In a Layer 3 core, the core switches can have direct links to each other. Because of Layer 3
functionality, the direct links do not impose any bridging loops.
With a Layer 3 core, the path determination intelligence occurs in both the distribution and core layers,
allowing the number of core devices to be increased for scalability. Redundant paths also can be used to
interconnect the core switches without concern for Layer 2 bridging loops, eliminating the need for STP. If
you have only Layer 2 devices at the core layer, the STP will be used to stop network loops if there is more
than one connection between core devices. STP has a convergence time of up to 50 seconds; in a large
network, even a single link failure can therefore cause an enormous number of problems. However,
STP would not be implemented in the core if the core has Layer 3 devices. Instead, routing protocols, which
have a much faster convergence time than STP, could be implemented. In addition, the routing protocols can
load balance with multiple equal-cost links. STP is discussed in more detail in Section 12.2.
Router peering problems are also overcome as the number of routers connected to individual subnets is
reduced. Distribution devices are no longer considered peers with all other distribution devices. Instead, a
distribution device peers only with a core switch on each link into the core. This advantage becomes
especially important in very large campus networks involving more than 100 switch blocks. However, Layer
3 devices are more expensive than Layer 2 devices and must offer switching latencies comparable to those
of their Layer 2 counterparts. Using a Layer 3 core also adds routing hops to cross-campus traffic.
1.6.3.3 Additional Building Blocks
Additional resources can be assembled into building blocks, and can be located and arranged in the same
manner as common switch block modules.
• Server Farm Blocks: Enterprise servers comprising company e-mail, intranet services, mainframe
systems and Enterprise Resource Planning (ERP) applications normally belong to a server farm. These
enterprise resources are accessed by most of the connected users. The whole server farm can be
structured into a switch block with its own layer of access switches. These access switches are then
uplinked to dual distribution switches that are connected into the core layer by means of redundant
high-speed links. Dual-homing the servers occurs when each server has dual network connections, one to
each distribution switch.
• Enterprise Edge Blocks: Campus networks connect to service providers at the edge of the campus
network to gain access to those providers' external resources or services. Because these resources are
used by the whole network, they can be grouped into a single switch block that is connected to the core
network. Such resources can comprise Internet access, WAN access, e-commerce, remote access, and
VPNs.
• Network Management Blocks: Network management resources and policy management applications,
such as system logging servers and authentication, authorization, and accounting (AAA) servers, can be
grouped into a single network management switch block. These resources monitor application servers,
network devices, and user connectivity and activity. This switch block has a distribution layer that links
into the core switches; redundant links and redundant switches are usually used to ensure that the
resources are always available.
• Service Provider Edge Blocks: A service provider has its own hierarchical network design and can itself
be viewed as an enterprise or campus network. A campus network contains an edge block and connects
to each service provider's network edge from there.
1.7 Alleviating Congestion
1.7.1 Access Lists
You can alleviate congestion by controlling network traffic. Cisco routers have features, such as access lists,
that you can use to control network traffic. Access lists are crucial to the programming of a Cisco router and
allow for the control of traffic. Given that the router operates at Layer 3, the control that is offered is
extensive. The router can also act at higher layers of the OSI model. This is useful when identifying
particular traffic and protocol types for prioritization across slower WAN links.
Access lists can be used to either restrict or police traffic entering or leaving a specified interface. The
access lists used for IP enable you to apply great subtlety in the router's configuration. Access lists are
linked lists with top-down logic, ending in an implicit deny any statement that denies everything not
explicitly permitted. Top-down logic means that the process reads from the top of the access list and stops
as soon as it meets the first entry that matches the packet's characteristics. Therefore, careful attention
must be given to their creation; writing down the purpose of the proposed access list before configuring it
also proves helpful. Access lists block traffic traversing the router but do not block traffic generated by
the router itself.
The syntax for a standard access-list command and an ip access-group command are:
access-list access-list_number { permit | deny }
{ source [ source_wildcard | any ]}
and
ip access-group access-list_number { in | out }
The access-list_number must be 1-99 to create a standard access list. Standard access lists are
implemented at Layer 3. In general, access lists can identify both source and destination addresses as
criteria; standard IP access lists, however, use the source address only. The placement of the access list is
crucial because it may determine the effectiveness of the control imposed. Because the forwarding
decision can be made on the source address only, a standard access list is placed as close to the
destination as possible to allow connectivity to
intermediary devices. You can place an access list on either an inbound or an outbound interface. If this
option is not configured, the default is for the access list to be placed on the outbound interface. The access
list will examine traffic flowing only in the direction stated. In this way, traffic subject to an inbound access
list will be examined before it is sent to the routing process. To ensure that all paths to the remote location
have been covered, access lists should be implemented with reference to the network topology map.
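For example, a standard access list that permits only traffic sourced from the 172.16.1.0/24 network could
be applied inbound on an Ethernet interface as follows (the addresses and interface used here are
illustrative only):

RouterA(config)#access-list 10 permit 172.16.1.0 0.0.0.255
RouterA(config)#interface ethernet 0
RouterA(config-if)#ip access-group 10 in

Because of the implicit deny any at the end of the list, traffic from all other sources is dropped.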
1.7.2 Extended Access Lists
Although the same rules apply for all access lists, extended access lists allow for a far greater level of
control because decisions are made at higher levels of the OSI model. The syntax of an extended
access-list command is:
access-list access-list_number { deny | permit } protocol
source source_wildcard destination destination_wildcard
ip access-group access-list_number { in | out }
The access-list-number must be 100-199 to create an extended access list.
TABLE 1.4: Parameters for the Extended access-list Command

Command                               Description
access-list_number                    Specifies the number of an access list.
{ deny | permit }                     Denies or permits access if the conditions are matched.
source source_wildcard                Gives the source address and the wildcard mask.
destination destination_wildcard      Gives the destination address and the wildcard mask.
[ precedence precedence ]             Filters packets by precedence level, as specified by a
                                      number from 0 to 7, or by name.
[ tos tos ]                           Filters packets by type of service level, as specified by
                                      a number from 0 to 15, or by name.
[ established ] (For TCP only)        Indicates an established connection. A match occurs if
                                      the TCP segment has the ACK or RST bits set; the initial
                                      TCP segment that forms a connection does not match.
[ log ]                               Generates access list logging messages, including
                                      violations.
Top-down logic is employed in extended access lists. It is therefore important to consider the sequence of
conditions within the list.
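As an illustration of top-down ordering, the following extended access list blocks Telnet traffic to the
172.16.2.0/24 network while permitting all other IP traffic; the addresses and interface are hypothetical:

RouterA(config)#access-list 101 deny tcp any 172.16.2.0 0.0.0.255 eq telnet
RouterA(config)#access-list 101 permit ip any any
RouterA(config)#interface serial 0
RouterA(config-if)#ip access-group 101 out

If the permit ip any any line appeared first, the deny line would never be reached.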
You can use show commands to verify the filter configuration for either IP or IPX filters. The two
commands to use are:
show access-lists
and
show ip interface
Although access lists are used mainly to manage traffic, they can be used to solve many other problems. An
access list uses simple logic to decide whether to forward traffic and, as such, is sometimes used as a
security mechanism. However, even complex access lists are easily spoofed and defeated. Starting with IOS
Release 11.3, Cisco has implemented fuller security features that should be used in preference to access
lists alone.
In addition, access lists can be used to control Telnet traffic. However, access lists filter traffic traversing the
router but not the traffic generated by the router. To control Telnet traffic in which the router is the end
station, an access list can be placed on the virtual terminal line (vty). Five terminal sessions are available:
vty 0 through vty 4. Because anticipating which session will be assigned to which terminal is difficult,
control is generally placed uniformly on all virtual terminals. Although this is the default configuration,
some platforms have different limitations on the number of vty interfaces that can be created.
The syntax for the virtual terminal line commands is:
line { vty_number | vty_range }
access-class access-list_number { in | out }
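For example, to restrict Telnet access to the router itself to a single management subnet (the addresses
shown are illustrative only), a standard access list can be applied uniformly to all five vty lines:

RouterA(config)#access-list 12 permit 192.168.10.0 0.0.0.255
RouterA(config)#line vty 0 4
RouterA(config-line)#access-class 12 in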
1.7.3 Distribution Lists
Traffic management is most easily accomplished at Layer 3 of the OSI model. However, limiting traffic at
Layer 3 can also limit connectivity. Therefore, careful design is required. Routing updates convey
information about the available networks.
In most routing protocols, these updates are sent out periodically to ensure that every router's perception of
the network is accurate and current. Access lists applied to routing protocols restrict the information sent out
in the update and are called distribute lists. They work by omitting certain networks based on the criteria in
the access list. The result is that remote routers that are unaware of these networks are not capable of
delivering traffic to them. Distribute lists are also used to prevent routing loops in networks that have
redistribution between routing protocols. When connecting two separate domains, the connection point of the
domains or the entry point to the Internet is an area through which only limited information needs to be sent.
Otherwise, routing tables become unmanageably large and consume large amounts of bandwidth.
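As a sketch of this technique, the following configuration uses a standard access list as a distribute list to
suppress the advertisement of network 10.0.0.0 in outgoing RIP updates (the routing protocol and network
are illustrative only):

RouterA(config)#access-list 20 deny 10.0.0.0 0.255.255.255
RouterA(config)#access-list 20 permit any
RouterA(config)#router rip
RouterA(config-router)#distribute-list 20 out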
1.7.4 Other Solutions to Traffic Control
A common approach is to tune the update timers between routers, trading the currency of the routing
information for bandwidth savings. All routers running the same routing protocol expect to hear these updates with the same
frequency that they send out their own. If any of the parameters defining how the routing protocol works are
changed, these alterations should be applied consistently throughout the network; otherwise, routers will
time out and the routing tables will become unsynchronized.
It may be advantageous to turn off routing updates across WAN links entirely and to define the best path
statically. Alternatively, routing protocols such as EIGRP and OSPF send only incremental updates; these
protocols are more complex to design and implement, although their configuration is straightforward.
Another method of reducing routing updates is snapshot routing, a technology available on Cisco routers
and designed for use across WAN links. Snapshot routing allows the routing tables to be frozen and updated
only at defined times, such as every two days, or whenever the dialup line is raised. For more information
on this topic, refer to the Cisco web page.
1.7.5 Prioritization
Access lists are not used just to determine which packets will be forwarded to a destination. On a slow
network connection where bandwidth is at a premium, access lists can also determine the order in which
traffic is scheduled to leave the interface. Because some packets may time out while queued, it is important
to plan prioritization based on your understanding of the network and to ensure that the traffic most likely
to time out, such as IBM's Systems Network Architecture (SNA) traffic, is handled first.
There are many types of prioritization available. These types of prioritization are referred to as queuing
techniques. They are implemented at the interface level and are applied to the interface queue. These include:
• Weighted Fair Queuing (WFQ), which is replacing First-In, First-Out (FIFO) queuing as the default
mechanism. The queuing process analyzes the traffic patterns on the link, based on the size of the
packets and the nature of the traffic, to distinguish interactive traffic from file transfers. The queue
then transmits traffic based on its conclusions.
• Priority Queuing, which divides the outgoing interface into four virtual queues ranked by importance.
Traffic is queued based on its importance and is sent out of the interface accordingly. This method
ensures that sensitive traffic, such as SNA traffic, on a slow or congested link is processed first.
• Custom Queuing, which divides the outgoing interface into many subqueues. Each queue has a
threshold stating the number of bytes that may be sent before the next queue must be processed. In
this way, it is possible to determine the percentage of bandwidth that each protocol is given.
• Class-Based Weighted Fair Queuing (CBWFQ), which extends the standard WFQ functionality to
provide support for user-defined traffic classes. For CBWFQ, you define traffic classes based on match
criteria, including protocols, access lists, and input interfaces. Packets satisfying the match criteria for a
class constitute the traffic for that class. A queue is reserved for each class, and traffic belonging to a
class is directed to that class's queue.
• Low-Latency Queuing (LLQ), which brings strict priority queuing to CBWFQ. Configured with the
priority command, strict priority queuing gives delay-sensitive data, such as voice, preferential
treatment over other traffic, so that delay-sensitive data is sent before packets in other queues.
1.7.5.1 First In, First Out (FIFO)
FIFO is the most basic queuing strategy: the first-come, first-served approach to data forwarding. In
FIFO, packets are transmitted in the order in which they are received. Until recently, FIFO was the default
queuing strategy for all interfaces on a router. However, should the traffic need to be reordered in any way,
another strategy must be invoked, because FIFO gives no regard to one type of traffic over another; it
simply dispatches data as it receives it. Strictly speaking, this is not really queuing but buffering: packets
are routed to the interface and stored in router memory until transmittal, and the transmission order is
based on the arrival order of the first bit of each packet.
1.7.5.2 Weighted Fair Queuing (WFQ)
Weighted Fair Queuing (WFQ) enables Telnet and other interactive traffic to have priority over FTP and
other large transfers, improving overall throughput: the FTP packets get through with relatively little
delay, and Telnet users see improved response times. In WFQ, traffic is sorted into high-volume and
low-volume communications (sessions). Packets belonging to a session are kept together and are handled
FIFO within that session. The lower volume interactive traffic is given priority and flows first: the
necessary bandwidth is allocated to the interactive traffic, and the high-volume traffic shares the remaining
bandwidth equally. WFQ is the default queuing strategy on interfaces of 2 Mbps or less because at higher
speeds queuing is usually not necessary. In addition, WFQ is on by default for interfaces that support it.
WFQ is not used by default on Link Access Procedure, Balanced (LAPB) for X.25, compressed
Point-to-Point Protocol (PPP), or Synchronous Data Link Control (SDLC) interfaces.
Discrimination of traffic conversations is based on source and destination packet header addresses. Other
factors, such as source and destination MAC addresses, source and destination port or socket numbers, the
data-link connection identifier (DLCI) in Frame Relay deployments, Quality of Service (QoS) values, and
Type of Service (ToS) values, also provide discriminatory criteria to the WFQ process. With WFQ, the
low-volume traffic is given the priority on the outbound interface.
Configuring WFQ involves adjusting the queue limits. To keep some sessions from overwhelming the
circuit, you can configure the maximum number of records that any high-volume session may place into the
queue. The default setting is 64 records, and the supported range is 1-4096. If a session reaches the
queue limit, no further records are queued for that session until the percentage of the entries in the queue for
that conversation drops; all new packets for the over-limit conversation are dropped and lost. TCP
window sizes control the amount of data that can be transmitted between two hosts without an
acknowledgement; a larger window size enables a higher transmission threshold. Sessions using TCP that
suffer a packet drop retransmit automatically as part of the Layer 4 flow control process. As a consequence,
the communicating parties in the TCP conversation are forced to reduce their window sizes. The
fair-queue command is shown below:
RouterA(config-if)#fair-queue [ queue_limit ]
WFQ can be disabled using the no fair-queue command.
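For example, the queue limit could be raised from its default of 64 records on a serial interface as follows
(the interface and value shown are illustrative only):

RouterA(config)#interface serial 0
RouterA(config-if)#fair-queue 96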
1.7.5.3 Priority Queuing (PQ)
When absolute control over the throughput is necessary, priority queuing should be utilized. Priority queuing
gives the network administrator granular control that reduces network delay for high-priority traffic.
Variations of priority queuing have been in use for a number of years in differing vendor implementations.
Cisco's implementation of priority queuing utilizes four queues: high, medium, normal, and low. For traffic
placed in individual queues, the output strategy is FIFO. The traffic defined as high priority receives the
benefit of all available resources on the output interface until the queue is empty. Once the high queue is
empty, the medium queue traffic is dispatched in the same manner until it too is empty. At this stage, the high
queue is again checked for content and emptied if there is any new traffic. If there are no entries in the high
queue after servicing the medium queue, the normal queue is emptied. Once normal traffic has been
dispatched, the high queue is checked again, followed by the medium and normal queues. If all three are
empty, the low queue is serviced. The result is that high-priority traffic always suffers the shortest delay in
awaiting dispatch.
The low-priority traffic must wait until it can be serviced. The traffic can even age out and be purged from
memory if the queue overflows. Once an overflow occurs, all new packets for that queue are dropped until
space is freed up in the queue. Each queue has a fixed length, which is configurable. The defaults are 20
records for high, 40 records for medium, 60 records for normal, and 80 records for low. The lower priority
queues are larger, by default, than the higher priority queues to accommodate the queuing algorithm and the
fact that the lower priority queues might wait longer to be serviced.
The configuration of priority queuing, in the most basic of configurations, entails configuring each protocol
that traverses a particular WAN link to enter a specific queue. In more advanced configurations, standard or
extended access lists can be defined for specific traffic types and applied to a queue configuration. In
priority queuing, the priority-list commands are read in the order of their appearance until a matching
protocol or interface type is found. When a match is found, the packet is assigned to the appropriate queue
and the search ends. Therefore, some planning needs to go into the creation of the list. The configuration of
priority queuing entails defining specific access lists if they are to be used; creating the priority list; applying
the priority list to the interface; and verifying the queuing process.
• If it is necessary to queue traffic based on a specific network address, protocol, or application, access
lists can be put in place to sort the traffic. Standard or extended access lists can be defined to specify the
traffic type or types that should be placed into a specific queue.
• The command syntax for priority queue configuration for a specific protocol or traffic type is:
RouterA(config)#priority-list list_number protocol protocol
{ high | medium | normal | low } queue_keyword keyword_value
The list_number argument can be an arbitrarily selected number from 1-16; however, all lines for a
particular priority list must have the same list_number to function properly. The queue_keyword and
keyword_value parameters are used to associate access lists with the priority list.
• Once the priority list is created, it must be associated with an interface. The priority list is activated on
the interface by the priority-group command.
• Verifying the queuing configuration can be performed by using the show queueing command, which
shows the detail of the priority lists configured on the router and the appropriate details of each list.
Note: The command used to verify the queuing configuration is show queueing and not
show queuing. The latter command is not recognized by the Cisco IOS.
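Putting these steps together, the following sketch places Telnet traffic identified by an extended access list
into the high queue, all other IP traffic into the normal queue, and everything else into the low queue, then
applies the list to a serial interface (the list numbers, queue assignments, and interface are illustrative only):

RouterA(config)#access-list 110 permit tcp any any eq telnet
RouterA(config)#priority-list 1 protocol ip high list 110
RouterA(config)#priority-list 1 protocol ip normal
RouterA(config)#priority-list 1 default low
RouterA(config)#interface serial 0
RouterA(config-if)#priority-group 1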
1.7.5.4 Custom Queuing
Custom queuing enables the sharing of available bandwidth across all types of traffic. This technique
allocates a percentage of bandwidth to each of the various traffic types. The difference between this
approach and priority queuing is that the queues are processed in round-robin sequence. Therefore, it is
possible that high priority traffic would not be serviced quickly enough because although each type of traffic
would get some bandwidth, no traffic would be designated with a higher priority than the rest. Custom
queuing employs 17 queues, with queue 0, the system queue, being reserved for system traffic. The
remaining 16 queues can be configured by the administrator.
By default, queues evenly balance traffic. There are two thresholds by which queues are measured: queue
limit and byte count. The queue limit default is 20 records. The byte count limit default is 1500 bytes.
Whichever limit is reached first signifies the end of a particular queue's time with the processor. If the byte
count limit is reached during the transmission of a packet, the entire packet is dispatched.
As with priority queuing, the configuration of custom queuing involves the creation of a list and associating
a group with an interface. Traffic in the queues can be configured based on a specific traffic type, protocol,
or input interface. Access lists can be configured to place specific traffic types into a particular queue, and
traffic not designated to a particular queue can be placed in a default queue.
Note: To implement custom queuing on a Frame Relay interface, Frame Relay
traffic shaping must be disabled.
• If it is necessary to queue traffic based on a specific network address, protocol, or application, access
lists can be put in place to sort the traffic. Standard or extended access lists can be defined to specify the
traffic type or types that should be placed into a specific queue.
• The command syntax for custom queue configuration is:
RouterA(config)#queue-list list_number protocol protocol queue_number
queue_keyword keyword_value
The list_number argument can be an arbitrarily selected number from 1–16; however, all lines for a
particular queue list must have the same list_number to function properly. The queue_keyword and
keyword_value parameters are used to associate access lists with the queue list.
It is also possible to specify that any traffic that entered the router through a particular interface be
placed into a particular queue, using the following command:
RouterA(config)#queue-list list_number interface interface_type
interface_number queue_number
Any traffic that does not match any lines in a queue list is placed in the default queue. For custom
queuing, the default queue is queue 1. The command for assigning a default queue is:
Router(config)#queue-list list_number default queue_number
The amount of data a queue can service before having to move on to the next queue is known as a
service threshold. You can alter the service threshold of each individual queue. The command for
resizing a queue's record limit service threshold is:
RouterA(config)#queue-list list_number queue queue_number
limit limit_number
Valid entries for this command are 0-32767.
The command structure for altering the byte-count service threshold is as follows:
RouterA(config)#queue-list list_number queue queue_number
byte-count byte_count_number
• Once the queue list is created, it must be associated with an interface. The queue list is activated on the
interface by the custom-queue-list command.
• Verifying the queuing configuration can be performed by using the show queueing command, which
shows the detail of the queue lists configured on the router and the appropriate details of each list.
Note: The command used to verify the queuing configuration is show queueing and not
show queuing. The latter command is not recognized by the Cisco IOS.
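As a combined sketch, the following custom queuing configuration places Telnet traffic into queue 1,
raises that queue's byte count so it receives a larger share of the bandwidth, assigns all unmatched traffic
to queue 2 as the default, and activates the list on a serial interface (all numbers and the interface are
illustrative only):

RouterA(config)#queue-list 1 protocol ip 1 tcp telnet
RouterA(config)#queue-list 1 default 2
RouterA(config)#queue-list 1 queue 1 byte-count 3000
RouterA(config)#interface serial 0
RouterA(config-if)#custom-queue-list 1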
1.7.5.5 Class-Based Weighted Fair Queuing (CBWFQ)
Class-based Weighted Fair Queuing (CBWFQ) extends the standard functionality of WFQ to provide
support for user-defined traffic classes. It allows you to define traffic classes based on criteria such as
protocols, access lists, and input interfaces. Once a class has been defined, you can assign it characteristics
such as bandwidth, weight, and maximum packet limit. However, to characterize a class, you also specify
the maximum number of packets allowed to accumulate in the queue, or the queue limit, for that class. Once
a queue has reached its queue limit, the queuing of additional packets to the class causes tail drop or packet
drop to take effect, depending on how class policy is configured.
If a default class is configured with the bandwidth policy-map class configuration command, all
unclassified traffic is put into a single FIFO queue and treated according to the configured bandwidth. If a
default class is configured with the fair-queue command, all unclassified traffic is flow classified and
given best-effort treatment. If no default class is configured, then the traffic that does not match any of the
configured classes is flow classified and given best-effort treatment. Once a packet is classified, all of the
standard mechanisms that can be used to differentiate service among the classes apply.
For CBWFQ, the weight specified for the class becomes the weight of each packet that meets the match
criteria of the class. Packets that arrive at the output interface are classified according to the match criteria
filters you define, then each one is assigned the appropriate weight. The weight for a packet belonging to a
specific class is derived from the bandwidth you assigned to the class. After the weight for a packet is
assigned, the packet is queued in the appropriate class queue.
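A minimal CBWFQ configuration might therefore look like the following sketch, in which traffic matching
a hypothetical access list 101 is placed into a user-defined class and guaranteed 64 kbps, while unclassified
traffic receives flow-based best-effort treatment (the class, policy, and interface names are illustrative only):

RouterA(config)#class-map match-all BULK
RouterA(config-cmap)#match access-group 101
RouterA(config)#policy-map WAN-POLICY
RouterA(config-pmap)#class BULK
RouterA(config-pmap-c)#bandwidth 64
RouterA(config-pmap)#class class-default
RouterA(config-pmap-c)#fair-queue
RouterA(config)#interface serial 0
RouterA(config-if)#service-policy output WAN-POLICY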
1.7.5.6 Low-Latency Queuing (LLQ)
Low-Latency Queuing (LLQ) allows you to implement strict priority queuing to CBWFQ. This gives delaysensitive data preferential treatment over other traffic and ensures that delay-sensitive data is sent before
packets in other queues. LLQ is configured by using the priority command. It enables the use of a single,
strict priority queue within CBWFQ at the class level, allowing you to direct traffic belonging to a class to
the CBWFQ strict priority queue. To queue class traffic to the strict priority queue, you must specify the
named class within a policy map and then configure the priority command for the class.
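For example, a hypothetical VOICE class within a policy map could be given a strict priority queue with
128 kbps of guaranteed bandwidth as follows (the class name, policy name, and value are illustrative only):

RouterA(config)#policy-map WAN-POLICY
RouterA(config-pmap)#class VOICE
RouterA(config-pmap-c)#priority 128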
1.7.6 Null Interface
Access lists are not always the most suitable solution to alleviate congestion. Access lists require CPU
processing from the router. The more complex or the longer the access list, the greater the amount of CPU
processing is required. The null interface, which is a virtual interface, is an alternative to access lists. Traffic
may be sent to the null interface, but the traffic disappears because the interface has no physical layer. Thus,
while access lists require CPU processing to determine which packets to forward, the null interface just
routes the traffic to nowhere. The null interface is configured by using the following command syntax:
ip route ip_address subnet_mask null0
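For example, to silently discard all traffic destined for a hypothetical 10.99.0.0/16 network:

RouterA(config)#ip route 10.99.0.0 255.255.0.0 null0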
1.7.7 Fast, Autonomous, and Silicon Switching
Fast, autonomous and silicon switching techniques were created to improve the capability of the router to
forward traffic at speed. After the routing process has made a routing decision, it sends the packet to the
appropriate outbound interface. Meanwhile, the router holds a copy of the address details of the outbound
frame in memory, along with a pointer to the appropriate outbound interface. This means that incoming
traffic can be examined as it comes into the router. The router looks in the cache to see whether a routing
decision has already been made for that set of source and destination addresses. If an entry exists, the frame
can be switched directly to the outbound interface, and the routing process is bypassed.
1.7.8 Cisco Express Forwarding (CEF)
Another solution is Cisco Express Forwarding (CEF). This is a very high-end solution and is available on
7500 routers with Versatile Interface Processors (VIPs) and on the 8510 router. It is a distributed switching
mechanism that keeps copies of route cache information in several different forms for efficient switching.
It is designed for high-performance, highly resilient Layer 3 IP backbone switching. It optimizes network
performance and scalability for networks with large and dynamic traffic patterns, such as the Internet, and
for networks characterized by intensive Web-based applications or interactive sessions. CEF offers three
benefits: improved performance, because it is less CPU-intensive than fast-switching route caching;
scalability; and resilience, because CEF can switch traffic more efficiently than typical demand caching
schemes.
Information conventionally stored in a route cache is stored in two data structures for CEF switching: the
Forwarding Information Base (FIB) and the adjacency tables. These data structures provide optimized
lookup for efficient packet forwarding.
• The Forwarding Information Base (FIB) is used to make IP destination prefix-based switching
decisions. The FIB is similar to a routing table and maintains a mirror image of the forwarding
information contained in the IP routing table. When routing or topology changes occur, the IP routing
table is updated, and those changes are reflected in the FIB. The FIB also maintains next-hop address
information based on the information in the IP routing table. Because there is a one-to-one correlation
between FIB entries and routing table entries, the FIB contains all known routes and eliminates the
need for the route cache maintenance that is associated with other switching paths.
• Adjacency tables are used to maintain Layer 2 next-hop addresses for all FIB entries. The adjacency
table is populated as adjacencies are discovered. Each time an adjacency entry is created, a link-layer
header for that adjacent node is computed and stored in the adjacency table. Once a route is determined,
it points to a next hop and a corresponding adjacency entry, which is subsequently used for
encapsulation during CEF packet switching. However, a route might have several paths, such as when a
router is configured for load balancing and/or redundancy. For each resolved path, a pointer is added for
the adjacency corresponding to the next-hop interface for that path. This mechanism is also used for
load balancing across several paths.
CEF can be enabled in one of two modes: Central CEF Mode and Distributed CEF Mode (dCEF).
•
When Central CEF Mode is enabled, the CEF FIB and adjacency tables reside on the route processor,
and the route processor performs the express forwarding. This mode can be used when line cards, such
as VIP line cards or GSR line cards, are not available for CEF switching or when you need to use
features not compatible with distributed CEF switching.
•
When Distributed CEF Mode (dCEF) is enabled, line cards maintain an identical copy of the FIB and
adjacency tables. The line cards perform the express forwarding between port adapters, relieving the
RSP of involvement in the switching operation. dCEF uses an Inter Process Communication (IPC)
mechanism to ensure synchronization of FIBs and adjacency tables on the route processor and line cards.
1.7.9 Enhanced Interior Gateway Routing Protocol (EIGRP)
EIGRP is a Cisco proprietary routing protocol that is designed to make efficient use of the available network
bandwidth. It can be used for IP, AppleTalk, and IPX. EIGRP sends incremental updates, i.e., it sends
updates only when a change in the network is experienced. EIGRP is particularly efficient in sending
network and server information for client/server products such as NetWare for IPX and AppleTalk because
it automatically redistributes routing updates into the local protocol updates. EIGRP is discussed in more
detail in Section 7.
2. IP/TCP
IP addresses uniquely identify devices within the TCP/IP domain, including all routers and switching
devices, as well as all devices connected to the Internet. An IP address is composed of 32 binary bits and
consists of two parts: a network ID and a host ID. The boundary between the network ID and the host ID of
the IP address is defined by the subnet mask, another 32-bit field. There is a bit-for-bit alignment between
the IP address and the subnet mask. The subnet mask contains a continuous field of ones followed by a
continuous field of zeros. Where the contiguous ones stop indicates the boundary between the network ID
and the host ID of the IP address. The network boundary can occur at any place after the eighth bit position
from the left. Once the boundary between the network part and the host part of the IP address is known, all
devices addressed in that network will have a common binary pattern in the network part that identifies the
device as belonging to the specified network.
2.1 The IP Address
Both the IP address and the associated subnet mask contain 32 bits. However, the 32-bit IP address can be
represented in other formats. The common formats include decimal (base 10) and hexadecimal (base 16)
notation. The generally accepted format for representing IP addresses and subnet masks is the dotted
decimal notation, in which the 32-bit field is divided into four groups of eight bits, each called an octet (or
byte), which are translated to decimal values and separated by dots. The decimal value of each octet is
calculated from right to left, with each binary 1 representing a value and each binary 0 having no value.
Furthermore, binary is base 2; therefore each successive bit is twice the value of the preceding bit. This is
illustrated in Figure 2.1. When a binary value has more than one 1, as in 00001001, the decimal values for
the 1s are added to produce the decimal value. In this example 00000001 is 1 and 00001000 is 8; therefore
the decimal value for 00001001 is 9 (8+1). The maximum binary value for an octet contains all 1s, as in
11111111, and has a decimal value of 255 (128+64+32+16+8+4+2+1).

Binary Value    Decimal Value
00000001        1
00000010        2
00000100        4
00001000        8
00010000        16
00100000        32
01000000        64
10000000        128

Figure 2.1: Binary to Decimal Conversion
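The place-value arithmetic in Figure 2.1 can be sketched in a few lines of Python (the function name here is ours, purely for illustration):

```python
def octet_to_decimal(bits: str) -> int:
    """Sum the place values of each binary 1, right to left."""
    value = 0
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            value += 2 ** position  # each bit is worth twice the preceding bit
    return value

print(octet_to_decimal("00001001"))  # 9  (8 + 1)
print(octet_to_decimal("11111111"))  # 255 (128+64+32+16+8+4+2+1)
```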
2.1.1 IP Address Classes
IP addresses are divided into 'classes' based on the decimal value of the first octet. This class definition is
referred to as the First Octet Rule. There are five classes of IP addresses: class A, class B, class C, class D,
and class E, but only class A, B, and C addresses are used to identify devices connected to the Internet.
Class D addresses are used for multicasting, and class E addresses are reserved for experimental use. The
subnet mask is related to the IP address class. Thus, once the IP address class is known, the default subnet
mask is also known. The IP address classes and their related subnet masks are:
• Class A addresses range from 0.0.0.0 through 126.255.255.255 and use a default subnet mask of
  255.0.0.0. In Class A addresses, the first octet is used for the network ID while the last three octets are
  used for the host ID. In other words, the first 8 bits of the subnet mask are all 1s, hence a subnet mask of
  255.0.0.0. Because the first bit of a Class A address is always 0, only 7 bits remain for the network ID, so
  Class A addressing can theoretically support a maximum of 128 networks and 16,777,216 (2^24) hosts per
  network. However, the first and the last address cannot be used: the first address is the network address
  and the last address is the broadcast address. For example, a network with an IP address of 10.10.11.12
  has a network ID of 10.0.0.0, the first address, and a broadcast address of 10.255.255.255, the last
  address. Thus networks with a Class A IP address space can support a maximum of 126 networks (2^7-2)
  and 16,777,214 hosts (2^24-2). Consequently, Class A addresses are used for a few networks with a very
  large number of hosts on each network.
• Class B addresses range from 128.0.0.0 through 191.255.255.255 and use a default subnet mask of
  255.255.0.0. In Class B addresses, the first two octets are used for the network ID while the last two
  octets are used for the host ID. As a result, Class B addressing can support a maximum of 16,384
  networks (2^14) and 65,534 hosts (2^16-2) per network. Consequently, Class B addresses are used for a
  reasonable number of medium-sized networks.
Note: IP addresses with a first octet of 127, i.e. 127.0.0.0 through
127.255.255.255, do not identify usable networks. Addresses with a first octet
of 127 are reserved for loopback and diagnostic purposes.

Note: The IP address range 169.254.0.0 through 169.254.255.255 is reserved
for link-local addressing and is not routed on the Internet.
• Class C addresses range from 192.0.0.0 through 223.255.255.255 and use a default subnet mask of
  255.255.255.0. In Class C addresses, the first three octets are used for the network ID while only the last
  octet is used for the host ID. As a result, Class C addressing can support a maximum of 2,097,152
  networks (2^21) and 254 hosts (2^8-2) per network. Consequently, Class C addresses are used for a large
  number of networks with a relatively small number of hosts on each network.
• Class D addresses are in the range 224.0.0.0 through 239.255.255.255. These addresses are reserved for
  multicast transmissions.
• Class E addresses are in the range 240.0.0.0 through 255.255.255.255. These addresses are reserved for
  experimental use.
2.1.2 Classless Interdomain Routing (CIDR) Notation
Class-based IP addressing is fairly rigid. Thus, a small company with 50 hosts that wants to connect to the
Internet would need a Class C address. However, a Class C address range supports 254 hosts; therefore 204
addresses would be wasted. Similarly, a company with 4,000 hosts would require a Class B address to
connect to the Internet. A Class B address can support up to 65,534 hosts, resulting in 61,534 addresses
being wasted. This problem can be overcome by extending the default subnet mask by adding more
contiguous 1s to it. The result is that the network supports fewer hosts. Thus, the company that has 4,000
hosts would use a Class B address with a subnet mask of 255.255.240.0. This is achieved by extending the
subnet mask by 4 bits so that the first 20 bits represent the network ID and only 12 bits represent the host ID.
The address range now supports only 4,094 hosts, representing a loss of only 94 addresses. We can
calculate the number of hosts supported by using the formula 2^n-2, where n is the number of bits used for
the host ID. We subtract 2 addresses: the network address and the broadcast address. In this example, 12
bits are used for the host ID; thus this subnet mask supports 4,094 hosts (2^12-2).
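The 2^n-2 host calculation can be checked with a one-line helper (the function name is ours, for illustration):

```python
def usable_hosts(host_bits: int) -> int:
    """Usable host addresses for a given number of host bits: 2**n - 2."""
    return 2 ** host_bits - 2  # subtract the network and broadcast addresses

print(usable_hosts(12))  # 4094: a /20 mask leaves 12 host bits
print(usable_hosts(8))   # 254: the default Class C mask leaves 8 host bits
```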
This solves the problem of IP address allocation on the Internet but presents a problem for routing tables, as
the router can no longer determine the subnet mask on the basis of the IP address class. Hence a different
format of representing the IP address and its subnet mask is required. This format is called the Classless
Interdomain Routing (CIDR) notation. CIDR is in essence an adaptation of the dotted decimal format and
represents the subnet mask as the number of bits used for the network ID. This number of bits is indicated
after the IP address by the number that follows the slash (/) symbol and is referred to as the prefix mask.
For example, the CIDR notation IP address 140.12.26.128/20 has a prefix mask of /20, which indicates that
the first 20 bits of the IP address are used for the network ID, i.e., the first 20 bits of the subnet mask are all
1s. Thus, the subnet mask expressed in binary format is 11111111.11111111.11110000.00000000,
represented in dotted decimal format as 255.255.240.0. In addition, the routing protocols must send the
mask with the routing update.
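The expansion of a prefix mask into dotted decimal can be sketched with bit operations (a minimal illustration; the function name is ours):

```python
def prefix_to_mask(prefix: int) -> str:
    """Expand a CIDR prefix length into a dotted-decimal subnet mask."""
    bits = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF  # 'prefix' leading 1s
    return ".".join(str((bits >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(prefix_to_mask(20))  # 255.255.240.0
print(prefix_to_mask(8))   # 255.0.0.0
```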
2.1.3 Subnetting
The process of extending the default subnet mask creates a counting range in the octet that the subnet mask
was extended into, which can be used to represent subnetworks. For example, when we extend the default
Class B subnet mask to 255.255.240.0, we do so by extending the subnet mask by 4 bits into the third octet.
The number of bits that the subnet mask is extended by represents a counting range for the number of
subnetworks that the new subnet mask can support, using the 2^n-2 formula. Thus, the 255.255.240.0 subnet
mask can support 14 subnets (2^4-2). In other words, the 65,534 hosts supported by the default subnet mask
can now be divided among 14 subnetworks. The number of IP addresses supported by each subnet is called
an address range. To calculate the range of addresses for each subnet, we take the decimal value of the
lowest-order bit used for the subnet mask as the starting point for the first subnetwork, and then increment
that value for each subsequent subnet. In the third octet the subnet bits are 11110000; the lowest-order
subnet bit has a decimal value of 16 (00010000). Therefore the first IP address in the first subnet address
range would be 140.12.16.1. The address ranges for the 14 subnets would be:
• 140.12.16.1 to 140.12.31.254
• 140.12.32.1 to 140.12.47.254
• 140.12.48.1 to 140.12.63.254
• 140.12.64.1 to 140.12.79.254
• 140.12.80.1 to 140.12.95.254
• 140.12.96.1 to 140.12.111.254
• 140.12.112.1 to 140.12.127.254
• 140.12.128.1 to 140.12.143.254
• 140.12.144.1 to 140.12.159.254
• 140.12.160.1 to 140.12.175.254
• 140.12.176.1 to 140.12.191.254
• 140.12.192.1 to 140.12.207.254
• 140.12.208.1 to 140.12.223.254
• 140.12.224.1 to 140.12.239.254
Note: The IP address range for each subnet begins with a 1, as in
140.12.16.1 or 140.12.32.1 and not 140.12.16.0 or 140.12.32.0, because the
.0 address is the first address in the subnetwork and is therefore the
network address. Similarly, the last address in each range ends in 254 and
not 255, because the last address is the broadcast address.
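These ranges can be reproduced with Python's standard `ipaddress` module; the slicing that drops the first and last subnet mirrors the classful 2^n-2 rule used above:

```python
import ipaddress

# Enumerate the /20 subnets of 140.12.0.0 under mask 255.255.240.0, skipping
# the all-zeros and all-ones subnets per the 2**n - 2 convention in the text.
net = ipaddress.ip_network("140.12.0.0/16")
for subnet in list(net.subnets(new_prefix=20))[1:-1]:
    first = subnet.network_address + 1   # the .0 address is the network address
    last = subnet.broadcast_address - 1  # the highest address is the broadcast
    print(f"{first} to {last}")
```

The first line printed is `140.12.16.1 to 140.12.31.254` and the last is `140.12.224.1 to 140.12.239.254`, matching the table above.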
2.1.4 Variable-Length Subnet Masks
CIDR is used within the Internet. Its counterpart within an organization is the variable-length subnet mask
(VLSM). Like CIDR, VLSM allows you to allocate the required host bits on a granular basis. In other
words, it allows you to provide only the bits required to address the number of hosts on a particular
subnetwork. Like CIDR, VLSM requires a routing protocol that supports the sending of the subnet mask in
its updates. The routing protocols that support VLSM are RIPv2, OSPF, IS-IS, EIGRP, and BGP-4. The
routing protocols that do not support VLSM are RIPv1, IGRP, and EGP.
2.2 Summarization
Summarization allows the representation of a series of networks in a single summary address. At the top of
the hierarchical design, the subnets in the routing table are more generalized. The subnet masks are shorter
because they have aggregated the subnets lower in the network hierarchy. These summarized networks are
often referred to as supernets, particularly when seen in the Internet as an aggregation of class addresses.
They are also known as aggregated routes. The summarization of multiple subnets into a few summary
routes has several advantages. These include: reducing the size of the routing table; simplifying
recalculation of the network, as the routing tables are smaller; improving scalability by reducing network
overhead; and hiding network changes.
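Aggregation can be demonstrated with the standard `ipaddress` module; the 192.168.x.0 subnets below are hypothetical example networks:

```python
import ipaddress

# Four contiguous /24 subnets collapse into a single /22 summary (supernet),
# so one routing table entry can represent all four networks.
subnets = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(4)]
summary = list(ipaddress.collapse_addresses(subnets))
print(summary[0])  # 192.168.0.0/22
```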
2.2.1 Automatic Summarization
All routing protocols employ some type of summarization. RIP and IGRP automatically summarize at the
NIC, or natural class, boundary because the subnet mask is not sent in the routing updates. When a routing
update is received, the router checks whether it has an interface in the same class network. If it has one, it
applies the mask configured on that interface to the incoming routing update. With no interface configured
in the same NIC network, there is insufficient information and the routing protocol uses the first octet rule
to determine the default subnet mask for the routing update.
2.2.2 Manual Summarization
Both EIGRP and Open Shortest Path First (OSPF) send the subnet mask along with the routing update. This
feature allows the use of VLSM and summarization. When the routing update is received, the router assigns
the subnet mask to the particular subnet. When the routing process performs a lookup, it searches the entire
database and acts on the longest match, which is important because it allows for the granularity of the
hierarchical design, summarization, and discontiguous networks.
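Longest-match lookup can be sketched as follows; the route entries and interface names are hypothetical examples, not taken from any real configuration:

```python
import ipaddress

# Longest-match lookup: among all routes whose prefix contains the destination,
# the most specific (longest) prefix wins.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "Serial0",     # summary route
    ipaddress.ip_network("10.1.0.0/16"): "Serial1",    # more specific
    ipaddress.ip_network("10.1.2.0/24"): "Ethernet0",  # most specific
}
dest = ipaddress.ip_address("10.1.2.3")
match = max((n for n in routes if dest in n), key=lambda n: n.prefixlen)
print(match, "->", routes[match])  # 10.1.2.0/24 -> Ethernet0
```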
A discontiguous network is a network in which a different NIC number separates two instances of the same
NIC number. This can happen either through intentional design or through a break in the network topology.
If the network is not using a routing protocol that supports VLSM, this will create a routing problem because
the router will not know where to send the traffic. Without a subnet mask, a routing protocol that does not
support VLSM resolves the address down to the NIC number, which makes it appear as if there is a
duplicate address. This will lead to the appearance of intermittent connectivity symptoms.
If there are discontiguous networks in the organization, it is important that summarization is turned off or
not configured. Summarization may not provide enough information to the routing table on the other side of
the intervening NIC number to route appropriately to the destination subnets, especially with EIGRP, which
automatically summarizes at the NIC boundary. In OSPF and EIGRP, manual configuration is required for
any sophistication in the network design. However, because EIGRP can perform summarization at the
interface level, it is possible to select for summarization only those interfaces that do not feed discontiguous
networks.
If such selective summarization is not possible, you can either turn summarization off, accepting the scaling
limitations this places on the network, or you can readdress the network.
2.3 Implementing Private IP Addresses
2.3.1 Private IP Addressing
Private addressing is one of the solutions the Internet community began to implement when it became
apparent that there was a severe limitation to the number of IP addresses available on the Internet. Private
addressing was originally designed for organizations that had no intention of connecting to the Internet. As
Internet connectivity was not required, there was no need for a globally unique IP address from the Internet.
The individual organization could address its network without any reference to the Internet, using one of the
private IP address ranges. The advantage to the Internet was that none of the routers within the Internet
would recognize any of the addresses designated as private addresses. Therefore, if an organization that had
deployed private IP addressing connected to the Internet, all its traffic would be dropped. The ISPs' routers
are configured to filter all network routing updates from networks using private addressing. Table 2.1 lists
the three private IP Address ranges.
TABLE 2.1: Private IP Address Ranges

Class     IP Address Range                 Subnet Mask
Class A   10.0.0.0 to 10.255.255.255       255.0.0.0
Class B   172.16.0.0 to 172.31.255.255     255.240.0.0
Class C   192.168.0.0 to 192.168.255.255   255.255.0.0
The Class A private IP address range uses the default subnet mask to provide a single contiguous block of IP
addresses, i.e., a single classful network. The Class B private IP address range uses a 12-bit prefix mask (/12)
while the Class C private IP address range uses a 16-bit prefix mask (/16). Therefore the Class B range
comprises 16 contiguous Class B networks and the Class C range comprises 256 contiguous Class C networks.
The use of private addressing has become widespread among companies connected to the Internet and has
become the means by which an organization avoids having to apply for registered address space. Because
these addresses are not globally unique, a company cannot simply connect to the Internet; it must first go
through a gateway that translates its private addresses to valid, unique addresses. This is called a network
address translation (NAT) gateway.
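Python's standard `ipaddress` module can check whether an address falls in one of the private ranges of Table 2.1 (its `is_private` property also covers a few other special-use blocks defined since):

```python
import ipaddress

# is_private is True for the RFC 1918 ranges listed in Table 2.1.
for addr in ("10.1.2.3", "172.20.0.1", "192.168.1.1", "8.8.8.8"):
    print(addr, ipaddress.ip_address(addr).is_private)
```

The first three addresses fall inside the private ranges and print `True`; 8.8.8.8 is a public address and prints `False`.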
2.3.2 Network Address Translation
NAT is the method of translating an address on one network into a different address for another network. It
is used when a packet is traversing from one network to another and when the source address on the
transmitting network is not legal or valid on the destination network, such as when the destination
corresponds to a private address. The NAT software process must be run on a Layer 3 device or router as the
address that needs to be translated is a Layer 3 address. NAT is often implemented on a device that operates
at higher layers of the OSI model because of their strategic placement in the organization, such as on a
firewall system. The position of the firewall, on the boundary between the corporate network and the
Internet, makes it ideal for the implementation of NAT.
NAT is useful in several situations: to connect to the Internet organizations that use address space issued to
other organizations; to connect organizations that use private address space to the Internet; to connect two
organizations that have used the same private address space; and when an organization wants to hide its
addresses, using NAT as part of its firewall capabilities or with additional security features.
2.4 The Logical AND Operation
When an IP address is assigned to an interface, it is configured with the subnet mask. Although represented
in a dotted decimal format, the router converts the IP address and the subnet mask into binary and performs
a logical AND operation to find the network portion of the address, i.e., the network ID. To perform a
logical AND, the IP address is written out in binary, with the subnet or Internet mask written beneath it in
binary. Each binary digit of the address is then ANDed with the corresponding binary digit of the mask. The
AND operation has two rules: 1 AND 1 is 1; any combination containing a 0 is 0. Essentially, the logical
AND operation removes the host ID from the IP address, as illustrated in Figure 2.2.
IP address:                                     140.12.26.128
IP subnet mask:                                 255.255.240.0
IP address in binary:                           10001100.00001100.00011010.10000000
IP subnet mask in binary:                       11111111.11111111.11110000.00000000
Result of the logical AND in binary:            10001100.00001100.00010000.00000000
Result of the logical AND in dotted decimal:    140.12.16.0

Figure 2.2: The Logical AND Operation
In the above example, the network to which the host 140.12.26.128 belongs has the network ID of
140.12.16.0. Once the network ID is determined, the router can perform a search on the routing table to see
whether it can route to the remote network. Therefore, the correct mask is essential to ensure that traffic can
be directed through the overall network.
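The octet-by-octet AND of Figure 2.2 can be reproduced directly (the function name is ours, for illustration):

```python
def network_id(address: str, mask: str) -> str:
    """Derive the network ID by ANDing address and mask octet by octet."""
    return ".".join(
        str(int(a) & int(m))  # bitwise AND of corresponding octets
        for a, m in zip(address.split("."), mask.split("."))
    )

print(network_id("140.12.26.128", "255.255.240.0"))  # 140.12.16.0
```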
2.5 IP Routing
2.5.1 Routing Protocols
In an IP network, routing is performed by routing protocols. A
routing protocol is a set of rules that describes how Layer 3
routing devices send updates about the available networks to each
other. If more than one path to the remote network exists, the
protocol also determines how to select the best path or route to the
remote network.
Participating routers advertise the routes that they know about to their neighbors in routing updates. These
routes are held in the routing table. The router references the routing table to make a decision about
forwarding data packets to the end destination identified in the destination address of the datagram/packet.

Route Metrics: In a routed network, the routing process relies on the routing protocol to locate the best path
to the destination network. Different routing protocols in the TCP/IP environment use different measuring
mechanisms, or metrics, to locate the best path to a destination network. In addition, routers advertise the
path to a network in terms of a metric value. Some examples of metrics are hop count and cost. If the
destination network is not local to the router, the path is represented by the total of the metric values defined
for all of the links that must be traversed to reach the destination network. Once the routing process knows
the metric values associated with the different paths, the routing decision can be made. The routing process
selects the path that has the smallest metric value.

The routing table has four fields:
• The Network field, which contains the networks that the router knows exist in the organization. These
  entries either were entered manually as static routes or default routes, or were learned via a routing
  protocol as dynamic routes.
• The Outgoing Interface field, which is the interface on the router through which the routing process
  sends the datagram. The outgoing interface field in the routing table indicates which interface to send the
  datagram out of, and records which interface the routing update came through.
• The Metric field, which holds route metric data and is used to determine which path to use if there are
  multiple paths to the remote network. The metric used depends on the routing protocol. Table 2.2 shows
  the metrics used by the different routing protocols.
• The Next Logical Hop field, which is the destination address of the next forwarding router. The address
  of the next logical hop will be on the same subnet as the outgoing interface. The purpose of identifying
  the next logical hop is to enable the router to create the Layer 2 frame with the correct destination
  address. The logical address of the next hop is stored instead of the MAC address because the MAC
  address may change with changes in the hardware; such changes do not affect the logical address.
TABLE 2.2: The Metrics used by Different Routing Protocols

Protocol   Metric
RIPv1      Hop count
IGRP       Bandwidth, delay, load, reliability, MTU
EIGRP      Bandwidth, delay, load, reliability, MTU
OSPF       Cost
IS-IS      Cost
Note: A routed protocol is the Layer 3 datagram used to transfer application
data as well as the upper-layer information from one network device to
another. The routing protocol is the protocol used to send updates between
the routers about the networks that exist in the organization, thereby
allowing the routing process to determine the path of the datagram across
the network.
2.5.2 The show ip route Command
The show ip route command displays the IP routing table on the router. It details the network as it is
known to the router, as well as the source of that information. This command can be used to troubleshoot
configuration errors and to understand how the network is communicating about its routes. When used with
the network_number, or network ID, parameter, only the entry for that particular network in the routing
table is displayed.
2.5.3 The clear ip route Command
The clear ip route command empties the contents of the routing table and forces the router to learn the
information about the network anew. This is useful in troubleshooting a network. When used with the
network_number parameter, only the specified network is removed from the table.
3. Basic Switching and Network Technologies
3.1 Network Technologies
Various network technologies can be used to establish switched connections within the campus network:
Ethernet, Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Token Ring,
and Asynchronous Transfer Mode (ATM). Ethernet has emerged as the most popular choice in installed
networks because of its low cost, availability, and scalability to higher bandwidths. Ethernet scales to
support increasing bandwidths and should be chosen to match the need at each point in the campus network.
As network bandwidth requirements grow, the links between the access, distribution, and core layers can be
scaled to match the load.
3.1.1 Ethernet
Ethernet is a LAN technology that provides shared media access to many connected stations. It is based on
the Institute of Electrical and Electronics Engineers (IEEE) 802.3 standard and offers a bandwidth of 10
Mbps between end users. In its most basic form, Ethernet is a shared media that becomes both a collision
and a broadcast domain. As the number of users on the shared media increases, so does the probability that a
user is trying to transmit data at any given time. Ethernet is based on the carrier sense multiple access
collision detect (CSMA/CD) technology, which requires that transmitting stations back off for a random
period of time when a collision occurs.
In a campus network environment, Ethernet is usually used in the access layer, between end user devices
and the access layer switch. Ethernet is not typically used at either the distribution or core layer.
3.1.1.1 Ethernet Switches
As the number of users on an Ethernet segment increases, the segment becomes less efficient. Ethernet
switching addresses this problem by reducing the number of users sharing a segment: an Ethernet switch
dynamically allocates a dedicated 10 Mbps of bandwidth to each of its ports, giving every connected user a
dedicated connection and thereby increasing network performance. However, if an enterprise server is
located elsewhere in the network, then all of the switched users must still share the available bandwidth
across the campus to reach it. A network design based on careful observation of traffic patterns and flows
would thus need to be implemented.
Switched Ethernet removes the possibility of collisions, thus stations do not have to listen to each other in
order to take a turn transmitting on the wire. Instead, stations can operate in full-duplex mode, transmitting
and receiving simultaneously. This further increases network performance, with a net throughput of 10
Mbps in each direction, or 20 Mbps total on each port.
3.1.1.2 Ethernet Media
Coaxial cable was the first media system specified in the Ethernet standard. Coaxial Ethernet cable comes in
two major categories: Thicknet (10Base5) and Thinnet (10Base2). These cables differ in their diameter and
their length limitations. Although Ethernet coaxial cable runs can be quite long, they are susceptible to
electromagnetic interference (EMI) and eavesdropping.
TABLE 3.1: Coaxial Cable for Ethernet

Cable               Diameter  Resistance  Bandwidth  Length
Thinnet (10Base2)   5 mm      50 ohms     10 Mbps    185 m
Thicknet (10Base5)  10 mm     50 ohms     10 Mbps    500 m
Today most networks use twisted-pair media for connections to the desktop. Twisted-pair also comes in two
major categories: unshielded twisted-pair (UTP) and shielded twisted-pair (STP). One pair of insulated
copper wires twisted about each other forms a twisted pair; the pairs are twisted to reduce interference and
crosstalk. Both STP and UTP suffer from high attenuation, therefore these lines are usually restricted to an
end-to-end distance of 100 meters between active devices. Furthermore, these cables are sensitive to EMI
and eavesdropping. Most networks use 10BaseT UTP cable.
An alternative to twisted-pair is fiber optic cable (10BaseFL). Instead of transmitting electrical signals, as
coaxial and twisted-pair cables do, fiber optic cable transmits light signals, which are generated either by
light emitting diodes (LEDs) or laser diodes (LDs). There are two major categories of fiber optic cable:
multimode and single-mode. Multimode cables carry light, typically generated by LEDs, along multiple
light paths; as a result the light pulse at the end of the cable is more blurred. Single-mode cables carry a
single-wavelength light, typically generated by laser diodes, along a single path. These cables support
higher transmission speeds and longer distances but are more expensive. Because they do not carry
electrical signals, fiber optic cables are immune to EMI and eavesdropping. They also have low attenuation,
which means they can be used to connect active devices that are up to 2 km apart. However, fiber optic
devices are not cost effective and cable installation is complex.
TABLE 3.2: Twisted-Pair and Fiber Optic Cable for Ethernet

Technology              Bandwidth  Cable Length
Twisted-Pair (10BaseT)  10 Mbps    100 m
Fiber Optic (10BaseFL)  10 Mbps    2,000 m
3.1.2 Cisco Long Reach Ethernet (LRE)
Cisco LRE can be transported over lengthy distances over Category 1, 2, or 3 wiring and is available in the
Catalyst 2900 LRE XL switch series. LRE can supply 5 Mbps full-duplex bandwidth over connections up
to 5,000 feet, 10 Mbps up to 4,000 feet, and 15 Mbps up to 3,000 feet. LRE ports connect into existing
building wiring, can share the same physical wiring pairs with POTS and ISDN, and can coexist in the same
building with ADSL.
An LRE connection requires the following components:
• Cisco Catalyst 2900 LRE XL switch: Aggregates 12 or 24 LRE connections at the building head-end
• Cisco 575 or 585 LRE CPE: Terminates the LRE connection in the tenant room
• Cisco LRE 48 POTS Splitter: Separates POTS and LRE on 48 ports when a building is using existing
  telephone wiring
3.1.3 Fast Ethernet
To address the demand of modern networks for greater bandwidth, the networking industry developed a
higher-speed Ethernet based on the existing Ethernet standards. Fast Ethernet operates at 100 Mbps and is
based on the IEEE 802.3u standard. The Ethernet cabling schemes, CSMA/CD operation, and all upper-layer
protocol operations have been maintained with Fast Ethernet. The net result is the same data link
Media Access Control (MAC) layer merged with a new physical layer.
Furthermore, the Fast Ethernet specification is backward compatible with 10 Mbps Ethernet. Compatibility
is possible because the two devices at each end of a network connection can automatically negotiate link
capabilities so that they both can operate at a common level. This negotiation involves the detection and
selection of the highest available bandwidth and half-duplex or full-duplex operation. For this reason, Fast
Ethernet is also referred to as 10/100 Mbps Ethernet.
The larger bandwidth available with Fast Ethernet can support the aggregate traffic from multiple Ethernet
segments in the access layer. Fast Ethernet can also be used to connect distribution layer switches to the core,
with either single or multiple redundant links. It can also be used to connect faster end user workstations to
the access layer switch, and to provide improved connectivity to enterprise servers. Therefore, Fast Ethernet
can be successfully deployed at all layers within a campus network.
In addition, Cisco provides Fast EtherChannel (FEC), which allows several Fast Ethernet links to be bundled
together for increased throughput. Fast EtherChannel (FEC) allows two to eight full-duplex Fast Ethernet
links to act as a single physical link, for 400- to 1600-Mbps bandwidth. EtherChannel is discussed in more
detail in Section 12.1.
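As a quick check of those figures, a minimal sketch (the helper below is illustrative, not a Cisco tool; it assumes 100 Mbps per link, counted in both directions for full-duplex operation):

```python
def fec_bandwidth_mbps(links, link_speed_mbps=100):
    """Aggregate throughput of a Fast EtherChannel bundle.

    Full-duplex operation is counted in both directions, so each
    100 Mbps link contributes 200 Mbps to the aggregate figure.
    """
    if not 2 <= links <= 8:
        raise ValueError("FEC bundles use two to eight links")
    return links * link_speed_mbps * 2
```

With 2 links this yields 400 Mbps and with 8 links 1,600 Mbps, matching the range quoted above.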
Cabling for Fast Ethernet can be either UTP or fiber optic. Specifications for Fast Ethernet cables are shown
in Table 3.3.
TABLE 3.3: Fast Ethernet Cabling and Distance Limitations

Technology   Wiring Type                         Pairs   Cable Length
100BaseTX    EIA/TIA Category 5 UTP              2       100 m
100BaseT2    EIA/TIA Category 3, 4, 5 UTP        2       100 m
100BaseT4    EIA/TIA Category 3, 4, 5 UTP        4       100 m
100BaseFX    Multimode fiber (MMF) with 62.5     1       400 m (half-duplex)
             micron core; 1300 nm laser                  2,000 m (full-duplex)
             Single-mode fiber (SMF) with 62.5   1       10,000 m
             micron core; 1300 nm laser
3.1.4 Gigabit Ethernet
Gigabit Ethernet is an extension of the Fast Ethernet standard that uses the same IEEE 802.3 Ethernet frame
format. Gigabit Ethernet offers a throughput of 1,000 Mbps (1 Gbps). Like Fast Ethernet, Gigabit Ethernet is
compatible with earlier Ethernet types. However, the physical layer has been modified to increase data
transmission speeds by merging two technologies: the IEEE 802.3 Ethernet standard and the American
National Standards Institute (ANSI) X3T11 FibreChannel standard. IEEE 802.3 provided the foundation of
frame format, CSMA/CD, full duplex, and other characteristics of Ethernet. FibreChannel provided a base of
high-speed ASICs, optical components, and encoding/decoding and serialization mechanisms. The resulting
protocol is termed IEEE 802.3z Gigabit Ethernet.
In a campus network, Gigabit Ethernet can be used in the switch block, the core block, and in the server
block. In the switch block, it is used to connect access layer switches to distribution layer switches. In the
core, it connects the distribution layer to the core switches, and also interconnects the core devices. For a
server block, a Gigabit Ethernet switch in the server block can provide high-speed connections to individual
servers.
Cisco has extended FEC to allow you to bundle several Gigabit Ethernet links. Gigabit EtherChannel (GEC)
allows two to eight full-duplex Gigabit Ethernet connections to be aggregated, for up to 16 Gbps throughput.
Gigabit Ethernet supports several cabling types, referred to as 1000BaseX. Table 3.4 lists the cabling
specifications for each type.
TABLE 3.4: Gigabit Ethernet Cabling and Distance Limitations

Technology      Wiring Type                         Pairs   Cable Length
1000BaseCX      Shielded Twisted Pair (STP)         1       25 m
1000BaseT       EIA/TIA Category 5 UTP              4       100 m
1000BaseSX      Multimode fiber (MMF) with 62.5     1       275 m
                micron core; 850 nm laser
1000BaseLX/LH   Multimode fiber (MMF) with 50       1       550 m
                micron core; 1300 nm laser
                Multimode fiber (MMF) with 62.5     1       550 m
                micron core; 1300 nm laser
                Single-mode fiber (SMF) with 50     1       550 m
                micron core; 1300 nm laser
                Single-mode fiber (SMF) with 9      1       10 km
                micron core; 1300 nm laser
1000BaseZX      Single-mode fiber (SMF) with 9      1       70 km
                micron core; 1550 nm laser
                Single-mode fiber (SMF) with 8      1       100 km
                micron core; 1550 nm laser
3.1.5 10Gigabit Ethernet
Gigabit Ethernet has been further extended to 10Gigabit Ethernet, using the same IEEE 802.3 Ethernet
frame format. 10Gigabit Ethernet offers a throughput of 10 Gbps and is compatible with earlier Ethernet
types. However, it only functions over optical fiber, and only operates in full-duplex mode, thus making
collision detection protocols (CSMA/CD) unnecessary.
In a campus network, 10Gigabit Ethernet can be used in the switch block, the core block, and in the server
block. It can be used to connect access layer switches to distribution layer switches and distribution layer
switches to the core switches; however, its most practical application is to interconnect the core devices. For
a server block, a 10Gigabit Ethernet switch can provide high-speed connections to individual servers.
3.1.6 Token Ring
Like Ethernet, Token Ring is a LAN technology that provides shared media access to many connected
stations. Token Ring stations are arranged in a ring, in a daisy-chain fashion. A token is passed from station
to station around the ring, giving the current token holder permission to transmit a frame onto the ring. Once
the frame is sent, it is passed around the ring until it is received again by the source. The sending station is
responsible for removing the frame from the ring and for introducing a new token to the next neighboring
station. This means that only one station can transmit at a given time, and prevents a Token Ring network
from ever becoming a collision domain. However, frames can be sent to a broadcast MAC address, like
Ethernet, causing all stations on the ring to listen. Therefore, a token ring is a broadcast domain.
A Token Ring network offers a bandwidth of 4 Mbps or 16 Mbps. At the higher rate, stations are allowed to
introduce a new token as soon as they finish transmitting a frame. This early token release increases
efficiency by letting more than one station transmit a frame during the original token's round trip. One
station is elected to be the ring monitor, to provide recovery from runaway frames or tokens. The ring
monitor will remove frames that have circled the ring once, if no other station removes them.
Traditional Token Ring networks use multistation access units (MSAUs) to provide connectivity between
end user stations. MSAUs have several ports that a station can connect to, with either a B connector for
Type 2 cabling or an RJ-45 connector for Category 5 UTP cabling. Internally, the MSAU provides
station-to-station connections to form a ring segment. The Ring-In and Ring-Out connectors of an MSAU can be
chained to other MSAUs to form a complete ring topology.
To form larger networks, Token Rings are interconnected with bridges. Source-route bridges are used to
forward frames between rings, based on a predetermined path. The source station includes the exact
ring-and-bridge path within the frame so that specific bridges will forward the frame to the appropriate rings.
Ring numbers must be unique within the campus network. Bridge numbers, however, do not have to be
unique across the network, as long as two bridges with the same number do not connect to the
same ring.
As in Ethernet switching, Token Rings can also be segmented by dividing a ring across several switch ports.
This increases the available bandwidth on a ring segment, but it requires more in-depth forwarding decisions.
Token Ring switching, which is called source-route switching, forwards frames according to a combination of
MAC addresses and Routing Information Field (RIF) contents.
Source-route switching differs from other forms of bridging in that it only looks at the RIF and never
updates or adds to the RIF. Instead, the switch learns route descriptors, or the ring/bridge combinations that
specify the next-hop destinations from incoming frames. The source-route switch then associates the route
descriptors and MAC addresses with outbound ports closest to the destination. When subsequent frames are
received on other ports, the route descriptor is quickly indexed to look up the outbound port. Thus,
source-route switching supports parallel source-route paths to destinations. The number of MAC addresses to be
learned is reduced, because route descriptors point to the next-hop ports. Source-route switching and Token
Ring are discussed in more detail in Section 11.6.
3.2 Connecting Switches
Switch deployment in a network involves two steps: physical connectivity and switch configuration. Cable
connections must be made to the console port of a switch in order to make initial configurations. Physical
connectivity between switches and end users involves cabling for the various types of LAN ports.
3.2.1 Console Port Cables and Connectors
A terminal emulation program on a computer is usually required to interface with the console port on a
switch. Each Cisco switch family has its own set of console cables and connectors. All Catalyst switch
families use an RJ-45-to-RJ-45 rollover cable to make the console connection between a computer, terminal,
or modem and the console port. On the Catalyst 1900, 2820, 2900, 3500, 2926G, 2948G, 4912G, 5000
Supervisor IIG/III/IIIG, and the 6000 switches, the rollover cable plugs directly into the RJ-45 jack of the
console port; at the other end, it plugs into an RJ-45 to DB-9 or RJ-45 to DB-25 "Terminal" adapter for a
computer connection, or into a DB-25 "Modem" adapter for a modem connection. On the Catalyst 4003,
5000 Supervisor I/II, and the 8500 switches, the rollover cable
must connect to an RJ-45 to DB-25 adapter. These switches have a DB-25 console port connector that is a
female DCE. Once the console port is cabled to the computer, terminal, or modem, a terminal emulation
program can be started or a user connection can be made. The console ports on all switch families require an
asynchronous serial connection at 9600 baud, 8 data bits, no parity, 1 stop bit, and no flow control.
3.2.2 Ethernet Port Cables and Connectors
Catalyst switches support a variety of network connections, including all forms of Ethernet. In addition,
Catalyst switches support several types of cabling, including UTP and fiber optic. On Catalyst 1900 and
2820 series switches, the Ethernet ports are fixed-speed, with 12 or 24 10BaseT ports and one or two
100BaseTX or 100BaseFX ports. The 10BaseT and 100BaseTX ports use Category 5 UTP cabling and
RJ-45 connectors. The 100BaseFX ports use two-strand multimode fiber (MMF) with SC connectors. All
other Catalyst switch families support 10/100 autosensing and Gigabit Ethernet. Switched 10/100 ports use
RJ-45 connectors on Category 5 UTP cabling. These ports can be connected to other 10BaseT, 100BaseTX,
or 10/100 autosensing devices.

To connect two 10/100 switch ports back-to-back, as in an access layer to distribution layer link, you must
use a Category 5 UTP crossover cable.

Rollover and Crossover Cables: With a "rollover" cable, the pins on one end are all reversed on the other
end; that is, pin 1 on one end connects to pin 8 on the other end, pin 2 to pin 7, pin 3 to pin 6, and pin 4 to
pin 5. On a "crossover" cable, pairs 2 and 3 on one end of the cable are reversed on the other end; that is,
pin 1 connects to pin 3, pin 2 to pin 6, pin 3 to pin 1, and pin 6 to pin 2, while pins 4, 5, 7, and 8 connect
straight through.
3.2.3 Gigabit Ethernet Port Cables and Connectors
Gigabit Ethernet connections provide modular connectivity options. Catalyst switches with Gigabit Ethernet
ports accept Gigabit Interface Converters (GBICs) so that various types of cables can be connected.
Furthermore, the GBIC module is hot-swappable. The following GBICs are available:
• 1000BaseSX GBIC, which provides short wavelength connectivity using SC fiber connectors and MMF
  for distances up to 550 meters.
• 1000BaseLX/LH GBIC, which provides long wavelength or long haul connectivity using SC fiber
  connectors and either MMF or single-mode fiber (SMF); MMF can be used for distances up to 550
  meters and SMF for distances up to 10 km.
• 1000BaseZX GBIC, which provides extended distance connectivity using SC fiber connectors and
  SMF for distances up to 70 km, or 100 km when used with premium grade SMF.
• GigaStack GBIC, which provides a proprietary GBIC-to-GBIC connection between stacked Catalyst
  switches or between any two Gigabit switch ports over a short distance.
3.2.4 Token Ring Port Cables and Connectors
Catalyst switches also support UTP Token Ring connections. These ports operate at either 4 or 16 Mbps, in
several half and full-duplex modes. RJ-45 connectors on Category 5 UTP cabling use twisted pairs 3,6 and
4,5. These pairs are connected straight through to the far end.
3.3 Switch Management
Cisco Catalyst switch devices can be configured to support many different features. Configuration is
generally performed using a terminal emulator application when a computer is connected to the serial
console port. Further configurations can be performed through a Telnet session across the LAN or through a
web-based interface. Catalyst switches support one of two types of user interface for configuration: Cisco
IOS-based commands and set-based command-line interface (CLI) commands. The IOS-based commands
found on the Catalyst 1900/2820, 2900XL, and 3500XL are similar to many IOS commands used on Cisco
routers. The set-based CLI found on the 2926G, 4000, 5000, and 6000 uses set and clear commands
to change configuration parameters.
3.3.1 Switch Naming
All switches are shipped from the factory with a default configuration and a default system name or prompt.
This name can be changed, which can be useful when you are using Telnet to move from switch to switch in
a network. On an IOS-based switch, use the following command in configuration mode to change the host or
system name:
Switch(config)# hostname host_name
To change the host or system name on a CLI-based switch, you can use the following command in
configuration mode:
Switch(enable) set system name system_name
3.3.2 Password Protection
A network device should be configured to secure it from unauthorized access. Catalyst switches allow you
to set passwords that restrict who can log in to the user interface. Catalyst switches have two levels of
user access: regular login, which is called exec mode, and enable login, which is called privileged mode.
Exec mode is the first level of access, which gives access to the basic user interface through any line or the
console port. The privileged mode requires a second password and allows users to set or change switch
operating parameters or configurations.
On an IOS-based switch, you can use the following commands in global configuration mode to set the login
passwords:
Switch(config)# enable password level 1 password
Switch(config)# enable password level 15 password
The first command sets the exec mode password with a privilege level of 1, while the second sets the enable
password with a privilege level of 15. Both passwords must be a string of four to eight alphanumeric
characters. The passwords on these switches are not case-sensitive.
On a CLI-based switch, you can use the following commands in enable mode to set the login passwords:
Switch (enable) set password
Enter old password: old_password
Enter new password: new_password
Retype new password: new_password
Password changed.
Switch (enable) set enablepass
Enter old password: old_enable_password
Enter new password: new_enable_password
Retype new password: new_enable_password
Password changed.
Switch (enable)
On these switches, password is the exec mode password, and the enablepass is the privileged mode
password. Unlike on the IOS-based switches, passwords on these switches are case-sensitive.
Cisco provides various methods for providing device security and user authentication, many of which are
more secure than using the login passwords. These methods are discussed in Section 19.
3.3.3 Remote Access
By default, the switch login passwords allow user access only via the console port. To use Telnet to access a
switch from within the campus network, you must configure the switch for remote access. Although a switch
operates at Layer 2, the switch supervisor processor must maintain an IP stack at Layer 3 for administrative
purposes. An IP address and subnet mask can then be assigned to the switch so that remote communications
with the switch supervisor are possible. By default, all ports on a switch are assigned to the same virtual
LAN (VLAN) or broadcast domain. The switch supervisor and its IP stack must be assigned to a VLAN
before remote Telnet and ping sessions will be supported. VLANs are discussed in detail in Section 11.
To enable remote access on an IOS-based switch, assign an IP address to the management VLAN using the
following commands in global configuration mode:
Switch(config)# interface vlan vlan_number
Switch(config-if)# ip address ip_address subnet_mask
Switch(config-if)# exit
Switch(config)# ip default-gateway ip_address
These commands assign an IP address, subnet mask and a gateway to the management VLAN (VLAN1 by
default) specified in the vlan_number parameter. You can check the switch's current IP settings by
using the show ip command.
To enable remote access on a CLI-based switch, configure an IP address for in-band management by
entering the following commands in privileged mode:
Switch(enable) set interface sc0 ip_address subnet_mask broadcast_address
Switch(enable) set interface sc0 vlan_number
Switch(enable) set ip route default gateway
To check the switch's current IP settings, use the show interface command.
3.3.4 Inter-Switch Communication
Because switch devices are usually interconnected, management is simplified by inter-switch
communication. Cisco has implemented protocols on its devices so that neighboring Cisco equipment can be
found. Also, some families of switch devices can be clustered and managed as a unit once they discover one
another. The Cisco Discovery Protocol is used for this purpose. CDP is a Cisco proprietary layer 2 protocol
that is bundled in Cisco IOS release 10.3 and later versions. CDP can run on all Cisco manufactured devices,
including switches. It uses SNAP (layer 2 frame type) and is multicast based, using a destination MAC
address of 01:00:0C:CC:CC:CC. CDP communication occurs at the data link layer so that it is independent
of any network layer protocol that may be running on a network segment. By default, a Cisco device running
CDP sends information about itself on each of its ports every 60 seconds. Neighbor devices that are directly
connected to the device will add the device and its information to their dynamic CDP tables. Switches regard
the CDP address as a special address designating a multicast frame that should not be forwarded. Instead,
CDP multicast frames are redirected to the switch's management port, and are processed by the switch
supervisor alone. Therefore, Cisco switches only become aware of other directly connected Cisco devices.
The information a switch sends includes:
• Its device name;
• Its device capabilities;
• Its hardware platform;
• The port type and number through which CDP information is being sent; and
• One address per upper layer protocol.
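The periodic advertise-and-age behavior described earlier (advertisements every 60 seconds, neighbors held in a dynamic table) can be sketched as a toy neighbor table. The 180-second holdtime used below is an assumed default not stated in this section, and the class is illustrative only, not Cisco code:

```python
class CdpTable:
    """Toy model of a dynamic CDP neighbor table.

    Neighbor entries are refreshed by each received advertisement
    and aged out once nothing has been heard within the holdtime.
    """

    def __init__(self, holdtime_s=180):   # assumed default holdtime
        self.holdtime_s = holdtime_s
        self.last_heard = {}  # neighbor device name -> timestamp (s)

    def advertisement(self, device, now_s):
        # A received CDP frame adds or refreshes the neighbor entry.
        self.last_heard[device] = now_s

    def neighbors(self, now_s):
        # Only entries heard within the holdtime are still valid.
        return sorted(d for d, t in self.last_heard.items()
                      if now_s - t <= self.holdtime_s)
```

For example, a neighbor last heard at t = 0 still appears at t = 120 but has aged out by t = 300.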
On an IOS-based switch, CDP is enabled by default. To disable CDP, you use the following command:
Switch(config-if)# no cdp enable
To re-enable CDP, use the same command without the no keyword. To view the information an IOS-based
switch has learned from CDP advertisements of neighboring Cisco devices, you use one of the following
commands:
Switch# show cdp interface [ type module_number/port_number ]
or
Switch# show cdp neighbors [ type module/port ] [ detail ]
The first command displays CDP information pertaining to a specific interface. If the type, module_number,
and port_number are not specified, CDP information from all interfaces is listed. The second command
displays CDP information about neighboring Cisco devices. If the detail keyword is used, all CDP
information about each neighbor is displayed.
CDP is also enabled by default on a CLI-based switch. You can, however, enable or disable CDP by using
the following command:
Switch(enable) set cdp {enable | disable} module_number/port_number
In this command, the module_number and port_number can be specified to enable or disable CDP on a
specific port; otherwise, CDP is enabled or disabled for all ports on the switch. To view information learned
from CDP advertisements of neighboring Cisco devices, use the following command:
from CDP advertisements of neighboring Cisco devices, use the following command:
Switch(enable) show cdp neighbors [ module_number/port_number ]
[ vlan | duplex | capabilities | detail ]
Again the module_number and port_number can be specified to view information learned via the specified
port. The vlan keyword displays information about the native VLAN numbers of neighboring devices. The
duplex keyword displays the duplex type of each neighboring device. Using capabilities displays
capability codes for the neighboring devices. The detail keyword displays all possible CDP information
about each neighboring device, including the IP address assigned to the neighboring interface or
management interface.
3.3.5 Switch Clustering and Stacking
Up to 16 Cisco switch devices can be grouped into a management cluster, regardless of their physical
location on the network. An entire cluster of switches can be managed through a single IP address.
Furthermore, cluster management can be performed through HTML, IOS-based, and SNMP-based
management interfaces on the command switch. Cluster discovery takes place once a command switch has
been assigned an IP address and configured as a command switch. CDP messages are used to discover
neighboring switches that are candidates for cluster membership. However, cluster discovery takes place
only on switch ports that are assigned and connected to the default management VLAN, i.e., VLAN1. Also,
the command switch can only discover the switch devices that are directly connected to it. Other switches
daisy-chained behind the directly connected neighbors can be manually added to the cluster. To configure a
switch to become the command switch for a cluster, assign an IP address for the management interface and
then use the following command.
Switch(config)# cluster enable cluster_name
3.4 Switch File Management
IOS image files and configuration files are normally used in a Catalyst switch. Files can be stored in the
following file systems:
• Network servers: A server external to the switch is connected to the network to provide FTP, TFTP, or
  remote copy program (rcp) file transfer facilities.
• Flash memory: The switch has nonvolatile memory where files can be stored; these files survive a
  power cycle.
• NVRAM: The NVRAM file system is duplicated in Flash memory on most switches, and holds the
  switch configuration used at bootup.
• RAM: The switch configuration used at runtime and modified by configuration commands is stored
  in volatile memory.
3.4.1 OS Image Files
IOS image files are stored on the switch in Flash memory. Many image files can be stored on the switch, but
only one can be used at runtime. The Catalyst 2950 and 3550 have a single Flash location for storing images,
known as flash:. Larger modular switches such as the Catalyst 4500 can contain several Flash file systems.
On the Catalyst 4500, bootflash: holds the IOS image and bootstrap image files, while cat4000_flash:
stores the VLAN database file. PCMCIA cards named slot0:, slot1:, and so on, can be used so that
stored files can be swapped by replacing the Flash card.
IOS image files are named according to the following format:

mmmmm-fffff-mm.vvvv.bin

• mmmmm represents the Catalyst switch model, for example, cat4000 or c3550.
• fffff represents the feature set of the image, usually beginning with i for an IP feature set.
  Additional letters identify other feature sets: k for a cryptographic feature set, s for the IP Plus
  feature set, j for the enterprise feature set, d for the desktop set, and p for service providers.
• mm represents the file format.
• vvvv represents the IOS version in the format vvv-mmm.bbb, where:
  - vvv represents the major release,
  - mmm represents the maintenance release, and
  - bbb represents the build level.
• bin indicates that the image file is a binary executable.
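As an illustration of this naming scheme, a minimal sketch of splitting such a name into its fields (the function and the sample filename in the usage note are hypothetical, assuming the model-features-format.version.bin layout just described):

```python
def parse_ios_image(filename):
    """Split an IOS image filename into the fields described above.

    Assumed layout: model-features-format.version.bin
    """
    # Drop the trailing ".bin" extension if present.
    stem = filename[:-4] if filename.endswith(".bin") else filename
    # The first two dashes separate the model and feature fields.
    model, features, rest = stem.split("-", 2)
    # "rest" holds format.version, e.g. "mz.121-11.EA1".
    fmt, _, version = rest.partition(".")
    return {"model": model, "features": features,
            "format": fmt, "version": version}
```

For example, a hypothetical name like c3550-i5q3l2-mz.121-11.EA1.bin would yield model c3550, format mz, and version 121-11.EA1 (major release 12.1, maintenance release 11, build EA1).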
3.4.2 Configuration Files
These files contain switch operation and feature configuration commands. The three most commonly used
configuration files are listed below:
• startup-config: The startup-config file is stored in the NVRAM file system and is used at initial bootup.
  Power failures have no effect on this file.
• running-config: During runtime, the running-config file holds a copy of the current state of all
  configuration commands. Its contents are lost when a power failure occurs or when the switch is
  reloaded.
• vlan.dat: As initial VLAN configurations and modifications are made, they are stored in the VLAN
  database file.
3.4.3 More Catalyst Switch Files
• system_env_vars: holds system information such as the MAC address and model.
• crashinfo: holds text information about prior switch failures.
3.4.4 Shifting Catalyst Switch Files About
With Cisco IOS Software, the Flash file system can be operated in much the same manner as a UNIX or
DOS file system: files can be manipulated and moved to and from different locations. Table 3.5 shows the
locations of Catalyst switch files.
TABLE 3.5: Catalyst Switch File Locations

File System   Purpose
flash         Flash memory, normally holding bootable IOS image files
bootflash     Flash memory, normally holding bootable IOS image files
slot0         Optional Flash card memory that can store any file type
NVRAM         Normally holds the startup-config file
system        This RAM location holds the running-config file and the
              directory of dynamic switch memory locations
TFTP          Any switch file type can be stored on and retrieved from
              an external TFTP server, without user authentication
FTP           Any switch file type can be stored on and retrieved from
              an external FTP server, with user authentication
RCP           Any switch file type can be stored on and retrieved from
              an external RCP server, with user authentication
Flash memory can hold directories, files, and binary executable files. Files can be copied, renamed, and
deleted. The default location in EXEC mode is the root directory, flash:. The file management commands
are listed in Table 3.6.
TABLE 3.6: File Management Commands

Command Syntax                            Function
dir [flash:[directory]]                   Displays files in the current Flash directory
cd flash:directory                        Changes to the specified directory
cd                                        Changes to the root Flash directory
cd ..                                     Changes to the directory one level up
copy flash:[filename] tftp: or            Copies files between Flash memory and a
  copy tftp: flash:[filename]             TFTP server
delete flash:filename                     Deletes a file from Flash memory
erase flash: or format flash:             Clears all files in Flash memory
copy running-config startup-config or     Saves the running configuration
  copy running-config tftp:
copy startup-config running-config        Performed at switch bootup; overwrites
                                          the running configuration
copy tftp: startup-config or              Overwrites (or erases) the permanent
  erase startup-config                    startup configuration
3.5 Switch Port Configuration
3.5.1 Port Description
Switch ports can have a text description added to their configuration which can be used to identify them.
This description is only a comment field and is displayed when the switch configuration is shown.
To assign a port comment or description on an IOS-based switch use the following command in interface
configuration mode:
Switch(config-if)# description description_string
If the description string has embedded spaces, the entire string must be enclosed between quotation marks.
To remove a description, use the no description interface configuration command.
To assign a port description on a CLI-based switch, use the following command:
Switch(enable) set port name module_number/port_number description_string
The description_string in this command must be less than 21 characters long, and can have embedded
spaces. To remove a port description, use the same command but omit the description_string.
3.5.2 Port Speed
Switch ports can be assigned a specific speed through switch configuration commands. Ethernet ports can be
set to speeds of 10, 100, and Auto for autonegotiate mode. Gigabit Ethernet ports are set to a speed of 1000
and 10Gigabit Ethernet ports are set to a speed of 10000. Token Ring ports can be set to speeds of 4, 16, and
Auto for autosensing mode.
To assign port speed on an IOS-based switch, specify the port speed using the following interface
configuration command:
Switch(config-if)# speed { 10 | 100 | auto }
IOS-based switches do not support Token Ring; therefore, there is no 4 or 16 speed setting. However,
CLI-based switches support Ethernet, Fast Ethernet, and Token Ring. For this reason there are two commands
that can be used to set port speeds on CLI-based switches. For Ethernet and Fast Ethernet ports, use the
following command:
Switch(enable) set port speed module_number/port_number
{ 10 | 100 | auto }
For Token Ring ports, use the following command:
Switch(enable) set port speed module_number/port_number
{ 4 | 16 | auto }
3.5.3 Ethernet Port Mode
Ethernet-based switch ports can also be assigned a specific link mode: half duplex, full duplex, or
autonegotiated. Autonegotiation is only allowed on Fast Ethernet and Gigabit Ethernet ports. In this
mode, full-duplex operation will be attempted first, and then half duplex if full duplex was not successful.
The autonegotiation process repeats whenever the link status changes.
To set the link mode on an IOS-based switch port, use the following command in interface configuration
mode:
Switch(config-if)# duplex { auto | full | half }
If the port is not automatically enabled or activated, you must use the no shutdown interface configuration
command. You can use the show interface command to view the current speed and duplex state of a port.
Use the following command to set the link mode on a CLI-based switch:
Switch(enable) set port duplex module_number/port_number { full | half }
If the port is not automatically enabled or activated, use the set port enable command. You can use the
show port command to view the current speed and duplex status of a port on a CLI-based switch.
3.5.4 Token Ring Port Mode
Token Ring ports have five modes of operation:
• Half-duplex concentrator port (hdxcport), in this mode the port is connected to a single station in half-duplex mode, similar to a MAU connection.
• Half-duplex station emulation (hdxstation), in this mode the port is connected to a media attachment unit (MAU) port, like a regular station.
• Full-duplex concentrator port (fdxcport), in this mode the port is connected to a full-duplex station.
• Full-duplex station emulation (fdxstation), in this mode the port is connected to another full-duplex Token Ring port.
• Autosensing (auto), in this mode the port will autosense the operating mode of the connected device or ring.
To set the Token Ring link mode on a CLI-based switch port, use the following command:
Switch(enable) set tokenring portmode module_number/port_number
{ auto | fdxcport | hdxcport | fdxstation | hdxstation }
Because Token Ring ports are supported by CLI-based switches only, this command can only be applied to
them.
4. Routing
Routing is a relay system by which packets are forwarded from one device to another. Each device in the
network as well as the network itself has a logical address so it can be identified and reached individually or
as part of a larger group of devices. For a router to act as an effective relay device, it must be able to
understand the logical topology of the network and to communicate with its neighboring devices. The router
understands several different logical addressing schemes and regularly exchanges topology information with
other devices in the network. The mechanism of learning and maintaining awareness of the network
topology is considered to be the routing function while the movement of traffic through the router is a
separate function and is considered to be the switching function. Routing devices must perform both a
routing and a switching function to be an effective relay device. When a router receives a packet from a host,
it must make a routing decision based on the protocol in use, the existence of the destination network address
in its routing table, and the interface that is connected to the destination network. After the decision has been
made, the router switches the packet to the appropriate interface to forward it out. If the destination logical
network does not exist in the routing table, the router discards the packet and generates an Internet Control
Message Protocol (ICMP) message to notify the sender of the event.
4.1 Routing Tables
A routing table is a database repository that holds the router's routing information that represents each
possible logical destination network that is known to the router. The entries for major networks are listed in
ascending order and, most commonly, within each major network the subnetworks are listed in descending
order. If the routing table entry points to an IP address, the router will perform a recursive lookup on that
next-hop address until the router finds an interface to use. The router will switch the packet to the outbound
interface's buffer. The router will then determine the Layer 2 address that maps to the Layer 3 address. The
packet will then be encapsulated in a Layer 2 frame appropriate for the type of encapsulation used by the
outbound interface. The outbound interface will then place the packet on the medium and forward it to the
next hop. The packet will continue this process until it reaches its destination.
There are two ways in which a routing table can be populated: a route can be entered manually, which is
called static routing, or a router can learn a route dynamically. Once a router learns a route, the route is
added to its routing table.
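The recursive lookup described above can be sketched in a few lines. This is an illustrative model, not IOS internals; the prefixes, next-hop address, and interface name are invented for the example:

```python
import ipaddress

# Toy routing table: a prefix maps either to a next-hop IP address
# (requiring another lookup) or directly to an outbound interface.
table = {
    ipaddress.ip_network("172.16.0.0/16"): "10.1.1.2",  # next hop is an IP address
    ipaddress.ip_network("10.1.1.0/24"): "Serial0/0",   # resolves to an interface
}

def resolve(dest):
    """Repeat the table lookup until an outbound interface is found."""
    try:
        addr = ipaddress.ip_address(dest)
    except ValueError:
        return dest                        # not an address: it is an interface name
    matches = [net for net in table if addr in net]
    hop = table[max(matches, key=lambda net: net.prefixlen)]
    return resolve(hop)                    # recursive lookup on the next hop
```

Here a packet to 172.16.5.9 first matches 172.16.0.0/16, whose next hop 10.1.1.2 must itself be looked up before the interface Serial0/0 is found.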
4.1.1 Static Routing
A statically defined route is one in which a route is manually entered into the router. A static route can be
entered into the router in global configuration mode with the following command:
ip route destination_ip_address subnet_mask
{ ip-address | interface } [ distance ]
In the ip route command, the destination_ip_address and subnet_mask are the IP address and subnet
mask of the destination network. The ip-address parameter is the IP address of the next hop that can be used
to reach the destination and interface is the router interface to use. The optional distance parameter
specifies the administrative distance.
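For example, a static route to a hypothetical destination network 172.16.1.0/24 via next hop 10.1.1.2, with an administrative distance of 150, would be entered as follows (all addresses here are invented for illustration):

```
Router(config)# ip route 172.16.1.0 255.255.255.0 10.1.1.2 150
```

Raising the administrative distance above that of a dynamic routing protocol, as in this example, creates a floating static route that is used only if the dynamically learned route disappears.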
The advantages of using static routes in an internetwork are that the administrator has total control over what
is in the router's routing table and that there is no network overhead for a routing protocol. The disadvantage
of using only static routes is that they do not scale well.
4.1.2 Dynamic Routing
Dynamic routing is a process in which a routing protocol will find the best path in a network and maintain
that route. Once a route fails, the routing protocol will automatically find an alternate route to the destination.
Routing protocols are easier to use than static routes. However, a routing protocol will consume more CPU
cycles and network bandwidth than a static route.
4.1.3 Routing Updates
Routing updates can occur using the distance vector approach or the link-state approach.
• Distance-vector protocols use routine, periodic announcements that contain the entire contents of the routing table. These announcements are usually broadcasts and are propagated only to directly connected, next-hop devices. This allows the router to view the network from the neighbor's perspective and facilitates the addition of the router's metric to the 'distance' already stated by the neighboring router. However, this approach uses considerable bandwidth at regular intervals on each link even if no topology changes have occurred.
• Link-state protocols use triggered announcements that are generated only when there is a topology change within the network. The link-state announcements only contain information about the link that changed and are propagated, or flooded, to all devices in the network. This approach saves bandwidth on each link because the announcements contain less information and are only sent when there is a topology change. In some link-state protocols, a periodic announcement is required to ensure that the topology database is synchronized among all routing devices.
4.1.4 Verifying Routing Tables
You can use the show ip route privileged exec command to view an IP routing table. If the information
that is displayed is not correct, you can force an update from the neighboring devices with the clear ip
route command. An optional keyword specifying an ip_address and subnet_mask, or the * (wildcard)
character, can be used to further identify the routes to be refreshed.
4.2 Routing Protocols
There are two types of dynamic routing protocols: Interior Gateway Protocols (IGPs) and Exterior Gateway
Protocols (EGPs). IGPs are used to exchange routing information within an autonomous system (AS), which
is a collection of routing domains under the same administrative control. An EGP, on the other hand, is used
to exchange routing information between different ASs.
IGPs can be broken into two classes: distance-vector and link-state, and can also be broken into two
categories: classful routing protocols and classless routing protocols.
4.2.1 Distance-Vector Routing
Distance-vector routing consists of two parts: distance and vector. Distance is a measure of how far it is to
reach the destination, and vector is the direction the packet must travel to reach that destination. The latter
is determined by the next hop of the path. Distance-vector routing protocols learn routes from their
neighbors. This is called routing by rumor. Examples of distance-vector routing protocols are: Routing
Information Protocol (RIP), Interior Gateway Routing Protocol (IGRP), and Enhanced Interior Gateway
Routing Protocol (EIGRP).
Most distance-vector routing protocols have common characteristics:
• Broadcast Updates. When a router becomes active, it sends a message to the broadcast address stating that it is alive. In return, neighboring routers participating in the same routing protocol respond to this broadcast.
• Periodic Updates, which is the length of time before a router will send out an update. For RIP this time is 30 seconds; for IGRP it is 90 seconds. This means that once the periodic update timer expires, a broadcast of the routing table will be sent.
• Routing by Rumor, which describes the means by which the router learns routes from its neighbors.
• Neighbors, which are other routers on the same logical, or data link, connection. In a distance-vector routing protocol, a router sends its routing table to its connected neighbors. Those neighbors send their updated routing tables to their connected neighbors. This continues until all the routers participating in the selected routing protocol have updated routing tables.
• Full Routing Table Updates, in which the distance-vector routing protocol sends its entire routing table to its neighbors. This occurs when the periodic update timer expires.
• Invalid Timer, which solves the problem of router failure. Since a failed router does not send out updates, the other routers in the network do not know that the router has gone down and that its routes are unreachable. Hence, the routers continue to send packets toward the routes connected to the missing router. An invalid timer solves this problem by associating a period of time with a route. If the route is not updated in the routing table within this set period of time, the route is marked unreachable and the router sends this new information in its periodic update.
• Split Horizon, which prevents a reverse route. A reverse route occurs when a router learns a route from a neighbor and then advertises that route back to the neighbor it learned it from, causing an infinite loop. Split horizon prevents this by enforcing the rule that a route cannot be advertised out the same interface on which it was learned.
• Counting to Infinity. In networks that are slow to converge, another type of routing loop can occur when routers have multiple paths to the same destination. In this case, the routing table is populated with the best route to the destination even though the router has two routes to the same destination. When the destination network goes down, the updates about the destination being unreachable can arrive at the routers at different times. A router may in turn advertise that it still has another route to the destination. This continues across the network, incrementing the hop count at each router it encounters. Even though the destination network is down, all of the routers participating in the routing process think they have an alternate route to the network, causing a loop. This is known as counting to infinity and is solved by enforcing a maximum hop count. When a route reaches the maximum hop count limit, the route is marked unreachable and removed from the router's routing table.
• Triggered Updates, which are one of the means of speeding up convergence on a network. Instead of a router waiting until the periodic update timer expires before sending out an update, a triggered update sends out an update as soon as a significant event occurs.
• Hold-down Timer, which is used when information about a route changes. When the new information is received or a route is removed, the router places that route in a hold-down state. This means that the router will neither advertise nor accept advertisements about this route for the time period specified by the hold-down timer. After the time period expires, the router starts accepting and sending advertisements about the route. This reduces the amount of wrong information being advertised about routes.
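Two of the behaviors above, routing by rumor and split horizon, can be sketched in miniature. This is an illustrative model under simplifying assumptions (a hop count metric and a RIP-style limit of 15 hops), not any vendor's implementation:

```python
# Each routing table maps destination -> (metric, neighbor learned from).
INFINITY = 16  # RIP-style bound: a metric of 16 hops counts as unreachable

def receive_update(table, neighbor, advertised):
    """Merge a neighbor's advertised routes, adding one hop of distance."""
    for dest, metric in advertised.items():
        new_metric = min(metric + 1, INFINITY)
        if dest not in table or new_metric < table[dest][0]:
            table[dest] = (new_metric, neighbor)   # routing by rumor
    return table

def advertise(table, to_neighbor):
    """Build the update sent to one neighbor, applying split horizon."""
    return {dest: metric for dest, (metric, learned_from) in table.items()
            if learned_from != to_neighbor}        # never echo a route back

# Router B learns 10.0.0.0 from router A at a cost of one hop...
b_table = receive_update({}, "A", {"10.0.0.0": 0})
# ...but split horizon keeps B from advertising that route back to A.
b_to_a = advertise(b_table, "A")
```

Without the split-horizon filter in advertise, B would offer the route back to A after a failure, and the two routers would count each other's metrics up toward INFINITY.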
4.2.2 Link-State Routing
Link-state routing differs from distance-vector routing in that each router knows the exact topology of the
network. This reduces the number of bad routing decisions that can be made because every router in the
process has an identical view of the network. Each router in the network will report on its state, the directly
connected links, and the state of each link. The router will then propagate this information to all routers in
the network. Each router that receives this information will take a snapshot of the information. This ensures
all routers in the process have the same view of the network, allowing each router to make its own routing
decisions based upon the same information.
In addition, link-state routing protocols generate routing updates only when there is a change in the network
topology. When a link, i.e., a point on a route, changes state, a link-state advertisement (LSA) concerning
that link is created by the device that detected the change and propagated to all neighboring devices using a
multicast address. Each routing device takes a copy of the LSA, updates its topological database and
forwards the LSA to all neighboring devices. An LSA is generated for each link on a router. Each LSA will
include an identifier for the link, the state of the link, and a metric for the link. With the use of LSAs, link-state protocols reduce routing bandwidth usage.
Examples of link-state routing protocols are: Open Shortest Path First (OSPF) and Integrated Intermediate
System to Intermediate System (IS-IS). Another protocol, Enhanced Interior Gateway Routing Protocol
(EIGRP) is considered a hybrid protocol because it contains traits of both distance-vector and link-state
routing protocols. Most link-state routing protocols require a hierarchical design, especially to support
proper address summarization. The hierarchical approach, such as creating multiple logical areas for OSPF,
reduces the need to flood an LSA to all devices in the routing domain. The use of areas restricts the flooding
to the logical boundary of the area rather than to all devices in the OSPF domain. In other words, a change in
one area should only cause routing table recalculation in that area, not in the entire domain.
4.2.3 Classful Routing
Classful routing routes packets based upon the class of the IP address. IP addresses are divided into
five classes: Class A, Class B, Class C, Class D, and Class E. Classes A, B, and C are used for
private and public network addressing; Class D is used for multicasting; and Class E is reserved
by the Internet Assigned Numbers Authority (IANA) for future use. IP address classes are discussed in
detail in Section 2.1.1.
Classful routing is a consequence of the fact that routing masks are not advertised in the periodic, routine,
routing advertisements generated by distance vector routing protocols. In a classful environment, the
receiving device must know the routing mask associated with any advertised subnets or those subnets cannot
be advertised to it. There are two ways this information can be gained:
• Share the same routing mask as the advertising device.
• If the routing mask does not match, the receiving device must summarize the received route to a classful boundary and send the default routing mask in its own advertisements.
Classful routing protocols, such as Routing Information Protocol version 1 (RIPv1) and Interior Gateway
Routing Protocol (IGRP), exchange routes to subnetworks within the same network if the network administrator
has configured all of the subnetworks in the major network with the same routing mask. When routes are
exchanged with foreign networks, subnetwork information from this network cannot be included because the
routing mask of the other network is not known. As a result, the subnetwork information from this network
must be summarized to a classful boundary using a default routing mask prior to inclusion in the routing
update. The creation of a classful summary route at major network boundaries is handled automatically by
classful routing protocols. However, summarization at other points within the major network address is not
allowed by classful routing protocols.
4.2.4 Classless Routing
One of the most serious limitations in a classful network environment is that the routing mask is not
exchanged during the routing update process. This requires the same routing mask be used on all
subnetworks. The classless approach advertises the routing mask for each route and therefore a more precise
lookup can be performed in the routing table. Classless routing, which is also known as Classless
Interdomain Routing (CIDR), is thus not dependent on IP address classes but, instead, allows a variable-length
subnet mask (VLSM), which extends IP addressing beyond the limitations of fixed-length subnet masks
(FLSM), to be sent in the routing update with the route. This allows you to conserve IP addresses, extending
the life of the IP address space. Classless routing protocols also removed the need to summarize to a classful
network with a default routing mask at major network boundaries. In the classless environment, the
summarization process is manually controlled and can be invoked at any point within the network. VLSM is
discussed in more detail in Section 2.1.4.
The routing protocols that support classless routing are: Routing Information Protocol version 2
(RIPv2); Enhanced Interior Gateway Routing Protocol (EIGRP); Open Shortest Path First (OSPF); and
Integrated Intermediate System to Intermediate System (IS-IS).
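Because classless updates carry the mask with each route, a router can hold overlapping prefixes of different lengths and choose the most specific one, a longest-prefix match. The following sketch illustrates the idea; the prefixes and next-hop addresses are invented, and a real router would use a trie rather than a linear scan:

```python
import ipaddress

# Hypothetical classless routing table: overlapping prefixes with
# different masks, something a classful protocol could not advertise.
routes = {
    "10.0.0.0/8":  "192.0.2.1",
    "10.1.0.0/16": "192.0.2.2",
    "10.1.1.0/24": "192.0.2.3",
}

def lookup(dest):
    """Return the next hop of the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dest)
    best_net, best_hop = None, None
    for prefix, next_hop in routes.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best_net is None or net.prefixlen > best_net.prefixlen):
            best_net, best_hop = net, next_hop
    return best_hop
```

A lookup for 10.1.1.5 matches all three entries but returns the next hop of the /24, while 10.9.9.9 falls through to the /8.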
4.2.5 Multipath Routing
A few sophisticated routing protocols support multiple paths to the same destination. A multipath algorithm
permits traffic to be multiplexed across multiple equal-cost or unequal-cost lines. The IP dynamic
routing protocols are able to load balance over multiple equal-cost lines to convey packets. Only the Cisco
proprietary routing protocols (IGRP and EIGRP) are able to load balance over multiple unequal-cost
lines.
4.3 Basic Switching Functions
In order to forward a packet that has arrived at a router interface, the router must perform a switching
function. This switching function has four steps:
• A packet transiting the router will be accepted into the router if the frame header contains the MAC address of one of the router's network interface cards (NICs). If properly addressed, the frame and its content will be buffered in memory pending further processing.
• The switching process checks the destination logical network portion of the packet header against the network/subnetwork entries in the routing table. If the search is successful, the switching process associates the destination network with a next-hop logical device and an outbound interface.
• Once the next-hop logical device address is known, a lookup is performed to locate a physical address for the next device in the relay chain. The lookup is performed in an Address Resolution Protocol (ARP) table for LAN interfaces or a map table for WAN interfaces.
• Once the physical address of the next-hop device is known, the frame header is overwritten, and the frame is then moved to the outbound interface for transmission onto the media. As the frame is placed on the media, the outbound interface adds the CRC and ending delimiters to the frame. These characters will need to be validated at the arriving interface on the next-hop relay device.
4.4 Convergence
In a routed network, the routing process in each router must maintain a loop-free, single path to each
possible destination logical network. When all of the routing tables are synchronized and each contains a
usable route to each destination network, the network is described as being 'converged'. Convergence is the
time it takes for all routers to agree on the network topology after a change in the network.
Convergence efforts differ between routing protocols, but there are at least two detection methods common
to all of them. The first method operates at the Physical Layer (Layer 1) and the Data Link Layer (Layer 2):
when the network interface on the router does not receive three consecutive keepalives, the link is
considered down. The second method operates at the Network/Transport Layer (Layer 3): when the routing
protocol fails to receive three consecutive Hello messages, the link is considered down.
Routing protocols have timers that are used to stop network loops from occurring on a network when a link
failure has been detected. Hold-down timers are used to give the network stability while new route
calculations are being performed. They also allow all the routers a chance to learn about the failed route, to
avoid routing loops and counting-to-infinity problems. Since a network cannot converge during this hold-down
period, this can cause a delay in the routing process of the network. Because of this slow convergence
time, link-state routing protocols do not use hold-down timers.
4.4.1 Distance-Vector Routing Convergence
4.4.1.1 RIP and IGRP Convergence
Convergence time is one of the problems associated with distance-vector protocols, such as RIPv1 and IGRP.
When a router detects a link failure between itself and a neighbor, it sends a flash update with a poisoned
route to its other neighbors. These neighbors in turn create a new flash update and send it to all of their
neighbors, and so on. The router that detected the link failure purges the entry for the failed link and
removes all routes associated with that link from the routing table. The router then sends a query to its
neighbors for the routes that have been removed. If a neighbor responds with a route, it is immediately
installed in the routing table. The router does not go into hold-down because the entry was already purged.
However, its neighbors are in hold-down for the failed route, thus ignoring periodic advertisements for that
route. As the other routers come out of hold-down, the new route announced by the router that detected the
failed link will cause their routing table entries to be updated.
4.4.1.2 EIGRP Convergence
Enhanced IGRP (EIGRP) convergence differs slightly. If a router detects a link failure between itself and a
neighbor, it checks the network topology table for a feasible alternate route. If it does not find a qualifying
alternate route, it enters an active convergence state and sends a Query out all interfaces for other routes
to the failed link. If a neighbor replies to the Query with a route to the failed link, the router accepts the new
path and metric information, places it in the topology table, and creates an entry for the routing table. It then
sends an update about the new route out all interfaces. All neighbors acknowledge the update and send
updates of their own back to the sender. These bi-directional updates ensure the routing tables are
synchronized and validate the neighbor's awareness of the new topology. Convergence time in this event is
the total of detection time, plus Query and Reply times and Update times.
4.4.2 Link-State Convergence
The convergence cycle used in link-state routing protocols, such as OSPF and IS-IS, differs from that of the
distance-vector protocols. When a router detects a link failure between itself and a neighbor, it tries to
perform a Designated Router (DR) election process on the LAN interface but fails to reach any neighbors. It
then deletes the route from the routing table, builds a link-state advertisement (LSA) for OSPF or a link-state
PDU (LSP) for IS-IS, and sends it out all other interfaces. Upon receipt of the LSA, the other neighbors that
are up copy the advertisement and forward the LSA packet out all interfaces other than the one on which
it arrived. All routers, including the router that detected the failure, wait five seconds after receiving the LSA
and then run the shortest path first (SPF) algorithm. Thereafter, the router that detected the failure adds the new
route to the routing table, while its neighbors update the metric in their routing tables. After approximately 30
seconds, the router that detected the failure sends another LSA, aging out the topology entry for the failed
link. After five seconds, all routers run the SPF algorithm again and update their routing tables with the path
to the failed link. Convergence time is the total of the detection time, plus the LSA flooding time, plus the
five-second wait before the second SPF run.
4.5 Routing and Switching in a Cisco Router
A packet transiting the router is accepted into the router if the frame header contains the Layer 2 address of
one of the router's interfaces. If properly addressed, after the framing is checked, the frame and its content
are buffered, pending further processing. The buffering occurs in main memory or some other specialized
memory location. If the source and destination Layer 3 addresses of the datagram have not been seen by this
router before, the datagram will be process switched, or routed. In this event, a process initiates a lookup in
the routing table and makes a decision about how the datagram should be forwarded. The packet is then
encapsulated. If fast switching is enabled, the packet is then examined again, and an entry is put into a route
cache. On subsequent packets, if the IP destination matches a prefix found in the route cache, the packet is
forwarded using this information. The routing function is not disturbed, and no additional CPU cycles are
expended on a full routing table lookup. The type of route cache used depends on the hardware. The caches
available are called fast switching, autonomous switching, silicon switching, and Cisco Express Forwarding
(CEF). With CEF switching, each line card runs its own copy of the express forwarding code and has its own
copy of a forwarding information base (FIB). In the event of a routing change, the new entry is forwarded by
the CPU to each separate line card.
4.6 The Structure of a Routing Table
Standard distance-vector protocols broadcast the whole routing table to directly connected routing
devices in flash updates and at regular update periods. Link-state protocols use triggered multicast
advertisement packets to alert other routers of topology changes. The specific topology changes are
transmitted, not the whole routing table. OSPF differs slightly; it retransmits each LSA every 30 minutes in
order to sustain the synchronization of the topology database between all routers in the vicinity. The IP
routing table can be displayed by typing show ip route at the IOS EXEC prompt (see Listing 4.1). The IP
routing table contains the information necessary to correctly execute the routing operations.
In addition to that information, the IP routing table also holds other relevant bits of information, some of
which are listed below:
• The administrative distance (AD) of the route
• The destination network
• The interface that is used to reach the destination network
• The method by which the route was learned
• The metric, such as a hop count
• The logical address of the next hop
• The length of time the route has been in the routing table, measured from the last update
RouterTK1#show ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
       U - per-user static route, o - ODR
       T - traffic engineered route

Gateway of last resort is not set

     183.15.0.0/16 is variably subnetted, 9 subnets, 2 masks
R       183.15.9.0/25 [121/1] via 183.16.2.1, 00:00:21, Serial0/2
R       183.15.8.0/25 [121/1] via 183.16.2.1, 00:00:21, Serial0/2
R       183.15.7.0/25 [121/1] via 183.16.2.1, 00:00:21, Serial0/2
C       183.15.6.0/25 is directly connected, Loopback2
C       183.15.5.0/25 is directly connected, Loopback1
C       183.15.4.0/25 is directly connected, Loopback0
C       183.15.3.0/31 is directly connected, Serial0/1
R       183.15.2.0/25 [121/1] via 183.16.2.1, 00:00:00, Serial0/2
C       183.15.1.0/25 is directly connected, Ethernet0/0

Listing 4.1: The IP Routing Table
The first entry in the IP routing table shows that the route was learned via the RIP routing protocol, indicated
by R, and that the destination network is 183.15.9.0/25. The administrative distance for RIP in this
instance is 121, with a hop count, or metric, of 1, as indicated by the [121/1]. The logical address of the
next hop is 183.16.2.1. The entry has existed for 21 seconds since the last update occurred. Interface
Serial0/2 is used to reach the destination. By stipulating a single network number by means of the
show ip route [network] command, more information can be obtained about a specific route in the
routing table.
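The [AD/metric] notation interpreted above can be pulled apart mechanically. This sketch parses the first RIP line of Listing 4.1 with an illustrative regular expression; it assumes the line layout shown in the listing:

```python
import re

# First RIP entry from Listing 4.1, as a single line of text.
entry = "R  183.15.9.0/25 [121/1] via 183.16.2.1, 00:00:21, Serial0/2"

# Capture the administrative distance, the metric, and the next-hop address.
m = re.search(r"\[(\d+)/(\d+)\] via (\S+),", entry)
ad = int(m.group(1))        # administrative distance: 121
metric = int(m.group(2))    # hop count: 1
next_hop = m.group(3)       # 183.16.2.1
```

The same pattern applies to any dynamically learned entry in the listing, since IOS always prints the pair in [administrative distance/metric] order.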
4.7 Testing and Troubleshooting Routes
There are two tools that can be used for testing and troubleshooting routes or reachability. These are: ping
and traceroute.
4.7.1 The ping Command
The ping command, which is included as a part of the TCP/IP protocol suite, is supported at the user and
privileged exec modes. In user mode, you must specify an IP address or a host name, if the host name can be
resolved to an IP address, with the ping command. The ping command tests the round-trip path to and from
a target. In privileged mode, you must enter a protocol, a target IP address, a repeat count, datagram size,
and a timeout in seconds.
Cisco IOS makes ping available for a number of protocols including IPX and AppleTalk. Cisco introduced
ping for IPX in IOS version 8.2. This is, however, a Cisco proprietary tool. Therefore non-Cisco devices
such as Novell servers do not respond to it. If you want the Cisco router to generate Novell-compliant pings,
you must use the global configuration command ipx ping-default novell. Ping for AppleTalk sends
AppleTalk Echo Protocol (AEP) packets to the destination node and waits for replies.
Generally, the syntax for the ping command is:
ping -s ip_address [ packet_size ] [ packet_count ]

TABLE 4.1: Parameters for the ping Command

Parameter     Purpose
-s            Causes ping to send one datagram per second, printing one
              line of output for every response received. The ping command
              does not return any output when no response is received.
ip_address    The IP address or IP alias of the host.
packet_size   This optional parameter represents the number of bytes in a
              packet, from 1 to 2000 bytes, with a default of 56 bytes. The
              actual packet size is eight bytes larger because the switch
              adds header information.
packet_count  This optional parameter represents the number of packets to
              send.
4.7.2 The traceroute Command
The traceroute command was introduced in release 10.0 of Cisco IOS and can be used to find the
route between IP devices. The traceroute command can be executed in user and privileged exec modes,
but in privileged exec mode you can use the extended traceroute, which is more flexible and informative.
Initially, traceroute was available only for the IP protocol, but since release 12.0 of Cisco IOS,
traceroute is also available for IPX. Because the traceroute command displays a hop-by-hop path
through an IP network from the switch to a specific destination host, it can be very useful for determining
where along a particular network path a problem lies. The syntax for the
traceroute command is:
traceroute [ -n ] [ -w wait_time ] [ -i initial_ttl ] [ -m max_ttl ]
           [ -p dest_port ] [ -q nqueries ] [ -t tos ] ip_address [ data_size ]
TABLE 4.2: Parameters for the traceroute Command
-n: Prevents traceroute from performing a DNS lookup for each hop on the path. Only numerical IP
addresses are printed.
-w wait_time: Specifies the amount of time that traceroute will wait for an ICMP response message.
The allowed range for the wait time is 1 to 300 seconds; the default is 5.
-i initial_ttl: Causes traceroute to send probe datagrams with a TTL value equal to initial_ttl
instead of the default TTL of 1. This causes traceroute to skip processing for hosts that are less than
initial_ttl hops away.
-m max_ttl: Specifies the maximum TTL value for outgoing probe datagrams. The allowed range is 1 to
255; the default value is 30.
-p dest_port: Specifies the base UDP destination port number used in traceroute datagrams. This
value is incremented each time a datagram is sent. The allowed range is 1 to 65535; the default base port
is 33434.
-q nqueries: Specifies the number of datagrams to send for each TTL value. The allowed range is 1 to
1000; the default is 3.
-t tos: Specifies the TOS to be set in the IP header of the outgoing datagrams. The allowed range is 0 to
255; the default is 0.
ip_address: IP alias or IP address in dot notation of the destination host.
data_size: Number of bytes, in addition to the default of 40 bytes, in the outgoing datagrams. The
allowed range is 0 to 1420; the default is 0.
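Combining some of these parameters, a hypothetical trace that suppresses DNS lookups, starts with a TTL of 2, and sends two probes per hop might look like this (the destination address is illustrative):

    Router# traceroute -n -i 2 -q 2 10.1.1.1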
5. OSPF in a Single Area Network
Open Shortest Path First (OSPF) is the open standard link-state routing protocol designed for use in large IP
networks. Its purpose is to convey routing information to every router within the network. It can be used to
connect various technologies and vendor solutions.
The terminology used in this chapter to discuss OSPF is listed in Table 5.1. These terms are discussed in
more detail later in the chapter.
TABLE 5.1: OSPF Terminology
Adjacency: The relationship that results when two OSPF routers have exchanged information such that
the two routers have identical topology tables.
Area: A grouping of networks and routers that have the same link-state information and area ID number.
Every OSPF router must reside in an area.
Area Border Router (ABR): A router that exists on the border of more than one OSPF area and
connects those areas to the backbone.
Autonomous System (AS): A group of routers that fall under the same network management and
administration and share OSPF routing information.
Autonomous System Boundary Router (ASBR): A router that sits between an OSPF Autonomous
System (AS) and a non-OSPF network and runs multiple routing protocols.
Backbone: The primary path for network traffic; the segment of the network that is most frequently
used as a transit path. In OSPF, the backbone is Area 0.
Backup Designated Router (BDR): A standby Designated Router (DR) that receives the same
information as the DR so that it can take over when the DR fails.
Cost: The metric used by OSPF. It is a numerical value allocated to a particular link, based on the
bandwidth of the connection.
Designated Router (DR): An OSPF router that performs numerous functions in a multi-access network.
A DR decreases traffic, as well as the size of the topology database.
Hello: A packet used by OSPF to create and maintain relationships with neighbor devices.
Neighbor: Either of two routers that have interfaces on a shared network.
Link: The interface between a connected network and an OSPF router.
Link State: The condition of a particular link between two routers that share link-state advertisements.
The relationship across a link can be in the down, init, two-way, exstart, exchange, loading, or full state.
Link State Advertisement (LSA): A packet that OSPF floods to carry the updates used to maintain
routing tables. Remember that routing tables contain information about path costs and neighbors.
Router ID: A number that uniquely identifies a Cisco router within the OSPF domain. The router ID is
either manually assigned, taken from the highest IP address configured on a loopback interface, or, if no
loopback interface is configured, taken from the highest IP address configured on the router.
Router Priority: An 8-bit number that specifies the priority of the router during the DR and BDR
election. When the need arises, the router priority can be manually reconfigured.
Routing Table / Forwarding Database: The table that is built when the SPF algorithm is run on the
link-state database.
Topology Table / Link State Database: The table that contains every link in the whole network.
5.1 OSPF Neighbors
A neighbor in OSPF is a router that shares the same physical segment, or network link. A router running
OSPF discovers its neighbors by sending and receiving hello packets. A router configured for OSPF
sends out a small hello packet periodically. It has a source address of the router and a multicast destination
address set to AllSPFRouters (224.0.0.5). All routers running OSPF or the SPF algorithm listen to the
protocol and send their own hello packets periodically. How the Hello protocol works and how OSPF
neighbors build their databases depend on the physical medium being used. OSPF identifies five distinct
network topologies:
• Broadcast multi-access, which is any LAN network such as Ethernet, Token Ring, or FDDI. In this
environment, OSPF sends out multicast traffic.
• Point-to-point, which is used where there is one other system directly connected to the transmitting or
receiving router. In this environment, network traffic uses the multicast address for OSPF
AllSPFRouters, 224.0.0.5.
• Point-to-multipoint, which is a single interface that connects to many destinations. The underlying
network treats the network as a series of point-to-point circuits. It replicates LSA packets for each circuit.
The addressing of network traffic is multicast. This topology uses one IP subnet.
• Nonbroadcast multiaccess (NBMA), which resembles a point-to-point line, but many destinations are
possible. WAN clouds, including X.25 and Frame Relay, are examples of this topology. NBMA uses a
fully meshed or partially meshed network. OSPF sees it as a broadcast network, and it will be
represented by one IP subnet.
• Virtual links, which are virtual connections to a remote area that does not have any connection to the
backbone (Area 0). Although OSPF treats such a link as a direct, single-hop connection to the backbone
area, it is a virtual connection that tunnels through the network. The OSPF network traffic is sent in
unicast datagrams across these links.
5.1.1 Adjacent OSPF Neighbors
After neighbors have been established by means of the Hello protocol, they exchange routing information.
When their topology databases are the same or synchronized, the neighbors are fully adjacent. The Hello
protocol continues to transmit periodically. The transmitting router and its networks reside in the topology
database for as long as the other routers receive the Hello protocol. Neighbor relationships provide another
mechanism for determining that a router has gone down, i.e., when the neighbor no longer sends Hello
packets. It also streamlines communication because, after the topological databases are synchronized,
incremental updates will be sent to the neighbors as soon as a change is perceived. In addition, adjacencies
created between neighbors control the distribution of the routing protocol packets, resulting in a much faster
convergence of the network than can be achieved by RIPv1. This is because RIPv1 must wait for
incremental updates and holddown timers to expire on each router before the update is sent out.
5.2 The Designated Router (DR) and the Backup Designated Router (BDR)
The designated router is a router on broadcast multiaccess media that is responsible for maintaining the
topology table for the segment. This router is dynamically elected by the use of the Hello protocol or can be
designated by the network administrator. Redundancy is provided by the Backup Designated Router (BDR).
The hello packet carries the information that determines the DR and the BDR. The election is determined by
either the highest IP address or the ip ospf priority < priority_number > command. All other routers need only
peer with the designated router, which informs them of any changes on the segment. All routers have an
adjacency with the designated router and the backup designated router. If the designated router fails, the
backup designated router immediately becomes the designated router.
When selected dynamically, the designated router is elected on the basis of the highest router ID or IP
address, i.e., the numerically highest number. After the designated and backup designated routers have been
elected, all routers on the broadcast medium will communicate directly with the designated routers. They
will use the AllDRouters multicast address, 224.0.0.6. The backup designated router will listen but will not respond.
The designated router will send out multicast messages if it receives any information pertinent to the
connected routers for which it is responsible.
To manually determine the designated router, you must set the priority of the router. A router interface can
have a priority of 0 to 255. If there is more than one router on the segment with the same priority level, the
election process picks the router with the highest router ID.
5.3 The OSPF Routing Table
In OSPF, an adjacency is formed after a neighbor is discovered. When a router is added to the network, it
will find a neighbor using the Hello protocol and will build a routing table by listening to the established
routers with complete routing tables. Every router within an area will have the same database and will know
of every network within the area. The routing table built from this database is unique to the router because
the decisions depend on the specific router's position within the area, relative to the remote destination
network.
When a change in the network topology occurs, information about the change must be propagated through
the network. A router that notices a change floods the area with the update so that all routers can alter their
routing tables to reflect the most current and accurate information.
5.3.1 Building the Routing Table on a New OSPF Router
Five packets are used to build the routing table on a new OSPF router:
• The Hello protocol, which is used to find neighbors and to determine the designated and backup
designated routers.
• The database descriptor, which is used to send summary information to neighbors to synchronize
topology databases.
• The link-state request, which is a request for more detailed information, sent when the router receives a
database descriptor that contains new information.
• The link-state update, which is the link-state advertisement (LSA) packet issued in response to the
request for database information in the link-state request packet.
• The link-state acknowledgement, which acknowledges the link-state update.
When the new OSPF router is connected to the network, it must learn the network from the routers that are
up and running. The router goes through three stages while exchanging information: the down state, the init
state, and the two-way state. It is possible to see what state an interface running OSPF is in by using the
show ip ospf neighbor command or the debug ip ospf adjacency command.
• The new router starts in the down state. It transmits its own hello packets to introduce itself to the segment
and to find any other OSPF-configured routers. This is sent out as a hello to the multicast address
224.0.0.5 (AllSPFRouters), with the DR and BDR fields in the hello set to 0.0.0.0.
• While the new router waits for a reply, which usually is four times the length of the hello timer, the
router is in the init state. Within the wait time, the new router hears a hello from another router and
learns the DR and the BDR. If there is no DR or BDR stated in the incoming hello, an election takes
place.
• Once the new router sees its own router ID in the list of neighbors, and a neighbor relationship is
established, it changes its status to the two-way state.
The new router and the DR have now established a neighbor relationship and need to ensure that the new
router has all the relevant information about the network. The DR must update and synchronize the topology
database of the new router. This is achieved by using the exchange protocol with the database description
packets. There are four different stages that the router goes through while exchanging routing information
with a neighbor: the exstart state, the exchange state, the loading state, and the full state.
• During the exstart state, one of the routers will take seniority and become the master router, based on
the highest router ID.
• Both routers will send out database description packets, changing the state to the exchange state. At this
stage, the new router has no knowledge and can inform the DR only of the networks or links to which it
is directly connected. The DR sends out a series of database description packets (DDPs) containing the
networks, referred to as links, that are held in the topology database. Most of these links have been
received from other routers via link-state advertisements. The source of the link information is referred
to by the router ID. Each link will have an interface ID for the outgoing interface, a link ID, and a
metric to state the value of the path. The database description packet will contain a summary rather than
all the necessary information. When the router has received the DDPs from the neighboring router, it
compares the received network information with that in its topology table. In the case of a new router, all
the DDPs are new.
• If the new router requires more information, it will request that particular link in more detail using the
link-state request packet (LSR). The LSR will prompt the master router to send the link-state update
packet (LSU). This is the same as a link-state advertisement (LSA) used to flood the network with
routing information. While the new router is awaiting the LSUs from its neighbor, it is in the loading
state.
• When these LSUs are received and the databases are updated and synchronized, the neighbors are fully
adjacent. This is the full state.
5.3.2 The Topology Database
The topology database, sometimes referred to as the link-state database, is the router's view of the network
within the area. It includes every OSPF router within the area and all the connected networks. This database
is a routing table for which no path decisions have been made; at this point it is purely a topology database. The
topology database is updated by the LSAs. Each router within the area has exactly the same topology
database. All routers must have the same vision of the network; otherwise, confusion, routing loops, and loss
of connectivity will result. The synchronization of the topology maps is ensured by the use of sequence
numbers in the LSA headers.
From the topology map, a routing database is constructed. This database will be unique to each router, which
creates a routing database by running the shortest path first (SPF) algorithm called the Dijkstra algorithm.
Each router uses this algorithm to determine the best path to each network and creates an SPF tree on which
it places itself at the top or root. If there are equal metrics for a remote network, OSPF includes all the paths
and load balances the routed data traffic among them. Occasionally a link may flap, i.e., repeatedly go up and down. This
could cause many LSAs to be generated in updating the network. To prevent this, OSPF uses timers that
force OSPF to wait before recalculating SPF. These timers are configurable.
5.3.3 The Shortest Path First
As with any routing protocol, OSPF examines all the available paths to every network that it knows about. It
selects the shortest, most direct path to that destination. This decision is based on the metric used by the
routing protocol. RIP uses hop count, which shows how many routers must be passed through to get to the
destination. When CPU and memory were very expensive, the latency of passing through each router had a
much greater impact on network performance. OSPF has few of those constraints and instead uses cost as its
metric. Cost is not defined in the standard, however; it depends on the implementation of the protocol. The metric
may be programmed to be either complex or simple. Cisco's implementation of a dynamic and default cost
uses a predefined value based on the bandwidth of the router interface. The network administrator can
manually override this default. The cost is applied to the outgoing interface. The routing process will select
the lowest accumulated cost of the interfaces to the remote network.
Once the shortest path or multiple equal-cost paths are determined, the routing process will need to supply
additional information to the routing table for forwarding the data down the chosen path. This includes the
next logical hop, the link, and the outgoing interface.
5.4 OSPF Across Nonbroadcast Multiaccess Networks (NBMA)
A nonbroadcast multiaccess (NBMA) network is a network that has multiple destinations but cannot carry
broadcast traffic. Examples of NBMA networks include Frame Relay, X.25, and ATM. OSPF normally uses
multicast traffic to exchange network information and to create the adjacencies that synchronize databases;
how it accomplishes this across a WAN cloud that cannot carry the multicast addresses depends on the
technology involved and the network design. The modes available fall into two technologies, within which
there are additional options. The two technologies are point-to-point and NBMA.
The NBMA technology is then subdivided into two categories, under which different configuration options
are available. These two categories are the RFC-compliant solution and the Cisco-specific solution:
• The RFC-compliant category offers a standards-based solution, which is independent of the vendor
platform. The configuration options are NBMA and point-to-multipoint.
• The Cisco-specific configuration options are proprietary to Cisco and include point-to-multipoint
nonbroadcast, broadcast, and point-to-point.
The option selected depends on the network topology that is in use. The OSPF technology is separate from
the physical configuration, and the choice of implementation is based on the design topology.
The Frame Relay topologies include:
• Full mesh, in which every router is connected to every other router. This solution provides redundancy,
and it may allow load sharing. It is the most expensive solution.
• Partial mesh, in which some routers are connected directly; others are accessed through another router.
• Star, or hub and spoke, in which one router acts as the connection to every other router. This is the
least expensive solution because it requires the fewest permanent virtual circuits (PVCs). Here a single
interface is used to connect to multiple destinations.

Note: Physical Interfaces and Logical Subinterfaces. On a Cisco router, it is possible to configure a
physical interface to be many logical interfaces. This is useful in a WAN environment and means that the
logical topology is independent of the physical configuration. These subinterfaces can be configured to be
point-to-point or point-to-multipoint. One of the main determining factors is the number of subnets to be
used: a point-to-point interface requires its own subnet to identify it. If the point-to-point option is selected,
the routers at each end create adjacencies. This, however, requires more network overhead and restricts
some communication. In a point-to-point network, the concept of a broadcast is not relevant because the
communication is directly to another router. In a point-to-multipoint network, although OSPF simulates a
broadcast, multicast environment, the network traffic is replicated and sent to each neighbor. For serial
interfaces with HDLC encapsulation, the default network type is point-to-point. For serial interfaces with
Frame Relay encapsulation, the default network type is nonbroadcast. For serial interfaces with Frame
Relay encapsulation and point-to-point subinterfaces, the default network type is point-to-point. For serial
interfaces with Frame Relay encapsulation and point-to-multipoint subinterfaces, the default network type
is nonbroadcast.

The considerations in choosing the OSPF topology depend on its method of updating the network and its
effect on network overhead. In a point-to-point circuit, no DR or BDR is required; each circuit will have an
adjacency, which creates many more adjacencies on the network and increases the need for network
resources. In an NBMA environment, a DR and a BDR may be required, unless the underlying technology
is point-to-point. This is economical for most routers, requiring only two adjacencies, except for the DR
and BDR. However, it may require more administration in terms of configuration.

5.5 Problems with OSPF in a Single Area
There are a number of limitations to using OSPF in a single area. These problems are related to the growth
of the network. The larger the network, the greater the probability that the network will change and that a
recalculation of the entire area will be required. This increases the frequency with which the SPF algorithm
is run, and each recalculation will also take longer. As the network grows, the size of the routing table will
increase. The routing table is not sent out wholesale as in a distance vector routing protocol; however, the
greater the size of the table, the longer each lookup becomes. The memory requirements on the router will
also increase. Furthermore, the topological database will increase in size and will eventually become
unmanageable. Also, as the various databases increase in size and the calculations become increasingly
frequent, the CPU utilization will increase as the available memory decreases. This
will make the network response time sluggish not because of congestion on the line, but because of
congestion within the router itself.
Note: You can check the CPU and memory utilization on the router by
using the show processes cpu and show memory commands.
5.6 Configuring OSPF in a Single Area
There are a few simple commands that are used to configure a
Cisco router for OSPF within a single area.
5.6.1 Configuring OSPF on an Internal Router
An internal router within a single area needs to understand how to
participate in the OSPF network. Therefore, it requires the OSPF
process, i.e., the routing protocol needs to be started on the router;
the participating router interfaces used to send or receive OSPF
routing updates; the identification of the area; and a router ID,
which allows the router to be identified by the other routers in the
network.
Note: The Process ID, the Router ID, and the Area ID are not related in any way. The process ID is a
mechanism that allows more than one OSPF process to be configured on a router. The router ID is the
mechanism by which a router is identified within the OSPF domain, and the area ID is a mechanism for
grouping routers that share full knowledge of OSPF-derived routes within the OSPF domain.
5.6.1.1 The router ospf Command
By default, there is no IP routing protocol running on the Cisco router. To configure OSPF as the routing
protocol, use the router ospf < process_number > command. In this command, the process_number is
a number local to the router. It is possible to have more than one process running on a router. The process
number does not have to be the same on every router in the area or the autonomous system.
5.6.1.2 The network Command
Once OSPF has been configured as the routing protocol the networks that are to participate in the OSPF
updates and the area that they reside in must be defined. This can be accomplished by using the following
command:
network network_number wildcard_mask area area_id
The network command in OSPF is similar to that of the network command in RIP or IGRP. The difference
is the level of granularity afforded in OSPF. In RIP and IGRP, the network command is defined at the class
level. In OSPF, it is possible to define the network command at the level of the specific address of an
interface. After the network command has been entered, OSPF identifies which interfaces are participating
in OSPF by comparing the interface IP address with the address given in the network command, filtered
through the wildcard_mask. The wildcard_mask indicates how much of the address to pay attention to.
This could be a class of address, such as everything in network 10.0.0.0, or it can be more specific and identify
an interface address. All interfaces that match the given network number will reside in the area specified in
the network command.
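As a sketch of these commands together (the process number, addresses, wildcard masks, and area are illustrative values, not taken from the text), the following starts an OSPF process, places every interface in network 10.0.0.0 into area 0, and also matches one specific interface address:

    Router(config)# router ospf 100
    Router(config-router)# network 10.0.0.0 0.255.255.255 area 0
    Router(config-router)# network 192.168.1.1 0.0.0.0 area 0

The 0.0.0.0 wildcard mask in the last line tells OSPF to pay attention to every bit of the address, so only the interface configured with exactly that IP address is matched.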
After the interfaces participating in the OSPF domain have been identified, updates will be received on and
sent out of those interfaces, and each interface will be placed in the defined
area. In addition, the Hello protocol, if appropriate, will be propagated and, depending on the interface type,
a default hello and dead interval are defined. Table 5.2 lists the default hello and dead time intervals.
TABLE 5.2: The Default Hello and Dead Time Intervals
Interface Type                      Hello Interval    Dead Interval
Point-to-multipoint nonbroadcast    30 seconds        120 seconds
Point-to-point                      10 seconds        40 seconds
Broadcast                           10 seconds        40 seconds
NBMA                                30 seconds        120 seconds
Point-to-multipoint                 30 seconds        120 seconds
5.6.2 Configuring OSPF on the External Router
The configuration commands discussed in this section are not necessary to make OSPF function properly
within an area. They may, however, be useful in your network design.
5.6.2.1 The interface loopback Command
The router needs a Router ID to participate in the OSPF domain. The router ID is used to identify the source
of LSA updates as shown in the OSPF database. However, there is no command to define the OSPF router
ID directly; the Cisco rule states that the router ID will be taken from the address of the loopback interface. The
loopback interface is a virtual interface that does not exist physically but has an IP address. If no loopback
interface is defined, the router uses the highest IP address configured on the router as its router ID. You can
configure a loopback interface by using the interface loopback < interface_number > command, followed
by the ip address < ip_address > < subnet_mask > command.
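For example (the interface number and address here are illustrative), a loopback interface that will supply the router ID can be configured as follows:

    Router(config)# interface loopback 0
    Router(config-if)# ip address 10.255.255.1 255.255.255.255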
5.6.2.2 The cost Command
You can use the cost command to manually override the default cost that the router assigns to an interface.
The default cost is calculated based on the speed of the outgoing interface. The syntax for the cost
command is: ip ospf cost < cost >. A lower cost increases the likelihood that the interface will be
selected as the shortest path. The range of values configurable for the cost of a link is 1 to 65535.
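As an illustrative sketch (the interface and cost value are hypothetical), the cost is applied in interface configuration mode:

    Router(config)# interface serial 0
    Router(config-if)# ip ospf cost 10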
5.6.2.3 The auto-cost Command
In general, the path cost in Cisco routers is calculated using the 108/ bandwidth formula but it is possible to
control how OSPF calculates the default cost for the interface by using the auto-cost
reference-bandwidth global configuration command to change the numerator of the OSPF cost formula.
The value set by the ip ospf cost command overrides the cost calculated using the auto-cost
reference-bandwidth command. Table 5.3 lists the default costs in OSPF.
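For example, on networks with links faster than 100 Mbps the reference bandwidth can be raised so that faster links still receive distinct costs (the process number and the value of 1000, expressed in Mbps, are illustrative):

    Router(config)# router ospf 100
    Router(config-router)# auto-cost reference-bandwidth 1000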
Note: In the Cisco IOS documentation, the auto-cost command is
documented as ospf auto-cost; however, ospf auto-cost is not
recognized in the Cisco IOS. The auto-cost command is the actual
command to use in the Cisco IOS.
TABLE 5.3: Default Costs in OSPF
Link Type                      Default Cost
56-kbps serial link            1785
T1 (1.544-Mbps serial link)    64
Ethernet                       10
16-Mbps Token Ring             6
5.6.2.4 The priority Command
The priority command is used to determine the designated router (DR) and backup designated router
(BDR) on a multiaccess link. The Hello protocol is the mechanism by which the designated routers are
elected; however, to be eligible for election, the router must have a priority of between 1 and 255. If the priority
is 0, the router will not participate in the election. The higher the priority, the greater the likelihood of the
router being elected. If no priority is set, all Cisco routers have a default priority of 1, and the highest router
ID, i.e., the highest IP address of all interfaces on the router, is used.
You may want to change the router priority if:
• the router has greater CPU and memory capacity than the others on the LAN;
• the router is the most reliable router on the segment;
• all the other routers on the LAN connect to stub networks and form the access layer of the network;
• there are point-to-multipoint connections in an NBMA cloud and the hub router needs to be configured
as the centralized resource, requiring it to be the designated router; or
• the router is an ABR and you do not want it to consume more resources as a DR.
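As a sketch (the interface and priority value are hypothetical), a router can be made more likely to win the DR election, or barred from it entirely, on a per-interface basis:

    Router(config)# interface ethernet 0
    Router(config-if)# ip ospf priority 100
    ! or, to exclude the router from the election:
    Router(config-if)# ip ospf priority 0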
5.6.3 Configuring OSPF over an NBMA Topology
The design considerations of running OSPF over an NBMA topology are important because they influence the
configuration choices that will be made. If the network is partially meshed, then the choice to use only
point-to-point subnets can waste addresses. If a point-to-multipoint configuration is chosen, the network uses one
subnet, and there is no DR/BDR negotiation. This has the advantage of saving addresses while behaving as if it
is a series of point-to-point links. When the decision is made as to which technology is to be implemented,
the configuration is straightforward. The choice is defined on the interface as a network command. The
network command syntax is:
ip ospf network { broadcast | non-broadcast |
{ point-to-point | point-to-multipoint [ non-broadcast ] } }
The parameters in this command are:
• broadcast, which sets the network mode to broadcast;
non-broadcast, which sets the network mode to nonbroadcast multiaccess (NBMA mode). This is the
default mode for physical serial interfaces with Frame Relay encapsulation and for multipoint subinterfaces;
• point-to-point, which sets the network mode to point-to-point. This is the default mode for point-to-point subinterfaces; and
• point-to-multipoint, which sets the network mode to point-to-multipoint. When this parameter is
used with the optional [ non-broadcast ] parameter, it sets the network mode to point-to-multipoint
nonbroadcast.
5.6.3.1 Configuring OSPF in NBMA Mode
NBMA mode is used by default; therefore there is no need to configure it using the ip ospf network
non-broadcast command. In NBMA mode, the design considerations are imperative because the routers
selected as DR and BDR need to have physical connectivity to all routers in the NBMA cloud. This is a
nonbroadcast environment, so the DR and BDR must be configured with a static list of the other routers
attached to the cloud so that they can become neighbors and create adjacencies. This is achieved with the
use of the neighbor command. The syntax of the neighbor command is:
neighbor ip_address
The neighbor command must specify an ip_address which is the interface IP address for the neighbor. In
addition, the neighbor command can take a number of optional keywords. These optional parameters are:
• [ priority priority_number ], which can be used to affect the outcome of the DR or BDR election.
The default is 0.
• [ poll-interval poll_interval ], which is an integer value reflecting the poll interval. The default
is 120 seconds. If a neighboring router becomes inactive and no hello packets have been seen for the router
dead interval, it may still be necessary to send hello packets to the dead neighbor. These hello packets
will be sent at the rate specified by the poll-interval keyword.
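A minimal sketch of an NBMA hub configuration, assuming hypothetical addresses on the 192.168.1.0/24 subnet, might look like this:

interface Serial0
 ip address 192.168.1.1 255.255.255.0
 encapsulation frame-relay
!
router ospf 1
 network 192.168.1.0 0.0.0.255 area 0
 neighbor 192.168.1.2
 neighbor 192.168.1.3

The spoke interfaces would typically be configured with ip ospf priority 0 so that only the hub, which has connectivity to all routers, can become the DR.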
5.6.3.2 Configuring OSPF in Point-to-Multipoint Mode
An OSPF point-to-multipoint interface is seen as one or more numbered point-to-point interfaces. The cloud
is configured as one subnet. A host route will be added for each router involved in the OSPF cloud.
By default, the network is considered to be a series of point-to-point interfaces. There is no need to specify
neighbors because the neighbors will see each other and simply become adjacent, with no need for the
election of a DR or a BDR. However, you can specify neighbors using the neighbor command, in which
case you should specify a cost to each neighbor. You are not required to have a fully meshed topology,
which reduces the number of PVCs needed and the number of neighbor entries in the neighbor table. It is
possible to change the default by using the command ip ospf network point-to-multipoint non-broadcast. The
point-to-multipoint network is then considered a nonbroadcast network. The neighbor command is required to
identify neighbors in a nonbroadcast network. In this case, assigning a cost to a neighbor is optional.
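A sketch of a point-to-multipoint hub interface, using hypothetical addressing, might be:

interface Serial0
 ip address 10.1.1.1 255.255.255.0
 encapsulation frame-relay
 ip ospf network point-to-multipoint

Because the whole cloud shares the one 10.1.1.0/24 subnet and no DR or BDR is elected, no neighbor statements are needed in this mode.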
5.6.3.3 Configuring OSPF in Broadcast Mode
You can use broadcast mode to avoid the use of the neighbor command and all the attendant
configurations. This mode works best with a fully meshed network.
5.6.3.4 Configuring OSPF in Point-to-Point Mode on a Frame Relay Subinterface
In this mode, the adjacency created between the routers is automatic because each subinterface behaves as a
physical point-to-point network. Therefore, the communication is direct and automatic. To configure OSPF
point-to-point mode on subinterfaces you must create a subinterface at the interface level, remove any
network layer (Layer 3) address assigned to the physical interface by using the no ip address command
and assign the Layer 3 address to the subinterface. Then configure Frame Relay encapsulation, the
subinterfaces, and the Layer 3 and Layer 2 (DLCI) addresses on the subinterface. Point-to-point mode is the
default OSPF mode for point-to-point subinterfaces, so no further configuration is required.
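The steps above can be sketched as follows, with hypothetical addresses and DLCI values:

interface Serial0
 no ip address
 encapsulation frame-relay
!
interface Serial0.1 point-to-point
 ip address 10.1.1.1 255.255.255.252
 frame-relay interface-dlci 101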
5.7 Verifying the OSPF Configuration on a Single Router
There are a number of show ip commands that are particularly useful in troubleshooting the OSPF network.
These commands are:
• show ip ospf, which provides information about the OSPF process and its details.
• show ip ospf database, which provides information about the contents of the topological database.
• show ip ospf interface, which provides information on how OSPF has been configured on each
interface.
• show ip ospf neighbor, which displays all the information about the relationship that the router has
with its neighbors.
• show ip protocols, which displays the IP configuration on the router, including the interfaces and the
configuration of the IP routing protocols.
• show ip route, which provides detailed information on the networks that the router is aware of and the
preferred paths to those networks. It also gives the next logical hop as the next step in the path.
5.8 Differences between OSPF and RIP Routing Protocols
There are a number of differences between OSPF and RIP. These include the following:
• OSPF converges faster than RIPv1. It transmits changes immediately, and fewer packets are lost.
• There is no limitation to the size of an OSPF network. A RIP network cannot expand beyond 15
hops.
• OSPF transmits a multicast update when there is a topology change, while RIP, by default, broadcasts the
whole routing table every 30 seconds.
• OSPF and RIPv2 support Variable Length Subnet Mask (VLSM). RIPv1 does not support VLSM.
• RIP does not take bandwidth cost and delay into account; its routing decision is based solely on hop
count as a metric. OSPF can use VLSM for route summarization, which decreases the number of
routing table entries, the quantity of update traffic, and the processing load on routers.
• OSPF and RIPv2 provide authentication in the packet. RIPv1 does not provide this authentication.
• RIP is easier to configure, troubleshoot and monitor than OSPF. However, Cisco IOS provides a more
advanced and richer suite of configuration, troubleshooting and monitoring capabilities with OSPF than
with RIP.
• OSPF can use a substantial amount of CPU.
6. OSPF in a Multiple Area Network
An area is a logical grouping of routers that are running OSPF with identical topological databases. It is a
subdivision of the greater OSPF domain. The creation of multiple areas solves the problem of a large
network outgrowing its capacity to communicate the details of the network to the routing devices charged
with maintaining control and connectivity throughout the network. The division of the AS into areas allows
routers in each area to maintain their own topological databases. This limits the size of the topological
databases, and summary and external links ensure connectivity between areas and networks outside the AS.
There are two approaches to implementing multiple area networks. The first approach is to grow a single
area until it becomes unmanageable. The second approach is to design the network with multiple areas,
which are very small, in the expectation that the networks will grow to fit comfortably into their areas. The
first approach requires less initial work and configuration. Great care should be put into the design of the
network, however, because this may cause problems in the future, particularly in addressing. In practice,
many companies convert their networks into OSPF from a distance vector routing protocol when they
realize that they have outgrown the existing routing protocol. This allows the planned implementation of the
second approach.
As mentioned in Section 5.5, there are a number of limitations to using OSPF in a single area. These
problems are related to the growth of the network. One of the main features of OSPF is its ability to scale
and to support large networks. It achieves this by creating areas from groups of subnets. The area is seen
internally as a small entity on its own. It communicates with the other areas, exchanging routing information;
this exchange is kept to a minimum, however, allowing only that which is required for connectivity. All
computation is kept within the area. In this way, a router is not overwhelmed by the entirety of the
organization's network. This is important because a link-state routing protocol is CPU- and memory-intensive.
6.1 Different Router Types
Because of the hierarchical nature of the OSPF network, routers have different responsibilities, depending
on their position and functionality within the OSPF hierarchical design. These routers have different
designations such as internal routers, backbone routers, area border routers (ABR), and autonomous system
boundary routers (ASBR).
• The Internal Router exists within an area. It is responsible for maintaining a current and accurate
database of every subnet within the area. It is also responsible for forwarding data to other networks by
the shortest path. Flooding of routing updates is confined to the area. All interfaces on this router are
within the same area.
• The Backbone Router exists within the backbone area, which is also called Area 0. The design rules for
OSPF require that all the areas be connected through a single area, known as Area 0. Area 0 may also be
written as Area 0.0.0.0. A router within this area is referred to as a backbone router. It
may also be an internal router or an Area Border Router.
• The Area Border Router (ABR) is responsible for connecting two or more areas. It holds a full
topological database for each area to which it is connected and sends LSA updates between the areas.
These LSA updates are summary updates of the subnets within an area. It is at the area border that
summarization should be configured for OSPF, because this is where the reduced routing updates
minimize the routing overhead on both the network and the routers.
• The Autonomous System Boundary Router (ASBR) is used to connect to a network or routing
protocol outside the OSPF domain. OSPF is an interior routing protocol, or Interior Gateway Protocol
(IGP); gateway is an older term for a router. If there is any redistribution from other protocols into
OSPF on a router, that router will be an ASBR. This router should reside in the backbone area, but you can
place it anywhere in the OSPF hierarchical design.
6.2 The Link-State Advertisements
Six commonly used types of link-state advertisements (LSAs) are sent between routers in the OSPF
domain. These are:
• The router link LSA, which is generated by each router for each area to which it belongs. This LSA gives
the router's link states to all other routers within an area. It is flooded into an area and is identified as a
Type 1 LSA.
• The network link LSA, which is sent out by the designated router and lists all the routers on the segment
for which it is the designated router and with which it has a neighbor relationship. This LSA is flooded to
the whole area and is identified as a Type 2 LSA.
• The network summary link LSA, which is sent between areas and summarizes the IP networks from one
area to another. It is generated by an ABR and is identified as a Type 3 LSA.
• The ASBR summary link LSA, which advertises the location of a router that connects to the outside world
(an ASBR) into other areas. It is generated by an ABR, contains the metric cost from the ABR to the
ASBR, and is identified as a Type 4 LSA.
• The external link LSA, which is originated by AS boundary routers and is flooded throughout the AS. Each
external advertisement describes a route to a destination in another autonomous system. Default routes
for the AS can also be described by AS external advertisements. This LSA is identified as a Type 5 LSA.
• The NSSA external LSA, which is created by an ASBR residing in a not-so-stubby area (NSSA). It is
very similar to an AS external LSA, except that this LSA is contained within the NSSA
and is not propagated into other areas. This LSA is identified as a Type 7 LSA.
6.3 OSPF Path Selection Between Areas
The OSPF routing table that exists on a router depends on the position that the router has in the area and the
status of the network; the type of area that the router is located in; whether there are multiple areas in the
domain; and whether there are communications outside the autonomous system. The router receives LSAs,
builds the topological database, and then runs the Dijkstra algorithm, from which the shortest path is
chosen and entered into the routing table. The routing table is therefore the conclusion of the decision-making
process. It holds information on how that decision was made by including the metric for each link.
This enables the network administrator to view the operation of the network.
Different LSAs hold different weighting in the decision-making process. It is preferable to take an internal
route (within the area) to a remote network rather than to traverse multiple areas just to arrive at the same
place. Not only does multiple-area traveling create unnecessary traffic, but it also can create a loop within
the network. The routing table, thus, reflects the network topology information and indicates where the
remote network sits in relation to the local router.
The costs of paths to networks in other areas and paths to networks in another AS are calculated differently.
6.3.1 The Path to Another Area
The path to another area is calculated as the smallest cost to the ABR, added to the smallest cost to the
backbone. Thus, if there were two paths from the ABR into the backbone, the shortest or lowest-cost path
would be added to the cost of the path to the ABR.
6.3.2 The Path to Another AS
Paths to another AS are routes passed between a router within the OSPF domain and a router in another
autonomous system or routing domain. The routes discovered by OSPF in this way can have the cost of the
path calculated in one of two ways:
• The cost of the path to the ASBR is added to the external cost to the next-hop router outside the AS. This
is known as E1.
• The cost of the path to the ASBR is all that is considered in the calculation. This is the default
configuration and is used when there is only one router advertising the route and no selection is required.
This is known as E2.
For example, if the internal cost to the ASBR is 10 and the external cost is 20, the E1 metric is 30, while the
E2 metric remains 20. If both an E1 and an E2 path are offered to the remote network, the E1 path will be used.
6.4 Different Types of Areas
OSPF networks use several types of areas. Of these, the only obligatory area is Area 0. The different types
of areas are:
• An ordinary or standard area is an area that connects to the backbone. This area is seen as a separate
entity. Every router knows about every network in the standard area, and each router has the same
topological database. However, the routing tables will be unique from the perspective of each router and
its position within the standard area.
• A stub area is an area that will not accept external routes. Type 4 LSAs and Type 5 LSAs are
blocked in this area; as a result, the only way a router within the area can reach destinations outside the
autonomous system is via a configured default route. Every router within the area can see every network
within the area and the networks within other areas. This type of area is typically used in a
hub-and-spoke network layout.
• A totally stubby area is an area that does not accept summary LSAs from the other areas or the external
summary LSAs from outside the autonomous system. Type 3, Type 4 and Type 5 LSAs are
blocked in this area; as a result, the only way out of the area is via a configured default route. A default
route is indicated as 0.0.0.0. This type of area is useful for remote sites that have few networks and
limited connectivity with the rest of the network. This is a proprietary solution offered only by Cisco.
• A not-so-stubby area (NSSA) is a stub area that can nevertheless import external routes. It does not
allow Type 4 or Type 5 LSAs. This area is used primarily to connect to ISPs, or when redistribution is
required. External routes are carried in Type 7 LSAs, which may be originated and flooded throughout
the NSSA but are not themselves propagated into other areas, including Area 0. If the information is
to be propagated throughout the AS, the Type 7 LSA is translated into a Type 5 LSA at the NSSA ABR.
• The backbone area is often referred to as Area 0, and connects all the other areas. It can propagate all
the LSAs except for LSA Type 7, which would have been translated into LSA Type 5 by the ABR.
6.5 Design Considerations in Multiple Area OSPF
The major design consideration in OSPF is how to divide the areas because it impacts the addressing scheme
for IP within the network. An OSPF network works best with a hierarchical design, in which the movement
of data from one area to another comprises only a subset of the traffic within the area itself. With all the
interarea traffic disseminated by the backbone, any reduction of overhead through a solid hierarchical design
and summarization is beneficial. The lower the number of summary LSAs that need to be forwarded into the
backbone area, the greater the benefit to the entire network. This will allow the network to grow more easily
because the network overhead is at a minimum. As a result, summarization is the natural consequence.
However, as indicated in Section 2.2, summarization cannot be imposed on a network; it must be part of the
initial network design as the addressing scheme must be devised to support the use of summarization.
6.5.1 Cisco Design Guidelines
Although it is possible to have more than three areas per router in OSPF, the results of having more areas
will vary depending on the router, the network topology, and how many LSAs are generated. OSPF is very
CPU-intensive in its maintenance of the databases and in the flooding of LSAs, as well as when it calculates
the routing table, a process based on LSAs. Therefore, it is not necessarily the number of routers or areas
that is important, but the number of routes and the stability of the network. Generally, Cisco recommends
that you not exceed 50 routers per OSPF area or 60 neighbors and 3 areas per router, and that a router not
be a DR or BDR for more than one LAN.
6.5.2 Summarization
Two types of summarization exist in multiple area OSPF: interarea summarization, which is performed at
the ABR and creates Type 3 LSAs and Type 4 LSAs; and external summarization, which is performed at
the ASBR and creates Type 5 LSAs. Both types of summarization have the same fundamental requirement
of contiguous addressing.
OSPF is stringent in its demand for a solid hierarchical design—so much so that it has devised some
commands to deal with situations that break its rules of structure.
6.5.3 The Virtual Link
OSPF requires that all areas in a multiple area network connect directly to the backbone area. The
connection to the backbone area is via an ABR, which is resident in both areas and holds a full topological
database for each area. When this requirement cannot be met, you can use a virtual link, which is a tunnel
through an area that does have direct connectivity to the backbone. In this configuration, a tunnel is
created to the ABR in the intermediary area. From the viewpoint of OSPF, the area has a direct connection.
6.5.4 OSPF over an NBMA Network
An NBMA network can be included as part of the OSPF domain if the NBMA network is created as Area 0;
or if the NBMA network is a hub-and-spoke topology.
• If the NBMA network is created as Area 0, the NBMA network is used to connect all remote sites and all traffic
will have to traverse the NBMA network. This option is ideal in a full-mesh environment, although it
will result in a large number of LSAs being flooded into the WAN and puts extra demands on the routers
connecting to the NBMA network.
• If the NBMA network is a hub-and-spoke topology, it makes sense to assign the hub network as Area 0,
with the remote sites and the NBMA network as other areas. This is an ideal design if the satellite
areas are stub areas because routing information is kept to a minimum over the NBMA
cloud.
6.6 Configuring OSPF in a Multiple Area Network
Some of the commands used to configure OSPF in a single area (see Section 5.6) are also used to configure
OSPF in multiple area networks.
6.6.1 The network Command
The network command for OSPF in a multiple area network is similar to that of the network command for
OSPF in a single area. The difference is that while the network command for OSPF in a single area
identified the interfaces that participated in the OSPF routing process, the network command for OSPF in a
multiple area network identifies not only the interfaces that are sending and receiving OSPF updates, but
also the area in which they reside. On an ABR, this configuration places interfaces into more than one area.
The syntax for this command is:
network network_number wildcard_mask area area_number
After this network command has been entered, OSPF identifies which interfaces are participating in OSPF
by comparing the interface IP address with the address given in the network command, filtered through the
wildcard_mask. The wildcard_mask indicates how much of the address to pay attention to. This could be a
whole class of addresses, such as everything in network 10.0.0.0, or it can be more specific and identify a single
interface address. All interfaces that match the given network number will reside in the area specified in the network
command.
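As an illustrative sketch, an ABR with interfaces in the hypothetical ranges 10.1.0.0/16 and 10.2.0.0/16 might place them in different areas as follows:

router ospf 1
 network 10.1.0.0 0.0.255.255 area 1
 network 10.2.0.0 0.0.255.255 area 0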
6.6.2 The area range Command for an ABR
The area range command is configured on an ABR because it dictates the networks that will be advertised
out of the area. The range keyword is used to consolidate and summarize routes at an area boundary. The
syntax for this command is:
area area_id range ip_address subnet_mask
The no form of this command, as in no area area_id range ip_address subnet_mask can be used to
disable this function for the specified area.
In the area range command, the area_id parameter specifies the identifier (ID) of the area for which
routes are to be summarized. This can be specified either as a decimal value or as an IP address. The
ip_address parameter specifies the IP address while the subnet_mask parameter specifies the IP subnet
mask.
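For example, an ABR could summarize all of the hypothetical subnets of 10.1.0.0/16 in Area 1 into a single advertisement toward the other areas:

router ospf 1
 area 1 range 10.1.0.0 255.255.0.0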
6.6.3 The summary-address Command for an ASBR
The summary-address command is used on the ASBR to summarize the networks to be advertised outside the
OSPF domain. The syntax for this command is:
summary-address ip_address subnet_mask
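A sketch on an ASBR redistributing from another routing process (the process number and address range below are illustrative only):

router ospf 1
 redistribute eigrp 100 subnets
 summary-address 172.16.0.0 255.255.0.0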
6.6.4 The area Command
After designing the addressing scheme for the network, it should be clear which areas should be configured
as a stub area, a totally stubby area, or not so stubby area. These areas can be configured using the area
command.
The syntax for the area command for a stub area is: area area_id stub. All OSPF routers inside a stub
area must be configured as stub routers.
The syntax for the area command for a totally stubby area is similar to that used to configure a stub area,
the difference being the addition of the no-summary parameter, which informs the ABR not to send summary
updates from other areas into the area. The syntax for this command is: area area_id stub no-summary.
This command needs to be configured only on the ABR because it is the only router with this responsibility.
Furthermore, this command is configurable only on a Cisco router because it is a proprietary command.
In addition, the area command can be used to define the cost to the default route into the area. The syntax
for this command is: area area_id default-cost cost. If the cost is not specified, the path will be
calculated as the internal area cost plus 1. This command needs to be configured only on the ABR because it
is the only router with this responsibility.
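As a sketch, configuring a hypothetical Area 2 as a stub throughout, making it totally stubby at the ABR, and setting a default-route cost of 10 might look like this:

! On every router in Area 2
router ospf 1
 area 2 stub
!
! On the ABR only
router ospf 1
 area 2 stub no-summary
 area 2 default-cost 10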
6.6.5 Configuring a Virtual Link
When it is not possible to connect an area directly to the backbone area, Area 0, you can create a virtual link to
the backbone area. The command used to configure a virtual link is:
area area_id virtual-link router_id
In this command the area_id parameter specifies the area ID assigned to the transit area for the virtual link
while the router_id parameter specifies the router ID of the virtual link neighbor.
This area command is given between ABRs, at least one of which must be in Area 0. The command, issued
at both ABRs, states the transit area and the router ID of the remote destination ABR. This essentially
creates a tunnel through the transit area which, although it may involve many routers forwarding the
traffic, makes the remote ABRs appear to each other as next hops.
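For example, if a hypothetical Area 2 reaches the backbone only through transit Area 1, both ABRs (with illustrative router IDs 10.0.0.1 and 10.0.0.2) would be configured as follows:

! On the Area 0/Area 1 ABR (router ID 10.0.0.1)
router ospf 1
 area 1 virtual-link 10.0.0.2
!
! On the Area 1/Area 2 ABR (router ID 10.0.0.2)
router ospf 1
 area 1 virtual-link 10.0.0.1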
6.7 Verifying the OSPF Configuration in a Multiple Area Network
As in the case of OSPF in a single area (see Section 5.7), there are a number of show commands that can be
used to troubleshoot the OSPF configuration in a multiple area network.
The commands used to troubleshoot OSPF configurations in a single area are also useful in a multiple area
configuration. These commands are:
• show ip ospf, which provides information about the OSPF process and its details.
• show ip ospf database, which provides information about the contents of the topological database.
• show ip ospf interface, which provides information on how OSPF has been configured on each
interface.
• show ip ospf neighbor, which displays all the information about the relationship that the router has
with its neighbors.
• show ip protocols, which displays the IP configuration on the router, including the interfaces and the
configuration of the IP routing protocols.
• show ip route, which provides detailed information on the networks that the router is aware of and the
preferred paths to those networks. It also gives the next logical hop as the next step in the path.
There are two additional commands that can be used in a multiple area configuration. These commands are:
• show ip ospf border-routers, which shows the OSPF ABRs and ASBRs for which the internal
router has entries in its routing table. This command is useful for troubleshooting configuration errors
and understanding how the network is communicating about its routes. It is also useful for verifying that
the configuration is correct and that the OSPF network is functioning properly.
• show ip ospf virtual-links, which shows the virtual links that exist on the network. The show ip
ospf neighbor command should be used in conjunction with this command.
7. EIGRP in Enterprise Networks
The Enhanced Interior Gateway Routing Protocol (EIGRP) is a proprietary Cisco routing protocol that has
the capability to support IP, AppleTalk, and IPX. As its name implies, EIGRP is an enhanced version of
IGRP and is designed for use in large networks. It uses the same distance vector technology as IGRP. The
changes were effected in the convergence properties and the operating efficiency of the protocol. It has some
characteristics similar to those of a link-state routing protocol and, therefore, it is sometimes referred to as a
hybrid routing protocol. EIGRP is an efficient solution for large network environments as it scales well;
however, like OSPF, its ability to scale depends on the design of the network.
The major concern in scaling an organizational network is controlling the network overhead that is sent,
particularly over slow WAN links. The less information about the network and its services that needs to be
sent, the greater the capacity available for the data between clients and servers. Although sending
less routing information relieves the network, it gives the routers less information with which to make
decisions. As seen with summarization, static and default routes can lead to poor routing decisions and loss
of connectivity. OSPF was the first protocol to attempt to address these problems. IGRP offers another
alternative. As a proprietary distance vector protocol, it has solved many of the problems. However, it does
face some issues with regard to scaling because of the inherent nature of distance vector. EIGRP addresses
many of the scaling problems that IGRP suffered from. There are four main
components of EIGRP: the protocol-dependent modules, the Reliable Transport Protocol (RTP), neighbor
discovery/recovery, and the Diffusing Update Algorithm (DUAL).
The main attributes of EIGRP are listed below:
• It enables a loop-free environment.
• EIGRP is backward compatible with the IGRP protocol.
• It provides support for multiple routed protocols.
• It caters for classless routing, discontiguous networks and VLSM.
• EIGRP supports routing update authentication.
• It relays network changes in place of periodic updates.
• EIGRP can load balance across a maximum of six equal-cost or unequal-cost paths.
• The EIGRP metric is a composite based, by default, on bandwidth and delay (reliability and load can
also be included). This provides for the most favorable path to a destination.
• EIGRP has a quick convergence time and a reduced use of bandwidth.
TABLE 7.1: EIGRP Terminology
ACK – An acknowledgement; normally a Hello packet with no data.
Active – The time period during which a router is probing its neighbors for network path information.
Feasible Distance – The lowest metric to a remote network.
Feasible Successor – An EIGRP neighbor that offers a loop-free backup path but is not used for forwarding
data because it does not represent the least-cost path.
Hello – A multicast packet used to establish and maintain EIGRP neighbor relationships.
Holdtime – The amount of time that a router waits for a Hello packet before 'downing' a neighbor
relationship.
Neighbor – Either of two routers that are connected on a shared network.
Neighbor Table – The table, maintained by every EIGRP router, that contains a list of its adjacencies. An
EIGRP router keeps a neighbor table for every supported routed protocol.
Passive – The normal, stable operating mode of a route to a destination; no recomputation is in progress.
Query – A packet used to ask neighboring routers about a lost network path.
Reply – The response to a query packet.
Retransmission Timeout (RTO) – The amount of time that an EIGRP router waits before retransmitting to a
neighbor.
Routing Table – The table, maintained for every routed protocol, that is built from the most favorable
routes to each destination.
Smooth Round Trip Time (SRTT) – The amount of time required to reliably send a packet to a neighbor and
receive the ensuing ACK, recorded in milliseconds.
Successor – The route from the topology table that holds the most favorable metric for the destination,
which is injected into the routing table.
Stuck In Active (SIA) – A route that is abandoned because a neighbor took too long to reply to an EIGRP
query.
7.1 Operation of EIGRP
As a revised and improved version of IGRP, EIGRP solves the scaling limitations that IGRP faces. EIGRP increases the potential growth of a network by reducing the convergence time. This is achieved through the implementation of DUAL, loop-free topologies, incremental updates, multicast addressing for updates, and holding information about neighbors rather than about the entire network.
As in OSPF, the EIGRP router sends out a small Hello packet to dynamically learn of other routing devices in the same broadcast domain. The Hello protocol uses the multicast address 224.0.0.10, and all routers periodically send hellos. On hearing hellos, the router creates a table of its neighbors. The continued receipt of these packets maintains the neighbor table. If a hello from a known neighbor is not heard within a predetermined amount of time, the holdtime, the router marks the neighbor as dead. The holdtime defaults to three times the Hello timer; therefore, if the router misses three hellos, the neighbor is marked dead. To become a neighbor, the router must hear a Hello packet or an ACK from the neighbor, the AS number in the packet header must be the same as that of the receiving router, and the neighbor's metric weightings (K-values) must be the same.
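As an illustration only (this is not Cisco code), the adjacency checks and the default holdtime described above can be sketched in Python; the field names used here are hypothetical:

```python
# Illustrative sketch of EIGRP neighbor formation (hypothetical field names).

def can_become_neighbor(local_as, local_k_values, hello):
    """A received Hello can form an adjacency only if the AS number and
    the metric weightings (K-values) both match the local router's."""
    return hello["as_number"] == local_as and hello["k_values"] == local_k_values

def default_holdtime(hello_interval_s):
    # The holdtime defaults to three times the Hello timer, so a neighbor
    # is declared dead after three missed hellos.
    return 3 * hello_interval_s

hello = {"as_number": 100, "k_values": (1, 0, 1, 0, 0)}
print(can_become_neighbor(100, (1, 0, 1, 0, 0), hello))  # True
print(default_holdtime(5))  # 15, i.e. 15 seconds for a 5-second hello
```

A hello carrying a different AS number or different K-values would be rejected, which is a common cause of adjacencies failing to form.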
7.1.1 The Neighbor Table
In EIGRP, the neighbor table includes the address of the neighbor; the interface through which the
neighbor's hello was heard; the holdtime; the uptime, i.e., how long since the router first heard from the
neighbor; and the sequence number.
The neighbor table tracks all the packets sent between the neighbors. It tracks both the last sequence number
sent to the neighbor and the last sequence number received from the neighbor. Although the Hello protocol
is a connectionless protocol, other protocols used by EIGRP are connection-oriented. The sequence number
is in reference to these protocols.
• Smooth Round Trip Time (SRTT), which is the time in milliseconds that it takes for a packet to be sent to a neighbor and an acknowledgment to be received. SRTT is used to calculate the retransmission timeout (RTO), which states how long the router will wait on a connection-oriented protocol for an acknowledgment before retransmitting the packet. If the original unacknowledged packet was multicast, the retransmitted packets will be unicast.
• The number of packets in a queue, which is a means by which administrators can monitor congestion on the network.
7.1.2 The Topology Table
Once the router knows who its neighbors are, it can create a database of feasible successors. This view of the network is held in the topology table. The topology table is created from updates received from the neighboring routers. Packets called replies will also update the topology table; replies are sent in response to queries issued by the router, inquiring about suspect routes. The queries and replies used by EIGRP for the DUAL algorithm are sent reliably as multicasts. If a router does not hear an acknowledgment within the specified time, it retransmits the packet as a unicast. If there is no response after 16 attempts, the router marks the neighbor as dead. The RTP window is set to 1: the router must hear an acknowledgment from every router before it can send the next packet. The capability to send unicast retransmissions decreases the time that it takes to build the tables.
The topology table in EIGRP manages the selection of routes to be added to the routing table. The topology
table has a record of all known network routes within the organization. The table is built from the update
packets that are exchanged by the neighbors and by replies to queries sent by the router. When the router has
an understanding of the network, it runs DUAL to determine the best path to each remote network. The result is entered into the routing table. The topology table is updated whenever the router gains or loses direct connectivity with a neighboring router, or hears of a change through EIGRP's network communication.
Like the neighbor table that tracks the receipt of the EIGRP packets, the topology table records the packets
that have been sent by the router to the neighbors. It also identifies the status of the networks in the table. A
healthy network is marked as passive; it will be labeled as active if the router is attempting to find an
alternative path to a remote network that is believed to be down. Because the routing table is built from the topology table, the topology table must hold the information required by the routing table. This includes the next logical hop, i.e., the address of the neighbor that sent the update for that network. The metric to the remote network is also calculated from the topology table.
7.1.3 EIGRP Metrics
The metrics used in EIGRP are similar to those of IGRP. The main difference is that the result of the
calculation is held in a 32-bit field. This means that the decision can be more detailed. The DUAL algorithm
will use this metric to select the best path or paths to a destination. The computation is performed on paths
held in the topology table to identify the best path to place into the routing table. There can be up to six
paths held for one destination, and there can be three different types of paths:
• Internal, which are paths that are internal to the AS;
• Summary, which are internal paths that have been summarized; and
• External, which are paths external to the AS that have been redistributed into the EIGRP AS.
The metric is the same composite metric used by IGRP, with the default components being bandwidth and delay. It is possible to change the metric; however, any configuration changes must be effected on every router in the EIGRP AS. The formula used to calculate the default metric is: [(10^7 ÷ the smallest bandwidth in kbps) + (the sum of the delays in tens of microseconds)] x 256.
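The default metric calculation can be sketched as follows. This is an illustrative Python fragment, not Cisco code, and it assumes the default K-values, so only bandwidth and delay contribute:

```python
# Sketch of the default EIGRP composite metric (K1 and K3 only).
# Inputs: smallest bandwidth along the path in kbps, total delay in microseconds.

def eigrp_metric(min_bandwidth_kbps, total_delay_usec):
    bw_term = 10**7 // min_bandwidth_kbps   # scaled inverse of the bottleneck bandwidth
    delay_term = total_delay_usec // 10     # delay expressed in tens of microseconds
    return (bw_term + delay_term) * 256     # EIGRP scales the IGRP metric by 256

# A T1 path (1544 kbps) with 20 000 microseconds of cumulative delay:
print(eigrp_metric(1544, 20000))  # 2169856
```

The 32-bit result (here 2 169 856 for a T1) is why EIGRP metrics look so much larger than the equivalent IGRP values: the IGRP composite is simply multiplied by 256.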
7.2 Updating the Routing Table
DUAL is responsible for maintenance of the topology table and the creation of the routing table. The
topology table records the metric as received from the advertising router, or the next logical hop. It then adds
the cost of getting to that neighbor, the one that is advertising the route. The cost to the destination network
from the advertising router, plus the cost to that router, equals the metric to the destination network from the
router. The metric or cost from the neighbor advertising the route is known as the advertised distance (AD).
The metric or cost from the router is referred to as the feasible distance (FD). If the AD is less than the FD,
then the next-hop router is downstream and there is no loop.
7.2.1 Updating the Routing Table in Passive Mode
In EIGRP, DUAL determines whether there is an acceptable route in the topology table to replace the
current path in the routing table, this is replacing a successor in the routing table with a feasible successor
(FS) from the topology table. If the FD is more than the AD, it means that the FD is a feasible condition
(FC), allowing it to become an FS. If a link between a neighboring router, i.e., the successor, and the next
hop on a path goes down, the router would look in its topology table for alternative routes. It uses the
metrics to determine another FS. To qualify as an FS, the alternative route must have as AD that is less than
the original FD. If the router finds an alternative path through another neighbor with an AD that is less than
the original AD, it replaces the original FS with the alterative route without changing from passive to active
mode. The neighbor through which the FS now passes becomes the new successor.
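The feasibility condition can be sketched as follows (illustrative Python, with hypothetical route tuples, not Cisco code):

```python
# Sketch of the feasibility condition: an alternative path qualifies as a
# feasible successor (FS) only if its advertised distance (AD) is less than
# the current feasible distance (FD) of the successor, which guarantees the
# path is loop-free.

def feasible_successors(current_fd, candidates):
    """candidates: list of (neighbor, advertised_distance, full_metric) tuples."""
    return [c for c in candidates if c[1] < current_fd]  # AD < FD

current_fd = 2297856
candidates = [
    ("RouterB", 2169856, 2681856),  # AD < FD: qualifies as an FS
    ("RouterC", 2300000, 2400000),  # AD >= FD: could be a loop, rejected
]
print([c[0] for c in feasible_successors(current_fd, candidates)])  # ['RouterB']
```

Because RouterB already has a path to the destination that is better than this router's own FD, traffic sent through RouterB cannot be looping back through this router.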
7.2.2 Updating the Routing Table in Active Mode
If the router does not find an alternative path that is an acceptable FS, i.e., an alternative path with an AD that is less than the original FD, it must go into active mode to query its neighbors. Neighbors that have a loop-free path to the destination will reply. The router selects the path with the lowest cost as the best alternative
route. The topology and routing tables will then be updated, DUAL will be calculated, and the network will
be returned to passive mode. In this event, the neighboring router through which the new path passes
becomes the successor.
7.2.3 Adding a Network to the Topology Table
When a new network is added, the access router through which the new network is connected becomes
aware of the new network and starts to send Hello packets out the new interface. It will not receive a reply
because it is the access router giving connectivity to the new network. Therefore, there are no new entries in
the neighbor table because no neighbors have responded to the Hello protocol. There is a new entry in the
topology table, however, because this is a new network. EIGRP then sends an update to all its neighbors,
informing them of the new network. The sent updates are tracked in the topology table and the neighbor
table because the updates are connection-oriented and the acknowledgments from the neighbors must be
received within a set time frame. The router, having added the network to the topology table, adds the
network to the routing table. The network then will be marked as passive because it is operational.
Meanwhile, on hearing the update from the access router, a backbone router updates the sequence number in
the neighbor table and adds the network to the topology table. It calculates the FD and the successor to place
in the routing table. It is then in a position to send an update to all of its neighbors, except the access router,
obeying the split horizon rule. In this way, the new network is propagated to the affected routers.
7.2.4 Removing a Path or Router from the Topology Table
If a network connected to a router is disconnected, the router updates its topology and routing tables, and sends an update to its neighbor. When the neighbor receives the update, it updates its neighbor table and topology table. The neighbor then examines the topology table for alternative routes to the remote network. Because there was only one path to the remote network, no alternative routes will be found. The neighbor
then sends out a query to its neighbors, requesting that they look in their tables for paths to the remote
network. The route is marked active in the topology table at this time. The query is tracked and, when all the
replies are in, the neighbor and topology tables are updated. DUAL, which starts to compute as soon as a
network change is registered, runs to determine the best path, which is placed in the routing table. However,
because no alternative route is available, the neighbors reply to the query stating that they have no path, after
they have queried their own neighbors, etc. When no router can supply a path to the network, all the routers
remove the network from their routing and topology tables.
7.3 Scaling EIGRP
EIGRP is designed to work in very large networks. However, as with OSPF, it is design-sensitive. The factors that can affect the scaling of EIGRP are: the amount of information sent between neighbors; the number of routers that are sent updates; how far away the routers that must receive updates are; and the number of alternative paths to remote networks.
A poorly scaled EIGRP network can result in a route being stuck in active (SIA), in which case the routers continuously search for alternative paths to a remote network. It can also result in network congestion, with delays, lost routing information, flapping routes, and retransmissions being the main symptoms. In addition, it can result in unreliable circuits or unidirectional links, and in router memory and CPU overutilization. Therefore, the design of the network is very important. During the network design, you should consider the allocation of contiguous addresses and a hierarchical, tiered network design to allow summarization; sufficient resources on network devices; sufficient bandwidth on WAN links; appropriate EIGRP configuration on WAN links; filters; and network monitoring.
7.4 Configuring EIGRP
The commands for EIGRP are consistent with those of the other IP routing protocols. Although IP routing is on automatically, the chosen routing protocol must be configured and the participating interfaces must be identified. EIGRP allows for VLSM and, therefore, summarization, because the mask is sent in the update packets. Summarization is automatic, but EIGRP summarizes only at the major (classful) network boundary; to summarize within the major network number, summarization must be manually configured. Unlike OSPF, which can only summarize at the Area Border Router (ABR), EIGRP can summarize at any router.
The router needs to understand how to participate in the EIGRP network. Therefore, it requires: the EIGRP process; an EIGRP autonomous system number, so that it can be identified as part of the same autonomous system; and the participating router interfaces that send or receive EIGRP routing updates.
By default, there is no IP routing protocol running on the Cisco router. To configure EIGRP as the routing
protocol, you must issue the following command:
router eigrp autonomous_system_number
Although EIGRP has been turned on, it has no information on how to operate. The connected networks that
are to be sent in the EIGRP updates and the interfaces that participate in the EIGRP updates must be defined.
If the EIGRP information is not specified, the process with insufficient configuration will not start.
Prior to Cisco IOS 12.0(4)T, the network command in EIGRP played a similar role to that of the network command in RIP or IGRP. However, unlike OSPF, in which it is possible to identify the specific address of an interface, the network command for EIGRP was stated at the classful network level. EIGRP does not have the design specification of areas and, therefore, had no need for granularity. This network command used the following
command syntax:
network network_number
From Cisco IOS 12.0(4)T onward, there have been some changes to the network command. It is now possible to identify which interfaces are running EIGRP by stating a wildcard mask. This is similar to the use of the network command in OSPF, although OSPF has an added parameter that defines the area for the interface. The new syntax for the network command is:
network network_number [ wildcard_mask ]
The no form of this command, as in no network network_number [ wildcard_mask ], disables EIGRP for that network.
After the network has been defined to EIGRP, the router will identify its directly connected interfaces that share that network address. Once it has identified the interfaces participating in the EIGRP domain, updates will be received on those interfaces, updates will be sent out those interfaces, the network will be advertised out all EIGRP interfaces, and, if appropriate, the Hello protocol will be propagated.
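The wildcard-mask matching used by the network command can be sketched in Python (illustrative only; the helper name matches is hypothetical). A bit set to 1 in the wildcard means "don't care", so an interface address matches when all the 0-bit positions agree with the network number:

```python
# Sketch of wildcard-mask matching for the EIGRP network command.
import ipaddress

def matches(interface_ip, network, wildcard):
    # Invert the wildcard to get the bits we care about.
    care = 0xFFFFFFFF ^ int(ipaddress.IPv4Address(wildcard))
    return (int(ipaddress.IPv4Address(interface_ip)) & care) == (
        int(ipaddress.IPv4Address(network)) & care
    )

# network 10.1.1.0 0.0.0.255 covers any interface addressed in 10.1.1.0/24:
print(matches("10.1.1.5", "10.1.1.0", "0.0.0.255"))  # True
print(matches("10.2.1.5", "10.1.1.0", "0.0.0.255"))  # False
```

A wildcard of 0.0.0.0 would select a single interface address, which is the granularity the pre-12.0(4)T classful network command could not provide.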
There are a number of optional commands that can be used to configure the way EIGRP works within the
network. These commands should be used in reference to the design of the network and its technical
requirements. The optional EIGRP commands are:
• no auto-summary, which turns off automatic summarization. If this command is not configured, EIGRP automatically summarizes at the classful network boundary. This command applies to the entire router; thus, if there are slow serial interfaces or congested links on the router, they will transmit all the subnets known to the router, which may significantly increase the overhead on the link.
• ip summary-address eigrp autonomous_system_number ip_address subnet_mask, which configures summarization at the interface level. This command must be preceded by the interface interface_number command.
• variance multiplier, which is used to configure EIGRP to load-balance across unequal-cost paths. By default, EIGRP automatically load-balances across links of equal cost. The variance command allows the administrator to use the multiplier parameter to define the metric range within which additional paths are included. The multiplier is a number from 1 to 128; the default is 1, which allows only equal-cost load balancing. A higher value multiplies the best metric for a path by the stated multiplier, and all paths to the same destination with metrics within this new range are included in load balancing. The amount of traffic sent over each link is inversely proportional to the metric for the path.
• bandwidth line_speed, which allows the administrator to override the default bandwidth setting on a link. EIGRP will not use more than 50 percent of the stated bandwidth on a link. This command is often necessary on serial links, because the default bandwidth is 1.544 Mbps, a T1. If in reality the link is 56 kbps, it is easy to see how EIGRP could saturate it: EIGRP will try to use 50 percent of a T1 (772 kbps), far exceeding the real capacity of the line. This means not only the dropping of data packets due to congestion, but also the dropping of EIGRP packets. Therefore, it is essential to configure all interfaces to reflect the true speed of the line. This command must be preceded by the interface interface_number command.
• ip bandwidth-percent eigrp autonomous_system_number percent, which interacts with the bandwidth command on the interface. The reason for using this command is primarily that, in your network, the bandwidth command does not reflect the true speed of the link: the bandwidth value may have been altered to manipulate the routing metric and path selection of a routing protocol, such as IGRP or OSPF. It might be better to use other methods of controlling the routing metric and return the bandwidth to a true value; otherwise, the bandwidth-percent command is available. It is possible to set a bandwidth percentage larger than 100 percent of the stated bandwidth, on the understanding that although the bandwidth may be stated as 56 kbps, the link is in fact, for example, 256 kbps. This command must also be preceded by the interface interface_number command.
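As a rough illustration of two of the optional commands above, the following Python sketch (hypothetical helper names, not Cisco code) shows the variance range check and EIGRP's 50-percent bandwidth ceiling:

```python
# Sketch of variance-based unequal-cost load balancing and of EIGRP's
# default cap of 50 percent of the configured interface bandwidth.

def paths_for_load_balancing(paths, variance=1):
    """paths: {next_hop: metric} for feasible paths. A path is used when its
    metric equals the best metric, or falls below best_metric * variance."""
    best = min(paths.values())
    return {hop: m for hop, m in paths.items() if m == best or m < best * variance}

def eigrp_max_usage_kbps(configured_bandwidth_kbps, percent=50):
    # EIGRP limits itself to `percent` of the *configured* bandwidth.
    return configured_bandwidth_kbps * percent // 100

paths = {"Serial0": 2169856, "Serial1": 4000000}
print(paths_for_load_balancing(paths))               # only the best path
print(paths_for_load_balancing(paths, variance=2))   # both paths now qualify
print(eigrp_max_usage_kbps(1544))  # 772 kbps allowed at the default T1 setting
print(eigrp_max_usage_kbps(56))    # 28 kbps once the true 56 kbps is configured
```

The last two lines show why leaving a 56 kbps serial line at the default bandwidth is dangerous: EIGRP would consider itself entitled to 772 kbps of a 56 kbps circuit.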
7.5 Verifying the EIGRP Operation
There are a number of show and debug commands that can be used to configure, maintain, and troubleshoot a live EIGRP network. The show commands are:
• show ip eigrp neighbors, which provides detailed information on the neighbors. This command records the communication between the router and its neighbors, as well as the interface and address by which they communicate.
• show ip eigrp topology, which provides details about the routes held in the topology table, i.e., the preferred paths to the networks the router is aware of, as well as the next logical hop as the first step in each path. The router tracks the EIGRP packets that have been sent to neighbors in this table.
• show ip eigrp topology all-links, which provides details about all the routes and alternative paths held in the topology table, not just the successors and feasible successors.
• show ip eigrp traffic, which provides information on the aggregate traffic sent to and from the EIGRP process.
• show ipx route, which shows the routing table for IPX and is the source of the information on how to reach remote IPX destination networks.
• show ip route, which provides detailed information on the networks that the router is aware of and the preferred paths to those networks. It also gives the next logical hop as the next step in the path.
• show ip protocols, which displays the IP configuration on the router, including the interfaces and the configuration of the IP routing protocols.
8. Using BGP-4 to Communicate with Other Autonomous Systems
8.1 BGP-4 Overview
BGP-4 is an external routing protocol that is one of the methods used to make a connection to the Internet via an Internet service provider (ISP). BGP-4 is increasingly important in larger environments to communicate with the Internet agent or ISP. However, if your network is a simple one, using static and default routes may be a more appropriate configuration.

An Autonomous System
An autonomous system (AS) is a routing domain that shares routing information. Typically, an AS is the same as an organization. The Cisco glossary describes an AS as a collection of networks under a common administration sharing a common routing strategy. An AS is subdivided by areas and must be assigned a unique 16-bit number by IANA. An AS is used to determine the demarcation between organizations and the Internet. The capability of the Internet to identify an entire organization by means of a unique 16-bit number allows for great constriction of the amount of information that needs to be held in routing tables or transmitted in routing updates. BGP-4 is the routing protocol that is used between ASs to carry this pared-down information into and across the Internet.
BGP-4 is an extremely complex path vector protocol that is used within the Internet and multinational organizations. Its main purpose is to connect very large networks, mainly autonomous systems. It makes use of routing updates: at the start of a session, full routing updates are sent; thereafter, triggered updates are sent while the session is active. This makes BGP-4 appropriate in these environments. The protocol is not interested in communicating a full knowledge of every subnet within the organization. It takes summarization to the extreme, communicating only that which is defined as necessary. The routing update carries a list of autonomous system numbers and aggregated prefix addresses, as well as some policy routing information. The little information that it carries is extremely important, so great efforts are made to ensure the reliability of the transport carrying the updates and to ensure that the databases are synchronized. This is the pinnacle of hierarchical routing design. Because of this, the transport medium for BGP is TCP, which provides an additional layer of reliability. Policy-based routing is another distinctive characteristic of BGP-4. Unlike interior routing protocols, BGP-4 can be configured to advocate one path above another, and this is done in a more sophisticated and controlled manner than the metrics afforded to the interior routing protocols allow.
8.1.1 The BGP-4 Operation
BGP-4 is connection-oriented. When a neighbor is seen, a TCP peering session is established and maintained. BGP-4 sends keepalives periodically to sustain the link and maintain the session. These keepalives are 19-byte messages consisting of just the header used in BGP updates. Having established the session, the
routing tables are exchanged and synchronized. The routers now send incremental updates only when
changes occur. The update refers to a single path and the networks that may be reached via that path. Having
corrected the routing table, the BGP-4 process propagates the change to all neighbors, with a few exceptions,
based on an algorithm to ensure a loop-free network.
The BGP-4 operation uses four different message types. These are: open messages, which are used to
establish connections with peers; keepalives, which are used to maintain connections and verify paths held
by the router sending the keepalive; update messages, which contain paths to destination networks and the
path attributes; and notification, which is used to inform the receiving router of errors.
8.1.2 Types of BGP-4
There are two types of BGP-4: Internal BGP-4 (IBGP-4) and External BGP-4 (EBGP-4). The difference
depends on the function of the routing protocol. The router will determine if the peer BGP-4 router is going
to be an external BGP-4 peer or an internal BGP-4 peer by checking the autonomous system number in the
open message that was sent.
• Internal BGP-4 is used within an autonomous system (AS). It conveys information to all BGP-4 routers within the domain and ensures that they have a consistent understanding of the available networks. Internal BGP-4 is used within an ISP or a large organization to coordinate the knowledge of that AS. The routers are not required to be physical neighbors on the same medium, and are often located on the edges of the network. Internal BGP-4 is used to convey BGP-4 information about other ASs across a transit autonomous system. Another routing protocol, an interior routing protocol such as OSPF, is used to route the BGP-4 packets to their remote locations. To achieve this, internal BGP requires the destination BGP neighbor's IP address to be contained within the normal routing table kept by another routing protocol.
• External BGP-4 complies with the common perception of an external routing protocol; it sends routing information between differing ASs. Therefore, the border router between different ASs is the external BGP router.
8.1.3 BGP-4 Synchronization
Before IBGP-4 can propagate a route into another AS, handing it over to EBGP-4, the route must be fully known within the AS. In other words, the Interior Gateway Protocol (IGP), or internal routing protocol, must be synchronized with BGP-4. This ensures that if traffic is sent into the AS, the interior routing protocol can direct it to its destination. It thus prevents traffic from being forwarded to unreachable destinations and reduces unnecessary traffic. It also ensures consistency within the AS.
Synchronization is enabled by default, but in some cases it may be useful to turn it off, such as when all the routers in the AS are running BGP-4, when all the routers inside the AS are fully meshed, or when the AS is not a transit domain, i.e., an AS that is used to carry BGP-4 updates from one AS to another.
8.1.4 BGP-4 Policy-Based Routing
Policy-based routing allows you to define how traffic will be routed at the AS level. This is a level of control above the dynamic routing protocol. Given that many variables in BGP-4 can influence dynamic routing, this is a very high level of control. This extra dimension distinguishes BGP-4 from other routing protocols. Policy-based routing is a form of static routing enforced by specialized access lists called route maps.
There are some rules associated with policy routing. These rules are:
• Traffic can be directed based on either the source address or both the source and destination addresses.
• It affects only the next hop in the path to the destination.
• It does not affect the destination of the packet, only the path used to get to the destination.
• It does not allow traffic sent into another AS to take a different path from the one that would have been chosen by that AS.
• It is possible to influence only how traffic will get to a neighboring AS, not how it will be routed within that AS.
• It is configured on the inbound interface because it examines the source address.
8.1.5 BGP-4 Attributes
Attributes in BGP-4 are used to determine the best path; they are the metric for BGP-4. They also carry the information on which decisions are based. These variables describe characteristics, or attributes, of the path to the destination. The characteristics can be used to distinguish the paths, allowing a choice to be made among them. Furthermore, some of the information carried in the update messages is more important than the rest, so the attributes have been categorized by importance.
There are two types of attributes: well-known attributes and optional attributes. Well-known attributes must be recognized by all BGP-4 implementations, while optional attributes need not be. Both types are subdivided into two further categories, allowing for considerable granularity. The categories of BGP-4
attributes are discussed in Table 8.1.
TABLE 8.1: The Categories of BGP-4 Attributes

Well-known: These attributes must be recognized by all BGP-4 implementations. They do not all need to be present in the update messages, but if they are, all routers running BGP-4 will recognize and act on the information contained. The two subcategories are:
• Mandatory; and
• Discretionary.

Optional: Routers may not recognize optional attributes. If a router does not recognize an optional attribute, it marks the update as partial and sends the update, complete with attributes, to the next router. Thus, unrecognized optional attributes traverse the router unchanged. The two subcategories are:
• Transitive; and
• Nontransitive.
Nontransitive attributes are dropped if they arrive at a router that does not understand or recognize the attribute; such attributes will not be propagated to the BGP-4 peers. Unrecognized nontransitive optional attributes must be quietly ignored and not passed along to other BGP peers. New transitive optional attributes may be attached to the path by the originator or by any other AS in the path.
Table 8.2 discusses the various attributes supported by Cisco.

TABLE 8.2: The BGP-4 Attributes supported by Cisco

AS_Path (well-known, mandatory; code 2; preference: shortest path)
This attribute is a sequence of the ASs that the route has traversed.

Next hop (well-known, mandatory; code 3; preference: shortest path or IGP metric)
This attribute states the next hop on the path for the router to take. In EBGP-4, this will be the source address of the router that sent the update. In IBGP-4, for routes originated outside the AS, the address will still be the source address of the router that sent the update.

Multiple Exit Discriminator (MED) (optional, nontransitive; code 4; preference: lowest value)
This attribute informs routers outside the AS which path to take into the AS. It is known as the external metric of a route. It is passed between ASs, but it will not be propagated into a third AS.

Local preference (well-known, discretionary; code 5; preference: highest value)
This attribute is used to tell routers within the AS how to exit the AS if multiple paths exist. It is passed solely between IBGP peers.

Atomic aggregate (well-known, discretionary; code 6; information not used in path selection)
This attribute states that the routes have been aggregated and that some information has been lost.

Aggregator (optional, transitive; code 7; information not used in path selection)
This attribute states the BGP-4 router ID and autonomous system number of the router that was responsible for aggregating the route.

Community (optional, transitive; code 8; information not used in path selection)
This is the capability to tag certain routes that have something in common, thereby making them members of the same club or community. It is often used in conjunction with another attribute that will affect route selection for the community. For example, the use of the local preference and community attributes would allow the network administrators and other privileged users to use the high-speed link to the Internet, while others shared a fractional T1. Communities have no geographical or logical limits. BGP-4 can filter on incoming or outgoing routes for filtering, redistribution, or path selection.

Originator ID (optional, nontransitive; code 9; information not used in path selection)
The route reflector (described in the following chapter) appends this attribute. It carries the router ID of the originating router in the local autonomous system. It is used to prevent loops.

Cluster list (optional, nontransitive; code 10; information not used in path selection)
This attribute identifies the routers involved in the route reflection. It shows the reflection path that has been taken and is used to prevent looping errors.

Weight (Cisco-defined; preference: highest value)
This is a Cisco proprietary attribute used in route selection. It is local to the router and is not propagated to other routers. Therefore it has no compatibility problems.
8.2 Basic BGP-4 Configuration Commands
To connect to another AS, you must configure the start of the routing process, the networks to be advertised,
and the BGP-4 neighbor that the routing process will be synchronizing routing tables with over a TCP
session.
8.2.1 Starting the Routing Process
You can configure the routing process using the same command as seen for the interior routing protocols.
The syntax for this command is:
router bgp autonomous_system_number
8.2.2 Defining the Networks to Be Advertised
To define the network that is to be advertised for the AS, you can use the following network command:
network network_number mask network_mask
For each network you must issue a separate network command. The network command determines the
networks that are originated by the router. This command does not identify the interfaces upon which to run
BGP; instead, it states the networks that are to be advertised by BGP. The network command must include
all the networks in the AS to be advertised, not just those that are directly connected to the router.
8.2.3 Identifying Neighbors and Defining Peer Groups
In IBGP-4, the remote autonomous system number that is defined for the BGP-4 peer will be the same; in
EBGP-4, these numbers will differ. The syntax is as follows:
neighbor ip_address | peer_group_name remote-as autonomous_system_number
The use of the peer_group_name allows the identification of this router as a member of a peer group. A
peer group is a group of neighbors that share the same update policy. This is the mechanism by which
routers can be grouped to simplify configuration.
8.2.4 Forcing the Next-Hop Address
On a multiaccess network, the rule is that the source address of a packet is that of the router that originated
the packet onto the network. This can cause problems on a NBMA network that appears to be a multiaccess
network but that in reality may not have full connectivity to all the routers on the network. If the source
address is the address of the initiating router, the other routers may not have a path to this next hop, and
packets will be dropped. To overcome this problem, you can use the neighbor command to configure the
next-hop address to be that of the transmitting router. The syntax for this command is:
neighbor { ip_address | peer_group } next-hop-self
Leading the way in IT testing and certification tools, www.testking.com
- 121 -
CCNP/CCDP 642-891 (Composite)
8.2.5 Disabling Synchronization
Synchronization is enabled by default, but, in some case it may be useful to turn off synchronization such as
if the IBGP-4 network is fully meshed. You can use the no synchronization command to turn off
synchronization. This allows routers to advertise routes into BGP-4 before the IGP has a copy of the route in
its routing table.
8.2.6 Aggregating Routes
To summarize or aggregate routes within the BGP-4 domain, you can use the aggregate-address command in
config-router mode. The syntax for this command is:
aggregate-address ip_address subnet_mask [ summary-only ] [ as-set ]
This command has two optional parameters:
•
the summary-only parameter, which suppresses the specific routes and propagate only the summary
route; and
•
the as-set parameter, which records all the AS that have been traversed in the update message. This is
the default configuration.
8.3 Effecting BGP-4 Configuration Changes
After you perform configuration changes in BGP-4, you must reset the TCP session between neighbors. You
can accomplish this by using the clear ip command. The syntax for his command is:
clear ip bgp { * | ip_address }[ soft [ in | out ] ]
The clear ip command disconnects the session between the neighbors and re-establishes it using the new
configuration that has been entered. The soft option does not tear down the sessions, but it resends the
updates. The in and out options allow the configuration of inbound or outbound soft updates. The default
is for both.
8.4 Verifying the Basic BGP-4 Configuration
There are a number of show commands, as well as a debug command, that can be used to trouble shoot the
BGP-4 configuration. These commands are:
•
show ip bgp, which displays the BGP routing table.
•
show ip bgp paths, which displays the topology table.
•
show ip bgp summary, which displays information about the TCP sessions.
•
show ip bgp neighbors, which displays information about the TCP connections to neighbors.
•
debug ip bgp [ dampening | events | keepalives | updates ], which displays live
information of events as they occur. These options limit the output to the specific type of information.
Leading the way in IT testing and certification tools, www.testking.com
- 122 -
CCNP/CCDP 642-891 (Composite)
8.5 Advanced BGP-4 Configuration
8.5.1 Configuring Route Reflectors
With internal BGP-4, a fully meshed configuration is required to ensure full connectivity. An IBGP-4 router
will propagate a route if it is a route generated by the transmitting router, or if it is a connected route. If the
router was learned via an update from a BGP-4 peer within the same AS, it can propagate this route only to
an EBGP-4 peer. For this reason, IBGP-4 peers must all be fully meshed to have a complete knowledge of
the network. The problem is that BGP-4 maintains up-to-date and accurate routing information by sending
incremental updates across a TCP connection. The TCP connection is a costly in network resources. The
greater the number of connections, the greater the number of required resources. The n (n – 1) / 2 equation
can be used to determine the number of required IBGP-4 sessions, when n is the number of routers.
The problem presented by a fully meshed IBGP-4 network can be solved by proper network design. If a
hub-and-spoke network were developed, this would streamline the TCP connections. This is a good thing,
but it does require some additional design and configuration. The solution is the implementation of route
reflectors and the network design that they support. The only design requirement is that the IBGP-4 route
reflectors must be fully meshed to ensure the correct propagation of updates.
A route reflector is a router that has been configured to forward routing updates to neighbors or peers within
the same AS. These IBGP-4 peers need to be identified in the configuration. The route reflector defies the
split horizon rule that states that the IBGP-4 router will not propagate a route that was learned from a peer
within the same AS. However, when a router has been configured as a route reflector, it forwards learned
paths from IBGP-4 peers to other IBGP-4 peers. It forwards only to those routers that have been identified
as route reflectors and to IBGP/EBGP neighbors clients. This means that a logical hub-and-spoke design can
be implemented within an AS between IBGP-4 peers, thus reducing the number of required IBGP-4 sessions.
A route reflector is a router that forwards updates to its clients. When a client, i.e., a router that receives
updates from a route reflector, sends an update to the route reflector, it is forwarded or reflected to the other
clients. Therefore, both a route reflector and a client form a unit, called a cluster that shares information. The
AS is divided into clusters, and there must be at least one route reflector per cluster. The route reflector
connects to other route reflectors. These route reflectors need to be fully meshed. This is to ensure that the
IBGP-4 routing tables are complete. Furthermore, non-clients must be fully meshed with the route reflector.
When the route reflector forwards an update, the Originator-ID attribute is set. This is the BGP-4 router ID
of the router that originated the path. The purpose of this attribute is not to award honors to the originating
router, but so that if this router receives back the update, it will see its own ID and will ignore the packet.
This prevents the possibility of routing loops. If there are multiple route reflectors in the cluster, to provide
redundancy, then the originating router is identified by the Cluster-ID attribute. This serves the same
purpose as the Originator-ID in preventing routing loops.
A neighbor command is used to configure a route reflector. The syntax for this command is:
neighbor ip_address route-reflector-client
You can use the no form of this command, as in no neighbor ip_address route-reflector-client, to
remove a router as a client.
In this command, the ip_address is the IP address of the neighboring router being identified as a client
while the route-reflector-client parameter points to the client of the route reflector. The client is not
Leading the way in IT testing and certification tools, www.testking.com
- 123 -
CCNP/CCDP 642-891 (Composite)
configured and is unaware of its change of status. It does nothing but continue to send updates to the route
reflector, which forwards them unchanged to other clients.
8.5.2 Controlling BGP-4 Traffic
BGP-4 updates can be controlled. It is often advantageous to limit the way that the BGP-4 routing updates
are propagated. This not only streamlines the traffic flow on the network, but it also simplifies the network
and its maintenance. Designing how the routing information should be forwarded through the network forms
a basic level of security and can reduce the possibility of routing loops. Filtering is a means of traffic control.
There are three main types of filtering on a Cisco router. These are:
•
Access list, which is used in BGP-4 for the creation of route maps. It is also used to filter updates sent
from a peer based on the AS path. In addition, other technologies use access lists for standard filtering.
•
Prefix list/distribute list, which filter routing updates, particularly in redistribution. From Cisco IOS
version 11.2, ISPs were given prefix lists, which are a more efficient form of filtering. Prefix lists filter
based on the prefix of the address. This option was made a part of the general release IOS in version 12.0.
Both prefix lists and distribute lists filter on network numbers, not autonomous system paths, for which
access lists are used.
•
Route maps, which is a sophisticated access list that defines criteria upon which a router acts when a
match is found for the stated criteria. It is used in BGP-4 for setting the attributes that determine the
basis for selecting the best path to a destination.
You can create an entry in a prefix list and assigns a sequence number to the entry by issuing the
ip prefix-list command. The syntax for this command is:
ip prefix-list list_name [ seq seq_number ] { deny | permit }
network/length [ ge ge_value ] [ le le_value ]
In this command, the seq option can be used to manually specify the sequence number. The
network/length parameter states the prefix to be matched and the length of the prefix. The ge keyword is
used if the prefix is greater than the value stated in the list, while the le keyword is used if the prefix is less
than the value stated in the list.
You can configure a router to use a prefix list as a filter in distributing routes by using the following
neighbor command:
neighbor { ip_address | peer_group }
prefix-list prefix-list-name { in | out }
8.5.3 Redundant Connections into the Internet
An enormous amount of traffic leaves an organization in search of the Internet. This is not only the use of email as a means of communication, but also people doing research on the Internet. The majority of it is valid
work. As the use of the Internet expands as both an individual tool and a major mechanism of finance and
commerce, it becomes increasingly necessary for the network administrator to provide constant access to the
Internet, with load balancing and redundancy. This can be achieved by having more than one link to the
Internet. This is called multihoming.
Leading the way in IT testing and certification tools, www.testking.com
- 124 -
CCNP/CCDP 642-891 (Composite)
There are some concerns about connecting to more than one ISP. The ISPs may not each be propagating the
same routes into or from the Internet. If the providers are sending subsets of the required routes, there could
be a major problem with connectivity if the link to one of the providers fails. In addition, it is possible that if
you are connected to two different ISPs, your AS could become a transit autonomous system between the
ISPs. This could happen if a router in the AS of one ISP sees a path to a destination via the other ISP's AS
and your AS provides the best route to the AS of the other ISP. Configuration at the ISP level is the solution
to these concerns and is dealt with when setting up the service. Therefore, it is important that the need for
multihoming is raised during the negotiations so that the ISP is aware of the need for the additional
configuration.
8.5.4 Determining the BGP-4 Path by Configuring the Attributes
You can configure BGP-4 to take a path to a destination based on different criteria. The local preference
attribute, and the weight attribute can be configured to influence the choice of path.
The weight attribute selects the exit path out of the router when there are multiple paths to the same
destination. The higher the weight value, the better the path. To configure the weight attribute, you can use
the neighbor command. The syntax for this command is:
neighbor { ip_address | peer_group_name } weight weight
In this command, ip_address is the IP address of the neighboring router; peer_group_name identifies the
BGP-4 peer group, if there is one; and weight weight specifies the weight attribute and its value. This is
used in route selection. The default is 32768, but the range extends from 0 to 65535.
The local preference attribute can be configured by using the bgp default local-preference command.
The syntax for this command is:
bgp default local-preference value
In this command, value has a range from 0 to 4,294,967,295.
8.6 Verifying the Advanced BGP-4 Configuration
There are a number of show commands that can be used to troubleshoot the advanced BGP-4 configuration.
These commands are:
•
•
show ip prefix-list [ detail | summary ], which displays information about all prefix lists,
including the hit count, which is the number of times that a match has been found for the criteria in the
prefix list. This is very important in troubleshooting for capacity planning and security.
show ip prefix-list [detail | summary] name, which displays a table showing the entries in a
prefix list identified by name.
•
show ip prefix-list name [ network/length ], which displays the filtering associated with the
node based on the absolute of the defined prefix.
•
show ip prefix-list name [ seq seq_number ] , which displays the prefix list entry with a given
sequence number.
Leading the way in IT testing and certification tools, www.testking.com
- 125 -
CCNP/CCDP 642-891 (Composite)
•
show ip bgp, which displays all the values of all the BGP-4 attributes and their status.
Leading the way in IT testing and certification tools, www.testking.com
- 126 -
CCNP/CCDP 642-891 (Composite)
9. Using Integrated IS-IS in Connectionless Networks
9.1. IS-IS Overview
Intermediate System-to-Intermediate System (IS-IS) Protocol is an intradomain Open System
Interconnection (OSI) dynamic routing protocol that is designed to operate in OSI Connectionless Network
Service (CLNS). In recent years, IS-IS has become increasingly popular, with widespread usage among
Internet Service Providers. It is a link-state protocol, which enables very fast convergence with large
scalability. It is also a very flexible protocol and has been extended to incorporate leading edge features such
as MPLS Traffic Engineering.
An IS-IS routing domain is similar to a BGP autonomous system. IS-IS uses a two-level hierarchy to
support large routing domains. A routing domain is a collection of areas under an administration that
implements routing policies within the domain. A large domain may be administratively divided into areas.
Each IS resides in exactly one area.
With IS-IS, an individual router is in only one area, and the border between areas is on the link that connects
two routers that are in different areas. The reason for this is that an IS-IS router generally has one network
service access point (NSAP) address, and an IP router generally has multiple IP addresses. Routers can be
Level 1, Level 2, or both. Within Cisco IOS Software, the default configuration is both Level 1 and Level 2
at the same time which allows an IS-IS network to run with minimal configuration in a plug-and-play
fashion. Routing within an area is referred to as Level 1 routing. Routing between areas is referred to as
Level 2 routing. IS-IS does not have a backbone area, instead contiguous Level 2-capable routers form the
backbone. In other words, Level 2-capable routers connect all areas within a routing domain. Level 2 routers
advertise their own area NSAP addresses to the other Level 2 routers in the backbone. All Level 1 routers
and hosts in an area must have an NSAP with the same area address.
A Level 2 Intermediate System (IS) keeps track of the paths to destination areas. A Level 1 IS keeps track of
the routing within its own area. For a packet destined for another area, a Level 1 IS sends the packet to the
nearest Level 2 IS in its own area, regardless of what the destination area is. Then the packet travels via
Level 2 routing to the destination area, where it may travel via Level 1 routing to the destination. It should
be noted that selecting an exit from an area based on Level 1 routing to the closest Level 2 IS might result in
suboptimal routing.
A Level 1/Level 2 router may have neighbors in any area. It has two link-state databases: a Level 1 link-state
database for intra-area routing and a Level 2 link-state database for inter-area routing. A Level 1/Level 2
router runs two SPFs and may require more memory and processing as a result. A Level 1/Level 2 router
running Integrated IS-IS will leak all the IP subnets from Level 1 into Level 2; these subnets can be
summarized where this is desirable. All IS-IS areas are “stub” areas, although with Cisco IOS Software
Release 12.0T, it has become possible to leak Level 2 routes into Level 1, creating a IS-IS not-so-stubby
area.
On LAN, a Designated Intermediate System (DIS) is elected and will conduct the flooding over the media.
The DIS is analogous to the designated router (DR) in Open Shortest Path First (OSPF) Protocol but is
elected by priority with the highest priority becomes the DIS. You can configure the priority level on an
interface basis. In the case of a tie, the router with the highest NSAP address will become the DIS.
9.1.1 The OSI Connectionless Network Service (CLNS)
Leading the way in IT testing and certification tools, www.testking.com
- 127 -
CCNP/CCDP 642-891 (Composite)
OSI Connectionless Network Service (CLNS) is a network layer service similar to the IP service. CLNS
entities use the Connectionless Network Protocol (CLNP) to communicate with each other.
In the OSI architecture, hosts are referred to as End Systems (ESs), and routers are referred to as
Intermediate Systems (ISs).
•
End Systems (ESs) have no routing information; they discover ISs (routers) by listening to Intermediate
System Hello (ISH) messages and sending traffic to any random router. ESs send End System Hello
(ESH) messages; they do not choose a designated router to handle all traffic, and optimal routing is
accomplished via redirects.
•
Intermediate Systems (ISs) discover ESs by listening to ESHs, and ISs send ISHs to ESs.
There is no Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP) or Interdomain
Routing Protocol (IDRP) for CLNS, but End System-to-Intermediate System (ES-IS) Protocol provides the
same kind of reporting functions for ISs and ESs by providing configuration information, and route
redirection information.
•
Configuration information permits End Systems to discover the existence and reachability of
Intermediate Systems and permits Intermediate Systems to discover the existence and reachability of
End Systems. This allows ESs and ISs attached to the same subnetwork to dynamically discover each
other’s existence and availability.
•
Route redirection information allows Intermediate Systems to inform End Systems of potentially
better paths to use when forwarding network protocol data units (NPDUs) to a particular destination. A
better path could either be another IS on the same subnetwork as the ES, or the destination ES itself, if it
is on the same subnetwork as the source ES. This minimizes the complexity of routing decisions in End
Systems and improves performance because the ESs may make use of the better IS or local subnetwork
access for subsequent transmissions.
9.1.2 Integrated IS-IS
The IS-IS Routing Protocol may be used as an IGP to support IP as well as OSI. This allows a single routing
protocol to be used to support pure IP environments, pure OSI environments, and mixed environments.
Integrated IS-IS is deployed extensively in an IP-only environment in the top-tier Internet service provider
(ISP) networks. By supporting both IP and OSI traffic, integrated IS-IS can support traffic to IP hosts, OSI
end systems, and dual end systems.
9.2 IS-IS Operations
Routers running IS-IS will send hello packets out all IS-IS-enabled interfaces to discover neighbors and
establish adjacencies. Routers sharing a common data link will become IS-IS neighbors if their hello packets
contain information that meets the criteria for forming an adjacency. The criteria differ slightly depending
on the type of media being used. The main criteria are matching authentication, IS-type and MTU size.
Routers may build a link-state packet (LSP) based upon their local interfaces that are configured for IS-IS
and prefixes learned from other adjacent routers. Generally, routers flood LSPs to all adjacent neighbors
except the neighbor from which they received the same LSP. However, there are different forms of flooding
and also a number of scenarios in which the flooding operation may differ. All routers will construct their
link-state database from these LSPs. A shortest-path tree (SPT) is calculated by each IS, and from this SPT
the routing table is built.
Leading the way in IT testing and certification tools, www.testking.com
- 128 -
CCNP/CCDP 642-891 (Composite)
9.2.1 IS-IS Data-Flow Diagram
In IS-IS, routers may have adjacencies with other routers on point-to-point links. In a LAN environment,
routers report their adjacencies to a Designated Intermediate System (DIS), which generates a pseudonode
LSP. The DIS is responsible for conducting flooding over the LAN and also for maintaining synchronization.
The flow of information within the
IS-IS
routing
function
is
represented by the IS-IS data-flow
diagram (See Figure 9.1), which
consists of four processes and a
Routing Information Base (RIB).
The RIB consists of the link-state
database and the forwarding
database. The four processes in the
IS-IS data-flow diagram are:
receive, update, decision, and
forward.
FIGURE 9.1: The IS-IS Data-Flow Diagram
•
The receive process is the entry
point for all data, including user data, error reports, routing information, and control packets. It passes
user data and error reports to the forward process and routing information and control packets to the
update process.
•
The update process generates local link information that is flooded to adjacent routers; in addition, the
update process receives, processes, and forwards link information received from adjacent routers. This
process manages the Level 1 and Level 2 link-state databases and floods Level 1 and Level 2 LSPs
throughout an area. Each LSP that resides in the link-state database has a remaining lifetime, a checksum,
and a sequence number.
ƒ The LSP remaining lifetime counts down from 1,200 seconds to 0. The LSP originator must
periodically refresh its LSPs to prevent the remaining lifetime from reaching 0. The refresh
interval is 15 minutes, with a random jitter of up to 25 percent. If the remaining lifetime reaches
0, the expired LSP will be kept in the database for an additional 60 seconds before it is purged.
The additional period for which the expired LSP is kept in the database is known as
ZeroAgeLifetime.
ƒ If a router receives an LSP with an incorrect checksum, the router will cause a purge of the LSP
by setting the remaining lifetime value to 0, removing the body and reflooding it. This triggers
the LSP originator to send a new LSP. IS-IS can be configured so that LSPs with incorrect
checksums are not purged, but the router that originated the LSP will not know that the LSP was
not received.
•
The decision process runs shortest-path-first (SPF) algorithm on the link-state database, and creates the
forwarding database. It computes next-hop information and computes sets of equal-cost paths, creating
an adjacency set that is used for load balancing. On a Cisco router, IS-IS supports load balancing over
and up to six equal-cost paths.
•
The forward process gets its input from the receive process and uses the forwarding database to forward
data packets toward their destination. It also redirects load sharing and generates error reports.
Leading the way in IT testing and certification tools, www.testking.com
- 129 -
CCNP/CCDP 642-891 (Composite)
9.2.2 Adjacency Building
Neighbors on point-to-point networks always become adjacent unless they do not see themselves in their
neighbors’ hello PDU and match on certain parameters. On broadcast networks and nonbroadcast
multiaccess (NBMA) networks, the DIS will become adjacent with its neighbors. Two Level 1 routers will
become neighbors if they share a common network segment and have their interfaces configured to be in the
same area. Two Level 2 routers in different areas will become neighbors if they share a common network
segment and are configured as Level 2.
9.2.3 The Link-State Database and Reliable Flooding
All valid LSPs received by a router are stored in a link-state database. These LSPs describe the topology of
an area. Routers use this link-state database to calculate its shortest-path tree. Each router floods its LSPs to
adjacent neighbors, and the LSPs are passed along unchanged to other adjacent routers until all the routers in
the area have received them. All the Level 1 LSPs received by one router in an area describe the topology of
the area.
The IS-IS link-state database consists of all the LSPs the router has received. Each node in the network
maintains an identical link-state database. A change in the topology means a change in one or more of the
LSPs. The router that has experienced a link going up or down will resend its LSP to inform the other
routers of the change.
The LSP sequence number is increased by one to let the other routers know that the new LSP supersedes the
older LSP. When a router first originates an LSP, the LSP sequence number is 1. If the sequence number
increases to the maximum (oxFFFFFFFF), the IS-IS process must shut down for at least the MaxAge plus
the ZeroAgeLifetime to allow the old LSPs to age out of all the router databases. Flooding is the process by
which these new LSPs are sent throughout the network to ensure that the databases in all routers remain
identical.
When a router receives a new LSP, it floods this LSP to its neighbors, except the neighbor that sent the new
LSP. On point-to-point links, the neighbors acknowledge the new LSP with a partial sequence number
PDU (PSNP), which holds the LSP ID, sequence number, checksum, and remaining lifetime. When the
acknowledgment PSNP is received from a neighbor, the originating router stops sending the new LSP to that
particular neighbor although it may continue to send the new LSP to other neighbors that have not yet
acknowledged it. On LANs there is no explicit acknowledgement with a PSNP. Missing LSPs are detected
when a complete sequence number PDU (CSNP) is received and the list of LSPs within the CSNP is
compared with the LSPs in a router’s own database. If any LSPs are missing or outdated, the router will
send a request for these in the form of a PSNP. If a router receives an LSP that has an older sequence
number than the one in its IS-IS database, it sends the newer LSP to the router that sent the old LSP and
keeps resending it until it receives an acknowledgment PSNP from the originator of the old LSP. LSPs must
be flooded throughout an area for the databases to synchronize and for the SPF tree to be consistent within
an area. It is not possible to control which LSPs are flooded by using a distribute list, although it is possible
to use a routemap to control which routes are redistributed into IS-IS from another routing protocol.
9.2.4 DIS and Pseudonodes
The DIS creates a virtual node called a pseudonode, and all the routers on a LAN, including the DIS, form
an adjacency with the pseudonode. On a LAN, one of the routers will elect itself the DIS based on interface
priority. If all interface priorities are the same, the router with the highest subnetwork point of attachment
Leading the way in IT testing and certification tools, www.testking.com
- 130 -
CCNP/CCDP 642-891 (Composite)
(SNPA) is selected. MAC addresses are the SNPA on LANs. On Frame Relay networks, the local data-link
connection identifier (DLCI) is the SNPA. If the SNPA is a DLCI and is the same at both sides of a link, the
router with the higher system ID in the NSAP address will become the DIS.
The DIS election is pre-emptive. If a new router boots on the LAN with a higher interface priority, it
becomes the DIS, purges the old pseudonode LSP, and a new set of LSPs will be flooded. The DIS sends
CSNPs describing all the LSPs in the database every 3 seconds. If a router needs an LSP because it is older
than the LSP advertised by the DIS in its CSNP or it is missing an LSP that is listed in the CSNP, it will
send a PSNP to the DIS and receive the LSP in return. This mechanism can work both ways: If a router sees
that it has a newer version of an LSP, or it has an LSP that the DIS does not advertise in its CSNP, the router
will send the newer or missing LSP to the DIS. A pseudonode LSP represents a LAN, including all ISs
attached to that LAN, just as a non-pseudonode LSP represents a router, including all ISs and LANs
connected with the router.
9.2.5 IS-IS Metrics
The original IS-IS specification defines four different types of metrics: cost, which is the default metric and
is supported by all routers; delay, which measures transit delay; expense, which measures the monetary cost
of link utilization; and error, which is the residual error probability associated with a link. However, the
Cisco implementation uses cost only. If the optional metrics were implemented, there would be a link-state
database for each metric and SPF would be run for each link-state database.
9.3 IS-IS Routing
IS-IS uses the SPF algorithm for calculating routes and supports both TCP/IP and OSI. This algorithm
computes the shortest paths from a single source vertex to all other vertices in a weighted, directed graph.
Within the Cisco IOS implementation, the weight assigned to branches of the tree is a configurable metric of
up to 2^24 per individual link and 2^32 per path (root to leaf).
IS-IS is a link-state protocol and therefore provides full visibility of the network topology in the Link-State
Database. This visibility is attained through a flooding mechanism that ensures that each router in an area
receives information that can be used to build the network topology map. In IS-IS, this information is
flooded via link-state protocol data units, and each IS or router advertises information pertaining to itself and
its links. Once the information is flooded and all routers obtain the same information, the SPF algorithm is
run separately per router to process the topology and extract the shortest path from the router itself to all the
leaves of the tree. This is like putting a jigsaw puzzle together. The information derived from this process is
used to populate the forwarding table on the router.
9.3.1 IP Routing with IS-IS
Integrated IS-IS can perform IP routing via default IP routing; redistribution; summarization; and route
leaking.
• Default IP routing can be achieved with the attached bit method, which is set by a Level 1/Level 2
  router in its own Level 1 LSP and used to indicate to all Level 1 routers within the area that this router is
  a potential exit point of the area; and by the default information originate method, which can be
  configured in Level 1 as well as Level 2.
• Redistribution from any other routing protocol, static configuration, or connected interfaces is allowed
  in any type of router. By default the metric type will be set as internal, which means that the metric of
  the route will compete with all other internal routes. The metric type may be set to external, which means
  that the prefix will have a metric equal to the cost specified in the redistribution command plus a value
  of 64. Although the metric is increased if the metric is flagged as external on redistribution, the
  internal/external bit used to increase the metric is actually ignored when calculating routes unless the use
  of external metrics is specified in the configuration. If the configured metric type is external or the metric
  type is not specified, redistribution will not take place.
• Summarization reduces the amount of routing updates that will be flooded across the areas and the
  routing domain. The use of IS-IS requires a good addressing scheme to aid summarization and avoid
  having a huge Level 2 database. For IP we can summarize only native IS-IS routes into Level 2 from the
  Level 1 database. It is not possible to summarize IS-IS internal routes at Level 1, although it is possible
  to summarize external (redistributed) routes at Level 1.
• By default, Level 1 routers do not carry any routing information external to the area they belong to.
  Instead they use a default route to exit the area. This interferes with BGP routing and Multiprotocol
  Label Switching (MPLS) and MPLS-VPN, where all BGP next-hop addresses must be present in the
  local routing table. Route leaking can be used to overcome this problem. In route leaking, selected
  Level 2 routes can be advertised by a Level 1/Level 2 router into Level 1. Those leaked routes are
  specially tagged so that they will not be re-advertised into Level 2 by another Level 1/Level 2 router.
  Route leaking imposes some risks if used in an unstable environment. The IS-IS area concept with
  summarization usually prevents instabilities in one area from having an effect on other areas. Routes
  leaked from Level 2 into Level 1 are generally not summarized. Each time a topology change occurs in
  an area, all Provider Edge (PE) router addresses of this area may change metric, and therefore the Level
  1/Level 2 router that has to propagate the PE addresses into the Level 2 core will have to recreate its
  Level 2 LSP. This means that leaking will have to recur, leading to a situation where any topology
  change in one area forces many routes in all other areas to be re-computed via Partial Route
  Calculation. Route leaking should therefore be planned and deployed carefully.
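The route-leaking behavior described above can be sketched with the redistribute command on a Level
1/Level 2 router. The NET, access list number, and addresses below are illustrative only:

router isis
 net 49.0001.0000.0000.0002.00
 redistribute isis ip level-2 into level-1 distribute-list 100
!
access-list 100 permit ip 10.1.1.0 0.0.0.255 any

Here the extended access list selects which Level 2 routes, such as the BGP next-hop addresses, are leaked
into Level 1.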
9.4 Security
The Cisco IS-IS implementation offers three types of authentication that can prevent unauthorized routers
from forming adjacencies or injecting TLVs. These types of authentication are:
• IS-IS Authentication, which uses clear text.
• IS-IS HMAC-MD5 Authentication, which adds an HMAC-MD5 digest to each IS-IS protocol data unit
  (PDU). It was introduced in Cisco IOS software version 12.2(13)T.
• Enhanced Clear Text Authentication, which allows you to configure clear text authentication so that
  passwords are encrypted when the software configuration is displayed. It also makes passwords
  easier to manage and change.
Routers that want to become neighbors must exchange the same password for their configured level of
authentication. A router not in possession of the appropriate password is prohibited from participating in the
corresponding function.
9.5 Configuring Integrated IS-IS
To configure IS-IS, you must enable IS-IS and assign areas. In addition you can enable IP routing for an
area on an interface, and monitor IS-IS. You can also filter routing information and specify route
redistribution.
9.5.1 Enabling IS-IS and Assigning Areas
To enable IS-IS, you must create an IS-IS routing process and assign it to a specific interface, rather than to
a network. You can specify more than one IS-IS routing process per Cisco router, using the multiarea IS-IS
configuration syntax. You then configure the parameters for each instance of the IS-IS routing process. A
single Cisco router can participate in routing in up to 29 Level 1 areas, and can perform Level 2 routing in
the backbone. In general, each routing process corresponds to an area. By default, the first instance of the
routing process configured performs both Level 1 and Level 2 routing. You can configure additional router
instances, which are automatically treated as Level 1 areas. You must configure the parameters for each
instance of the IS-IS routing process individually.
If Level 2 routing is configured on any process, all additional processes are automatically configured as
Level 1. You can configure this process to perform Level 1 routing at the same time. If Level 2 routing is
not desired for a router instance, remove the Level 2 capability using the is-type router configuration
command. Use the is-type router configuration command also to configure a different router instance as a
Level 2 router.
Network entity titles (NETs) define the area addresses for the IS-IS area and the system ID of the router.
To enable IS-IS and specify the area for each instance of the IS-IS routing process, use the following router
command in global configuration mode:
router isis [ area_tag ]
This command enables IS-IS routing for the specified routing process, and places the router in router
configuration mode. The area_tag argument specifies the area to which this IS-IS router instance is
assigned. A value for area_tag is required if you are configuring multiple IS-IS areas.
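For example, a single Level 1/Level 2 routing process could be created as follows; the tag and the NET
value (area 49.0001, system ID 1921.6800.1001) are hypothetical:

router isis area1
 net 49.0001.1921.6800.1001.00

The net command supplies both the area address and the system ID for this routing process.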
9.5.2 Enabling IP Routing for an Area on an Interface
To enable IP routing and specify the area for each instance of the IS-IS routing process, you must specify the
following commands:
ip router isis [ area_tag ]
ip address ip_address mask
The first command, ip router isis, configures an IS-IS routing process for ISO CLNS on the interface and
attaches an area designator to the routing process. The ip address command defines the IP address for the
interface.
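A minimal interface configuration, assuming the routing process tag area1 and an illustrative address,
might look like this:

interface Ethernet0
 ip address 192.168.1.1 255.255.255.0
 ip router isis area1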
9.5.3 Configuring Optional Interface Parameters
The Cisco IS-IS implementation also allows you to configure certain interface-specific IS-IS parameters,
such as metric or cost of the specified interface; the hello packet interval period; the CSNP interval period;
the time between retransmission of IS-IS LSPs for point-to-point links; the delay between successive IS-IS
LSP transmissions; etc. Most interface configuration commands can be configured independently from other
attached routers. These commands are:
• isis metric default_metric [ level-1 | level-2 ], which configures the metric (or cost) for
  the specified interface;
• isis hello-interval { interval | minimal } [ level-1 | level-2 ], which specifies the
  length of time in seconds between hello packets the Cisco IOS software sends on the specified interface;
• isis csnp-interval interval [ level-1 | level-2 ], which configures the IS-IS CSNP interval
  in seconds for the specified interface;
• isis retransmit-interval interval, which configures the number of seconds between
  retransmission of IS-IS LSPs for point-to-point links;
• isis lsp-interval interval, which configures the delay in milliseconds between successive IS-IS
  LSP transmissions;
• isis retransmit-throttle-interval interval, which configures the IS-IS LSP retransmission
  throttle interval in milliseconds;
• isis hello-multiplier multiplier [ level-1 | level-2 ], which sets the hello multiplier;
• isis priority priority_value [ level-1 | level-2 ], which configures the priority to use for
  designated router election;
• isis circuit-type [ level-1 | level-1-2 | level-2-only ], which configures the type of
  adjacency desired for neighbors on the specified interface (the interface circuit type);
• isis password password [ level-1 | level-2 ], which configures the authentication password
  for a specified interface.
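Several of these parameters could be combined on a LAN interface as in the following sketch; the interface
name and all values are illustrative:

interface FastEthernet0
 ip router isis
 isis metric 20 level-1
 isis hello-interval 5 level-1
 isis priority 100 level-1

A priority of 100, above the default of 64, makes this router more likely to win the DIS election on the
segment.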
9.5.4 Configuring IS-IS Authentication Passwords
You can assign passwords to areas and domains. The area authentication password is inserted in Level 1
LSPs, and the routing domain authentication password is inserted in Level 2 LSPs.
To configure the area authentication password, use the following command in router configuration mode:
area-password password
To configure the domain authentication password, use the following command in router configuration
mode:
domain-password password
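For example, with illustrative passwords:

router isis
 area-password secure-L1
 domain-password secure-L2

The first password is carried in Level 1 LSPs and the second in Level 2 LSPs, as described above.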
9.5.5 Monitoring IS-IS
There are many show commands available for monitoring the state of IS-IS on a Cisco router. These
commands are:
• show clns neighbors, which displays the adjacencies for a specific router;
• show isis database [ level-1 ] [ level-2 ] [ detail ] [ lspid ], which displays the IS-IS
  link-state database;
• show isis area_tag routes, which displays the IS-IS Level 1 routing table;
• show isis spf-log, which displays how often and why the router has run a full SPF calculation;
• show isis area_tag topology, which displays a list of all connected routers in all areas.
10. Controlling Routing Updates Across the Network
It is rare to find just one routing protocol running within an organization. If the organization is running more
than one routing protocol, it is necessary to find a way of passing the networks learned by one routing
protocol into another so that every workstation can reach every other workstation. This process is called
redistribution. Although the organization as a whole has one routing domain, each routing protocol
considers the routing updates as propagated by another domain or autonomous system (AS). The routing
protocol views these redistributed updates as external. This distinction allows a different value to be placed
on those routes during the path selection process. The interior routing protocols within the organization see
an AS as containing one routing protocol. This is a defining characteristic of an AS. The exterior routing
protocols see the organization as the AS that connects to the Internet or a service provider.
To manage the complexity of these networks and to reduce the possibility of routing loops, some level of
restriction in the information sent across the various domains is often necessary.
Various methods enable you to control the routing information sent between routers. These methods include:
• Passive interfaces, which are interfaces that do not participate in the routing process. In RIP and
  IGRP, the process listens but will not send updates. In OSPF and EIGRP, the process neither listens nor
  sends updates because no neighbor relationship can form. The interfaces that participate in the interior
  routing process are controlled by the interface configuration.
• Default routes, which are routes used if there is no entry in the routing table for the destination network.
  If the lookup finds no entry for the desired network and no default network is configured, the packet is
  dropped. If the routing process is denied the right to send updates, the downstream routers will have a
  limited understanding of the network. To resolve this, use default routes. Default routes reduce overhead,
  add simplicity, and can remove loops.
• Static routes, which are routes that are manually configured. A static route takes precedence over routes
  learned via a routing process because it has a lower administrative distance. If no routing process is
  configured, static routes may be configured to populate the routing table. In small environments or for
  stub networks this is an ideal solution.
• The null interface, which is a virtual interface that is defined as the next logical hop in a static
  route. All traffic destined for the remote network is routed into a black hole. This can be used
  in a similar way to the passive interface, but it allows for greater granularity in the denied routes. It is
  also used to feed routes into another routing protocol. It allows another network mask to be set and is
  therefore useful when redistribution occurs between a routing protocol that uses VLSM and one that
  does not. In this way, it aggregates routes.
• Distribution lists, which are access lists that are applied to the routing process, determining which
  networks will be accepted into the routing table or sent in updates. When communicating with another
  routing process, it is important to control the information sent into the other process. This control is for
  security, overhead, and management reasons. Access lists afford the greatest control for determining the
  traffic flow in the network.
• Route maps, which are complex access lists that permit conditional programming. If a packet or route
  matches the criteria defined in a match statement, the changes defined in the set command are
  performed on the packet or route in question.
10.1 Features of Redistribution
Redistribution is used when a router is receiving information about remote networks from various sources.
Although all the networks are entered into the routing table and routing decisions will be made on all the
networks present in the table, a routing protocol propagates only those networks that it learned through its
own process. When there is no sharing of network information between the routing processes, this is referred
to as ships in the night (SIN) routing. Redistribution can occur only between processes routing the same
Layer 3 protocol. For example, OSPF, RIP, IGRP, and EIGRP can redistribute routing updates between
them because they all support the same TCP/IP stack and share the same routing table.
EIGRP is a routing protocol that carries updates for multiple protocols. The key to how this works is the
separate routing tables held for each protocol, using the routing protocol as the mechanism for the
forwarding of updates and path selection. EIGRP supports AppleTalk's RTMP, IPX's RIP and NLSP, as well
as IP. Automatic redistribution is performed between RTMP and EIGRP, and IPX RIP and EIGRP. EIGRP
must be manually redistributed into NLSP. There is also automatic redistribution between IGRP and EIGRP
as long as they are members of the same autonomous system. Some routing protocols automatically
exchange networks, although others require some level of configuration.
10.2 Problems of Configuring Multiple Routing Protocols
The problems experienced as a result of multiple routing processes and their redistribution include:
• The wrong or a less efficient routing decision being made because of the difference in routing metrics.
  The choice of the less efficient route is referred to as choosing the suboptimal path.
• A routing loop occurring, in which the data traffic is sent in a circle without ever arriving at the
  destination. This is normally due to routing feedback, where routers send routing information received
  from one autonomous system back into that same autonomous system.
• The convergence time of the network increasing because of the different technologies involved. If the
  routing protocols converge at different rates, this may also cause problems.
10.2.1 Path Selection
When a routing process does a routing table lookup for a destination network, it finds the best path in
accordance with the routing decision that was made. If the routing process knows of several paths to a
remote network, it chooses the most efficient path based on its metric and routing algorithm, and places this
into the routing table. If there is more than one path with the same metric, the routing process may add up to
six of these paths and then distribute the traffic equally between them. This is routing protocol dependent. It
is also possible in IGRP and EIGRP to load-share over unequal cost paths by using the variance command.
Path selection by a routing protocol is how a single routing protocol selects a single path to put into the
routing table. This keeps processing to a minimum because the decisions are made before the packets arrive
for routing. When the routing table is complete, the packets are just switched to the destination based on the
decisions made earlier and stored in the routing table.
If there are multiple equal-cost paths to a destination, you would expect the protocol to load-balance across
the links. When fast switching is in force, however, per-packet load balancing is turned off. The reason for
this is that the fast-switching cache holds one path per destination, so packets are load-balanced across the
links on a per-destination (session) basis. This is not a problem as long as you are aware of how the traffic
flows across your network and of the implications of this feature.
On occasions, more than one routing protocol is running on the router. If they have paths to the same remote
destination network, the routing process must decide which path to enter into the routing table, to have one
entry per network. Because the metrics differ between the protocols, selection based on the metric is ruled
out as a solution. Instead, another method was devised to solve the problem, i.e., the administrative distance
(AD). The AD selects one path to enter into the routing table from several paths offered by multiple routing
protocols. The AD is an arbitrary set of values placed on the different sources of routing information. The
AD can be manually configured. The reason for manually configuring the administrative distance for a
protocol such as EIGRP is that it may have a less desirable path compared to one offered by another
protocol such as RIP, which has a higher default AD.
The AD disregards the metrics. This means that a slower and more expensive link could be selected.
Another occasion when the administrative distance would select the suboptimal path is that of a directly
connected network. A network that is directly connected to the router has precedence in terms of
administrative distance. Table 10.1 lists the default administrative distances.
TABLE 10.1: The Default Administrative Distance

Routing Source                                              Administrative Distance
Connected interface or static route that identifies
the outgoing interface rather than the next logical hop     0
Static route                                                1
EIGRP summary route                                         5
External BGP                                                20
EIGRP                                                       90
IGRP                                                        100
OSPF                                                        110
IS-IS                                                       115
RIP v1, v2                                                  120
EGP                                                         140
External EIGRP                                              170
Internal BGP                                                200
An unknown network                                          255 (infinity)
10.2.2 Routing Loops
Routing loops occur when a routing protocol learns, through redistribution from another routing protocol,
networks that originated within its own routing process. The routing protocol may then see a network that it
owns as having a more favorable path, although this will send the traffic in the opposite direction, into a
different routing protocol domain. This is solved by changing the metric and the AD, and by implementing
default routes, passive interfaces, and distribution lists. These configurations are discussed below in
Section 10.3.
10.2.3 Redistribution and Network Convergence
To maintain consistent and coherent routing among different routing protocols, you must consider the
different technologies involved. A major concern is the computation of the routing table and how long it
takes the network to converge. Although Enhanced IGRP is renowned for its speed in convergence, RIP has
a poorer reputation in this regard. Sharing the network information across the two technologies may cause
some problems. The first problem is that the network takes time to converge. At some point, this will create
timeouts and possibly routing loops. Adjusting the timers may solve the problems, but any routing protocol
configuration must be done with a sound knowledge of the entire network and of the routers that need to be
configured. Timers typically require every router in the network to be configured to the same value.
Guidelines in network design to avoid routing loops include:
• Have a sound knowledge and clear documentation of the network topology, the routing protocol
  domains, and the traffic flow;
• Do not overlap routing protocols. It is much easier if the different protocols can be clearly delineated
  into separate domains, with routers acting in a similar function to Area Border Routers in OSPF. This is
  often referred to as the core and edge protocols;
• If redistribution is needed, ensure that it is one-way redistribution, to avoid networks being fed back
  into the originating domain. Use default routes to facilitate the use of one-way redistribution, if
  necessary; and
• If two-way redistribution cannot be avoided, manually configuring the metric and the administrative
  distance (AD), and using distribution access lists, can help you avoid convergence problems.
10.3 Configuring Redistribution
Redistribution configuration is specific to the routing protocol itself. All protocols require the configuration
of redistribution and the definition of the default metric to be assigned to any networks that are
redistributed into the routing process. The commands for redistribution are configured as subcommands to
the routing process. The redistribute command identifies the routing protocol from which the updates are
to be accepted; that is, it identifies the source of the updates.
The syntax for the redistribute command used to configure redistribution between routing protocols is:
redistribute protocol [ process_id ] { level-1 | level-1-2 | level-2 }
[ metric metric_value ] [ metric-type type_value ]
[ match { internal | external 1 | external 2 } ] [ tag tag_value ]
[ route-map map_tag ] [ weight weight ] [ subnets ]
In this command, the protocol parameter identifies the routing protocol that is providing the routes. It can
be: connected, bgp, egp, igrp, eigrp, isis, iso-igrp, mobile, ospf, static, or rip. The process_id
parameter is used for bgp, egp, igrp, eigrp, and ospf. For bgp, egp, igrp, or eigrp it is an autonomous
system number; for ospf it is the OSPF process ID. RIPv1 and RIPv2 do not use either a process ID or an
autonomous system number. The { level-1 | level-1-2 | level-2 } parameter is used for isis. It
indicates the route levels used by Intermediate System-to-Intermediate System (IS-IS) for redistribution.
In IS-IS, Level 1 (level-1) routes are redistributed into other IP routing protocols independently; both
Level 1 and Level 2 (level-1-2) routes are redistributed into other IP routing protocols; and Level 2
(level-2) routes are redistributed into other IP routing protocols independently.
This command has a number of additional optional parameters. These are:
• metric metric_value, which is used to specify the metric used for the redistributed route.
• metric-type type_value, which is an optional OSPF parameter that specifies the external link type
  associated with the default route advertised into the OSPF routing domain. This value can be 1 for type 1
  external routes, or 2 for type 2 external routes. The default is 2.
• match, which is an optional OSPF parameter that specifies the criteria by which OSPF routes are
  redistributed into other routing domains. It can be:
  - internal, which redistributes routes that are internal to a specific AS;
  - external 1, which redistributes routes that are external to the AS but that are imported into OSPF
    as type 1 external routes; and
  - external 2, which redistributes routes that are external to the AS but that are imported into OSPF
    as type 2 external routes.
• tag tag_value, which is a 32-bit decimal value attached to each external route. It is not used by the
  OSPF protocol itself, but it may be used to communicate information between autonomous system
  boundary routers. If no value is specified, the remote autonomous system number is used for routes
  from BGP and EGP; for other protocols, zero (0) is used.
• route-map, which instructs the redistribution process that a route map must be referenced to filter the
  routes imported from the source routing protocol to the current routing protocol. If it is not specified, all
  routes are redistributed because no filtering is performed. If this keyword is specified but no route map
  tags are listed, no routes will be imported.
• map_tag, which is the optional identifier of a configured route map to filter the routes imported from the
  source routing protocol to the current routing protocol.
• weight weight, which sets the network weight attribute when redistributing into BGP. The weight
  determines the preferred path out of a router when there are multiple paths to a remote network. This is
  an integer between 0 and 65535.
• subnets, which sets the scope of redistribution for the specified protocol and is used for redistributing
  routes into OSPF.
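As an illustrative example of two-way redistribution (the process IDs and metric values are hypothetical),
OSPF process 10 and EIGRP autonomous system 100 could be configured as follows:

router eigrp 100
 redistribute ospf 10 metric 10000 100 255 1 1500
!
router ospf 10
 redistribute eigrp 100 metric 50 metric-type 2 subnets

The subnets keyword is needed so that OSPF accepts subnetted networks; without it, only classful
networks are redistributed.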
10.4 The Default or Seed Metric
A metric is calculated in terms of how far the network is from the router. The router to which the network is
connected issues a seed metric. This seed metric is added to as the path information is passed through the
network in routing updates. However, a route that has been redistributed is not directly connected to the
router, so no seed metric can be determined. This is a problem because in accepting the new networks, the
receiving process must know how to calculate the metric. Therefore, it is necessary to define the default
metric to be assigned to the networks that are accepted from the other routing protocol. This is like manually
configuring the seed metric. This metric will be assigned to all the redistributed networks from that process
and will be incremented from now on as the networks are propagated throughout the new routing domain.
The default metric can be configured in several ways. The first is to include the metric in the redistribute
command with the metric parameter, as discussed in Section 10.3.
10.4.1 Configuring the Default Metric for OSPF, RIP, EGP or BGP-4
It is possible to redistribute the routing protocol and then, with a separate command, to state the default
metric. The advantage of this is a simpler configuration, which is helpful in troubleshooting. Also, if
more than one protocol is being redistributed into the routing protocol, the default metric applies to all the
protocols being redistributed. To configure the default metric for OSPF, RIP, EGP, or BGP-4, you can use
the default-metric number command.
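For example, assuming EIGRP autonomous system 100 is being redistributed into RIP (the values are
illustrative), all redistributed networks could be given a hop count of 3:

router rip
 redistribute eigrp 100
 default-metric 3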
10.4.2 Configuration for EIGRP or IGRP
To configure the default metric for IGRP or EIGRP, you can use the default-metric bandwidth delay
reliability loading mtu command. The parameters for this command are:
• bandwidth, which is the minimum bandwidth, in kilobits per second (kbps), seen on the route to the
  destination.
• delay, which is the delay experienced on the route, expressed in microseconds.
• reliability, which is the probability of a successful transmission given the history of this interface.
  The value is expressed as a number from 0 to 255, where 255 indicates that the route is stable and
  available.
• loading, which is a number in the range 0 to 255, where 255 indicates that the line is 100 percent
  loaded.
• mtu, which is the maximum packet size that can travel through the network.
Typically, you should take the values shown on one of the outgoing interfaces of the router being
configured, by issuing the show interface exec command.
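An illustrative sketch, assuming OSPF process 10 is being redistributed into EIGRP autonomous system
100, with values typical of a Fast Ethernet interface rather than prescriptive:

router eigrp 100
 redistribute ospf 10
 default-metric 100000 100 255 1 1500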
10.5 Configure the Administrative Distance
As mentioned in Section 10.2.1, it is important to ensure that routes redistributed into another routing
protocol are assigned an appropriate metric. However, it is equally important to consider the need to control
the choice that the routing process makes when presented with multiple routes to the same destination from
different routing protocols. The metric is not appropriate because the multiple routes are from different
routing protocols that are not redistributing. Changing the administrative distance (AD) allows the best path
to be chosen. The command structure for changing the AD is protocol-dependent.
10.5.1 Configuring the Administrative Distance in EIGRP
In EIGRP, the command syntax for changing the administrative distance is:
distance eigrp internal_distance external_distance
The two parameters in this command are internal_distance and external_distance. The
internal_distance parameter specifies the administrative distance for EIGRP internal routes. These are
routes learned from another entity within the same autonomous system, such as IGRP. The
external_distance parameter specifies the administrative distance for EIGRP external routes. These are
routes for which the best path is learned from a neighbor external to the autonomous system, such as EIGRP
from another autonomous system or another TCP/IP routing protocol such as OSPF.
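For example (the values are illustrative), to make EIGRP external routes less trusted than RIP routes
(AD 120) while leaving internal routes at their default of 90:

router eigrp 100
 distance eigrp 90 130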
10.5.2 Configuring the Administrative Distance in Other Protocols
To configure the administrative distance for the other IP protocols, you use the following command syntax:
distance weight [ ip_address wildcard_mask ]
[ access-list_number | name ] [ ip ]
The weight parameter in this command specifies the administrative distance. This can be an integer from 10
to 255, where 255 means that the route is unreachable. The values 0 to 9 are reserved for internal use. The
optional ip_address parameter allows filtering of networks according to the IP address of the router
supplying the routing information. The access-list_number | name optional parameter specifies the
number or name of a standard access list to be applied to the incoming routing updates. This allows the
filtering of the networks being advertised. Finally, the optional ip parameter specifies the IP-derived routes
for Intermediate System-to-Intermediate System (IS-IS).
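A hypothetical example, raising the distance of routes to network 10.0.0.0 learned via RIP from the
neighbor 192.168.1.2 (the addresses and list number are illustrative):

router rip
 distance 150 192.168.1.2 0.0.0.0 5
!
access-list 5 permit 10.0.0.0 0.255.255.255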
10.6 The Passive Interface
The passive interface is the interface that listens but does not speak. It is used for routing protocols that send
updates through every interface with an address that is included in the network command. If the routing
protocol is not running on the next-hop router, it is a waste of resources to send updates out of that
interface. The passive-interface command conserves limited resources without compromising the
integrity of the router; the router still processes all routes received on the interface. The command syntax to
configure a passive interface is:
passive-interface type number
where type and number indicate the interface to be made passive.
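For example (the network and interface are illustrative), the following runs RIP on network 10.0.0.0 but
suppresses RIP updates out of Ethernet 0, while routes received on that interface are still processed:
Router(config)# router rip
Router(config-router)# network 10.0.0.0
Router(config-router)# passive-interface ethernet 0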
10.7 Static Routes
Another method of controlling routes is to manually configure the entries into the routing table. This may be
done to avoid the need for a routing protocol to run on the network, reducing routing overhead to zero; this
approach is common on dial-up lines. This may also be done if there are two ASs that do not need to exchange the
entire routing table, but only a few routes. Another reason for using static routes is to change the mask of the
network. The command syntax for configuring the static route is:
ip route prefix mask { ip_address | interface } [ distance ] [ tag tag ]
[ permanent ]
This command defines the path by stating the next-hop router to which to send the traffic. This configuration
can be used only if the network address for the next-hop router is in the routing table. If the static route
needs to be advertised to other routers, it should be redistributed. The prefix parameter specifies the route
prefix for the destination. The mask parameter specifies the prefix mask for the destination. The ip_address
parameter specifies the IP address of the next-hop router that can be used to reach that network. The
interface parameter specifies the network interface to use to get to the destination network. The optional
distance parameter specifies the administrative distance to assign to this route. The optional tag tag
parameter specifies the value that can be used as a match value in route maps. The optional permanent
parameter specifies that the route will not be removed even if the interface associated with the route goes
down.
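For example (all addresses are illustrative), the following installs a static route to the 172.16.1.0/24
network via the next-hop router 10.1.1.2 and assigns it an administrative distance of 130. Because 130 is
higher than the distances of most dynamic routing protocols, this is a floating static route that is used only
when the dynamically learned route is lost:
Router(config)# ip route 172.16.1.0 255.255.255.0 10.1.1.2 130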
10.8 Controlling Routing Updates with Filtering
Filtering, in the form of access lists, provides a more flexible means of controlling and reducing the routing
updates on your network. Access lists that are applied to routing updates are referred to as distribute lists.
Routing updates can be filtered for any routing protocol by defining an access list and applying it to a
specific routing protocol. When creating a routing filter or distribute list, you should write out in longhand
what you are trying to achieve; identify the network addresses to be filtered, and create an access list;
determine whether you are filtering routing updates coming into the router or updates to be propagated to
other routers; and assign the access list using the distribute-list command. The syntax for the
distribute-list command used to configure the distribute list to filter incoming updates is:
distribute-list { access-list_number | name } in [ type number ]
In this command, the access-list-number | name parameter specifies the standard access list number or
name, the in parameter applies the access list to incoming routing updates, and the type number parameter
provides the optional interface type and number from which updates will be filtered.
The syntax for the distribute-list command used to configure the distribute list to filter outgoing
updates is:
distribute-list { access-list_number | name } out
[ interface_name | routing_process | autonomous_system_number ]
In this command, the out parameter applies the access list to outgoing routing updates, the optional
interface_name parameter specifies the interface name out which updates will be filtered, the optional
routing_process parameter specifies the name of the routing process, or the keyword static or
connected, from which updates will be filtered, and the optional autonomous_system_number specifies the
autonomous system number of routing process.
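For example (the network, access list, and interfaces are illustrative), the following prevents RIP from
accepting routes for the 10.1.0.0/16 network on Serial 0 and from advertising them out of Ethernet 0:
Router(config)# access-list 10 deny 10.1.0.0 0.0.255.255
Router(config)# access-list 10 permit any
Router(config)# router rip
Router(config-router)# distribute-list 10 in serial 0
Router(config-router)# distribute-list 10 out ethernet 0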
10.9 Policy-Based Routing Using Route Maps
Route maps are similar to access lists in that they state criteria that are used to determine whether specific
packets are to be permitted or denied. The main difference is that the route map has the additional capability
of adding a set criterion to the match criterion. In an access list, the match criterion is implicit; in a route
map, it is a keyword. This means that if a packet is matched to the criterion given in the route map, some
action can be taken to change the packet. Route maps can be used to control redistribution, to control and
modify routing information, and to define policies in policy routing. You can create a route map by using the
route-map command. The syntax for the route-map command is:
route-map map_tag [ [ permit | deny ] | [ sequence_number ] ]
This route-map command is followed by the route map configuration commands match and set.
The map_tag parameter specifies the name of the route map. This name is used to reference the route map
when using the redistribute router configuration command. The optional permit | deny parameter
specifies that the set actions must be permitted or denied if the match criteria for this route map are met.
The optional sequence_number parameter indicates the position that a new route map will have in the list of
route map statements already configured with the same name.
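For example (the route map name and addresses are illustrative), the following route map policy-routes
traffic sourced from the 192.168.1.0/24 subnet to the next hop 10.1.1.2, and is then applied to the interface
where that traffic enters the router:
Router(config)# access-list 1 permit 192.168.1.0 0.0.0.255
Router(config)# route-map LOCALPOLICY permit 10
Router(config-route-map)# match ip address 1
Router(config-route-map)# set ip next-hop 10.1.1.2
Router(config-route-map)# exit
Router(config)# interface ethernet 0
Router(config-if)# ip policy route-map LOCALPOLICY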
10.10 Managing the Redistribution
There are a number of commands that can be used to verify, manage, monitor and troubleshoot
redistribution. These include:
• show ip protocol, which displays the IP configuration on the router, including the interfaces and the
configuration of the IP routing protocols.
• show ip route, which provides detailed information on the networks that the router is aware of and the
preferred paths to those networks. It also gives the next logical hop as the next step in the path.
• show ip eigrp neighbors, which provides detailed information on the neighbors. This command
records the communication between the router and the neighbors as well as the interface and address by
which they communicate.
• show ip ospf database, which provides information about the contents of the topological database.
10.10.1 Troubleshooting Redistribution
In addition to these commands, you can use the traceroute and extended ping commands to troubleshoot
redistribution. Both of these commands can be used to test reachability. These commands were discussed in
detail in Section 4.7.
10.10.2 Monitoring Policy-Routing Configurations
You can use the following exec commands to monitor the policy-routing configuration:
• show ip policy, which displays the route maps used for policy routing on the router's interfaces.
• show route-map, which displays configured route maps.
• debug ip policy, which displays IP policy-routing packet activity. This command can be used to
determine what policy routing is doing. It displays information about whether a packet matches the
criteria and, if so, the resulting routing information for the packet.
11. Virtual LANs (VLANs) and Trunking
A fully Layer 2 switched network is referred to as a flat network topology. A flat network is a single
broadcast domain, such that every connected device sees every broadcast packet that is transmitted. As the
number of stations on the network increases, so does the number of broadcasts. Due to the Layer 2
foundation, flat networks cannot contain redundant paths for load balancing or fault tolerance. To gain any
advantage from additional paths to a destination, Layer 3 routing functions must be introduced.
As mentioned in Section 1.1.3, a switched environment offers the technology to overcome flat network
limitations. Switched networks can be subdivided into virtual LANs (VLANs). A VLAN is a single
broadcast domain. All devices connected to the VLAN receive broadcasts from other VLAN members.
However, devices connected to a different VLAN will not receive those same broadcasts, because each
VLAN is made up of defined members communicating as a logical network segment. A VLAN can have connected members
located anywhere in the campus network, as long as VLAN connectivity is provided between all members.
Layer 2 switches are configured with a VLAN mapping and provide the logical connectivity between the
VLAN members.
11.1 VLAN Membership
When a VLAN is provided at an access layer switch, an end user must be able to gain membership to it.
Two membership methods exist on Cisco Catalyst switches: static VLANs and dynamic VLANs.
• Static VLANs offer port-based membership, where switch ports are assigned to specific VLANs. End
user devices become members in a VLAN based on which physical switch port they are connected to.
No handshaking or unique VLAN membership protocol is needed for the end devices; they
automatically assume VLAN connectivity when they connect to a port. The static port-to-VLAN
membership is normally handled in hardware with application specific integrated circuits (ASICs) in the
switch. This membership provides good performance because all port mappings are done at the hardware
level with no complex table lookups needed.
You must enter the following commands in enable mode to configure static VLANs on an IOS-based
switch:
Switch# vlan database
Switch(vlan)# vlan vlan-number name vlan_name
Switch(vlan)# exit
Switch# configure terminal
Switch(config)# interface interface module_number/port_number
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan vlan_number
Switch(config-if)# end
You must enter the following commands in enable mode to configure static VLANs on a CLI-based switch:
Switch(enable) set vlan vlan-number [ name name ]
Switch(enable) set vlan vlan-number module_number/port_list
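For example (the VLAN number, name, and ports are arbitrary), the following creates VLAN 10 and
assigns ports to it on each type of switch.
On an IOS-based switch:
Switch# vlan database
Switch(vlan)# vlan 10 name Engineering
Switch(vlan)# exit
Switch# configure terminal
Switch(config)# interface fastethernet 0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# end
On a CLI-based switch:
Switch(enable) set vlan 10 name Engineering
Switch(enable) set vlan 10 2/1-12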
Note: To create a new VLAN, several prerequisites relating to VLAN
Trunking Protocol (VTP) must be met. The switch must be assigned to a
VTP domain and be configured for either server or transparent VTP mode.
VTP is discussed in Section 11.5.
• Dynamic VLANs are used to provide membership based on the MAC address of an end user device.
When a device is connected to a switch port, the switch must query a database to establish VLAN
membership. A network administrator must assign the user's MAC address to a VLAN in the database of
a VLAN Membership Policy Server (VMPS). With Cisco switches, dynamic VLANs are created and
managed through the use of network management tools like CiscoWorks 2000 or CiscoWorks for
Switched Internetworks (CWSI). Dynamic VLANs allow a great deal of flexibility and mobility for end
users, but require more administrative overhead.
11.2 Extent of VLANs
The number of VLANs to be implemented on a campus network depends on traffic patterns, application
types, segmentation of common workgroups, and network management requirements. However,
consideration must be given to the relationship between VLANs and the IP addressing schemes. Cisco
recommends a one-to-one correspondence between VLANs and IP subnets, which means that if a Class C
network address is used for a VLAN, then no more than 254 devices should be in the VLAN. Cisco also
recommends that VLANs not extend beyond the Layer 2 domain of the distribution switch, i.e., the VLAN
should not reach across the core of a network and into another switch block. This is designed to keep
broadcasts and unnecessary movement of traffic out of the core block. VLANs can be scaled in the switch
block by using two basic methods: end-to-end VLANs and local VLANs.
• End-to-end VLANs span the entire switch fabric of a network and are also called campus-wide VLANs.
They are positioned to support maximum flexibility and mobility of end devices. Users are assigned to
VLANs regardless of their physical location. This means that each VLAN must be made available at the
access layer in every switch block. End-to-end VLANs should group users according to common
requirements, following the 80/20 rule. Although only 20 percent of the traffic in a VLAN is expected to
cross the network core, end-to-end VLANs make it possible for all traffic within a single VLAN to cross
the core. Because all VLANs must be available at each access layer switch, VLAN trunking must be
used to carry all VLANs between the access and distribution layer switches.
• In the modern campus network, end users require access to central resources outside their VLAN. Users
must cross into the network core more frequently, making end-to-end VLANs cumbersome and difficult
to maintain. Most enterprise networks have adopted the 20/80 rule, and local VLANs are deployed in
this type of network. Local VLANs are designed to contain user communities based on geographic
boundaries, with little regard to the amount of traffic leaving the VLAN. They range in size from a
single switch in a wiring closet to an entire building. Local VLANs enable the Layer 3 function in the
campus network to intelligently handle the inter-VLAN traffic loads. This provides maximum
availability by using multiple paths to destinations, maximum scalability by keeping the VLAN within a
switch block, and maximum manageability.
11.3 VLAN Trunks
At the access layer, end user devices connect to switch ports that provide simple connectivity to a single
VLAN each. The attached devices are unaware of any VLAN structure.
A trunk link can transport more than one VLAN through a single switch port. A trunk link is not assigned to
a specific VLAN. Instead, one or more active VLANs can be transported between switches using a single
physical trunk link. Connecting two switches with separate physical links for each VLAN is also possible.
Cisco supports trunking on both Fast Ethernet and Gigabit Ethernet switch links, as well as aggregated Fast
EtherChannel links and Gigabit EtherChannel links.
11.3.1 VLAN Frame Identification
To distinguish between traffic belonging to different VLANs on a trunk link, the switch must be able to
identify each frame with the appropriate VLAN. Frame identification, or tagging, is the identification
method used on trunk links. Frame identification assigns a unique user-defined ID to each frame
transported on a trunk link. As each frame is transmitted over a trunk link, a unique identifier is placed in the
frame header. As each switch along the way receives these frames, the identifier is examined to determine to
which VLAN the frames belong. If frames must be transported out to another trunk link, the VLAN
identifier is retained in the frame header. If frames are destined out an access link, the switch removes the
VLAN identifier before transmitting the frames to the end station. Therefore, all traces of VLAN association
are hidden from the end station. VLAN identification can be performed using several methods. Each uses a
different frame identifier mechanism, and some are suited for specific network media. These include:
• Inter-Switch Link (ISL) protocol, a Cisco proprietary method for preserving the source VLAN
identification of frames passing over a trunk link. It is primarily used for Ethernet media, although
Cisco has included provisions to carry Token Ring, FDDI, and ATM frames over Ethernet ISL. ISL
performs frame identification at Layer 2 by encapsulating each frame between a header and trailer. Any
Cisco switch or router device configured for ISL can process and understand the ISL VLAN information.
• IEEE 802.1Q protocol, a non-proprietary method for preserving VLAN identification of frames passing
over a trunk link and thus allows VLAN trunks to exist and operate between equipment from different
vendors. The IEEE 802.1Q standard defines architecture for VLAN use, services provided with VLANs,
and protocols and algorithms used to provide VLAN services. Like Cisco ISL, IEEE 802.1Q can be used
for VLAN identification with Ethernet trunks. Instead of encapsulating each frame with a VLAN ID
header and trailer, 802.1Q embeds its tagging information within the Layer 2 frame. 802.1Q also
introduces the concept of a native VLAN on a trunk. Frames belonging to this VLAN are not
encapsulated with tagging information. In the event that an end station is connected to an 802.1Q trunk
link, the end station will be able to receive and understand only the native VLAN frames.
Note: Both ISL and 802.1Q tagging methods add to the length of an Ethernet
frame. ISL adds a total of 30 bytes to each frame, while 802.1Q adds 4
bytes. Because Ethernet frames cannot exceed 1518 bytes, the additional
VLAN tagging information can cause the frame to be too large.
• LAN Emulation (LANE) is used for trunking VLANs between switches over an Asynchronous
Transfer Mode (ATM) link. Here, VLANs are transported using the IEEE LAN Emulation (LANE)
standard. LANE is discussed in more detail in Section 13.2.
• IEEE 802.10, another Cisco proprietary method, for transporting VLAN information inside the
standard IEEE 802.10 FDDI frame. The VLAN information is carried in the Security Association
Identifier (SAID) field of the 802.10 frame.
11.3.2 Dynamic Trunking Protocol
Trunk links on Catalyst switches can be manually configured for either ISL or 802.1Q mode. However,
Cisco has a proprietary point-to-point protocol called Dynamic Trunking Protocol (DTP) that will negotiate
a common trunking mode between two switches if both switches belong to the same VLAN Trunking
Protocol (VTP) management domain. DTP is available in Catalyst supervisor engine software Release 4.2
and later. However, routers cannot participate in DTP negotiation, therefore, DTP negotiation should be
disabled if a switch has a trunk link connected to a router.
11.3.3 VLAN Trunk Configuration
By default, all switch ports are non-trunking and operate as access links. There are a number of commands
that can be used to configure VLAN trunks on both IOS-based and CLI-based switches.
On an IOS-based switch, you would use the following set of commands to create a VLAN trunk link:
Switch(config)# interface interface module_number/port_number
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk encapsulation { isl | dot1q }
Switch(config-if)# switchport trunk allowed vlan remove vlan_list
Switch(config-if)# switchport trunk allowed vlan add vlan_list
These commands place the switch port into trunking mode, using the encapsulation specified as either isl
or dot1q. The last two commands define which VLANs can be trunked over the link. To view the trunking
status on a switch port, use the following command:
show interface type module_number/port_number switchport
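For example (the interface and VLAN range are illustrative), the following configures a Gigabit Ethernet
port as an 802.1Q trunk and restricts it to VLANs 1 through 100:
Switch(config)# interface gigabitethernet 0/1
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan remove 101-1005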
On a CLI-based switch, you would use the set trunk command to create a VLAN trunk link. This
command sets the trunking mode and any mode negotiation. The set trunk command can also be used to
identify the VLANs that will be transported over the trunk link. The full syntax for this command is:
Switch(enable) set trunk module_number/port_number
[ on | off | desirable | auto | nonegotiate ]
vlan_range [ isl | dot1q | dot10 | lane | negotiate]
By default, a switch will transport all VLANs over a trunk link, even if a VLAN range is specified in the set
trunk command. Therefore, to remove VLANs from a trunk link, use the following command:
Switch(enable) clear trunk module_number/port_number vlan_range
If VLANs need to be added back to the trunk, they can be specified as the vlan_range in the set trunk
command.
The options for setting the trunking mode with the set trunk command are:
• on, which places the port in permanent trunking mode.
• off, which places the port in permanent non-trunking mode.
• desirable, which will cause the port to actively attempt to convert the link into trunking mode.
• auto, which will allow the port to convert the link into trunking mode and negotiate trunking.
• nonegotiate, which places the port in permanent trunking mode, but no DTP frames are generated for
negotiation.
The trunk encapsulation or identification mode is specified at the end of the set trunk command. These
values are:
• isl, which specifies that the Cisco ISL protocol should be used. This protocol is the default if no value
is specified.
• dot1q, which specifies that the IEEE 802.1Q standard protocol should be used.
• dot10, which specifies that the IEEE 802.10 protocol should be used (only on an FDDI switch port).
• lane, which specifies that LAN Emulation should be used (only on an ATM link).
• negotiate, which specifies that Fast and Gigabit Ethernet ports must negotiate to select either ISL or
IEEE 802.1Q.
To view and verify the trunk configuration on a switch, use the show trunk
[ module_number/port_number ] command.
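For example (the module/port and VLAN range are arbitrary), the following negotiates an 802.1Q trunk on
port 2/1, prunes VLANs 101 through 1005 from it, and then verifies the result:
Switch(enable) set trunk 2/1 desirable dot1q
Switch(enable) clear trunk 2/1 101-1005
Switch(enable) show trunk 2/1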
11.4 Service Provider Tunneling
When a campus network is physically divided across separate sites, Layer 2 connectivity must be acquired
from a service provider that can supply a VLAN between the locations, for example through Metro Ethernet.
Connecting a single trunk link to a service provider's network to transport many VLANs is better than using
many single-VLAN links. An IEEE 802.1Q trunk makes it possible for one or more VLANs to be transported
over a single physical connection, and also enables a whole trunk to be transported over a third-party
network. Instead of using IEEE 802.1Q trunks to tunnel across a service provider's network, Multiprotocol
Label Switching (MPLS) can be used.
11.4.1 IEEE 802.1Q Tunnels
An IEEE 802.1Q trunk port located at the edge of a campus network connects to a service provider's IEEE
802.1Q tunnel port. Each active VLAN on the trunk is tunneled across the service provider's central
network and ends at a customer remote location, or tunnel endpoint. With an 802.1Q tunnel, a second layer
of VLAN tagging is added to each frame on the trunk, encapsulating the whole trunk inside a new trunk: a
second, outer 4-byte tag is added to each frame. The end result is that the customer's trunk link is tagged
with a VLAN ID that matches the identity of the customer, and this outer VLAN ID is used to switch the
frames to the applicable remote tunnel endpoint. Because of the double layer of encapsulation, the Layer 3
addresses of the original frame are not visible inside the tunnel and cannot be examined while the frame is
tunneled. This process is also known as a Q-in-Q tunnel or a nested IEEE 802.1Q trunk.
The following commands are used to configure a 802.1Q tunnel:
Switch(config)# interface type mod/num
Switch(config-if)# switchport access vlan vlan-id
Switch(config-if)# switchport mode dot1q-tunnel
Switch(config-if)# exit
Switch(config)# vlan dot1q tag native
• switchport access vlan identifies the VLAN ID for the customer that is connected to the physical
interface.
• switchport mode dot1q-tunnel sets the switch port into tunnel mode.
• vlan dot1q tag native forces the service provider's edge switch to tag all native VLAN frames.
Native VLAN frames that originate within the service provider's core network are automatically tagged,
and ingress frames that arrive untagged on customer trunks are dropped.
11.4.2 Layer 2 Protocol Tunnels
When a switch communicates with another switch, it uses protocols such as the Spanning Tree Protocol
(STP), the Cisco Discovery Protocol (CDP), and the VLAN Trunking Protocol (VTP). The frames that carry
this switch-to-switch data, known as Layer 2 control protocol data units (PDUs), cannot be handled
correctly in a tunnel and are not forwarded across it by default.
Control PDUs are transmitted over VLAN 1 on a trunk. At a service provider's 802.1Q tunnel port, instead
of being tunneled, they are processed by the edge switch: CDP frames are interpreted by the edge switch,
while VTP and STP frames are not, because VTP and STP are not relevant to the service provider's internal
network.
Layer 2 protocol tunneling, which performs Generic Bridge PDU Tunneling (GBPT), is used to handle these
PDUs. When they are received at the service provider's edge, these frames are rewritten with the GBPT
destination MAC address 0100.0ccd.cdd0. Once encapsulated, they are sent across the tunnel and appear at
the far end as though they arrived from the native VLAN on the customer's trunk.
The following commands are used to configure Layer 2 Protocol Tunneling:
Switch(config)# interface type mod/port
Switch(config-if)# l2protocol-tunnel [cdp | stp | vtp]
Switch(config-if)# l2protocol-tunnel drop-threshold pps [cdp | stp | vtp]
Switch(config-if)# l2protocol-tunnel shutdown-threshold pps [cdp | stp | vtp]
• l2protocol-tunnel enables tunneling for all protocols, or you can specify which of the CDP, STP,
and VTP protocols to tunnel.
• The drop-threshold keyword limits the number of control frames tunneled to pps (1 to 4096) per
one-second period. Once the threshold is reached, any further control frames are dropped until that
second has passed.
• The shutdown-threshold keyword shuts the tunnel port down as soon as more than pps (1 to 4096)
control frames are received in a one-second period.
11.4.3 Ethernet Over Multiprotocol Label Switching (MPLS) Tunneling
MPLS can be used to forward packets over a large network. If a service provider has an MPLS core network,
Ethernet over MPLS (EoMPLS) can be used to tunnel customer traffic. EoMPLS is the mechanism used to
accomplish Layer 2 tunneling over an MPLS network.
Routers located at the edge of a service provider's core network act as edge label switch routers (LSRs).
The edge router examines incoming packets, matches them to the appropriate customer, and assigns an
MPLS label. LSRs within the MPLS network then examine only the MPLS labels when making forwarding
decisions.
By using the Cisco Tag Distribution Protocol (TDP) or the Label Distribution Protocol (LDP), LSRs
exchange label information and the manner in which to route a packet. The initial Layer 2 frame is
encapsulated as an MPLS frame and is given a new Layer 2 source and destination address corresponding
to the current and next-hop routers, respectively. An MPLS label is added below these addresses, and any
prior labels are simply pushed down the stack. An MPLS router examines the top label when making
forwarding decisions for the packet. All the labels form a stack, and the original Layer 3 packet is carried in
the frame after the last label. The packet is then sent over the MPLS network. The final edge router removes
the last label from the frame and forwards the decapsulated packet. The Layer 3 packet is therefore always
preserved inside the encapsulation.
EoMPLS uses the MPLS label stack to distinguish the customer and to identify the customer's VLAN ID.
EoMPLS tunnels frames between locations transparently at Layer 2 and, unlike plain MPLS forwarding,
keeps the entire original Layer 2 frame with its original source and destination addresses. To use an
EoMPLS tunnel, EoMPLS needs to be configured only on those edge routers that interface with a customer's
network; a contiguous MPLS network must also exist inside the service provider's core network.
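As a minimal sketch, assuming an edge router whose IOS supports the xconnect form of EoMPLS
configuration (the peer address, virtual circuit ID, and interface are illustrative):
Router(config)# interface gigabitethernet 0/1
Router(config-if)# xconnect 10.255.0.2 100 encapsulation mpls
Here, 10.255.0.2 would be the loopback address of the remote edge LSR, and the virtual circuit ID 100
must match on both ends of the tunnel.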
11.5 VLAN Trunking Protocol (VTP)
Campus network environments are usually made up of many interconnected switches, which makes
administration complicated. Cisco has developed a method to manage VLANs across the campus network
using the VLAN Trunking Protocol (VTP). VTP uses Layer 2 trunk frames to communicate VLAN
information among a group of switches. VTP also manages the addition, deletion, and renaming of VLANs
across the network from a central point of control.
VTP is organized into management domains or areas with common VLAN requirements. A switch can
belong to only one VTP domain. Switches in different VTP domains do not share VTP information.
Switches in a VTP domain advertise several attributes to their domain neighbors. Each advertisement
contains information about the VTP management domain, VTP revision number, known VLANs, and
specific VLAN parameters.
11.5.1 VTP Modes
To participate in a VTP management domain, each switch must be configured to operate in one of several
modes. The VTP mode will determine how the switch processes and advertises VTP information. The VTP
modes are: server mode, client mode, and transparent mode.
11.5.1.1 Server Mode
Server mode is the default mode. In this mode, VTP servers have full control over VLAN creation and
modification for their domains. All VTP information is advertised to other switches in the domain, while all
received VTP information is synchronized with the other switches. Because it is the default mode, server
mode can be used on any switch in a management domain, even if other server and client switches are in use.
This mode provides some redundancy in the event of a server failure in the domain.
11.5.1.2 Client Mode
Client mode is a passive listening mode. Switches in client mode listen to VTP advertisements from other switches and
modify their VLAN configurations accordingly. Thus the administrator is not allowed to create, change, or
delete any VLANs. If other switches are in the management domain, a new switch should be configured for
client mode operation. In this way, the switch will learn any existing VTP information from a server. If this
switch will be used as a redundant server, it should start out in client mode to learn all VTP information
from reliable sources. If the switch was initially configured for server mode instead, it might propagate
incorrect information to the other domain switches. Once the switch has learned the current VTP
information, it can be reconfigured for server mode.
11.5.1.3 Transparent Mode
Transparent mode does not allow the switch to participate in VTP negotiations. Thus, a switch does not
advertise its own VLAN configuration, and a switch does not synchronize its VLAN database with received
advertisements. VLANs can still be created, deleted, and renamed on the transparent switch. However, they
will not be advertised to other neighboring switches. VTP advertisements received by a transparent switch
will be forwarded on to other switches on trunk links.
11.5.2 VTP Advertisements
Each switch participating in VTP advertises VLANs, revision numbers, and VLAN parameters on its trunk
ports to notify other switches in the management domain. VTP advertisements are sent as multicast frames.
Because all switches in a management domain learn of new VLAN configuration changes, a VLAN need
only be created and configured on just one VTP server switch in the domain.
By default, management domains are set to use non-secure advertisements without a password. A password
can be added to set the domain to secure mode. The same password has to be configured on every switch in
the domain so that all switches exchanging VTP information will use identical encryption methods.
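For example (the domain name and password are arbitrary), the following commands place an IOS-based
switch in a VTP domain as a server and set a password for secure advertisements; on some IOS versions
these commands are entered in VLAN database configuration mode rather than global configuration mode:
Switch(config)# vtp domain CAMPUS
Switch(config)# vtp mode server
Switch(config)# vtp password bigsecret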
VTP Advertisements can originate as requests from client-mode switches that want to learn about the VTP
database at boot-up time. They can also originate from server-mode switches as VLAN configuration
changes occur. The VTP advertisement process starts with VTP revision number 0 (zero). This VTP revision
number is stored in nonvolatile random-access memory (NVRAM) and is not altered by a power cycle of the
switch. When subsequent changes are made, the revision number is incremented before advertisements are
sent out. When a listening switch receives an advertisement with a greater revision number than is locally
stored, its database will be updated with the new information. Therefore, any newly added network switches
should be initialized to VTP revision number zero. This can be done by changing the VTP mode of the switch
to transparent and then back to server, or by changing the VTP domain of the switch to a non-existent VTP
domain and then back to the original name. VTP advertisements can occur as summary advertisements,
subset advertisements, or client advertisement requests.
NVRAM: Catalyst switches in server mode use a separate nonvolatile random-access memory (NVRAM) for
VTP. All VTP information, including the VTP configuration revision number, is retained even when the
switch power is off. In this manner, a switch is able to recover the last known VLAN configuration from its
VTP database once it reboots.
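The revision-number comparison that drives this synchronization can be sketched as follows. This is an illustrative model only, not Cisco code; the class and method names are assumptions:

```python
# Illustrative sketch of VTP database synchronization (not Cisco source code).
# A switch accepts an advertised VLAN database only when the advertised
# configuration revision number is greater than the locally stored one.

class VtpSwitch:
    def __init__(self):
        self.revision = 0          # newly initialized switches start at zero
        self.vlan_db = {}          # vlan_id -> VLAN name

    def receive_advertisement(self, adv_revision, adv_vlan_db):
        """Replace the local database only for a higher revision number."""
        if adv_revision > self.revision:
            self.revision = adv_revision
            self.vlan_db = dict(adv_vlan_db)
            return True            # database replaced
        return False               # stale advertisement ignored

switch = VtpSwitch()
switch.receive_advertisement(5, {1: "default", 10: "engineering"})
assert switch.revision == 5
# An advertisement with a lower revision number is ignored:
assert not switch.receive_advertisement(3, {1: "default"})
```

This is also why a switch with a stale but higher revision number can overwrite a domain's VLAN database, and why new switches should be reset to revision zero before being connected.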
11.5.2.1 Summary Advertisements
VTP domain servers send summary advertisements every 300 seconds and every time a VLAN topology
change occurs. The summary advertisement lists information about the management domain, including VTP
version, domain name, configuration revision number, timestamp, MD5 encryption hash code, and the
number of subset advertisements to follow. For VLAN configuration changes, summary advertisements are
followed by one or more subset advertisements, with more specific VLAN configuration data.
11.5.2.2 Subset Advertisements
VTP domain servers send subset advertisements after a VLAN configuration change occurs. These
advertisements list the specific changes that have been performed, such as creation or deletion of a VLAN,
suspending or activating a VLAN, changing the name of a VLAN, and changing the MTU of a VLAN.
Subset advertisements can list the following VLAN parameters: status of the VLAN, VLAN type, MTU,
length of the VLAN name, VLAN number, SAID value, and the VLAN name.
11.5.2.3 Client Request Advertisements
A VTP client can request any VLAN information that it lacks. After a client advertisement request, the VTP
domain servers respond with summary and subset advertisements.
11.5.3 VTP Configuration
Before VLANs can be configured, VTP must be configured. By default, every switch will operate in VTP
server mode for the management domain NULL, with no password or secure mode. The following sections
discuss the commands and considerations that should be used to configure a switch for VTP operation.
11.5.3.1 Configuring a VTP Management Domain
Before a switch is added into a network, the VTP management domain should be identified. If this switch is
the first one on the network, the management domain will need to be created. Otherwise, the switch may
have to join an existing management domain with other existing switches.
The following command can be used to assign a switch to a management domain on an IOS-based switch:
Switch# vlan database
Switch(vlan)# vtp domain domain_name
To assign a switch to a management domain on a CLI-based switch, use the following command:
Switch(enable) set vtp [ domain domain_name ]
11.5.3.2 Configuring the VTP Mode
Once you have assigned the switch to a VTP management domain, you need to select the VTP mode for the
new switch. There are three VTP modes that can be selected: server mode, client mode and transparent
mode. These VTP modes were discussed in Section 11.5.1.
On an IOS-based switch, the following commands can be used to configure the VTP mode:
Switch# vlan database
Switch(vlan)# vtp domain domain_name
Switch(vlan)# vtp { server | client | transparent }
Switch(vlan)# vtp password password
On a CLI-based switch, the following command can be used to configure the VTP mode:
Switch(enable) set vtp [ domain domain_name ]
[ mode{ server | client | transparent }] [ passwd password ]
If the domain is operating in secure mode, a password can be included in the command line. The password
can have 8 to 64 characters.
11.5.3.3 Configuring the VTP Version
Two versions of VTP, VTP version 1 and VTP version 2, are available for use in a management domain.
Although VTP version 1 is the default protocol on a Catalyst switch, Catalyst switches are capable of
running both versions; however, the two versions are not interoperable within a management domain. Thus,
the same VTP version must be configured on each switch in a domain. A switch running VTP version 2 may
nonetheless coexist with version 1 switches, as long as its VTP version 2 is not enabled. This situation
becomes important if you want to use version 2 in a domain. Then, only one server mode switch needs to
have VTP version 2 enabled. The new version number is propagated to all other version 2-capable switches
in the domain, causing them to enable version 2 for use. By default, VTP version 1 is enabled. Version 2 can
be enabled or disabled using the v2 option. The two versions of VTP differ in the features they support. VTP
version 2 offers the following additional features over version 1:
• In transparent mode, VTP version 1 matches the VTP version and domain name before forwarding the
information to other switches using VTP. On the other hand, VTP version 2 in transparent mode
forwards the VTP messages without checking the version number.
• VTP version 2 performs consistency checks on the VTP and VLAN parameters entered from the CLI or
by Simple Network Management Protocol (SNMP). This checking helps prevent errors in such things as
VLAN names and numbers from being propagated to other switches in the domain. However, no
consistency checks are performed on VTP messages that are received on trunk links or on configuration
and database data that is read from NVRAM.
• VTP version 2 supports the use of Token Ring switching and Token Ring VLANs.
• VTP version 2 has Unrecognized Type-Length-Value (TLV) support, which means that VTP version 2
switches will propagate received configuration change messages out other trunk links, even if the switch
supervisor is not able to parse or understand the message.
On an IOS-based switch, the VTP version number is configured using the following commands:
Switch# vlan database
Switch(vlan)# vtp v2-mode
On a CLI-based switch, the VTP version number is configured using the following command:
Switch(enable) set vtp v2 enable
11.5.4 VTP Pruning
A switch must forward broadcast frames out all available ports in the broadcast domain because broadcasts
are destined everywhere there is a listener. Multicast frames, unless forwarded by more intelligent means,
follow the same pattern. In addition, frames destined for an address that the switch has not yet learned or has
forgotten must be forwarded out all ports in an attempt to find the destination. These frames are referred to
as unknown unicast. When forwarding frames out all ports in a broadcast domain or VLAN, trunk ports are
included. By default, a trunk link transports traffic from all VLANs, unless specific VLANs are removed
from the trunk with the clear trunk command. In a network with several switches, trunk links are enabled
between switches and VTP is used to manage the propagation of VLAN information. This causes the trunk
links between switches to carry traffic from all VLANs.
VTP pruning makes more efficient use of trunk bandwidth by reducing unnecessary flooded traffic.
Broadcast and unknown unicast frames on a VLAN are forwarded over a trunk link only if the switch on the
receiving end of the trunk has ports in that VLAN. VTP pruning occurs as an extension to VTP version 1.
When a Catalyst switch has a port associated with a VLAN, the switch sends an advertisement to its
neighbor switches that it has active ports on that VLAN. The neighbors keep this information, enabling them
to decide if flooded traffic from a VLAN should use a trunk port or not.
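The pruning decision can be modeled simply: with pruning enabled, flooded traffic for a VLAN crosses a trunk only if the neighbor has advertised active ports in that VLAN. The following is an illustrative sketch, not Cisco code; the function name is an assumption:

```python
# Illustrative model of the VTP pruning decision (not Cisco source code).
# With pruning enabled, broadcast and unknown-unicast frames for a VLAN are
# flooded over a trunk only if the neighbor switch advertised active ports
# in that VLAN. With pruning disabled, trunks carry all VLANs by default.

def should_flood_over_trunk(vlan, neighbor_active_vlans, pruning_enabled=True):
    if not pruning_enabled:
        return True                      # default behavior: flood everywhere
    return vlan in neighbor_active_vlans

# The neighbor advertised active ports only in VLANs 10 and 20:
neighbor = {10, 20}
assert should_flood_over_trunk(10, neighbor)       # forwarded
assert not should_flood_over_trunk(30, neighbor)   # pruned from the trunk
assert should_flood_over_trunk(30, neighbor, pruning_enabled=False)
```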
By default, VTP pruning is disabled on both IOS-based and CLI-based switches. On IOS-based switches, the
vtp pruning command in VLAN database configuration mode can be used to enable pruning, while on
CLI-based switches the set vtp pruning enable command can be used to enable it.
11.6 Token Ring VLANs
Only the Catalyst 5000 and the Catalyst 3900 switches support Token Ring using CLI-based commands. In
the basic topology of Token Ring networks, end stations are connected to multistation access units (MSAUs),
which interconnect with other MSAUs to form a ring. Multiple rings can be interconnected by bridges for
segmentation and frame forwarding using source-route bridging and the RIF information. The functionality
of rings and bridges is performed within the switches, using Token Ring switching functions.
Token Ring switching follows the same topology, but performs the various functions within the switch.
Where groups of end stations are connected by MSAUs in a ring, the IEEE has defined the Concentrator
Relay Function (CRF). The function of a multiport bridge to connect individual rings is defined as the
Bridge Relay Function (BRF).
11.6.1 TrBRF
A Catalyst switch connects logical Token Ring Concentrator Relay Functions (TrCRFs) with a logical
multiport bridge, or Token Ring Bridge Relay Function (TrBRF). In the hierarchy of bridged Token Rings,
each TrCRF must be connected to a parent TrBRF. By default, the TrBRF interconnects only TrCRFs
located on the local switch. However, if trunking is used with ISL encapsulation, the TrBRF can extend to
TrCRFs located on other Catalyst switches. Each TrBRF exists as a special VLAN within a Catalyst switch.
A switch can support many TrBRFs, but only one VLAN can be assigned to each TrBRF. By default, one
TrBRF is defined as "trbrf-default" on VLAN1005. Each TrBRF can operate as a source-route bridge
(SRB), a source-route transparent (SRT) bridge, or both as a mixed mode. Furthermore, each TrBRF runs a
separate instance of either the IBM or IEEE Spanning-Tree Protocol to prevent bridging loops. The
Spanning-Tree Protocol is covered in Chapter 5.
To define a TrBRF on a Catalyst switch, use the following command:
Switch(enable) set vlan vlan_number [ name name ] type trbrf
bridge bridge_number [ stp{ ieee | ibm }]
The only two required fields for a TrBRF are the vlan_number and the bridge_number.
11.6.2 TrCRF
In a Catalyst switch, individual Token Ring ports can be connected to a logical ring, or Token Ring VLAN,
by assigning them with identical ring numbers. Internally, the Catalyst performs the TrCRF to maintain the
ring connectivity. Frame forwarding between ports on a common ring is performed with source-route
switching, using either MAC addresses or route descriptors. The TrCRF can be confined within a single
switch or can be spread across multiple switches, depending on the topology and switch configuration.
When a TrCRF is contained completely within a switch, it is referred to as an undistributed TrCRF. A
TrCRF can be distributed across multiple switches if ISL trunking is enabled between switches and TrCRFs
with identical VLAN numbers are defined. By default, one TrCRF is defined on every Catalyst switch as
"trcrf-default" on VLAN1003. The trcrf-default is also assigned to the trbrf-default. If ISL trunking is in
use, every Token Ring port on every switch will be defined to the same distributed TrCRF. However,
because only one TrBRF is defined by default, no bridging will occur; instead, source-route switching will
be performed to forward frames between switch ports within the TrCRF.
To define a TrCRF on a Catalyst switch, use the following command:
Switch(enable) set vlan vlan_number [ name name ] type trcrf
{ ring hexadecimal_ring_number | decring decimal_ring_number }
parent vlan_number
On the Catalyst 5000, a single TrCRF can be distributed across multiple switches. To enable this feature, use
the set tokenring distrib-crf enable command. After a TrCRF VLAN has been created, switch ports
can be assigned to it. As with Ethernet switching, the following command is used to assign ports to a VLAN:
Switch(enable) set vlan vlan_number module_number/port_number
You can use the show vlan command to view the current Token Ring VLAN configuration.
Catalyst switches also offer a form of redundancy for Token Ring switching. When two switches are
connected by a common TrBRF and ISL trunking is enabled, connectivity between the TrCRFs in the
switches could be disrupted if the ISL trunk link fails. A backup TrCRF can be used to provide a backup
path for redundancy purposes. For each TrBRF, a single backup TrCRF can be defined with a single port
from each connected switch. Only one of the TrCRF ports will be active at all times, while the other ports
will be disabled. If the ISL trunk link goes down, the backup TrCRF links will become active and pass
traffic between switches.
To enable a backup TrCRF, you must first define a TrCRF that spans between switches. Then you can
assign one port from each switch to the backup TrCRF. Finally, use the set vlan vlan_number
backupcrf on command to enable the backup TrCRF function.
11.6.3 VTP and Token Ring VLANs
The VLAN Trunking Protocol (VTP) can be used in a Token Ring network to simplify VLAN
administration. VTP allows TrCRF information to be propagated to all switches in a management domain.
VTP pruning can also be performed on Token Ring VLANs. Both the trbrf-default and the trcrf-default are
always pruning ineligible. VTP pruning is configured on a per-TrBRF basis. When a TrBRF is made
pruning-eligible, all TrCRFs connected to it are also made pruning-eligible.
11.6.4 Duplicate Ring Protocol (DRiP)
Catalyst switches also have a Duplicate Ring Protocol (DRiP) which is a mechanism to monitor the use of
TrCRFs or ring numbers within a domain of switches. DRiP collects and maintains the status of TrCRFs that
are interconnected by TrBRFs. This information is used to prevent duplicate ring numbers from being
assigned to TrCRFs; to filter All-Routes Explorer (ARE) frames from re-entering TrCRFs that they have
already visited; and to operate the backup TrCRF function when an ISL trunk link fails. Every switch
participating in Token Ring switching sends a DRiP advertisement out all ISL trunk ports every 30 seconds.
Advertisements are sent to multicast address 01:00:0c:cc:cc:cc and are sent only on the default VLAN1.
When a switch receives the multicast advertisements, the switch does not forward the advertisements on to
other switches over ISL links unless the advertisements contain new information. As well, a switch
compares advertisements to the information in its own configuration. If it detects that a TrCRF has already
been configured elsewhere, the local TrCRF configuration will be denied.
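The duplicate-detection behavior described above can be sketched as follows. This is an illustrative model of the logic only, not Cisco code; the class and method names are assumptions:

```python
# Illustrative sketch of DRiP duplicate-ring detection (not Cisco source code).
# A switch records the TrCRF ring numbers advertised by other switches and
# denies a local TrCRF configuration that would duplicate one of them.

class DripMonitor:
    def __init__(self):
        self.remote_rings = set()     # ring numbers advertised by other switches

    def receive_advertisement(self, ring_numbers):
        """Record ring numbers learned from DRiP advertisements."""
        self.remote_rings.update(ring_numbers)

    def configure_trcrf(self, ring_number):
        """Return True if the local TrCRF configuration is accepted."""
        return ring_number not in self.remote_rings

drip = DripMonitor()
drip.receive_advertisement({0x10, 0x20})
assert drip.configure_trcrf(0x30)       # unused ring number: accepted
assert not drip.configure_trcrf(0x10)   # duplicate ring number: denied
```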
12. Redundant Switch Links
12.1 Switch Port Aggregation with EtherChannel
Switches can use Ethernet, Fast Ethernet, Gigabit Ethernet, or 10 Gigabit Ethernet ports to scale link speeds
by a factor of ten. Cisco offers another method of scaling link bandwidth by aggregating or bundling parallel
links, using EtherChannel technology. Two to eight links of either Fast Ethernet (FE) or Gigabit Ethernet
(GE) can be bundled as one logical link of Fast EtherChannel (FEC) or Gigabit EtherChannel (GEC),
respectively. This bundle provides a full-duplex bandwidth of up to 1600 Mbps on 8 links of Fast Ethernet
or 16 Gbps on 8 links of Gigabit Ethernet. Switches should not be configured to have multiple links
connected to form a loop. EtherChannel avoids this situation by bundling parallel links into a single logical
link, which can act as either an access or a trunk link. Traffic is balanced across the individual links within
the EtherChannel. EtherChannel also provides redundancy through the use of the several bundled physical
links. If one of the links in the bundle fails, traffic sent through that link will move to an adjacent link. When
links are restored, the load will be redistributed among the links.
12.1.1 Bundling Ports with EtherChannel
Fast EtherChannel is available on the Catalyst 1900, 2820, 2900, 2900XL, 3500XL, 4000, 5000, and 6000
families. Gigabit EtherChannel is supported only on the Catalyst 2900, 2900XL, 4000, 5000, and 6000
families. Most of the switch families support a maximum of four Fast Ethernet or Gigabit Ethernet links
bundled in a single EtherChannel link. However, the Catalyst 6000 family supports up to eight links bundled
in a single EtherChannel and up to 128 individual EtherChannel links per switch.
Generally, all bundled ports must first belong to the same VLAN. If used as a trunk, bundled ports must all
be in trunking mode and pass the same VLANs. Also, each of the ports must be of the same speed and
duplex settings before they are bundled.
12.1.2 Distributing Traffic in EtherChannel
Traffic in an EtherChannel is statistically load-balanced across the individual links bundled together.
However, the load is not necessarily balanced equally across all of the links. Instead, frames are forwarded
on a specific link as a function of the addresses present in the frame. Some combination of source and
destination addresses is used to form a binary pattern used to select a link number in the bundle. Switches
perform an exclusive-OR (XOR) operation on one or more low-order bits of the addresses to determine what
link to use.
In a two-link EtherChannel, the XOR operation is performed independently on each bit position in the
address value. If the two address values have the same bit value, the XOR result is 0. If the two address bits
differ, the XOR result is 1. In this way, frames can be statistically distributed among the links with the
assumption that MAC or IP addresses are statistically distributed throughout the network. In a four-link
EtherChannel, the XOR is performed on the lower two bits of the address values resulting in a two-bit XOR
value or a link number from 0 to 3.
Communication between two devices will always be sent through the same EtherChannel link because the
two endpoint addresses stay the same. However, when a device communicates with several other devices,
chances are that the destination addresses are equally distributed with zeros and ones in the last bit. This
causes the frames to be distributed across the EtherChannel links.
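The XOR-based link selection described above can be sketched in a few lines. This is an illustrative model only, not Cisco hardware logic; the function name and treatment of addresses as integers are assumptions:

```python
# Illustrative sketch of EtherChannel link selection by XOR (not Cisco code).
# The low-order bits of the source and destination addresses are XOR'd;
# with n links (n a power of two), the result indexes a link 0..n-1.

def select_link(src_addr: int, dst_addr: int, num_links: int) -> int:
    """XOR the low-order bits of the two addresses to pick a link."""
    mask = num_links - 1                 # 1 bit for 2 links, 2 bits for 4
    return (src_addr ^ dst_addr) & mask

# The same pair of endpoints always hashes to the same link:
assert select_link(0x0A, 0x0B, 4) == select_link(0x0A, 0x0B, 4)
# Varying destination addresses spread frames across links 0-3:
links = {select_link(0x0A, dst, 4) for dst in range(4)}
assert links == {0, 1, 2, 3}
```

Note how the distribution is statistical rather than balanced: traffic between one pair of hosts never moves between links, so an even spread depends on address diversity.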
Switches with an Ethernet Bundling Controller (EBC) are limited to distributing frames based on source and
destination MAC addresses only. For each frame, the source MAC address is XOR'd with the destination
MAC address. Because this is the only choice, no switch configuration is necessary.
12.1.3 Port Aggregation Protocol (PAgP)
Cisco developed the Port Aggregation Protocol (PAgP) to provide automatic EtherChannel configuration
and negotiation between switches. PAgP packets are exchanged between switches over EtherChannel-capable ports. The identities of neighbors and their port group capabilities are learned and compared with
local switch capabilities. Ports that have the same neighbor device ID and port group capability will be
bundled together as a bidirectional, point-to-point EtherChannel link. PAgP will form an EtherChannel only
on ports that are configured for either identical static VLANs or trunking. PAgP also dynamically modifies
parameters of the EtherChannel if one of the bundled ports is modified. When ports are bundled into an
EtherChannel, all broadcasts and multicasts are sent over one port in the bundle only. Broadcasts will not be
sent over the remaining ports and will not be allowed to return over any other port in the bundle. Switch
ports can be configured for one of the following PAgP modes:
• Auto is the default mode. In this mode, PAgP packets are sent to negotiate an EtherChannel only if the
far end initiates EtherChannel negotiations. Auto mode is thus a passive mode that requires a neighbor in
desirable mode.
• On. In this mode, the ports will always be bundled as an EtherChannel. No negotiation takes place
because PAgP packets are not sent or processed.
• Off. In this mode, the ports will never be bundled as an EtherChannel. They will remain as individual
access or trunk links.
• Desirable. In this mode, PAgP packets are sent to actively negotiate an EtherChannel. This mode starts
the negotiation process and will bring up a successful EtherChannel with another switch in either
desirable or auto mode.
The following command is used to configure switch ports for PAgP:
Switch(config)# interface type mod/num
Switch(config-if)# channel-protocol pagp
Switch(config-if)# channel-group number mode {on | auto | desirable}
12.1.4 Link Aggregation Control Protocol (LACP)
LACP, defined in IEEE 802.3ad, can be used instead of PAgP. LACP operates much like PAgP and can be
configured in active or passive mode. The difference is that LACP allocates roles to the EtherChannel's
endpoints. The switch with the lowest system priority, a 2-byte priority value followed by a 6-byte switch
MAC address, decides which ports actively partake in the EtherChannel at a given time. Ports are
chosen and activated according to their port priority value, a 2-byte priority followed by a 2-byte port
number. A low value means a higher priority. A collection of up to 16 possible links can be
defined for every EtherChannel. With LACP, a switch chooses up to eight of these links with the lowest port
priorities as active EtherChannel links. The rest of the links are put in a standby state. They are only enabled
when an active link is down.
The following command is used to configure a LACP EtherChannel:
Switch(config)# lacp system-priority priority
Switch(config)# interface type mod/num
Switch(config-if)# channel-protocol lacp
Switch(config-if)# channel-group number mode {on | passive | active}
Switch(config-if)# lacp port-priority priority
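The active/standby selection that LACP performs can be sketched as follows. This is an illustrative model only, not Cisco code; the function name and tuple layout are assumptions:

```python
# Illustrative sketch of LACP active-link selection (not Cisco source code).
# Up to 16 links may be defined per EtherChannel; the 8 with the lowest
# port-priority values become active, and the remainder stand by.

def select_active_links(links, max_active=8):
    """links: list of (port_priority, port_number) tuples; lower value wins.
    Ties in priority are broken by the lower port number."""
    ordered = sorted(links)              # sorts by priority, then port number
    active = ordered[:max_active]
    standby = ordered[max_active:]
    return active, standby

links = [(100, p) for p in range(1, 11)]      # 10 links, equal priority
active, standby = select_active_links(links)
assert len(active) == 8 and len(standby) == 2
assert standby == [(100, 9), (100, 10)]       # highest port numbers stand by
```

A standby link is promoted to active only when one of the active links fails, which matches the behavior described above.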
12.1.5 EtherChannel Configuration
Before configuring switch ports into an EtherChannel bundle, you should ensure that the switch module
supports it by using the show port capabilities [ module_number/port_number ] command. The
output of the show port capabilities command will show the acceptable port groupings, if they are
available.
The following guidelines apply to the switch ports that will be grouped into an EtherChannel:
• All ports should be assigned to the same VLAN or configured for trunking. In the latter case, all ports
should have the same trunk mode and should carry the same VLANs over the trunk.
• All ports should be configured for the same speed and duplex mode.
• The ports should not be configured as dynamic VLAN ports.
• All ports should be enabled.
To configure an EtherChannel on a CLI-based switch, use the following command:
Switch(enable) set port channel module_number/port_range mode { on | off | desirable | auto }
On an IOS-based switch, use the following command:
Switch(config-if)# port group group_number [ distribution{ source | destination }]
Information about the current EtherChannel configuration can be obtained by using the show port channel
[ module_number/port_number ] [ info | statistics ] command on a CLI-based switch and the
show port group [ group_number ] command on an IOS-based switch.
12.2 Spanning-Tree Protocol (STP)
In a Layer 3 environment, the routing protocols keep track of redundant paths to a destination network so
that a secondary path can be quickly utilized if the primary path fails. Layer 3 routing allows many paths to
a destination to remain up and active and allows load sharing across multiple paths. However, in a Layer 2
environment, no routing protocols are used and, hence, redundant paths are not allowed. Instead, the
Spanning-Tree Protocol (STP) is used to provide network link redundancy and load balancing so that a
Layer 2 switched network can recover from failures without intervention in a timely manner.
A Layer 2 switch mimics the function of a transparent bridge. A transparent bridge must offer segmentation
between two networks, while remaining transparent to all the end devices connected to it. A transparent
bridge operates in the following manner:
• The bridge has no initial knowledge of the location of any end device; therefore, the bridge must listen
to frames coming into each of its ports to figure out on which network a device resides.
• The bridge constantly updates its bridging table upon detecting the presence of a new MAC address or
upon detecting a MAC address that has changed location from one bridge port to another. The bridge is
then able to forward frames by looking at the destination address, looking up the address in the bridge
table, and sending the frame out the port where the destination device is located.
• If a frame arrives with the broadcast address as the destination address, the bridge must forward or flood
the frame out all available ports. However, the frame is not forwarded out the port that initially
received the frame. Hence, broadcasts are able to reach all available networks. A bridge only segments
collision domains; it does not segment broadcast domains.
• If a frame arrives with a destination address that is not found in the bridge table, the bridge is unable
to determine which port to forward the frame to for transmission. This is known as an unknown unicast.
In this case, the bridge treats the frame as if it were a broadcast and forwards it out all remaining ports.
After a reply to that frame is received, the bridge will learn the location of the unknown station and add
it to the bridge table.
• Frames that are forwarded across the bridge cannot be modified.
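The learn-and-forward behavior above can be sketched compactly. This is an illustrative model of transparent bridging only, not switch firmware; the class and method names are assumptions:

```python
# Illustrative sketch of transparent-bridge forwarding (not Cisco source code).
# The bridge learns source addresses, forwards known unicasts out a single
# port, and floods broadcasts and unknown unicasts out all other ports.

BROADCAST = "ff:ff:ff:ff:ff:ff"

class Bridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                   # MAC address -> port

    def receive(self, frame_src, frame_dst, in_port):
        """Learn the source, then forward; return the set of output ports."""
        self.table[frame_src] = in_port   # learn or update source location
        if frame_dst == BROADCAST or frame_dst not in self.table:
            # Broadcast or unknown unicast: flood out all other ports.
            return self.ports - {in_port}
        return {self.table[frame_dst]}    # known unicast: single port

br = Bridge(ports=[1, 2, 3])
assert br.receive("a", BROADCAST, 1) == {2, 3}     # broadcast flooded
assert br.receive("b", "a", 2) == {1}              # "a" was learned on port 1
assert br.receive("a", "c", 1) == {2, 3}           # unknown unicast flooded
```

With two such bridges connected in parallel, the flooding branch is exactly what produces a bridging loop, which is the problem STP was designed to solve.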
Bridging or switching in this fashion works well but offers no additional links for redundancy purposes. To
add redundancy, a second switch must be added. Now two switches offer the transparent bridging function
in parallel. However, when the switches receive an unknown unicast, both will flood the frame out all their
available ports, including the ports that link to the other switch, resulting in what is known as a bridging
loop, as the frame is forwarded around and around between two switches. This occurs because parallel
switches are unaware of each other. The Spanning-Tree Protocol (STP) was developed to overcome the
possibility of bridging loops. It enables switches to become aware of each other so that they can negotiate a
loop-free path through the network. Loops are discovered before they are opened for use, and redundant
links are shut down to prevent the loops from forming. STP is communicated between all connected
switches on a network. Each switch executes the Spanning-Tree Algorithm (STA) based on information
received from other neighboring switches. The algorithm chooses a reference point in the network and
calculates all the redundant paths to that reference point. When redundant paths are found, STA picks one
path to forward frames with and disables or blocks forwarding on the other redundant paths. STP computes a
tree structure that spans all switches in a subnet or network. Redundant paths are placed in a blocking or
standby state to prevent frame forwarding. The switched network is then in a loop-free condition. However,
if a forwarding port fails or becomes disconnected, the STA will run again to recompute the Spanning-Tree
topology so that blocked links can be reactivated.
By default, STP is enabled on all ports of a switch. STP should remain enabled in a network to prevent
bridging loops from forming. However, if STP has been disabled on a CLI-based switch, it can be re-enabled with the following command:
Switch (enable) set spantree enable [ all | module_number/port_number ]
If STP has been disabled on an IOS-based switch, it can be re-enabled with the following command:
Switch (config)# spantree vlan_list
You can use the show spantree [ vlan ] command to view the status of STP on either a CLI- or IOS-based switch.
12.3 Spanning-Tree Communication
STP operates as switches communicate with one another. Data messages are exchanged in the form of
Bridge Protocol Data Units (BPDUs). A switch sends a BPDU frame out a port, using the unique MAC
address of the port itself as a source address. As the switch is unaware of the other switches around it, the
BPDU frame has a destination address of the well known STP multicast address 01-80-c2-00-00-00 to reach
all listening switches. There are two types of BPDUs: the Configuration BPDU, which is used for Spanning
Tree computation; and the Topology Change Notification (TCN) BPDU, which is used to announce
changes in the network topology.
The exchange of BPDU messages works toward the goal of electing reference points as a foundation for a
stable Spanning-Tree topology. Also, loops will be identified and removed by placing specific redundant
ports in a blocking or standby state.
12.3.1 Root Bridge Election
For all switches in a network to agree on a loop-free topology, a common frame of reference must exist.
This reference point is called the Root Bridge. The Root Bridge is chosen by an election process among all
connected switches. Each switch has a unique Bridge ID that it uses to identify itself to other switches. The
Bridge ID is an 8-byte value. Two bytes of the Bridge ID are used for a Bridge Priority field, which is the
priority or weight of a switch in relation to all other switches. The other six bytes of the Bridge ID are used for
the MAC Address field, which can come from the Supervisor module, the backplane, or a pool of 1024
addresses that are assigned to every Supervisor or backplane depending on the switch model. This address is
hard coded, unique, and cannot be changed.
The election process begins with every switch sending out BPDUs with a Root Bridge ID equal to its own
Bridge ID as well as a Sender Bridge ID. The latter is used to identify the source of the BPDU message.
Received BPDU messages are analyzed for a lower Root Bridge ID value. If a BPDU message carries a Root
Bridge ID lower than the switch's own Root Bridge ID, the switch replaces its own Root Bridge ID with
the Root Bridge ID announced in the BPDU. If two Bridge Priority values are equal, then the lower MAC
address takes preference. The switch then nominates the new Root Bridge ID in its own BPDU messages
although it will still identify itself as the Sender Bridge ID. Once the process has converged, all switches
will agree on the Root Bridge until a new switch is added.
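The comparison that drives this election can be sketched as follows. This is an illustrative model only, not switch firmware; the function names and example MAC addresses are assumptions:

```python
# Illustrative sketch of the Root Bridge election (not Cisco source code).
# The Bridge ID is an 8-byte value: a 2-byte Bridge Priority followed by a
# 6-byte MAC address. The numerically lowest Bridge ID wins; with equal
# priorities, the lower MAC address takes preference.

def bridge_id(priority, mac):
    """Compose a comparable (priority, MAC) Bridge ID."""
    return (priority, mac)

def elect_root(bridge_ids):
    """All switches converge on the lowest Bridge ID in the network."""
    return min(bridge_ids)

switches = [
    bridge_id(32768, "00:10:7b:aa:aa:aa"),
    bridge_id(32768, "00:10:7b:bb:bb:bb"),
    bridge_id(8192,  "00:10:7b:cc:cc:cc"),   # lowered priority to win
]
assert elect_root(switches) == (8192, "00:10:7b:cc:cc:cc")
```

Python's tuple ordering compares the priority first and the MAC address only on a tie, which mirrors the two-field comparison the election performs.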
The Root Bridge election is based on the idea that one switch is chosen as a common reference point, and all
other switches choose ports that are closest to the Root. The Root Bridge election is also based on the idea
that the Root Bridge can become a central hub that interconnects other legs of the network. Therefore, the
Root Bridge can be faced with heavy switching loads in its central location. If heavy loads of traffic are
expected to pass through the Root Bridge, the slowest switch is not the ideal candidate. Furthermore, only
one Root Bridge is elected, which by itself is not fault tolerant. To overcome these problems, you should set
the Root Bridge in a deterministic fashion and designate a secondary Root Bridge in case the primary Root Bridge fails.
The Root Bridge and the secondary Root Bridge should be placed near the center of the network.
To configure a CLI-based Catalyst switch to become the Root Bridge, use the following command to modify
the Bridge Priority value so that a switch can be given a lower Bridge ID value to win a Root Bridge
election:
Switch (enable) set spantree priority bridge_priority [ vlan ]
Leading the way in IT testing and certification tools, www.testking.com
- 162 -
CCNP/CCDP 642-891 (Composite)
Alternatively, you can use the following command:
Switch (enable) set spantree root [ secondary ] [ vlan_list ]
[ dia diameter ] [ hello hello_time ]
This command is a macro that executes several other commands. The result is a more direct and automatic
way to force one switch to become the Root Bridge. Actual Bridge Priorities are not given in the command.
Rather, the switch will modify STP values according to the current values in use within the active network.
To configure an IOS-based Catalyst switch to become the Root Bridge, use the following command to
modify the Bridge Priority value so that a switch can be given a lower Bridge ID value to win a Root Bridge
election:
Switch (config)# spanning-tree [ vlan vlan_list ] priority
bridge_priority
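As a hedged illustration of the IOS-based command (the VLAN range and priority value are hypothetical), the priority could be lowered from the default of 32768 to 8192 so that this switch wins the election for VLANs 1 through 10:

```
Switch(config)# spanning-tree vlan 1-10 priority 8192
```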
12.3.2 Root Ports Election
Once a reference point has been nominated and elected for the entire switched network, each non-root
switch must find its relation to the Root Bridge. This action can be performed by selecting only one Root
Port on each non-root switch. STP uses the Root Path Cost to select a Root Port. The Root Path Cost is the
cumulative cost of all the links leading to the Root Bridge. A particular switch link has a cost associated
with it, called the Port or Path Cost, which is inversely proportional to the port's bandwidth. As the Root Path
Cost travels along, each switch adds its own Path Cost to make the value cumulative. The Path Cost itself is known only
to the local switch where the port or "path" to a neighboring switch resides, as it is not contained in the
BPDU. Only the Root Path Cost is contained in the BPDU. Path Costs are defined as a 16-bit value.
The Root Bridge sends out a BPDU with a Root Path Cost value of zero because its ports sit directly on the
Root Bridge. When the next closest neighbor receives the BPDU, it adds the Path Cost of its own port where
the BPDU arrived. The neighbor then sends out BPDUs with this new cumulative value as the Root Path
Cost. This value is incremented by subsequent switch port Path Costs as the BPDU is received by each
switch on down the line. After incrementing the Root Path Cost, a switch also records the value in its
memory. When a BPDU is received on another port and the new Root Path Cost is lower than the previously
recorded value, this lower value becomes the new Root Path Cost. In addition, the lower cost tells the switch
that the Root Bridge must be closer to this port than it was on other ports. The switch has now determined
which of its ports is the closest to the root—the Root Port.
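A minimal sketch of this accumulation, assuming hypothetical port names and 802.1D-style cost values (Fast Ethernet = 19, Gigabit Ethernet = 4):

```python
# A switch adds the receiving port's Path Cost to the Root Path Cost
# carried in each BPDU and keeps the lowest cumulative value; the port
# with that lowest value becomes the Root Port.

def pick_root_port(bpdus):
    """bpdus maps port -> (advertised Root Path Cost, local port cost)."""
    totals = {port: rpc + cost for port, (rpc, cost) in bpdus.items()}
    return min(totals, key=totals.get), totals

# BPDU on port 1/1 (Gigabit link, cost 4) advertising a Root Path Cost
# of 19; BPDU on port 1/2 (Fast Ethernet, cost 19) sent directly by the
# Root Bridge (advertised cost 0).
root_port_name, totals = pick_root_port({"1/1": (19, 4), "1/2": (0, 19)})
```

Here the direct Fast Ethernet path (cumulative cost 19) beats the two-hop path through the Gigabit link (cumulative cost 23), so port 1/2 becomes the Root Port.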
If desired, the cost of a port can be modified from the default value. However, changing one port's cost may
influence STP to choose that port as a Root Port. Therefore careful calculation is required to ensure that the
desired path will be elected. On a CLI-based switch, the port cost can be modified by using one of the
following commands:
Switch (enable) set spantree portcost module_number/port_number cost
or
Switch (enable) set spantree portvlancost module_number/port_number
[ cost cost ] [ vlan_list ]
On an IOS-based switch, the port cost for individual VLANs can be modified by using the following
command:
Switch (config-if)# spanning-tree [ vlan vlan_list ] cost cost
12.3.3 Designated Ports Election
Once the Root Path Cost values have been computed and the Root Ports have been identified, all other
links are still connected and could be active, leaving bridging loops. To remove the bridging loops, STP
makes a final computation to identify one Designated Port on each network segment that forwards
traffic to and from that segment. Switches choose a Designated Port based on the lowest cumulative Root
Path Cost to the Root Bridge. At this point, all ports are still active and bridging loops are still possible, so STP has a set of
progressive states that each port must go through, regardless of its type or identification. These states
actively prevent loops from forming.
12.4 STP States
To participate in STP, each port of a switch must progress through several states. A port begins in a
Disabled state moving through several passive states and finally into an active state if allowed to forward
traffic. The STP port states are: Disabled, Blocking, Listening, Learning, and Forwarding.
• Ports that are administratively shut down by the network administrator, or by the system due to a fault condition, are in the Disabled state. This state is special and is not part of the normal STP progression for a port.
• After a port initializes, it begins in the Blocking state so that no bridging loops can form. In the Blocking state, a port cannot receive or transmit data and cannot add MAC addresses to its address table. Instead, a port is only allowed to receive BPDUs. Ports that are put into standby mode to remove a bridging loop also enter the Blocking state.
• A port is moved from the Blocking state to the Listening state if the switch thinks that the port can be selected as a Root Port or Designated Port. In the Listening state, the port still cannot send or receive data frames. However, the port is allowed to receive and send BPDUs so that it can actively participate in the Spanning-Tree topology process. Here the port is finally allowed to become a Root Port or Designated Port because the switch can advertise the port by sending BPDUs to other switches. Should the port lose its Root Port or Designated Port status, it is returned to the Blocking state.
• After a period of time called the Forward Delay in the Listening state, the port is allowed to move into the Learning state. The port still sends and receives BPDUs as before. In addition, the switch can now learn new MAC addresses to add to its address table.
• After another Forward Delay period in the Learning state, the port is allowed to move into the Forwarding state. The port can now send and receive data frames, collect MAC addresses in its address table, and send and receive BPDUs. The port is now a fully functioning switch port within the Spanning-Tree topology.
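The capabilities described in the list above can be restated as a small table; this is only an illustrative summary of the same states:

```python
# Capability table for the STP port states described above:
# (receive BPDUs, send BPDUs, learn MAC addresses, forward data frames).

STP_STATES = {
    "Disabled":   (False, False, False, False),
    "Blocking":   (True,  False, False, False),
    "Listening":  (True,  True,  False, False),
    "Learning":   (True,  True,  True,  False),
    "Forwarding": (True,  True,  True,  True),
}

def can_forward(state):
    """A port passes data frames only in the Forwarding state."""
    return STP_STATES[state][3]
```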
12.5 STP Timers
STP operates as switches send BPDUs to each other in an effort to form a loop-free topology. The BPDUs
take a finite amount of time to travel from switch to switch. In addition, news of a topology change such as a
link or Root Bridge failure can suffer from propagation delays as the announcement travels from one side of
a network to the other. Because of these possible delays, it is important to prevent the Spanning-Tree topology
from converging until all switches have had time to receive accurate information. STP uses
three timers for this purpose: Hello Time, Forward Delay, and Max Age.
• Hello Time is the time interval between Configuration BPDUs sent by the Root Bridge. The Hello Time value configured on the Root Bridge determines the Hello Time for all non-root switches. However, all switches have a locally configured Hello Time that is used to time Topology Change Notification (TCN) BPDUs when they are retransmitted. The IEEE 802.1D standard specifies a default Hello Time value of two seconds.
• Forward Delay is the time interval that a switch port spends in both the Listening and Learning states. The default value is 15 seconds.
• Max Age is the time interval that a switch stores a BPDU before discarding it. While executing the STP, each switch port keeps a copy of the "best" BPDU that it has heard. If the source of the BPDU loses contact with the switch port, the switch notices that a topology change has occurred after the Max Age time elapses and the BPDU is aged out. The default Max Age value is 20 seconds.
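Putting the default timer values together shows where the commonly quoted STP convergence figures come from:

```python
# Default 802.1D timer values and the delays they produce.
HELLO_TIME = 2      # seconds between Configuration BPDUs from the Root
FORWARD_DELAY = 15  # seconds spent in each of Listening and Learning
MAX_AGE = 20        # seconds a stored BPDU is kept before it is aged out

# Moving a port from Blocking to Forwarding takes one Forward Delay in
# Listening plus one Forward Delay in Learning:
listening_learning = 2 * FORWARD_DELAY    # 30 seconds

# Worst case after a failure: wait Max Age for the stale BPDU to age
# out, then pass through Listening and Learning:
worst_case = MAX_AGE + 2 * FORWARD_DELAY  # 50 seconds
```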
To announce a change in the active network topology, switches send a Topology Change Notification (TCN)
BPDU. This occurs when a switch either moves a port into the Forwarding state or moves a port from the
Forwarding or Learning state into the Blocking state. The switch sends the TCN BPDU out its Root Port toward the Root Bridge.
The TCN BPDU carries no data about the change, but only informs recipients that a change has occurred.
However, the switch will not send TCN BPDUs if the port has been configured with PortFast enabled. The
switch will continue sending TCN BPDUs every Hello Time interval until it gets an acknowledgement from
an upstream neighbor. As the upstream neighbors receive the TCN BPDU, they will propagate it on toward
the Root Bridge. When the Root Bridge receives the BPDU, the Root Bridge sends out an
acknowledgement. The Root Bridge also sends out the Topology Change flag in a Configuration BPDU so
that all other bridges will shorten their bridge table aging times down from the default 300 seconds to the
Forward Delay value. This condition causes the learned locations of MAC addresses to be flushed out
sooner than they normally would, easing the bridge table corruption that might occur due to the change in
topology. However, any stations that are actively communicating during this time will be kept in the bridge
table. This condition lasts for the sum of the Forward Delay and the Max Age.
The three STP timers can be adjusted. These timers need only be modified on the Root Bridge and any
secondary or backup Root Bridges because the Root Bridge propagates all three timer values throughout the
network in the Configuration BPDU.
To modify STP timers on a CLI-based switch, use the following commands:
Switch(enable) set spantree hello interval [ vlan ]
Switch(enable) set spantree fwddelay delay [ vlan ]
Switch(enable) set spantree maxage agingtime [ vlan ]
To modify STP Timers on an IOS-based switch, use the following commands:
Switch(config)# spanning-tree [vlan vlan_list] hello-time seconds
Switch(config)# spanning-tree [vlan vlan_list] forward-time seconds
Switch(config)# spanning-tree [vlan vlan_list] max-age seconds
12.6 Convergence
Additional methods exist to allow faster STP convergence in the event of a link failure. These
include PortFast, UplinkFast, and BackboneFast.
12.6.1 PortFast: Access Layer Nodes
An end-user workstation is usually connected to a switch port in the Access layer. If the workstation is
powered off and then turned on, the switch port will not be in a useable state until STP cycles from the
Blocking state to the Forwarding state. With the default STP timers, this transition will take at least 30
seconds. Therefore, the workstation is unable to transmit or receive any data for 30 seconds.
On switch ports that connect only to single workstations or specific devices, bridging loops will not be
possible. Catalyst switches offer the PortFast feature that shortens the Listening and Learning states to a
negligible amount of time. The result is that when a workstation link comes up, the switch will immediately
move the PortFast port into the Forwarding state. Spanning-Tree loop detection is still in operation, however,
and the port will be moved into the Blocking state if a loop is detected on the port. To enable or disable the
PortFast feature on a CLI-based switch port, use the following command:
Switch(enable) set spantree portfast { module_number/port_number }
{ enable | disable }
On an IOS-based switch, you can use the following command:
Switch (config-if)# spanning-tree portfast
You should not enable PortFast on a switch port that is connected to a hub or another switch because
bridging loops could form. To view the PortFast state of switch ports, use the show spantree command.
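A minimal configuration sketch for an IOS-based switch (the interface name is hypothetical); PortFast is applied per interface, only on ports facing single hosts:

```
Switch(config)# interface FastEthernet0/1
Switch(config-if)# spanning-tree portfast
```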
12.6.2 UplinkFast: Access Layer Uplinks
If an Access layer switch has redundant uplink connections to two Distribution layer switches, one uplink
would be in the Forwarding state and the other in the Blocking state. If the primary uplink went down, up to
50 seconds would elapse before the redundant uplink could be used. The UplinkFast feature on Catalyst
switches enables leaf-node switches or switches at the ends of the Spanning-Tree branches to have a
functioning Root Port while keeping one or more redundant or potential Root Ports in Blocking mode. When
the primary Root Port uplink fails, another blocked uplink can be immediately brought up for use. To enable
or disable the UplinkFast feature on a CLI-based switch, use the following command:
Switch (enable) set spantree uplinkfast { enable | disable }
[ rate update_rate ] [ all-protocols off | on ]
On an IOS-based switch, you can use the following command:
Switch (config)# spanning-tree uplinkfast
[ max-update-rate pkts_per_second ]
When UplinkFast is enabled, it is enabled for the whole switch and all VLANs. UplinkFast works by
keeping track of possible paths to the Root Bridge. Therefore, the command is not allowed on the Root
Bridge switch. UplinkFast also makes some modifications to the local switch to insure that it does not
become the Root Bridge and that the switch is not used as a transit switch to get to the Root Bridge. The
IOS-based switch command uses a max-update-rate parameter to set the rate of multicast updates. To
view the current UplinkFast parameters and ports, use the show spantree uplinkfast command.
12.6.3 BackboneFast: Redundant Backbone Paths
In the Core layer, a different method is used to shorten STP convergence. BackboneFast works by having a
switch actively determine if alternate paths exist to the root bridge in the event that the switch detects an
indirect link failure. Indirect link failures occur when a link not directly connected to a switch fails. A switch
detects an indirect link failure when it receives inferior BPDUs from its Designated Bridge on either its root
port or a blocked port. Normally, a switch must wait for the Max Age timer to expire before responding to
the inferior BPDUs. However, BackboneFast begins to determine if other alternate paths to the Root Bridge
exist according to the type of port that received the inferior BPDU:
• If the inferior BPDU arrives on a port in the Blocking state, the switch considers the root port and all other blocked ports to be alternate paths to the root bridge.
• If the inferior BPDU arrives on the root port, the switch considers all blocked ports to be alternate paths to the root bridge.
However, if the inferior BPDU arrives on the root port and there are no blocked ports, the switch assumes it
has lost connectivity with the root bridge. In this event, the switch will assume that it has become the root
bridge and BackboneFast will allow it to do so before the Max Age timer expires.
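The two cases above, plus the no-blocked-ports case, can be sketched as a small decision function; the port names are hypothetical:

```python
# Sketch of the BackboneFast decision: given where an inferior BPDU
# arrived, return the candidate alternate paths the switch will probe
# with RLQ.

def alternate_paths(arrival_port, root_port, blocked_ports):
    if arrival_port in blocked_ports:
        # Inferior BPDU on a blocked port: the root port and all other
        # blocked ports are alternate paths to the root bridge.
        return {root_port} | (blocked_ports - {arrival_port})
    if arrival_port == root_port:
        # Inferior BPDU on the root port: all blocked ports are
        # alternate paths; with none left, the switch assumes it has
        # lost connectivity and may become the root bridge itself.
        return set(blocked_ports)
    return set()
```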
If the local switch has blocked ports, BackboneFast begins to use the Root Link Query (RLQ) protocol to
see if there are upstream switches that have stable connections to the Root Bridge. RLQ Requests are sent
out. If a switch receives an RLQ Request and is either the Root Bridge or has lost connection to the Root, it
sends an RLQ Reply. Otherwise, the RLQ Request is propagated on to other switches until an RLQ Reply
can be generated. On the local switch, if an RLQ Reply is received on its current Root Port, the path to the
Root Bridge is intact and stable. If it is received on a non-Root Port, an alternate Root Path must be chosen.
The Max Age Timer is immediately expired so that a new Root Port can be found.
BackboneFast is simple to configure and operates by short-circuiting the Max Age Timer when needed.
Although this function shortens the time a switch waits to detect a Root Path failure, ports still must go
through full-length Forward Delay Timer intervals during the Listening and Learning states. Where PortFast
and UplinkFast enable immediate transitions, BackboneFast can only reduce the maximum convergence
delay from 50 to 30 seconds. To configure BackboneFast, you can use the following command:
Switch (enable) set spantree backbonefast { enable | disable }
When used, BackboneFast should be enabled on all switches in the network because BackboneFast requires
the use of the RLQ request and reply mechanism to inform switches of Root Path stability. The RLQ
protocol is only active when BackboneFast is enabled on a switch. By default, BackboneFast is disabled.
12.7 Spanning-Tree Design
STP and its computations are very predictable. However, some external factors may influence STP decisions,
making the resulting tree structure neither expected nor ideal. For example, several versions of Spanning-Tree exist and are used by various vendors. Interoperability of these versions could be important in a mixed-vendor network. The network administrator can also make adjustments to the Spanning-Tree operation to
control its behavior. The location of the Root Bridge should be determined as part of the design process.
Also, redundant links can be used for load balancing in parallel if configured correctly. Furthermore,
Spanning-Tree can be configured to converge quickly and predictably in the event of a major topology
change.
12.8 STP Types
There are three types of STP that are encountered in switched networks. These are: Common Spanning Tree
(CST), Per-VLAN Spanning Tree (PVST), and Per-VLAN Spanning Tree Plus (PVST+). There are no
specific configuration commands associated with the various types of STP. You should have a basic
understanding of how the various types of STP interoperate in a network.
12.8.1 Common Spanning Tree (CST)
The IEEE 802.1Q standard specifies how VLANs are to be trunked between switches. It specifies a single
instance of STP for all VLANs. This is referred to as the Common Spanning Tree (CST) or the Mono
Spanning Tree (MST). All BPDUs are transmitted over the management VLAN (VLAN1). Having a single
STP for many VLANs simplifies switch configuration and reduces switch CPU load during STP calculations.
However, a single STP instance can impose limitations. Redundant links between switches will be blocked with no
capability for load balancing. Conditions can also occur that cause forwarding on a link that does not
support all VLANs, while other links are blocked.
12.8.2 Per-VLAN Spanning Tree (PVST)
Per-VLAN Spanning Tree (PVST) is a Cisco proprietary STP that offers more flexibility than CST. It
operates a separate instance of STP for each VLAN. This allows the STP on each VLAN to be configured
independently, offering better performance and tuning for specific conditions. Multiple Spanning Trees also
make load balancing possible over redundant links when the links are assigned to different VLANs. Due to
its proprietary nature, PVST requires the use of Cisco Inter-Switch Link (ISL) trunking encapsulation
between switches. In networks where PVST and CST coexist, interoperability problems will occur as each
requires a different trunking method. Therefore BPDUs will not be exchanged between PVST and CST.
12.8.3 Per-VLAN Spanning Tree Plus (PVST+)
Per-VLAN Spanning Tree Plus (PVST+) is another Cisco proprietary STP. It does, however, allow devices
to interoperate with both PVST and CST. PVST+ effectively supports three groups of STP operating in the
same campus network: Catalysts running PVST; Catalysts running PVST+; and switches running CST/MST
over 802.1Q. To accomplish this, PVST+ acts as a translator between groups of CST switches and groups of
PVST switches. PVST+ can communicate directly with PVST by using ISL trunks. To communicate with
CST, however, PVST+ exchanges BPDUs with CST on VLAN1. BPDUs from other instances of STP are
propagated across the CST portions of the network by tunneling. PVST+ sends these BPDUs by using a
unique multicast address so that the CST switches will forward them on to downstream neighbors.
Eventually, the tunneled BPDUs will reach other PVST+ switches where they are understood.
12.9 Protecting Against Unforeseen Bridge Protocol Data Units (BPDU)
Cisco added two STP features to help guard against the ‘unexpected’. These are:
• the Root Guard feature, and
• the BPDU Guard feature.
12.9.1 Root Guard Feature
The Root Guard feature was introduced to control where candidate Root Bridges can be connected and located on
a network.
Switch ports are assigned the following roles:
• The Root Port role is assigned to the port on the switch that is nearest to the Root Bridge.
• The Designated Port role is assigned to the port on a LAN segment that is nearest to the Root. This port transmits BPDUs onto the segment.
• The Blocking role is assigned to ports that are neither a Root Port nor a Designated Port.
• The Forwarding role is assigned to ports where no other STP activity is detected and that are normal end-user links.
• The Alternate Port role is assigned to a port in the Blocking state that can become a Root Port.
Both the Root Port and the Alternate Port are closest to the Root Bridge. A switch learns the current Root
Bridge's Bridge ID from the BPDUs it receives. Should another switch be introduced into the network that announces a lower Bridge ID, or an otherwise
superior BPDU, on a port where Root Guard is enabled, the new switch will not become the root. The port
remains in a root-inconsistent STP state for as long as it receives the superior BPDUs, and no data is
transmitted or received while the port is in that state. In this manner, a port that should only ever receive
BPDUs can never become a Root Port. The port returns to the normal STP state once superior BPDUs are no longer
being received.
The option is disabled by default on switch ports. The following command enables the option:
Switch(config-if)# spanning-tree guard root
12.9.2 BPDU Guard Feature
The BPDU Guard feature was introduced to provide more security and integrity for switch ports that have
STP PortFast enabled. When BPDU Guard is enabled, a port is put into the errdisable state when it receives
any BPDU. The port shuts down and must either be manually re-enabled or automatically recovered. In this manner, a
switch is prevented from being attached to a PortFast-enabled port.
The option is disabled by default on switch ports. The following command enables the option:
Switch(config-if)# spanning-tree bpduguard enable
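Many IOS-based switches can re-enable an errdisabled port automatically through the errdisable recovery mechanism; the following sketch assumes such support and uses a hypothetical 300-second recovery interval:

```
Switch(config)# errdisable recovery cause bpduguard
Switch(config)# errdisable recovery interval 300
```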
12.10 Protecting Against the Sudden Loss of BPDUs
Cisco has introduced three STP features that assist in detecting and guarding against the loss of BPDUs.
12.10.1 BPDU Skew Detection Feature
A downstream switch can receive a relayed BPDU late because an upstream switch's CPU is busy executing
other functions. Lost BPDUs, or those that arrive late, can have an impact on the stability and reliability of the
STP topology. BPDU skew detection gauges the lapse between the time a BPDU is expected and
its actual arrival time, and tracks the duration of the skewing condition. This lapse is known as
skew time. BPDU skew detection uses syslog messages to report the condition. The messages are rate-limited so that they do not impact the switch CPU resources.
12.10.2 Loop Guard Feature
Loop Guard keeps track of BPDU activity on nondesignated ports, which are usually the Root
Port, Alternate Ports, and all other ports that are normally in the Blocking state. A port remains in its normal state
while it is receiving BPDUs. Loop Guard shifts the port into a loop-inconsistent state when BPDUs go
missing. Once BPDUs are received again, the port moves through the normal STP states and is activated. In
this manner, Loop Guard manages ports automatically, with no manual intervention.
The option is disabled by default on switch ports. The following command enables the option:
Switch(config-if)# spanning-tree guard loop
Loop Guard can be enabled on all switch ports, on a per-port basis. However, its counteractive blocking
action is carried out on a per-VLAN basis.
12.10.3 Unidirectional link detection (UDLD) STP Feature
The UDLD STP feature interactively monitors a port to determine whether a link is truly
bidirectional. A switch periodically transmits Layer 2 UDLD frames that identify its switch port. When the
far-end switch adds its own port identification and echoes the frame back across the same link, the link is
bidirectional, and both ports are identified in the frame. The link is unidirectional when these echoed frames are
not seen. The function of UDLD is to identify a unidirectional link before STP moves a Blocked
port into the Forwarding state.
UDLD has two operation methods:
• When operating in Normal Mode, the port continues to operate when a unidirectional link is detected. UDLD sends a syslog message indicating that the port is in an undetermined state.
• When operating in Aggressive Mode, the switch tries to re-establish the link when a unidirectional condition is detected, transmitting UDLD messages once per second for eight seconds. If none of these messages is echoed back, the port is put into the errdisable state and cannot be used.
The option is disabled by default on switch ports. UDLD can be enabled on all ports on a per-port basis, or it
can be globally enabled on ports that use fiber-optic media. UDLD can be separately enabled on non-fiber
links. The following global configuration command globally enables the option:
Switch(config)# udld {aggressive | enable | message time seconds}
Use the enable keyword to enable Normal Mode and the aggressive keyword to enable Aggressive Mode.
The following interface configuration command enables or disables UDLD on single switch ports:
Switch(config-if)# udld {aggressive | disable | enable}
The disable keyword disables UDLD.
12.11 Advanced Spanning-Tree Protocol
12.11.1 Rapid Spanning Tree Protocol (RSTP)
RSTP determines how switches should interact with one another to ensure that the
network topology is effectively loop-free. RSTP's basic capabilities can be used as a single instance per switch or as
multiple instances, and RSTP functions consistently in either approach; running RSTP as multiple
instances, however, requires a different deployment approach.
12.11.1.1 RSTP Port Performance
RSTP achieves its rapid behavior by allowing each switch to interact with its neighbors through
each port. Interaction is based on a port's role, not merely on the BPDUs relayed from the Root
Bridge. Each port is assigned a role and a state that determine what it does with incoming frames;
RSTP therefore defines port states according to how incoming frames are handled. A Root Bridge is
chosen in the same manner as with 802.1D. Next, the following port roles are decided on:
• Root Port: The same as the 802.1D role; the one switch port on each switch with the best root path cost to the Root is assigned this role.
• Designated Port: Assigned to the switch port on a network segment that has the best root path cost to the Root.
• Alternate Port: Assigned to a port that has an alternative path to the Root.
• Backup Port: Assigned to a port that provides a redundant link to a network segment to which another port on the same switch already connects.
Any of the above ports can be in one of the following states: Discarding (frames are dropped), Learning (MAC
addresses are learned, but frames are dropped), and Forwarding.
12.11.1.2 BPDUs and RSTP
RSTP uses the 802.1D BPDU format for backward compatibility. A transmitting switch port is
identified by its assigned RSTP role and state. The RSTP interactive process enables two neighboring
switches to negotiate port state changes, and a number of BPDU bits are used to mark messages during
this process.
BPDUs are sent from each switch port at Hello Time intervals and in turn, each switch can anticipate getting
frequent BPDUs from their switch neighbors. Every switch can therefore partake in preserving the topology.
A switch neighbor is assumed to be down when three successive BPDUs are missed. This enables a switch
to determine that a neighbor is down in three Hello intervals.
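The arithmetic behind this faster failure detection, using the default timer values, can be stated directly:

```python
# RSTP assumes a neighbor is down after three missed Hellos, while
# 802.1D waits for the full Max Age before aging out the stored BPDU.
HELLO_TIME = 2   # seconds (802.1D default, also used by RSTP)
MAX_AGE = 20     # seconds (802.1D default)

rstp_detection = 3 * HELLO_TIME  # 6 seconds
stp_detection = MAX_AGE          # 20 seconds
```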
To differentiate RSTP BPDUs from 802.1D BPDUs, the BPDU version is set to 2. Because RSTP can
distinguish its BPDUs from 802.1D BPDUs, it can coexist with switches using 802.1D. Each port tries to
operate according to the type of STP BPDU that it receives.
12.11.1.3 RSTP Convergence
STP convergence occurs when switches are brought from an independent state to a uniform state, and each
switch has a location in a loop free topology. Each switch is able to identify the Root Bridge. To avoid
loops, each switch port is taken from the Blocking state to the proper state. This process can be time
consuming.
RSTP uses another method: it requires a switch to base its forwarding decisions on the Port Type when the
switch joins the topology or detects a failure in it.
Each switch port can be one of the port types listed below:
• Edge Port: A port at the edge of the network, where a single host connects and no loop can form.
• Root Port: The port that has the best cost to the root of the STP instance. Only one Root Port is chosen and active at any point in time. An Alternate Port can be instantly placed in the Forwarding state when the existing Root Port fails.
• Point-to-Point Port: A port that links to another switch and becomes a Designated Port. A quick handshake with the neighboring switch determines the port state. BPDUs are exchanged in the form of a proposal and an agreement message: one switch proposes that its port become the Designated Port, and the other switch agrees by responding with an agreement message. Point-to-point status is decided automatically from the duplex mode in use.
Full-duplex ports are regarded as point-to-point ports because STP convergence can occur quickly over
a point-to-point link by means of RSTP handshake messages; only two switches can be present on the
link. Half-duplex ports are not point-to-point ports because more than two switches can be present, so
802.1D convergence is used, resulting in a slower response time.
By an exchange of BPDUs, two switches can quickly converge to determine which one is the Root and
which one will own the Designated Port. In a larger network, where 802.1D BPDUs are normally relayed
from switch to switch, RSTP manages the entire STP convergence as a spread of handshakes across point-to-point links. When a switch needs to make an STP decision, it performs a handshake with its nearest
switch neighbor. The handshake series then moves to the next switch, and the next, and so forth, toward the edge
of the network.
A synchronization process takes place to ensure that a switch does not initiate a bridging loop before
moving the handshake out. A switch must decide on the state of each of its ports in order to partake in RSTP
convergence. The switch exchanges a proposal agreement handshake to determine the state of each end of
the connection.
Each switch presumes that its port should be the Designated Port for its segment and sends a proposal
message stating this to its neighbor. The receiver of the proposal effectively isolates itself from the rest
of the topology: its nonedge ports are blocked until the proposal can be answered. This causes the closest
neighbors to synchronize themselves, generating a moving wave of synchronizing switches. The
switches quickly decide to begin forwarding on their links once their neighbors agree.
The following procedure takes place when a switch receives a proposal message:
•
The local switch recognizes that its port must be the Root Port when it receives a proposal containing a
superior BPDU; the sender should be the Designated Switch.
•
Before agreeing to this, the switch synchronizes itself with the topology.
•
Nonedge ports are immediately placed into the Discarding state to prevent the formation of bridging loops.
•
An agreement message signifying that the switch concurs is returned to the sender. The
message also indicates to the sender that the switch is busy synchronizing itself.
•
The Root Port is immediately placed in the Forwarding state, and the sender's port can also begin
forwarding.
•
A proposal message is sent to the relevant neighbor for every nonedge port currently in the Discarding
state.
•
When an agreement message is received from a neighbor on a nonedge port, that nonedge port is
immediately moved to the Forwarding state.
12.11.1.4 RSTP and Topology Changes
RSTP detects a topology change only when a nonedge port moves to the Forwarding state,
because it uses rapid convergence to stop bridging loops from developing. Topology changes are thus
detected only so that bridging tables can be updated and corrected as hosts appear first on a failed port,
and then on another operational port. The process is similar to the convergence and synchronization processes.
A switch sends a topology change (TC) message that then propagates through the network to other
switches, allowing them to correct their bridging tables. BPDUs with the topology change bit
set are transmitted out all of the nonedge Designated Ports until the TC While timer expires. The
TC While timer runs for twice the Hello time, and this informs neighboring switches of the TC and
the new link. MAC addresses associated with the nonedge Designated Ports are flushed from the content-addressable
memory (CAM) table. In addition, neighboring switches that receive the TC messages must
flush the MAC addresses learned on all ports except the one on which the TC message was received.
Those switches must then send TC messages out their own nonedge Designated Ports, and the addresses must be
relearned.
12.11.1.5 Configuring RSTP
RSTP is a mechanism that a Spanning Tree mode can use to detect topology changes and converge a
network into a loop-free topology. To use RSTP, either MST or RPVST+ must be enabled. RSTP configuration
affects the Port Type. Remember that the Port Type determines how a switch negotiates topology
information with its neighboring switches.
The following command is used to configure a port as a RSTP edge port:
Switch(config-if)# spanning-tree portfast
Recall that this command is also used during 802.1D STP configuration. RSTP automatically determines that a
port is a point-to-point link when it is operating in full-duplex mode. This default can be overridden;
for example, a port connecting to another switch could be operating at half-duplex. The following command is used to
force a port to operate as a point-to-point link:
Switch(config-if)# spanning-tree link-type point-to-point
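Putting these commands together, a minimal interface configuration might look as follows. This is a sketch: the spanning-tree mode command and the interface numbers are assumptions for illustration, not taken from the text above.

```
Switch(config)# spanning-tree mode rapid-pvst
! Host-facing access port: treat as an RSTP edge port
Switch(config)# interface FastEthernet0/1
Switch(config-if)# spanning-tree portfast
Switch(config-if)# exit
! Half-duplex link to another switch: force point-to-point handling
Switch(config)# interface FastEthernet0/24
Switch(config-if)# spanning-tree link-type point-to-point
```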
12.11.2 The Multiple Spanning Tree Protocol (MSTP or MST)
Multiple Spanning Tree Protocol (MSTP or MST) was developed to address the problem of having either too
few or too many STP instances. MST is defined in the IEEE 802.1s standard and is built on the concept of mapping
one or more VLANs to a single STP instance. Multiple instances of STP can be used, with each
instance supporting a different set or group of VLANs. This enables the network administrator to configure
exactly the number of STP instances required for a network. To implement MST in a network, the
administrator needs to determine the number of STP instances required to support the desired topologies and
then map a group of VLANs to each instance.
12.11.2.1 MST Regions
MST differs from 802.1Q and PVST+. When a switch is configured to use MST it has to determine which of
its neighboring switches are using which STP type. This is accomplished by configuring switches into
common MST regions.
Each switch within a region runs MST as defined by the following attributes:
•
MST configuration name, up to 32 characters
•
MST configuration revision number, 0 to 65535
•
MST instance-to-VLAN mapping table, 4096 entries
Two switches are part of the same MST region when they have the same attributes; when these attributes
do not match, the switches belong to two independent regions. Switches that receive MST BPDUs can compare
the configuration attributes in these BPDUs with their own local MST configurations. To speed up the
comparison, a digest calculated from the contents of the MST instance-to-VLAN mapping table is sent instead of
the whole table. When a match occurs, the STP instances in MST are shared as part of the same region.
When a mismatch occurs, a switch is considered to be located at the MST region boundary, where one
region meets either traditional 802.1D STP or another region.
12.11.2.2 Spanning Tree Instances in MST
The Common Spanning Tree (CST) provides a loop-free topology over the links that connect regions to one another, as well as to standalone
switches running 802.1Q CST. It integrates all the forms of STP in use. Because CST knows nothing about what
each region contains, an Internal Spanning Tree (IST) instance computes a loop-free topology within each
MST region.
Therefore, an IST instance operates to compute a loop-free topology between the links where CST meets
the region boundary, and it also calculates a loop-free topology for every switch within the region. A single
CST appears to be in operation because BPDUs are exchanged at the boundary of the region only across the
native VLAN of trunks. The IST presents the region as a single virtual bridge to the CST outside. All VLANs
are automatically mapped to the IST instance; this mapping can be overridden if need be. An IST instance must be
active on each port of a switch, irrespective of whether that port carries VLANs that are mapped to the IST.
The MST instances (MSTIs) exist alongside the IST within a region. Cisco supports a maximum of 16 MSTIs in
each region, with the IST always being MSTI 0; MSTI numbers 1 through 15 can be used. As an example, four
independent STP instances can coexist within MST: MSTI 1, MSTI 2, and MSTI 3, with the IST existing as
MSTI 0.
Only one BPDU is needed to communicate STP information for all the other active instances, because information
for every MSTI is appended to the MST BPDU as an M-record. Only MSTI 0 can transmit and receive MST
BPDUs. The MSTIs join with the IST only at the boundary of the region, forming a subtree of the CST. Only
IST BPDUs are conveyed into and out of a region.
MST can determine whether an MST region connects with a switch running PVST+. It does this by listening
to BPDUs, and it concludes that PVST+ is being used when BPDUs are heard from more than one VLAN. When
the MST region transmits a BPDU toward the PVST+ switch, IST BPDUs are
replicated into every one of the VLANs on the PVST+ switch trunk.
12.11.2.3 Configuring MST
The MST configuration on each switch in a region is performed manually. The set of commands used to define
the MST region is listed below.
The following command is used to enable MST on a switch:
Switch(config)# spanning-tree mode mst
Use the next command to specify the MST configuration mode:
Switch(config)# spanning-tree mst configuration
Enter a region configuration name:
Switch(config-mst)# name name
Enter a region configuration revision number:
Switch(config-mst)# revision version
Next, map VLANs to an MST instance:
Switch(config-mst)# instance instance-id vlan vlan-list
An instance-id of 0 to 15 carries topology information for the VLANs listed in vlan-list.
Use the following command to verify the MST configuration changes:
Switch(config-mst)# show pending
The following command commits changes to the active MST region configuration:
Switch(config-mst)# exit
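As a worked example, the commands above might be combined as follows to create a region with two MST instances. The region name, revision number, and VLAN ranges are hypothetical placeholders:

```
Switch(config)# spanning-tree mode mst
Switch(config)# spanning-tree mst configuration
Switch(config-mst)# name Region1
Switch(config-mst)# revision 1
! Map VLANs 1-100 to instance 1 and VLANs 101-200 to instance 2
Switch(config-mst)# instance 1 vlan 1-100
Switch(config-mst)# instance 2 vlan 101-200
Switch(config-mst)# show pending
Switch(config-mst)# exit
```

Remember that every switch intended to be in the same region must be configured with an identical name, revision number, and instance-to-VLAN mapping, or the switches will fall into separate regions.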
Table 12.1 lists the MST configuration commands.
TABLE 12.1: MST Configuration Commands

Command Syntax                                                  Function
spanning-tree mst instance-id root                              Sets the Root Bridge
  {primary | secondary} [diameter diameter]
spanning-tree mst instance-id priority bridge-priority          Sets the Bridge Priority
spanning-tree mst instance-id cost cost                         Sets the Port Cost
spanning-tree mst instance-id port-priority port-priority       Sets the Port Priority
spanning-tree mst hello-time seconds                            Sets the STP Timers
spanning-tree mst forward-time seconds
spanning-tree mst max-age seconds
13. Trunking with ATM LAN Emulation (LANE)
Trunking, or carrying multiple VLANs over a single link, can also be accomplished using Asynchronous
Transfer Mode (ATM). ATM by itself is a connection-oriented technology built upon relaying cells of data.
Therefore, it cannot inherently trunk VLANs. However, the LAN Emulation (LANE) protocol uses ATM as a
means to mimic traditional LAN media and can provide the trunking function. Multiprotocol over ATM
(MPOA) is another protocol that extends LANE to offer more efficient path selection through an ATM
network.
13.1 ATM
Asynchronous Transfer Mode (ATM) is designed to provide multiple service types over single pipelines
very efficiently. All traffic is transported as small fixed-size cells. Also, traffic is not moved based on
cell-by-cell decisions but upon connections built between end points. Networks built on ATM typically
have ATM switches at the core, or "ATM cloud". ATM switches build connections between each other and
relay native ATM cells across the connections. At the edge of the ATM network, many types of traditional
networking devices, such as LAN switches, routers, and workstations, can be connected. These devices
provide the conversion between other network media formats and ATM cells, and interface with the ATM
switches using ATM protocols. ATM defines these two types of interfaces as the User-Network Interface
(UNI), the connection between ATM endpoints and ATM switches, and the Network-to-Network Interface
(NNI), the connection between two ATM switches.
Because all types of traffic are transported as small cells, ATM has the benefit of low latency and high
throughput: small cells can be moved very quickly from switch to switch, with a low propagation delay for
the short serialized data from each cell. Traffic from many sources can be converted into the same
fixed-length cells going into and out of the ATM network and can be relayed at a predictable rate,
providing Quality of Service (QoS), or guaranteed timely delivery of real-time data streams for services.
ATM cells have been standardized to a 53-byte length: a 5-byte header and a 48-byte payload.
13.1.1 The ATM Model
The ATM standard uses a reference model that is similar to the OSI Model to describe the hierarchy of its
various operations. The ATM reference model is composed of the following ATM layers:
•
ATM adaptation layer (AAL), which is responsible for preparing user data for conversion into cells and
segments the data into 48-byte cell payloads. This layer contains several different processes, each
tailored for the segmentation and reassembly of a different higher-layer data type.
ƒ AAL1 supports connection-oriented services that can emulate leased-line circuits. This layer
requires timing synchronization between source and destination, allowing the transport of
real-time streams like voice and video. Data is sampled and put into cell payloads, along with
sequence information, before transmission.
ƒ AAL2 supports connection-oriented services for variable bit rate (VBR) applications.
ƒ AAL3/4 supports both connection-oriented and connectionless services. AAL3/4 was designed
for use with Switched Multimegabit Data Service (SMDS). Data is segmented into cell payloads,
while cyclic redundancy check (CRC) error information is added.
ƒ AAL5 supports both connection-oriented and connectionless services. Non-SMDS data, like
TCP/IP over ATM and LANE, is transported in sequence with simple segmentation. A CRC
value is added to the end of the pre-segmented frame. The higher layers residing above the AAL
accept user data, arrange it into packets, and hand it to the AAL.
•
ATM layer, which is responsible for establishing connections to other ATM devices, i.e., it is
responsible for the simultaneous sharing of virtual circuits over a physical link and for passing cells
through the ATM network. To accomplish this, the ATM layer uses the virtual path identifiers (VPI)
and virtual channel identifiers (VCI) in the header of each ATM cell. Although ATM is designed to be
media independent, it does have a physical layer that converts cells into bit streams for the appropriate
media. All timing information and error control are also handled in this layer. The ATM layer and the
ATM adaptation layer (AAL) are roughly analogous to the data link layer of the OSI reference model.
•
Physical layer, which is analogous to the physical layer of the OSI reference model. This layer manages
the medium-dependent transmission.
In addition to the layers, the ATM reference model is composed of several planes, which span all layers.
These planes are:
•
Control, which is responsible for generating and managing signaling requests;
•
User, which is responsible for managing the transfer of data; and
•
Management, which contains two components:
ƒ Layer management, which manages layer-specific functions, such as the detection of failures and
protocol problems; and
ƒ Plane management, which manages and co-ordinates functions related to the complete system.
ATM operations are only performed in the data link and physical layers. The higher layers are still
concerned with passing upper-layer protocols downward to the data link layer to be processed into ATM
cells.
13.1.2 Virtual Circuits
ATM relies on connections to be built across the ATM network in order to relay cells end-to-end. Before
cells are sent, connections, called virtual circuits (VCs), must be negotiated and established. There are two
types of VCs: permanent virtual circuits and switched virtual circuits.
•
Permanent virtual circuits (PVCs) are manually built to support a predetermined path through an ATM
cloud and remain in place until they are manually removed.
•
Switched virtual circuits (SVCs) are dynamically built and torn down by ATM switches as they are
needed. When one ATM edge device requires a new circuit, it informs an ATM switch that an SVC
needs to be built to the destination.
Virtual circuits (VCs) can be built in two ways: Point-to-point, where one device talks to one device; and
Point-to-multipoint, where one device communicates in one direction to many end devices.
ATM uses specific terminology to identify the hierarchical arrangement of its VCs. A virtual channel
connection (VCC) is the connection that is set up across the ATM network between endpoints. A virtual
path (VP) is a bundle of virtual channels that share a common path through the switched network. When a
VP is used, the individual VCs contained in it need not be switched individually; instead, the overall VP
bundle can be switched. Finally, a transmission path is a bundle of virtual paths.
13.1.3 ATM Addressing
ATM uses two types of addresses: virtual path and virtual channel identifiers (VPI/VCI), and network
service access point (NSAP) addresses. Each address type is used for a specific purpose.
13.1.3.1 VPI/VCI Addresses
An ATM cell contains a 5-byte header and a 48-byte payload. A 5-byte space is not sufficient to contain a
large address space for the source or destination endpoints. In packet-based or frame-based networks,
placing the source and destination addresses in each frame is necessary so that routers and switches can
forward frames correctly. When ATM transports a cell from source to destination, however, only the
specific VC needs to be known because ATM uses the VPI/VCI identifier combination to deliver cells
through a network. An ATM edge device places the VPI/VCI values in a cell before presenting the cell to an
ATM switch. The switch then references a forwarding table, relating the VPI and VCI values to outbound
switch ports.
A User-Network Interface (UNI) cell contains an 8-bit VPI and a 16-bit VCI address. A Network-to-Network
Interface (NNI) cell expands the VPI to 12 bits and has a 16-bit VCI value. These values are only
locally significant to an ATM switch and do not have to be globally unique.
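For illustration, the VPI/VCI forwarding operation can be pictured as a simple translation table inside each ATM switch; the port numbers and VPI/VCI values below are hypothetical:

```
Inbound                Outbound
Port   VPI/VCI         Port   VPI/VCI
----   -------         ----   -------
0      0/37            2      0/41
1      1/52            3      1/52
```

A cell arriving on port 0 with VPI/VCI 0/37 would be rewritten to 0/41 and relayed out port 2. Because the values are rewritten hop by hop in this way, they need only be locally significant.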
13.1.3.2 NSAP Addresses
20-byte network service access point (NSAP) addresses are used to identify individual ATM devices and
endpoints. Each device is required to have a unique NSAP address. These addresses are written out as
groups of 4 hexadecimal digits, separated by dots.
NSAP ATM addresses are composed of a prefix, an end-system identifier (ESI), and a selector.
•
The Prefix is a 13-byte field that uniquely identifies every ATM switch in the network. Cisco ATM
switches have a predefined 7-byte value of 47.0091.8100.0000, followed by a 6-byte unique MAC
address preconfigured on each switch.
•
The End-System Identifier (ESI) is a 6-byte field that uniquely identifies every device attached to an
ATM switch.
•
The Selector is a 1-byte field that identifies a process running on an ATM device. Cisco devices usually
use the selector value to identify an ATM subinterface number.
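A sample 20-byte NSAP address can be broken down as follows, using the Cisco default prefix value described above; the MAC addresses and selector shown are hypothetical:

```
47.0091.8100.0000.0050.0a1b.2c3d . 0050.0a2b.4c5d . 01
|-------- Prefix (13 bytes) ----|   |ESI (6 bytes)|   |Selector (1 byte)|
 7-byte Cisco value + switch MAC     device MAC        e.g., subinterface
```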
13.1.4 ATM Protocols
ATM uses two protocols: the Integrated Local Management Interface (ILMI), which provides an automatic
means for an ATM device to learn about its neighbors; and the Private Network-to-Network Interface (PNNI), an ATM protocol that is used to dynamically build and tear down SVCs.
PNNI is only used between ATM switches because it involves SVC administration. It is termed a Layer 2
routing protocol because of its use in determining paths through an ATM network. Furthermore, PNNI
enables ATM switches to load balance traffic across multiple paths and parallel links, and provides
redundancy in case of path failure.
13.2 LAN Emulation (LANE)
ATM can be used to provide LAN connectivity and to trunk multiple VLANs across an ATM network.
Because ATM is dissimilar to traditional LAN technologies, it must be made to emulate a LAN through the
use of several processes and components. LAN Emulation (LANE) is an ATM Forum standard that
specifically provides mechanisms to emulate IEEE 802.3 Ethernet and IEEE 802.5 Token Ring LANs.
LANE is used only where LAN functionality is needed at the edges of an ATM network. Although LANE
makes the ATM cloud appear to be a LAN, only native ATM is used on the ATM switches. LANE is built
as a layer on top of ATM so that LANE operation is transparent to a standard ATM network.
13.2.1 LANE Components
Emulated LANs (ELANs) differ from virtual LANs (VLANs). ELANs are used by ATM devices to segment
traffic into logical networks. However, ELANs exist only within the ATM domain. ELANs and VLANs
remain separate except where they are physically bridged in Catalyst switches that support both. An ELAN
exists on each Catalyst switch as ELAN 1. LANE makes it possible to logically connect the ELAN between
switches across an ATM network.
LANE has specific components designed to address the requirements of a LAN. For each ELAN that exists
in a network, the following LANE components are used:
•
LAN Emulation Client (LEC) provides the basic ELAN function on each ATM device where an
ELAN connection is needed. The LEC emulates an interface to a LAN and provides data forwarding,
address resolution, and MAC address registration with other LANE components. Each LEC can
communicate with other LECs in the network over ATM virtual channel connections (VCCs) emulating
physical or logical LAN connectivity.
•
LAN Emulation Server (LES) acts as a central control point for all LECs in an ELAN. Each LEC must
register its MAC addresses with the LES so that the LES can provide MAC address to NSAP address
translation. The LES also maintains connections to each LEC to provide control information to all
ELAN members.
•
Broadcast and Unknown Server (BUS) emulates LAN broadcasts. Any LEC that needs to send a
broadcast sends it directly to the BUS. The BUS is then able to forward the broadcast to all other LECs
through a point-to-multipoint VCC. The BUS takes care of queuing and sequencing broadcast and
multicast frames over the point-to-multipoint VCC so that the frames arrive in the correct order at the
LECs.
•
LAN Emulation Configuration Server (LECS) acts as the central administrative control point for all
ELANs in a domain. The LECS maintains a database of ELANs and the ATM addresses of the LESs
that control each ELAN. Each LEC must query the LECS to ask for membership in an ELAN and to
request the NSAP address of the respective LES.
13.2.2 LANE Operation
Before end user stations can use the emulated LAN, the local LEC must first become a member of the
ELAN. There are four steps to LANE membership:
•
A newly created LEC must first contact the LECS so that the LEC can be pointed to the LES of its
specific ELAN as the LECS holds the LES/ELAN database for all ELANs in the network. To begin, an
LEC needs to find the ATM address of the LECS. This address can be obtained in one of four ways:
ƒ The NSAP address of the LECS can be manually configured into every ATM switch in the
network. The LEC then queries a local switch through ILMI for the LECS address. This is the
preferred method.
ƒ The LECS NSAP address can be manually configured into every LEC.
ƒ LECs can contact the LECS over the well-known VPI/VCI value of 0/17. This assumes a PVC
has already been built using 0/17.
ƒ A LEC can find the nearest LECS using the well-known NSAP of
47.007900000000000000000000.00A03E000001.00. Once the LECS address has been found,
the LEC can contact the LECS address directly with a configuration request over a Configuration
Direct VC. The LECS looks up the desired ELAN in its database. Once the ELAN is found, the
LECS responds with the ATM address of the respective LES, the type of LAN being emulated,
the ELAN name, and the MTU of the ELAN.
•
Once the LEC has obtained the ATM address of the LES, it is ready to check in and become a member of
the ELAN. The LEC opens a direct SVC with the LES and registers its own ATM address and MAC
address with the LES database. The LEC can also register any other MAC addresses directly connected
to the LEC. The LEC will then become a proxy for these devices.
•
The BUS is the keystone to emulating a LAN, by providing the means to send a broadcast to all stations
in the ELAN. Instead of requiring each LEC to maintain a connection to every other LEC for broadcast
traffic to be flooded over, each LEC needs only to point broadcasts to the BUS. The BUS in turn
maintains a point-to-multipoint connection to every LEC in the ELAN for broadcast purposes.
To find the BUS, the LEC must learn its ATM address by sending the LES a LAN Emulation ARP
request (LE_ARP_REQUEST). This is only possible if the LES and BUS are located on the same device.
The LEC will attempt to query the BUS ATM address by using the broadcast MAC address of
0xFFFFFFFFFFFF. The LEC can then contact the BUS directly by building a VCC for sending
broadcast and multicast traffic. The BUS replies to the LEC and adds the LEC as a leaf node on its
broadcast point-to-multipoint connection.
•
Once the LEC has joined the ELAN and has formed the required ATM connections to the LANE
components, the LEC can communicate with other LECs in the ELAN. Building Data Direct VCs, or
direct connections, from one LEC to another completes this communication.
13.2.3 Address Resolution
Two types of address resolution can occur within ATM LANE: IP ARP and LAN Emulation ARP
(LE_ARP). IP ARP is associated with resolving between MAC addresses and IP addresses. LE_ARP occurs
when a LANE component needs to resolve between an NSAP address and a MAC address.
To perform MAC address resolution using IP ARP:
•
A workstation generates an ARP request broadcast on its local LAN switch port to find a MAC address.
•
The switch floods the broadcast out all VLAN ports, as well as to the ELAN associated with the VLAN. This flooding occurs on the switch's ATM module.
•
The LEC contacts the BUS with a broadcast frame to be delivered.
•
The BUS sends the broadcast to all LECs in the ELAN over its multipoint connection.
•
Each LEC in the ELAN receives the broadcast from the BUS and floods the broadcast out all local VLAN ports.
•
The destination station receives the ARP request and sends an ARP reply frame. Because the reply is not a broadcast, it is returned to the source via a Data Direct VC between the destination and source LECs.
If a workstation needs to contact another workstation and the workstation already knows the IP and MAC
addresses of the other target workstation, it must find the target workstation's LEC via its NSAP address so
that a Data Direct VC can be built between the LECs. LE_ARP is used to resolve the NSAP address.
To perform NSAP address resolution using LE_ARP:
•
The originating workstation sends a frame to the target workstation MAC address.
•
The originating workstation's LEC sends an LE_ARP request to the LES for the target workstation's NSAP address.
•
The LES looks up the target workstation's MAC address in its MAC/NSAP address table. If the LES finds the NSAP entry, it replies to the originating workstation's LEC with the address.
•
If the NSAP entry is not in the table, the LES forwards the LE_ARP request on to all ELAN LECs over its multipoint control connection.
•
The LEC where the target workstation is attached has the MAC address in its bridging table and sends an LE_ARP reply back to the LES with its NSAP address.
•
The originating workstation's LEC can then build a Data Direct VC to the target workstation's LEC, and data transfer can commence.
13.2.4 LANE Component Placement
LANE is built upon LECS, LES, BUS, and one or more LEC components that can be configured on any
device in the network that supports LANE, such as Catalyst switch LANE modules, Cisco routers with ATM
Interface Processors (AIPs), and Cisco LightStream 1010 ATM switches. However, some consideration
needs to go into placing LANE components on Cisco devices:
•
The LECS is not very CPU-intensive because it is only consulted when a LEC is initializing and looking
for the appropriate LES. Therefore any Cisco LANE device can be used for the LECS. However, the
LECS is the central LANE database for all ELANs. Therefore, the LECS should be placed on a device
that is highly available to all LECs in the network. This could be on a centrally located ATM switch or a
distribution layer Catalyst switch LANE module.
•
The LES and BUS should be configured as a single unit on the same device. The BUS accepts and
sends all broadcast traffic for the ELAN which increases as the number of LECs and workstations grow.
The BUS should therefore always be configured on the most robust Catalyst switch in the network, so
that its function does not hamper other switching duties of the switch.
•
A LEC must be configured on every LANE device where ELAN connectivity is needed. Each ATM
edge device will need a LEC. Although an ATM switch does not require a LEC to switch
native ATM traffic, it will need its own LEC to process any management traffic such as IP address
assignment, syslog, or SNMP support.
13.2.5 LANE Component Redundancy (SSRP)
Although LANE can be implemented as separate components dispersed throughout a network, each
component is still a single point of failure. To address this problem, Cisco has implemented two redundancy
protocols called the Simple Server Redundancy Protocol (SSRP) and the Fast Simple Server Redundancy
Protocol (FSSRP) that allow multiple LECS, LES, and BUS components. SSRP provides communication
between the primary active component and one or more standby components so that the standby can take
over if the primary component fails. Under this protocol, only one active LANE component is allowed at
any one time. The Fast Simple Server Redundancy Protocol (FSSRP) allows multiple LES/BUS pairs to be
defined and active at any one time. In addition, redundant LECS components are provided by configuring
the list of multiple LECS NSAP addresses on all ATM switches. The switches then provide the address of
the first active LECS to LANE devices when the switches request the LECS address via ILMI. Similarly, a
list of redundant LES/BUS components can be configured in the LECS database for an ELAN. Under
normal SSRP, the LECS will provide the next available address in the list when a LEC requests it. FSSRP
allows up to four LES/BUS pairs to be active at one time. LECs that are FSSRP-aware build VCs to every
LES/BUS pair automatically.
13.3 LANE Configuration
Each LANE component is dependent upon another component; therefore, the order that the components are
configured on Cisco Catalyst switches is important. On Cisco ATM devices, ELANs are configured on
ATM subinterfaces. This configuration makes it possible to support many ELANs over a single ATM link.
The LANE components necessary for a specific ELAN must also be configured on the respective
subinterface for that ELAN. The LECS, which exists for all ELANs, must be configured on the major ATM
interface, usually ATM 0, because it keeps a database for all ELANs. Each LEC must be configured on a
different subinterface. The LES/BUS pairs must be configured on the subinterfaces where their respective
ELANs are present.
NSAP addresses on Cisco devices can be either manually configured or automatically generated. Automatic
generation is used most often. The LANE components are also given automatic NSAP addresses, according
to the scheme shown in Table 13.1.
TABLE 13.1: Automatic NSAP Address Generation for LANE Components
LANE Component
Prefix
ESI
Selector
LEC
From ATM switch
MAC address
ATM subinterface
LES
From ATM switch
MAC address + 1
ATM subinterface
BUS
From ATM switch
MAC address + 2
ATM subinterface
LECS
From ATM switch
MAC address + 3
.00
You can view a listing of the automatically generated NSAP addresses on any LANE-capable switch module
before configuration. To find the LECS NSAP address, use the show lane default
command on the LANE module where the LECS will be configured. This command lists all LANE
components with their automatically generated NSAP addresses.
13.3.1 Configuring the LES and BUS
The LES and BUS for an ELAN must be located on the same device and must use the same ATM
subinterface. You use the following commands to configure both LES and BUS components for an ELAN:
ATM(Config)# interface atm number.subinterface multipoint
ATM(Config-subif)# lane server-bus ethernet elan_name
The subinterface number (subinterface) can be arbitrarily chosen. However, each subinterface, and thus each
ELAN, is segmented from the others; therefore, you can configure a different LES/BUS pair on one or more
subinterfaces. Each pair will operate only for its assigned ELAN. The elan_name parameter is a case-sensitive
text string that identifies the name of the ELAN. This name must be defined identically in all LANE
components.
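Following the template above, a LES/BUS pair for a hypothetical ELAN named elan101 might be configured
on subinterface ATM 0.101 (the subinterface number and ELAN name are illustrative):

ATM(Config)# interface atm 0.101 multipoint
ATM(Config-subif)# lane server-bus ethernet elan101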
13.3.2 Configuring the LECS
The LECS configuration is a two-stage process that must be performed on a major ATM interface, not on a
subinterface: first you must build the LECS database of ELANs and their associated LES NSAP addresses;
then you must enable the LECS on the ATM interface.
•	To configure the LECS database, use the following set of commands:

ATM(Config)# lane database database_name
ATM(lane-config-database)# name elan1_name server-atm-address les1_nsap_address
ATM(lane-config-database)# name elan2_name server-atm-address les2_nsap_address
ATM(lane-config-database)# name ...
The database_name identifies the LECS database as a whole. Several LECS databases can be defined,
each with a unique name, and applied to LECS components on individual ATM major interfaces.
Usually, one LECS and one database are sufficient on a LANE module. Each ELAN in the LANE
network must be defined with a single name database command using the NSAP address of that ELAN's
LES. You can identify the NSAP address of the LES on a switch using the show lane default
command.
•	To enable the LECS on the major ATM interface, use the following set of commands:
ATM(Config)# interface atm number
ATM(Config-if)# lane config database database_name
ATM(Config-if)# lane config [ auto-config-atm-address ]
The auto-config-atm-address option instructs the LANE module to use the automatically generated
NSAP addresses for the LECS component.
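As an illustration, a LECS database for two hypothetical ELANs, elan101 and elan102, could be built and
then enabled on the major interface as follows (the database name, ELAN names, and NSAP placeholders
are illustrative):

ATM(Config)# lane database campus_db
ATM(lane-config-database)# name elan101 server-atm-address les101_nsap_address
ATM(lane-config-database)# name elan102 server-atm-address les102_nsap_address
ATM(lane-config-database)# exit
ATM(Config)# interface atm 0
ATM(Config-if)# lane config database campus_db
ATM(Config-if)# lane config auto-config-atm-address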
13.3.3 Configuring Each LEC
You must configure a LEC for each ELAN that a device participates in. The LEC configuration also
specifies which VLAN the ELAN will be bridged to on the switch. Each LEC is configured on a different
ATM subinterface, using the following commands:
ATM(Config)# interface atm number.subinterface multipoint
ATM(Config-subif)# lane client ethernet vlan_number elan_name
The vlan_number references an existing VLAN number on the local switch. The elan_name references the
name of an existing ELAN on the local LANE module. The two are then bridged on the LANE module and
become a single broadcast domain.
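For example, to bridge a hypothetical VLAN 101 to the ELAN elan101 on subinterface ATM 0.101 (the
numbers and names are illustrative):

ATM(Config)# interface atm 0.101 multipoint
ATM(Config-subif)# lane client ethernet 101 elan101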
13.3.4 Viewing the LANE Configuration
The configuration of each individual LANE component is straightforward. However, because each
component can be placed on a separate device, you must ensure that all of the components are
communicating and operating properly. Catalyst LANE modules offer a number of commands that can be
used to display and debug LANE configurations and status. These are:
•	show lane default, which can be used to view the default NSAP addresses for the local LANE module.
•	show lane bus, which can be used to view the status of a BUS.
•	show lane server, which can be used on the LES machine to view the status of an LES.
•	show lane database, which can be used on the LECS machine to view the current LECS database.
•	show lane client, which can be used to view the status of a LANE Client. Each LEC that is present on the local switch is listed with this command.
14. InterVLAN Routing
VLANs are typically configured on Layer 2 switches to form broadcast domains. VLANs can exist in one or
more switches through the use of trunking. VLANs also usually represent subnetworks of Layer 3 protocols.
Since Layer 2 switches do not use Layer 3 addressing to make switching decisions, a Layer 3 decision is
needed to move packets between VLANs. The latter can be provided by InterVLAN routing, multilayer
switching (MLS), or Cisco Express Forwarding (CEF).
•	InterVLAN routing is based on adding a route processor somewhere in the switched network to provide Layer 3 routing. Every packet sent from one VLAN to another must pass through the router.
•	MultiLayer Switching (MLS) is based on the principle that the router only sees the first packet of a flow. Switching paths are then set up so that subsequent packets bypass the router and are switched by a more efficient path. MLS is discussed in more detail in Section 15.
•	Cisco Express Forwarding (CEF) is a distributed switching mechanism that keeps copies of route cache information in several different forms to be used for efficient switching. Catalyst switches can hand off packets to a CEF-capable router for processing, as in interVLAN routing. Some Catalyst platforms implement CEF directly in hardware. CEF is discussed in more detail in Section 16.
Routers have feature sets that include intelligent, dynamic routing protocols for packet transport and packet
filtering capabilities. Additional connectivity enhancements, such as DHCP relaying, Network Address
Translation, Quality of Service, and policy-based routing add to router functionality. Routers also offer
connectivity between LAN technologies and Wide Area Network technologies, connecting a broad range of
network media.
14.1 InterVLAN Routing Design
Several options are available when placing a route processor within a switch campus network. These options
are primarily based on the type of connectivity between the switches and the router, and location of the route
processor.
14.1.1 Routing with Multiple Physical Links
The simplest method of routing between VLANs is to use several physical links between switches and an
external router. Each link is configured for a single VLAN, so that there is a link for each VLAN to be
routed. Using one VLAN per link offers an intuitive approach to routing between VLANs as routers
associate each physical link with a subnetwork, and transport packets between links. Each link is also
segmented from the others, unless bridging arrangements are made within the router. This is useful when the
switches and router are already available and can be quickly connected, using a small number of VLANs.
No configuration is needed, except the usual interface addressing used on the router. However, when the
network grows, this method becomes problematic as every additional VLAN requires an additional physical
link to the router.
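As a sketch, two physical router interfaces might each serve one VLAN's subnet (the interface numbers and
addresses are illustrative):

Router(config)# interface fastethernet 0/0
Router(config-if)# ip address 10.1.10.1 255.255.255.0
Router(config-if)# no shutdown
Router(config)# interface fastethernet 0/1
Router(config-if)# ip address 10.1.20.1 255.255.255.0
Router(config-if)# no shutdown

Each switch port connected to the router is then assigned to the matching VLAN as a normal non-trunk access port.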
14.1.2 Routing over Trunk Links
A more robust routing approach is to use trunk links between the switches and routers, instead of multiple
physical links. Trunk links transport multiple VLANs over a single link; therefore, only one link to an
external router is required. A router connected to a switch by a single trunk link is referred to as a router on a
stick. However, a router can also connect to several switches using trunk links. This connection provides
end-to-end Layer 3 connectivity between blocks of switches. Three types of trunk links can be used: IEEE
802.1Q, ISL, and ATM LANE.
14.1.2.1 802.1Q and ISL Trunks
Both IEEE 802.1Q and Inter-Switch Link (ISL), which are encapsulation methods that use Fast Ethernet or
Gigabit Ethernet as the physical media for trunking, can be used to transport multiple VLANs to a router.
Both 802.1Q and ISL trunk links identify each frame with a VLAN number. As a frame leaves a switch, the
frame is encapsulated and identified with its VLAN. When the router receives a frame over a trunk link, the
router decapsulates the frame and associates it with an interface assigned to the VLAN number that
identified the frame. Frames from the native VLAN of an 802.1Q trunk are not tagged with the VLAN
number.
To support various VLANs, individual subinterfaces are configured with 802.1Q or ISL encapsulation, and a
VLAN number. 802.1Q and ISL trunks offer the advantage of scalability because a single link can transport
many VLANs; however, some CPU overhead is involved as the router processes the encapsulation.
Therefore, the router cannot use its most efficient packet switching method for packet forwarding.
Furthermore, trunking encapsulations also require some link bandwidth overhead; ISL adds 30 bytes of
encapsulation (a 26-byte header and a 4-byte trailer) to each frame, while 802.1Q adds a 4-byte tag to each frame.
14.1.2.2 ATM LANE
Switches can trunk VLANs to other switches over ATM links using the LAN Emulation (LANE) standard.
Switches with LANE modules can participate by operating local LANE Clients (LECs) for each VLAN-to-emulated-LAN
(ELAN) bridge. The LANE modules must also communicate with the other LANE
components in the network to become active members in the emulated LAN. If only switches are involved in
LANE, each ELAN remains isolated from other ELANs. This isolation follows the principle of VLANs,
which require a Layer 3 routing function to move data between VLANs. Routing can be performed between
ELANs trunked over LANE by adding an ATM interface to an external router.
The router can use a single ATM link to connect to an ATM switch. It cannot connect directly to an ATM
link on a Catalyst switch. The router interacts with LANE in the same manner as a switch does. A major
ATM interface on the router is broken up into logical subinterfaces. The major interface can be configured
with an LECS, if required. The individual subinterfaces represent single ELANs and can be configured with
LES/BUS pairs or a LEC for the ELAN. Packets coming in over a LEC on one subinterface are processed by
the router and forwarded out another LEC on a different subinterface, depending on the Layer 3 addresses
and other routing decisions.
14.2 Routing with an Integrated Router
Early approaches to interVLAN routing used external routers and physical links. Further developments have
moved the route processor inside the switch, for convenience and tighter integration of the Layer 2 and 3
components. Cisco offers several types of integrated route processors in its Catalyst switch family. These
include
•	Route Switch Module (RSM) and Route Switch Feature Card (RSFC)
•	Multilayer Switch Feature Card (MSFC)
Each of these route processors can also participate in MLS, where the switch creates shortcut paths with the
assistance of a route processor. The RSM interfaces to the Catalyst 5000/5500 backplane through two direct
memory access (DMA) connections. The channels are referred to as Channel0 and Channel1, each providing
200-Mbps throughput. VLANs are supported through the use of virtual VLAN interfaces.
14.3 InterVLAN Routing Configuration
14.3.1 Accessing the Route Processor
To begin interVLAN routing configuration, the route processor must first be accessed. On an external router,
a terminal emulator program can be used to connect directly with the console port. If some IP connectivity is
already available on the router, a telnet session can be opened to the router. An integrated or internal route
processor must first be located in the switch chassis. Use the show module command on a Catalyst switch to
get a listing of the installed modules.
To establish a terminal session with the integrated route processor, use the session Catalyst switch
command with the module_number as an argument. The integrated route processors run Cisco IOS;
therefore, the user interface and command set may be different from that of the host switch. The session
command essentially starts a Telnet session with the route processor. By using the exit IOS command, the
router session is terminated and the switch session is resumed.
For future identification and readability, you should assign a hostname to the route processor at this point
using the hostname name command.
14.3.2 Establishing VLAN Connectivity
Next, the route processor will need to have its interfaces configured to support connectivity to the necessary
VLANs. This is accomplished using interfaces and commands that are unique to the route processor
hardware.
14.3.2.1 Establishing VLAN Connectivity with Physical Interfaces
External routers are connected to switches using traditional LAN media links, such as Ethernet, Fast
Ethernet, Gigabit Ethernet, or Token Ring. Individual physical router interfaces are configured for a single
network each and connected to non-trunk switch ports configured for VLAN membership. By way of the
physical connection, the router interface inherits the VLAN identity of the switch port.
To configure a physical interface, enter the configuration mode and the interface configuration mode; assign
a network address to the interface; and ensure that the interface is in operation. The commands required for
these operations are illustrated below:
Router# configure terminal
Router(config)# interface media module_number/port_number
Router(config-if)# description description
Router(config-if)# ip address ip_address subnet_mask
Router(config-if)# no shutdown
14.3.2.2 Establishing VLAN Connectivity with Trunk Links
When an external router is connected to a switch by a trunk link, the trunk must also be configured. The
physical interface on the router must be Fast Ethernet or Gigabit Ethernet to support trunking and VLAN
encapsulation. The physical interface is identified with a slot number and a major interface number. Once
trunking is enabled on the interface, each VLAN in the trunk is represented by a subinterface number. These
numbers can be arbitrarily chosen, but must be unique within the major interface number. For each VLAN
to be connected, trunking and VLAN encapsulation must be configured on the respective subinterface. Then
the subinterface is assigned a network address. The commands required for these operations are illustrated
below:
Router(config)# interface module_number/port_number.subinterface
Router(config-if)# encapsulation [ isl | dot1q ] vlan_number
Router(config-if)# ip address ip_address subnet_mask
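A worked router-on-a-stick example for two hypothetical VLANs, 10 and 20, over an 802.1Q trunk (the
interface, VLAN numbers, and addresses are illustrative):

Router(config)# interface fastethernet 1/0.10
Router(config-if)# encapsulation dot1q 10
Router(config-if)# ip address 10.1.10.1 255.255.255.0
Router(config-if)# interface fastethernet 1/0.20
Router(config-if)# encapsulation dot1q 20
Router(config-if)# ip address 10.1.20.1 255.255.255.0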
14.3.2.3 Establishing VLAN Connectivity with LANE
If an external router must be connected to campus switches via an ATM network, LANE must be used, and
the router must be equipped with an ATM Interface Processor, or module, that supports LANE. LANE
trunking is similar to Ethernet trunking in that a major interface is used for connectivity, and individual
subinterfaces correspond to single VLANs. ATM is configured on the major interface. Each ATM
subinterface is configured with a LEC per ELAN. The LANE module on a Catalyst switch performs
bridging between the VLAN on the LAN side and the ELAN on the ATM side. A router only needs to route
packets between ELANs directly. Assuming that the LECS, LES, and BUS reside on other network devices,
only LECs need to be configured on the router. Configuration begins with the ATM interface:
Router(config)# interface atm module_number/port_number
Router(config-if)# no ip address
Router(config-if)# atm pvc 1 0 5 qsaal
Router(config-if)# atm pvc 2 0 16 ilmi
Router(config-if)# interface atm module_number/port_number.subinterface multipoint
Router(config-if)# ip address ip_address subnet_mask
Router(config-if)# lane client ethernet elan_name
Router(config-if)# interface atm module_number/port_number.subinterface multipoint
Router(config-if)# ip address ip_address subnet_mask
Router(config-if)# lane client ethernet elan_name
Router(config-if)# ...
14.3.2.4 Establishing VLAN Connectivity with Integrated Routing Processors
Route processors internal to a Catalyst switch have no physical interfaces to connect and configure; instead,
these modules use internal connections into the switching backplane. The type of connection is related to the
specific module being used. However, the configuration is similar to that of an external router: To begin
configuration, the route processor module must be located and a terminal session opened. The show module
command displays a list of the installed modules. The session module-number command can be used to
open a Telnet session to the routing module. This command will open a session to the IOS command line
running on the route processor module.
The RSM and RSFC modules in a Catalyst 5000 use virtual VLAN interfaces to provide connectivity to
individual VLANs. These interfaces are configured with IP, IPX or AppleTalk network addresses. The RSM
and RSFC both make connections to the configured VLANs over their internal trunk ports to the switch
backplane. The following commands are used to configure the VLAN interfaces:
Switch(enable) session module_number
...
Router# configure terminal
Router(config)# interface vlan vlan_number
Router(config-if)# ip address ip_address subnet_mask
Router(config-if)# no shutdown
Router(config-if)# interface vlan vlan_number
Router(config-if)# ip address ip_address subnet_mask
Router(config-if)# no shutdown
...
14.3.3 Configure Routing Processes
Once connectivity has been configured between the switch and a route processor, you must also configure
routing. Routes are paths to distant networks known to the local route processor, along with path costs and the
addresses of next-hop route processors. In this way, a router hands off packets destined for a remote network
to a neighboring router that is closer to the destination. Routers are used by end-user devices when the
destination is not attached to the local VLAN. A route processor keeps a local table of known routes, metrics,
interfaces, and neighboring routers. The table entries can be derived from static route entries that are
manually configured or from dynamic routing protocols that run on the router. Dynamic routing protocols
communicate with other routers running the same protocols so that optimal routes can be determined and
advertised in real-time.
To configure dynamic routing on a route processor, use the following IOS commands:
Router(config)# ip routing
Router(config)# router ip_routing_protocol
Router(config-router)# network ip_network_number
Router(config-router)# network ip_network_number
...
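For example, classic RIP could be enabled for a hypothetical network 10.0.0.0 as follows (the protocol and
network number are illustrative; any supported dynamic routing protocol can be substituted):

Router(config)# ip routing
Router(config)# router rip
Router(config-router)# network 10.0.0.0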
14.3.4 Additional InterVLAN Routing Configurations
Once a route processor has been configured for interVLAN routing, end-user stations can use the processor.
Normally, an end-user device knows only about its local subnet and can communicate only with stations on
the local VLAN. To reach another station on a different VLAN, packets must be forwarded to a router;
therefore, each end-user device should be configured with the router's IP address on the local VLAN. The
local VLAN router's IP address is known as the default gateway. In addition, a switch needs to be
configured with a router's address. Unless the switch has the router's address, the switch will be unable to
forward management traffic off its local management VLAN. You can use the ip default-gateway
ip_address command to configure a default gateway on an IOS-based switch. On a CLI-based switch, a
default route must be configured using the set ip route default gateway_address command.
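For example, assuming a hypothetical router address of 10.1.10.1 on the local management VLAN, an
IOS-based switch would be configured with:

Switch(config)# ip default-gateway 10.1.10.1

and a CLI-based switch with:

Switch(enable) set ip route default 10.1.10.1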
15. Multilayer Switching
Multilayer Switching (MLS) forwards IP data packet flows at a much higher level of performance than
traditional routing. This preserves the CPU of the router without compromising functionality. MLS was
designed to increase the performance of a router by combining the routing functionality in hardware with a
switch. The frame forwarding and rewrite function is moved to hardware, and Layer 3 switching takes over
the task formerly done by the router. MLS uses the Route Switch Module (RSM), a directly attached external
router, and the switching engine. MLS can be implemented by using a Layer 3 switch or an external router
topology. The Layer 3 switch contains an RSM and the NetFlow Feature Card (NFFC).
A flow is a specific communication, consisting of multiple packets, between a network source and
destination within a specific time sequence.
MLS requires the following software and hardware:
•	Catalyst 2926G, 5000, or 6000 series switch with Supervisor Engine software Release 4.1(1) or later.
•	Cisco IOS Release 11.3(2)WA4(4) or later.
•	Supervisor Engine III or III F with the NFFC II, or Supervisor Engine II G or III G.
•	Route Switch Feature Card (RSFC).
•	Multilayer Switch Feature Card (MSFC).
15.1 Multilayer Switching Components
The Cisco MLS implementation includes three components: a Multilayer Switching Switch Engine (MLS-SE); a Multilayer Switching Route Processor (MLS-RP); and a Multilayer Switching Protocol (MLSP).
•	The Multilayer Switching Switch Engine (MLS-SE) is the switching entity that handles the function of moving and rewriting the packets. It is an NFFC residing on a Supervisor Engine III card in a Catalyst switch. It can also be a Supervisor Engine with the PFC on the 6000 series.
•	The Multilayer Switching Route Processor (MLS-RP) is an RSM, RSFC, MSFC, or an externally connected Cisco 7500, 7200, 4500, 4700, or 8500 series router with software that supports multilayer switching. The MLS-RP sends MLS configuration information and updates, such as the router Media Access Control (MAC) address, virtual LAN (VLAN) number, flow mask, and routing and access list changes.
•	The Multilayer Switching Protocol (MLSP) operates between the MLS-SE and MLS-RP to enable multilayer switching. MLSP is the method by which the RSM or router advertises routing changes and the VLANs or MAC addresses of the interfaces that are participating in MLS.
15.2 MLS-RP Advertisements
When an MLS-RP is enabled in the campus network, MLS-RP advertisements begin. The MLS-RP sends
out multicast Hello messages every 15 seconds to all switches in the network. The advertisement message
consists of the MAC addresses used by the MLS-RP on its interfaces that are participating in MLS, access
list information, and additions and deletions of routes. MLSP uses the Cisco Group Management Protocol
(CGMP) multicast address, which ensures interoperability with the Cisco switches in the network, as the
destination address of the Hello message. Although this address is the same as that used by CGMP, the
message contains a different protocol type so the switch can distinguish these messages from other multicast
packets.
All switches in the network receive the Hello message, although only Layer 3 switches process the message.
Any switches that are not Layer 3 capable pass the frames on to any downstream switches. When an
MLS-SE receives the frame, the device extracts the MAC addresses in the frame, along with the associated
interface or VLAN ID for that address. The MLS-SE records the addresses of the MLS-RPs in the MLS-SE
content-addressable memory (CAM) table.
The MLS-SE then assigns an XTAG to each MLS-RP attached to the switch. The XTAG is a one-byte
value attached to the MAC address of each attached MLS-RP. These values are used to differentiate
between MLS-RPs when more than one MLS-RP is available. The XTAG is also used to delete
entries from the Layer 3 table when an MLS-RP fails or exits the network.
The Switching Engine (SE) is involved in the MLS-RP advertising process to maintain the cache for MLS
flows. MLS caching is a process that occurs based on individual flows. Packets in a flow are compared to
the cache, which is based on one-way flows with a flow in the reverse direction being regarded as another
flow. If the cache has an entry that is a match for the packet, the SE switches the packet instead of passing it
to the router. If it does not match an entry in the cache, a process occurs that is designed to create an entry in
the cache.
15.3 Configuring Multilayer Switching
The process of configuring multilayer switching involves a number of steps. Before you can configure MLS
for a specific VLAN or interface, you must globally enable the MLSP that operates between the route
processor and the switch. To enable MLSP on the route processor, enter the following command in global
configuration mode:
Router(config)#mls rp ip
You can use the no mls rp ip command in global configuration mode to disable MLS on the route
processor. In Cisco's implementation of MLS, IP, IPX, and IP multicast packets are Layer 3 switched. Any other
packets are routed as in a non-Layer 3 switched network. MLS is a form of interVLAN routing. Multilayer
switches make forwarding decisions based upon which ports are configured for which VLANs. Internal
route processors and ISL-configured links use VLAN IDs to identify interfaces. External route processor
interfaces have knowledge regarding subnets but not VLANs. Therefore, MLS requires that each external
route processor interface have a VLAN ID assigned to it. To assign a VLAN ID to a route processor
interface, use the following commands in interface configuration mode:
Router (config)#interface interface_number
Router (config-if)#mls rp vlan-id vlan_id_number
In the second command, vlan_id_number represents the VLAN assigned to the interface specified in the
interface_number argument of the interface command.
To remove an interface from a VLAN, use the no mls rp vlan-id vlan_id_number command. This
removes the VLAN ID from an interface and, in so doing, disables MLS for that interface.
Once you have determined which route processor interfaces will be MLS interfaces, you must add the
interfaces to the same VTP domain as the switch. To place an external route processor interface in the same
VTP domain as the switch, use the following commands in interface configuration mode:
Router(config)# interface interface_number
Router(config-if)# mls rp vtp-domain domain_name
For an ISL interface, use the mls rp vtp-domain command only on the primary interface, as all
subinterfaces that are part of the primary interface inherit the VTP domain of the primary interface.
Use the no mls rp vtp-domain domain_name command to remove the MLS interface from a VTP domain.
You can use the show mls rp vtp-domain domain_name command to see domain information for a
specific VTP domain.
Once you have put an interface into a particular VTP domain, you must enable MLS. MLS must be enabled
on every interface that will participate in Layer 3 switching. On a router or RSM interface, use the following
command in interface configuration mode in order to enable MLS:
Router (config-if)#mls rp ip
You can use the no mls rp ip command to disable MLS on an interface.
When an RSM or router is configured to participate in MLS, the device uses the MLSP to send Hello
messages, advertise routing changes, and announce the VLANs or MAC addresses of those interfaces on the
devices participating in MLS. One interface on the MLS-RP must be identified as the management interface
through which MLSP packets are sent and received. The MLSP management interface can be any MLS
interface connected to the switch. Only one management interface needs to be specified; if no management
interface is configured, MLSP messages will not be sent. Multiple interfaces on the same route
processor can be configured as management interfaces, although this increases the management overhead
per route processor. To identify a management interface on an RSM or router, use the following command
in interface configuration mode:
Router(config-if)#mls rp management-interface
You can disable the management interface by using the no mls rp management-interface command in
interface configuration mode.
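Putting the steps above together, a minimal MLS-RP configuration on an external router interface might
look like this (the interface number, VLAN ID, and VTP domain name are illustrative):

Router(config)# mls rp ip
Router(config)# interface fastethernet 1/0
Router(config-if)# mls rp vlan-id 10
Router(config-if)# mls rp vtp-domain campus
Router(config-if)# mls rp ip
Router(config-if)# mls rp management-interface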
You can use the show mls rp command in privileged exec mode to verify the MLS configuration for an
MLS-RP. This command identifies each MLS-RP to the switch by both the MLS ID and MLS IP address of the route
processor. The MLS ID is the MAC address of the route processor. The MLS-RP automatically selects the
IP address of one of its interfaces and uses that IP address as its MLS IP address.
To verify the MLS configuration for a specific interface, enter the following command in privileged exec
mode:
Router#show mls rp interface interface_number
This command displays information that indicates whether MLS is configured on the specified interface; the
VTP domain in which the VLAN ID resides; and whether the interface is configured as the management
interface for the MLS-RP.
15.4 Flow Masks
The MLS-SE uses flow mask modes to determine how packets are compared to MLS entries in the MLS
cache. The flow mask mode is based on the access lists configured on the MLS router interfaces. The
MLS-SE learns the flow mask through MLSP messages from each MLS-RP for which the MLS-SE is performing
Layer 3 switching. The MLS-SE supports only one flow mask for all MLS-RPs that it services.
If the MLS-SE detects different flow masks from different MLS-RPs for which it is performing
Layer 3 switching, the MLS-SE changes its flow mask to the most specific flow mask detected.
The MLS-SE supports three flow mask modes: Destination-IP, Source-Destination-IP, and IP-Flow.
•	Destination-IP is the default flow mask mode and is the least specific flow mask. This mode is used if no access lists are configured on any of the MLS router interfaces.
•	Source-Destination-IP is the entry that the MLS-SE maintains for each source and destination IP address pair. All flows between a given source and destination use this MLS entry, regardless of the IP protocol ports. This mode is used if a standard access list is configured on any of the MLS interfaces.
•	IP-Flow represents the most specific flow mask. An IP-Flow entry includes the source IP address, destination IP address, protocol, and protocol ports. This mode is used if there is an extended access list on any MLS interface.
When the MLS-SE flow mask changes, the entire MLS cache is purged. You can set a flow mask on the
MLS-SE without applying an access list on the route processor. To set the flow mask on the MLS-SE
without setting an access list on a route processor interface, enter the following command in privileged exec mode:
set mls flow [ destination | destination-source | full ]
The destination keyword indicates that you are applying Destination-IP mode, the destination-source
keyword indicates that you are applying Source-Destination-IP mode, and the full keyword indicates
that you are applying IP-Flow mode.
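For example, to force Source-Destination-IP mode on the switch without configuring any access list:

Switch(enable) set mls flow destination-source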
15.5 Configuring the MLS-SE
MLS is enabled by default on Catalyst series switches that support Layer 3 switching. There are, however, a
few instances where configuring the switch is necessary, such as when the MLS-RP is an external router.
Enter the set mls disable command to disable MLS on the MLS-SE. This command stops the MLS-SE
from processing the MLSP messages from the MLS-RP and purges all existing MLS cache entries in the
switch.
If a switch has been disabled for Layer 3 switching, you can use the set mls enable command in privileged
exec mode to re-enable it.
15.5.1 MLS Caching
Because the MLS cache has a size limitation, MLS entries will be purged from the cache under certain
conditions. This purging, or aging, process takes effect when candidate entries remain in the cache for
five seconds with no enabled entry before timing out; when a flow for an entry has not been detected for the
specified aging time; when access lists are applied; when routing changes occur; or when MLS is disabled on the
switch.
The amount of time an MLS entry remains in the cache, which is called the aging time, is adjustable. To
adjust the value of the aging time you can use the following command in privileged exec mode:
Switch(enable)#set mls agingtime agingtime
The range of the aging time value is from 8 to 2032 seconds and the default value is 256 seconds.
Some MLS flows are sporadic or short-lived, such as packets that are sent to or received from a Domain
Name System (DNS) or Trivial File Transfer Protocol (TFTP) server which may be closed after one request
and one reply cycle. The MLS entry for such a packet will still consume cache space until the entry is
purged. To overcome this, a different type of aging mechanism, called fast aging, can be implemented. In
fast aging, the entry is removed from the cache if the MLS-SE does not detect a specified number of packets
in a certain time period. To configure the fast aging option, enter the following command in privileged exec mode:
Switch(enable)# set mls agingtime fast fast_agingtime pkt_threshold
The allowable fast_agingtime values are 32, 64, 96, or 128 seconds while the default is 0 seconds. The
pkt_threshold argument indicates the number of packets that must be detected within the specified amount
of time. The allowable pkt_threshold values are 0, 1, 3, 7, 15, 31 or 63 packets, and the default is 0
packets.
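For example, a hedged sketch using the syntax above (the values are illustrative, not recommendations): purge an MLS entry if fewer than 7 packets are detected for that flow within 32 seconds:

```
Switch(enable)#set mls agingtime fast 32 7
```

This would age out one-shot flows such as single DNS request/reply exchanges while leaving long-lived flows subject only to the normal aging time.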
15.5.2 Verifying MLS Configurations
You can verify MLS configurations by using the show mls command in privileged exec mode. This
command displays information about MLS on a MLS-SE, including the status of MLS; the aging time for an
MLS cache entry; the fast aging time and the packet threshold for a flow; the flow mask; the total packets
switched; the number of active MLS entries in the cache; whether and for which port and host NetFlow data
export is enabled; and the MLS-RP IP address, MAC address, XTAG, and supported VLANs.
You can also display information about a specific MLS-RP by using the show mls rp command and
specifying the IP address of the target MLS-RP.
15.5.3 External Router Support
If the switch supports an externally attached MLS-RP, the switch must be manually configured to recognize
that MLS-RP. Use the following command in privileged exec mode on the switch to manually include an
external MLS-RP:
Switch (enable) set mls include ip_address
15.5.4 Switch Inclusion Lists
Use the following command in privileged exec mode to display the contents of the switch inclusion list to
determine which MLS-RPs are participating in MLS with the MLS-SE:
Switch (enable) show mls include
This command displays the IP addresses of all MLS-RPs that are participating in MLS with the MLS-SE.
15.5.5 Displaying MLS Cache Entries
Use the following command in privileged exec mode to display the MLS cache entries:
Switch (enable) show mls entry
This command can be further defined to show specific MLS cache entries by using certain parameters.
These parameters are listed in Table 15.1.
TABLE 15.1: Displaying Specific MLS Cache Entries

MLS Cache Entry                    Command
Specific destination IP address    show mls entry destination ip_address
Specific source IP address         show mls entry source ip_address
Specific MLS-RP                    show mls entry rp ip_address
Specific IP flow                   show mls entry flow protocol source_port destination_port
Use the clear mls entry command in privileged exec mode to remove entries from the MLS cache.
16. Cisco Express Forwarding (CEF)
Cisco Express Forwarding (CEF) is an advanced Layer 3 IP switching technology. Although you can use CEF in any part of a network, it is designed for high-performance, highly resilient Layer 3 IP backbone switching. It optimizes network performance and scalability for networks with large and dynamic traffic patterns, such as the Internet, and for networks characterized by intensive Web-based applications or interactive sessions. CEF offers three benefits: improved performance, because it is less CPU-intensive than fast-switching route caching; scalability; and resilience, because CEF can switch traffic more efficiently than typical demand caching schemes.
16.1 CEF Components
Information conventionally stored in a route cache is stored in two data structures for CEF switching: the Forwarding Information Base (FIB) and the adjacency tables. These data structures provide optimized lookup for efficient packet forwarding.
16.1.1 Forwarding Information Base (FIB)
CEF uses a FIB to make IP destination prefix-based switching decisions. The FIB is similar to a routing
table and maintains a mirror image of the forwarding information contained in the IP routing table. When
routing or topology changes, the IP routing table is updated, and those changes are reflected in the FIB. The
FIB also maintains next-hop address information based on the information in the IP routing table. Because
there is a one-to-one correlation between FIB entries and routing table entries, the FIB contains all known
routes and eliminates the need for route cache maintenance that is associated with switching paths.
16.1.2 Adjacency Tables
Nodes in the network are adjacent if they are a single hop across a link layer from each other. CEF uses
adjacency tables to maintain Layer 2 next-hop addresses for all FIB entries. The adjacency table is populated
as adjacencies are discovered. Each time an adjacency entry is created, a link-layer header for that adjacent
node is computed and stored in the adjacency table. Once a route is determined, it points to a next hop and
corresponding adjacency entry. It is subsequently used for encapsulation during CEF packet switching.
However, a route might have several paths, such as when a router is configured for load balancing and/or
redundancy. For each resolved path, a pointer is added for the adjacency corresponding to the next-hop
interface for that path. This mechanism is also used for load balancing across several paths.
In addition to adjacencies associated with next-hop interfaces, other types of adjacencies are used to
facilitate switching when exception processing conditions exist. When the prefix is defined, prefixes
requiring exception processing are cached with one of the special adjacencies listed in Table 16.1.
TABLE 16.1: Adjacency Types for Exception Processing

Null adjacency: Drops packets destined for a Null interface. This can be used as a form of access filtering.

Glean adjacency: When a router is connected directly to several hosts, the FIB table on the router maintains a prefix for the subnet rather than for the individual host prefixes. The subnet prefix points to a glean adjacency. When packets need to be forwarded to a specific host, the adjacency database is "gleaned" for the specific prefix.

Punt adjacency: Packets that require special handling, or that use features not yet supported in conjunction with CEF switching paths, are forwarded (punted) to the next higher switching level for handling.

Discard adjacency: Discards packets.

Drop adjacency: Drops packets, but checks the prefix.
16.2 CEF Operation Modes
CEF can be enabled in one of two modes: Central CEF Mode and Distributed CEF Mode (dCEF).
• When Central CEF Mode is enabled, the CEF FIB and adjacency tables reside on the route processor,
and the route processor performs the express forwarding. This mode can be used when line cards, such
as VIP line cards or GSR line cards, are not available for CEF switching or when you need to use
features not compatible with distributed CEF switching.
• When Distributed CEF Mode (dCEF) is enabled, line cards maintain an identical copy of the FIB and
adjacency tables. The line cards perform the express forwarding between port adapters, relieving the
RSP of involvement in the switching operation. dCEF uses an Inter Process Communication (IPC)
mechanism to ensure synchronization of FIBs and adjacency tables on the route processor and line cards.
Note: The Cisco 12000 series Routers operate only in dCEF mode; dCEF
switching cannot be configured on the same VIP card as distributed fast
switching, and dCEF is not supported on Cisco 7200 series routers.
16.3 Configuring Cisco Express Forwarding
To configure CEF you simply need to enable CEF or dCEF on a router. There are, however, a number of
other optional configuration tasks that you can perform. These include configuring load balancing; and
configuring network accounting. You need not configure distributed tunnel switching, such as GRE tunnels,
for CEF as it is enabled automatically when you enable CEF or dCEF.
To enable CEF, use the ip cef command in global configuration mode. You must enable dCEF when you want the line cards to perform express forwarding so that the route processor can handle routing protocols or switch packets from legacy interface processors.
To enable dCEF, use the ip cef distributed command in global configuration mode. You can disable dCEF by using the no ip cef distributed command.
When you enable CEF or dCEF globally, all interfaces that support CEF are enabled by default. If you want
to turn off CEF or dCEF on a particular interface you must disable it for that interface. You can use the no
ip route-cache cef command in interface configuration mode to disable CEF or dCEF on an interface.
If at a later stage you want to re-enable CEF or dCEF on that interface, use the ip route-cache cef command in
interface configuration mode.
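The commands above can be combined as follows; this is a minimal sketch, and the interface name is hypothetical:

```
Router(config)# ip cef                       ! enable CEF globally
Router(config)# interface Serial0/0          ! hypothetical interface
Router(config-if)# no ip route-cache cef     ! disable CEF on this interface only
Router(config-if)# ip route-cache cef        ! later, re-enable CEF on the interface
```

Remember that enabling CEF or dCEF globally enables it on all supporting interfaces, so per-interface commands are only needed for exceptions.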
16.3.1 Configuring Load Balancing for CEF
You can configure load balancing on a per-destination or per-packet basis. Load-balancing decisions are made on the outbound interface; therefore, you must configure load balancing on outbound interfaces.
16.3.1.1 Per-Destination Load Balancing
Per-destination load balancing is enabled by default when you enable CEF. Per-destination load balancing
allows the router to use multiple paths to achieve load sharing. Packets for a given source-destination host
pair are guaranteed to take the same path, even if multiple paths are available. Traffic destined for different
pairs tends to take different paths. Because per-destination load balancing depends on the statistical distribution of traffic, load sharing becomes more effective as the number of source-destination pairs increases.
You should disable per-destination load balancing when you want to enable per-packet load balancing. To
do this, use the no ip load-sharing per-destination command in interface configuration mode.
16.3.1.2 Per-Packet Load Balancing
Per-packet load balancing allows the router to send successive data packets over paths without regard to
individual hosts or user sessions. It uses a round-robin method to determine which path each packet takes to
the destination. Per-packet load balancing ensures balancing over multiple links. However, per-packet load
balancing via CEF is not supported on Engine 2 Gigabit Switch Router (GSR) line cards (LCs).
You can use the ip load-sharing per-packet command in interface configuration mode to enable per-packet load balancing.
16.3.2 Configuring Network Accounting for CEF
Network accounting is the process of collecting statistics and information about patterns of network use.
To collect network accounting information for CEF, you can use either the ip cef accounting per-prefix command or the ip cef accounting non-recursive command in global configuration mode.
When you enable network accounting for CEF from global configuration mode, accounting information is collected at the route processor when CEF mode is enabled. When network accounting is enabled for dCEF, information is collected at the line cards.
The information collected through network accounting can be viewed by using the show ip cef command in exec mode.
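For example, a hedged sketch of enabling per-prefix accounting and then inspecting what has been collected:

```
Router(config)# ip cef                      ! CEF must be enabled for accounting
Router(config)# ip cef accounting per-prefix
Router(config)# end
Router# show ip cef                         ! view the collected information
```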
17. The Hot Standby Router Protocol (HSRP)
Hot Standby Router Protocol (HSRP) is a Cisco-proprietary protocol that is designed to provide a level of
fault tolerance in the network. It protects against a failure of the first-hop router by picking up where the
default router left off. HSRP not only allows for a layer of redundancy, but also has the capability of
providing load sharing.
17.1 Traditional Redundancy Methods
There are various traditional redundancy methods that have been used, but for one reason or another are
ineffective in some failure modes. These methods include default gateways, proxy ARP, RIP, and IRDP.
17.1.1 Default Gateways
In a network where a router is responsible for routing packets for a subnet, the subnet will become isolated should the router go down, because the attached computers do not have the capability to collect and exchange routing information. These devices are configured with a single default gateway IP address. If the router that is the default gateway fails, each device is limited to communicating only on the local subnet and is disconnected from the rest of the network. Even if a redundant router exists that could serve as a default gateway, there is no way for the computer to switch dynamically to a new default gateway when the present default gateway fails.
17.1.2 Proxy ARP
Proxy ARP is used when a router responds with its own MAC address as a proxy for some other host on a different subnet; a host computer sending a packet to a remote station thus considers the destination to be on the same subnet as itself. Because the router knows, via its routing table, where the destination is located, it forwards the packets to the real destination. However, should the router go down while the host computer continues to send packets destined for the remote station, those packets will be dropped. A reboot of the host computer will fix the problem by clearing the ARP cache, or forcing another ARP request will pick up a failover router. Either way, however, there would be a significant delay in communications.
17.1.3 Routing Information Protocol (RIP)
The first routing protocol to discover available routers in networks was the Routing Information Protocol
(RIP). In this method, the workstation holds a routing table, which lists routes and associated next hops that
have a path to the destination. It is then up to the workstation to choose the best path. RIP is a distance-vector routing protocol, which means it relies on hop count. However, RIP has a limit of 15 hops and its network convergence is slow. In an unstable network, this can cause problems.
17.1.4 ICMP Router Discovery Protocol (IRDP)
Some newer IP hosts use the ICMP Router Discovery Protocol (IRDP) to find a new router when a route
becomes unavailable. IRDP is not a routing protocol but is an extension to ICMP. It provides a mechanism
for routers to advertise useful default routes. A host that runs IRDP listens for hello multicast messages from
the preferred default router. As long as the host detects these hello messages, the MAC address for the router
generating the hello messages is used as the destination MAC address by the host. The IRDP-based
advertisements are considered valid only for a predefined lifetime value. If a new advertisement is not
received during that lifetime, the router address is considered invalid and the host removes the
corresponding default route. The host then uses an alternate router, which may not represent the shortest path to the destination station.
17.2 Hot Standby Router Protocol
Hot Standby Router Protocol addresses the problem of first-hop failure for hosts that are configured with static default gateway addresses. Previously, a failure at the default gateway address would leave the host
unable to communicate outside of its own subnet. With HSRP, a set of routers work together to represent a
single virtual standby router. A failure of the active router would result in a switch to the standby router, and
packets would continue to be forwarded. Cisco routers use HSRP, which enables end stations to continue
communicating throughout the network even when the default gateway becomes unavailable. The standby
router group functions as a single router configured with a virtual IP and MAC address, distinct from the
physical routers in the network. Because the routers in the standby group route packets sent to a virtual
address, packets are still routed through the network even when the router originally forwarding the packets
fails.
If the primary or lead router of a group of HSRP routers fails, a standby router in the same group begins to
forward traffic for the HSRP group. The routers decide within the group which router forwards traffic for the
virtual address. At regular intervals, the routers exchange information to determine which routers are still
present and able to forward traffic. When routers are configured to be part of an HSRP group, the routers
recognize their own native MAC address, as well as the HSRP group MAC address. Routers whose Ethernet
controllers only recognize a single MAC address will use the HSRP MAC address when performing as the
active router and the burned-in address (BIA) when in standby mode or not speaking.
17.2.1 HSRP Group Members
The HSRP group consists of an active router, a standby router, a virtual router, and other routers. To
facilitate load sharing, a single router may be a member of multiple HSRP standby groups on a single subnet.
Each standby group emulates a single virtual router. However, there is a limit of 255 standby groups on any
given LAN.
Note: Some platforms do not support multiple HSRPs because of the single
MAC address per interface restriction. You can lift this restriction by using
the standby use-bia command.
17.2.2 Addressing HSRP Groups Across ISL Links
HSRP routers can provide for redundancy and load sharing across the same subnet. As of Cisco IOS 11.3,
HSRP routers can also provide for redundancy and load sharing across different subnets. For each standby
group, an IP address and a single well-known MAC address with a unique group identifier is allocated to the
group. The IP address of a group is in the range of addresses belonging to the subnet in use on the LAN.
However, the IP address of the group must differ from the addresses allocated as interface addresses on all
routers and hosts on the LAN, including virtual IP addresses assigned to other HSRP groups.
Running HSRP over ISL allows users to configure redundancy between multiple routers that are configured
as front ends for VLAN IP subnets. By configuring HSRP over ISLs, users can eliminate situations in which
a single point of failure causes traffic interruptions. This provides improvement in overall networking
resilience by providing load balancing and redundancy capabilities between subnets and VLANs. To
configure HSRP over an ISL link between VLANs, you must define the encapsulation type; configure the IP
address; and enable HSRP.
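The three steps above can be sketched on an ISL subinterface as follows; the interface, VLAN number, and addresses here are hypothetical:

```
Router(config)# interface FastEthernet0/0.10               ! hypothetical subinterface
Router(config-subif)# encapsulation isl 10                 ! step 1: define the encapsulation type
Router(config-subif)# ip address 10.1.10.2 255.255.255.0   ! step 2: configure the IP address
Router(config-subif)# standby 10 ip 10.1.10.1              ! step 3: enable HSRP for this VLAN
```

A peer router would carry a matching subinterface for VLAN 10 with a different interface address but the same virtual address, 10.1.10.1.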
17.3 HSRP Operations
17.3.1 The Active Router
One router in each group is elected to be the active router. The election process occurs through the sending
and receiving of hello messages. The hello message contains a priority level for the sending router. The
router with the highest standby priority in the group becomes the active router responsible for forwarding the
packets sent to the virtual router. If the priority level is the same for each router in the group, the first router
to come up and obtain the virtual router IP address becomes the active router.
17.3.2 Locating the Virtual Router MAC Address
The ARP process makes an association between Layer 3 network addresses and Layer 2 hardware addresses.
Each router maintains a table of resolved addresses. The router checks this ARP cache before attempting to
contact a device to determine if the address has already been resolved. The IP address and corresponding
MAC address of the virtual router is maintained in the ARP table of each router in an HSRP standby group.
17.3.3 Standby Router Behavior
The function of the active router is to forward packets sent to the virtual router. Another router in the group
is elected as the standby router. The active router assumes and maintains its active role through the
transmission of hello messages. Meanwhile the standby router monitors the operational status of the HSRP
group and assumes packet-forwarding responsibility if the active router becomes inoperable. The standby
router also transmits hello messages to inform all other routers in the group of the standby router's role and
status.
When the active router fails, it stops transmitting hello messages. If the HSRP group misses three hello
messages, it realizes that the active router is down and the standby router then assumes the role of the active
router. Because the new active router assumes both the IP and MAC addresses of the virtual router, the end
stations see no disruption in service. The end-user stations continue to send packets to the virtual router
MAC address, and the new active router delivers the packets to the destination. In the event that both the
active and standby routers fail, all routers in the group contend for the active and standby router roles, and
the highest priority router will become the active router.
17.3.4 HSRP Messages
All routers in a standby group send or receive HSRP messages. These messages are used to determine and
maintain the router roles within the group. HSRP messages are encapsulated in the data portion of User
Datagram Protocol (UDP) packets and use port number 1985. These packets are addressed to the all-routers multicast address (224.0.0.2) with a Time to Live (TTL) of one (1). Once the HSRP election process has completed, only the active and the standby routers send HSRP messages. The HSRP message contains the following fields:
• a Version field, which indicates the version of HSRP;
• an Op Code field, which describes the type of message contained in the HSRP message packet: Hello messages are sent to indicate that a router is running and capable of becoming either the active or standby router; Coup messages are sent when a router wants to become the active router; and Resign messages are sent when a router no longer wants to be the active router;
• a State field, which describes the current state of the router sending the message;
• a Hellotime field, which contains the approximate period between the hello messages that the router sends;
• a Holdtime field, which contains the amount of time for which the current hello message should be considered valid;
• a Priority field, which is used to elect the active and standby routers;
• a Group field, which identifies the standby group;
• an Authentication Data field, which contains a clear-text, eight-character password;
• and a Virtual IP Address field, which contains the IP address of the virtual router used by this group.
17.3.5 HSRP States
HSRP defines six states in which an HSRP configured router may exist. These states are the Initial State; the
Learn State; the Listen State; the Speak State; the Standby State; and the Active State.
• All routers begin in the Initial State. This state indicates that HSRP is not running and is entered via a configuration change or when an interface first comes up.
• In the Learn State, the router has not yet seen a hello message from the active router, nor learned the IP address of the virtual router, and is thus still waiting to hear from the active router.
• In the Listen State, the router knows the virtual IP address, but is neither the active router nor the standby router.
• In the Speak State, the router sends periodic hello messages and is actively participating in the election of the active and standby router. However, a router cannot enter the Speak State unless it has the IP address of the virtual router.
• In the Standby State, the router is the standby router and sends periodic hello messages.
• In the Active State, the router is currently the active router.
17.4 Configuring HSRP
The main components of the configuration are the HSRP standby interface, standby priority, standby preempt, the hello message timers, and interface tracking.
17.4.1 Configuring an HSRP Standby Interface
You use the following command in interface configuration mode to configure a router as a member of an
HSRP standby group:
Router(config-if)#standby [ group_number ] ip ip_address
In this command, you can include the optional group_number argument to indicate the HSRP group to which the interface belongs. By specifying a unique group_number in the standby command, you can create multiple HSRP groups. The default group is 0.
After the standby ip command is issued, the interface will change to the appropriate state.
To remove an interface from an HSRP group, you can use the no standby group_number ip command.
17.4.2 Configuring HSRP Standby Priority
The network administrator can assign a priority value to each router in a standby group, allowing the
administrator to control the order in which active routers for that group are selected. Use the following
command in interface configuration mode to set the priority value of a router:
Router(config-if)#standby group_number priority priority_value
Use the no standby priority command to reinstate the default standby priority value.
17.4.3 Configuring HSRP Standby Preempt
The standby router automatically assumes the active router role when the active router fails or is removed
from service. However, the new active router remains the forwarding router even when the former active
router with the higher priority regains service in the network. The former active router can be configured to
resume the forwarding router role from a router with a lower priority by using the following command in
interface configuration mode on the active router:
Router(config-if)#standby group_number preempt
After the standby preempt command is issued, the interface changes to the appropriate state.
17.4.4 Configuring the Hello Message Timers
An HSRP-enabled router sends hello messages to indicate that the router is operational and is capable of
becoming either the active or standby router. The hello message contains the priority of the router, as well as
a hellotime and holdtime value. The hellotime value indicates the interval between the hello messages
that the router sends. The holdtime value contains the amount of time that the current hello message is
considered valid. If an active router sends a hello message, receiving routers consider that hello message to
be valid for one holdtime.
The hellotime and the holdtime parameters are adjustable. However, the holdtime value should be at least
three times the hellotime value. Use the following command in interface configuration mode to configure
the time between hellos and the time before other group routers declare the active or standby router to be
nonfunctioning:
Router(config-if)#standby [ group_number ] timers hellotime holdtime
The hellotime argument indicates the hello time in seconds. This value is an integer from 1 through 255 and has a default of 3 seconds. The holdtime argument indicates the time, in seconds, before the active or standby router is declared to be down. This is also an integer from 1 through 255, but has a default value of 10 seconds.
Use the no standby group timers command to reinstate the default standby timer values.
17.4.5 HSRP Interface Tracking
The status of an interface directly affects which router needs to become the active router, particularly when
each of the routers in an HSRP group has a different path to resources within the campus network. Interface
tracking enables the priority of a standby group router to be automatically adjusted based on availability of
the interfaces of that router. When a tracked interface becomes unavailable, the HSRP priority of the router
is decreased. The HSRP tracking feature reduces the likelihood that a router with an unavailable key
interface will remain the active router.
17.4.6 Configuring HSRP Tracking
Use the following command in interface configuration mode to configure HSRP tracking:
Router(config-if)#standby [ group_number ] track type interface_number
[ interface_priority ]
The optional group_number argument for this command indicates the group number on the interface to
which the tracking applies. If no group_number is specified, the default group number (0) is assumed. The
type argument and the interface_number argument indicate the interface type and the interface number to
be tracked. Finally, the optional interface_priority argument indicates the amount by which the
HSRP priority for the router is decreased when the interface becomes disabled. The priority of the router is
also increased by this amount when the interface becomes available again. The default
interface_priority value is 10. You can use the no standby group_number track command to disable
interface tracking.
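Pulling the preceding configuration commands together, here is a hedged sketch of one router in a standby group; the interface names, group number, addresses, and priority value are hypothetical:

```
Router(config)# interface FastEthernet0/0
Router(config-if)# ip address 10.1.1.2 255.255.255.0
Router(config-if)# standby 1 ip 10.1.1.1            ! join group 1; virtual router address
Router(config-if)# standby 1 priority 110           ! higher than the default of 100
Router(config-if)# standby 1 preempt                ! reclaim the active role on recovery
Router(config-if)# standby 1 timers 3 10            ! hellotime 3 s, holdtime 10 s
Router(config-if)# standby 1 track Serial0/0 20     ! drop priority by 20 if Serial0/0 fails
```

With these values, a failure of Serial0/0 would lower this router's priority to 90, allowing a peer with priority 100 and preempt configured to take over as the active router.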
17.4.7 HSRP Status
Use the following command in privileged exec mode to display the status of the HSRP router:
Router#show standby [ type_number ] [ group_number ] [ brief ]
The brief option displays a summary of each standby group. If these optional interface parameters are not
indicated, the show standby command displays HSRP information for all interfaces.
17.5 Troubleshooting HSRP
The Cisco IOS implementation of HSRP supports the debug command. Enabling debug displays the HSRP
state changes and debugging information regarding transmission and receipt of HSRP packets. However, the
debug command can cause the system to become unusable due to the high CPU priority assigned to this
process. Use the following command in privileged exec mode to enable HSRP debugging:
Router#debug standby
The output of the debug standby command shows the various states of HSRP.
To disable the debugging feature, enter the no debug standby command, the no debug all command,
the undebug standby command, or the undebug all command.
18. Multicasts
The new campus networks support intranet and multimedia applications that operate between one sender and
a group of simultaneous receivers. Applications like these include transmitting all-hands messages to
employees, video and audio broadcasting, interactive video distance learning, transmitting data from a
centralized data warehouse to multiple departments, communication of stock quotes to brokers, and
collaborative computing. Multimedia traffic can traverse the network in the form of unicasts, broadcasts, or
multicasts. Each of these methods of transmission has a different effect on network bandwidth.
18.1 Unicast Traffic
In a unicast design, an application sends a separate copy of each packet to every client's unicast address, in a one-to-one relationship. If the unicast group is large and diverse, the potential to carry the same traffic multiple times is great. With the advances in technology, it is possible to provide every user with a unicast connection to the Internet. The concerns of network managers when it comes to unicast traffic are the number of user connections and the amount of replicated unicast transmissions.
Consider an IP/TV server, a streaming video server and application capable of both unicast and multicast
operation. If the server operates in unicast mode, it must send a separate TV stream to each client
requesting the application. These replicated unicast transmissions consume bandwidth within the network.
The path between server and client must also take into account the number of router and switch hops
between the two points: as routers are added to the path, the data is replicated across each link, consuming
router and switch bandwidth. Because of this, replicated unicast cannot scale to deliver traffic efficiently to
large numbers of end stations, but may be suitable for small numbers of destinations.
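The bandwidth arithmetic behind this scaling argument can be illustrated with a short sketch. The stream rate and client counts below are invented for illustration, not taken from the text:

```python
# Bandwidth cost of replicated unicast vs. a single multicast stream.
def unicast_bandwidth(stream_mbps, clients):
    """Replicated unicast: each client receives its own copy of the stream."""
    return stream_mbps * clients

def multicast_bandwidth(stream_mbps, clients):
    """Multicast: one copy leaves the server regardless of audience size."""
    return stream_mbps

stream = 1.5  # Mbps per video stream (assumed value)
for clients in (10, 100, 1000):
    print(clients, unicast_bandwidth(stream, clients),
          multicast_bandwidth(stream, clients))
```

For 100 clients, replicated unicast consumes 100 times the bandwidth of a single multicast stream on the server's link, which is the scaling problem the text describes.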
18.2 Broadcast Traffic
In a broadcast design, an application sends only one copy of each packet using a broadcast address. If this
technique is used, however, broadcasts must either be stopped at the broadcast domain boundary with a
Layer 3 device or transmitted to all devices in the campus network. Broadcasting a packet to all devices can
be inefficient if only a small group in the network needs to receive the packet. Broadcast multimedia is
dispersed throughout the network just like normal broadcast traffic. As with normal broadcasts, every client
has to process the broadcast multimedia data frame. However, multimedia broadcasts can reach as high as 7
Mbps or more of data. Even if an end station is not using a multimedia application, the device still processes
the broadcast traffic. This requirement can use most, if not all, of the allocated bandwidth for each device.
For this reason, the broadcast multimedia method is rarely implemented.
18.3 Multicast Traffic
The most efficient solution for transmitting multimedia is one in which a multimedia server sends one copy
of each packet, addressing each packet to a special multicast address. Unlike the unicast environment, a
multicast server sends out a single data stream to multiple clients. Unlike the broadcast environment, the
client device decides whether to listen to the multicast address. Multicasting saves bandwidth and controls
network traffic by forcing the network to replicate packets only when necessary. By eliminating traffic
redundancy, multicasting also reduces network and host processing.
Multicast traffic is handled at the transport layer using the User Datagram Protocol (UDP), which, unlike the
Transmission Control Protocol (TCP), has no reliability functionality. This means UDP does not perform
error correction or flow control. Because of the simplicity of UDP, data packet headers contain fewer bytes
and consume less network overhead than TCP.
18.4 Multicast Addressing
18.4.1 Multicast Address Structure
IP multicasting is the transmission of an IP data frame to a multicast group, identified by a single IP address.
Because the multicast group is identified by a single IP address, the IP multicast datagram contains a
specific combination of destination MAC address and destination IP address. The range of IP addresses
is divided into classes based on the high-order bits of a 32-bit IP address. IP multicast uses Class D IP
addresses. Unlike Class A, B, and C IP addresses, the last 28 bits of a Class D address are unstructured;
these 28 bits identify the multicast group ID. The multicast group ID is a single address typically written in
dotted decimal, in the range 224.0.0.0 through 239.255.255.255.
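The Class D test can be expressed directly in terms of the high-order bits: an address is Class D when its four leading bits are 1110, which corresponds to a first octet of 224 through 239. A minimal sketch in Python:

```python
import ipaddress

def is_class_d(addr: str) -> bool:
    """True when the four high-order bits of the address are 1110,
    i.e. the first octet falls in the range 224-239."""
    first_octet = int(ipaddress.IPv4Address(addr)) >> 24
    return (first_octet >> 4) == 0b1110

print(is_class_d("224.0.0.1"))        # all-hosts group: Class D
print(is_class_d("239.255.255.255"))  # top of the Class D range
print(is_class_d("192.168.1.1"))      # Class C, not multicast
```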
Multicast addresses may be dynamically or statically allocated. Dynamic multicast addressing provides
applications with a group address on demand. Because dynamic multicast addresses have a specific lifetime,
applications must request this type of address only for as long as it is needed. Statically allocated addresses
are reserved for specific protocols that require well-known addresses. The Internet Assigned Numbers
Authority (IANA) assigns these well-known addresses. These addresses are called permanent host groups
and are similar in concept to the well-known TCP and UDP port numbers. Table 18.1 lists some of the
statically assigned well-known Class D addresses.
TABLE 18.1: Well-Known Class D Addresses

Class D Address   Purpose
224.0.0.1         All hosts on a subnet
224.0.0.2         All routers on a subnet
224.0.0.4         All Distance Vector Multicast Routing Protocol (DVMRP) routers
224.0.0.5         All Open Shortest Path First (OSPF) routers
224.0.0.6         All OSPF designated routers
224.0.0.9         All Routing Information Protocol, version 2 (RIP-2) routers
224.0.0.13        All Protocol Independent Multicast (PIM) routers
18.4.2 Mapping IP Multicast Addresses to Ethernet
Ethernet frames have a 48-bit destination address field. To avoid invoking the Address Resolution Protocol
(ARP) to map multicast IP addresses to Ethernet addresses, the IANA designated a range of Ethernet
addresses for multicast. The lower 23 bits of the Class D address are mapped into a block of Ethernet
addresses that have been reserved for multicast. This block includes addresses in the range
00:00:5e:00:00:00 through 00:00:5e:ff:ff:ff. The IANA allocates half of this block for multicast addresses.
Given that the first byte of any Ethernet address must be 01 to specify a multicast address, the Ethernet
addresses corresponding to IP multicasting are in the range 01:00:5e:00:00:00 through 01:00:5e:7f:ff:ff.
The prefix 01-00-5e identifies the frame as multicast; the next bit is always 0, leaving only 23 bits for the
multicast address. Because IP multicast groups are 28 bits long, the mapping cannot be one-to-one. Only the
23 least-significant bits of the IP multicast group are placed in the frame. The remaining five high-order bits
are ignored, resulting in 32 different multicast groups being mapped to the same Ethernet address. This also
means that 32, or 2^5, addresses can be ambiguous when mapped. Therefore, each IP multicast MAC address
is capable of representing 32 IP multicast addresses.
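The 23-bit mapping can be sketched in a few lines. The group addresses below are illustrative; 224.1.1.1 and 225.1.1.1 differ only in the five discarded high-order bits, so they demonstrate the 32-to-1 ambiguity:

```python
import ipaddress

def multicast_mac(group: str) -> str:
    """Map a Class D IP address to its Ethernet multicast MAC:
    the fixed prefix 01:00:5e, a zero bit, then the low-order
    23 bits of the group address. The high-order 5 bits of the
    28-bit group ID are discarded, so 32 groups share each MAC."""
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(multicast_mac("224.1.1.1"))  # -> 01:00:5e:01:01:01
print(multicast_mac("225.1.1.1"))  # same MAC: differs only in discarded bits
```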
18.4.3 Managing Multicast Traffic
The multicast sending process specifies a destination address defined as a multicast address. The device
driver in the sending server converts this address to the corresponding Ethernet address and sends the packet
out on the network. The receiving devices, or clients, must indicate that they want to receive datagrams
destined for a given multicast address. This requires coordination from all devices participating in the
multicast. These devices include the server, the host, the router, and the switch.
IP multicast traffic for a particular source/destination group pair is transmitted from the source to the
multicast group via a distribution tree. This distribution tree connects all the hosts in the group. Before
multicast traffic can traverse the network, routers need to know which hosts, if any, on a specific physical
network belong to a given multicast group. Because the emerging campus network model comprises
both routers and switches, switches also need to know how to direct multicast traffic. Cisco switches do this
through the use of Cisco Group Management Protocol (CGMP).
18.4.4 Subscribing and Maintaining Groups
The Internet Group Management Protocol (IGMP) provides a means for hosts to report their multicast group
membership to neighboring multicast routers. It manages multicast traffic throughout the network through
the use of special multicast queriers and hosts. A querier is a network device, such as a router, that sends
IGMP queries. A set of queriers and hosts that receive multicast data streams from the same source is called
a multicast group. Queriers and hosts use IGMP messages to join and leave multicast groups.
Several versions of IGMP are available; IGMP version 1 (IGMPv1) and IGMP version 2 (IGMPv2) are
now in production, with version 3 in development. Each version has its own set of behavior characteristics.
18.4.4.1 IGMP Version 1
IGMP version 1 uses IP datagrams to transmit information about multicast groups. The datagram consists of
a 20-byte IP header and an 8-byte IGMP message. One multicast router per LAN must periodically transmit
Host Membership Query messages to determine which host groups have members on the querier's directly
attached networks. IGMP query messages are addressed to the all-host group (224.0.0.1) and have an IP
Time-To-Live (TTL) equal to one. This TTL ensures that the Query messages sourced from a router are
transmitted onto the directly attached network but are not forwarded by any other multicast routers. When
the end station receives an IGMP query message, the end station responds with a host membership report for
each group to which the end station belongs.
When a host wants to join a multicast group, the host sends a Host Membership Report to the group address.
This unsolicited request reduces join latency for the end system when no other members of that group are
present on that network segment. Multicast routers send Host Membership Query messages to discover
which host groups have members on their attached local networks. One member from each group on the
segment will respond with a report. No formal IGMP query router election process exists within IGMPv1
itself. Instead, the election process is left up to the multicast routing protocols, which use different
mechanisms. This often results in multiple queriers on a single network segment that supports multiple
multicast-enabled routers.
To ensure the viability of group membership on a given network segment, the router multicasts periodic
IGMPv1 membership queries to the all-hosts group address. Only one member per group responds with a
report to a query. This saves bandwidth on the network segment and processing by the hosts. No special
leave mechanism was defined in IGMPv1. Instead, IGMPv1 hosts leave a group passively or quietly at any
time without any notification to the router.
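The 8-byte IGMPv1 message described above (version and type in the first byte, an unused byte, a checksum, and the group address) can be sketched as follows. This is a minimal construction of a general Host Membership Query, assuming the standard Internet checksum; it builds the bytes only and does not send anything:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """Standard 16-bit one's-complement Internet checksum
    (data assumed to be an even number of bytes)."""
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def igmpv1_query() -> bytes:
    """Build the 8-byte IGMPv1 Host Membership Query: version 1 /
    type 1 packed into 0x11, an unused byte, the checksum, and a
    zeroed group address (a general query)."""
    msg = struct.pack("!BBH4s", 0x11, 0, 0, bytes(4))
    csum = inet_checksum(msg)
    return struct.pack("!BBH4s", 0x11, 0, csum, bytes(4))

pkt = igmpv1_query()
print(len(pkt))  # the 8-byte IGMP message (IP header not included)
```

On the wire this message would be carried in an IP datagram addressed to 224.0.0.1 with a TTL of 1, as the text describes; a receiver verifies it by recomputing the checksum over the whole message and getting zero.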
18.4.4.2 IGMP Version 2
The changes between IGMPv1 and IGMPv2 primarily address the issues of Leave and Join latency, as well
as ambiguities in the original protocol specification. IGMPv2 adds several enhancements over IGMPv1,
including the definition of a Group-Specific Query, which allows the router to transmit a query to one
particular group. IGMPv2 also defines a Leave Group message for hosts, which results in lower leave
latency.
The process of joining a multicast group is the same in IGMPv2 as it is in IGMPv1. When a host wants to
join a multicast group, the host sends a host membership report to the multicast group address. If the host
and server reside in different subnets, the join message must go to a router. When the router intercepts the
message, the router looks at its IGMP table. If the network number is not in the table, the router adds the
information contained in the IGMP message. Using queries and reports, a multicast router builds a table
detailing which of the router interfaces have one or more hosts in a multicast group. When the router
receives a multicast datagram, the router forwards the datagram to only those interfaces that have hosts with
processes belonging to that group. After a host has joined a multicast group, the host appears in the router's
group database.
A Leave Group message was also added in IGMPv2. Whenever any end station wants to leave a group, the
host transmits a Leave Group message to the all-routers group (224.0.0.2) with the group field indicating the
group being left. This action reduces the leave latency for the group on the segment when the member
leaving is the last member of the group.
18.4.5 Switching Multicast Traffic
In the multilayer campus model, IP multicast traffic traverses a Layer 2 switch, especially at the access layer.
Because IP multicast traffic maps to a corresponding Layer 2 multicast address, multicast traffic is delivered
to all ports of a Layer 2 switch.
Switches must be able to allow multicast traffic to be forwarded to a large number of attached group
members without unduly loading the switch fabric. This allows the switch to provide support for the
growing number of new multicast applications without impacting other traffic. Layer 2 switches also need
some degree of multicast awareness to avoid flooding multicasts to all switch ports. Multicast control in
Layer 2 switches can be accomplished by defining Virtual LANs (VLANs) to correspond to the boundaries
of the multicast group; Layer 2 switches can snoop IGMP queries and reports to learn the port mappings of
multicast group members, allowing the switch to dynamically track group membership; or a multicast
router-to-switch protocol such as the Cisco Group Management Protocol (CGMP) can be implemented.
CGMP is a Cisco proprietary protocol designed to enable Cisco Catalyst switches to learn about multicast
clients from Cisco routers and Layer 3 switches. It allows the router to work with the switch to configure the
multicast forwarding table to correspond with the current group membership and is based on a client-server
model, with the router assuming a CGMP server role while the switch assumes the client role.
CGMP works on the basis that the IP multicast router sees all IGMP packets and therefore can inform the
switch when specific hosts join or leave multicast groups. The switch then uses this information to construct
a forwarding table. When the router sees an IGMP control packet, the router creates a CGMP packet. This
CGMP packet contains the request type, the multicast group address, and the actual MAC address of the
client. The packet is sent to a well-known address to which all switches listen. Each switch then interprets
the packet and creates the proper entries in a forwarding table.
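The forwarding-table construction the switch performs can be sketched as a toy model. Everything here is invented for illustration (the class name, the MAC strings, and the port numbers); real CGMP also involves well-known destination addresses and message encodings not modeled here:

```python
from collections import defaultdict

class CgmpSwitch:
    """Toy model of a switch building its multicast forwarding table
    from CGMP join/leave messages sent by the router. Each message
    carries the group MAC and the client MAC; the switch already
    knows which port each client MAC sits on (normal bridging)."""
    def __init__(self, mac_to_port):
        self.mac_to_port = mac_to_port      # client MAC -> switch port
        self.groups = defaultdict(set)      # group MAC -> set of ports

    def cgmp_join(self, group_mac, client_mac):
        self.groups[group_mac].add(self.mac_to_port[client_mac])

    def cgmp_leave(self, group_mac, client_mac):
        self.groups[group_mac].discard(self.mac_to_port[client_mac])

sw = CgmpSwitch({"aa": 1, "bb": 2, "cc": 3})
sw.cgmp_join("01:00:5e:01:01:01", "aa")
sw.cgmp_join("01:00:5e:01:01:01", "cc")
sw.cgmp_leave("01:00:5e:01:01:01", "aa")
print(sw.groups["01:00:5e:01:01:01"])  # ports still subscribed to the group
```

After the join and leave messages above, multicast frames for that group are forwarded only to the remaining subscribed port instead of being flooded to every port.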
18.5 Routing Multicast Traffic
Campus networks typically have a large number of subnetworks, each being a separate broadcast domain.
Using the IP protocol, routers connect these subnetworks so that traffic can be routed from one broadcast
domain to the next. Each host on the Internet has an address that identifies the physical location of the host.
Part of the address identifies the subnet on which the host resides and part identifies the individual host on
that subnet. Routers periodically send routing update messages to adjacent routers, conveying the state of the
network as perceived by that particular router. This data is recorded in routing tables that are then used to
determine optimal transmission paths for forwarding messages across the network.
Unicast transmission involves transmission from a single source to a single destination. The transmission is
directed toward a single physical location that is specified by the host address. This routing procedure is
straightforward because of the binding of a single address to a single host. However, routing multicast traffic
is more complex. A multicast address identifies a particular transmission session rather than a specific
physical destination. An individual host is able to join an ongoing multicast session by using IGMP to
communicate this desire to the subnet router. Because the number of receivers for a multicast session can
potentially be quite large, the source does not need to know all the relevant addresses. Instead, the network
routers must somehow be able to translate multicast addresses into host addresses. The basic principle
involved in multicast routing is that routers interact with each other to exchange information about
neighboring routers. Multicast routing is based upon the construction of distribution trees that connect the
members of the various multicast groups.
18.5.1 Distribution Trees
For efficient transmission of multicast traffic, designated routers construct a distribution tree that connects
all members of an IP multicast group. A distribution tree specifies a unique forwarding path between the
subnet of the source and each subnet containing members of the multicast group. It has just enough
connectivity so that there is only one loop-free path between every pair of routers. Because each router
knows which of its lines belong to the tree, the router can copy an incoming multicast datagram onto all the
outgoing branches. This generates the minimum needed number of datagram copies. Because messages are
replicated only when the tree branches, the number of copies of the messages transmitted through the
network is minimized.
There are two tree construction techniques: source-specific trees and shared trees.
•
Source-specific distribution trees require finding a shortest path from the sender to each receiver,
resulting in multiple minimal-delay trees for a group. The source-specific method builds a spanning tree
for each potential source subnetwork, resulting in source-based delivery trees emanating from the
subnetworks directly connected to the source stations. Because many potential sources for a group exist,
a different delivery tree is constructed, rooted at each active source. These source-based trees are
constructed using a technique called Reverse Path Forwarding (RPF). If a packet arrives on a link that
the local router believes to be on the shortest path back toward the source of the packet, the router
forwards the packet on all interfaces except the incoming interface. If the packet does not arrive on the
interface that is on the shortest path back toward the source, the packet is discarded. This reduces packet
duplication.
•
Shared distribution trees make use of distribution centers and construct a single multicast tree,
resulting in a low-overhead method that sacrifices minimal end-to-end delay. Shared-tree algorithms
construct a single delivery tree shared by all members of a group. The shared-tree approach is similar to
the Spanning-Tree Algorithm, except that the shared tree allows the definition of a different shared tree
for each group.
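The Reverse Path Forwarding check used by source-specific trees can be sketched as follows. The interface names, the address prefix, and the `unicast_route` lookup are all hypothetical stand-ins for a real routing table:

```python
def rpf_forward(arrival_iface, source_addr, unicast_route, interfaces):
    """RPF check: forward a multicast packet out all interfaces except
    the arrival one, but only if it arrived on the interface that the
    unicast routing table says leads back toward the source."""
    if unicast_route(source_addr) != arrival_iface:
        return []  # fails the RPF check: discard, avoiding duplication
    return [i for i in interfaces if i != arrival_iface]

# Hypothetical router with three interfaces; the unicast table says the
# path back toward 10.1.1.0/24 sources is via eth0.
route = lambda src: "eth0" if src.startswith("10.1.1.") else "eth2"
print(rpf_forward("eth0", "10.1.1.5", route, ["eth0", "eth1", "eth2"]))
print(rpf_forward("eth1", "10.1.1.5", route, ["eth0", "eth1", "eth2"]))
```

The first call passes the check and replicates the packet to the other interfaces; the second arrives on the wrong interface and is discarded, which is exactly how RPF limits packet duplication.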
18.5.2 Multicast Routing Protocols
A multicast routing protocol is responsible for the construction of multicast delivery trees and is necessary to
permit the forwarding of multicast packets. Different IP multicast routing protocols use different techniques
to construct multicast Spanning Trees and forward packets. IP multicast routing protocols follow one of two
basic methods.
18.5.2.1 Dense Mode Routing Protocols
The first method for multicast routing is based on the assumption that the multicast group members are
densely distributed throughout the network and bandwidth is plentiful. These dense mode multicast routing
protocols rely on periodic flooding of the network with multicast traffic to set up and maintain the
distribution tree. Dense mode routing protocols include the Distance Vector Multicast Routing Protocol
(DVMRP); the Multicast Open Shortest Path First (MOSPF); and the Protocol Independent Multicast Dense
Mode (PIMDM). Dense mode routing protocol operations assume that almost all routers in the network will
need to distribute multicast traffic for each multicast group. The dense mode protocols are most appropriate
in environments with densely clustered receivers and the available bandwidth to tolerate flooding.
•
The Distance Vector Multicast Routing Protocol (DVMRP) is widely used on the Internet multicast
backbone (MBONE) and uses a process called reverse path flooding. When a router receives a packet, it
floods the packet out all paths except the one that leads back to the packet source. This allows a data
stream to reach all LANs. If a router is attached to a set of LANs that do not want to receive a particular
multicast group, the router can send a prune message back up the distribution tree to stop subsequent
packets along that path. DVMRP periodically floods packets in order to reach any new hosts that want to
receive a particular group and implements its own unicast routing protocol in order to determine which
interface leads back to the source of the data stream. This unicast routing protocol is similar to RIP and
is based purely on hop counts. As a result, the path that the multicast traffic follows may not be the same
as the path that the unicast traffic follows.
•
The Multicast Open Shortest Path First (MOSPF) is a link-state multicast routing protocol that
should be used in a single domain or organization. It is dependent on the use of Open Shortest Path First
(OSPF) as the accompanying unicast routing protocol. In an OSPF/MOSPF network, each router
maintains an up-to-date image of the topology of the entire network. However, MOSPF is not supported
on Cisco routers.
•
The Protocol Independent Multicast Dense Mode (PIMDM) is similar to DVMRP. This protocol is
best suited when there are dense distributions of members of multicast groups. PIM uses flooding as a
mechanism to reach all routers in the network and then prunes those routers that do not support members
of that particular multicast group.
18.5.2.2 Sparse Mode Routing Protocols
The other method of multicast routing is based on a sparse distribution of multicast group members. Because
the multicast group members are located sparsely throughout the network, flooding the network would be a
waste of bandwidth. Therefore, a more efficient method is required to accomplish multicast routing. Sparse
mode multicast routing protocols use the assumption that explicit requests are used to join a multicast
distribution. Sparse mode protocols are widely used in WAN environments. Sparse mode routing protocols
include the Core-Based Tree (CBT) Protocol, and the Protocol Independent Multicast Sparse Mode
(PIMSM).
•
The Core-Based Tree (CBT) Protocol constructs a single distribution tree that is shared by all members
of the group. Multicast traffic for the entire group is sent and received over the same tree, regardless of
the source. The use of a shared distribution tree can lighten the load on individual routers relative to the
amount of multicast routing information stored. CBT has a core router that is used to construct the tree.
When routers are ready to join the tree, they send a join message to the core router. When the core router
sends a reply, it travels the reverse path, thereby forming a branch of the tree. Because the CBT join
request has a TTL of 1, CBT routers in the network forward the message hop by hop until the core is
reached or until a CBT router that is already on the shared tree is reached.
•
The Protocol Independent Multicast Sparse Mode (PIMSM) is used in those environments where the
number of receivers is relatively small. PIMSM can also be used when multicast traffic is sporadic.
Because the number of receivers is relatively small, PIMSM institutes a type of proxy, called the
rendezvous point (RP). Instead of flooding, the host receiver or sender must register with the RP. PIM is,
however, a flexible protocol in that some multicast groups can be dense mode and can coexist with other
groups that are sparse mode.
18.6 Configuring IP Multicast
Two tasks must be performed to enable multicast: first enable IP multicast routing, and then enable PIM
on an interface. In addition, several optional tasks can be configured, such as configuring a rendezvous
point, configuring the Time-To-Live (TTL) threshold, debugging IP multicast, configuring the Internet
Group Management Protocol (IGMP), and enabling the Cisco Group Management Protocol (CGMP).
18.6.1 Enabling IP Multicast Routing
By default, IP multicast routing is disabled. Enabling it allows the Cisco IOS software to forward multicast
packets. You must enable IP multicast routing in global configuration mode to turn it on
for the entire router. Then, using interface commands, you can turn on various modes of multicast routing
using specific interfaces. You can use the following command in global configuration mode to enable IP
multicast routing on the router:
Router(config)#ip multicast-routing
You can use the no ip multicast-routing command to disable IP multicast routing.
18.6.2 Enabling PIM on an Interface
Although multicast routing is enabled globally, PIM itself is configured on an individual interface basis.
Each interface that will participate in a specific multicast routing protocol must be enabled individually.
The command to enable PIM on an interface is
DallasR1(config-if)#ip pim { dense-mode | sparse-mode | sparse-dense-mode }
PIM can be implemented in three modes: sparse mode, dense mode, or a combination called sparse-dense
mode, as specified by the ip pim options above.
18.6.2.1 Enabling PIM in Dense Mode
Because dense mode uses periodic flooding, dense mode should be used when bandwidth is plentiful.
Pruning is used in this environment to avoid unnecessary multicast packets being flooded to a router with no
directly connected neighbors.
18.6.2.2 Enabling PIM in Sparse Mode
Sparse mode should be configured when there are few multicast hosts. Because group members are sparse, a
rendezvous point is needed in this scheme. Sparse mode protocols are more appropriate for large
internetworks, where dense mode protocols would waste bandwidth by flooding packets to all parts of the
network and then pruning back all unwanted connections. Rendezvous points (RPs) are configured and act as
a sort of proxy for multicast hosts: multicast senders use the RP to announce their presence, and multicast
receivers use the RP to learn about new senders.
18.6.2.3 Enabling PIM in Sparse-Dense Mode
Use the following commands on all PIM routers inside the PIM domain, beginning in global configuration
mode to configure PIM sparse-dense mode:
Router(config)#ip multicast-routing
Router(config)#interface interface_type interface_number
Router(config-if)#ip pim sparse-dense-mode
You can verify the PIM configuration by using the show ip pim interface command.
18.6.2.4 Selecting a Designated Router
In a normal functioning multicast network, PIM queries are sent periodically to discover other routers in the
network running PIM. For multi-access networks such as Ethernet, PIM queries are sent to the well-known
multicast address of all routers (224.0.0.2). In a multi-access network, a designated router is elected. The
election process uses the highest IP address received in PIM query messages from a network device's
neighbors. If no PIM queries are received after a given time period, the election process for Designated
Router runs again. When running PIM in sparse mode, the designated router is responsible for sending
multicast join messages to the RP on behalf of host computers on the network. No designated router exists
when running PIM in dense mode.
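The highest-IP election rule described above can be sketched in a couple of lines. The neighbor addresses are invented for illustration:

```python
import ipaddress

def elect_dr(neighbor_ips):
    """PIM designated-router election on a multi-access segment:
    the neighbor with the numerically highest IP address wins."""
    return max(neighbor_ips, key=lambda ip: int(ipaddress.IPv4Address(ip)))

# Comparing as integers avoids the trap of string comparison,
# under which "10.1.1.7" would sort above "10.1.1.254".
print(elect_dr(["10.1.1.1", "10.1.1.254", "10.1.1.7"]))  # -> 10.1.1.254
```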
18.6.3 Configuring a Rendezvous Point
If you use PIM in sparse mode, you must configure a Rendezvous Point (RP). The routers learn that they are
RPs automatically. Multi-RP environments can be configured for any given multicast group. Routers that are
either directly connected to a multicast group member or to a sender are called leaf routers. The RP address
is used by first-hop routers to send PIM register messages on behalf of a host sending a packet to the group.
The RP address is also used by last-hop routers to send PIM join/prune messages to the RP to inform about
group membership. The RP address is configured only on the first-hop and last-hop routers. A PIM router
can be configured as an RP for more than one group. A group can also have more than one RP configured.
An access list is used to determine the groups for which the router is an RP. Although a group can have
more than one RP, only one RP address is used per group at any given time. You can configure multiple
redundant RPs, but only one is used. Use the following command in global configuration mode to configure
the address of the RP:
DallasR1(config)#ip pim rp-address ip_address [ group_access_list_number ] [ override ]
To disable the RP address, use the following command:
DallasR1(config)#no ip pim rp-address ip_address [ group_access_list_number ] [ override ]
18.6.4 Configuring Time-To-Live
Time-To-Live (TTL) works in a multicast environment much as it does in other routing environments: any
packet that arrives with a TTL higher than the configured threshold is forwarded, and its TTL value is
decreased by one. The TTL threshold is expressed as a number that signifies the number of router hops. The
default threshold value is 0; a threshold of zero means that every packet is forwarded. Configuring the TTL
limit is done on a
per-interface basis. Use the following command in interface mode to configure a value other than the default
value:
DallasR1(config-if)#ip multicast ttl-threshold ttl_value
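The threshold comparison an interface performs can be sketched with a hypothetical helper; the function name and values are illustrative only:

```python
def forward_multicast(packet_ttl: int, ttl_threshold: int = 0) -> bool:
    """An interface forwards a multicast packet only when the packet's
    TTL is greater than the interface's ttl-threshold. The default
    threshold of 0 therefore lets every live packet through."""
    return packet_ttl > ttl_threshold

print(forward_multicast(5, 0))   # default threshold: forwarded
print(forward_multicast(5, 15))  # scoped boundary: dropped
```

Raising the threshold on a border interface this way keeps low-TTL multicast traffic scoped inside a region of the network.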
18.6.5 Debugging Multicast
Many commands can be used when debugging multicast. The more useful commands are:
•
show ip pim neighbor, which displays the PIM neighbor table
•
show ip mroute, which displays the entries in the multicast routing table.
18.6.6 Configuring Internet Group Management Protocol (IGMP)
Multicast routers use IGMP to keep track of multicast hosts on a network. Although there are two versions
of IGMP, IGMPv1 and IGMPv2, IGMPv2 is the default in Cisco routers running IOS Release 11.3(2)T and
later. Use the following command in the relevant interface configuration mode to configure the multicast
router to join a particular multicast group:
DallasR1(config-if)#ip igmp join-group multicast_group_address
18.6.7 Configuring Cisco Group Management Protocol (CGMP)
Cisco Group Management Protocol (CGMP) runs on Catalyst switches and Cisco routers. It is used in
conjunction with IGMP to determine forwarding information. CGMP messages are sent to the well-known
multicast MAC address 01-00-0c-dd-dd-dd. Catalyst switches discover CGMP routers through CGMP hello
messages. CGMP is capable of operating correctly only when it is working in conjunction with a router: the
router must be able to detect IGMP packets and communicate with the CGMP-enabled switch, and the
switch in turn receives the CGMP packets created by the router.
Use the ip cgmp command to configure CGMP for a particular interface.
You can use the no ip cgmp command to disable CGMP. Disabling CGMP triggers a CGMP Leave
message similar to the CGMP Join message triggered when enabling CGMP.
Use the set cgmp enable command in enable mode to enable CGMP on a Catalyst switch.
The set cgmp disable command turns CGMP off.
The show config command displays whether CGMP is enabled or disabled.
The set cgmp leave command can be used to remove a multicast group from the forwarding tables; this is
called CGMP leave processing.
19 Quality of Service
19.1 Understanding the Need for Quality of Service
Quality of Service (QoS) addresses the following situations:
•
Delay: When packets are transmitted over the network from one host to another, a delay can occur, and
packets can arrive later than expected. The entire delay from the point that a user presses a key to the time
that the character is echoed and shown in a terminal session is known as latency.
•
Jitter takes place when packets arrive outside established parameters for a delay: earlier or later. Queue
disposition can have an impact on delay. Packets at the front of the queue could be prone to delay. The
packets to the back are behind an anonymous number of packets. This creates variability. Cisco offers
various queuing options that enable the most applicable packets to be opted for on an individual basis.
•
Loss: Packets that moves across an error prone network could be dropped. When this happens in a
connection oriented network, the lost packet can simply be requested and submitted again. In a
connectionless network the packet remains lost.
19.2 QoS Types
19.2.1 Best Efforts Delivery
In a best-effort network, routers and switches make their best effort to deliver packets as quickly as
possible, with no consideration of traffic type or priority. Packets are expected, but not guaranteed, to be
delivered. When QoS operates in a network, delivery is handled according to traffic type and priority
service.
19.2.2 Integrated Services Model
A mechanism for dealing with end-to-end QoS is the Integrated Services (IntServ) model. This involves
establishing an end-to-end connection over an internetwork of RSVP-enabled routers. An IP-based
signaling protocol known as the Resource Reservation Protocol (RSVP) is used: RSVP routers request and
reserve bandwidth over the internetwork and return it once the connection is terminated. QoS is applied on
a per-flow basis.
19.2.3 Differentiated Services Model
Another mechanism is the Differentiated Services (DiffServ) model. With DiffServ there is no need for
advance reservations. Every router and switch in the network manages packets independently. The devices
are configured with QoS policies and forwarding decisions are made accordingly. QoS is applied on a
per-hop basis, with QoS decisions based on data held in each packet header. The rest of the chapter
focuses on DiffServ because Cisco uses this mechanism with Ethernet.
19.3 Differentiated Services QoS
The DiffServ architecture is the basis for QoS performance in Catalyst switches. Packets are classified as
they enter the network. Each subsequent router and switch, in a per-hop behavior, uses the specific
parameters within the Type of Service and DiffServ fields of the IP header to match the forwarding method
to these parameters. Routers that do not have DiffServ enabled make forwarding decisions based on
default queues. Every DiffServ-enabled router has a locally configured queuing priority that is used when
forwarding classified packets.
At Layer 2 there is no native means of indicating the significance of a frame's contents, so a Layer 2 switch
can only forward frames as best-effort delivery. However, there are mechanisms available that allow
Layer 2 priorities to be mapped to Layer 3.
The original IPv4 Type of Service (ToS) field has 3 bits of Precedence, providing eight levels of
precedence that are configurable on a group basis. A further 4 independently configurable bits can be used
to request one of the following four types of service:
• Minimize delay
• Maximize throughput
• Maximize reliability
• Minimize monetary cost
The final bit is unused and must be set to 0 (zero).
At Layer 3, classification is indicated by setting the bits of the ToS field to other values. The DiffServ
model reuses the existing IP ToS byte, now known as the Differentiated Services (DS) field, and redefines
its contents. The upper 6 bits form the Differentiated Services Code Point (DSCP) value, which is checked
by every DiffServ-enabled device. A 2-bit currently unused (CU) field is retained.
Traffic types are defined in Table 19.1.
TABLE 19.1: Differentiated Services Types of Traffic
Network Control: Essential in order to sustain and support the network infrastructure
Best Effort: Normal LAN priority
Excellent Effort: Best-effort delivery for key users
Controlled Load: Key applications
Background: Games, bulk transfers, etc.
Video: Less than 100 milliseconds delay
Voice: Less than 10 milliseconds delay
19.3.1 IEEE 802.1p
IEEE 802.1p provides QoS at the MAC level. It identifies 3 bits in the 802.1Q header that are assigned to
the Class of Service (CoS). IEEE 802.1p also specifies methods for expediting traffic classes and enables
dynamic multicast filtering. It establishes eight priority levels that closely parallel the levels provided by the
3 bits of IP Precedence. Layer 3 switches can map the 802.1p priority to the DiffServ field inside the IP
header, while Layer 2 switches can prioritize output buffer data according to the priority levels. This
ensures end-to-end QoS.
19.3.2 Using the QoS Model
The classification process is the first step in determining the manner in which switches and routers
prioritize traffic. Packets need to be classified by means of an indicator or marking that shows that they
should be handled differently. The next step is traffic policing. This is the process by which a switch or
router decides whether a packet conforms to the preconfigured profiles; bandwidth limits are set for
conforming traffic and non-conforming traffic is dropped. The third step is to mark the packet. Data can
be marked in the IEEE 802.1p header at Layer 2; at Layer 3 data can be marked inside the IP header.
When the switch is operating as a Layer 3 switch, a packet can be forwarded with QoS, with the traffic
type mapped to the DiffServ number. Once the packet has been through the three steps outlined above, it is
allocated to the applicable queue prior to exiting the switch. The process can be automated when a switch
receives a packet inside an 802.3 frame with a particular IEEE 802.1p priority; otherwise, the mapping
must be configured manually. Next, a queuing process is established and traffic is placed into various
queues with reference to the policies. The packet is then forwarded out of the shared output buffer on the
media to the next hop.
19.3.3 Prioritizing the Traffic Classes
Traffic marking is usually done using the class-map and policy-map commands in Cisco IOS.
Maps start with a match command that explicitly identifies some traffic type at the packet, frame, or
application layer. Access lists are used during this identification process. Class maps facilitate the
matching of an IP address, a protocol, or an incoming interface. After traffic has been matched, the
policy-map is used to set the Differentiated Services Code Point (DSCP).
19.3.4 Queuing Methods
Cisco Layer 2 and Layer 3 switches offer numerous queuing mechanisms, because different network
administrators need different prioritizations for networks that run a wide diversity of applications. Some of
these methods are listed below:
• First In, First Out Queuing (FIFO): This is the default method. It sends packets and frames in the order
in which their initial bits arrive at the input interface.
• Weighted Fair Queuing (WFQ): This method uses the conversation index linked with each packet to
place data into various queues. A conversation index is a number assigned within the switch or router to
mark the packets of the various applications.
• Custom Queuing: Enables an administrator to establish a maximum of 16 queues with configurable
sizes and forwarding thresholds. Data is placed into queues according to access lists. Queues are serviced
on a round-robin basis.
• Weighted Round Robin Queuing: A simpler form of Custom Queuing in which a set number of
queues are serviced in round-robin fashion. Only the size of each queue is configurable.
• Priority Queuing: This method enables an administrator to establish queues and to configure the size of
each queue. Data is placed into queues according to access lists. Packets in the highest priority queue are
always sent first, while packets in the lower priority queues are only sent when the higher queues are
cleared.
19.3.4.1 Auto-QoS
Auto-QoS eases the deployment of QoS features: a switch or router automatically determines whether a
port connection has any particular QoS requirement. The switch can then prioritize various traffic flows
and use multiple output queues instead of the default QoS behavior of best-effort delivery from a single
queue. Traffic is automatically classified and placed in the suitable output queue. With auto-QoS, the
switch can identify ports that have IP Phones connected to them and assign sufficient buffer space to give
the Voice over IP (VoIP) calls the proper QoS. The feature also applies to uplinks that carry the VoIP calls
to the next switch. This process is known as trust and is configured across a QoS domain. Trust permits
ports that carry VoIP traffic, but that do not have IP Phones directly connected to them, to give packets
carrying this traffic the identical QoS as though the phone were directly connected. Packets are marked
only at the entrance to the domain and are trusted from that point onwards, which eliminates the need to
mark at each switch or router.
19.4 Configuring QoS
QoS trust can be configured in the following two manners:
• Per interface; or
• As part of a QoS policy on specific traffic types.
Per-interface trust is illustrated below. Policy trust is illustrated in Section 19.4.2.
19.4.1 Per-interface QoS Trust
In this instance, one of the values listed below is trusted and used when a switch makes forwarding
decisions:
• Inbound CoS from the trunking tags
• DSCP from the inbound IP packet headers
• IP Precedence from the inbound IP packet headers
Use the following command on each interface when QoS information is trusted:
Switch(config-if)# mls qos trust {cos | dscp | ip-precedence}
Use the following command on each interface when QoS information is NOT trusted (default setting):
Switch(config-if)# no mls qos trust
Inbound QoS information (trusted or untrusted) must be mapped into internal DSCP values. With Class of
Service (CoS), every one of the eight CoS values is mapped into an internal DSCP value.
Use the following global configuration command to amend the default mapping, where each of the dscp
values is a value from 0 to 63:
Switch(config)# mls qos map cos-dscp dscp1 ... dscp8
Every one of the eight IP Precedence values is mapped into an internal DSCP value.
Use the following global configuration command to amend the default mapping, where each of the dscp
values is a value from 0 to 63:
Switch(config)# mls qos map ip-prec-dscp dscp1 ... dscp8
Inbound DSCP values can be mapped into different internal DSCP values by means of a DSCP mutation
map. The default configuration is no DSCP mutation.
To define a DSCP mutation map, start by creating a named map that contains a maximum of eight entries.
Each of the dscp values is a value from 0 to 63. This is done by repeating the following global configuration
command:
Switch(config)# mls qos map dscp-mutation dscp-mutation-name in-dscp to out-dscp
Next, use the following interface configuration command to apply the mutation map to a particular ingress
interface:
Switch(config-if)# mls qos dscp-mutation dscp-mutation-name
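As an illustration, the following hypothetical mutation map named mutate1 rewrites inbound DSCP 1 to 10 and DSCP 2 to 20 before the map is applied to an ingress interface (the map name, DSCP values, and interface are assumptions):
Switch(config)# mls qos map dscp-mutation mutate1 1 to 10
Switch(config)# mls qos map dscp-mutation mutate1 2 to 20
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# mls qos dscp-mutation mutate1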
19.4.2 Defining a QoS Policy
Start by defining the QoS class by using the following global configuration command:
Switch(config)# class-map class-name [match-all | match-any]
Several conditions can be configured into the class map to match different traffic types. Use the match-all
keyword when the class should match against all the conditions; this is the default setting. The match-any
keyword enables any one of these conditions to activate a match. Packets can be classified with access lists
or Network-Based Application Recognition (NBAR). NBAR matches against more intricate fields, and the
NBAR feature is updated at intervals to support the recognition of recently developed applications.
Next, exit the class map configuration mode to define the IP access list with the access-list
access-list-number or the ip access-list extended command.
Use the following class map configuration command to tie a traffic flow to NBAR:
Switch(config-cmap)# match protocol protocol-name
When NBAR is enabled on an interface, all packets are inspected: the switch CPU has to process all
traffic moving in and out of the interface. This is not as efficient as CEF switching, and performance
through that interface could be negatively impacted.
Class maps must be specified to enable traffic to be classified for the policy. Use the following global
configuration command to specify the QoS policy:
Switch(config)# policy-map policy-name
Next, use the following policy map configuration command to specify each class map that will be used:
Switch(config-pmap)# class class-name
Once the class maps are set up to classify traffic, the DSCP value or the IP Precedence value must be
marked by using one of the following policy map configuration commands:
Switch(config-pmap)# set ip dscp dscp-value
OR
Switch(config-pmap)# set ip precedence ip-precedence-value
The DSCP value is a number from 0 to 63, while the IP Precedence value is a number from 0 to 7.
To enable policy trust, use the following policy map configuration command:
Switch(config-pmap)# trust {cos | dscp | ip-precedence}
A QoS policy map can be applied to a physical interface on a switch once traffic is classified and the
policy is defined. An interface can contain only one active policy in each direction, so at most one inbound
traffic policy and one outbound traffic policy can be applied to an interface.
The following interface configuration command is employed to start using a policy:
Switch(config-if)# service-policy [input | output] policy-name
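Tying the steps above together, the following sketch classifies Telnet traffic with a hypothetical access list 101, marks it with DSCP 26, and applies the policy to inbound traffic on one port (the map names, list number, and interface are assumptions):
Switch(config)# access-list 101 permit tcp any any eq telnet
Switch(config)# class-map match-all TELNET-CLASS
Switch(config-cmap)# match access-group 101
Switch(config-cmap)# exit
Switch(config)# policy-map MARK-TELNET
Switch(config-pmap)# class TELNET-CLASS
Switch(config-pmap)# set ip dscp 26
Switch(config-pmap)# exit
Switch(config)# interface FastEthernet0/1
Switch(config-if)# service-policy input MARK-TELNET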
19.4.3 Configuring and Tuning Egress Scheduling
Packet scheduling pertains to the manner in which a switch places a packet into an egress queue and the
manner in which queues are serviced. Each queue is serviced according to its weight relative to the other
queues. Catalyst switches support the Weighted Round Robin scheduling algorithm, which checks the
weighting values to determine the ratio of packets to send from one queue as opposed to another.
Interfaces with two standard queues are allocated default weights of 4 and 255; the second queue sends
almost 64 times the quantity of data for each unit of data from the first queue. The number of weight
values that can be set depends on the number of standard egress queues that an interface has. The weight
values range from 1 to 255.
Use the following interface configuration command to alter a queue’s weights:
Switch(config-if)# wrr-queue bandwidth weight1 weight2 [weight3] [weight4]
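For example, on an interface with four egress queues, the following command (the weight values are illustrative) would cause queue 4 to be serviced four times as often as queue 1:
Switch(config-if)# wrr-queue bandwidth 10 20 30 40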
19.4.4 Congestion Prevention
Congestion prevention or avoidance is configured with Weighted Round Robin configuration commands.
Internal DSCP values are mapped to CoS values, which are in turn used for egress scheduling and
queuing.
Use the following global configuration command to modify the default DSCP to CoS mappings:
Switch(config)# mls qos map dscp-cos dscp-list to cos-value
The dscp-list can take the form of an individual DSCP value ranging from 0 to 63, a hyphenated range of
values, or multiple values and ranges separated by commas.
WRR places packets in egress queues based on the mapping between a CoS value and a queue number.
Use the following interface configuration command to specify the map that links CoS values to particular
egress queue drop thresholds:
Switch(config-if)# wrr-queue cos-map queue-id threshold-id cos-list
Packets with a CoS value named in the cos-list will be placed in the specified queue with the threshold ID
applied. By default the CoS values are split in pairs: CoS 0 and 1 fall in queue 1 threshold 1; CoS 2 and 3
fall in queue 1 threshold 2; CoS 4 falls in queue 2 threshold 1; and CoS 6 and 7 fall in queue 2 threshold 2.
CoS 5 is always put in the strict-priority queue when that queue is available.
Every switch interface has WRED enabled as a default configuration. Use the following interface
configuration command to re-enable the option when it has previously been disabled:
Switch(config-if)# wrr-queue random-detect queue-id
WRED maintains two thresholds for each queue on the majority of interface types: a minimum threshold
and a maximum threshold. WRED does not drop any packets when the queue level is below the minimum
threshold. WRED drops all new packets when the queue level is over the maximum threshold. When the
queue level is between the minimum and maximum thresholds, WRED drops packets at a rate proportional
to the level of the queue.
Use the following interface configuration command to define the WRED thresholds:
Switch(config-if)# wrr-queue random-detect {max-threshold |
min-threshold} queue-id threshold-percent-1 ... threshold-percent-N
The lowest priority queue, queue 1, has a minimum threshold of 0 and a maximum threshold of 40 percent,
so the low-priority queue is constantly vulnerable to random drops. Queue 2, a higher priority queue, has a
minimum threshold of 0 and a maximum threshold of 100 percent; this queue's level must reach 100
percent before all packets are dropped.
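As a combined sketch (all values are assumptions), the commands below map DSCP 26 to CoS 3, place CoS 3 in queue 2 with threshold 1, enable WRED on queue 2, and set its two minimum thresholds to 40 and 70 percent:
Switch(config)# mls qos map dscp-cos 26 to 3
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# wrr-queue cos-map 2 1 3
Switch(config-if)# wrr-queue random-detect 2
Switch(config-if)# wrr-queue random-detect min-threshold 2 40 70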
20. IP Telephony
20.1 Inline Power
Cisco IP Phones have been added to the switched campus network and are managed by Cisco CallManager
servers. A Cisco IP Phone’s power can be sourced from inline power over the network data cable or an
external AC adapter.
The external AC adapter plugs into an AC wall outlet and supplies 48V DC to the IP Phone. The downside
is that the IP Phone fails when a power failure occurs at the wall outlet.
Greater efficiency is obtained when inline power is used: the 48V DC is supplied to an IP Phone over the
same Category 5 cable that provides Ethernet connectivity, with the Catalyst switch acting as the DC
power source. An AC adapter can be used as a redundant power source; no additional power sources are
necessary. The Catalyst switch can be connected to an uninterruptible power supply (UPS) to ensure that it
continues to receive and supply power when the ordinary AC source fails, so an IP Phone can still be used
during a power failure. Inline power has the advantage that it is supplied only to an IP Phone; power is not
supplied to a standard PC using the same switch port.
20.1.1 Inline Power Configuration and Verification
A switch has to identify an inline-power-capable device before it can supply it with power. To do this, the
switch port transmits a 340-kHz test tone on the twisted-pair Ethernet cable when it initially powers up.
An IP Phone loops the test tone back over the receive pairs of its Ethernet connection. When the phone is
connected to an inline power switch port, the switch can detect that its test tone has been looped back; it
then presumes that a powered device exists and offers power to it. Inline power is offered on the Catalyst
3550-24-PWR, Catalyst 4500, and Catalyst 6500. Inline power is supplied at 48V DC across pairs 2 and 3.
The switch power supply has to be sized correctly in order to supply constant power to an IP Phone on
each switch port.
By default, each switch port tries to detect an inline powered device.
Use the following interface configuration command to change or restore this behavior:
Switch(config-if)# power inline {auto | never}
Use the following EXEC command to verify the inline power status for a switch port:
Switch# show power inline [ type mod/num]
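For example, inline power detection could be disabled on a port known to connect only a PC, and the result verified (the interface name is illustrative):
Switch(config)# interface FastEthernet0/2
Switch(config-if)# power inline never
Switch(config-if)# end
Switch# show power inline FastEthernet0/2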
20.2 Voice VLANs
An IP Phone can manage certain aspects of the manner in which packets, both user data and voice, are
presented to a switch. Cisco IP Phones contain a three-port switch: one port connects to the upstream
switch, another connects to the user's PC, and the third connects to the internal VoIP data stream. The user
PC and voice ports operate as access-mode switch ports, while the uplink port to the Catalyst switch
operates on a single VLAN or as an 802.1Q trunk port. As a trunk, voice traffic is separated from user
data, which enables QoS capabilities and enhanced security. When operating as an access link, voice and
user data are combined on the single VLAN; this method could have a negative impact on voice quality.
The switch can be configured to use either mode.
Voice packets and the QoS data they contain have to be transmitted across a single voice VLAN or over the
native VLAN.
20.2.1 Voice VLANs Configuration and Verification
The voice VLAN mode is configured on the switch port to which the IP Phone uplink connects. The switch
directs the IP Phone to use the configured mode.
Use the following interface configuration command to specify the voice VLAN mode:
Switch(config-if)# switchport voice vlan { vlan-id | dot1p | untagged |
none}
The trunk used between an IP Phone and a Catalyst switch port is established dynamically. It carries only
two VLANs: a voice VLAN and the untagged native VLAN. An active trunk of this type is not displayed
in trunking mode by a Cisco IOS show command. The special trunk is negotiated through DTP and CDP,
irrespective of the trunking mode. STP PortFast is automatically enabled, and two instances of STP run
over the trunk.
When a trunk is not utilized, the default for every switch port is none. Modes other than none make use of
the special-case 802.1Q trunk. The method of encapsulating voice traffic is the only variation between the
dot1p and the untagged modes. The dot1p mode places the voice packets on VLAN 0; this mode needs a
VLAN ID, but not a unique voice VLAN. The untagged mode places the voice packets in the native
VLAN; this mode does not need a VLAN ID or a unique voice VLAN.
Because a connection between a switch port and an IP Phone is not displayed in trunking mode by a
Cisco IOS show command, verifying the connection can be a bit complex.
Use the following EXEC command to verify whether the IP Phone has announced itself to the switch:
Switch# show cdp neighbors type mod/num detail
Next, use the following EXEC command to verify the access VLAN and voice VLAN (if relevant) used on
the switch port:
Switch# show interface type mod/num switchport
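Putting these commands together, a port might carry user data on a hypothetical access VLAN 10 and voice on a hypothetical voice VLAN 110 (the VLAN IDs and interface name are assumptions):
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport access vlan 10
Switch(config-if)# switchport voice vlan 110
Switch(config-if)# end
Switch# show interface FastEthernet0/1 switchport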
20.3 Voice QoS
20.3.1 QoS Trust
It is essential to apply the correct QoS level when moving voice traffic over a switched campus network. A
trust boundary is defined for QoS in a network. Since each network device within this boundary has
comparable QoS policies configured, QoS information within the boundary is trusted. QoS information
arriving from outside the boundary can either be overwritten for a particular circumstance or overwritten
completely. When an IP Phone is set up as a component of the network, the QoS information transmitted
by the phone can be trusted.
The IP Phone has the following two data sources:
• User PC data switch port: Packets from the user PC data switch port are produced elsewhere.
Therefore, QoS information contained in these packets should not simply be trusted to be accurate or
truthful.
• VoIP packets native to the phone: The IP Phone produces these packets itself. Therefore, it can
accurately manage the QoS information that is integrated into these voice packets.
A switch instructs a connected IP Phone on the manner in which it has to offer QoS trust to its user data
switch port. This is done by using CDP messages.
20.3.1.1 QoS Trust Configuration and Verification
Use the following interface configuration command to configure the trust extension:
Switch(config-if)# switchport priority extend {cos value | trust}
QoS information contained in the packets from the user PC data switch port is not usually trusted, because
the PC's applications can attempt to spoof CoS or DSCP settings in order to obtain premium network
service. By using the cos keyword, the CoS bits are overwritten to value by the IP Phone when packets are
sent to the switch. The CoS values from the PC are overwritten to 0 when they cannot be trusted.
However, the PC can be running certain trusted applications that are authorized to request particular QoS
levels. Use the trust keyword to enable the IP Phone to pass total QoS trust to the PC; the CoS values
remain unchanged and are forwarded via the phone.
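Combining per-interface trust with trust extension, a phone-connected port could trust the phone's CoS while rewriting the PC's CoS to 0 (the interface name is an assumption):
Switch(config)# interface FastEthernet0/1
Switch(config-if)# mls qos trust cos
Switch(config-if)# switchport priority extend cos 0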
Use the following EXEC command to verify the manner in which QoS trust was extended to the IP Phone:
Switch# show mls qos interface type mod/num
Use the following EXEC command to verify the manner in which the IP Phone has been set up to deal with
incoming QoS information from a connected PC or mechanism:
Switch# show interface type mod/num switchport
Use the following EXEC command to verify the switch port queuing methods:
Switch# show interface type mod/num capabilities
Use one of the following EXEC commands to verify the manner in which the CoS values place packets
into egress port queues:
Switch# show mls qos interface type mod/num queueing
OR
Switch# show queueing interface type mod/num
The first EXEC command is utilized by the Catalyst 3550 switches. The second EXEC command is utilized
by the Catalyst 6500 switches.
20.3.2 Voice Packet Classification
Cisco IP Phones make use of the following Skinny voice call control protocols:
• Skinny Client Control Protocol (SCCP): TCP port 2000
• Skinny Station Protocol (SSP): TCP port 2001
• Skinny Gateway Protocol (SGP): TCP port 2002
Real-time Transport Protocol (RTP) conveys all voice-bearer traffic by utilizing UDP ports negotiated by
call control protocols.
Switches that have to classify the voice call control traffic used by IP Phones should match against the
static TCP ports 2000 through 2002. Matching can be achieved by using an IP access list and the match
access-group command.
A switch needs to single out RTP packets that are on the negotiated UDP port numbers in order to classify
the voice-bearer traffic. This can be done by using the following NBAR command: match protocol rtp
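As a sketch, the call control ports and the RTP bearer traffic could be classified as follows (the access list number and class-map names are assumptions):
Switch(config)# access-list 100 permit tcp any any range 2000 2002
Switch(config)# class-map VOICE-CONTROL
Switch(config-cmap)# match access-group 100
Switch(config-cmap)# exit
Switch(config)# class-map VOICE-BEARER
Switch(config-cmap)# match protocol rtp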
A Cisco IP Phone sets its QoS information according to the rules listed below:
• PC data packets can be left in their original state or can be marked with a configurable CoS value
• SCCP voice control packets get CoS 3, IP Precedence 3, and DSCP 26 (AF31)
• RTP voice-bearer packets get CoS 5, IP Precedence 5, and DSCP 46 (EF)
The default switch behavior supplies the suitable QoS to voice traffic. Packets with CoS 3 are normally
placed in the higher threshold of the lower-priority standard queue, and packets with CoS 5 are placed in
the strict-priority queue. Voice call control packets, marked with CoS 3, are thus placed into egress queues
for better service than normal data, and voice-bearer packets, marked with CoS 5, are placed into the
strict-priority queue.
21. Controlling Access in the Campus Environment
This chapter covers some of the preventative security measures that can be implemented in a Cisco campus
environment. The first preventative measure involves creating an access policy.
21.1 Access Policies
Access policies are the defining guidelines that are necessary to create a level of access control. Access
policies may vary widely; businesses of different sizes may require different types of access policy. An
access policy may define:
• the management and configuration of network devices, including physical security, logical security, and
access control;
• the means of controlling user access to the network through mechanisms such as switch port security
and VLAN management;
• controlling access to distributed and enterprise services;
• determining the traffic allowed out of a distribution switch and into the core network, as well as how
traffic is managed; and
• route filtering to determine the routes that should be seen by the core network.
In the campus environment, an access policy is designed to police the traffic going to and from the
campus. The policy should allow only the traffic required to do business. An access policy should also
provide a measure of protection to the network devices in the campus. Each layer in the network can and
should have a different access policy, although some access policies could apply to all devices in the
network.
Table 21.1 summarizes the different characteristics and access policies for each hierarchical layer of a given
network.
TABLE 21.1: Access Policy Guidelines
Access Layer: The access layer is the entry point for users into the campus network. Use port security and
passwords here to protect the network.
Distribution Layer: The distribution layer carries the bulk of all policy decisions. This layer defines what
traffic passes to or from the core and access layers. Many of the network device access policies should be
the same as at the access layer.
Core Layer: The core layer is a high-bandwidth backbone handling the traffic of all the other devices in
the network. There usually should be no policies at this layer, because the core's function is to pass traffic
at high speed; any policy implemented would slow down the flow.
21.2 Managing Network Devices
The policy to control access to network devices should be one of the first components of the access policy.
All devices at every layer in the campus network should have a plan to provide for physical security;
passwords; privilege levels, which allow limited access to a network device; and limiting virtual terminal or
Telnet access.
21.2.1 Physical Access
Virtually all devices provide a way of gaining control of a given device, if you have physical access to the
device. For this reason, defining a physical access policy is important. If the physical device is not secured,
your network will not be secure either. You can physically secure your network by: establishing a
configuration, control, and change management policy for all devices at each of the respective layers;
establishing a security plan for all physical locations, including details on physical and link security;
providing the proper physical environment with provisions for locking the room, proper ventilation and
temperature controls, and backup power; controlling direct access to the device; and securing access to
network links.
21.2.2 Passwords
There are several ways to access every Cisco device; each should have a password applied to prevent
unauthorized access.
• Out-of-band management options include the console port and the auxiliary port.
• In-band management options include Trivial File Transfer Protocol (TFTP) servers and Simple Network
Management Protocol (SNMP)-based network management systems, such as CiscoWorks 2000.
• Virtual terminal ports are used for terminal access and are referred to as vty ports. There are five vty
ports by default on each Cisco device; you can create more vty ports if you need to.
By default, passwords are stored in clear text format in the router's configuration. The only exception to this
is the enable secret password, which is automatically encrypted. Password encryption can be compromised
so it should be used in combination with other methods of security.
21.2.3 Privilege Levels
There are two default levels of access: user and privileged. The user level allows the user to perform certain
commands but does not give them the ability to modify the configuration or perform a debug. The privileged
level allows the user to issue all commands, including configuration and debug commands.
Cisco IOS provides different levels of privileges for users with the use of the privilege level command.
This command allows network administrators to provide a more granular set of rights to Cisco network
devices. There are 16 different levels of privilege that can be set, ranging from 0 to 15. Level 1 is the default
user exec privilege while the highest level, 15, allows the user to have all rights to the device. Level 0 can be
used to specify a more limited subset of commands for specific users or lines. For all other privilege levels
(2 to 14) you must specify the commands that the privilege level should be able to complete.
Use the privilege command to define the commands that can be entered at a given privilege level.
Use the enable secret level level password command to set the password for the privilege level.
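As a sketch of how these two commands fit together, the following hypothetical example (the command choices and the password are illustrative, and exact syntax can vary by IOS release) assigns the ping command to privilege level 2 and sets a password for that level:

```
R1(config)# privilege exec level 2 ping
R1(config)# enable secret level 2 Level2Pass
```

A user would then type enable 2 and supply Level2Pass to enter privilege level 2, where ping is available but configuration commands are not.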
21.2.4 Virtual Terminal Access
By default, there are five vtys on each Cisco device, but you can create as many as you need. The vty that
you receive is determined by which vty lines are currently in use. Because you will never know exactly
which vty line you are using, you should set identical restrictions on all lines.
The line vty_number vty_range command takes you into line configuration mode for the selected vtys.
The most common use of this command is line vty 0 4, which indicates that you are modifying
vty 0 (the first vty) through vty 4.
The access-class command applies an access list to the vty lines. The access list is a standard access list
that indicates the source addresses that are either permitted or denied. The in | out condition specified
in the access-class statement indicates whether the source address should be allowed to establish
a Telnet session with this device (in) or allowed to Telnet out of this device (out).
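For example, a minimal sketch (the subnet and the access list number are hypothetical) that restricts inbound Telnet on all five default vty lines to a single management subnet might look like this:

```
R1(config)# access-list 10 permit 192.168.10.0 0.0.0.255
R1(config)# line vty 0 4
R1(config-line)# access-class 10 in
```

Because the restriction is applied to lines vty 0 through 4, it holds no matter which vty line a given Telnet session lands on.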
Use caution with web access as well. Starting in Release 11.0(6) and later, Cisco allows web
browser access for configuring your Cisco network device. This access is provided via HTTP and, while easier,
it does create some potential security issues: if you turn on the HTTP server, no security is applied by
default. To enable HTTP access, enter the following command:
Switch(config)#ip http server
Password security for web access can be applied similarly to console and virtual terminal access. The
following command can be used to specify what kind of authentication should be used:
Switch(config)#ip http authentication [ aaa | enable | local | tacacs ]
The four types of authentication that can be set in this command are:
• aaa, which indicates that authentication, authorization, and accounting (AAA) should be used for authentication;
• enable, which indicates that the enable password should be used (this is the default method);
• local, which indicates that the local user database is used for authentication information; and
• tacacs, which indicates that a TACACS server should be used for authentication.
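Putting these pieces together, a minimal sketch (the username and password are hypothetical) that enables the HTTP server and authenticates web users against the local user database might be:

```
Switch(config)# username admin privilege 15 password S3cretPass
Switch(config)# ip http server
Switch(config)# ip http authentication local
```

With local authentication, web access no longer depends on the shared enable password, and individual accounts can be granted different privilege levels.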
21.3 Access Layer Policy
The access layer is the entry point for users to access the network. Cable connections are generally pulled
from an access layer switch to offices and cubicles in a company. For this reason, the network devices of the
access layer are physically the most vulnerable.
At the access layer you should use port security to limit the Media Access Control (MAC) addresses
allowed to use the switch so as to prevent unauthorized users from gaining access to the network at all. Also,
the default VLAN of all ports is VLAN1, which is also the default management VLAN. Users entering the
network on ports that were not configured would be in this VLAN. Cisco recommends that the
management VLAN be moved to another VLAN to prevent users from entering the network on VLAN1 on
an unconfigured port.
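As an illustration, on a CatOS-based switch the two recommendations above might be applied as in the following sketch (the port, MAC address, VLAN number, and IP addressing are all hypothetical, and exact syntax varies by platform and software version):

```
Console> (enable) set port security 2/1 enable 00-90-2b-03-34-08
Console> (enable) set interface sc0 10 10.1.10.2 255.255.255.0
```

The first command restricts port 2/1 to the specified MAC address; the second moves the sc0 management interface into VLAN 10, so that unconfigured ports left in the default VLAN 1 cannot reach the management VLAN.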
21.4 Distribution Layer Policy
Most of the access control policy would be implemented at the distribution layer. This layer is also
responsible for ensuring that data stays in the switch block unless that data is specifically permitted outside
of the switch block, and sending the correct routing and service information to the core. Policy at the
distribution layer ensures that the core block or the WAN blocks are not burdened with traffic that has not
been explicitly permitted. A distribution layer policy also protects the core and the other switch blocks from
receiving incorrect information, such as incorrect routes, that may harm the rest of the network. Access
control at the distribution layer falls into three different categories: defining which user traffic passes
between VLANs and ultimately to the core; defining which routes are seen by the core block and the switch
block; and defining which services the switch block will advertise out to the rest of the network.
21.4.1 Filtering Traffic at the Distribution Layer
Many of the access control methods used at the distribution layer rely on the creation of an access control
list. Two types of IP access lists are available: standard and extended. Both types are a series of
permit and deny statements based on a set of test criteria. However, a standard access list can test only
the source address, while an extended access list allows a greater degree of control by checking the source
and destination addresses as well as the protocol type and the port number or application type of the packet.
A standard access list is easier for the router to process; an extended access list, however, provides a greater
degree of control.
Access lists are created for a variety of applications and can be used for controlling access in the campus
network by applying them in different capacities:
• applying the access list to an interface for traffic management purposes through the use of the protocol access-group command;
• applying the access list to a line for security purposes through the use of the access-class command;
• managing routing update information through the use of the distribute-list command; and
• managing service update information through the use of commands such as ipx output-sap-filter in order to determine which services are advertised.
21.4.2 Controlling Routing Update Traffic
Controlling the routing table of the core block has several advantages: it reduces the size of the routing table at
the core block, allowing it to process packets faster; it prevents users from reaching networks that have not
been advertised, unless they have a static or default route to get there; and it prevents incorrect information
from propagating through the core block.
There are two methods available for controlling the routing information that is sent to the core block:
• Route summarization. Depending on which routing protocol is used, a summarized entry of all the available routes of the switch block can be sent from the distribution layer to the core.
• Distribution lists. A distribution list can be used to indicate which routes the distribution layer can advertise to the core or, conversely, which routes the core can accept from the switch block.
21.4.3 Configuring Route Filtering
The basic method for configuring route filtering is by using the distribute-list command. This method
is used in large routed networks but can also be used by Route Switch modules (RSMs) in a large switched
network. The syntax for configuring route filtering for inbound routing updates is:
R1(config-router)# distribute-list access_list_number | name in
[ type number ]
Similarly, the syntax for configuring route filtering for outbound routing updates is
R1(config-router)# distribute-list access_list_number | name out
[ interface-name ] routing_process | autonomous_system_number
The arguments for this command are:
• access_list_number, which specifies the number of the previously created standard access list.
• in | out, which defines whether the filtering applies to incoming routing updates or outgoing routing updates.
• interface_name, which specifies the name of the interface.
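To tie the pieces together, the following sketch (the addresses, access list number, and EIGRP autonomous system number are hypothetical) permits only the switch block's summary range in outbound routing updates toward the core:

```
R1(config)# access-list 10 permit 172.16.0.0 0.0.255.255
R1(config)# router eigrp 100
R1(config-router)# distribute-list 10 out
```

Routes that do not match the access list are silently withheld from updates, so the core routing table stays small and incorrect switch-block routes cannot propagate.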
21.5 Core Layer Policy
The core block is responsible for moving data quickly. All the devices that are designed to be core block
solutions are optimized to move data as quickly as possible. For this reason, the core block should have as
little policy as possible. The only policies that should be applied at the core block are those that relate to
quality of service (QoS) commands for congestion management and congestion avoidance. QoS
implementations vary, depending on hardware used and versions of IOS. Please see your IOS-specific
documentation for details.
22. Monitoring and Troubleshooting
22.1 Monitoring Cisco Switches
You can monitor and manage your Catalyst switches in a number of different ways. One way is
through the console port, using either the command-line interface (CLI) or other methods for performing
network management functions, such as Cisco Discovery Protocol (CDP), Embedded Remote Monitoring
(RMON), or the Switched Port Analyzer (SPAN). The console port is an EIA/TIA-232 DCE interface to which
you can connect a console terminal or modem. The type of connector used, however, depends on the
hardware: on a Catalyst 5000 with a Supervisor I or II engine, a rollover cable is used; on a
Supervisor III or a Catalyst 6000, a straight-through cable is used in conjunction with a modular plug. Other
kinds of switches may differ.
Through the console port, you can directly access the CLI or configure a Serial Line Internet Protocol (SLIP)
interface to access such network management functions as Telnet, ping, and SNMP. An IP address can be
assigned to the Cisco switch for management purposes. Once the address is in place, you can direct Telnet to
access the IP address of the switch to reach the CLI.
You can also use the IP address of the switch to access an SNMP agent, such as CiscoWorks 2000.
22.1.1 Out-of-Band Management
Out-of-band management access for Cisco switches is performed via a console port connection or the Serial
Line Internet Protocol (SLIP).
22.1.1.1 Console Port Connection
The console port is the local console terminal connection to the switch. Depending on the type of switch
used, connect an EIA/TIA-232 terminal, a modem, or a network management workstation to the switch via
a straight-through cable to use the console port. The console port enables you to: configure the switch using
a command-line interface; monitor network statistics and errors; configure SNMP agent parameters; and
download software updates to the switch or distribute software images residing in Flash memory to attached
devices.
22.1.1.2 Serial Line Internet Protocol (SLIP)
You can access the Cisco switch command line using SLIP, which is a version of Internet Protocol (IP) that
runs over serial links and allows IP communications through the console port. Catalyst series switches
support out-of-band management through the use of a modem attached to the console port. This out-of-band
connection works in conjunction with SLIP.
The out-of-band connection can be used to: establish a Telnet session that provides access to the Cisco
switch CLI; use the Telnet Server feature; and establish an SNMP management session that provides the
capability to use an SNMP based management platform such as the CiscoWorks 2000 solution.
To establish an out-of-band connection on a Cisco switch, connect a 100 percent Hayes-compatible modem
by means of a straight-through cable with a 25-pin D-type connector. The modem should be configured for
auto answer mode. Use the SLIP (sl0) interface for point-to-point SLIP connections between the switch and
an IP host.
22.1.2 In-Band Management
In-band management access for Cisco switches is performed using the Simple Network Management
Protocol (SNMP); Telnet; or the Cisco Discovery Protocol (CDP).
22.1.2.1 SNMP
Simple Network Management Protocol (SNMP) is an application layer protocol designed to facilitate the
exchange of management information between network devices. The SNMP system consists of an SNMP
manager, an SNMP agent, and a Management Information Base (MIB).
Instead of defining a large set of commands, SNMP places all operations in a get-request, get-next-request, and set-request format. An SNMP manager can get a value from an SNMP agent or store a value
into that SNMP agent. The SNMP manager can be part of a network management system (NMS), and the
SNMP agent can reside on a networking device such as a switch. The SNMP agent can respond to MIB-related queries sent by the NMS.
An SNMP agent can provide access to a MIB variable via the get-request or get-next-request format, allow a MIB
variable to be set via the set-request format, and send an SNMP trap. The latter is used to notify a network management station that an
extraordinary event has occurred at an agent. When a trap condition occurs, the SNMP agent sends an
SNMP trap message to each of the network management stations specified in the trap receiver table.
To configure SNMP on a switch, configure the SNMP community strings via the set snmp community
{ read-only | read-write | read-write-all } [ community_name ] command. Then assign a trap
receiver address and community via the set snmp trap rcvr_address rcvr_community command. If
desired, configure the switch so that it issues an authentication trap via the set snmp trap enable command.
The keywords for the set snmp community command are:
• read-only, which assigns read-only access to the specified SNMP community.
• read-write, which assigns read-write access to the specified SNMP community.
• read-write-all, which assigns read-write access to all objects, including the community strings themselves.
• community_name, an optional parameter that specifies the name of the SNMP community. The default SNMP community strings are public (read-only), private (read-write), and secret (read-write-all).
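As a combined sketch (the community name and NMS address are hypothetical), the following configures a read-only community, defines a trap receiver, and enables authentication traps:

```
Console> (enable) set snmp community read-only netmon
Console> (enable) set snmp trap 10.1.1.50 netmon
Console> (enable) set snmp trap enable auth
```

With this in place, failed SNMP authentication attempts generate traps that are delivered to the NMS at 10.1.1.50.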
An IP permit trap is sent when unauthorized access based on the IP permit list is attempted. The set snmp
trap command is a privileged mode switch command used to enable or disable the different SNMP traps on
the system or to add an entry into the SNMP authentication trap receiver table. The default configuration has
SNMP traps disabled. Use the show snmp command to verify the appropriate traps were configured. The
syntax for the set snmp trap command is:
set snmp trap { enable | disable } [ all | module | chassis | bridge
| repeater | auth | vtp | ippermit | vmps | config | entity | stpx ]
set snmp trap rcvr_address rcvr_community
Table 22.1 lists the keywords and arguments for the set snmp trap command.
TABLE 22.1: Keywords and Arguments for the set snmp trap Command
• enable: Keyword to activate SNMP traps.
• disable: Keyword to deactivate SNMP traps.
• all: Optional keyword to specify all trap types.
• module: Optional keyword to specify the moduleUp and moduleDown traps from the CISCO-STACK-MIB.
• chassis: Optional keyword to specify the ciscoSyslogMIB traps.
• bridge: Optional keyword to specify the newRoot and topologyChange traps.
• repeater: Optional keyword to specify the rptrHealth, rptrGroupChange, and rptrResetEvent traps.
• auth: Optional keyword to specify the authenticationFailure trap.
• vtp: Optional keyword to specify the VTP traps.
• ippermit: Optional keyword to specify the IP Permit Denied access traps.
• vmps: Optional keyword to specify the vmVmpsChange trap.
• config: Optional keyword to specify the sysConfigChange trap.
• entity: Optional keyword to specify the entityMIB trap.
• stpx: Optional keyword to specify the STPX traps.
• rcvr_address: IP address or IP alias of the system to receive SNMP traps.
• rcvr_community: Community name to use when sending authentication traps.
22.1.2.2 Telnet Client Access
Remote, in-band SNMP management is possible through any LAN or ATM interface assigned to the same
VLAN as the Supervisor module's NMP IP address. In-band connections can be used to establish Telnet
sessions to the Cisco switch CLI or SNMP management sessions on an SNMP-based management platform.
Cisco switches provide outgoing Telnet functionality from the CLI; this allows a network manager to use
Telnet from the CLI of the switch to other devices on the network. Using Telnet, a network manager can
maintain a connection to a Cisco switch while also connecting to another switch or router. Cisco switches
support up to eight simultaneous Telnet sessions. Telnet sessions disconnect automatically after remaining
idle for a configurable time period. To access the switch through a Telnet session, you must first set the IP
address for the switch.
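For instance, before Telnet access is possible, the switch needs a management IP address and a default gateway. A hypothetical sketch on a CatOS switch (the VLAN and addresses are illustrative):

```
Console> (enable) set interface sc0 1 10.1.1.10 255.255.255.0
Console> (enable) set ip route default 10.1.1.1
```

Once sc0 has an address, a network manager can Telnet to 10.1.1.10 to reach the CLI or direct SNMP queries to the same address.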
22.1.2.3 Cisco Discovery Protocol (CDP)
Cisco Discovery Protocol (CDP) is media- and protocol-independent and runs on all Cisco manufactured
equipment. With CDP, network management applications can retrieve the device type and the SNMP-agent
address of neighboring devices. Applications are now enabled to send SNMP queries to neighboring devices.
CDP enables network management applications to dynamically discover Cisco devices that are neighbors of
already known devices, neighbors running lower-layer transparent protocols in particular. CDP runs on all
media that support the Subnetwork Access Protocol (SNAP). CDP runs over the data link layer only, not the
network layer. Therefore, two systems that support different network layer protocols can learn about each
other. Cached CDP information is available to network management applications. However, Cisco devices
never forward a CDP packet. When new information is received, old information is discarded.
22.1.3 Embedded Remote Monitoring
Cisco switches provide support for the Embedded Remote Monitoring (RMON) of Ethernet and Fast
Ethernet ports. Embedded RMON allows you to monitor network activity. It enables you to access and
remotely monitor the RMON specification RFC 1757 groupings of statistics, historical information, alarms,
and events for any port through SNMP or the TrafficDirector Management application. The RMON feature
monitors network traffic at the data link layer of the OSI model without requiring a dedicated monitoring
probe or network analyzer. RMON enables a network manager to analyze network traffic patterns, set up
proactive alarms to detect problems before they affect users, identify heavy network users as candidates to
move to dedicated or higher speed ports, and perform trend analysis for long-term planning.
The statistics group of the RMON specification maintains utilization and error statistics for the switch that is
monitored. Statistics include information about collisions; cyclic redundancy checks (CRC) and alignment;
undersized or oversized packets; jabber; fragments; broadcast, multicast, and unicast messages; and
bandwidth utilization.
To configure a Cisco switch for RMON, activate SNMP remote monitoring support via the set snmp rmon
enable command.
22.1.4 Switched Port Analyzer
Cisco switches have a Switched Port Analyzer (SPAN) feature which enables you to monitor traffic on any
port for analysis by a network analyzer device or RMON probe. This feature also provides RMON2
statistics on all nine RMON groups and all seven layers of the OSI model. Enhanced SPAN (E-SPAN)
enables you to monitor traffic from multiple ports in the same VLAN on a single port for analysis.
SPAN redirects traffic from an Ethernet, Fast Ethernet, or Fiber Distributed Data Interface (FDDI) port
or VLAN to an Ethernet or Fast Ethernet monitor port for analysis and troubleshooting. You can monitor a
single port or VLAN using a dedicated analyzer such as a Network Associates Sniffer, or an RMON probe,
such as a Cisco SwitchProbe.
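As a sketch (the module/port numbers and VLAN are hypothetical, and syntax varies by CatOS version), a SPAN session mirroring a single port, and an E-SPAN session mirroring an entire VLAN, to an analyzer attached to port 3/12 might be configured as:

```
Console> (enable) set span 2/1 3/12
Console> (enable) set span 10 3/12
Console> (enable) set span disable
```

The final command disables the monitoring session once analysis is complete, returning the destination port to normal use.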
22.1.5 CiscoWorks 2000
CiscoWorks is Cisco Systems' network management software. It is based on Simple Network Management
Protocol (SNMP) and is used for managing networks with one integrated platform. This includes topology
maps, configuration services, and important system, device, and performance information. CiscoWorks 2000
can be integrated with popular SNMP management platforms, such as HP OpenView, for seamless
management of complex networks. Additionally, CiscoWorks 2000 solutions can be used independently of
these SNMP management applications and do not require these services to be fully functional. The various
features of CiscoWorks 2000 LAN Management are discussed in Table 22.2.
TABLE 22.2: CiscoWorks 2000 LAN Management Features
• Campus Bundle for ATM and LANE: An updated version of the former ATM Director. The Campus Bundle offers network discovery and display, ATM and LANE configuration, user tracking, LAN/WAN traffic, and performance management capabilities on a device and network-wide basis.
• CiscoView: A graphical management application providing dynamic status, statistics, and comprehensive configuration information for local or remote Cisco internetworking products. CiscoView displays a physical view of a device backplane, with graphs and color-coding for at-a-glance status and to display performance and other statistics. In addition, CiscoView has the ability to modify configurations such as trap, IP route, virtual LAN (VLAN), and bridge configurations.
• Campus Manager: Features include intelligent discovery and display of large Layer 2 networks on browser-accessible topology maps; configuration of VLAN/LANE and ATM services and assignment of switch ports to those services; link and device status display based upon SNMP polling; identification of Layer 2 configuration discrepancies; diagnostic tools for connectivity-related problems between end stations and Layer 2 and Layer 3 devices; and automatic location and correlation of information on users by media access control (MAC) address, IP address, NT or NetWare Directory Services (NDS) login, or UNIX hostname, with their physical connections to the switched network.
• TrafficDirector: Offers graphical reporting and analysis of RMON-collected traffic data, both from RMON-enabled Catalyst switches and from external SwitchProbes, which are also available from Cisco.
• Resource Manager Essentials: A suite of Web-based applications offering network management solutions for Cisco switches, access servers, and routers. The suite consists of Inventory Manager, Change Audit, Device Configuration Manager, Software Image Manager, Availability Manager, Syslog Analyzer, and Cisco Management Connection.
22.2 General Troubleshooting Model
You should deploy a systematic troubleshooting technique that can eliminate different possibilities and
move step-by-step toward the real causes of the problem. The following is a generally accepted
troubleshooting model. It presents a flow chart that can effectively guide you through your troubleshooting
tasks.
• Define the problem in terms of the associated symptoms and possible causes.
• Gather facts from different sources. Talk to network administrators, other support engineers, managers, and anyone else who can provide relevant information. Run some basic tests (such as ping and trace).
• Consider all possibilities and eliminate the improbable ones so as to set a boundary for the problem area. Order the possibilities that you believe might be the cause of the network problem by their likelihood.
• Create an action plan for each possibility. Ensure that the security and performance implications of each of your proposed actions are acceptable.
• Implement the action plans in order of likelihood. Document every action and change so that you can reverse your actions if they prove inappropriate.
• Observe the results of each action. Verify that the problems or symptoms have been eliminated and that other normal network operations are not disrupted or adversely affected.
• Document the facts and report the problem as solved if the symptoms have disappeared and the problem has been solved without creating new ones. Documenting your work, including the date and time of each change, will save you and others a lot of time and effort in the future.
• If unresolved issues remain, iterate through implementing actions and observing results: consider the next action plan and implement it. There will be times when you are left with no possibilities in hand while the network problem persists; in this event, you will have to think of more possibilities, which may require gathering facts that you had overlooked.
22.2.1 Troubleshooting with show Commands
There are a number of show commands that you can use for troubleshooting hardware, configuration, or
network problems in a switched network environment. These are:
• show system, which displays the power supply, fan, temperature alarm, system, and modem status; the number of days, hours, minutes, and seconds since the last system restart; the baud rate; the MAC address range; and the system name, location, and contact.
• show arp, which displays the contents of the ARP table and the aging time.
• show atm, which displays ATM interface, traffic, VC, and VLAN information and status.
• show cam dynamic, which displays the dynamic CAM table.
• show config, which displays the current system configuration.
• show fddi, which displays the settings of the FDDI/CDDI module.
• show flash, which displays the Flash code names, version numbers, and sizes.
• show interface, which displays the Supervisor module network interface information.
• show ip route, which displays the IP route information.
• show log, which displays the system or module error log.
• show mac, which displays the MAC counters for all the installed modules.
• show module, which displays module status and information.
• show netstat, which displays statistics for the various TCP/IP stack protocols and the state of active network connections.
• show port, which displays the port status and counters for all installed modules.
• show spantree, which displays the Spanning Tree information for the VLANs, including port states.
• show test, which displays the results of diagnostic tests on the specified modules.
• show trunk, which displays the ISL/dot1q trunking information, including trunking status.
• show vlan, which displays the VLAN type, status, and assigned modules and ports.
22.2.2 Physical Layer Troubleshooting
The most common network problems can be traced to cable problems. Check that the correct cable is used.
Category 3 cabling can only support 10BaseT. Check whether a 10/100-Mbps connection is connected at 10
Mbps instead of 100 Mbps. Check whether the cable is a crossover, rollover or straight-through cable by
comparing the RJ-45 connector wiring at both ends of the cable, including all wiring closet connections.
Check the devices' port link integrity LED on both ends of the cable.
22.2.3 Troubleshooting Ethernet
Table 22.3 outlines problems commonly encountered on Ethernet networks.
TABLE 22.3: Ethernet Media Problems
• Excessive noise: Use the show interfaces ethernet EXEC command to determine the status of the router's Ethernet interfaces. Check cables to determine whether any are damaged. Look for badly spaced taps that could be causing reflections. If you are using 100BaseTX, make sure you are using Category 5 cabling.
• Excessive collisions: Use the show interfaces ethernet command to check the rate of collisions. Use a time domain reflectometer (TDR) to find any unterminated Ethernet cables. Look for a jabbering transceiver attached to a host.
• Excessive runt frames: In a shared Ethernet environment, runt frames are almost always caused by collisions. If the collision rate is high, refer to the problem "Excessive collisions" earlier in this table. If runt frames occur when collisions are not high, or in a switched Ethernet environment, they are the result of underruns or bad software on a network interface card. Use a protocol analyzer to try to determine the source address of the runt frames.
• Late collisions (collisions that occur beyond the first 64 bytes of an Ethernet frame): Use a protocol analyzer to check for late collisions. Late collisions usually occur when Ethernet cables are too long or when there are too many repeaters in the network. Check the diameter of the network and make sure it is within specification.
• No link integrity on 10BaseT, 100BaseT4, or 100BaseTX: Make sure you are not using 100BaseT4 when only two pairs of wire are available; 100BaseT4 requires four pairs. Check for a 10BaseT, 100BaseT4, or 100BaseTX mismatch. Determine whether there is a cross-connect. Check for excessive noise.
22.2.3.1 Network Testing
The ping command is one of the most useful troubleshooting tools when performing network testing. The
ping command is supported in both user and privileged exec modes. In user mode, you specify an IP
address, or a host name if it can be resolved to an IP address; the ping command then tests the round-trip
path to and from the target. In privileged mode, you can additionally specify a protocol, a repeat count,
a datagram size, and a timeout in seconds. Generally, the syntax for the ping command is:
ping -s ip_address [ packet_size ] [ packet_count ]
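For example, using the syntax above, a hypothetical invocation (the target address is illustrative) that sends five 100-byte packets, one per second, would be:

```
Console> ping -s 10.1.1.1 100 5
```

One line of output is printed for each response received; if no responses arrive, the destination, an intermediate link, or a filtering device along the path is the likely culprit.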
TABLE 22.4: Parameters for the ping Command
• -s: Causes ping to send one datagram per second, printing one line of output for every response received. The ping command does not return any output when no response is received.
• ip_address: The IP address or IP alias of the host.
• packet_size: An optional parameter giving the number of bytes in a packet, from 1 to 2000 bytes, with a default of 56 bytes. The actual packet size is eight bytes larger because the switch adds header information.
• packet_count: An optional parameter giving the number of packets to send.
22.2.3.2 The Traceroute Command
The traceroute command was introduced in Release 10.0 of Cisco IOS and can be used to find the
path between IP devices. The traceroute command can be executed in user and privileged exec modes;
in privileged exec mode, you can use the extended traceroute, which is more flexible and informative.
Because the traceroute command displays a hop-by-hop path through an IP network from the switch to a
specific destination host, it is very useful for determining where along a particular network path a
problem lies. The syntax for the traceroute command is:
traceroute [ -n ] [- w wait_time ] [ -i initial_ttl ] [ -m max_ttl ]
[ -p dest_port ] [ -q nqueries ] [ -t tos ] ip_address [ data_size ]
TABLE 22.5: Parameters for the traceroute Command

Parameter        Description
-n               Prevents traceroute from performing a DNS lookup for each
                 hop on the path. Only numerical IP addresses are printed.
-w wait_time     Specifies the amount of time that traceroute will wait for
                 an ICMP response message. The allowed range is 1 to 300
                 seconds; the default is 5.
-i initial_ttl   Causes traceroute to send ICMP datagrams with a TTL value
                 equal to initial_ttl instead of the default TTL of 1. This
                 causes traceroute to skip processing for hosts that are
                 less than initial_ttl hops away.
-m max_ttl       Specifies the maximum TTL value for outgoing ICMP
                 datagrams. The allowed range is 1 to 255; the default
                 value is 30.
-p dest_port     Specifies the base UDP destination port number used in
                 traceroute datagrams. This value increments each time a
                 datagram is sent. The allowed range is 1 to 65535; the
                 default base port is 33434.
-q nqueries      Specifies the number of datagrams to send for each TTL
                 value. The allowed range is 1 to 1000; the default is 3.
-t tos           Specifies the TOS to be set in the IP header of the
                 outgoing datagrams. The allowed range is 0 to 255; the
                 default is 0.
ip_address       IP alias or IP address in dot notation of the destination
                 host.
data_size        Number of bytes, in addition to the default of 40 bytes,
                 in the outgoing datagrams. The allowed range is 0 to 1420;
                 the default is 0.
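The TTL mechanics that these parameters control can be sketched as follows. This is a hedged Python illustration over a simulated path, not the IOS implementation; a real traceroute sends UDP datagrams and waits for ICMP time-exceeded or port-unreachable replies:

```python
# Sketch of the traceroute algorithm the parameters in TABLE 22.5 control.
# The 'path' list simulates the network; no real datagrams are sent.

def traceroute(path, initial_ttl=1, max_ttl=30, nqueries=3, base_port=33434):
    """Return the list of hops discovered on a simulated path.

    path is the ordered list of router addresses ending at the destination.
    """
    hops = []
    port = base_port
    for ttl in range(initial_ttl, max_ttl + 1):
        for _ in range(nqueries):      # -q: datagrams sent per TTL value
            port += 1                  # -p: the port increments per datagram
        if ttl > len(path):            # nothing replies beyond the target
            break
        hop = path[ttl - 1]            # router 'ttl' hops away expires the TTL
        hops.append(hop)
        if hop == path[-1]:            # destination reached: stop probing
            break
    return hops

route = ["10.0.0.1", "10.0.1.1", "192.168.5.9"]
print(traceroute(route))                   # all three hops
print(traceroute(route, initial_ttl=2))    # -i 2 skips the first hop
print(traceroute(route, max_ttl=2))        # -m 2 stops short of the target
```

The sketch shows why -i and -m bound the probed region of the path: the TTL starts at initial_ttl, increases by one per round, and probing stops at max_ttl or when the destination itself answers.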
22.2.3.3 Network Media Test Equipment
Third-party equipment that can be used to troubleshoot networks includes:
• Volt/Ohm meters and digital multimeters, which are used to check cables for connectivity and
continuity.
• Cable testers or scanners, which also test for connectivity but are more sophisticated than Volt/Ohm
meters. They can report cable conditions such as attenuation, near-end crosstalk (NEXT), and noise, and
can also measure a cable's impedance.
• Time domain reflectometers (TDRs) and optical TDRs (OTDRs, for fiber-optic cable testing), which
provide reflectometry, wire-map, and traffic monitoring functionality. They can locate opens, shorts,
kinks, sharp bends, crimps, and impedance mismatches.
• Breakout boxes, fox boxes, and bit/block error rate testers (BERTs/BLERTs), which are digital interface
testing tools used to measure the digital signals present at computers, printers, modems, CSU/DSUs, and
other peripheral interfaces. These devices can monitor data line conditions, analyze and trap data, and
diagnose problems common to data communication systems. Traffic from data terminal equipment (DTE)
through data communications equipment (DCE) can be examined to help isolate problems, identify bit
patterns, and ensure that the proper cabling has been installed.
• Network monitors, which are Layer 2 tools used to capture, display, and save traffic passing through a
network cable. They can take the raw data and provide information on frame sizes, the number of
erroneous frames, MAC addresses, the number of broadcasts, and so on.
• Network analyzers, which are similar to network monitors but are capable of interpreting and displaying
packets, segments, and other (higher-layer) protocol data units (PDUs). They can be used to study the
format or behavior of certain protocols and to check time delays between request and response.
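As a rough illustration of the figure a BERT reports, the bit error rate is simply errored bits divided by total bits transmitted during the test window. The sketch below is a minimal Python example; the T1 line rate is the standard 1.544 Mbit/s, but the error count is a made-up input, not a measured value:

```python
# Minimal sketch of the bit error rate (BER) computation a BERT performs.

def bit_error_rate(errored_bits, total_bits):
    """BER = errored bits / total bits transmitted during the test window."""
    if total_bits <= 0:
        raise ValueError("total_bits must be positive")
    return errored_bits / total_bits

# Example: a T1 (1.544 Mbit/s) tested for 60 seconds with 9 errored bits.
total = 1_544_000 * 60          # bits sent during the window
print(bit_error_rate(9, total)) # on the order of 1e-7
```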