TCP/IP Illustrated, Volume 1: The Protocols - R-5

Praise for the First Edition of TCP/IP Illustrated, Volume 1: The Protocols
“This is sure to be the bible for TCP/IP developers and users. Within minutes of picking
up the text, I encountered several scenarios that had tripped up both my colleagues and
myself in the past. Stevens reveals many of the mysteries once held tightly by the ever-elusive networking gurus. Having been involved in the implementation of TCP/IP for
some years now, I consider this by far the finest text to date.”
—Robert A. Ciampa, network engineer, Synernetics, division of 3COM
“While all of Stevens’ books are readable and technically excellent, this new opus is awesome. Although many books describe the TCP/IP protocols, Stevens provides a level of
depth and real-world detail lacking from the competition. He puts the reader inside
TCP/IP using a visual approach and shows the protocols in action.”
—Steven Baker, networking columnist, Unix Review
“TCP/IP Illustrated, Volume 1, is an excellent reference for developers, network administrators, or anyone who needs to understand TCP/IP technology. TCP/IP Illustrated is
comprehensive in its coverage of TCP/IP topics, providing enough details to satisfy the
experts while giving enough background and commentary for the novice.”
—Bob Williams, vice president, Marketing, NetManage, Inc.
“. . . [T]he difference is that Stevens wants to show as well as tell about the protocols.
His principal teaching tools are straightforward explanations, exercises at the ends of
chapters, byte-by-byte diagrams of headers and the like, and listings of actual traffic as
—Walter Zintz, UnixWorld
“Much better than theory only. . . . W. Richard Stevens takes a multihost-based configuration and uses it as a travelogue of TCP/IP examples with illustrations. TCP/IP Illustrated, Volume 1, is based on practical examples that reinforce the theory—distinguishing
this book from others on the subject, and making it both readable and informative.”
—Peter M. Haverlock, consultant, IBM TCP/IP Development
“The diagrams he uses are excellent and his writing style is clear and readable. In sum,
Stevens has made a complex topic easy to understand. This book merits everyone’s attention. Please read it and keep it on your bookshelf.”
—Elizabeth Zinkann, Sys Admin
“W. Richard Stevens has produced a fine text and reference work. It is well organized
and very clearly written with, as the title suggests, many excellent illustrations exposing the intimate details of the logic and operation of IP, TCP, and the supporting cast of
protocols and applications.”
—Scott Bradner, consultant, Harvard University OIT/NSD
TCP/IP Illustrated, Volume 1
The Protocols
Second Edition
Kevin R. Fall
W. Richard Stevens
Originally written by Dr. W. Richard Stevens.
Revised by Kevin Fall.
Upper Saddle River, NJ • Boston • Indianapolis • San Francisco
New York • Toronto • Montreal • London • Munich • Paris • Madrid
Capetown • Sydney • Tokyo • Singapore • Mexico City
Many of the designations used by manufacturers and sellers to distinguish their products are
claimed as trademarks. Where those designations appear in this book, and the publisher was aware
of a trademark claim, the designations have been printed with initial capital letters or in all capitals.
The authors and publisher have taken care in the preparation of this book, but make no expressed
or implied warranty of any kind and assume no responsibility for errors or omissions. No liability
is assumed for incidental or consequential damages in connection with or arising out of the use of
the information or programs contained herein.
The publisher offers excellent discounts on this book when ordered in quantity for bulk purchases
or special sales, which may include electronic versions and/or custom covers and content particular
to your business, training goals, marketing focus, and branding interests. For more information,
please contact:
U.S. Corporate and Government Sales
(800) 382-3419
[email protected]
For sales outside the United States, please contact:
International Sales
[email protected]
Visit us on the Web:
Library of Congress Cataloging-in-Publication Data
Fall, Kevin R.
TCP/IP illustrated.—2nd ed. / Kevin R. Fall, W. Richard Stevens.
p. cm.
Stevens’ name appears first on the earlier edition.
Includes bibliographical references and index.
ISBN-13: 978-0-321-33631-6 (v. 1 : hardcover : alk. paper)
ISBN-10: 0-321-33631-3 (v. 1 : hardcover : alk. paper) 1. TCP/IP (Computer network protocol)
I. Stevens, W. Richard. II. Title.
TK5105.55.S74 2012
Copyright © 2012 Pearson Education, Inc.
All rights reserved. Printed in the United States of America. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction,
storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical,
photocopying, recording, or likewise. To obtain permission to use material from this work, please
submit a written request to Pearson Education, Inc., Permissions Department, One Lake Street,
Upper Saddle River, New Jersey 07458, or you may fax your request to (201) 236-3290.
ISBN-13: 978-0-321-33631-6
Text printed in the United States on recycled paper at Edwards Brothers in Ann Arbor, Michigan.
First printing, November 2011
To Vicki, George, Audrey, Maya, Dylan, and Jan,
for their insight, tolerance, and support
through the long nights and weekends.
Preface to the Second Edition
Adapted Preface to the First Edition
Chapter 1 Architectural Principles
1.1.1 Packets, Connections, and Datagrams
1.1.2 The End-to-End Argument and Fate Sharing
1.1.3 Error Control and Flow Control
Design and Implementation
1.2.1 Layering
1.2.2 Multiplexing, Demultiplexing, and Encapsulation in Layered Implementations
The Architecture and Protocols of the TCP/IP Suite
1.3.1 The ARPANET Reference Model
1.3.2 Multiplexing, Demultiplexing, and Encapsulation in TCP/IP
1.3.3 Port Numbers
1.3.4 Names, Addresses, and the DNS
Internets, Intranets, and Extranets
Designing Applications
1.5.1 Client/Server
1.5.2 Peer-to-Peer
1.5.3 Application Programming Interfaces (APIs)
Standardization Process
1.6.1 Request for Comments (RFC)
1.6.2 Other Standards
Implementations and Software Distributions
Attacks Involving the Internet Architecture
Chapter 2 The Internet Address Architecture
Expressing IP Addresses
Basic IP Address Structure
2.3.1 Classful Addressing
2.3.2 Subnet Addressing
2.3.3 Subnet Masks
2.3.4 Variable-Length Subnet Masks (VLSM)
2.3.5 Broadcast Addresses
2.3.6 IPv6 Addresses and Interface Identifiers
CIDR and Aggregation
2.4.1 Prefixes
2.4.2 Aggregation
Special-Use Addresses
2.5.1 Addressing IPv4/IPv6 Translators
2.5.2 Multicast Addresses
2.5.3 IPv4 Multicast Addresses
2.5.4 IPv6 Multicast Addresses
2.5.5 Anycast Addresses
2.6.1 Unicast
2.6.2 Multicast
Unicast Address Assignment
2.7.1 Single Provider/No Network/Single Address
2.7.2 Single Provider/Single Network/Single Address
2.7.3 Single Provider/Multiple Networks/Multiple Addresses
2.7.4 Multiple Providers/Multiple Networks/Multiple Addresses
Attacks Involving IP Addresses
Chapter 3 Link Layer
Ethernet and the IEEE 802 LAN/MAN Standards
3.2.1 The IEEE 802 LAN/MAN Standards
3.2.2 The Ethernet Frame Format
3.2.3 802.1p/q: Virtual LANs and QoS Tagging
3.2.4 802.1AX: Link Aggregation (Formerly 802.3ad)
Full Duplex, Power Save, Autonegotiation, and 802.1X Flow Control
3.3.1 Duplex Mismatch
3.3.2 Wake-on LAN (WoL), Power Saving, and Magic Packets
3.3.3 Link-Layer Flow Control
Bridges and Switches
3.4.1 Spanning Tree Protocol (STP)
3.4.2 802.1ak: Multiple Registration Protocol (MRP)
Wireless LANs—IEEE 802.11 (Wi-Fi)
3.5.1 802.11 Frames
3.5.2 Power Save Mode and the Time Sync Function (TSF)
3.5.3 802.11 Media Access Control
3.5.4 Physical-Layer Details: Rates, Channels, and Frequencies
3.5.5 Wi-Fi Security
3.5.6 Wi-Fi Mesh (802.11s)
Point-to-Point Protocol (PPP)
3.6.1 Link Control Protocol (LCP)
3.6.2 Multilink PPP (MP)
3.6.3 Compression Control Protocol (CCP)
3.6.4 PPP Authentication
3.6.5 Network Control Protocols (NCPs)
3.6.6 Header Compression
3.6.7 Example
MTU and Path MTU
Tunneling Basics
3.9.1 Unidirectional Links
Attacks on the Link Layer
Chapter 4 ARP: Address Resolution Protocol
An Example
4.2.1 Direct Delivery and ARP
ARP Cache
ARP Frame Format
ARP Examples
4.5.1 Normal Example
4.5.2 ARP Request to a Nonexistent Host
ARP Cache Timeout
Proxy ARP
Gratuitous ARP and Address Conflict Detection (ACD)
The arp Command
Using ARP to Set an Embedded Device’s IPv4 Address
Attacks Involving ARP
Chapter 5 The Internet Protocol (IP)
IPv4 and IPv6 Headers
5.2.1 IP Header Fields
5.2.2 The Internet Checksum
5.2.3 DS Field and ECN (Formerly Called the ToS Byte or IPv6 Traffic Class)
5.2.4 IP Options
IPv6 Extension Headers
5.3.1 IPv6 Options
5.3.2 Routing Header
5.3.3 Fragment Header
IP Forwarding
5.4.1 Forwarding Table
5.4.2 IP Forwarding Actions
5.4.3 Examples
5.4.4 Discussion
Mobile IP
5.5.1 The Basic Model: Bidirectional Tunneling
5.5.2 Route Optimization (RO)
5.5.3 Discussion
Host Processing of IP Datagrams
5.6.1 Host Models
5.6.2 Address Selection
Attacks Involving IP
Chapter 6 System Configuration: DHCP and Autoconfiguration
Dynamic Host Configuration Protocol (DHCP)
6.2.1 Address Pools and Leases
6.2.2 DHCP and BOOTP Message Format
6.2.3 DHCP and BOOTP Options
6.2.4 DHCP Protocol Operation
6.2.5 DHCPv6
6.2.6 Using DHCP with Relays
6.2.7 DHCP Authentication
6.2.8 Reconfigure Extension
6.2.9 Rapid Commit
6.2.10 Location Information (LCI and LoST)
6.2.11 Mobility and Handoff Information (MoS and ANDSF)
6.2.12 DHCP Snooping
Stateless Address Autoconfiguration (SLAAC)
6.3.1 Dynamic Configuration of IPv4 Link-Local Addresses
6.3.2 IPv6 SLAAC for Link-Local Addresses
DHCP and DNS Interaction
PPP over Ethernet (PPPoE)
Attacks Involving System Configuration
Chapter 7 Firewalls and Network Address Translation (NAT)
7.2.1 Packet-Filtering Firewalls
7.2.2 Proxy Firewalls
Network Address Translation (NAT)
7.3.1 Traditional NAT: Basic NAT and NAPT
7.3.2 Address and Port Translation Behavior
7.3.3 Filtering Behavior
7.3.4 Servers behind NATs
7.3.5 Hairpinning and NAT Loopback
7.3.6 NAT Editors
7.3.7 Service Provider NAT (SPNAT) and Service Provider IPv6
NAT Traversal
7.4.1 Pinholes and Hole Punching
7.4.2 UNilateral Self-Address Fixing (UNSAF)
7.4.3 Session Traversal Utilities for NAT (STUN)
7.4.4 Traversal Using Relays around NAT (TURN)
7.4.5 Interactive Connectivity Establishment (ICE)
Configuring Packet-Filtering Firewalls and NATs
7.5.1 Firewall Rules
7.5.2 NAT Rules
7.5.3 Direct Interaction with NATs and Firewalls: UPnP, NAT-PMP,
and PCP
NAT for IPv4/IPv6 Coexistence and Transition
7.6.1 Dual-Stack Lite (DS-Lite)
7.6.2 IPv4/IPv6 Translation Using NATs and ALGs
Attacks Involving Firewalls and NATs
Chapter 8 ICMPv4 and ICMPv6: Internet Control Message Protocol
8.1.1 Encapsulation in IPv4 and IPv6
ICMP Messages
8.2.1 ICMPv4 Messages
8.2.2 ICMPv6 Messages
8.2.3 Processing of ICMP Messages
ICMP Error Messages
8.3.1 Extended ICMP and Multipart Messages
8.3.2 Destination Unreachable (ICMPv4 Type 3, ICMPv6 Type 1)
and Packet Too Big (ICMPv6 Type 2)
8.3.3 Redirect (ICMPv4 Type 5, ICMPv6 Type 137)
8.3.4 ICMP Time Exceeded (ICMPv4 Type 11, ICMPv6 Type 3)
8.3.5 Parameter Problem (ICMPv4 Type 12, ICMPv6 Type 4)
ICMP Query/Informational Messages
8.4.1 Echo Request/Reply (ping) (ICMPv4 Types 0/8, ICMPv6 Types 129/128)
8.4.2 Router Discovery: Router Solicitation and Advertisement
(ICMPv4 Types 9, 10)
8.4.3 Home Agent Address Discovery Request/Reply (ICMPv6 Types 144/145)
8.4.4 Mobile Prefix Solicitation/Advertisement (ICMPv6 Types 146/147)
8.4.5 Mobile IPv6 Fast Handover Messages (ICMPv6 Type 154)
8.4.6 Multicast Listener Query/Report/Done (ICMPv6 Types 130/131/132)
8.4.7 Version 2 Multicast Listener Discovery (MLDv2) (ICMPv6
Type 143)
8.4.8 Multicast Router Discovery (MRD) (IGMP Types 48/49/50,
ICMPv6 Types 151/152/153)
Neighbor Discovery in IPv6
8.5.1 ICMPv6 Router Solicitation and Advertisement (ICMPv6 Types
133, 134)
8.5.2 ICMPv6 Neighbor Solicitation and Advertisement (ICMPv6 Types
135, 136)
8.5.3 ICMPv6 Inverse Neighbor Discovery Solicitation/Advertisement
(ICMPv6 Types 141/142)
8.5.4 Neighbor Unreachability Detection (NUD)
8.5.5 Secure Neighbor Discovery (SEND)
8.5.6 ICMPv6 Neighbor Discovery (ND) Options
Translating ICMPv4 and ICMPv6
8.6.1 Translating ICMPv4 to ICMPv6
8.6.2 Translating ICMPv6 to ICMPv4
Attacks Involving ICMP
Chapter 9 Broadcasting and Local Multicasting (IGMP and MLD)
9.2.1 Using Broadcast Addresses
9.2.2 Sending Broadcast Datagrams
9.3.1 Converting IP Multicast Addresses to 802 MAC/Ethernet Addresses
9.3.2 Examples
9.3.3 Sending Multicast Datagrams
9.3.4 Receiving Multicast Datagrams
9.3.5 Host Address Filtering
The Internet Group Management Protocol (IGMP) and Multicast Listener
Discovery Protocol (MLD)
9.4.1 IGMP and MLD Processing by Group Members (“Group
Member Part”)
9.4.2 IGMP and MLD Processing by Multicast Routers (“Multicast
Router Part”)
9.4.3 Examples
9.4.4 Lightweight IGMPv3 and MLDv2
9.4.5 IGMP and MLD Robustness
9.4.6 IGMP and MLD Counters and Variables
9.4.7 IGMP and MLD Snooping
Attacks Involving IGMP and MLD
Chapter 10 User Datagram Protocol (UDP) and IP Fragmentation
UDP Header
UDP Checksum
UDP and IPv6
10.5.1 Teredo: Tunneling IPv6 through IPv4 Networks
IP Fragmentation
10.7.1 Example: UDP/IPv4 Fragmentation
10.7.2 Reassembly Timeout
Path MTU Discovery with UDP
10.8.1 Example
Interaction between IP Fragmentation and ARP/ND
Maximum UDP Datagram Size
10.10.1 Implementation Limitations
10.10.2 Datagram Truncation
UDP Server Design
10.11.1 IP Addresses and UDP Port Numbers
10.11.2 Restricting Local IP Addresses
10.11.3 Using Multiple Addresses
10.11.4 Restricting Foreign IP Addresses
10.11.5 Using Multiple Servers per Port
10.11.6 Spanning Address Families: IPv4 and IPv6
10.11.7 Lack of Flow and Congestion Control
Translating UDP/IPv4 and UDP/IPv6 Datagrams
UDP in the Internet
Attacks Involving UDP and IP Fragmentation
Chapter 11 Name Resolution and the Domain Name System (DNS)
The DNS Name Space
11.2.1 DNS Naming Syntax
Name Servers and Zones
The DNS Protocol
11.5.1 DNS Message Format
11.5.2 The DNS Extension Format (EDNS0)
11.5.3 UDP or TCP
11.5.4 Question (Query) and Zone Section Format
11.5.5 Answer, Authority, and Additional Information Section Formats
11.5.6 Resource Record Types
11.5.7 Dynamic Updates (DNS UPDATE)
11.5.8 Zone Transfers and DNS NOTIFY
Sort Lists, Round-Robin, and Split DNS
Open DNS Servers and DynDNS
Transparency and Extensibility
Translating DNS from IPv4 to IPv6 (DNS64)
Attacks on the DNS
Chapter 12 TCP: The Transmission Control Protocol (Preliminaries)
12.1.1 ARQ and Retransmission
12.1.2 Windows of Packets and Sliding Windows
12.1.3 Variable Windows: Flow Control and Congestion Control
12.1.4 Setting the Retransmission Timeout
Introduction to TCP
12.2.1 The TCP Service Model
12.2.2 Reliability in TCP
TCP Header and Encapsulation
Chapter 13 TCP Connection Management
TCP Connection Establishment and Termination
13.2.1 TCP Half-Close
13.2.2 Simultaneous Open and Close
13.2.3 Initial Sequence Number (ISN)
13.2.4 Example
13.2.5 Timeout of Connection Establishment
13.2.6 Connections and Translators
TCP Options
13.3.1 Maximum Segment Size (MSS) Option
13.3.2 Selective Acknowledgment (SACK) Options
13.3.3 Window Scale (WSCALE or WSOPT) Option
13.3.4 Timestamps Option and Protection against Wrapped
Sequence Numbers (PAWS)
13.3.5 User Timeout (UTO) Option
13.3.6 Authentication Option (TCP-AO)
Path MTU Discovery with TCP
13.4.1 Example
TCP State Transitions
13.5.1 TCP State Transition Diagram
13.5.2 TIME_WAIT (2MSL Wait) State
13.5.3 Quiet Time Concept
13.5.4 FIN_WAIT_2 State
13.5.5 Simultaneous Open and Close Transitions
Reset Segments
13.6.1 Connection Request to Nonexistent Port
13.6.2 Aborting a Connection
13.6.3 Half-Open Connections
13.6.4 TIME-WAIT Assassination (TWA)
TCP Server Operation
13.7.1 TCP Port Numbers
13.7.2 Restricting Local IP Addresses
13.7.3 Restricting Foreign Endpoints
13.7.4 Incoming Connection Queue
Attacks Involving TCP Connection Management
Chapter 14 TCP Timeout and Retransmission
Simple Timeout and Retransmission Example
Setting the Retransmission Timeout (RTO)
14.3.1 The Classic Method
14.3.2 The Standard Method
14.3.3 The Linux Method
14.3.4 RTT Estimator Behaviors
14.3.5 RTTM Robustness to Loss and Reordering
Timer-Based Retransmission
14.4.1 Example
Fast Retransmit
14.5.1 Example
Retransmission with Selective Acknowledgments
14.6.1 SACK Receiver Behavior
14.6.2 SACK Sender Behavior
14.6.3 Example
Spurious Timeouts and Retransmissions
14.7.1 Duplicate SACK (DSACK) Extension
14.7.2 The Eifel Detection Algorithm
14.7.3 Forward-RTO Recovery (F-RTO)
14.7.4 The Eifel Response Algorithm
Packet Reordering and Duplication
14.8.1 Reordering
14.8.2 Duplication
Destination Metrics
Attacks Involving TCP Retransmission
Chapter 15 TCP Data Flow and Window Management
15.1 Introduction
Interactive Communication
Delayed Acknowledgments
Nagle Algorithm
15.4.1 Delayed ACK and Nagle Algorithm Interaction
15.4.2 Disabling the Nagle Algorithm
Flow Control and Window Management
15.5.1 Sliding Windows
15.5.2 Zero Windows and the TCP Persist Timer
15.5.3 Silly Window Syndrome (SWS)
15.5.4 Large Buffers and Auto-Tuning
Urgent Mechanism
15.6.1 Example
Attacks Involving Window Management
Chapter 16 TCP Congestion Control
16.1.1 Detection of Congestion in TCP
16.1.2 Slowing Down a TCP Sender
The Classic Algorithms
16.2.1 Slow Start
16.2.2 Congestion Avoidance
16.2.3 Selecting between Slow Start and Congestion Avoidance
16.2.4 Tahoe, Reno, and Fast Recovery
16.2.5 Standard TCP
Evolution of the Standard Algorithms
16.3.1 NewReno
16.3.2 TCP Congestion Control with SACK
16.3.3 Forward Acknowledgment (FACK) and Rate Halving
16.3.4 Limited Transmit
16.3.5 Congestion Window Validation (CWV)
Handling Spurious RTOs—the Eifel Response Algorithm
An Extended Example
16.5.1 Slow Start Behavior
16.5.2 Sender Pause and Local Congestion (Event 1)
16.5.3 Stretch ACKs and Recovery from Local Congestion
16.5.4 Fast Retransmission and SACK Recovery (Event 2)
16.5.5 Additional Local Congestion and Fast Retransmit Events
16.5.6 Timeouts, Retransmissions, and Undoing cwnd Changes
16.5.7 Connection Completion
Sharing Congestion State
TCP Friendliness
TCP in High-Speed Environments
16.8.1 HighSpeed TCP (HSTCP) and Limited Slow Start
16.8.2 Binary Increase Congestion Control (BIC and CUBIC)
Delay-Based Congestion Control
16.9.1 Vegas
16.9.2 FAST
16.9.3 TCP Westwood and Westwood+
16.9.4 Compound TCP
Buffer Bloat
Active Queue Management and ECN
Attacks Involving TCP Congestion Control
Chapter 17 TCP Keepalive
17.2.1 Keepalive Examples
Attacks Involving TCP Keepalives
Chapter 18 Security: EAP, IPsec, TLS, DNSSEC, and DKIM
Basic Principles of Information Security
Threats to Network Communication
Basic Cryptography and Security Mechanisms
18.4.1 Cryptosystems
18.4.2 Rivest, Shamir, and Adleman (RSA) Public Key Cryptography
18.4.3 Diffie-Hellman-Merkle Key Agreement (aka Diffie-Hellman or DH)
18.4.4 Signcryption and Elliptic Curve Cryptography (ECC)
18.4.5 Key Derivation and Perfect Forward Secrecy (PFS)
18.4.6 Pseudorandom Numbers, Generators, and Function Families
18.4.7 Nonces and Salt
18.4.8 Cryptographic Hash Functions and Message Digests
18.4.9 Message Authentication Codes (MACs, HMAC, CMAC, and GMAC)
18.4.10 Cryptographic Suites and Cipher Suites
Certificates, Certificate Authorities (CAs), and PKIs
18.5.1 Public Key Certificates, Certificate Authorities, and X.509
18.5.2 Validating and Revoking Certificates
18.5.3 Attribute Certificates
TCP/IP Security Protocols and Layering
Network Access Control: 802.1X, 802.1AE, EAP, and PANA
18.7.1 EAP Methods and Key Derivation
18.7.2 The EAP Re-authentication Protocol (ERP)
18.7.3 Protocol for Carrying Authentication for Network Access (PANA)
Layer 3 IP Security (IPsec)
18.8.1 Internet Key Exchange (IKEv2) Protocol
18.8.2 Authentication Header (AH)
18.8.3 Encapsulating Security Payload (ESP)
18.8.4 Multicast
18.8.5 L2TP/IPsec
18.8.6 IPsec NAT Traversal
18.8.7 Example
Transport Layer Security (TLS and DTLS)
18.9.1 TLS 1.2
18.9.2 TLS with Datagrams (DTLS)
DNS Security (DNSSEC)
18.10.1 DNSSEC Resource Records
18.10.2 DNSSEC Operation
18.10.3 Transaction Authentication (TSIG, TKEY, and SIG(0))
18.10.4 DNSSEC with DNS64
DomainKeys Identified Mail (DKIM)
18.11.1 DKIM Signatures
18.11.2 Example
Attacks on Security Protocols
Glossary of Acronyms
Foreword
Rarely does one find a book on a well-known topic that is both historically and
technically comprehensive and remarkably accurate. One of the things I admire
about this work is the “warts and all” approach that gives it such credibility. The
TCP/IP architecture is a product of the time in which it was conceived. That it has
been able to adapt to growing requirements in many dimensions by factors of a
million or more, to say nothing of a plethora of applications, is quite remarkable.
Understanding the scope and limitations of the architecture and its protocols is a
sound basis from which to think about future evolution and even revolution.
During the early formulation of the Internet architecture, the notion of “enterprise” was not really recognized. In consequence, most networks had their own
IP address space and “announced” their addresses in the routing system directly.
After the introduction of commercial service, Internet Service Providers emerged
as intermediaries who “announced” Internet address blocks on behalf of their customers. Thus, most of the address space was assigned in a “provider dependent”
fashion. “Provider independent” addressing was unusual. The net result (no pun
intended) led to route aggregation and containment of the size of the global routing table. While this tactic had benefits, it also created the “multi-homing” problem since users of provider-dependent addresses did not have their own entries
in the global routing table. The IP address “crunch” also led to Network Address
Translation, which also did not solve the provider dependence and multi-homing problems.
Reading through this book evokes a sense of wonder at the complexity that
has evolved from a set of relatively simple concepts that worked with a small number of networks and application circumstances. As the chapters unfold, one can
see the level of complexity that has evolved to accommodate an increasing number
of requirements, dictated in part by new deployment conditions and challenges, to
say nothing of sheer growth in the scale of the system.
The issues associated with securing “enterprise” users of the Internet also led
to firewalls that are intended to supply perimeter security. While useful, it has
become clear that attacks against local Internet infrastructure can come through
internal compromises (e.g., an infected computer is put onto an internal network
or an infected thumb-drive is used to infect an internal computer through its USB port).
It has become apparent that, in addition to a need to expand the Internet
address space through the introduction of IP version 6, with its 340 trillion trillion trillion addresses, there is also a strong need to introduce various security-enhancing mechanisms such as the Domain Name System Security Extension
(DNSSEC) among many others.
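The address figure mentioned above follows directly from the 128-bit width of an IPv6 address; this quick sketch (illustrative only) checks the arithmetic:

```python
# IPv6 addresses are 128 bits wide, so the address space holds 2**128
# addresses. Check the "340 trillion trillion trillion" figure
# (340 x 10**36) quoted above:
total_ipv6 = 2 ** 128
print(total_ipv6)                    # 340282366920938463463374607431768211456
print(total_ipv6 > 340 * 10 ** 36)   # True: just over 340 trillion trillion trillion
```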
What makes this book unique, in my estimation, is the level of detail and attention to history. It provides background and a sense for the ways in which solutions
to networking problems have evolved. It is relentless in its effort to achieve precision and to expose remaining problem areas. For an engineer determined to refine
and secure Internet operation or to explore alternative solutions to persistent problems, the insights provided by this book will be invaluable. The authors deserve
credit for a thorough rendering of the technology of today’s Internet.
June 2011
Vint Cerf
Preface to the Second Edition
Welcome to the second edition of TCP/IP Illustrated, Volume 1. This book aims
to provide a detailed, current look at the TCP/IP protocol suite. Instead of just
describing how the protocols operate, we show the protocols in operation using
a variety of analysis tools. This helps you better understand the design decisions
behind the protocols and how they interact with each other, and it simultaneously
exposes you to implementation details without your having to read through the
implementation’s software source code or set up an experimental laboratory. Of
course, reading source code or setting up a laboratory will only help to increase
your understanding.
Networking has changed dramatically in the past three decades. Originally a
research project and object of curiosity, the Internet has become a global communication fabric upon which governments, businesses, and individuals depend. The
TCP/IP suite defines the underlying methods used to exchange information by
every device on the Internet. After more than a decade of delay, the Internet and
TCP/IP itself are now undergoing an evolution, to incorporate IPv6. Throughout
the text we will discuss both IPv6 and the current IPv4 together, but we highlight the differences where they are important. Unfortunately, they do not directly
interoperate, so some care and attention are required to appreciate the impact of
the evolution.
The book is intended for anyone wishing to better understand the current set
of TCP/IP protocols and how they operate: network operators and administrators,
network software developers, students, and users who deal with TCP/IP. We have
included material that should be of interest both to new readers and to those familiar with the material from the first edition. We hope you will find the coverage of the new and older material useful and interesting.
Comments on the First Edition
Nearly two decades have passed since the publication of the first edition of TCP/IP
Illustrated, Volume 1. It continues to be a valuable resource for both students and
professionals in understanding the TCP/IP protocols at a level of detail difficult to
obtain in competing texts. Today it remains among the best references for detailed
information regarding the operation of the TCP/IP protocols. However, even the
best books concerned with information and communications technology become
dated after a time, and the TCP/IP Illustrated series is no exception. In this edition,
I hope to thoroughly update the pioneering work of Dr. Stevens with coverage of
new material while maintaining the exceptionally high standard of presentation
and detail common to his numerous books.
The first edition covers a broad set of protocols and their operation, ranging
from the link layer all the way to applications and network management. Today,
covering this breadth of material comprehensively in a single volume would
produce a very lengthy text indeed. For this reason, the second edition focuses
specifically on the core protocols: those relatively low-level protocols used most
frequently in providing the basic services of configuration, naming, data delivery,
and security for the Internet. Detailed discussions of applications, routing, Web
services, and other important topics are postponed to subsequent volumes.
Considerable progress has been made in improving the robustness and compliance of TCP/IP implementations to their corresponding specifications since the
publication of the first edition. While many of the examples in the first edition
highlight implementation bugs or noncompliant behaviors, these problems have
largely been addressed in currently available systems, at least for IPv4. This fact
is not terribly surprising, given the greatly expanded use of the TCP/IP protocols
in the last 18 years. Misbehaving implementations are a comparative rarity, which
attests to a certain maturity of the protocol suite as a whole. The problems encountered in the operation of the core protocols nowadays often relate to intentional
exploitation of infrequently used protocol features, a form of security concern that
was not a primary focus in the first edition but one that we spend considerable
effort to address in the second edition.
The Internet Milieu of the Twenty-first Century
The usage patterns and importance of the Internet have changed considerably
since the publication of the first edition. The most obvious watershed event was
the creation and subsequent intense commercialization of the World Wide Web
starting in the early 1990s. This event greatly accelerated the availability of the
Internet to large numbers of people with various (sometimes conflicting) motivations. As such, the protocols and systems originally implemented in a small-scale
environment of academic cooperation have been stressed by limited availability of
addresses and an increase of security concerns.
In response to the security threats, network and security administrators have
introduced special control elements into the network. It is now common practice to
place a firewall at the point of attachment to the Internet, for both large enterprises
as well as small businesses and homes. As the demand for IP addresses and security has increased over the last decade, Network Address Translation (NAT) is now
supported in virtually all current-generation routers and is in widespread use. It
has eased the pressure on Internet address availability by allowing sites to obtain
a comparatively small number of routable Internet addresses from their service
providers (one for each simultaneously online user), yet assign a very large number of addresses to local computers without further coordination. A consequence
of NAT deployment has been a slowing of the migration to IPv6 (which provides
for an almost incomprehensibly large number of addresses) and interoperability
problems with some older protocols.
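The address-sharing idea described above can be sketched as a toy NAPT (Network Address and Port Translation) mapping table. All addresses and port numbers here are invented for illustration; a real NAT additionally tracks protocol state, timeouts, and the reverse (inbound) mapping:

```python
# Toy NAPT sketch: many private hosts share one routable address,
# distinguished on the public side by rewritten source ports.
# All names and values below are hypothetical.

public_ip = "203.0.113.7"   # the site's single routable address
next_port = 40000           # next free public-side port
nat_table = {}              # (private_ip, private_port) -> public_port

def translate_outbound(private_ip, private_port):
    """Map a private (ip, port) pair to the shared (public_ip, public_port)."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:            # first packet of a new flow
        nat_table[key] = next_port      # allocate a fresh public port
        next_port += 1
    return public_ip, nat_table[key]

# Two internal hosts using private (RFC 1918) addresses share one address:
print(translate_outbound("192.168.0.2", 5555))  # ('203.0.113.7', 40000)
print(translate_outbound("192.168.0.3", 5555))  # ('203.0.113.7', 40001)
print(translate_outbound("192.168.0.2", 5555))  # ('203.0.113.7', 40000) - reused
```

The same mechanism explains the interoperability problems noted above: any protocol that embeds an IP address or port inside its payload breaks unless the NAT also rewrites the payload.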
As the users of personal computers began to demand Internet connectivity
by the mid-1990s, the largest supplier of PC software, Microsoft, abandoned its
original policy of offering only proprietary alternatives to the Internet and instead
undertook an effort to embrace TCP/IP compatibility in most of its products.
Since then, personal computers running their Windows operating system have
come to dominate the mix of PCs presently connected to the Internet. Over time,
a significant rise in the number of Linux-based systems means that such systems
now threaten to displace Microsoft as the frontrunner. Other operating systems,
including Oracle Solaris and Berkeley’s BSD-based systems, which once represented the majority of Internet-connected systems, are now a comparatively small
component of the mix. Apple’s OS X (Mach-based) operating system has risen as
a new contender and is gaining in popularity, especially among portable computer users. In 2003, portable computer (laptop) sales exceeded desktop sales as
the majority of personal computer types sold, and their proliferation has sparked
a demand for widely deployed, high-speed Internet access supported by wireless infrastructure. It is projected that the most common method for accessing the
Internet from 2012 and beyond will be smartphones. Tablet computers also represent an important growing contender.
Wireless networks are now available at a large number of locations such as
restaurants, airports, coffeehouses, and other public places. They typically provide short-range free or pay-for-use (flat-rate) high-speed wireless Internet connections using hardware compatible with commonly used office or home local
area network installations. A set of alternative “wireless broadband” technologies based on cellular telephone standards (e.g., LTE, HSPA, UMTS, EV-DO) are
becoming widely available in developed regions of the world (and some developing regions that are “leapfrogging” to newer wireless technology),
offering longer-range operation, often at somewhat reduced bandwidths and with
volume-based pricing. Both types of infrastructure address the desire of users to
be mobile while accessing the Internet, using either portable computers or smaller
devices. In either case, mobile end users accessing the Internet over wireless networks pose two significant technical challenges to the TCP/IP protocol architecture. First, mobility affects the Internet’s routing and addressing structure by
breaking the assumption that hosts have addresses assigned to them based upon
the identity of their nearby router. Second, wireless links may experience outages
and therefore cause data to be lost for reasons other than those typical of wired
links (which generally do not lose data unless too much traffic is being injected
into the network).
Preface to the Second Edition
Finally, the Internet has fostered the rise of so-called peer-to-peer applications forming “overlay” networks. Peer-to-peer applications do not rely on a central server to accomplish a task but instead determine a set of peer computers with
which they can communicate and interact to accomplish a task. The peer computers
are operated by other end users and may come and go rapidly compared to a fixed
server infrastructure. The “overlay” concept captures the fact that such interacting peers themselves form a network, overlaid atop the conventional TCP/IP-based
network (which, one may observe, is itself an overlay above the underlying physical links). The development of peer-to-peer applications, while of intense interest
to those who study traffic flows and electronic commerce, has not had a profound
impact on the core protocols described in Volume 1 per se, but the concept of overlay
networks has become an important consideration for networking technology more generally.
Content Changes for the Second Edition
Regarding content in the text, the most important changes from the first edition
are a restructuring of the scope of the overall text and the addition of significant
material on security. Instead of attempting to cover nearly all common protocols
in use at every layer in the Internet, the present text focuses in detail first on the
non-security core protocols in widespread use, or that are expected to be in widespread use in the near future: Ethernet (802.3), Wi-Fi (802.11), PPP, ARP, IPv4, IPv6,
UDP, TCP, DHCP, and DNS. These protocols are likely to be encountered by system administrators and users alike.
In the second edition, security is covered in two ways. First, in each appropriate
chapter, a section devoted to describing known attacks and their countermeasures
relating to the protocol described in the chapter is included. These descriptions
are not presented as a recipe for constructing attacks but rather as a practical indication of the kinds of problems that may arise when protocol implementations (or
specifications, in some cases) are insufficiently robust. In today’s Internet, incomplete specification or lax implementation practice can lead to mission-critical systems being compromised by even relatively unsophisticated attacks.
The second important discussion of security occurs in Chapter 18, where
security and cryptography are studied in some detail, including protocols such as
IPsec, TLS, DNSSEC, and DKIM. These protocols are now understood to be important for implementing any service or application expected to maintain integrity
or secure operation. As the Internet has increased in commercial importance, the
need for security (and the number of threats to it) has grown proportionally.
Although IPv6 was not included in the first edition, there is now reason to
believe that the use of IPv6 may increase significantly with the exhaustion of
unallocated IPv4 address groups in February 2011. IPv6 was conceived largely
to address the problems of IPv4 address depletion, and while not nearly as
common as IPv4 today, is becoming more important as a growing number of
small devices (such as cellular telephones, household devices, and environmental
sensors) become attached to the Internet. Events such as the World IPv6 Day (June
8, 2011) helped to demonstrate that the Internet can continue to work even as the
underlying protocols are modified and augmented in a significant way.
A second consideration for the structure of the second edition is a deemphasis
of the protocols that are no longer commonly used and an update of the descriptions of those that have been revised substantially since the publication of the
first edition. The chapters covering RARP, BOOTP, NFS, SMTP, and SNMP have
been removed from the book, and the discussion of the SLIP protocol has been
abandoned in favor of expanded coverage of DHCP and PPP (including PPPoE).
The function of IP forwarding (described in Chapter 9 in the first edition) has
been integrated with the overall description of the IPv4 and IPv6 protocols in
Chapter 5 of this edition. The discussion of dynamic routing protocols (RIP, OSPF,
and BGP) has been removed, as the latter two protocols alone could each conceivably merit a book-long discussion. Starting with ICMP, and continuing through IP,
TCP, and UDP, the impact of operation using IPv4 versus IPv6 is discussed in any
cases where the difference in operation is significant. There is no specific chapter
devoted solely to IPv6; instead, its impact relative to each existing core protocol is
described where appropriate. Chapters 15 and 25–30 of the first edition, which are
devoted to Internet applications and their supporting protocols, have been largely
removed; what remains only illustrates the operation of the underlying core protocols where necessary.
Several chapters covering new material have been added. The first chapter
begins with a general introduction to networking issues and architecture, followed
by a more Internet-specific orientation. The Internet’s addressing architecture is
covered in Chapter 2. A new chapter on host configuration and how a system “gets
on” the network appears as Chapter 6. Chapter 7 describes firewalls and Network
Address Translation (NAT), including how NATs are used in partitioning address
space between routable and nonroutable portions. The set of tools used in the first
edition has been expanded to include Wireshark (a free network traffic monitor
application with a graphical user interface).
The target readership for the second edition remains identical to that of the
first edition. No prior knowledge of networking concepts is required for approaching it, although the advanced reader should benefit from the level of detail and
references. A rich collection of references is included in each chapter for the interested reader to pursue.
Editorial Changes for the Second Edition
The general flow of material in the second edition remains similar to that of the
first edition. After the introductory material (Chapters 1 and 2), the protocols are
presented in a bottom-up fashion to illustrate how the goal of network communication presented in the introduction is realized in the Internet architecture. As in
the first edition, actual packet traces are used to illustrate the operational details
of the protocols, where appropriate. Since the publication of the first edition, freely
available packet capture and analysis tools with graphical interfaces have become
available, extending the capabilities of the tcpdump program used in the first
edition. In the present text, tcpdump is used when the points to be illustrated
are easily conveyed by examining the output of a text-based packet capture tool.
In most other cases, however, screen shots of the Wireshark tool are used. Please
be aware that some output listings, including snapshots of tcpdump output, are
wrapped or simplified for clarity.
The packet traces shown typically illustrate the behavior of one or more parts
of the network depicted on the inside of the front book cover. It represents a broadband-connected “home” environment (typically used for client access or peer-to-peer networking), a “public” environment (e.g., coffee shop), and an enterprise
environment. The operating systems used for examples include Linux, Windows,
FreeBSD, and Mac OS X. Various versions are used, as many different OS versions
are in use on the Internet today.
The structure of each chapter has been slightly modified from the first edition. Each chapter begins with an introduction to the chapter topic, followed in
some cases by historical notes, the details of the chapter, a summary, and a set of
references. A section near the end of most chapters describes security concerns
and attacks. The per-chapter references represent a change for the second edition.
They should make each chapter more self-contained and require the reader to
perform fewer “long-distance page jumps” to find a reference. Some of the references are now enhanced with WWW URLs for easier access online. In addition,
the reference format for papers and books has been changed to a somewhat more
compact form that includes the first initial of each author’s last name followed by
the last two digits of the year (e.g., the former [Cerf and Kahn 1974] is now shortened to [CK74]). For the numerous RFC references used, the RFC number is used
instead of the author names. This follows typical RFC conventions and has the
side benefit of grouping all the RFC references together in the reference lists.
On a final note, the typographical conventions of the TCP/IP Illustrated series
have been maintained faithfully. However, the present author elected to use an
editor and typesetting package other than the Troff system used by Dr. Stevens
and some other authors of the Addison-Wesley Professional Computing Series collection. Thus, the particular task of final copyediting could take advantage of the
significant expertise of Barbara Wood, the copy editor generously made available
to me by the publisher. We hope you will be pleased with the results.
Berkeley, California
September 2011
Kevin R. Fall
Adapted Preface
to the First Edition
This book describes the TCP/IP protocol suite, but from a different perspective
than other texts on TCP/IP. Instead of just describing the protocols and what they
do, we’ll use a popular diagnostic tool to watch the protocols in action. Seeing how
the protocols operate in varying circumstances provides a greater understanding
of how they work and why certain design decisions were made. It also provides
a look into the implementation of the protocols, without having to wade through
thousands of lines of source code.
When networking protocols were being developed in the 1960s through
the 1980s, expensive, dedicated hardware was required to see the packets going
“across the wire.” Extreme familiarity with the protocols was also required to
comprehend the packets displayed by the hardware. Functionality of the hardware analyzers was limited to that built in by the hardware designers.
Today this has changed dramatically with the ability of the ubiquitous workstation to monitor a local area network [Mogul 1990]. Just attach a workstation to
your network, run some publicly available software, and watch what goes by on
the wire. While many people consider this a tool to be used for diagnosing network
problems, it is also a powerful tool for understanding how the network protocols
operate, which is the goal of this book.
This book is intended for anyone wishing to understand how the TCP/IP protocols operate: programmers writing network applications, system administrators
responsible for maintaining computer systems and networks utilizing TCP/IP,
and users who deal with TCP/IP applications on a daily basis.
Typographical Conventions
When we display interactive input and output we’ll show our typed input in a
bold font, and the computer output like this. Comments are added in italics.
bsdi % telnet svr4 discard        connect to the discard server
Connected to svr4.                output by the Telnet client
Also, we always include the name of the system as part of the shell prompt (bsdi
in this example) to show on which host the command was run.
Throughout the text we’ll use indented, parenthetical notes such as this to
describe historical points or implementation details.
We sometimes refer to the complete description of a command in the Unix manual as in ifconfig(8). This notation, the name of the command followed by a
number in parentheses, is the normal way of referring to Unix commands. The
number in parentheses is the section number in the Unix manual of the “manual
page” for the command, where additional information can be located. Unfortunately, not all Unix systems organize their manuals the same way with regard to the section numbers used for various groupings of commands. We’ll use the BSD-style section numbers (which are the same for BSD-derived systems such as SunOS 4.1.3), but your manuals may be organized differently.
Acknowledgments
Although the author’s name is the only one to appear on the cover, the combined
effort of many people is required to produce a quality text book. First and foremost is the author’s family, who put up with the long and weird hours that go into
writing a book. Thank you once again, Sally, Bill, Ellen, and David.
The consulting editor, Brian Kernighan, is undoubtedly the best in the business. He was the first one to read various drafts of the manuscript and mark it up
with his infinite supply of red pens. His attention to detail, his continual prodding
for readable prose, and his thorough reviews of the manuscript are an immense
resource to a writer.
Technical reviewers provide a different point of view and keep the author
honest by catching technical mistakes. Their comments, suggestions, and (most
importantly) criticisms add greatly to the final product. My thanks to Steve Bellovin, Jon Crowcroft, Pete Haverlock, and Doug Schmidt for comments on the
entire manuscript. Equally valuable comments were provided on portions of the
manuscript by Dave Borman, who thoroughly reviewed all the TCP chapters, and by Bob Gilligan, who should be listed as a coauthor for Appendix E.
An author cannot work in isolation, so I would like to thank the following persons for lots of small favors, especially by answering my numerous e-mail questions: Joe Godsil, Jim Hogue, Mike Karels, Paul Lucchina, Craig Partridge, Thomas
Skibo, and Jerry Toporek.
This book is the result of my being asked lots of questions on TCP/IP for which
I could find no quick, immediate answer. It was then that I realized that the easiest way to obtain the answers was to run small tests, forcing certain conditions to
occur, and just watch what happens. I thank Peter Haverlock for asking the probing questions and Van Jacobson for providing so much of the publicly available
software that is used in this book to answer the questions.
A book on networking needs a real network to work with along with access
to the Internet. My thanks to the National Optical Astronomy Observatories
(NOAO), especially Sidney Wolff, Richard Wolff, and Steve Grandi, for providing
access to their networks and hosts. A special thanks to Steve Grandi for answering lots of questions and providing accounts on various hosts. My thanks also to
Keith Bostic and Kirk McKusick at the U.C. Berkeley CSRG for access to the latest
4.4BSD system.
Finally, it is the publisher that pulls everything together and does whatever is
required to deliver the final product to the readers. This all revolves around the
editor, and John Wait is simply the best there is. Working with John and the rest
of the professionals at Addison-Wesley is a pleasure. Their professionalism and
attention to detail show in the end result.
Camera-ready copy of the book was produced by the author, a Troff die-hard,
using the Groff package written by James Clark.
Tucson, Arizona
October 1993
W. Richard Stevens
Introduction
Effective communication depends on the use of a common language. This is true
for humans and other animals as well as for computers. When a set of common
behaviors is used with a common language, a protocol is being used. The first definition of a protocol, according to the New Oxford American Dictionary, is
The official procedure or system of rules governing affairs of state or diplomatic occasions.
We engage in many protocols every day: asking and responding to questions,
negotiating business transactions, working collaboratively, and so on. Computers
also engage in a variety of protocols. A collection of related protocols is called a
protocol suite. The design that specifies how various protocols of a protocol suite
relate to each other and divide up tasks to be accomplished is called the architecture or reference model for the protocol suite. TCP/IP is a protocol suite that implements the Internet architecture and draws its origins from the ARPANET Reference
Model (ARM) [RFC0871]. The ARM was itself influenced by early work on packet
switching in the United States by Paul Baran [B64] and Leonard Kleinrock [K64],
in the U.K. by Donald Davies [DBSW66], and in France by Louis Pouzin [P73].
Other protocol architectures have been specified over the years (e.g., the ISO protocol architecture [Z80], Xerox’s XNS [X85], and IBM’s SNA [I96]), but TCP/IP has
become the most popular. There are several interesting books that focus on the
history of computer communications and the development of the Internet, such as
[P07] and [W02].
It is worth mentioning that the TCP/IP architecture evolved from work that
addressed a need to provide interconnection of multiple different packet-switched
computer networks [CK74]. This was accomplished using a set of gateways (later
called routers) that provided a translation function between each otherwise incompatible network. The resulting “concatenated” network or catenet (later called internetwork) would be much more useful, as many more nodes offering a wide variety
of services could communicate. The types of uses that a global network might
offer were envisioned years before the protocol architecture was fully developed.
In 1968, for example, J. C. R. Licklider and Bob Taylor foresaw the potential uses
for a global interconnected communication network to support “supercommunities” [LT68]:
Today the on-line communities are separated from one another functionally as
well as geographically. Each member can look only to the processing, storage and
software capability of the facility upon which his community is centered. But
now the move is on to interconnect the separate communities and thereby transform them into, let us call it, a supercommunity. The hope is that interconnection
will make available to all members of all the communities the programs and data
resources of the entire supercommunity . . . The whole will constitute a labile network of networks—ever-changing in both content and configuration.
Thus, it is apparent that the global network concept underpinning the ARPANET and later the Internet was designed to support many of the types of uses we
enjoy today. However, getting to this point was neither simple nor obvious. The
success resulted from paying careful attention to design and engineering, innovative users and developers, and the availability of sufficient resources to move from
concept to prototype and, eventually, to commercial networking products.
This chapter provides an overview of the Internet architecture and TCP/IP
protocol suite, to provide some historical context and to establish an adequate
background for the remaining chapters. Architectures (both protocol and physical) really amount to a set of design decisions about what features should be supported and where such features should be logically implemented. Designing an
architecture is more art than science, yet we shall discuss some characteristics of
architectures that have been deemed desirable over time. The subject of network
architecture has been undertaken more broadly in the text by Day [D08], one of
few such treatments.
Architectural Principles
The TCP/IP protocol suite allows computers, smartphones, and embedded devices
of all sizes, supplied from many different computer vendors and running totally
different software, to communicate with each other. By the turn of the twenty-first
century it has become a necessity for modern communication, entertainment, and
commerce. It is truly an open system in that the definition of the protocol suite and
many of its implementations are publicly available at little or no charge. It forms
the basis for what is called the global Internet, or the Internet, a wide area network
(WAN) of about two billion users that literally spans the globe (as of 2010, about
30% of the world’s population). Although many people consider the Internet and
the World Wide Web (WWW) to be interchangeable terms, we ordinarily refer to
the Internet in terms of its ability to provide basic communication of messages
between computers. We refer to WWW as an application that uses the Internet for
communication. It is perhaps the most important Internet application that brought
Internet technology to world attention in the early 1990s.
Several goals guided the creation of the Internet architecture. In [C88], Clark
recounts that the primary goal was to “develop an effective technique for multiplexed utilization of existing interconnected networks.” The essence of this
statement is that the Internet architecture should be able to interconnect multiple
distinct networks and that multiple activities should be able to run simultaneously on the resulting interconnected network. Beyond this primary goal, Clark
provides a list of the following second-level goals:
• Internet communication must continue despite loss of networks or gateways.
• The Internet must support multiple types of communication services.
• The Internet architecture must accommodate a variety of networks.
• The Internet architecture must permit distributed management of its resources.
• The Internet architecture must be cost-effective.
• The Internet architecture must permit host attachment with a low level of effort.
• The resources used in the Internet architecture must be accountable.
Many of the goals listed could have been supported with somewhat different
design decisions from those ultimately selected. However, a few design options
were gaining momentum when these architectural principles were being formulated that influenced the designers in the particular choices they made. We will
mention some of the more important ones and their consequences.
Packets, Connections, and Datagrams
Up to the 1960s, the concept of a network was based largely on the telephone network. It was developed to connect telephones to each other for the duration of a
call. A call was normally implemented by establishing a connection from one party
to another. Establishing a connection meant that a circuit (initially, a physical electrical circuit) was made between one telephone and another for the duration of a
call. When the call was complete, the connection was cleared, allowing the circuit
to be used by other users’ calls. The call duration and identification of the connection endpoints were used to perform billing of the users. When established, the
connection provided each user a certain amount of bandwidth or capacity to send
information (usually voice sounds). The telephone network progressed from its
analog roots to digital, which greatly improved its reliability and performance.
Data inserted into one end of a circuit follows some preestablished path through
the network switches and emerges on the other side in a predictable fashion,
usually with some upper bound on the time (latency). This gives predictable service, as long as a circuit is available when a user needs one. Circuits allocate a
pathway through the network that is reserved for the duration of a call, even if
they are not entirely busy. This is a common experience today with the phone
network—as long as a call is taking place, even if we are not saying anything, we
are being charged for the time.
One of the important concepts developed in the 1960s (e.g., in [B64]) was the
idea of packet switching. In packet switching, “chunks” (packets) of digital information comprising some number of bytes are carried through the network somewhat
independently. Chunks coming from different sources or senders can be mixed
together and pulled apart later, which is called multiplexing. The chunks can be
moved around from one switch to another on their way to a destination, and
the path might be subject to change. This has two potential advantages: the network can be more resilient (the designers were worried about the network being
physically attacked), and there can be better utilization of the network links and
switches because of statistical multiplexing.
When packets are received at a packet switch, they are ordinarily stored in a buffer memory or queue and processed in a first-come-first-served (FCFS) fashion. This
is the simplest method for scheduling the way packets are processed and is also
called first-in-first-out (FIFO). FIFO buffer management and on-demand scheduling are easily combined to implement statistical multiplexing, which is the primary method used to intermix traffic from different sources on the Internet. In
statistical multiplexing, traffic is mixed together based on the arrival statistics or
timing pattern of the traffic. Such multiplexing is simple and efficient, because if
there is any network capacity to be used and traffic to use it, the network will be
busy (high utilization) at every bottleneck or choke point. The downside of this
approach is limited predictability—the performance seen by any particular application depends on the statistics of other applications that are sharing the network.
Statistical multiplexing is like a highway where the cars can change lanes and
ultimately intersperse in such a way that any point of constriction is as busy as it
can be.
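The combination of a single FIFO queue and on-demand service described above can be sketched in a few lines of Python. This is an illustrative toy, not code from any real switch; the function names and packet representation are invented for the example.

```python
from collections import deque

# Minimal sketch of FIFO (first-come-first-served) statistical multiplexing:
# packets from different senders join one shared queue in arrival order and
# are served in that same order, so the output link stays busy whenever any
# sender has traffic to offer.

queue = deque()

def enqueue(sender, payload):
    """A packet arrives at the switch and joins the single shared queue."""
    queue.append((sender, payload))

def drain():
    """Serve packets strictly in arrival order (FIFO scheduling)."""
    served = []
    while queue:
        served.append(queue.popleft())
    return served

# Two senders' packets interleave on arrival...
enqueue("A", "a1"); enqueue("B", "b1"); enqueue("A", "a2")
# ...and depart in exactly that interleaved arrival order.
print(drain())  # [('A', 'a1'), ('B', 'b1'), ('A', 'a2')]
```

Note that nothing in the sketch reserves capacity per sender: whichever packets happen to arrive are mixed together, which is precisely why the performance any one sender sees depends on the others' traffic statistics.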
Alternative techniques, such as time-division multiplexing (TDM) and static multiplexing, typically reserve a certain amount of time or other resources for data on
each connection. Although such techniques can lead to more predictability, a feature useful for supporting constant bit rate telephone calls, they may not fully utilize the network capacity because reserved bandwidth may go unused. Note that
while circuits are straightforwardly implemented using TDM techniques, virtual
circuits (VCs) that exhibit many of the behaviors of circuits but do not depend on
physical circuit switches can be implemented atop connection-oriented packets.
This is the basis for a protocol known as X.25 that was popular until about the
early 1990s when it was largely replaced with Frame Relay and ultimately digital
subscriber line (DSL) technology and cable modems supporting Internet connectivity (see Chapter 3).
The VC abstraction and connection-oriented packet networks such as X.25
required some information or state to be stored in each switch for each connection. The reason is that each packet carries only a small bit of overhead information that provides an index into a state table. For example, in X.25 the 12-bit logical
channel identifier (LCI) or logical channel number (LCN) serves this purpose. At each
switch, the LCI or LCN is used in conjunction with the per-flow state in each switch
to determine the next switch along the path for the packet. The per-flow state is
established prior to the exchange of data on a VC using a signaling protocol that
supports connection establishment, clearing, and status information. Such networks are consequently called connection-oriented.
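The per-flow lookup just described can be illustrated with a small table-driven forwarding function. The table contents and names below are invented for illustration in the style of X.25; they are not taken from any real switch implementation.

```python
# Sketch of per-connection ("per-flow") state in a connection-oriented
# packet switch. Each packet carries only a small logical channel
# identifier (LCI); the switch uses it, together with the arrival port,
# as an index into state installed earlier by a signaling protocol.

# (ingress port, incoming LCI) -> (egress port, outgoing LCI)
flow_table = {
    (1, 12): (3, 7),
    (2, 12): (3, 9),   # the same LCI value can recur on a different port
}

def forward(ingress_port, lci):
    """Look up per-flow state; without it the packet cannot be forwarded."""
    try:
        return flow_table[(ingress_port, lci)]
    except KeyError:
        raise ValueError("no connection state: signaling must run first")

print(forward(1, 12))  # (3, 7)
```

The `ValueError` path makes the connection-oriented property concrete: if the signaling protocol has not installed state for a flow, the switch simply has no way to forward its packets.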
Connection-oriented networks, whether built on circuits or packets, were the
most prevalent form of networking for many years. In the late 1960s, another option
was developed known as the datagram. Attributed in origin to the CYCLADES
[P73] system, a datagram is a special type of packet in which all the identifying information of the source and final destination resides inside the packet itself
(instead of in the packet switches). Although this tends to require larger packets,
per-connection state at packet switches is no longer required and a connectionless
network could be built, eliminating the need for a (complicated) signaling protocol. Datagrams were eagerly embraced by the designers of the early Internet, and
this decision had profound implications for the rest of the protocol suite.
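By contrast, a datagram carries its complete addressing information in every packet, so no per-connection table is needed at the switches. The following sketch uses a made-up fixed-size header with 4-byte "addresses"; it is not the real IPv4 header format, only an illustration of self-contained addressing.

```python
import struct

# A datagram packs full source and destination identifiers into the
# packet itself, at the cost of a larger per-packet header but with no
# per-connection state (and no signaling protocol) required in switches.

def make_datagram(src, dst, payload):
    """Pack hypothetical 4-byte source/destination addresses plus payload."""
    return struct.pack("!4s4s", src, dst) + payload

def parse_datagram(pkt):
    """Any switch (or the receiver) can recover the addresses from the packet alone."""
    src, dst = struct.unpack("!4s4s", pkt[:8])
    return src, dst, pkt[8:]

pkt = make_datagram(b"hstA", b"hstB", b"hello")
print(parse_datagram(pkt))  # (b'hstA', b'hstB', b'hello')
```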
One other related concept is that of message boundaries or record markers. As
shown in Figure 1-1, when an application sends more than one chunk of information into the network, the fact that more than one chunk was written may or
Figure 1-1  Applications write messages that are carried in protocols. A message boundary is the position or
byte offset between one write and another. Protocols that preserve message boundaries indicate
the position of the sender’s message boundaries at the receiver. Protocols that do not preserve
message boundaries (e.g., streaming protocols like TCP) ignore this information and do not make
it available to a receiver. As a result, applications may need to implement their own methods to
indicate a sender’s message boundaries if this capability is required.
may not be preserved by the communication protocol. Most datagram protocols
preserve message boundaries. This is natural because the datagram itself has a
beginning and an end. However, in a circuit or VC network, it is possible that an
application may write several chunks of data, all of which are read together as one
or more different-size chunks by a receiving application. These types of protocols
do not preserve message boundaries. In cases where an underlying protocol fails
to preserve message boundaries but they are needed by an application, the application must provide its own.
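A common way for an application to provide its own message boundaries over a stream protocol such as TCP is length-prefix framing. The sketch below is a generic illustration under that assumption, not any particular protocol's wire format.

```python
import struct

# TCP does not preserve message boundaries: several writes by the sender
# may be read back as one undifferentiated byte stream. Prefixing each
# message with its length lets the receiver recover the boundaries.

def frame(msg: bytes) -> bytes:
    """Prefix a message with its 4-byte big-endian length."""
    return struct.pack("!I", len(msg)) + msg

def deframe(stream: bytes):
    """Recover the original messages from a concatenated byte stream."""
    msgs = []
    while stream:
        (n,) = struct.unpack("!I", stream[:4])
        msgs.append(stream[4:4 + n])
        stream = stream[4 + n:]
    return msgs

stream = frame(b"hello") + frame(b"world")  # boundaries are invisible here...
print(deframe(stream))  # [b'hello', b'world']  ...and recovered by framing
```

A real implementation would also have to handle a message split across multiple reads, but the principle is the same: the boundary information travels inside the application's own data.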
The End-to-End Argument and Fate Sharing
When large systems such as an operating system or protocol suite are being
designed, a question often arises as to where a particular feature or function
should be placed. One of the most important principles that influenced the design
of the TCP/IP suite is called the end-to-end argument [SRC84]:
The function in question can completely and correctly be implemented only with
the knowledge and help of the application standing at the end points of the communication system. Therefore, providing that questioned function as a feature of
the communication system itself is not possible. (Sometimes an incomplete version of the function provided by the communication system may be useful as a performance enhancement.)
This argument may seem fairly straightforward upon first reading but can
have profound implications for communication system design. It argues that correctness and completeness can be achieved only by involving the application or
ultimate user of the communication system. Efforts to correctly implement what
the application is “likely” to need are doomed to incompleteness. In short, this
principle argues that important functions (e.g., error control, encryption, delivery
acknowledgment) should usually not be implemented at low levels (or layers; see
Section 1.2.1) of large systems. However, low levels may provide capabilities that
make the job of the endpoints somewhat easier and consequently may improve
performance. A nuanced reading reveals that this argument suggests that low-level functions should not aim for perfection because a perfect guess at what the
application may require is unlikely to be possible.
The end-to-end argument tends to support a design with a “dumb” network
and “smart” systems connected to the network. This is what we see in the TCP/IP
design, where many functions (e.g., methods to ensure that data is not lost, controlling the rate at which a sender sends) are implemented in the end hosts where
the applications reside. The selection of which functions are implemented together
in the same computer or network or software stack is the subject of another related
principle known as fate sharing [C88].
Fate sharing suggests placing all the necessary state to maintain an active
communication association (e.g., virtual connection) at the same location with
the communicating endpoints. With this reasoning, the only type of failure that
destroys communication is one that also destroys one or more of the endpoints,
which obviously destroys the overall communication anyhow. Fate sharing is one
of the design philosophies that allows virtual connections (e.g., those implemented
by TCP) to remain active even if connectivity within the network has failed for a
(modest) period of time. Fate sharing also supports a “dumb network with smart
end hosts” model, and one of the ongoing tensions in today’s Internet is what
functions reside in the network and what functions do not.
Error Control and Flow Control
There are some circumstances where data within a network gets damaged or lost.
This can be for a variety of reasons such as hardware problems, radiation that
modifies bits while being transmitted, being out of range in a wireless network,
and other factors. Dealing with such errors is called error control, and it can be
implemented in the systems constituting the network infrastructure, or in the systems that attach to the network, or some combination. Naturally, the end-to-end
argument and fate sharing would suggest that error control be implemented close
to or within applications.
Usually, if a small number of bit errors are of concern, a number of mathematical codes can be used to detect and repair the bit errors when data is received or
while it is in transit [LC04]. This task is routinely performed within the network.
When more severe damage occurs in a packet network, entire packets are usually resent or retransmitted. In circuit-switched or VC-switched networks such as
X.25, retransmission tends to be done inside the network. This may work well for
applications that require strict in-order, error-free delivery of their data, but some
applications do not require this capability and do not wish to pay the costs (such
as connection establishment and potential retransmission delays) to have their
data reliably delivered. Even a reliable file transfer application does not really care
in what order the chunks of file data are delivered, provided it is eventually satisfied that all chunks are delivered without errors and can be reassembled back into
the original order.
As an alternative to the overhead of reliable, in-order delivery implemented
within the network, a different type of service called best-effort delivery was
adopted by Frame Relay and the Internet Protocol. With best-effort delivery, the
network does not expend much effort to ensure that data is delivered without
errors or gaps. Certain types of errors are usually detected using error-detecting
codes or checksums, such as those that might affect where a datagram is directed,
but when such errors are detected, the errant datagram is merely discarded without further action.
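One widely deployed example of such an error-detecting code is the 16-bit Internet checksum (RFC 1071) used by IPv4, ICMP, UDP, and TCP; it detects (but cannot repair) many common bit errors. A minimal illustrative implementation in Python:

```python
def internet_checksum(data: bytes) -> int:
    """Compute the 16-bit ones'-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:                  # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # big-endian 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF
```

A receiver recomputes the checksum over the received data together with the transmitted checksum field; for an undamaged message the result is 0, and anything else causes the errant datagram to be discarded.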
If best-effort delivery is successful, a fast sender can produce information at
a rate that exceeds the receiver’s ability to consume it. In best-effort IP networks,
slowing down a sender is achieved by flow control mechanisms that operate outside the network and at higher levels of the communication system. In particular,
TCP handles this type of problem, and we shall discuss it in detail in Chapters 15
and 16. This is consistent with the end-to-end argument: TCP, which resides at the
end hosts, handles rate control. It is also consistent with fate sharing: the approach
allows some elements of the network infrastructure to fail without necessarily
affecting the ability of the devices outside the network to communicate (as long as
some communication path continues to operate).
Design and Implementation
Although a protocol architecture may suggest a certain approach to implementation, it usually does not include a mandate. Consequently, we make a distinction between the protocol architecture and the implementation architecture, which
defines how the concepts in a protocol architecture may be rendered into existence, usually in the form of software.
Many of the individuals responsible for implementing the protocols for the
ARPANET were familiar with the software structuring of operating systems, and
an influential paper describing the “THE” multiprogramming system [D68] advocated the use of a hierarchical structure as a way to deal with verification of the
logical soundness and correctness of a large software implementation. Ultimately,
this contributed to a design philosophy for networking protocols involving multiple layers of implementation (and design). This approach is now called layering
and is the usual approach to implementing protocol suites.
With layering, each layer is responsible for a different facet of the communications. Layers are beneficial because a layered design allows developers to evolve
different portions of the system separately, often by different people with somewhat different areas of expertise. The most frequently mentioned concept of protocol layering is based on a standard called the Open Systems Interconnection (OSI)
model [Z80] as defined by the International Organization for Standardization
(ISO). Figure 1-2 shows the standard OSI layers, including their names, numbers,
and a few examples. The Internet’s layering model is somewhat simpler, as we
shall see in Section 1.3.
Although the OSI model suggests that seven logical layers may be desirable
for modularity of a protocol architecture implementation, the TCP/IP architecture is normally considered to consist of five. There was much debate about the
relative benefits and deficiencies of the OSI model, and the ARPANET model that
preceded it, during the early 1970s. Although it may be fair to say that TCP/IP
ultimately “won,” a number of ideas and even entire protocols from the ISO protocol suite (protocols standardized by ISO that follow the OSI model) have been
adopted for use with TCP/IP (e.g., IS-IS [RFC3787]).
Figure 1-2
The standard seven-layer OSI model as specified by the ISO. Not all protocols are implemented by
every networked device (at least in theory). The OSI terminology and layer numbers are widely used.
As described briefly in Figure 1-2, each layer has a different responsibility.
From the bottom up, the physical layer defines methods for moving digital information across a communication medium such as a phone line or fiber-optic cable.
Portions of the Ethernet and Wireless LAN (Wi-Fi) standards are here, although
we do not delve into this layer very much in this text. The link or data-link layer
includes those protocols and methods for establishing connectivity to a neighbor
sharing the same medium. Some link-layer networks (e.g., DSL) connect only two
neighbors. When more than one neighbor can access the same shared network, the
network is said to be a multi-access network. Wi-Fi and Ethernet are examples of
such multi-access link-layer networks, and specific protocols are used to mediate
which stations have access to the shared medium at any given time. We discuss
these in Chapter 3.
Moving up the layer stack, the network or internetwork layer is of great interest
to us. For packet networks such as TCP/IP, it provides an interoperable packet format that can use different types of link-layer networks for connectivity. The layer
also includes an addressing scheme for hosts and routing algorithms that choose
where packets go when sent from one machine to another. Above layer 3 we find
protocols that are (at least in theory) implemented only by end hosts, including
the transport layer. Also of great interest to us, it provides a flow of data between
sessions and can be quite complex, depending on the types of services it provides
(e.g., reliable delivery on a packet network that might drop data). Sessions represent ongoing interactions between applications (e.g., when “cookies” are used
with a Web browser during a Web login session), and session-layer protocols may
provide capabilities such as connection initiation and restart, plus checkpointing
(saving work that has been accomplished so far). Above the session layer we find
the presentation layer, which is responsible for format conversions and standard
encodings for information. As we shall see, the Internet protocols do not include a
formal session or presentation protocol layer, so these functions are implemented
by applications if needed.
The top layer is the application layer. Applications usually implement their
own application-layer protocols, and these are the ones most visible to users.
There is a wide variety of application-layer protocols, and programmers are constantly inventing new ones. Consequently, the application layer is where there is
the greatest amount of innovation and where new capabilities are developed and deployed.
Multiplexing, Demultiplexing, and Encapsulation in Layered Implementations
One of the major benefits of a layered architecture is its natural ability to perform
protocol multiplexing. This form of multiplexing allows multiple different protocols
to coexist on the same infrastructure. It also allows multiple instantiations of the
same protocol object (e.g., connections) to be used simultaneously without being confused.
Multiplexing can occur at different layers, and at each layer a different sort of
identifier is used for determining which protocol or stream of information belongs
together. For example, at the link layer, most link technologies (such as Ethernet
and Wi-Fi) include a protocol identifier field value in each packet to indicate which
protocol is being carried in the link-layer frame (IP is one such protocol). When
an object (packet, message, etc.), called a protocol data unit (PDU), at one layer is
carried by a lower layer, it is said to be encapsulated (as opaque data) by the next
layer down. Thus, multiple objects at layer N can be multiplexed together using
encapsulation in layer N - 1. Figure 1-3 shows how this works. The identifier at
layer N - 1 is used to determine the correct receiving protocol or program at layer
N during demultiplexing.
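The mechanism can be sketched in a few lines of Python. The 16-bit identifier field and the handler registry here are purely hypothetical, for illustration: layer N − 1 prepends a header carrying the layer-N protocol identifier on send, and uses it to dispatch the opaque payload on receive.

```python
import struct

HANDLERS = {}  # hypothetical registry: protocol identifier -> layer-N handler

def register(proto_id, handler):
    HANDLERS[proto_id] = handler

def encapsulate(proto_id: int, pdu: bytes) -> bytes:
    # Layer N-1 prepends a header containing the layer-N identifier;
    # the layer-N PDU itself is carried as opaque data.
    return struct.pack("!H", proto_id) + pdu

def demultiplex(frame: bytes):
    # The identifier in the layer N-1 header selects the layer-N protocol.
    proto_id, = struct.unpack("!H", frame[:2])
    return HANDLERS[proto_id](frame[2:])

register(1, lambda payload: ("proto-1", payload))
register(2, lambda payload: ("proto-2", payload))
```

Two distinct protocols can thus share the same lower layer, each recovered intact at the receiver.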
In Figure 1-3, each layer has its own concept of a message object (a PDU) corresponding to the particular layer responsible for creating it. For example, if a layer
4 (transport) protocol produces a packet, it would properly be called a layer 4 PDU
or transport PDU (TPDU). When a layer is provided a PDU from the layer above it,
it usually “promises” to not look into the contents of the PDU. This is the essence
of encapsulation—each layer treats the data from above as opaque, uninterpretable information. Most commonly a layer prepends the PDU with its own header,
although trailers are used by some protocols (not TCP/IP). The header is used for
multiplexing data when sending, and for the receiver to perform demultiplexing,
based on a demultiplexing (demux) identifier. In TCP/IP networks such identifiers
are commonly hardware addresses, IP addresses, and port numbers. The header
may also include important state information, such as whether a virtual circuit is
being set up or has already completed setup. The resulting object is another PDU.

Figure 1-3
Encapsulation is usually used in conjunction with layering. Pure encapsulation involves
taking the PDU of one layer and treating it as opaque (uninterpreted) data at the layer
below. Encapsulation takes place at each sender, and decapsulation (the reverse operation) takes place at each receiver. Most protocols use headers during encapsulation; a few
also use trailers.
One other important feature of layering suggested by Figure 1-2 is that in pure
layering not all networked devices need to implement all the layers. Figure 1-4
shows that in some cases a device needs to implement only a few layers if it is
expected to perform only certain types of processing.
In Figure 1-4, a somewhat idealized small internet includes two end systems, a
switch, and a router. In this figure, each number corresponds to a type of protocol
at a particular layer. As we can see, each device implements a different subset of
the layer stack. The host on the left implements three different link-layer protocols
(D, E, and F) with corresponding physical layers and three different transport-layer protocols (A, B, and C) that run on a single type of network-layer protocol.
End hosts implement all the layers, switches implement up to layer 2 (this switch
implements D and G), and routers implement up to layer 3. Routers are capable
of interconnecting different types of link-layer networks and must implement the
link-layer protocols for each of the network types they interconnect.
Figure 1-4 Different network devices implement different subsets of the protocol stack. End hosts tend to
implement all the layers. Routers implement layers below the transport layer, and switches implement link-layer protocols and below. This idealized structure is often violated because routers and
switches usually include the ability to act as a host (e.g., to be managed and set up) and therefore
need an implementation of all of the layers even if they are rarely used.
The internet of Figure 1-4 is somewhat idealized because today’s switches and
routers often implement more than the protocols they are absolutely required to
implement for forwarding data. This is for a number of reasons, including management. In such circumstances, devices such as routers and switches must sometimes act as hosts and support services such as remote login. To do this, they
usually must implement transport and application protocols.
Although we show only two hosts communicating, the link- and physical-layer networks (labeled as D and G) might have multiple hosts attached. If so,
then communication is possible between any pair of systems that implement the
appropriate higher-layer protocols. In Figure 1-4 we can differentiate between an
end system (the two hosts on either side) and an intermediate system (the router in
the middle) for a particular protocol suite. Layers above the network layer use end-to-end protocols. In our picture these layers are needed only on the end systems.
The network layer, however, provides a hop-by-hop protocol and is used on the two
end systems and every intermediate system. The switch or bridge is not ordinarily
considered an intermediate system because it is not addressed using the internetworking protocol’s addressing format, and it operates in a fashion that is largely
transparent to the network-layer protocol. From the point of view of the routers
and end systems, the switch or bridge is essentially invisible.
A router, by definition, has two or more network interfaces (because it connects two or more networks). Any system with multiple interfaces is called multihomed. A host can also be multihomed, but unless it specifically forwards packets
from one interface to another, it is not called a router. Also, routers need not be
special hardware boxes that only move packets around an internet. Most TCP/IP
implementations, for example, allow a multihomed host to act as a router also,
if properly configured to do so. In this case we can call the system either a host
(when an application such as File Transfer Protocol (FTP) [RFC0959] or the Web is
used) or a router (when it is forwarding packets from one network to another). We
will use whichever term makes sense given the context.
One of the goals of an internet is to hide all of the details of the physical layout (the topology) and lower-layer protocol heterogeneity from the applications.
Although this is not obvious from our two-network internet in Figure 1-4, the
application layers should not care (and do not care) that even though each host
is attached to a network using link-layer protocol D (e.g., Ethernet), the hosts are
separated by a router and switch that use link-layer G. There could be 20 routers between the hosts, with additional types of physical interconnections, and the
applications would run without modification (although the performance might be
somewhat different). Abstracting the details in this way is what makes the concept of an internet so powerful and useful.
The Architecture and Protocols of the TCP/IP Suite
So far we have discussed architecture, protocols, protocol suites, and implementation techniques in the abstract. In this section, we discuss the architecture and
particular protocols that constitute the TCP/IP suite. Although this has become the
established term for the protocols used on the Internet, there are many protocols
beyond TCP and IP in the collection or family of protocols used with the Internet. We begin by noting how the ARPANET reference model of layering, which
ultimately formed the basis for the Internet’s protocol layering, differs somewhat
from the OSI layering discussed earlier.
The ARPANET Reference Model
Figure 1-5 depicts the layering inspired by the ARPANET reference model, which
was ultimately adopted by the TCP/IP suite. The structure is simpler than the OSI
model, but real implementations include a few specialized protocols that do not fit
cleanly into the conventional layers.
Starting from the bottom of Figure 1-5 and working our way up the stack,
the first layer we see is 2.5, an “unofficial” layer. There are several protocols that
operate here, but one of the oldest and most important is called the Address Resolution Protocol (ARP). It is a specialized protocol used with IPv4 and only with
multi-access link-layer protocols (such as Ethernet and Wi-Fi) to convert between
the addresses used by the IP layer and the addresses used by the link layer. We
examine this protocol in Chapter 4. In IPv6 the address-mapping function is part
of ICMPv6, which we discuss in Chapter 8.
Figure 1-5
Protocol layering based on the ARM (the ARPANET reference model) as used by the TCP/IP suite in the Internet. There are no official session or presentation layers. In addition, there are several “adjunct” or helper protocols that do not
fit well into the standard layers yet perform critical functions for the operation of the other protocols. Some of these protocols are not used with IPv6 (e.g., IGMP and ARP).
At layer number 3 in Figure 1-5 we find IP, the main network-layer protocol
for the TCP/IP suite. We discuss it in detail in Chapter 5. The PDU that IP sends to
link-layer protocols is called an IP datagram and may be as large as 64KB (and up
to 4GB for IPv6). In many cases we shall use the simpler term packet to mean an
IP datagram when the usage context is clear. Fitting large packets into link-layer
PDUs (called frames) that may be smaller is handled by a function called fragmentation that may be performed by IP hosts and some routers when necessary. In fragmentation, portions of a larger datagram are sent in multiple smaller datagrams
called fragments and put back together (called reassembly) when reaching the destination. We discuss fragmentation in Chapter 10.
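The arithmetic involved can be sketched briefly. Assuming a 20-byte IPv4 header, and remembering that the Fragment Offset field counts 8-byte units (so every fragment's payload except the last must be a multiple of 8 bytes), a hypothetical helper might compute fragment offsets and sizes like this:

```python
def fragment_sizes(payload_len, mtu, header_len=20):
    """Return (offset-in-8-byte-units, payload-bytes) for each fragment."""
    per_frag = (mtu - header_len) // 8 * 8   # largest 8-byte-aligned payload
    frags, offset = [], 0
    while payload_len > 0:
        take = min(per_frag, payload_len)    # last fragment may be shorter
        frags.append((offset // 8, take))
        offset += take
        payload_len -= take
    return frags

# A 3000-byte payload over a 1500-byte-MTU link becomes three fragments:
# [(0, 1480), (185, 1480), (370, 40)]
```

Each fragment carries its own copy of the IP header, which is why the usable payload per fragment is the MTU minus the header length.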
Throughout the text we shall use the term IP to refer to both IP versions 4 and
6. We use the term IPv6 to refer to IP version 6, and IPv4 to refer to IP version 4,
currently the most popular version. When discussing architecture, the details of
IPv4 versus IPv6 matter little. When we delve into the way particular addressing
and configuration functions work (Chapter 2 and Chapter 6), for example, these
details will become more important.
Because IP packets are datagrams, each one contains the address of the layer
3 sender and recipient. These addresses are called IP addresses and are 32 bits long
for IPv4 and 128 bits long for IPv6; we discuss them in detail in Chapter 2. This
difference in IP address size is the characteristic that most differentiates IPv4 from
IPv6. The destination address of each datagram is used to determine where each
datagram should be sent, and the process of making this determination and sending the datagram to its next hop is called forwarding. Both routers and hosts perform forwarding, although routers tend to do it much more often. There are three
types of IP addresses, and the type affects how forwarding is performed: unicast
(destined for a single host), broadcast (destined for all hosts on a given network),
and multicast (destined for a set of hosts that belong to a multicast group). Chapter
2 looks at the types of addresses used with IP in more detail.
The Internet Control Message Protocol (ICMP) is an adjunct to IP, and we label
it as a layer 3.5 protocol. It is used by the IP layer to exchange error messages and
other vital information with the IP layer in another host or router. There are two
versions of ICMP: ICMPv4, used with IPv4, and ICMPv6, used with IPv6. ICMPv6
is considerably more complex and includes functions such as address autoconfiguration and Neighbor Discovery that are handled by other protocols (e.g., ARP)
on IPv4 networks. Although ICMP is used primarily by IP, it is also possible for
applications to use it. Indeed, we will see that two popular diagnostic tools, ping
and traceroute, use ICMP. ICMP messages are encapsulated within IP datagrams in the same way transport-layer PDUs are.
The Internet Group Management Protocol (IGMP) is another protocol adjunct to
IPv4. It is used with multicast addressing and delivery to manage which hosts are
members of a multicast group (a group of receivers interested in receiving traffic for
a particular multicast destination address). We describe the general properties of
broadcasting and multicasting, along with IGMP and the Multicast Listener Discovery protocol (MLD, used with IPv6), in Chapter 9.
At layer 4, the two most common Internet transport protocols are vastly different. The most widely used, the Transmission Control Protocol (TCP), deals with
problems such as packet loss, duplication, and reordering that are not repaired
by the IP layer. It operates in a connection-oriented (VC) fashion and does not
preserve message boundaries. Conversely, the User Datagram Protocol (UDP) provides little more than the features provided by IP. UDP allows applications to send
datagrams that preserve message boundaries but imposes no rate or error control.
TCP provides a reliable flow of data between two hosts. It is concerned with
things such as dividing the data passed to it from the application into appropriately sized chunks for the network layer below, acknowledging received packets,
and setting timeouts to make certain the other end acknowledges packets that
are sent, and because this reliable flow of data is provided by the transport layer,
the application layer can ignore all these details. The PDU that TCP sends to IP is
called a TCP segment.
UDP, on the other hand, provides a much simpler service to the application
layer. It allows datagrams to be sent from one host to another, but there is no
guarantee that the datagrams reach the other end. Any desired reliability must
be added by the application layer. Indeed, about all that UDP provides is a set
of port numbers for multiplexing and demultiplexing data, plus a data integrity
checksum. As we can see, UDP and TCP differ radically even though they are at
the same layer. There is a use for each type of transport protocol, which we will
see when we look at the different applications that use TCP and UDP.
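The contrast in message-boundary behavior can be observed directly with the sockets API. The following Python sketch (standard library only, over the loopback interface) sends two UDP datagrams and receives them as two distinct messages; with TCP's byte-stream service, the same two writes could arrive coalesced into a single read.

```python
import socket

# UDP preserves message boundaries: each sendto() becomes one datagram,
# and each recvfrom() returns exactly one datagram.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))      # port 0: let the OS choose an ephemeral port
rx.settimeout(2.0)
addr = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"first", addr)
tx.sendto(b"second", addr)

msg1, _ = rx.recvfrom(4096)    # one datagram per call; the two writes do
msg2, _ = rx.recvfrom(4096)    # not coalesce as they might in a TCP stream
tx.close()
rx.close()
```

Note that the loopback interface hides UDP's lack of reliability; on a real network either datagram could be lost or reordered.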
There are two additional transport-layer protocols that are relatively new
and available on some systems. As they are not yet very widespread, we do not
devote much discussion to them, but they are worth being aware of. The first is the
Datagram Congestion Control Protocol (DCCP), specified in [RFC4340]. It provides a
type of service midway between TCP and UDP: connection-oriented exchange of
unreliable datagrams but with congestion control. Congestion control comprises
a number of techniques whereby a sender is limited to a sending rate in order to
avoid overwhelming the network. We discuss it in detail with respect to TCP in
Chapter 16.
The other transport protocol available on some systems is called the Stream
Control Transmission Protocol (SCTP), specified in [RFC4960]. SCTP provides reliable delivery like TCP but does not require the sequencing of data to be strictly
maintained. It also allows for multiple streams to logically be carried on the same
connection and provides a message abstraction, which differs from TCP. SCTP
was designed for carrying signaling messages on IP networks that resemble those
used in the telephone network.
Above the transport layer, the application layer handles the details of the particular application. There are many common applications that almost every implementation of TCP/IP provides. The application layer is concerned with the details
of the application and not with the movement of data across the network. The
lower three layers are the opposite: they know nothing about the application but
handle all the communication details.
Multiplexing, Demultiplexing, and Encapsulation in TCP/IP
We have already discussed the basics of protocol multiplexing, demultiplexing,
and encapsulation. At each layer there is an identifier that allows a receiving system to determine which protocol or data stream belongs together. Usually there is
also addressing information at each layer. This information is used to ensure that
a PDU has been delivered to the right place. Figure 1-6 shows how demultiplexing
works in a hypothetical Internet host.
Although it is not really part of the TCP/IP suite, we shall begin bottom-up
and mention how demultiplexing from the link layer is performed, using Ethernet
as an example. We discuss several link-layer protocols in Chapter 3. An arriving
Ethernet frame contains a 48-bit destination address (also called a link-layer or
MAC—Media Access Control—address) and a 16-bit field called the Ethernet type.
A value of 0x0800 (hexadecimal) indicates that the frame contains an IPv4 datagram. Values of 0x0806 and 0x86DD indicate ARP and IPv6, respectively. Assuming that the destination address matches one of the receiving system’s addresses,
the frame is received and checked for errors, and the Ethernet Type field value is
used to select which network-layer protocol should process it.
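This demultiplexing step can be illustrated with a few lines of Python that parse a DIX Ethernet header. The frame contents here are fabricated for illustration, and a real frame also carries a trailing CRC that this sketch ignores:

```python
import struct

ETHERTYPES = {0x0800: "IPv4", 0x0806: "ARP", 0x86DD: "IPv6"}

def ether_demux(frame: bytes):
    # DIX Ethernet header: 6-byte destination, 6-byte source, 2-byte Type.
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    # The Type value selects the network-layer protocol for the payload.
    return ETHERTYPES.get(ethertype, "unknown"), frame[14:]

# A fabricated frame carrying an IPv4 datagram (Type = 0x0800):
frame = bytes(6) + bytes(6) + struct.pack("!H", 0x0800) + b"\x45" + bytes(19)
```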
Figure 1-6
The TCP/IP stack uses a combination of addressing information and protocol demultiplexing identifiers to determine if a datagram has been received correctly and, if so, what entity should process it. Several layers also check numeric values (e.g., checksums) to ensure that the contents have not been damaged in transit.

Assuming that the received frame contains an IP datagram, the Ethernet
header and trailer information is removed, and the remaining bytes (which constitute the frame’s payload) are given to IP for processing. IP checks a number of
items, including the destination IP address in the datagram. If the destination
address matches one of its own and the datagram contains no errors in its header
(IP does not check its payload), the 8-bit IPv4 Protocol field (called Next Header
in IPv6) is checked to determine which protocol to invoke next. Common values
include 1 (ICMP), 2 (IGMP), 4 (IPv4), 6 (TCP), and 17 (UDP). The value of 4 (and
41, which indicates IPv6) is interesting because it indicates the possibility that an
IP datagram may appear inside the payload area of an IP datagram. This violates
the original concepts of layering and encapsulation but is the basis for a powerful
technique known as tunneling, which we discuss more in Chapter 3.
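As a small illustration, the Protocol field can be read directly from the header bytes; the header below is fabricated and minimal:

```python
PROTOCOLS = {1: "ICMP", 2: "IGMP", 4: "IPv4", 6: "TCP", 17: "UDP", 41: "IPv6"}

def ipv4_payload_protocol(datagram: bytes) -> str:
    # The 8-bit Protocol field lives at byte offset 9 of the IPv4 header
    # (the analogous IPv6 field is called Next Header).
    return PROTOCOLS.get(datagram[9], "unknown")

# A minimal 20-byte IPv4 header carrying TCP (Protocol = 6); values 4 and
# 41 here would instead indicate a tunneled IP datagram.
hdr = bytes([0x45, 0, 0, 20, 0, 0, 0, 0, 64, 6]) + bytes(10)
```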
Once the network layer (IPv4 or IPv6) determines that the incoming datagram
is valid and the correct transport protocol has been determined, the resulting datagram (reassembled from fragments if necessary) is passed to the transport layer
for processing. At the transport layer, most protocols (including TCP and UDP)
use port numbers for demultiplexing to the appropriate receiving application.
Port Numbers
Port numbers are 16-bit nonnegative integers (i.e., range 0–65535). These numbers
are abstract and do not refer to anything physical. Instead, each IP address has
65,536 associated port numbers for each transport protocol that uses port numbers
(most do), and they are used for determining the correct receiving application. For
client/server applications (see Section 1.5.1), a server first “binds” to a port number, and subsequently one or more clients establish connections to the port number using a particular transport protocol on a particular machine. In this sense,
port numbers act more like telephone number extensions, except they are usually
assigned by standards.
Standard port numbers are assigned by the Internet Assigned Numbers
Authority (IANA). The set of numbers is divided into special ranges, including the
well-known port numbers (0–1023), the registered port numbers (1024–49151), and
the dynamic/private port numbers (49152–65535). Traditionally, servers wishing to
bind to (i.e., offer service on) a well-known port require special privileges such as
administrator or “root” access.
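The distinction between privileged well-known ports and OS-assigned ephemeral ports can be seen with the sockets API. In this Python sketch, the attempt to bind port 80 typically fails without privileges, while binding port 0 asks the operating system to assign an ephemeral port:

```python
import socket

# Traditionally, binding to a well-known port (0-1023) requires special
# privileges; an unprivileged process typically gets PermissionError.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("127.0.0.1", 80))   # well-known HTTP port
    print("bound to port 80 (process has special privileges)")
except PermissionError:
    print("permission denied: well-known ports require privileges")
finally:
    s.close()

# Any process may instead bind to port 0 and let the operating system
# assign an ephemeral port, typically from the dynamic/private range.
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.bind(("127.0.0.1", 0))
eport = s2.getsockname()[1]     # the OS-assigned ephemeral port number
s2.close()
```

Whether the first bind succeeds depends on the operating system and the privileges of the process running the sketch.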
The range of well-known ports is used for identifying many well-known services such as the Secure Shell Protocol (SSH, port 22), FTP (ports 20 and 21), Telnet
remote terminal protocol (port 23), e-mail/Simple Mail Transfer Protocol (SMTP,
port 25), Domain Name System (DNS, port 53), the Hypertext Transfer Protocol or Web
(HTTP and HTTPS, ports 80 and 443), Interactive Mail Access Protocol (IMAP and
IMAPS, ports 143 and 993), Simple Network Management Protocol (SNMP, ports 161
and 162), Lightweight Directory Access Protocol (LDAP, port 389), and several others.
Protocols with multiple ports (e.g., HTTP and HTTPS) often have different port
numbers depending on whether Transport Layer Security (TLS) is being used with
the base application-layer protocol (see Chapter 18).
If we examine the port numbers for these standard services and other standard
TCP/IP services (Telnet, FTP, SMTP, etc.), we see that most are odd numbers.
This is historical, as these port numbers are derived from the NCP port numbers.
(NCP, the Network Control Protocol, preceded TCP as a transport-layer protocol
for the ARPANET.) NCP was simplex, not full duplex, so each application required
two connections, and an even-odd pair of port numbers was reserved for each
application. When TCP and UDP became the standard transport layers, only a
single port number was needed per application, yet the odd port numbers from
NCP were used.
The registered port numbers are available to clients or servers with special
privileges, but IANA keeps a reserved registry for particular uses, so these port
numbers should generally be avoided when developing new applications unless
an IANA allocation has been procured. The dynamic/private port numbers are
essentially unregulated. As we will see, in some circumstances (e.g., on clients)
the value of the port number matters little because the port number being used
is transient. Such port numbers are also called ephemeral port numbers. They are
considered to be temporary because a client typically needs one only as long as the
user running the client needs service, and the client does not need to be found by
the server in order to establish a connection. Servers, conversely, generally require
names and port numbers that do not change often in order to be found by clients.
Names, Addresses, and the DNS
With TCP/IP, each link-layer interface on each computer (including routers) has
at least one IP address. IP addresses are enough to identify a host, but they are
not very convenient for humans to remember or manipulate (especially the long
addresses used with IPv6). In the TCP/IP world, the DNS is a distributed database
that provides the mapping between host names and IP addresses (and vice versa).
Names are set up in a hierarchy, ending in domains such as .com, .org, .gov, .in,
.uk, and .edu. Perhaps surprisingly, DNS is an application-layer protocol and
thus depends on the other protocols in order to operate. Although most of the
TCP/IP suite does not use or care about names, typical users (e.g., those using Web
browsers) use names frequently, so if the DNS fails to function properly, normal
Internet access is effectively disabled. Chapter 11 looks into the DNS in detail.
Applications that manipulate names can call a standard API function (see
Section 1.5.3) to look up the IP address (or addresses) corresponding to a given
host’s name. Similarly, a function is provided to do the reverse lookup—given an
IP address, look up the corresponding host name. Most applications that take a host
name as input also take an IP address. Web browsers support this capability. For
example, Uniform Resource Locators (URLs) containing numeric addresses, such as
http://[2001:400:610:102::c9]/index.html, can be typed into a Web
browser and are effectively equivalent to the corresponding name-based URL (at
the time of writing; this particular example requires IPv6 connectivity to be successful).
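The standard API functions mentioned above are exposed in most languages; in Python, for instance, getaddrinfo() performs the forward (name-to-address) lookup and getnameinfo() the reverse. A sketch using the always-resolvable name localhost:

```python
import socket

# Forward lookup: name -> addresses. getaddrinfo() returns one entry per
# (family, socktype, protocol) combination; the sockaddr is the last element
# of each tuple, and its first element is the address itself.
def forward_lookup(name, port=80):
    entries = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
    return sorted({entry[4][0] for entry in entries})

print(forward_lookup("localhost"))  # e.g. ['127.0.0.1', '::1']

# Reverse lookup: address -> name (the result depends on local configuration).
name, _service = socket.getnameinfo(("127.0.0.1", 80), 0)
print(name)
```

Note that getaddrinfo() can return both IPv4 and IPv6 addresses for the same name, which is how a browser can transparently prefer one or the other.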
1.4 Internets, Intranets, and Extranets
As suggested previously, the Internet has developed as the aggregate network
resulting from the interconnection of constituent networks over time. The lowercase internet means multiple networks connected together, using a common protocol suite. The uppercase Internet refers to the collection of hosts around the world
that can communicate with each other using TCP/IP. The Internet is an internet,
but the reverse is not true.
One of the reasons for the phenomenal growth in networking during the
1980s was the realization that isolated groups of stand-alone computers made
little sense. A few stand-alone systems were connected together into a network.
Although this was a step forward, during the 1990s we realized that separate
networks that could not interoperate were not as valuable as a bigger network
that could. This notion is the basis for the so-called Metcalfe’s Law, which states
roughly that the value of a computer network is proportional to the square of the
number of connected endpoints (e.g., users or devices). The Internet idea, and its
supporting protocols, would make possible the interconnection of different networks. This deceptively simple concept turns out to be remarkably powerful.
The easiest way to build an internet is to connect two or more networks with
a router. A router is often a special-purpose device for connecting networks. The
nice thing about routers is that they provide connections to many different types
of physical networks: Ethernet, Wi-Fi, point-to-point links, DSL, cable Internet service, and so on.
These devices are also called IP routers, but we will use the term router. Historically
these devices were called gateways, and this term is used throughout much of the
older TCP/IP literature. Today the term gateway is used for an application-layer
gateway (ALG), a process that connects two different protocol suites (say, TCP/IP
and IBM’s SNA) for one particular application (often electronic mail or file transfer).
In recent years, other terms have been adopted for different configurations of
internets using the TCP/IP protocol suite. An intranet is the term used to describe a
private internetwork, usually run by a business or other enterprise. Most often, the
intranet provides access to resources available only to members of the particular
enterprise. Users may connect to their (e.g., corporate) intranet using a virtual private
network (VPN). VPNs help to ensure that access to potentially sensitive resources in
an intranet is made available only to authorized users, usually using the tunneling
concept we mentioned previously. We discuss VPNs in more detail in Chapter 7.
In many cases an enterprise or business wishes to set up a network containing
servers accessible to certain partners or other associates using the Internet. Such
networks, which also often involve the use of a VPN, are known as extranets and
consist of computers attached outside the serving enterprise’s firewall (see Chapter 7). Technically, there is little difference between an intranet, an extranet, and
the Internet, but the usage cases and administrative policies are usually different,
and therefore a number of these more specific terms have evolved.
1.5 Designing Applications
The network concepts we have touched upon so far provide a fairly simple service
model [RFC6250]: moving bytes between programs running on different (or, occasionally, the same) computers. To do anything useful with this capability, we need
networked applications that use the network for providing services or performing computations. Networked applications are typically structured according to a
small number of design patterns. The most common of these are client/server and
peer-to-peer.
1.5.1 Client/Server
Most network applications are designed so that one side is the client and the other
side is the server. The server provides some type of service to clients, such as
access to files on the server host. We can categorize servers into two classes: iterative and concurrent. An iterative server iterates through the following steps:
I1. Wait for a client request to arrive.
I2. Process the client request.
I3. Send the response back to the client that sent the request.
I4. Go back to step I1.
The problem with an iterative server occurs when step I2 takes a long time.
During this time no other clients are serviced. A concurrent server, on the other
hand, performs the following steps:
C1. Wait for a client request to arrive.
C2. Start a new server instance to handle this client’s request. This may involve
creating a new process, task, or thread, depending on what the underlying operating system supports. This new server handles one client’s entire
request. When the requested task is complete, the new server terminates.
Meanwhile, the original server instance continues to C3.
C3. Go back to step C1.
The advantage of a concurrent server is that the server just spawns other
server instances to handle the client requests. Each client has, in essence, its own
server. Assuming that the operating system allows multiprogramming (essentially all do today), multiple clients are serviced concurrently. The reason we categorize servers, and not clients, is that a client normally cannot tell whether it is
talking to an iterative server or a concurrent server. As a general rule, most servers
are concurrent.
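Steps C1 through C3 can be sketched as a thread-per-client TCP server. The echo "service" below is a stand-in for real request processing; the names (handle_client, concurrent_server) are illustrative, not from any particular system.

```python
import socket
import threading

def handle_client(conn):
    # C2: this per-client server instance handles one request, then exits
    with conn:
        conn.sendall(conn.recv(1024))   # trivial echo of the request

def concurrent_server(ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # ephemeral port, fine for a demo
    srv.listen()
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()                # tell the caller where we are listening
    while True:
        conn, _addr = srv.accept()      # C1: wait for a client request
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()
        # C3: loop straight back to accept(); clients are served concurrently

ready = {"event": threading.Event()}
threading.Thread(target=concurrent_server, args=(ready,), daemon=True).start()
ready["event"].wait()

with socket.create_connection(("127.0.0.1", ready["port"])) as client:
    client.sendall(b"hello")
    print(client.recv(1024))            # b'hello'
```

An iterative server would instead call handle_client inline, blocking the accept loop for the duration of each request; as noted above, the client cannot tell the difference.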
Note that we use the terms client and server to refer to applications and not
to the particular computer systems on which they run. The very same terms are
sometimes used to refer to the pieces of hardware that are most often used to execute either client or server applications. Although the terminology is thus somewhat imprecise, it works well enough in practice. As a result, it is common to find
a server (in the hardware sense) running more than one server (in the application sense).
1.5.2 Peer-to-Peer
Some applications are designed in a more distributed fashion where there is no
single server. Instead, each application acts both as a client and as a server, sometimes as both at once, and is capable of forwarding requests. Some very popular
applications (e.g., Skype [SKYPE], BitTorrent [BT]) are of this form. These applications are called peer-to-peer or p2p applications. A concurrent p2p application may
receive an incoming request, determine if it is able to respond to the request, and
if not forward the request on to some other peer. Thus, the set of p2p applications
together form a network among applications, also called an overlay network. Such
overlays are now commonplace and can be extremely powerful. Skype, for example, has grown to be the largest carrier of international telephone calls. According
to some estimates, BitTorrent was responsible for more than half of all Internet
traffic in 2009 [IPIS].
One of the primary problems in p2p networks is called the discovery problem.
That is, how does one peer find which other peer(s) can provide the data or service
it wants in a network where peers may come and go? This is usually handled by
a bootstrapping procedure whereby each client is initially configured with the
addresses and port numbers of some peers that are likely to be operating. Once
connected, the new participant learns of other active peers and, depending on the
protocol, what services or files they provide.
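The bootstrapping procedure just described amounts to a transitive walk of the overlay starting from the configured seed peers. The following conceptual sketch models the "network" as an in-memory dict (peer name to the peers it knows about); all names are hypothetical, and a real protocol would query peers over the network instead.

```python
# Hypothetical overlay: each peer maps to the set of peers it can report.
KNOWN = {
    "seed1": {"a", "b"},
    "a": {"seed1", "c"},
    "b": {"seed1"},
    "c": {"a", "d"},
    "d": {"c"},
}

def bootstrap(seeds):
    # Start from the configured seeds and transitively learn the overlay.
    discovered, frontier = set(), list(seeds)
    while frontier:
        peer = frontier.pop()
        if peer in discovered:
            continue
        discovered.add(peer)
        # "ask" this peer for the other peers it knows (protocol-dependent)
        frontier.extend(KNOWN.get(peer, ()))
    return discovered

print(sorted(bootstrap({"seed1"})))  # ['a', 'b', 'c', 'd', 'seed1']
```

Note that peers "d" is never listed by the seed itself; it is learned only transitively, which is the essence of overlay discovery.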
1.5.3 Application Programming Interfaces (APIs)
Applications, whether p2p or client/server, need to express their desired network
operations (e.g., make a connection, write or read data). This is usually supported
by a host operating system using a networking application programming interface
(API). The most popular API is called sockets or Berkeley sockets, indicating where it
was originally developed [LJFK93].
This text is not a programming text, but occasionally we refer to a feature of
TCP/IP and whether that feature is provided by the sockets API or not. All of the
programming details with examples for sockets can be found in [SFR04]. Modifications to sockets intended for use with IPv6 are also described in a number
of freely available online documents [RFC3493][RFC3542][RFC3678][RFC4584][RFC5014].
1.6 Standardization Process
Newcomers to the TCP/IP suite often wonder just who is responsible for specifying and standardizing the various protocols and how they operate. A number
of organizations represent the answer to this question. The group with which
we will most often be concerned is the Internet Engineering Task Force (IETF)
[RFC4677]. This group meets three times each year in various locations around
the world to develop, discuss, and agree on standards for the Internet’s “core”
protocols. Exactly what constitutes “core” is subject to some debate, but common
protocols such as IPv4, IPv6, TCP, UDP, and DNS are clearly in the purview of
IETF. Attendance at IETF meetings is open to anyone, but it is not free.
IETF is a forum that elects leadership groups called the Internet Architecture Board (IAB) and the Internet Engineering Steering Group (IESG). The IAB is
chartered to provide architectural guidance to activities in IETF and to perform a
number of other tasks such as appointing liaisons to other standards-defining organizations (SDOs). The IESG has decision-making authority regarding the creation
and approval of new standards, along with modifications to existing standards.
The “heavy lifting” or detailed work is generally performed by IETF working
groups that are coordinated by working group chairs who volunteer for this task.
In addition to the IETF, there are two other important groups that interact
closely with the IETF. The Internet Research Task Force (IRTF) explores protocols,
architectures, and procedures that are not deemed mature enough for standardization. The chair of the IRTF is a nonvoting member of IAB. The IAB, in turn,
works with the Internet Society (ISOC) to help influence and promote worldwide
policies and education regarding Internet technologies and usage.
1.6.1 Request for Comments (RFC)
Every official standard in the Internet community is published as a Request for
Comments, or RFC. RFCs can be created in a number of ways, and the publisher of
RFCs (called the RFC editor) recognizes multiple document streams corresponding
to the way an RFC has been developed. The current streams (as of 2010) include
the IETF, IAB, IRTF, and independent submission streams. Prior to being accepted
and published (permanently) as an RFC, documents exist as temporary Internet
drafts while they receive comments and progress through the editing and review process.
Not all RFCs are standards. Only so-called standards-track category RFCs
are considered to be official standards. Other categories include best current practice (BCP), informational, experimental, and historic. It is important to realize that
just because a document is an RFC does not mean that the IETF has endorsed it
as any form of standard. Indeed, there exist RFCs on which there is significant disagreement.
The RFCs range in size from a few pages to several hundred. Each is identified by a number, such as RFC 1122, with higher numbers for newer RFCs. They
are all available for free from a number of Web sites. For historical reasons, RFCs are generally delivered as basic text files,
although some RFCs have been reformatted or authored using more advanced file formats.
A number of RFCs have special significance because they summarize, clarify,
or interpret particular sets of other standards. For example, [RFC5000] defines
the set of all other RFCs that are considered official standards as of mid-2008 (the
most recent such RFC at the time of writing). An updated list is available at the
current standards Web site [OIPSW]. The Host Requirements RFCs ([RFC1122] and
[RFC1123]) define requirements for protocol implementations in Internet IPv4
hosts, and the Router Requirements RFC [RFC1812] does the same for routers. The
Node Requirements RFC [RFC4294] does both for IPv6 systems.
1.6.2 Other Standards
Although the IETF is responsible for standardizing most of the protocols we discuss in this text, other SDOs are responsible for defining protocols that merit our
attention. The most important of these groups include the Institute of Electrical
and Electronics Engineers (IEEE), the World Wide Web Consortium (W3C), and
the International Telecommunication Union (ITU). In their activities relevant to
this text, IEEE is concerned with standards below layer 3 (e.g., Wi-Fi and Ethernet),
and W3C is concerned with application-layer protocols, specifically those related
to Web technologies (e.g., HTML-based syntax). ITU, and more specifically ITU-T
(formerly CCITT), standardizes protocols used within the telephone and cellular
networks, which are becoming ever more important components of the Internet.
1.7 Implementations and Software Distributions
The historical de facto standard TCP/IP implementations were from the Computer
Systems Research Group (CSRG) at the University of California, Berkeley. They
were distributed with the 4.x BSD (Berkeley Software Distribution) system and
with the BSD Networking Releases until the mid-1990s. This source code has been
the starting point for many other implementations. Today, each popular operating
system has its own implementation. In this text, we tend to draw examples from
the TCP/IP implementations in Linux, Windows, and sometimes FreeBSD and
Mac OS (both of which are derived from historical BSD releases). In most cases,
the particular implementation matters little.
Figure 1-7 shows a chronology of the various BSD releases, indicating the
important TCP/IP features we cover in later chapters. It also shows the years when
Linux and Windows began supporting TCP/IP. The BSD Networking Releases
shown in the second column were freely available public source code releases containing all of the networking code, both the protocols themselves and many of the
applications and utilities (e.g., the Telnet remote terminal program and FTP file
transfer program).
By the mid-1990s, the Internet and TCP/IP were well established. All subsequent popular operating systems support TCP/IP natively. Research and development of new TCP/IP features, previously found first in BSD releases, are now
typically found first in Linux releases. Windows has recently implemented a new
TCP/IP stack (starting with Windows Vista) with many new features and native
IPv6 capability. Linux, FreeBSD, and Mac OS X also support IPv6 without setting
any special configuration options.
Figure 1-7
The history of software releases supporting TCP/IP up to 1995. The various BSD releases pioneered
the availability of TCP/IP. In part because of legal uncertainties regarding the BSD releases in the
early 1990s, Linux was developed as an alternative that was initially tailored for PC users. Microsoft began supporting TCP/IP in Windows a couple of years later.
1.8 Attacks Involving the Internet Architecture
Throughout the text we shall briefly describe attacks and vulnerabilities that
have been discovered in the design or implementation of the topic we are discussing. Few attacks target the Internet architecture as a whole. However, it is
worth observing that the Internet architecture delivers IP datagrams based on
destination IP addresses. As a result, malicious users are able to insert whatever
IP address they choose into the source IP address field of each IP datagram they
send, an activity called spoofing. The resulting datagrams are delivered to their
destinations, but it is difficult to perform attribution. That is, it may be difficult or
impossible to determine the origin of a datagram received from the Internet.
Spoofing can be combined with a variety of other attacks seen periodically on
the Internet. Denial-of-service (DoS) attacks usually involve using so much of some
important resource that legitimate users are denied service. For example, sending
so many IP datagrams to a server that it spends all of its time just processing the
incoming packets and performing no other useful work is a type of DoS attack.
Other DoS attacks may involve clogging the network with so much traffic that
no other packets can be sent. This is often accomplished by using many sending
computers, forming a distributed DoS (DDoS) attack.
Unauthorized access attacks involve accessing information or resources in an
unauthorized fashion. This can be accomplished with a variety of techniques such
as exploiting protocol implementation bugs to take control of a system (called
0wning the system and turning it into a zombie or bot). It can also involve various forms of masquerading such as an attacker’s agent impersonating a legitimate
user (e.g., by running with the user’s credentials). Some of the more pernicious
attacks involve taking control of many remote systems using malicious software
(malware) and using them in a coordinated, distributed fashion (called botnets).
Programmers who intentionally develop malware and exploit systems for (illegal)
profit or other malicious purposes are generally called black hats. So-called white
hats do the same sorts of technical things but notify vulnerable parties instead of
exploiting them.
One other concern with the Internet architecture is that the original Internet
protocols did not perform any encryption in support of authentication, integrity,
or confidentiality. Consequently, malicious users could usually ascertain private
information by merely observing packets in the network. Those with the ability
to modify packets in transit could also impersonate users or alter the contents of
messages. Although these problems have been reduced significantly thanks to
encryption protocols (see Chapter 18), old or poorly designed protocols are still
sometimes used that are vulnerable to simple eavesdropping attacks. Given the
prevalence of wireless networks, where it is relatively easy to “sniff” the packets
sent by others, such older or insecure protocols should be avoided. Note that while
encryption may be enabled at one layer (e.g., on a link-layer Wi-Fi network), only
host-to-host encryption (IP layer or above) protects information across the multiple network segments an IP datagram is likely to traverse on its way to its final destination.
1.9 Summary
This chapter has been a whirlwind tour of concepts in network architecture and
design in general, plus the TCP/IP protocol suite in particular that we discuss in
detail in later chapters. The Internet architecture was designed to interconnect
different existing networks and provide for a wide range of services and protocols
operating simultaneously. Packet switching using datagrams was chosen for its
robustness and efficiency. Security and predictable delivery of data (e.g., bounded
latency) were secondary concerns.
Based on their understanding of layered and modular software design in
operating systems, the early implementers of the Internet protocols adopted a
layered design that employs encapsulation. The three main layers in the TCP/IP
protocol suite are the network layer, transport layer, and application layer, and we
mentioned the different responsibilities of each. We also mentioned the link layer
because it relates so closely with the TCP/IP suite. We shall discuss each in more
detail in subsequent chapters.
In TCP/IP, the distinction between the network layer and the transport layer is
critical: the network layer (IP) provides an unreliable datagram service and must
be implemented by all systems addressable on the Internet, whereas the transport
layers (TCP and UDP) provide an end-to-end service to applications running on
end hosts. The primary transport layers differ radically. TCP provides in-order,
reliable stream delivery with flow control and congestion control. UDP provides
essentially no capabilities beyond IP except port numbers for demultiplexing and
an error detection mechanism. Unlike TCP, however, it supports multicast delivery.
Addresses and demultiplexing identifiers are used by each layer to avoid confusing protocols or different associations/connections of the same protocol. Link-layer multi-access networks often use 48-bit addresses; IPv4 uses 32-bit addresses
and IPv6 uses 128-bit addresses. The TCP and UDP transport protocols use distinct sets of port numbers. Some port numbers are assigned by standards, and others are used temporarily, usually by client applications when communicating with
servers. Port numbers do not represent anything physical; they are merely used as
a way for applications that want to communicate to rendezvous.
Although port numbers and IP addresses are usually enough to identify the
location of a service on the Internet, they are not very convenient for humans to
remember or use (especially IPv6 addresses). Consequently, the Internet uses
a hierarchical system of host names that can be converted to IP addresses (and
back) using DNS, a distributed database application running on the Internet. DNS
has become an essential component of the Internet infrastructure, and efforts are
under way to make it more secure (see Chapter 18).
An internet is a collection of networks. The common building block for an
internet is a router that connects the networks at the IP layer. The “capital-I” Internet is an internet that spans the globe and interconnects nearly two billion users
(as of 2010). Private internets are called intranets and are usually connected to the
Internet using special devices (firewalls, discussed in Chapter 10) that attempt to
prevent unauthorized access. Extranets usually consist of a subset of an institution's intranet that is designed to be accessed by partners or affiliates in a limited way.
Networked applications are usually designed using a client/server or peer-to-peer design pattern. Client/server is more popular and traditional, but peer-to-peer designs have also seen tremendous success. Whatever the design pattern,
applications invoke APIs to perform networking tasks. The most common API for
TCP/IP networks is called sockets. It was provided with BSD UNIX distributions,
software releases that pioneered the use of TCP/IP. By the late 1990s the TCP/IP
protocol suite and sockets API were available on every popular operating system.
Security was not a major design goal for the Internet architecture. Determining where packets originate can be difficult for a receiver, as end hosts can easily
spoof source IP addresses in unsecured IP datagrams. Distributed DoS attacks
also remain an ongoing challenge because victim end hosts can be collected
together to form botnets that can carry out DDoS and other attacks, sometimes
without the system owners’ knowledge. Finally, early Internet protocols did little
to ensure privacy of sensitive information, but most of those protocols are now
deprecated, and modern replacements use encryption to provide confidential and
authenticated communications between hosts.
1.10 References
[B64] P. Baran, “On Distributed Communications: 1. Introduction to Distributed
Communications Networks,” RAND Memorandum RM-3420-PR, Aug. 1964.
[C88] D. Clark, “The Design Philosophy of the DARPA Internet Protocols,” Proc.
ACM SIGCOMM, Aug. 1988.
[CK74] V. Cerf and R. Kahn, “A Protocol for Packet Network Intercommunication,” IEEE Transactions on Communications, COM-22(5), May 1974.
[D08] J. Day, Patterns in Network Architecture: A Return to Fundamentals (Prentice
Hall, 2008).
[D68] E. Dijkstra, “The Structure of the ‘THE’-Multiprogramming System,” Communications of the ACM, 11(5), May 1968.
[DBSW66] D. Davies, K. Bartlett, R. Scantlebury, and P. Wilkinson, “A Digital
Communications Network for Computers Giving Rapid Response at Remote
Terminals,” Proc. ACM Symposium on Operating System Principles, Oct. 1967.
[I96] IBM Corporation, Systems Network Architecture—APPN Architecture Reference,
Document SC30-3422-04, 1996.
[IPIS] Ipoque, Internet Study 2008/2009.
[K64] L. Kleinrock, Communication Nets: Stochastic Message Flow and Delay
(McGraw-Hill, 1964).
[LC04] S. Lin and D. Costello Jr., Error Control Coding, Second Edition (Prentice
Hall, 2004).
Section 1.10 References
[LJFK93] S. Leffler, W. Joy, R. Fabry, and M. Karels, “Networking Implementation
Notes—4.4BSD Edition,” June 1993.
[LT68] J. C. R. Licklider and R. Taylor, “The Computer as a Communication
Device,” Science and Technology, Apr. 1968.
[P07] J. Pelkey, Entrepreneurial Capitalism and Innovation: A History of Computer
Communications 1968–1988.
[P73] L. Pouzin, “Presentation and Major Design Aspects of the CYCLADES
Computer Network,” NATO Advanced Study Institute on Computer Communication Networks, 1973.
[RFC0871] M. Padlipsky, “A Perspective on the ARPANET Reference Model,”
Internet RFC 0871, Sept. 1982.
[RFC0959] J. Postel and J. Reynolds, “File Transfer Protocol,” Internet RFC 0959/
STD 0009, Oct. 1985.
[RFC1122] R. Braden, ed., “Requirements for Internet Hosts—Communication
Layers,” Internet RFC 1122/STD 0003, Oct. 1989.
[RFC1123] R. Braden, ed., “Requirements for Internet Hosts—Application and
Support,” Internet RFC 1123/STD 0003, Oct. 1989.
[RFC1812] F. Baker, ed., “Requirements for IP Version 4 Routers,” Internet RFC
1812, June 1995.
[RFC3493] R. Gilligan, S. Thomson, J. Bound, J. McCann, and W. Stevens, “Basic
Socket Interface Extensions for IPv6,” Internet RFC 3493 (informational), Feb. 2003.
[RFC3542] W. Stevens, M. Thomas, E. Nordmark, and T. Jinmei, “Advanced
Sockets Application Program Interface (API) for IPv6,” Internet RFC 3542 (informational), May 2003.
[RFC3678] D. Thaler, B. Fenner, and B. Quinn, “Socket Interface Extensions for
Multicast Source Filters,” Internet RFC 3678 (informational), Jan. 2004.
[RFC3787] J. Parker, ed., “Recommendations for Interoperable IP Networks Using
Intermediate System to Intermediate System (IS-IS),” Internet RFC 3787 (informational), May 2004.
[RFC4294] J. Loughney, ed., “IPv6 Node Requirements,” Internet RFC 4294 (informational), Apr. 2006.
[RFC4340] E. Kohler, M. Handley, and S. Floyd, “Datagram Congestion Control
Protocol (DCCP),” Internet RFC 4340, Mar. 2006.
[RFC4584] S. Chakrabarti and E. Nordmark, “Extension to Sockets API for
Mobile IPv6,” Internet RFC 4584 (informational), July 2006.
[RFC4677] P. Hoffman and S. Harris, “The Tao of IETF—A Novice’s Guide to the
Internet Engineering Task Force,” Internet RFC 4677 (informational), Sept. 2006.
[RFC4960] R. Stewart, ed., “Stream Control Transmission Protocol,” Internet RFC
4960, Sept. 2007.
[RFC5000] RFC Editor, “Internet Official Protocol Standards,” Internet RFC 5000/
STD 0001 (informational), May 2008.
[RFC5014] E. Nordmark, S. Chakrabarti, and J. Laganier, “IPv6 Socket API for
Source Address Selection,” Internet RFC 5014 (informational), Sept. 2007.
[RFC6250] D. Thaler, “Evolution of the IP Model,” Internet RFC 6250 (informational), May 2011.
[SFR04] W. R. Stevens, B. Fenner, and A. Rudoff, UNIX Network Programming,
Volume 1, Third Edition (Prentice Hall, 2004).
[SRC84] J. Saltzer, D. Reed, and D. Clark, “End-to-End Arguments in System
Design,” ACM Transactions on Computer Systems, 2(4), Nov. 1984.
[W02] M. Waldrop, The Dream Machine: J. C. R. Licklider and the Revolution That
Made Computing Personal (Penguin Books, 2002).
[X85] Xerox Corporation, Xerox Network Systems Architecture—General Information
Manual, XNSG 068504, 1985.
[Z80] H. Zimmermann, “OSI Reference Model—The ISO Model of Architecture
for Open Systems Interconnection,” IEEE Transactions on Communications, COM-28(4), Apr. 1980.
The Internet Address Architecture
2.1 Introduction
This chapter deals with the structure of network-layer addresses used in the Internet, also known as IP addresses. We discuss how addresses are allocated and
assigned to devices on the Internet, the way hierarchy in address assignment aids
routing scalability, and the use of special-purpose addresses, including broadcast,
multicast, and anycast addresses. We also discuss how the structure and use of
IPv4 and IPv6 addresses differ.
Every device connected to the Internet has at least one IP address. Devices
used in private networks based on the TCP/IP protocols also require IP addresses.
In either case, the forwarding procedures implemented by IP routers (see Chapter
5) use IP addresses to identify where traffic is going. IP addresses also indicate
where traffic has come from. IP addresses are similar in some ways to telephone
numbers, but whereas telephone numbers are often known and used directly by
end users, IP addresses are often shielded from a user’s view by the Internet’s DNS
(see Chapter 11), which allows most users to use names instead of numbers. Users
are confronted with manipulating IP addresses when they are required to set up
networks themselves or when the DNS has failed for some reason. To understand
how the Internet identifies hosts and routers and delivers traffic between them,
we must understand the role of IP addresses. We are therefore interested in their
administration, structure, and uses.
When devices are attached to the global Internet, they are assigned addresses
that must be coordinated so as to not duplicate other addresses in use on the network. For private networks, the IP addresses being used must be coordinated to
avoid similar overlaps within the private networks. Groups of IP addresses are
allocated to users and organizations. The recipients of the allocated addresses then
assign addresses to devices, usually according to some network “numbering plan.”
For global Internet addresses, a hierarchical system of administrative entities helps
in allocating addresses to users and service providers. Individual users typically
receive address allocations from Internet service providers (ISPs) that provide both
the addresses and the promise of routing traffic in exchange for a fee.
2.2 Expressing IP Addresses
The vast majority of Internet users who are familiar with IP addresses understand
the most popular type: IPv4 addresses. Such addresses are often represented in
so-called dotted-quad or dotted-decimal notation.
The dotted-quad notation consists of four decimal numbers separated by periods.
Each such number is a nonnegative integer in the range [0, 255] and represents
one-quarter of the entire IP address. The dotted-quad notation is simply a way of
writing the whole IPv4 address—a 32-bit nonnegative integer used throughout
the Internet system—using convenient decimal numbers. In many circumstances
we will be concerned with the binary structure of the address. A number of Internet sites now provide calculators for converting between formats of
IP addresses and related information. Table 2-1 gives a few examples of IPv4
addresses and their corresponding binary representations, to get started.
Table 2-1  Example IPv4 addresses written in dotted-quad and binary notation (dotted-quad representation alongside the corresponding binary representation)
In IPv6, addresses are 128 bits in length, four times larger than IPv4 addresses,
and generally speaking are less familiar to most users. The conventional notation
adopted for IPv6 addresses is a series of eight blocks or fields of four hexadecimal (“hex,” or base-16) digits separated by colons. An example IPv6 address containing
eight blocks would be written as 5f05:2000:80ad:5800:0058:0800:2023:1d71. Although
not as familiar to users as decimal numbers, hexadecimal numbers make the task
of converting to binary somewhat simpler. In addition, a number of agreed-upon
simplifications have been standardized for expressing IPv6 addresses [RFC4291]:
1. Leading zeros of a block need not be written. In the preceding example, the
address could have been written as 5f05:2000:80ad:5800:58:800:2023:1d71.
2. Blocks of all zeros can be omitted and replaced by the notation ::. For example, the IPv6 address 0:0:0:0:0:0:0:1 can be written more compactly as ::1.
Similarly, the address 2001:0db8:0:0:0:0:0:2 can be written more compactly
as 2001:db8::2. To avoid ambiguities, the :: notation may be used only once
in an IPv6 address.
3. Embedded IPv4 addresses represented in the IPv6 format can use a form
of hybrid notation in which the block immediately preceding the IPv4 portion of the address has the value ffff and the remaining part of the address
is formatted using dotted-quad. An address written this way represents the
corresponding IPv4 address and is called an IPv4-mapped IPv6 address.
4. A conventional notation is adopted in which the low-order 32 bits of the
IPv6 address can be written using dotted-quad notation. The IPv6 address
::0102:f001 is therefore equivalent to the address :: This is called
an IPv4-compatible IPv6 address. Note that IPv4-compatible addresses are
not the same as IPv4-mapped addresses; they are compatible only in the
sense that they can be written down or manipulated by software in a way
similar to IPv4 addresses. This type of addressing was originally required
for transition plans between IPv4 and IPv6 but is now no longer required.
Table 2-2 presents some examples of IPv6 addresses and their binary representations.
Table 2-2  Examples of IPv6 addresses and their binary representations

Hex Notation                              Binary Representation
5f05:2000:80ad:5800:0058:0800:2023:1d71   0101111100000101 0010000000000000
                                          1000000010101101 0101100000000000
                                          0000000001011000 0000100000000000
                                          0010000000100011 0001110101110001
::1                                       0000000000000000 0000000000000000
                                          0000000000000000 0000000000000000
                                          0000000000000000 0000000000000000
                                          0000000000000000 0000000000000001
:: or ::102:f001                 0000000000000000 0000000000000000
                                          0000000000000000 0000000000000000
                                          0000000000000000 0000000000000000
                                          0000000100000010 1111000000000001
In some circumstances (e.g., when expressing a URL containing an address)
the colon delimiter in an IPv6 address may be confused with another separator
such as the colon used between an IP address and a port number. In such circumstances, bracket characters, [ and ], are used to surround the IPv6 address. For
example, the URL http://[2001:0db8:85a3:08d3:1319:8a2e:0370:7344]:443/
refers to port number 443 on IPv6 host 2001:0db8:85a3:08d3:1319:8a2e:0370:7344
using the HTTP/TCP/IPv6 protocols.
The flexibility provided by [RFC4291] resulted in unnecessary confusion due
to the ability to represent the same IPv6 address in multiple ways. To remedy this
situation, [RFC5952] imposes some rules to narrow the range of options while
remaining compatible with [RFC4291]. They are as follows:
1. Leading zeros must be suppressed (e.g., 2001:0db8::0022 becomes 2001:db8::22).
2. The :: construct must be used to its maximum possible effect (most zeros
suppressed) but not for only a single 16-bit block. If multiple blocks contain equal-length runs of zeros, the first is replaced with ::.
3. The hexadecimal digits a through f should be represented in lowercase.
In most cases, we too will abide by these rules.
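These [RFC5952] rules can be observed in practice with Python's ipaddress module, which emits the canonical form (lowercase hex, leading zeros suppressed, :: applied to the first longest run of two or more zero blocks). The first three example addresses below are those discussed above; the last is a hypothetical tie-breaking case:

```python
import ipaddress

examples = [
    "5f05:2000:80ad:5800:0058:0800:2023:1d71",
    "0:0:0:0:0:0:0:1",
    "2001:0db8:0:0:0:0:0:2",
    "2001:0db8:0:0:1:0:0:1",   # two equal-length zero runs; the first becomes ::
]
for text in examples:
    # str() of an IPv6Address is its RFC 5952 canonical representation
    print(ipaddress.IPv6Address(text))
```

This prints 5f05:2000:80ad:5800:58:800:2023:1d71, ::1, 2001:db8::2, and 2001:db8::1:0:0:1, illustrating each of the three rules.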
Basic IP Address Structure
IPv4 has 4,294,967,296 possible addresses in its address space, and IPv6 has
340,282,366,920,938,463,463,374,607,431,768,211,456. Because of the large number of addresses
(especially for IPv6), it is convenient to divide the address space into chunks. IP
addresses are grouped by type and size. Most of the IPv4 address chunks are eventually subdivided down to a single address and used to identify a single network
interface of a computer attached to the Internet or to some private intranet. These
addresses are called unicast addresses. Most of the IPv4 address space is unicast
address space. Most of the IPv6 address space is not currently being used. Beyond
unicast addresses, other types of addresses include broadcast, multicast, and
anycast, which may refer to more than one interface, plus some special-purpose
addresses we will discuss later. Before we begin with the details of the current
address structure, it is useful to understand the historical evolution of IP addresses.
Classful Addressing
When the Internet’s address structure was originally defined, every unicast IP
address had a network portion, to identify the network on which the interface using
the IP address was to be found, and a host portion, used to identify the particular host
on the network given in the network portion. Thus, some number of contiguous bits
in the address became known as the net number, and remaining bits were known as
the host number. At the time, most hosts had only a single network interface, so the
terms interface address and host address were used somewhat interchangeably.
With the realization that different networks might have different numbers of
hosts, and that each host requires a unique IP address, a partitioning was devised
wherein different-size allocation units of IP address space could be given out to
different sites, based on their current and projected number of hosts. The partitioning of the address space involved five classes. Each class represented a different trade-off in the number of bits of a 32-bit IPv4 address devoted to the network
number versus the number of bits devoted to the host number. Figure 2-1 shows
the basic idea.
Figure 2-1
The IPv4 address space was originally divided into five classes. Classes A, B, and C were
used for assigning addresses to interfaces on the Internet (unicast addresses) and for
some other special-case uses. The classes are defined by the first few bits in the address:
0 for class A, 10 for class B, 110 for class C, and so on. Class D addresses are for multicast
use (see Chapter 9), and class E addresses remain reserved.
Here we see that the five classes are named A, B, C, D, and E. The A, B, and
C class spaces were used for unicast addresses. If we look more carefully at this
addressing structure, we can see how the relative sizes of the different classes and
their corresponding address ranges really work. Table 2-3 gives this class structure (sometimes called classful addressing structure).
Table 2-3  The original (“classful”) IPv4 address space partitioning

Class  Address Range                 Fraction of Total  Number of Nets   Number of Hosts
A–      1/2                128              16,777,216
B–    1/4                16,384           65,536
C–    1/8                2,097,152        256
D–    1/16               N/A (multicast)  N/A
E–    1/16               N/A (reserved)   N/A
The table indicates how the classful addressing structure was used primarily to have a way of allocating unicast address blocks of different sizes to users.
The partitioning into classes induces a trade-off between the number of available
network numbers of a given size and the number of hosts that can be assigned
to the given network. For example, a site allocated a class A network number (such as the one held by MIT) has 2^24 possible addresses to assign as host addresses, but there are only 127 class A
networks available for the entire Internet. A site allocated a class C network number would be able to assign only 256 hosts, but there are more than two million class C network
numbers available.
These numbers are not exact. Several addresses are not generally available for
use as unicast addresses. In particular, the first and last addresses of the range
are not generally available. In our example, the site holding the class A allocation would really be able to assign as many as 2^24 − 2 = 16,777,214 unicast IP addresses.
The classful approach to Internet addressing lasted mostly intact for the first
decade of the Internet’s growth (to about the early 1980s). After that, it began to
show its first signs of scaling problems—it was becoming too inconvenient to centrally coordinate the allocation of a new class A, B, or C network number every time
a new network segment was added to the Internet. In addition, assigning class A
and B network numbers tended to waste too many host numbers, whereas class C
network numbers could not provide enough host numbers to many new sites.
Subnet Addressing
One of the earliest difficulties encountered when the Internet began to grow was
the inconvenience of having to allocate a new network number for any new network segment that was to be attached to the Internet. This became especially
cumbersome with the development and increasing use of local area networks
(LANs) in the early 1980s. To address the problem, it was natural to consider a
way that a site attached to the Internet could be allocated a network number centrally that could then be subdivided locally by site administrators. If this could be
accomplished without altering the rest of the Internet’s core routing infrastructure, so much the better.
Implementing this idea would require the ability to alter the line between the
network portion of an IP address and the host portion, but only for local purposes
at a site; the rest of the Internet would “see” only the traditional class A, B, and C
partitions. The approach adopted to support this capability is called subnet addressing [RFC0950]. Using subnet addressing, a site is allocated a class A, B, or C network number, leaving some number of remaining host bits to be further allocated
and assigned within a site. The site may further divide the host portion of its base
address allocation into a subnetwork (subnet) number and a host number. Essentially, subnet addressing adds one additional field to the IP address structure, but
without adding any bits to its length. As a result, a site administrator is able to
trade off the number of subnetworks versus the number of hosts expected to be on
each subnetwork without having to coordinate with other sites.
In exchange for the additional flexibility provided by subnet addressing, a
new cost is imposed. Because the definition of the Subnet and Host fields is now
site-specific (not dictated by the class of the network number), all routers and hosts
at a site require a new way to determine where the Subnet field of the address and
the Host field of the address are located within the address. Before subnets, this
information could be derived directly by knowing whether a network number
was from class A, B, or C (as indicated by the first few bits in the address). As an
example, using subnet addressing, an IPv4 address might have the form shown in
Figure 2-2.
Figure 2-2
An example of a subnetted class B address. Using 8 bits for the subnet ID provides for
256 subnets with 254 hosts on each of the subnets. This partitioning may be altered by
the network administrator.
Figure 2-2 is an example of how a class B address might be “subnetted.”
Assume that some site in the Internet has been allocated a class B network number. The first 16 bits of every address the site will use are fixed at some particular
number because these bits have been allocated by a central authority. The last 16
bits (which would have been used only to create host numbers in the class B network without subnets) can now be divided by the site network administrator as
needs may dictate. In this example, 8 bits have been chosen for the subnet number,
leaving 8 bits for host numbers. This particular configuration allows the site to
support 256 subnetworks, and each subnetwork may contain up to 254 hosts (now
the first and last addresses for each subnetwork are not available, as opposed to
losing only the first and last addresses of the entire allocated range). Recall that
the subnetwork structure is known only by hosts and routers where the subnetting is taking place. The remainder of the Internet still treats any address associated with the site just as it did prior to the advent of subnet addressing. Figure 2-3
shows how this works.
Figure 2-3
A site is allocated the classical class B network number 128.32. The network administrator decides to apply a site-wide subnet mask of, giving 256 subnetworks
where each subnetwork can hold 256 – 2 = 254 hosts. The IPv4 address of each host on
the same subnet has the subnetwork number in common. All of the IPv4 addresses of
hosts on the left-hand LAN segment start with 128.32.1, and all of those on the right start
with 128.32.2.
This figure shows a hypothetical site attached to the Internet with one border
router (i.e., one attachment point to the Internet) and two internal local area networks. The value of x could be anything in the range [0, 255]. Each of the Ethernet
networks is an IPv4 subnetwork of the overall network number 128.32, a class B
address allocation. For other sites on the Internet to reach this site, all traffic with
destination addresses starting with 128.32 is directed by the Internet routing system to the border router.
At this point, the border router must distinguish among different subnetworks
within the 128.32 network. In particular, it must be able to distinguish and separate traffic destined for addresses of the form 128.32.1.x from those destined for
addresses of the form 128.32.2.x. These represent subnetwork numbers 1 and 2,
respectively, of the 128.32 class B network number. In order to do this, the router
must be aware of where the subnet ID is to be found within the addresses. This is
accomplished by a configuration parameter we will discuss next.
Subnet Masks
The subnet mask is an assignment of bits used by a host or router to determine how
the network and subnetwork information is partitioned from the host information
in a corresponding IP address. Subnet masks for IP are the same length as the corresponding IP addresses (32 bits for IPv4 and 128 bits for IPv6). They are typically
configured into a host or router in the same way as IP addresses—either statically
(typical for routers) or using some dynamic system such as the Dynamic Host Configuration Protocol (DHCP; see Chapter 6). For IPv4, subnet masks may be written
in the same way an IPv4 address is written (i.e., dotted-decimal). Although not
originally required to be arranged in this manner, today subnet masks are structured as some number of 1 bits followed by some number of 0 bits. Because of this
arrangement, it is possible to use a shorthand format for expressing masks that
simply gives the number of contiguous 1 bits in the mask (starting from the left).
This format is now the most common format and is sometimes called the prefix
length. Table 2-4 presents some examples for IPv4.
Table 2-4  IPv4 subnet mask examples in dotted-decimal, shorthand (prefix-length), and binary formats
Table 2-5  IPv6 subnet mask examples in various formats

Hex Notation (Prefix Length)              Binary Representation
ffff:ffff:ffff:ffff:: (/64)               1111111111111111 1111111111111111
                                          1111111111111111 1111111111111111
                                          0000000000000000 0000000000000000
                                          0000000000000000 0000000000000000
ff00:: (/8)                               1111111100000000 0000000000000000
                                          0000000000000000 0000000000000000
                                          0000000000000000 0000000000000000
                                          0000000000000000 0000000000000000
Table 2-5 presents some examples for IPv6.
Masks are used by routers and hosts to determine where the network/subnetwork portion of an IP address ends and the host part begins. A bit set to 1 in
the subnet mask means the corresponding bit position in an IP address should be
considered part of a combined network/subnetwork portion of an address, which
is used as the basis for forwarding datagrams (see Chapter 5). Conversely, a bit
set to 0 in the subnet mask means the corresponding bit position in an IP address
should be considered part of the host portion. For example, in Figure 2-4 we can
see how an IPv4 address on subnetwork 1 of the 128.32 network is treated when a subnet mask of
is applied to it.
Figure 2-4
An IP address can be combined with a subnet mask using a bitwise AND operation in
order to form the network/subnetwork identifier (prefix) of the address used for routing.
In this example, applying a mask of length 24 to an IPv4 address on subnetwork 1 gives the prefix.
Here we see how each bit in the address is ANDed with each corresponding
bit in the subnet mask. Recalling the bitwise AND operation, a result bit is only
ever a 1 if the corresponding bits in both the mask and the address are 1. In this
example, we see that the address belongs to the subnet
In Figure 2-3, this is precisely the information required by the border router to
determine to which subnetwork a datagram destined for such an address should be forwarded. Note again that the rest of the Internet routing
system does not require knowledge of the subnet mask because routers outside
the site make routing decisions based only on the network number portion of
an address and not the combined network/subnetwork or host portions. Consequently, subnet masks are purely a local matter at the site.
Variable-Length Subnet Masks (VLSM)
So far we have discussed how a network number allocated to a site can be subdivided into ranges assigned to multiple subnetworks, each of the same size and
therefore able to support the same number of hosts, based on the operational expectations of the network administrator. We now observe that it is possible to use a
different-length subnet mask applied to the same network number in different portions of the same site. Although doing this complicates address configuration management, it adds flexibility to the subnet structure because different subnetworks
may be set up with different numbers of hosts. Variable-length subnet masks (VLSM)
are now supported by most hosts, routers, and routing protocols. To understand
how VLSM works, consider the network topology illustrated in Figure 2-5, which
extends Figure 2-3 with two additional subnetworks using VLSM.
Figure 2-5 VLSM can be used to partition a network number into subnetworks with a differing
number of hosts on each subnet. Each router and host is configured with a subnet mask
in addition to its IP address. Most software supports VLSM, except for some older routing protocols (e.g., RIP version 1).
In the more complicated and realistic example shown in Figure 2-5, three different subnet masks are used within the site to subnet the network:
/24, /25, and /26. Doing so provides for a different number of hosts on each subnet. Recall that the number of hosts is constrained by the number of bits remaining in the IP address that are not used by the network/subnet number. For IPv4
and a /24 prefix, this allows for 32 – 24 = 8 bits (256 hosts); for /25, half as many
(128 hosts); and for /26, half further still (64 hosts). Note that each interface on
each host and router depicted is now given both an IP address and a subnet mask,
but the mask differs across the network topology. With an appropriate dynamic
routing protocol running among the routers (e.g., OSPF, IS-IS, RIPv2), traffic is
able to flow correctly among hosts at the same site or to/from the outside of the
site across the Internet.
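The host-count arithmetic for these three prefix lengths is simply 2 raised to the number of remaining host bits, minus the two reserved host numbers (all zeros, and all ones for broadcast):

```python
# Host capacity for the three VLSM prefix lengths used within the example site:
for prefix_len in (24, 25, 26):
    host_bits = 32 - prefix_len
    # Subtract 2 for the all-zeros and all-ones (broadcast) host numbers.
    print(f"/{prefix_len}: {2**host_bits} addresses, "
          f"{2**host_bits - 2} assignable hosts")
```

Each additional prefix bit halves both the address block and the number of assignable hosts.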
Although it may not seem obvious, there is a common case where a subnetwork contains only two hosts. When routers are connected together by a point-to-point link requiring an IP address to be assigned at each end, it is common
practice to use a /31 network prefix with IPv4, and it is now also a recommended
practice to use a /127 prefix for IPv6 [RFC6164].
Broadcast Addresses
In each IPv4 subnetwork, a special address is reserved to be the subnet broadcast
address. The subnet broadcast address is formed by setting the network/subnetwork portion of an IPv4 address to the appropriate value and all the bits in the Host
field to 1. Consider the left-most subnet from Figure 2-5. Its prefix is
The subnet broadcast address is constructed by inverting the subnet mask (i.e.,
changing all the 0 bits to 1 and vice versa) and performing a bitwise OR operation with the address of any of the computers on the subnet (or, equivalently, the
network/subnetwork prefix). Recall that the result of a bitwise OR operation is 1
if either input bit is 1. Using the IPv4 address of a host on the subnet, this computation can be
written as shown in Figure 2-6.
Figure 2-6
The subnet broadcast address is formed by ORing the complement of the subnet mask
with the IPv4 address. In this case of a /24 subnet mask, all of the remaining 32 – 24
= 8 bits are set to 1, giving a decimal value of 255 and the subnet broadcast address of
As shown in the figure, the subnet broadcast address for the subnet is Historically, a datagram using this type of address as
its destination has also been known as a directed broadcast. Such a broadcast can,
at least theoretically, be routed through the Internet as a single datagram until
reaching the target subnetwork, at which point it becomes a collection of broadcast datagrams that are delivered to all hosts on the subnetwork. Generalizing
this idea further, we could form a datagram with the destination IPv4 address and launch it into the Internet toward the network depicted in
Figure 2-3 or Figure 2-5. This would address all hosts at the target site.
Directed broadcasts were found to be such a big problem from a security point of
view that they are effectively disabled on the Internet today. [RFC0919] describes
the various types of broadcasts for IPv4, and [RFC1812] suggests that support
for forwarding directed broadcasts by routers should not only be available but
enabled by default. This policy was reversed by [RFC2644] so that by default
routers must now disable the forwarding of directed broadcasts and are even free
to omit support for the capability altogether.
In addition to the subnet broadcast address, the special-use address is reserved as the local net broadcast (also called limited broadcast),
which is never forwarded by routers. (See Section 2.5 for more detail on special-use addresses.) Note that although routers may not forward broadcasts, subnet
broadcasts and local net broadcasts destined for the same network to which a
computer is attached should be expected to work unless explicitly disabled by
end hosts. Such broadcasts do not require action by a router; link-layer broadcast
mechanisms, if available, are used for supporting them (see Chapter 3). Broadcast
addresses are typically used with protocols such as UDP/IP (Chapter 10) or ICMP
(Chapter 8) because these protocols do not involve two-party conversations as
TCP does. IPv6 lacks any broadcast addresses; for places where broadcast addresses
might be used in IPv4, IPv6 instead uses exclusively multicast addresses (see
Chapter 9).
IPv6 Addresses and Interface Identifiers
In addition to being longer than IPv4 addresses by a factor of 4, IPv6 addresses
also have some additional structure. Special prefixes used with IPv6 addresses
indicate the scope of an address. The scope of an IPv6 address refers to the portion
of the network where it can be used. Important examples of scopes include node-local (the address can be used only for communication on the same computer),
link-local (used only among nodes on the same network link or IPv6 prefix), or
global (Internet-wide). In IPv6, most nodes have more than one address in use,
often on the same network interface. Although this is supported in IPv4 as well, it
is not nearly as common. The set of addresses required in an IPv6 node, including
multicast addresses (see Section 2.5.2), is given in [RFC4291].
Another scope level called site-local using prefix fec0::/10 was originally supported by IPv6 but was deprecated for use with unicast addressing by [RFC3879].
The primary problems include how to handle such addresses given that they may
be reused in more than one site and a lack of clarity on precisely how to define
a “site.”
Link-local IPv6 addresses (and some global IPv6 addresses) use interface identifiers (IIDs) as a basis for unicast IPv6 address assignment. IIDs are used as the
low-order bits of an IPv6 address in all cases except where the address begins with
the binary value 000, and as such they must be unique within the same network
prefix. IIDs are ordinarily 64 bits long and are formed either directly from the
underlying link-layer MAC address of a network interface using a modified EUI-64
format [EUI64], or by another process that randomizes the value in hopes of providing some degree of privacy against address tracking (see Chapter 6).
In IEEE standards, EUI stands for extended unique identifier. EUI-64 identifiers start with a 24-bit Organizationally Unique Identifier (OUI) followed by a 40-bit
extension identifier assigned by the organization, which is identified by the first 24
bits. The OUIs are maintained and allocated by the IEEE registration authority
[IEEERA]. EUIs may be “universally administered” or “locally administered.” In
the Internet context, such addresses are typically of the universally administered variety.
Many IEEE standards-compliant network interfaces (e.g., Ethernet) have used
shorter-format addresses (48-bit EUIs) for years. The only significant difference
between the EUI-48 and EUI-64 formats is their length (see Figure 2-7).
Figure 2-7
The EUI-48 and EUI-64 formats defined by the IEEE. These are used within IPv6 to form
interface identifiers by inverting the u bit.
The OUI is 24 bits long and occupies the first 3 bytes of both EUI-48 and EUI-64 addresses. The low-order 2 bits of the first byte of these addresses are designated the u and g bits, respectively. The u bit, when set, indicates that the address
is locally administered. The g bit, when set, indicates that the address is a group or
multicast-type address. For the moment, we are concerned only with cases where
the g bit is not set.
An EUI-64 can be formed from an EUI-48 by copying the 24-bit OUI value from
the EUI-48 address to the EUI-64 address, placing the 16-bit value 1111111111111110
(hex FFFE) in the fourth and fifth bytes of the EUI-64 address, and then copying
the remaining organization-assigned bits. For example, the EUI-48 address 00-11-22-33-44-55 would become 00-11-22-FF-FE-33-44-55 in EUI-64. This mapping is the
first step used by IPv6 in constructing its interface identifiers when such underlying EUI-48 addresses are available. The modified EUI-64 used to form IIDs for
IPv6 addresses simply inverts the u bit.
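The EUI-48-to-modified-EUI-64 mapping just described can be sketched as follows, using the example address from the text:

```python
def eui48_to_modified_eui64(mac: str) -> str:
    """EUI-48 -> IPv6 interface identifier (modified EUI-64)."""
    octets = [int(b, 16) for b in mac.replace("-", ":").split(":")]
    # Insert ff:fe between the OUI (first 3 bytes) and the extension bytes.
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    eui64[0] ^= 0x02          # invert the u (universal/local) bit
    return ":".join(f"{b:02x}" for b in eui64)

print(eui48_to_modified_eui64("00-11-22-33-44-55"))
# 02:11:22:ff:fe:33:44:55
```

Inverting the u bit turns the universally administered 00 into 02, as seen in the first byte of the result.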
When an IPv6 interface identifier is needed for a type of interface that does not
have an EUI-48 address provided by its manufacturer, but has some other type
of underlying address (e.g., AppleTalk), the underlying address is left-padded with
zeros to form the interface identifier. Interface identifiers created for interfaces
that lack any form of other identifier (e.g., tunnels, serial links) may be derived
from some other interface on the same node (that is not on the same subnet) or
from some identifier associated with the node. Lacking any other options, manual
assignment is a last resort.

Using the Linux ifconfig command, we can investigate the way a link-local IPv6
address is formed:
Linux% ifconfig eth1
Link encap:Ethernet HWaddr 00:30:48:2A:19:89
inet addr: Bcast:
inet6 addr: fe80::230:48ff:fe2a:1989/64 Scope:Link
RX packets:1359970341 errors:0 dropped:0 overruns:0 frame:0
TX packets:1472870787 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4021555658 (3.7 GiB) TX bytes:3258456176 (3.0 GiB)
Base address:0x3040 Memory:f8220000-f8240000
Here we can see how the Ethernet’s hardware address 00:30:48:2A:19:89 is
mapped to an IPv6 address. First, it is converted to EUI-64, forming the address
00:30:48:ff:fe:2a:19:89. Next, the u bit is inverted, forming the IID value
02:30:48:ff:fe:2a:19:89. To complete the link-local IPv6 address, we use
the reserved link-local prefix fe80::/10 (see Section 2.5). Together, these form
the complete address, fe80::230:48ff:fe2a:1989. The appended /64 is the
standard length used for identifying the subnetwork/host portion of an IPv6
address derived from an IID as required by [RFC4291].
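The full link-local address formation can likewise be sketched, reproducing the address shown in the ifconfig output from the MAC address 00:30:48:2A:19:89:

```python
def mac_to_link_local(mac: str) -> str:
    """Form a link-local IPv6 address (fe80::/64) from an EUI-48 MAC."""
    octets = [int(b, 16) for b in mac.split(":")]
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    eui64[0] ^= 0x02                      # modified EUI-64: invert the u bit
    # Group the 8 IID bytes into four 16-bit blocks after the fe80:: prefix.
    blocks = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(blocks)

print(mac_to_link_local("00:30:48:2A:19:89"))   # fe80::230:48ff:fe2a:1989
```

The output matches the inet6 addr line above, confirming the two-step construction (EUI-64 expansion, then u-bit inversion and prefix attachment).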
Another interesting example is from a Windows system with IPv6. In this
case, we see a special tunnel endpoint, which is used to carry IPv6 traffic through
networks that otherwise support only IPv4:
c:\> ipconfig /all
Tunnel adapter Automatic Tunneling Pseudo-Interface:
Connection-specific DNS Suffix . : foo
Description . . . . . . . . . . . : Automatic Tunneling
Physical Address. . . . . . . . . : 0A-99-8D-87
Dhcp Enabled. . . . . . . . . . . :
IP Address. . . . . . . . . . . . : fe80::5efe:
Default Gateway . . . . . . . . . :
DNS Servers . . . . . . . . . . . :
NetBIOS over Tcpip. . . . . . . . : Disabled
In this case, we can see a special tunneling interface called ISATAP [RFC5214].
The so-called physical address is really the hexadecimal encoding of an IPv4
address: 0A-99-8D-87 is the same as Here, the OUI used (00-00-5E) is the one assigned to the IANA [IANA]. It is used in combination with
the hex value fe, indicating an embedded IPv4 address. This combination is
then combined with the standard link-local prefix fe80::/10 to give the address
fe80::5efe: The %2 appended to the end of the address is called
a zone ID in Windows and indicates the interface index number on the computer
corresponding to the IPv6 address. IPv6 addresses are often created by a process
of automatic configuration, a process we discuss in more detail in Chapter 6.
CIDR and Aggregation
In the early 1990s, after the adoption of subnet addressing to ease one form of
growing pains, the Internet started facing a serious set of scaling problems. Three
particular issues were considered so important as to require immediate attention:
1. By 1994, over half of all class B addresses had already been allocated. It was
expected that the class B address space would be exhausted by about 1995.
2. The 32-bit IPv4 address was thought to be inadequate to handle the size of
the Internet anticipated by the early 2000s.
3. The number of entries in the global routing table (one per network number), about 65,000 in 1995, was growing. As more and more class A, B, and
C routing entries appeared, routing performance would suffer.
These three issues were attacked by a group in the IETF called ROAD (for
ROuting and ADdressing), starting in 1992. They considered problems 1 and 3 to
be of immediate concern, and problem 2 as requiring a long-term solution. The
short-term solution they proposed was to effectively remove the class breakdown
of IP addresses and also promote the ability to aggregate hierarchically assigned
IP addresses. These measures would help problems 1 and 3. IPv6 was envisioned
to deal with problem 2.
In order to help relieve the pressure on the availability of IPv4 addresses (especially class B addresses), the classful addressing scheme was generalized using a
scheme similar to VLSM, and the Internet routing system was extended to support
Classless Inter-Domain Routing (CIDR) [RFC4632]. This provided a way to conveniently allocate contiguous address ranges that contained more than 255 hosts but
fewer than 65,536. That is, something other than single class B or multiple class
C network numbers could be allocated to sites. Using CIDR, any address range
is not predefined as being part of a class but instead requires a mask similar to a
subnet mask, sometimes called a CIDR mask. CIDR masks are not limited to a site
but are instead visible to the global routing system. Thus, the core Internet routers
must be able to interpret and process masks in addition to network numbers. This
combination of numbers, called a network prefix, is used for both IPv4 and IPv6
address management.
Eliminating the predefined separation of network and host number within an
IP address makes finer-grain allocation of IP address ranges possible. As with classful addressing, dividing the address spaces into chunks is most easily achieved by
grouping numerically contiguous addresses for use as a type or for some particular special purpose. Such groupings are now commonly expressed using a prefix
of the address space. An n-bit prefix is a predefined value for the first n bits of an
address. The value of n (the length of the prefix) is typically expressed as an integer in the range 0–32 for IPv4 and 0–128 for IPv6. It is generally appended to the
base IP address following a / character. Table 2-6 gives some examples of prefixes
and their corresponding IPv4 or IPv6 address ranges.
In the table, the bits defined and fixed by the prefix are enclosed in a box.
The remaining bits may be set to any combination of 0s and 1s, thereby covering the possible address range. Clearly, a smaller prefix length corresponds to a
larger number of possible addresses. In addition, the earlier classful addressing
approach is easily generalized by this scheme. For example, the class C network
number 192.125.3.0 can be written as the prefix 192.125.3.0/24, or 192.125.3/24.
Classful A and B network numbers can be expressed using /8 and /16 prefix
lengths, respectively.
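As a quick way to experiment with prefix notation, Python's standard ipaddress module (used here purely as an illustrative calculator) can enumerate the range a prefix covers, using the 192.125.3/24 example above:

```python
import ipaddress

# A /24 prefix fixes the first 24 bits of the address; the remaining
# 8 bits may take any value, covering 2**8 = 256 addresses.
net = ipaddress.ip_network("192.125.3.0/24")
print(net.network_address)    # 192.125.3.0
print(net.broadcast_address)  # 192.125.3.255
print(net.num_addresses)      # 256

# The same notation applies to IPv6, where prefix lengths range from 0 to 128.
v6 = ipaddress.ip_network("2001:db8::/32")
print(v6.num_addresses == 2 ** (128 - 32))  # True
```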
The Internet Address Architecture

Table 2-6 Examples of prefixes and their corresponding IPv4 or IPv6 address ranges
Removing the classful structure of IP addresses made it possible to allocate IP
address blocks in a wider variety of sizes. Doing so, however, did not address
the third concern from the list of problems; it did not help to reduce the number
of routing table entries. A routing table entry tells a router where to send traffic.
Essentially, the router inspects the destination IP address in an arriving datagram,
finds a matching routing table entry, and from the entry extracts the “next hop”
for the datagram. This is somewhat like driving to a particular address in a car
and in every intersection along the way finding a sign indicating what direction
to take to get to the next intersection on the way to the destination. If you consider
the number of signs that would have to be present at every intersection for every
possible destination neighborhood, you get some sense of the problem facing the
Internet in the early 1990s.
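The matching step a router performs can be sketched in a few lines. This is a toy lookup with invented table entries and next-hop names, choosing the most specific matching entry, in the spirit of a router's longest-matching-prefix rule:

```python
import ipaddress

# Hypothetical routing table: (prefix, next hop). A real router holds many more
# entries and uses specialized data structures for fast lookup.
table = [
    (ipaddress.ip_network("0.0.0.0/0"), "default-gw"),
    (ipaddress.ip_network("19.1.0.0/16"), "router-19.1"),
    (ipaddress.ip_network("19.2.0.0/16"), "router-19.2"),
]

def next_hop(dst):
    """Return the next hop of the matching entry with the longest prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in table if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("19.1.22.3"))  # router-19.1
print(next_hop("8.8.8.8"))    # default-gw
```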
At the time, few techniques were known to dramatically reduce the number
of routing table entries while maintaining shortest-path routes to all destinations
in the Internet. The best-known approach was published in a study of hierarchical
routing [KK77] in the late 1970s by Kleinrock and Kamoun. They observed that if
the network topology were arranged as a tree1 and addresses were assigned in a
way that was “sensitive” to this topology, very small routing tables could be used
while still maintaining shortest-path routes to all destinations. Consider Figure 2-8.
In this figure, circles represent routers and lines represent network links
between them. The left-hand and right-hand sides of the diagram show tree-shaped networks. The difference between them is the way addresses have been assigned to the routers. In the left-hand (a) side, addresses are essentially random—there is no direct relationship between the addresses and the location of
1. In graph theory, a tree is a connected graph with no cycles. For a network of routers and links, this
means that there is only one simple (nonduplicative) path between any two routers.
Figure 2-8
In a network with a tree topology, network addresses can be assigned in a special way so as to limit
the amount of routing information (“state”) that needs to be stored in a router. If addresses are
not assigned in this way (left side), shortest-path routes cannot be guaranteed without storing an
amount of state proportional to the number of nodes to be reached. While assigning addresses in
a way that is sensitive to the tree topology saves state, if the network topology changes, a reassignment of addresses is generally required.
the routers in the tree. On the right-hand (b) side of the diagram, the addresses
are assigned based upon where the router is located in the tree. If we consider
the number of entries each top router requires, we see that there is a significant difference.
The root (top) router of the tree on the left needs, in order to know a next hop for every possible destination, an entry for each of the eight routers "below" it in the tree. For any other destination, it simply routes to the cloud labeled "Other Parts of the Network." This results in a total of nine entries.
In contrast, the root of the right-hand tree requires only three entries in its routing table. Note that all of the routers on the left side of the right tree begin with the prefix 19.1 and all to the right begin with 19.2. Thus, the root's table need only give router 19.1 as the next hop for any destination starting with 19.1, and router 19.2 as the next hop for any destination starting with 19.2.
Any other destination goes to the cloud labeled “Other Parts of the Network.” This
results in a total of three entries. Note that this behavior is recursive—any router
in the (b) side of the tree never requires more entries than the number of links it
has. This is a direct result of the special method used to assign the addresses. Even
if more routers are added to the (b)-side tree, this nice property is maintained.
This is the essence of the hierarchical routing idea from [KK77].
In the Internet context, the hierarchical routing idea can be used in a specific
way to reduce the number of Internet routing entries that would be required otherwise. This is accomplished by a procedure known as route aggregation. It works by
joining multiple numerically adjacent IP prefixes into a single shorter prefix (called
an aggregate or summary) that covers more address space. Consider Figure 2-9.
Figure 2-9
In this example, the arrows indicate aggregation of two address prefixes to form one; the underlined prefixes are additions in each step. In the first step, the first two prefixes can be aggregated because they are numerically adjacent, but the third cannot. With the addition of a fourth, numerically adjacent prefix, all of them can be aggregated together in two further steps. With the final addition of an adjacent fifth prefix, a single covering aggregate is produced.
We start with three address prefixes on the left in Figure 2-9. The first two are numerically adjacent and can therefore be combined (aggregated). The arrows indicate where aggregation takes place. The third prefix cannot be aggregated in the first step because it is not numerically adjacent to the others. When a new prefix is added (underlined), it can be aggregated with the third prefix, forming a shorter prefix. That aggregate is now adjacent to the first one, so the two can be aggregated further. When the final prefix (underlined) is added, the two remaining prefixes can be aggregated as well. In this way, the original three prefixes and the two that were added can be aggregated into a single prefix.
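The aggregation steps can be reproduced with Python's ipaddress.collapse_addresses. The prefixes below are invented stand-ins for those in Figure 2-9, but the adjacency behavior they exhibit is the same:

```python
import ipaddress

def aggregate(prefixes):
    """Collapse numerically adjacent prefixes into the fewest covering ones."""
    nets = [ipaddress.ip_network(p) for p in prefixes]
    return [str(n) for n in ipaddress.collapse_addresses(nets)]

# The first two are numerically adjacent and merge; the third is not.
print(aggregate(["190.154.27.0/26", "190.154.27.64/26", "190.154.27.192/26"]))
# ['190.154.27.0/25', '190.154.27.192/26']

# Adding the missing adjacent /26 lets everything merge into one /24 ...
print(aggregate(["190.154.27.0/26", "190.154.27.64/26",
                 "190.154.27.192/26", "190.154.27.128/26"]))
# ['190.154.27.0/24']

# ... and an adjacent /24 yields a single /23 aggregate.
print(aggregate(["190.154.27.0/26", "190.154.27.64/26", "190.154.27.192/26",
                 "190.154.27.128/26", "190.154.26.0/24"]))
# ['190.154.26.0/23']
```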
2.5 Special-Use Addresses
Both the IPv4 and IPv6 address spaces include a few address ranges that are used
for special purposes (and are therefore not used in assigning unicast addresses).
For IPv4, these addresses are given in Table 2-7 [RFC5735].
Table 2-7 IPv4 special-use addresses (defined January 2010)

Prefix              Special Use
0.0.0.0/8           Hosts on the local network. May be used only as a source IP address.
10.0.0.0/8          Address for private networks (intranets). Such addresses never appear on the public Internet.
127.0.0.0/8         Internet host loopback addresses (same computer). Typically only 127.0.0.1 is used.
169.254.0.0/16      "Link-local" addresses—used only on a single link and generally assigned automatically. See Chapter 6.
172.16.0.0/12       Address for private networks (intranets). Such addresses never appear on the public Internet.
192.0.0.0/24        IETF protocol assignments (IANA reserved).
192.0.2.0/24        TEST-NET-1 addresses approved for use in documentation. Such addresses never appear on the public Internet.
192.88.99.0/24      Used for 6to4 relays (anycast addresses).
192.168.0.0/16      Address for private networks (intranets). Such addresses never appear on the public Internet.
198.18.0.0/15       Used for benchmarks and performance testing.
198.51.100.0/24     TEST-NET-2. Approved for use in documentation.
203.0.113.0/24      TEST-NET-3. Approved for use in documentation.
224.0.0.0/4         IPv4 multicast addresses (formerly class D); used only as destination addresses.
240.0.0.0/4         Reserved space (formerly class E), except 255.255.255.255.
255.255.255.255/32  Local network (limited) broadcast address.
In IPv6, a number of address ranges and individual addresses are used for
specific purposes. They are listed in Table 2-8 [RFC5156].
For both IPv4 and IPv6, address ranges not designated as special, multicast, or
reserved are available to be assigned for unicast use. Some unicast address space
(prefixes 10/8, 172.16/12, and 192.168/16 for IPv4 and fc00::/7 for IPv6) is reserved
for building private networks. Addresses from these ranges can be used by cooperating hosts and routers within a site or organization, but not across the global
Internet. Thus, these addresses are sometimes called nonroutable addresses. That
is, they will not be routed by the public Internet.
The management of private, nonroutable address space is entirely a local decision. The IPv4 private addresses are very common in home networks and for the
internal networks of moderately sized and large enterprises. They are frequently
used in combination with network address translation (NAT), which rewrites IP
addresses inside IP datagrams as they enter the Internet. We discuss NAT in detail
in Chapter 7.
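A host can check whether an address falls within these private ranges directly. The sketch below simply hard-codes the three IPv4 blocks and the IPv6 fc00::/7 block named above:

```python
import ipaddress

# The private-use (nonroutable) blocks reserved for building private networks.
PRIVATE_V4 = [ipaddress.ip_network(p)
              for p in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
ULA_V6 = ipaddress.ip_network("fc00::/7")

def is_private_use(addr):
    """True if addr lies in one of the reserved private-use ranges."""
    a = ipaddress.ip_address(addr)
    if a.version == 4:
        return any(a in net for net in PRIVATE_V4)
    return a in ULA_V6

print(is_private_use("192.168.1.10"))  # True
print(is_private_use("172.32.0.1"))    # False (172.16/12 ends at 172.31.255.255)
print(is_private_use("fd12::1"))       # True (fd00::/8 lies inside fc00::/7)
```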
Table 2-8 IPv6 special-use addresses (defined April 2008)

Prefix         Special Use
::/0           Default route entry. Not used for addressing.
::/128         The unspecified address; may be used as a source IP address.
::1/128        The IPv6 host loopback address; not used in datagrams sent outside the local host.
::ffff:0:0/96  IPv4-mapped addresses. Such addresses never appear in packet headers. For internal host use only.
::/96          IPv4-compatible addresses. Deprecated; not to be used.
2001::/32      Teredo addresses.
2001:10::/28   Overlay Routable Cryptographic Hash Identifiers. Such addresses never appear on the public Internet.
2001:db8::/32  Address range used for documentation and for examples. Such addresses never appear on the public Internet.
2002::/16      6to4 addresses of 6to4 tunnel relays.
3ffe::/16      Used by 6bone experiments. Deprecated; not to be used.
5f00::/8       Used by 6bone experiments. Deprecated; not to be used.
fc00::/7       Unique, local unicast addresses; not used on the global Internet.
fe80::/10      Link-local unicast addresses.
ff00::/8       IPv6 multicast addresses; used only as destination addresses. [RFC4291]
2.5.1 Addressing IPv4/IPv6 Translators
In some networks, it may be attractive to perform translation between IPv4 and
IPv6 [RFC6127]. A framework for this has been developed for unicast translations
[RFC6144], and one is currently under development for multicast translations [IDv4v6mc]. One of the basic functions is to provide automated, algorithmic translation
of addresses. Using the “well-known” IPv6 prefix 64:ff9b::/96 or another assigned
prefix, [RFC6052] specifies how this is accomplished for unicast addresses.
The scheme makes use of a specialized address format called an IPv4-embedded IPv6 address. This type of address contains an IPv4 address inside an IPv6
address. It can be encoded using one of six formats, based on the length of the IPv6
prefix, which is required to be one of the following: 32, 40, 48, 56, 64, or 96. The
formats available are shown in Figure 2-10.
In the figure, the prefix is either the well-known prefix or a prefix unique to
the organization deploying translators. Bits 64–71 must be set to 0 to maintain
compatibility with identifiers specified in [RFC4291]. The suffix bits are reserved
and should be set to 0. The method to produce an IPv4-embedded IPv6 address
is then simple: concatenate the IPv6 prefix with the 32-bit IPv4 address, ensuring that bits 64–71 are set to 0 (inserting them if necessary). Append the suffix as
0 bits until a 128-bit address is produced.
Figure 2-10 IPv4 addresses can be embedded within IPv6 addresses, forming an IPv4-embedded IPv6 address. Six different formats are available, depending on the IPv6 prefix length in use. The well-known prefix 64:ff9b::/96 can be used for automatic translation between IPv4 and IPv6 unicast addresses.
IPv4-embedded IPv6 addresses using the 96-bit prefix option may be expressed using the convention for IPv4-mapped addresses mentioned previously (Section 2.2 of [RFC4291]). For example, embedding the IPv4 address 198.51.100.16 with the well-known prefix produces the address 64:ff9b::c633:6410, which may also be written 64:ff9b::198.51.100.16.
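For the 96-bit well-known prefix, the construction amounts to placing the IPv4 address in the low 32 bits. A sketch, using a TEST-NET documentation address as the input:

```python
import ipaddress

WKP = ipaddress.ip_network("64:ff9b::/96")  # the well-known prefix [RFC6052]

def embed(ipv4):
    """IPv4-embedded IPv6 address for the 96-bit prefix case."""
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(WKP.network_address) | int(v4))

addr = embed("198.51.100.16")
print(addr)                                            # 64:ff9b::c633:6410
# The embedded IPv4 address is recoverable from the low 32 bits.
print(ipaddress.IPv4Address(int(addr) & 0xFFFFFFFF))   # 198.51.100.16
```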
2.5.2 Multicast Addresses
Multicast addressing is supported by IPv4 and IPv6. An IP multicast address (also
called group or group address) identifies a group of host interfaces, rather than a
single one. Generally speaking, the group could span the entire Internet. The
portion of the network that a single group covers is known as the group’s scope
[RFC2365]. Common scopes include node-local (same computer), link-local (same
subnet), site-local (applicable to some site), global (entire Internet), and administrative. Administratively scoped addresses may be used in an area of the network that
has been manually configured into routers. A site administrator may configure
routers as admin-scope boundaries, meaning that multicast traffic of the associated
group is not forwarded past the router. Note that the site-local and administrative
scopes are available for use only with multicast addressing.
Under software control, the protocol stack in each Internet host is able to join
or leave a multicast group. When a host sends something to a group, it creates a
datagram using one of its own (unicast) IP addresses as the source address and
a multicast IP address as the destination. All hosts in scope that have joined the
group should receive any datagrams sent to the group. The sender is not generally
aware of the hosts receiving the datagram unless they explicitly reply. Indeed, the
sender does not even know in general how many hosts are receiving its datagrams.
The original multicast service model, described so far, has become known as
any-source multicast (ASM). In this model, any sender may send to any group; a
receiver joins the group by specifying only the group address. A newer approach,
called source-specific multicast (SSM) [RFC3569][RFC4607], uses only a single sender
per group (also see the errata to [RFC4607]). In this case, when joining a group,
a host specifies the address of a channel, which comprises both a group address
and a source IP address. SSM was developed to avoid some of the complexities in
deploying the ASM model. Although neither form of multicast is widely available
throughout the Internet, it seems that SSM is now the more likely candidate for wide-scale adoption.
Understanding and implementing wide area multicasting has been an ongoing effort within the Internet community for more than a decade, and a large
number of protocols have been developed to support it. Full details of how global
Internet multicasting works are therefore beyond the scope of this text, but the
interested reader is directed to [IMR02]. Details of how local IP multicast operates
are given in Chapter 9. For now, we shall discuss the format and meaning of IPv4
and IPv6 multicast addresses.
2.5.3 IPv4 Multicast Addresses
For IPv4, the class D space (224.0.0.0–239.255.255.255) has been reserved for supporting multicast. With 28 bits free, this provides for the possibility of 2^28 = 268,435,456 host groups (each host group is an IP address). This address space is
divided into major sections based on the way they are allocated and handled with
respect to routing [IP4MA]. Those major sections are presented in Table 2-9.
The blocks of addresses at the beginning of this space are allocated for the exclusive
use of certain application protocols or organizations. These are allocated as the
result of action by the IANA or by the IETF. The local network control block is
limited to the local network of the sender; datagrams sent to those addresses are
never forwarded by multicast routers. The All Hosts group (224.0.0.1) is one group
in this block. The internetwork control block is similar to the local network control
range but is intended for control traffic that needs to be routed off the local link.
An example from this block is the Network Time Protocol (NTP) multicast group
(224.0.1.1) [RFC5905].
The first ad hoc block was constructed to hold addresses that did not fall into
either the local or internetwork control blocks. Most of the allocations in this range
are for commercial services, some of which do not (or never will) require global
address allocations; they may eventually be returned in favor of GLOP2 addressing (see the next paragraphs). The SDP/SAP block contains addresses used by
2. GLOP is not an acronym but instead simply a name for a portion of address space.
Table 2-9 Major sections of IPv4 class D address space used for supporting multicast

Range (Inclusive)            Special Use
224.0.0.0–224.0.0.255        Local network control; not forwarded
224.0.1.0–224.0.1.255        Internetwork control; forwarded normally
224.0.2.0–224.0.255.255      Ad hoc block I
224.2.0.0–224.2.255.255      SDP/SAP block
224.3.0.0–224.4.255.255      Ad hoc block II
232.0.0.0–232.255.255.255    Source-specific multicast (SSM)
233.0.0.0–233.251.255.255    GLOP
233.252.0.0–233.255.255.255  Ad hoc block III (233.252.0.0/24 is reserved for documentation)
234.0.0.0–234.255.255.255    Unicast-prefix-based IPv4 multicast addresses
239.0.0.0–239.255.255.255    Administrative scope
applications such as the session directory tool (SDR) [H96] that send multicast
session announcements using the Session Announcement Protocol (SAP) [RFC2974].
Originally a component of SAP, the newer Session Description Protocol (SDP)
[RFC4566] is now used not only with IP multicast but also with other mechanisms
to describe multimedia sessions.
The other major address blocks were created somewhat later in the evolution of
IP multicast. The SSM block is used by applications employing SSM in combination
with their own unicast source IP address in forming SSM channels, as described
previously. In the GLOP block, multicast addresses are based on the autonomous
system (AS) number of the host on which the application allocating the address
resides. AS numbers are used by Internet-wide routing protocols among ISPs in
order to aggregate routes and apply routing policies. Each such AS has a unique
AS number. Originally, AS numbers were 16 bits but have now been extended to
32 bits [RFC4893]. GLOP addresses are generated by placing a 16-bit AS number in
the second and third bytes of the IPv4 multicast address, leaving room for 1 byte to
represent the possible multicast addresses (i.e., up to 256 addresses). Thus, it is possible to map back and forth between a 16-bit AS number and the GLOP multicast
address range associated with an AS number. Although this computation is simple,
several online calculators have been developed to do it, too.3
3. For example,
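The GLOP mapping itself is easy to compute. This sketch uses an arbitrary 16-bit AS number for illustration:

```python
import ipaddress

def glop_block(asn):
    """233/8 multicast block for a 16-bit AS number (GLOP addressing)."""
    if not 0 <= asn < 2 ** 16:
        raise ValueError("GLOP covers only 16-bit AS numbers")
    # AS number occupies the second and third bytes of the address.
    return ipaddress.ip_network(f"233.{asn >> 8}.{asn & 0xFF}.0/24")

def glop_asn(block):
    """Inverse mapping: recover the AS number from a GLOP /24."""
    b = ipaddress.ip_network(block).network_address.packed
    return (b[1] << 8) | b[2]

print(glop_block(5662))            # 233.22.30.0/24
print(glop_asn("233.22.30.0/24"))  # 5662
```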
The most recent of the IPv4 multicast address allocation mechanisms associates
a number of multicast addresses with an IPv4 unicast address prefix. This is called
unicast-prefix-based multicast addressing (UBM) and is described in [RFC6034]. It is
based on a similar structure developed earlier for IPv6 that we discuss in Section
2.5.4. The UBM IPv4 address range is 234.0.0.0 through 234.255.255.255. A unicast
address allocation with a /24 or shorter prefix may make use of UBM addresses.
Allocations with fewer addresses (i.e., a /25 or longer prefix) must use some other
mechanism. UBM addresses are constructed as a concatenation of the 234/8 prefix, the allocated unicast prefix, and the multicast group ID. Figure 2-11 shows the format.
Figure 2-11
The IPv4 UBM address format. For unicast address allocations of /24 or shorter, associated multicast addresses are allocated based on a concatenation of the prefix 234/8, the
assigned unicast prefix, and the multicast group ID. Allocations with shorter unicast
prefixes therefore contain more unicast and multicast addresses.
To determine the set of UBM addresses associated with a unicast allocation,
the allocated prefix is simply prepended with the 234/8 prefix. For example, the unicast IPv4 address prefix 192.0.2.0/24 has a single associated UBM address, 234.192.0.2. It is also possible to determine the owner of a multicast address by simply "left-shifting" the multicast address by 8 bit positions. We know that the multicast address range 234.128.32.0/24 is allocated to UC Berkeley, for example, because the corresponding unicast IPv4 address space 128.32.0.0/16 (the "left-shifted" version of 234.128.32.0/24) is owned by UC Berkeley (as can be determined using a WHOIS query).
UBM addresses may offer advantages over the other types of multicast
address allocations. For example, they do not carry the 16-bit restriction for AS
numbers used by GLOP addressing. In addition, they are allocated as a consequence of already-existing unicast address space allocations. Thus, sites wishing
to use multicast addresses already know which addresses they can use without
further coordination. Finally, UBM addresses are allocated at a finer granularity than GLOP addresses, which correspond to AS number allocations. In today’s
Internet, a single AS number may be associated with multiple sites, frustrating the
simple mapping between address and owner supported by UBM.
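Both directions of the UBM mapping can be sketched with simple bit shifts (the prefixes below reuse the examples in the text):

```python
import ipaddress

def ubm_block(unicast_prefix):
    """UBM multicast block for a /24-or-shorter unicast allocation [RFC6034]."""
    net = ipaddress.ip_network(unicast_prefix)
    if net.prefixlen > 24:
        raise ValueError("UBM requires a /24 or shorter unicast prefix")
    # Prepend 234/8: shift the unicast bits right to make room for it.
    addr = (234 << 24) | (int(net.network_address) >> 8)
    return ipaddress.ip_network((addr, net.prefixlen + 8))

def ubm_owner(multicast_prefix):
    """Inverse: 'left-shift' a UBM block by 8 bits to find the unicast owner."""
    net = ipaddress.ip_network(multicast_prefix)
    addr = (int(net.network_address) << 8) & 0xFFFFFFFF
    return ipaddress.ip_network((addr, net.prefixlen - 8))

print(ubm_block("192.0.2.0/24"))     # 234.192.0.2/32
print(ubm_block("128.32.0.0/16"))    # 234.128.32.0/24
print(ubm_owner("234.128.32.0/24"))  # 128.32.0.0/16
```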
The administratively scoped address block can be used to limit the distribution of multicast traffic to a particular collection of routers and hosts. These are
the multicast analogs of private unicast IP addresses. Such addresses should not
be used for distributing multicast into the Internet, as most of them are blocked at
enterprise boundaries. Large sites sometimes subdivide administratively scoped
multicast addresses to cover specific useful scopes (e.g., work group, division, and
geographical area).
2.5.4 IPv6 Multicast Addresses
For IPv6, which is considerably more aggressive in its use of multicast, the prefix
ff00::/8 has been reserved for multicast addresses, and 112 bits are available for
holding the group number, providing for the possibility of

2^112 = 5,192,296,858,534,827,628,530,496,329,220,096

groups. Its general format is as shown in Figure 2-12.
Figure 2-12 The base IPv6 multicast address format includes 4 flag bits (0, reserved; R, contains rendezvous point; P, uses unicast prefix; T, is transient). The 4-bit Scope value indicates the
scope of the multicast (global, local, etc.). The Group ID is encoded in the low-order 112
bits. If the P or R bit is set, an alternative format is used.
The second byte of the IPv6 multicast address includes a 4-bit Flags field and a
4-bit Scope ID field in the second nibble. The Scope field is used to indicate a limit
on the distribution of datagrams addressed to certain multicast addresses. The
hexadecimal values 0, 3, and f are reserved. The hex values 6, 7, and 9 through d
are unassigned. The values are given in Table 2-10, which is based on Section 2.7
of [RFC4291].
Table 2-10 Values of the IPv6 Scope field

Value  Scope
0      Reserved
1      Interface-local (node-local)
2      Link-local
3      Reserved
4      Admin-local
5      Site-local
8      Organization-local
e      Global
f      Reserved
Many IPv6 multicast addresses allocated by the IANA for permanent use
intentionally span multiple scopes. Each of these is defined with a certain offset
relative to every scope (such addresses are called scope-relative or variable-scope
for this reason). For example, the variable-scope multicast address ff0x::101 is
reserved for NTP servers by [IP6MA]. The x indicates variable scope; Table 2-11
shows some of the addresses defined by this reservation.
Table 2-11 Example permanent variable-scope IPv6 multicast address reservations for NTP (101)

Address    Meaning
ff01::101  All NTP servers on the same machine
ff02::101  All NTP servers on the same link/subnet
ff04::101  All NTP servers within some administratively defined scope
ff05::101  All NTP servers at the same site
ff08::101  All NTP servers at the same organization
ff0e::101  All NTP servers in the Internet
In IPv6, the multicast address format given in Figure 2-12 is used when the
P and R bit fields are set to 0. When P is set to 1, two alternative methods exist
for multicast addresses that do not require global agreement on a per-group basis.
These are described in [RFC3306] and [RFC4489]. In the first, called unicast-prefix-based IPv6 multicast address assignment, a unicast prefix allocation provided by an
ISP or address allocation authority also effectively allocates a collection of multicast
addresses, thereby limiting the amount of global coordination required for avoiding duplicates. With the second method, link-scoped IPv6 multicast, interface identifiers are used, and multicast addresses are based on a host’s IID. To understand
how these various formats work, we need to first understand the use of the bit
fields in the IPv6 multicast address in more detail. They are defined in Table 2-12.
Table 2-12 IPv6 multicast address flags

Bit Field  Meaning
R          Rendezvous point flag (0, regular; 1, RP address included)
P          Prefix flag (0, regular; 1, address based on unicast prefix)
T          Transient flag (0, permanently assigned; 1, transient)
The T bit field, when set, indicates that the included group address is temporary or dynamically allocated; it is not one of the standard addresses defined in
[IP6MA]. When the P bit field is set to 1, the T bit must also be set to 1. When this
happens, a special format of IPv6 multicast addresses based on unicast address
prefixes is enabled, as shown in Figure 2-13.
Figure 2-13
IPv6 multicast addresses can be created based upon unicast IPv6 address assignments
[RFC3306]. When this is done, the P bit field is set to 1, and the unicast prefix is carried
in the address, along with a 32-bit group ID. This form of multicast address allocation
eases the need for global address allocation agreements.
We can see here how using unicast-prefix-based addressing changes the format of the multicast address to include space for a unicast prefix and its length,
plus a smaller (32-bit) group ID. The purpose of this scheme is to provide a way
of allocating globally unique IPv6 multicast addresses without requiring a new
global mechanism for doing so. Because IPv6 unicast addresses are already allocated globally in units of prefixes (see Section 2.6), it is possible to use bits of this
prefix in multicast addresses, thereby leveraging the existing method of unicast
address allocation for multicast use. For example, an organization receiving a unicast prefix allocation of 3ffe:ffff:1::/48 would also consequently receive a unicast-based multicast prefix allocation of ff3x:30:3ffe:ffff:1::/96, where x is any valid
scope. SSM is also supported using this format by setting the prefix length and
prefix fields to 0, effectively requiring the prefix ff3x::/32 (where x is any valid
scope value) for use in all such IPv6 SSM multicast addresses.
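Packing the [RFC3306] fields is mechanical. This sketch builds the example prefix above, with an assumed scope and a hypothetical group ID:

```python
import ipaddress

def unicast_based_mcast(unicast_prefix, scope, group_id):
    """IPv6 multicast address derived from a unicast prefix ([RFC3306] layout)."""
    net = ipaddress.ip_network(unicast_prefix)
    assert net.prefixlen <= 64 and 0 <= group_id < 2 ** 32
    addr = (0xFF << 120) | (0x3 << 116)             # flgs = 0011: P = 1, T = 1
    addr |= scope << 112                            # 4-bit Scope field
    addr |= net.prefixlen << 96                     # 8-bit Prefix Length field
    addr |= (int(net.network_address) >> 64) << 32  # 64-bit Prefix field
    addr |= group_id                                # 32-bit Group ID
    return ipaddress.IPv6Address(addr)

# Site-local scope (5) with a hypothetical group ID of 0x1234:
print(unicast_based_mcast("3ffe:ffff:1::/48", 5, 0x1234))
# ff35:30:3ffe:ffff:1::1234
```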
To create unique multicast addresses of link-local scope, a method based on
IIDs can be used [RFC4489], which is preferred to unicast-prefix-based allocation
when only link-local scope is required. In this case, another form of IPv6 multicast
address structure is used (see Figure 2-14).
Figure 2-14 The IPv6 link-scoped multicast address format. Applicable only to link- (or smaller)
scoped addresses, the multicast address can be formed by combining an IPv6 interface
ID and a group ID. The mapping is straightforward, and all such addresses use prefixes of the form ff3x:00ff::/32, where x is the scope ID and is less than 3.
The address format shown in Figure 2-14 is very similar to the format in Figure 2-13, except that the Prefix Length field is set to 255, and instead of a prefix
being carried in the subsequent field, an IPv6 IID is instead. The advantage of
this structure over the previous one is that no prefix need be supplied in forming
the multicast address. In ad hoc networks where no routers may be available, an
individual machine can form unique multicast addresses based on its own IID
without having to engage in a complex agreement protocol. As stated before, this
format works only for link- or node-local multicast scoping, however. When larger
scopes are required, either unicast-prefix-based addressing or permanent multicast addresses are used. As an example of this format, a host with IID 02-11-22-33-44-55-66-77 would use multicast addresses of the form ff3x:00ff:0211:2233:4455:6677:gggg:gggg, where x is a scope value of 2 or less and gggg:gggg is the hexadecimal notation for a 32-bit multicast group ID.
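The link-scoped form can be sketched the same way, with the Prefix Length field fixed at 255 (0xff) per [RFC4489]; the IID and group ID used here are hypothetical:

```python
import ipaddress

def link_scoped_mcast(iid, group_id, scope=2):
    """IID-based multicast address ([RFC4489]); scope must be 2 (link) or less."""
    assert scope <= 2 and 0 <= iid < 2 ** 64 and 0 <= group_id < 2 ** 32
    addr = (0xFF << 120) | (0x3 << 116) | (scope << 112)  # flgs: P = 1, T = 1
    addr |= 0xFF << 96                                    # Prefix Length = 255
    addr |= (iid << 32) | group_id                        # 64-bit IID, group ID
    return ipaddress.IPv6Address(addr)

print(link_scoped_mcast(0x0211223344556677, 0x99999999))
# ff32:ff:211:2233:4455:6677:9999:9999
```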
The bit field we have yet to discuss is the R bit field. It is used when unicast-prefix-based multicast addressing is used (the P bit is set) along with a multicast
routing protocol that requires knowledge of a rendezvous point.
A rendezvous point (RP) is the IP address of a router set up to handle multicast
routing for one or more multicast groups. RPs are used by the PIM-SM protocol [RFC4601] to help senders and receivers participating in the same multicast
group to find each other. One of the problems encountered in deploying Internetwide multicast has been locating rendezvous points. This scheme overloads the
IPv6 multicast address to include an RP address. Therefore, it is simple to find an
RP from a group address by just selecting the appropriate subset of bits.
When the R bit is set, the modified format for a multicast address shown in
Figure 2-15 is used.
Figure 2-15
The unicast IPv6 address of an RP can be embedded inside an IPv6 multicast address
[RFC3956]. Doing so makes it straightforward to find an RP associated with an address
for routing purposes. An RP is used by the multicast routing system in order to coordinate multicast senders with receivers when they are not on the same subnetwork.
The format shown in Figure 2-15 is similar to the one shown in Figure 2-13,
but SSM is not used (so the prefix length cannot be zero). In addition, a new 4-bit
field called the RIID is introduced. To form the IPv6 address of an RP based on
a multicast address of the form in Figure 2-15, the number of bits indicated in
the Prefix Length field are extracted from the Prefix field and placed as the upper
bits in a fresh IPv6 address. Then, the contents of the RIID field are used as the
low-order 4 bits of the RP address. The rest is filled with zeros. As an example,
consider a multicast address ff75:940:2001:db8:dead:beef:f00d:face. In this case,
the scope is 5 (site-local), the RIID field has the value 9, and the prefix length is
0x40 = 64 bits. The prefix itself is therefore 2001:db8:dead:beef, so the RP address
is 2001:db8:dead:beef::9. More examples are given in [RFC3956].
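Recovering the RP is a matter of bit extraction. The sketch below reproduces the worked example from the text:

```python
import ipaddress

def rp_from_group(mcast):
    """Extract the RP unicast address from an embedded-RP group [RFC3956]."""
    a = int(ipaddress.IPv6Address(mcast))
    riid = (a >> 104) & 0xF             # 4-bit RIID field
    plen = (a >> 96) & 0xFF             # Prefix Length field, in bits
    prefix = (a >> 32) & (2 ** 64 - 1)  # 64-bit Prefix field
    # Take plen high bits of the prefix, place them at the top of a fresh
    # address, and use the RIID as the low-order 4 bits; the rest is zero.
    rp = ((prefix >> (64 - plen)) << (128 - plen)) | riid
    return ipaddress.IPv6Address(rp)

print(rp_from_group("ff75:940:2001:db8:dead:beef:f00d:face"))
# 2001:db8:dead:beef::9
```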
As with IPv4, there are a number of reserved IPv6 multicast addresses. These
addresses are grouped by scope, except for the variable-scope addresses mentioned before. Table 2-13 gives a list of the major reservations from the IPv6 multicast space. Consult [IP6MA] for additional information.
Table 2-13 Reserved addresses within the IPv6 multicast address space

Address             Special Use
ff01::1             All nodes (node-local scope)
ff01::2             All routers (node-local scope)
ff02::1             All nodes (link-local scope)
ff02::2             All routers (link-local scope)
ff02::4             DVMRP routers
ff02::6             OSPFIGP designated routers
ff02::9             RIPng routers
ff02::a             EIGRP routers
ff02::d             PIM routers
ff02::16            MLDv2-capable routers
ff02::6a            All snoopers
ff02::1:2           All DHCP agents
ff02::1:ff00:0/104  Solicited-node address range
ff05::2             All routers (site-local scope)
ff05::1:3           All DHCP servers
ff0x::130           Aggregate Server Access Protocol
ff0x::18c           All ACs address (CAPWAP)
ff3x::/32           SSM block
2.5.5 Anycast Addresses
An anycast address is a unicast IPv4 or IPv6 address that identifies a different host
depending on where in the network it is used. This is accomplished by configuring Internet routers to advertise the same unicast routes from multiple locations in
the Internet. Thus, an anycast address refers not to a single host in the Internet, but
to the “most appropriate” or “closest” single host that is responding to the anycast
address. Anycast addressing is used most frequently for finding a computer that
provides a common service [RFC4786]. For example, a datagram sent to an anycast
address could be used to find a DNS server (see Chapter 11), a 6to4 gateway that
encapsulates IPv6 traffic in IPv4 tunnels [RFC3068], or RPs for multicast routing.
Allocation
IP address space is allocated, usually in large chunks, by a collection of hierarchically organized authorities. The authorities are generally organizations that allocate address space to various owners—usually ISPs or other smaller authorities.
Authorities are most often involved in allocating portions of the global unicast
address space, but other types of addresses (multicast and special-use) are also
sometimes allocated. The authorities can make allocations to users for an undetermined amount of time, or for a limited time (e.g., for running experiments). The
top of the hierarchy is the IANA [IANA], which has wide-ranging responsibility for allocating IP addresses and other types of numbers used in the Internet
Unicast
For unicast IPv4 and IPv6 address space, the IANA delegates much of its allocation
authority to a few regional Internet registries (RIRs). The RIRs coordinate with each
other through an organization formed in 2003 called the Number Resource Organization (NRO) [NRO]. At the time of writing (mid-2011), the set of RIRs includes
those shown in Table 2-14, all of which participate in the NRO. Note in addition
that, as of early 2011, all the remaining unicast IPv4 address space held by IANA
for allocation had been handed over to these RIRs.
These entities typically deal with relatively large address blocks [IP4AS]
[IP6AS]. They allocate address space to smaller registries operating in countries
(e.g., Australia and Singapore) and to large ISPs. ISPs, in turn, provide address
space to their customers and themselves. When users sign up for Internet service, they are ordinarily provided a (typically small) fraction or range of their
ISP’s address space in the form of an address prefix. These address ranges are
owned and managed by the customer’s ISP and are called provider-aggregatable
(PA) addresses because they consist of one or more prefixes that can be aggregated
with other prefixes the ISP owns. Such addresses are also sometimes called nonportable addresses. Switching providers typically requires customers to change the
Table 2-14 Regional Internet registries that participate in the NRO
RIR Name                                                          Area of Responsibility
AfriNIC—African Network Information Center                        Africa
APNIC—Asia Pacific Network Information Center                     Asia/Pacific Area
ARIN—American Registry for Internet Numbers                       North America
LACNIC—Regional Latin America and Caribbean IP Address Registry   Latin America and some Caribbean islands
RIPE NCC—Réseaux IP Européens Network Coordination Centre         Europe, Middle East, Central Asia
IP prefixes on all computers and routers they have that are attached to the Internet
(an often unpleasant operation called renumbering).
An alternative type of address space is called provider-independent (PI) address
space. Addresses allocated from PI space are allocated to the user directly and
may be used with any ISP. However, because such addresses are owned by the
customer, they are not numerically adjacent to the ISP’s own addresses and are
therefore not aggregatable. An ISP being asked to provide routing for a customer’s
PI addresses may require additional payment for service or simply not agree to
support such a configuration. In some sense, an ISP that agrees to provide routing
for a customer’s PI addresses is taking on an extra cost relative to other customers
by having to increase the size of its routing tables. On the other hand, many sites
prefer to use PI addresses, and might be willing to pay extra for them, because
it helps to avoid the need to renumber when switching ISPs (avoiding what has
become known as provider lock).

Examples
It is possible to use the Internet WHOIS service to determine how address space
has been allocated. For example, we can form a query for information about the
IPv4 address by accessing the corresponding URL http://whois.
Here we see that the address is really part of the network called
SPEK-SEA5-PART-1, which has been allocated the address range
Furthermore, we can see that SPEK-SEA5-PART-1’s address range is a portion of
the PA address space called NET-72-1-128-0-1. We can formulate a query for
information about this network by visiting the URL
Direct Allocation
This record indicates that the address range (called by the “handle” or name NET-72-1-128-0-1) has been directly allocated out of the address
range managed by ARIN. More details on data formats and the various methods ARIN supports for WHOIS queries can be found at [WRWS].
We can look at a different type of result using one of the other RIRs. For example, if we search for information regarding an IPv4 address allocated in Europe using the Web query interface at the RIPE NCC, we obtain the following result:
% This is the RIPE Database query service.
% The objects are in RPSL format.
% The RIPE Database is subject to Terms and Conditions.
% See
% Note: This output has been filtered.
% To receive output for a database update, use the "-B" flag.
% Information related to ' -'
inetnum: -
descr: World Intellectual Property Organization
descr: UN Specialized Agency
status: ASSIGNED PI
source: RIPE # Filtered
Here, we can see that the address is a portion of the
block allocated to WIPO. Note that the status of this block is ASSIGNED PI, meaning that this particular block of addresses is of the provider-independent variety.
The reference to RPSL indicates that the database records are in the Routing Policy
Specification Language [RFC2622][RFC4012], used by ISPs to express their routing
policies. Such information allows network operators to configure routers to help
minimize Internet routing instabilities.
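WHOIS lookups like the ones above can also be made programmatically. The underlying protocol (RFC 3912) is simply one query line sent over a TCP connection to port 43, answered by free-form text. The following sketch shows the idea; the port parameter exists only for convenience, and the server name in the comment is one possibility:

```python
import socket

def whois_query(server, query, port=43, timeout=10):
    """Minimal WHOIS client (RFC 3912): connect to TCP port 43, send one
    query line terminated by CRLF, and read until the server closes."""
    with socket.create_connection((server, port), timeout=timeout) as s:
        s.sendall(query.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:        # server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# Example use (requires network access):
#   print(whois_query("whois.arin.net", ""))
```

Each RIR applies its own query syntax and output format on top of this simple transport, which is why the ARIN and RIPE results shown above look different.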
Multicast
In IPv4 and IPv6, multicast addresses (i.e., group addresses) can be described based
on their scope, the way they are determined (statically, dynamically by agreement,
or algorithmically), and whether they are used for ASM or SSM. Guidelines have
been constructed for allocation of these groups ([RFC5771] for IPv4; [RFC3307] for
IPv6) and the overall architecture is detailed in [RFC6308]. The groups that are
not of global scope (e.g., administratively scoped addresses and IPv6 link-scoped
multicast addresses) can be reused in various parts of the Internet and are either
configured by a network administrator out of an administratively scoped address
block or selected automatically by end hosts. Globally scoped addresses that are
statically allocated are generally fixed and may be hard-coded into applications.
This type of address space is limited, especially in IPv4, so such addresses are
really intended for uses applicable to any Internet site. Algorithmically determined globally scoped addresses can be created based on AS numbers, as in
GLOP, or an associated unicast prefix allocation. Note that SSM can use globally
scoped addresses (i.e., from the SSM block), administratively scoped addresses, or
unicast-prefix-based IPv6 addresses where the prefix is effectively zero.
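The GLOP mapping mentioned above is simple enough to state in a few lines of Python (a sketch; the function name is ours). [RFC3180] places a 16-bit AS number in the middle two octets of the 233/8 block, giving each AS its own /24 of multicast space:

```python
def glop_block(asn):
    """GLOP (RFC 3180): map a 16-bit AS number into a /24 under 233/8 by
    placing the AS number in the middle two octets of the address."""
    if not 0 <= asn <= 0xFFFF:
        raise ValueError("GLOP is defined only for 16-bit AS numbers")
    return f"233.{asn >> 8}.{asn & 0xFF}.0/24"

print(glop_block(5662))   # 233.22.30.0/24
```

For example, AS 5662 (0x161E) maps to 233.22.30.0/24, since 0x16 is 22 and 0x1E is 30.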
As we can see from the relatively large number of protocols and the complexity of the various multicast address formats, multicast address management is a
formidable issue (not to mention global multicast routing [RFC5110]). From a typical user’s point of view, multicasting is used rarely and may be of limited concern.
From a programmer’s point of view, it may be worthwhile to support multicast
in application designs, and some insight has been provided into how to do so
[RFC3170]. For network administrators faced with implementing multicast, some
interaction with the service provider is likely necessary. In addition, some guidelines for multicast address allocation have been developed by vendors [CGEMA].
Unicast Address Assignment
Once a site has been allocated a range of unicast IP addresses, typically from its
ISP, the site or network administrator must determine how to assign addresses in
the address range to each network interface and how to set up the subnet structure.
If the site has only a single physical network segment (e.g., most private homes),
this process is relatively straightforward. For larger enterprises, especially those
receiving service from multiple ISPs and that use multiple physical network segments distributed over a large geographical area, this process can be complicated.
We shall begin to see how this works by looking at the case where a home user
uses a private address range and a single IPv4 address provided by an ISP. This is
a common scenario today. We then move on to provide some introductory guidance for more complicated situations.
Single Provider/No Network/Single Address
The simplest type of Internet service that can be obtained today is to receive a single
IP address (typically IPv4 only in the United States) from an ISP to be used with a
single computer. For services such as DSL, the single address might be assigned as
the end of a point-to-point link and might be temporary. For example, if a user’s
computer connects to the Internet over DSL, it might be assigned a particular address on a particular day. Any running program on the computer may send and receive Internet traffic, and any such traffic will carry that source IPv4 address. Even a host this simple has other active IP addresses as well. These include the local “loopback” address ( and some multicast addresses, including, at a minimum, the All Hosts multicast address ( If the host is running
IPv6, at a minimum it is using the All Nodes IPv6 multicast address (ff02::1), any
IPv6 addresses it has been assigned by the ISP, the IPv6 loopback address (::1), and a
link-local address for each network interface configured for IPv6 use.
To see a host’s active multicast addresses (groups) on Linux, we can use the
ifconfig and netstat commands to see the IP addresses and groups in use:
Linux% ifconfig ppp0
Link encap:Point-to-Point Protocol
inet addr:
P-t-P: Mask:
RX packets:33134 errors:0 dropped:0 overruns:0 frame:0
TX packets:41031 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:3
RX bytes:17748984 (16.9 MiB) TX bytes:9272209 (8.8 MiB)
Linux% netstat -gn
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo
Here we see that the point-to-point link associated with the device ppp0
has been assigned the IPv4 address; no IPv6 address has been
assigned. The host system does have IPv6 enabled, however, so when we inspect
its group memberships we see that it is subscribed to the IPv6 All Nodes multicast
group on its local loopback (lo) interface. We can also see that the IPv4 All Hosts
group is in use, in addition to the mDNS (multicast DNS) service [IDChes]. The
mDNS protocol uses the static IPv4 multicast address
Single Provider/Single Network/Single Address
Many Internet users who own more than one computer find that having only a
single computer attached to the Internet is not an ideal situation. As a result, they
have home LAN or WLAN networks and use either a router or a computer acting
as a router to provide connectivity to the Internet. Such configurations are very
similar to the single-computer case, except the router forwards packets from the
home network to the ISP and also performs NAT (see Chapter 7; also called Internet Connection Sharing (ICS) in Windows) by rewriting the IP addresses in packets
being exchanged with the customer’s ISP. From the ISP’s point of view, only a
single IP address has been used. Today, much of this activity is automated, so the
need for manual address configuration is minimal. The routers provide automatic
address assignment to the home clients using DHCP. They also handle address
assignment for the link set up with the ISP if necessary. Details of DHCP operation
and host configuration are given in Chapter 6.
Single Provider/Multiple Networks/Multiple Addresses
Many organizations find that the allocation of a single unicast address, especially
if it is only temporarily assigned, is insufficient for their Internet access needs.
In particular, organizations intending to run Internet servers (such as Web sites)
generally wish to have an IP address that does not change over time. These sites
also often have multiple LANs; some of them are internal (separated from the
Internet by firewalls and NAT devices), and others may be external (providing
services to the Internet). For such networks, there is typically a site or network
administrator who must decide how many IP addresses the site requires, how
to structure subnets at the site, and which subnets should be internal and which
external. The arrangement shown in Figure 2-16 is typical for small and medium-size enterprises.
In this figure, a site has been allocated a /26 prefix, providing
up to 64 (minus 2) routable IPv4 addresses. The “DMZ” network (“demilitarized
zone” network, outside the primary firewall; see Chapter 7) is used to attach servers that can be accessed by users on the Internet. Such computers typically provide Web access, login servers, and other services. These servers are assigned IP
addresses from a small subset of the prefix range; many sites have only a few
public servers. The remaining addresses from the site prefix are given to the NAT
router as the basis for a “NAT pool” (see Chapter 7). This router can rewrite datagrams entering and leaving the internal network using any of the addresses in
its pool. The network setup in Figure 2-16 is convenient for two primary reasons.
Figure 2-16
A typical small to medium-size enterprise network. The site has been allocated 64
public (routable) IPv4 addresses. A “DMZ” network holds
servers that are visible to the Internet. The internal router provides Internet access for
computers internal to the enterprise using NAT.
First, the separation of the internal network from the DMZ helps protect internal
computers from damage should the DMZ servers be compromised. In addition,
this setup partitions the IP address assignment. Once the border router, DMZ, and
internal NAT router have been set up, any address structure can be used internally, where many (private) IP addresses are available. Of course, this example
is only one way of setting up small enterprise networks, and other factors such
as cost might ultimately drive the way routers, networks, and IP addresses are
deployed for any particular small or medium-size enterprise.
Multiple Providers/Multiple Networks/Multiple Addresses (Multihoming)
Some organizations that depend on Internet access for their continued operations
attach to the Internet using more than one provider (called multihoming) in order
to provide for redundancy in case of failure, or for other reasons. Because of CIDR,
organizations with a single ISP tend to have PA IP addresses associated with that
ISP. If they obtain a second ISP, the question arises as to what IP addresses should
be used in each of the hosts. Some guidance has been developed for operating
with multiple ISPs, or when transitioning from one to another (which raises some
similar concerns). For IPv4, [RFC4116] discusses how either PI or PA addresses can
be used for multihoming. Consider the situation shown in Figure 2-17.
Figure 2-17
Provider-aggregatable and provider-independent IPv4 addresses used in a hypothetical
multihomed enterprise. Site operators tend to prefer using PI space if it is available. ISPs
prefer PA space because it promotes prefix aggregation and reduces routing table size.
Here, a (somewhat) fictitious site S has two ISPs, P1 and P2. If it uses PA address space from P1’s block, it advertises this prefix at points C and D to
P1 and P2, respectively. The prefix can be aggregated by P1 into its 12/8 block in
advertisements to the rest of the Internet at point A, but P2 is not able to aggregate
it at point B because it is not numerically adjacent to its own prefix (137.164/16).
In addition, from the point of view of some host in the other parts of the Internet, traffic for site S tends to go through ISP P2 rather than ISP P1, because the prefix P2 advertises for site S is longer (“more specific”) than the aggregate advertised by P1. This is
a consequence of the way the longest matching prefix algorithm works for Internet
routing (see Chapter 5 for more details). In essence, a host in the other parts of the
Internet could reach an address at site S via either the shorter matching prefix advertised at point A or the longer matching prefix advertised at point B. Because each prefix matches (i.e., contains a common set of prefix bits with) the destination address, the one with the larger or longer mask (larger number of matching bits) is preferred,
which in this case is P2. Thus, P2 is in the position of being unable to aggregate the
prefix from S and also winds up carrying most of S’s traffic.
If site S decides to use PI space instead of PA space, the situation is more symmetric. However, no aggregation is possible. In this case, the PI prefix
is advertised to P1 and P2 at points C and D, respectively, but neither ISP is able
to aggregate it because it is not numerically adjacent to either of the ISPs’ address
blocks. Thus, both ISPs advertise the identical prefix at points A
and B. In this fashion the “natural” shortest-path computations in Internet routing can take place, and site S can be reached by whichever ISP is closer to the host
sending to it. In addition, if site S decides to switch ISPs, it does not have to change
its assigned addresses. Unfortunately, the inability to aggregate such addresses
can be a concern for future scalability of the Internet, so PI space is in relatively
short supply.
Multihoming for IPv6 has been the subject of study within the IETF for
some time, resulting in the Multi6 architecture [RFC4177] and the Shim6 protocol [RFC5533]. Multi6 outlines a number of approaches that have been proposed
for handling the issue. Broadly, the options mentioned include using a routing
approach equivalent to IPv4 multihoming mentioned previously, using the capabilities of Mobile IPv6 [RFC6275], and creating a new method that splits the identification of nodes away from their locators. Today, IP addresses serve as both
identifiers (essentially a form of name) and locators (an address understood by the
routing system) for a network interface attached to the Internet. Providing a separation would allow the network protocol implementation to function even if the
underlying IP address changes. Protocols that provide this separation are sometimes called identifier/locator separating or id/loc split protocols.
Shim6 introduces a “shim” network-layer protocol that separates the “upperlayer protocol identifier” used by the transport protocols from the IP address.
Multihoming is achieved by selecting which IP address (locator) to use based
on dynamic network conditions and without requiring PI address allocations.
Communicating hosts (peers) agree on which locators to use and when to switch
between them. Separation of identifiers from locators is the subject of several other
efforts, including the experimental Host Identity Protocol (HIP) [RFC4423], which
identifies hosts using cryptographic host identifiers. Such identifiers are effectively the public keys of public/private key pairs associated with hosts, so HIP
traffic can be authenticated as having come from a particular host. Security issues
are discussed in more detail in Chapter 18.
Attacks Involving IP Addresses
Given that IP addresses are essentially numbers, few network attacks involve only
them. Generally, attacks can be carried out when sending “spoofed” datagrams (see
Chapter 5) or with other related activities. That said, IP addresses are now being
used to help identify individuals suspected of undesirable activities (e.g., copyright
infringement in peer-to-peer networks or distribution of illegal materials). Doing
this can be misleading for several reasons. For example, in many circumstances
IP addresses are only temporary and are reassigned to different users at different
times. Therefore, any errors in accurate timekeeping can easily cause databases
that map IP addresses to users to be incorrect. Furthermore, access controls are not
widely and securely deployed; it is often possible to attach to the Internet through
some public access point or some unintentionally open wireless router in someone’s home or office. In such circumstances, the unsuspecting home or business
owner may be targeted based on IP address even though that person was not the
originator of traffic on the network. This can also happen when compromised hosts
are used to form botnets. Such collections of computers (and routers) can now be
leased on what has effectively become an Internet-based black market for carrying
out attacks, serving illicit content, and other misdeeds [RFC4948].
Summary
The IP address is used to identify and locate network interfaces on devices
throughout the Internet system (unicast addresses). It may also be used for identifying more than one such interface (multicast, broadcast, or anycast addresses).
Each interface has a minimum of one 32-bit IPv4 address (when IPv4 is being
used) and usually has several 128-bit addresses if using IPv6. Unicast addresses
are allocated in blocks by a hierarchically structured set of administrative entities.
Prefixes allocated by such entities represent a chunk of unicast IP address space
typically given to ISPs that in turn provide addresses to their users. Such prefixes
are usually a subrange of the ISP’s address block (called provider-aggregatable or
PA addresses) but may instead be owned by the user (called provider-independent or PI addresses). Numerically adjacent address prefixes (PA addresses) can
be aggregated to save routing table space and improve scalability of the Internet.
This approach arose when the Internet’s “classful” network structure consisting of class A, B, and C network numbers was abandoned in favor of classless
inter-domain routing (CIDR). CIDR allows for different sizes of address blocks to
be assigned to organizations with different needs for address space; essentially,
CIDR enables more efficient allocation of address space. Anycast addresses are
unicast addresses that refer to different hosts depending on where the sender is
located; such addresses are often used for discovering network services that may
be present in multiple locations.
IPv6 unicast addresses differ somewhat from IPv4 addresses. Most important,
IPv6 addresses have a scope concept, for both unicast and multicast addresses,
that specifically indicates where an address is valid. Typical scopes include nodelocal, link-local, and global. Link-local addresses are often created based on a standard prefix in combination with an IID that can be based on addresses provided
by lower-layer protocols (such as hardware/MAC addresses) or random values.
This approach aids in autoconfiguration of IPv6 addresses.
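The link-local construction mentioned here can be sketched directly: take a 48-bit MAC address, insert the bytes ff:fe in the middle to form a Modified EUI-64 IID, flip the universal/local bit, and prepend the fe80::/64 link-local prefix (a sketch; the function name is ours):

```python
import ipaddress

def link_local_from_mac(mac):
    """Form an IPv6 link-local address from a 48-bit MAC address using
    Modified EUI-64: insert ff:fe in the middle and flip the U/L bit."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    eui64 = bytes([b[0] ^ 0x02]) + bytes(b[1:3]) + b"\xff\xfe" + bytes(b[3:6])
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64)

print(link_local_from_mac("00:11:22:33:44:55"))
# fe80::211:22ff:fe33:4455
```

Because of the privacy implications of embedding a stable hardware address, many systems now substitute a random IID instead, as noted above.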
Both IPv4 and IPv6 support addressing formats that refer to more than one
network interface at a time. Broadcast and multicast addresses are supported in
IPv4, but only multicast addresses are supported in IPv6. Broadcast allows for oneto-all communication, whereas multicast allows for one-to-many communication.
Senders send to multicast groups (IP addresses) that act somewhat like television
channels; the sender has no direct knowledge of the recipients of its traffic or
how many receivers there are on a channel. Global multicast in the Internet has
evolved over more than a decade and involves many protocols—some for routing,
some for address allocation and coordination, and some for signaling that a host
wishes to join or leave a group. There are also many types and uses of IP multicast addresses, both in IPv4 and (especially) in IPv6. Variants of the IPv6 multicast address format provide ways for allocating groups based on unicast prefixes,
embedding routing information (RP addresses) in groups, and creating multicast
addresses based on IIDs.
The development and deployment of CIDR was arguably the last fundamental change made to the Internet’s core routing system. CIDR was successful in
handling the pressure to have more flexibility in allocating address space and
for promoting routing scalability through aggregation. In addition, IPv6 was pursued at the time (early 1990s) with much energy, based on the belief that a much
larger number of addresses would be required soon. Unforeseen at the time, the
widespread use of NAT (see Chapter 7) has since significantly delayed adoption of
IPv6 by not requiring every host attached to the Internet to have a unique address.
Instead, large networks using private address space are now commonplace. Ultimately, however, the number of available routable IP addresses will eventually
dwindle to zero, so some change will be required. In February 2011 the last five /8
IPv4 address prefixes were allocated from the IANA, one to each of the five RIRs.
On April 15, 2011, APNIC exhausted all of its allocatable prefixes. The remaining prefixes held by various RIRs are expected to remain unallocated for only a
few years at most. A current snapshot of IPv4 address utilization can be found in [IP4R].
2.10 References
[CGEMA] Cisco Systems, “Guidelines for Enterprise IP Multicast Address
Allocation,” 2004,
[EIGRP] B. Albrightson, J. J. Garcia-Luna-Aceves, and J. Boyle, “EIGRP—A Fast
Routing Protocol Based on Distance Vectors,” Proc. Infocom, 2004.
[EUI64] Institute for Electrical and Electronics Engineers, “Guidelines for 64-Bit
Global Identifier (EUI-64) Registration Authority,” Mar. 1997,http://standards.
[H96] M. Handley, “The SDR Session Directory: An Mbone Conference Scheduling and Booking System,” Department of Computer Science, University College
London, Apr. 1996,
[IANA] Internet Assigned Numbers Authority,
[IDChes] S. Cheshire and M. Krochmal, “Multicast DNS,” Internet draft-cheshire-dnsext-multicastdns, work in progress, Oct. 2010.
[IDv4v6mc] S. Venaas, X. Li, and C. Bao, “Framework for IPv4/IPv6 Multicast
Translation,” Internet draft-venaas-behave-v4v6mc-framework, work in progress,
Dec. 2010.
[IEEERA] IEEE Registration Authority,
[IMR02] B. Edwards, L. Giuliano, and B. Wright, Interdomain Multicast Routing:
Practical Juniper Networks and Cisco Systems Solutions (Addison-Wesley, 2002).
[IP4R] IPv4 Address Report,
[KK77] L. Kleinrock and F. Kamoun, “Hierarchical Routing for Large Networks,
Performance Evaluation and Optimization,” Computer Networks, 1(3), 1977.
[NRO] Number Resource Organization,
[RFC0919] J. C. Mogul, “Broadcasting Internet Datagrams,” Internet RFC 0919/STD 0005, Oct. 1984.
[RFC0922] J. C. Mogul, “Broadcasting Internet Datagrams in the Presence of Subnets,” Internet RFC 0922/STD 0005, Oct. 1984.
[RFC0950] J. C. Mogul and J. Postel, “Internet Standard Subnetting Procedure,”
Internet RFC 0950/STD 0005, Aug. 1985.
[RFC1075] D. Waitzman, C. Partridge, and S. E. Deering, “Distance Vector Multicast Routing Protocol,” Internet RFC 1075 (experimental), Nov. 1988.
[RFC1112] S. E. Deering, “Host Extensions for IP Multicasting,” Internet RFC
1112/STD 0005, Aug. 1989.
[RFC1122] R. Braden, ed., “Requirements for Internet Hosts—Communication
Layers,” Internet RFC 1122/STD 0003, Oct. 1989.
[RFC1812] F. Baker, ed., “Requirements for IP Version 4 Routers,” Internet RFC
1812/STD 0004, June 1995.
[RFC1918] Y. Rekhter, B. Moskowitz, D. Karrenberg, G. J. de Groot, and E. Lear,
“Address Allocation for Private Internets,” Internet RFC 1918/BCP 0005, Feb. 1996.
[RFC2080] G. Malkin and R. Minnear, “RIPng for IPv6,” Internet RFC 2080, Jan. 1997.
[RFC2328] J. Moy, “OSPF Version 2,” Internet RFC 2328/STD 0054, Apr. 1998.
[RFC2365] D. Meyer, “Administratively Scoped IP Multicast,” Internet RFC 2365/
BCP 0023, July 1998.
[RFC2544] S. Bradner and J. McQuaid, “Benchmarking Methodology for Network Interconnect Devices,” Internet RFC 2544 (informational), Mar. 1999.
[RFC2622] C. Alaettinoglu, C. Villamizar, E. Gerich, D. Kessens, D. Meyer, T.
Bates, D. Karrenberg, and M. Terpstra, “Routing Policy Specification Language
(RPSL),” Internet RFC 2622, June 1999.
[RFC2644] D. Senie, “Changing the Default for Directed Broadcasts in Routers,”
Internet RFC 2644/BCP 0034, Aug. 1999.
[RFC2974] M. Handley, C. Perkins, and E. Whelan, “Session Announcement Protocol,” Internet RFC 2974 (experimental), Oct. 2000.
[RFC3056] B. Carpenter and K. Moore, “Connection of IPv6 Domains via IPv4
Clouds,” Internet RFC 3056, Feb. 2001.
[RFC3068] C. Huitema, “An Anycast Prefix for 6to4 Relay Routers,” Internet RFC
3068, June 2001.
[RFC3170] B. Quinn and K. Almeroth, “IP Multicast Applications: Challenges
and Solutions,” Internet RFC 3170 (informational), Sept. 2001.
[RFC3180] D. Meyer and P. Lothberg, “GLOP Addressing in 233/8,” Internet RFC
3180/BCP 0053, Sept. 2001.
[RFC3306] B. Haberman and D. Thaler, “Unicast-Prefix-Based IPv6 Multicast
Addresses,” Internet RFC 3306, Aug. 2002.
[RFC3307] B. Haberman, “Allocation Guidelines for IPv6 Multicast Addresses,”
Internet RFC 3307, Aug. 2002.
[RFC3315] R. Droms, ed., J. Bound, B. Volz, T. Lemon, C. Perkins, and M. Carney,
“Dynamic Host Configuration Protocol for IPv6 (DHCPv6),” Internet RFC 3315,
July 2003.
[RFC3569] S. Bhattacharyya, ed., “An Overview of Source-Specific Multicast
(SSM),” Internet RFC 3569 (informational), July 2003.
[RFC3701] R. Fink and R. Hinden, “6bone (IPv6 Testing Address Allocation)
Phaseout,” Internet RFC 3701 (informational), Mar. 2004.
[RFC3810] R. Vida and L. Costa, eds., “Multicast Listener Discovery Version 2
(MLDv2) for IPv6,” Internet RFC 3810, June 2004.
[RFC3849] G. Huston, A. Lord, and P. Smith, “IPv6 Address Prefix Reserved for
Documentation,” Internet RFC 3849 (informational), July 2004.
[RFC3879] C. Huitema and B. Carpenter, “Deprecating Site Local Addresses,”
Internet RFC 3879, Sept. 2004.
[RFC3927] S. Cheshire, B. Aboba, and E. Guttman, “Dynamic Configuration of
IPv4 Link-Local Addresses,” Internet RFC 3927, May 2005.
[RFC3956] P. Savola and B. Haberman, “Embedding the Rendezvous Point (RP)
Address in an IPv6 Multicast Address,” Internet RFC 3956, Nov. 2004.
[RFC4012] L. Blunk, J. Damas, F. Parent, and A. Robachevsky, “Routing Policy
Specification Language Next Generation (RPSLng),” Internet RFC 4012, Mar. 2005.
[RFC4116] J. Abley, K. Lindqvist, E. Davies, B. Black, and V. Gill, “IPv4 Multihoming Practices and Limitations,” Internet RFC 4116 (informational), July 2005.
[RFC4177] G. Huston, “Architectural Approaches to Multi-homing for IPv6,”
Internet RFC 4177 (informational), Sept. 2005.
[RFC4193] R. Hinden and B. Haberman, “Unique Local IPv6 Unicast Addresses,”
Internet RFC 4193, Oct. 2005.
[RFC4286] B. Haberman and J. Martin, “Multicast Router Discovery,” Internet
RFC 4286, Dec. 2005.
[RFC4291] R. Hinden and S. Deering, “IP Version 6 Addressing Architecture,”
Internet RFC 4291, Feb. 2006.
[RFC4380] C. Huitema, “Teredo: Tunneling IPv6 over UDP through Network
Address Translations (NATs),” Internet RFC 4380, Feb. 2006.
[RFC4423] R. Moskowitz and P. Nikander, “Host Identity Protocol (HIP) Architecture,” Internet RFC 4423 (informational), May 2006.
[RFC4489] J.-S. Park, M.-K. Shin, and H.-J. Kim, “A Method for Generating LinkScoped IPv6 Multicast Addresses,” Internet RFC 4489, Apr. 2006.
[RFC4566] M. Handley, V. Jacobson, and C. Perkins, “SDP: Session Description
Protocol,” Internet RFC 4566, July 2006.
[RFC4601] B. Fenner, M. Handley, H. Holbrook, and I. Kouvelas, “Protocol Independent Multicast-Sparse Mode (PIM-SM): Protocol Specification (Revised),”
Internet RFC 4601, Aug. 2006.
[RFC4607] H. Holbrook and B. Cain, “Source-Specific Multicast for IP,” Internet
RFC 4607, Aug. 2006.
[RFC4608] D. Meyer, R. Rockell, and G. Shepherd, “Source-Specific Protocol Independent Multicast in 232/8,” Internet RFC 4608/BCP 0120, Aug. 2006.
[RFC4610] D. Farinacci and Y. Cai, “Anycast-RP Using Protocol Independent Multicast (PIM),” Internet RFC 4610, Aug. 2006.
[RFC4632] V. Fuller and T. Li, “Classless Inter-domain Routing (CIDR): The Internet Address Assignment and Aggregation Plan,” Internet RFC 4632/BCP 0122,
Aug. 2006.
[RFC4786] J. Abley and K. Lindqvist, “Operation of Anycast Services,” Internet
RFC 4786/BCP 0126, Dec. 2006.
[RFC4795] B. Aboba, D. Thaler, and L. Esibov, “Link-Local Multicast Name Resolution (LLMNR),” Internet RFC 4795 (informational), Jan. 2007.
[RFC4843] P. Nikander, J. Laganier, and F. Dupont, “An IPv6 Prefix for Overlay
Routable Cryptographic Hash Identifiers (ORCHID),” Internet RFC 4843 (experimental), Apr. 2007.
[RFC4893] Q. Vohra and E. Chen, “BGP Support for Four-Octet AS Number
Space,” Internet RFC 4893, May 2007.
[RFC4948] L. Andersson, E. Davies, and L. Zhang, eds., “Report from the IAB
Workshop on Unwanted Traffic March 9–10, 2006,” Internet RFC 4948 (informational), Aug. 2007.
[RFC5059] N. Bhaskar, A. Gall, J. Lingard, and S. Venaas, “Bootstrap Router (BSR)
Mechanism for Protocol Independent Multicast (PIM),” Internet RFC 5059, Jan.
[RFC5110] P. Savola, “Overview of the Internet Multicast Routing Architecture,”
Internet RFC 5110 (informational), Jan. 2008.
[RFC5156] M. Blanchet, “Special-Use IPv6 Addresses,” Internet RFC 5156 (informational), Apr. 2008.
[RFC5214] F. Templin, T. Gleeson, and D. Thaler, “Intra-Site Automatic Tunnel
Addressing Protocol (ISATAP),” Internet RFC 5214 (informational), Mar. 2008.
[RFC5352] R. Stewart, Q. Xie, M. Stillman, and M. Tuexen, “Aggregate Server
Access Protocol (ASAP),” Internet RFC 5352 (experimental), Sept. 2008.
[RFC5415] P. Calhoun, M. Montemurro, and D. Stanley, eds., “Control and Provisioning of Wireless Access Points (CAPWAP) Protocol Specification,” Internet
RFC 5415, Mar. 2009.
Section 2.10 References
[RFC5498] I. Chakeres, “IANA Allocations for Mobile Ad Hoc Network
(MANET) Protocols,” Internet RFC 5498, Mar. 2009.
[RFC5533] E. Nordmark and M. Bagnulo, “Shim6: Level 3 Multihoming Shim
Protocol for IPv6,” Internet RFC 5533, June 2009.
[RFC5735] M. Cotton and L. Vegoda, “Special Use IPv4 Addresses,” Internet RFC
5735/BCP 0153, Jan. 2010.
[RFC5736] G. Huston, M. Cotton, and L. Vegoda, “IANA IPv4 Special Purpose
Address Registry,” Internet RFC 5736 (informational), Jan. 2010.
[RFC5737] J. Arkko, M. Cotton, and L. Vegoda, “IPv4 Address Blocks Reserved
for Documentation,” Internet RFC 5737 (informational), Jan. 2010.
[RFC5771] M. Cotton, L. Vegoda, and D. Meyer, “IANA Guidelines for IPv4 Multicast Address Assignments,” Internet RFC 5771/BCP 0051, Mar. 2010.
[RFC5952] S. Kawamura and M. Kawashima, “A Recommendation for IPv6
Address Text Representation,” Internet RFC 5952, Aug. 2010.
[RFC5905] D. Mills, J. Martin, ed., J. Burbank, and W. Kasch, “Network Time
Protocol Version 4: Protocol and Algorithms Specification,” Internet RFC 5905,
June 2010.
[RFC6034] D. Thaler, “Unicast-Prefix-Based IPv4 Multicast Addresses,” Internet
RFC 6034, Oct. 2010.
[RFC6052] C. Bao, C. Huitema, M. Bagnulo, M. Boucadair, and X. Li, “IPv6
Addressing of IPv4/IPv6 Translators,” Internet RFC 6052, Oct. 2010.
[RFC6217] J. Arkko and M. Townsley, “IPv4 Run-Out and IPv4-IPv6 Co-Existence
Scenarios,” Internet RFC 6127 (experimental), May 2011.
[RFC6144] F. Baker, X. Li, C. Bao, and K. Yin, “Framework for IPv4/IPv6 Translation,” Internet RFC 6144 (informational), Apr. 2011.
[RFC6164] M. Kohno, B. Nitzan, R. Bush, Y. Matsuzaki, L. Colitti, and T. Narten,
“Using 127-Bit IPv6 Prefixes on Inter-Router Links,” Internet RFC 6164, Apr. 2011.
[RFC6275] C. Perkins, ed., D. Johnson, and J. Arkko, “Mobility Support in IPv6,”
Internet RFC 3775, July 2011.
[RFC6308] P. Savola, “Overview of the Internet Multicast Addressing Architecture,” Internet RFC 6308 (informational), June 2011.
This page intentionally left blank
Link Layer
In Chapter 1, we saw that the purpose of the link layer in the TCP/IP protocol suite
is to send and receive IP datagrams for the IP module. It is also used to carry a
few other protocols that help support IP, such as ARP (see Chapter 4). TCP/IP supports many different link layers, depending on the type of networking hardware
being used: wired LANs such as Ethernet, metropolitan area networks (MANs) such
as cable TV and DSL connections available through service providers, and wired
voice networks such as telephone lines with modems, as well as the more recent
wireless networks such as Wi-Fi (wireless LAN) and various wireless data services based on cellular technology such as HSPA, EV-DO, LTE, and WiMAX. In
this chapter we shall look at some of the details involved in using the Ethernet and
Wi-Fi link layers, how the Point-to-Point Protocol (PPP) is used, and how link-layer
protocols can be carried inside other (link- or higher-layer) protocols, a technique
known as tunneling. Covering the details of every link technology available today
would require a separate text, so we instead focus on some of the most commonly
used link-layer protocols and how they are used by TCP/IP.
Most link-layer technologies have an associated protocol format that describes
how the corresponding PDUs must be constructed in order to be carried by the
network hardware. When referring to link-layer PDUs, we usually use the term
frame, so as to distinguish the PDU format from those at higher layers such as
packets or segments, terms used to describe network- and transport-layer PDUs,
respectively. Frame formats usually support a variable-length frame size ranging
from a few bytes to a few kilobytes. The upper bound of the range is called the
maximum transmission unit (MTU), a characteristic of the link layer that we shall
encounter numerous times in the remaining chapters. Some network technologies, such as modems and serial lines, do not impose their own maximum frame
size, so the MTU can be configured by the user.
3.2 Ethernet and the IEEE 802 LAN/MAN Standards
The term Ethernet generally refers to a set of standards first published in 1980 and
revised in 1982 by Digital Equipment Corp., Intel Corp., and Xerox Corp. The first
common form of Ethernet is now sometimes called “10Mb/s Ethernet” or “shared
Ethernet,” and it was adopted (with minor changes) by the IEEE as standard number
802.3. Such networks were usually arranged like the network shown in Figure 3-1.
Figure 3-1
A basic shared Ethernet network consists of one or more stations (e.g., workstations,
supercomputers) attached to a shared cable segment. Link-layer PDUs (frames) can be
sent from one station to one or more others when the medium is determined to be free.
If multiple stations send at the same time, possibly because of signal propagation delays,
a collision occurs. Collisions can be detected, and they cause sending stations to wait a
random amount of time before retrying. This common scheme is called carrier sense,
multiple access with collision detection.
Because multiple stations share the same network, this standard includes a
distributed algorithm, implemented in each Ethernet network interface, that controls when a station gets to send the data it has. The particular method, known as
carrier sense, multiple access with collision detection (CSMA/CD), mediates which
computers can access the shared medium (cable) without any other special agreement or synchronization. This relative simplicity helped to promote the low cost
and resulting popularity of Ethernet technology.
With CSMA/CD, a station (e.g., computer) first looks for a signal currently
being sent on the network and sends its own frame when the network is free.
This is the “carrier sense” portion of the protocol. If some other station happens
to send at the same time, the resulting overlapping electrical signal is detected as
a collision. In this case, each station waits a random amount of time before trying again. The amount of time is selected by drawing from a uniform probability
distribution that doubles in length each time a subsequent collision is detected.
Eventually, each station gets its chance to send or times out trying after some
number of attempts (16 in the case of conventional Ethernet). With CSMA/CD,
only one frame is traveling on the network at any given time. Access methods such
as CSMA/CD are more formally called Media Access Control (MAC) protocols.
There are many types of MAC protocols; some are based on having each station
try to use the network independently (contention-based protocols like CSMA/
CD), and others are based on prearranged coordination (e.g., by allocating time
slots for each station to send).
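The truncated binary exponential backoff just described can be sketched in a few lines of Python (a sketch for illustration; the constants and function name are ours, and real interfaces implement this in hardware):

```python
import random

SLOT_TIME_BITS = 512   # one backoff slot: 512 bit times (the 64-byte minimum frame)
MAX_ATTEMPTS = 16      # conventional Ethernet gives up after 16 attempts

def backoff_slots(collisions):
    """Choose a random backoff delay, in slot times, after the nth collision.

    The selection range doubles with each successive collision (truncated
    at 2**10 slots); after MAX_ATTEMPTS collisions the frame is dropped.
    """
    if collisions >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions; frame dropped")
    k = min(collisions, 10)
    return random.randrange(2 ** k)   # uniform over [0, 2**k - 1]
```

After the first collision a station waits 0 or 1 slot times; after the fourth, anywhere from 0 to 15; the doubling stops at 1023 slots.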
Since the development of 10Mb/s Ethernet, faster computers and infrastructure have driven the need for ever-increasing speeds in LANs. Given the popularity of Ethernet, significant innovation and effort have managed to increase its
speed from 10Mb/s to 100Mb/s to 1000Mb/s to 10Gb/s, and now to even more.
The 10Gb/s form is becoming popular in larger data centers and large enterprises,
and speeds as high as 100Gb/s have been demonstrated. The very first (research)
Ethernet ran at 3Mb/s, but the DIX (Digital, Intel, Xerox) standard ran at 10Mb/s
over a shared physical cable or set of cable segments interconnected by electrical repeaters. By the early 1990s, the shared cable had largely been replaced by
twisted-pair wiring (resembling telephone wires and often called “10BASE-T”).
With the development of 100Mb/s (also called “fast Ethernet,” the most popular
version of which is known as “100BASE-TX”), contention-based MAC protocols
have become less popular. Instead, the wiring between each LAN station is often
not shared but instead provides a dedicated electrical path in a star topology. This
can be accomplished with Ethernet switches, as shown in Figure 3-2.
Figure 3-2
A switched Ethernet network consists of one or more stations, each of which is attached
to a switch port using a dedicated wiring path. In most cases where switched Ethernet is
used, the network operates in a full-duplex fashion and the CSMA/CD algorithm is not
required. Switches may be cascaded to form larger Ethernet LANs by interconnecting
switch ports, sometimes called “uplink” ports.
At present, switches are commonly used, providing each Ethernet station with
the ability to send and receive data simultaneously (called “full-duplex Ethernet”).
Although half-duplex (one direction at a time) operation is still supported even by
1000Mb/s Ethernet (1000BASE-T), it is rarely used relative to full-duplex Ethernet.
We shall discuss how switches process PDUs in more detail later.
One of the most popular technologies used to access the Internet today is
wireless networking, the most common for wireless local area networks (WLANs)
being an IEEE standard known as Wireless Fidelity or Wi-Fi, and sometimes
called “wireless Ethernet” or 802.11. Although this standard is distinct from the
802 wired Ethernet standards, the frame format and general interface are largely
borrowed from 802.3, and all are part of the set of IEEE 802 LAN standards. Thus,
most of the capabilities used by TCP/IP for Ethernet networks are also used for
Wi-Fi networks. We shall explore each of these in more detail. First, however, it
is useful to get a bigger picture of all of the IEEE 802 standards that are relevant
for setting up home and enterprise networks. We also include references to those
IEEE standards governing MAN standards, including IEEE 802.16 (WiMAX) and
the standard for media-independent handoffs in cellular networks (IEEE 802.21).
3.2.1 The IEEE 802 LAN/MAN Standards
The original Ethernet frame format and operation were described by industry
agreement, mentioned earlier. This format was known as the DIX format or Ethernet II format. This type of Ethernet network, with slight modification, was later
standardized by the IEEE as a form of CSMA/CD network, called 802.3. In the
world of IEEE standards, standards with the prefix 802 define the operations of
LANs and MANs. The most popular 802 standards today include 802.3 (essentially Ethernet) and 802.11 (WLAN/Wi-Fi). These standards have evolved over
time and have changed names as freestanding amendments (e.g., 802.11g) are
ultimately incorporated in revised standards. Table 3-1 shows a fairly complete
list of the IEEE 802 LAN and MAN standards relevant to supporting the TCP/IP
protocols, as of mid-2011.
Table 3-1  LAN and MAN IEEE 802 standards relevant to the TCP/IP protocols (2011)

Name        Description                                        Official Reference
802.1ak     Multiple Registration Protocol (MRP)
802.1AE     MAC Security (MACSec)
802.1AX     Link Aggregation (formerly 802.3ad)
802.1d      MAC Bridges
802.1p      Traffic classes/priority/QoS
802.1q      Virtual Bridged LANs/Corrections to MRP
802.1s      Multiple Spanning Tree Protocol (MSTP)
802.1w      Rapid Spanning Tree Protocol (RSTP)
802.1X      Port-Based Network Access Control (PNAC)
802.2       Logical Link Control (LLC)
802.3       Baseline Ethernet and 10Mb/s Ethernet              [802.3-2008] (Section One)
802.3u      100Mb/s Ethernet ("Fast Ethernet")                 [802.3-2008] (Section Two)
802.3x      Full-duplex operation and flow control
802.3z/ab   1000Mb/s Ethernet ("Gigabit Ethernet")             [802.3-2008] (Section Three)
802.3ae     10Gb/s Ethernet ("Ten-Gigabit Ethernet")           [802.3-2008] (Section Four)
802.3ad     Link Aggregation
802.3af     Power over Ethernet (PoE) (to 15.4W)               [802.3-2008] (Section Two)
802.3ah     Access Ethernet ("Ethernet in the First Mile")     [802.3-2008] (Section Five)
802.3as     Frame format extensions (to 2000 bytes)
802.3at     Power over Ethernet enhancements ("PoE+")
802.3ba     40/100Gb/s Ethernet
802.11a     54Mb/s Wireless LAN at 5GHz
802.11b     11Mb/s Wireless LAN at 2.4GHz
802.11e     QoS enhancement for 802.11
802.11g     54Mb/s Wireless LAN at 2.4GHz
802.11h     Spectrum/power management extensions
802.11i     Security enhancements/replaces WEP
802.11j     4.9–5.0GHz operation in Japan
802.11n     6.5–600Mb/s Wireless LAN at 2.4 and 5GHz
            using optional MIMO and 40MHz channels
802.11s     Mesh networking, congestion control                802.11s (draft); under development
802.11y     54Mb/s wireless LAN at 3.7GHz (licensed)
802.16      Broadband Wireless Access Systems (WiMAX)
802.16d     Fixed Wireless MAN Standard (WiMAX)
802.16e     Fixed/Mobile Wireless MAN Standard (WiMAX)         [802.16-2009]
802.16h     Improved Coexistence Mechanisms
802.16j     Multihop Relays in 802.16
802.16k     Bridging of 802.16
802.21      Media Independent Handovers
Other than the specific types of LAN networks defined by the 802.3, 802.11,
and 802.16 standards, there are some related standards that apply across all of
the IEEE standard LAN technologies. Common to all three of these is the 802.2
standard that defines the Logical Link Control (LLC) frame header common among
many of the 802 networks’ frame formats. In IEEE terminology, LLC and MAC
are “sublayers” of the link layer, where the LLC (mostly frame format) is generally
common to each type of network and the MAC layer may be somewhat different.
While the original Ethernet made use of CSMA/CD, for example, WLANs often
make use of CSMA/CA (CA is “collision avoidance”).
Unfortunately, the combination of 802.2 and 802.3 defined a different frame format
from Ethernet II until 802.3x finally rectified the situation. It has been incorporated into [802.3-2008]. In the TCP/IP world, the encapsulation of IP datagrams
is defined in [RFC0894] and [RFC2464] for Ethernet networks, although the older
LLC/SNAP encapsulation remains published as [RFC1042]. While this is no longer much of an issue, it was once a source of concern, and similar issues occasionally arise [RFC4840].
The frame format has remained essentially the same until fairly recently. To
get an understanding of the details of the format and how it has evolved, we now
turn our focus to these details.
3.2.2 The Ethernet Frame Format
All Ethernet (802.3) frames are based on a common format. Since its original specification, the frame format has evolved to support additional functions. Figure 3-3
shows the current layout of an Ethernet frame and how it relates to a relatively new
term introduced by IEEE, the IEEE packet (a somewhat unfortunate term given its
uses in other standards).
The Ethernet frame begins with a Preamble area used by the receiving interface’s circuitry to determine when a frame is arriving and to determine the amount
of time between encoded bits (called clock recovery). Because Ethernet is an asynchronous LAN (i.e., precisely synchronized clocks are not maintained in each Ethernet interface card), the space between encoded bits may differ somewhat from
one interface card to the next. The preamble is a recognizable pattern (0xAA typically), which the receiver can use to “recover the clock” by the time the start frame
delimiter (SFD) is found. The SFD has the fixed value 0xAB.
The original Ethernet encoded bits using a Manchester Phase Encoding (MPE)
with two voltage levels. With MPE, bits are encoded as voltage transitions rather
than absolute values. For example, the bit 0 is encoded as a transition from -0.85
to +0.85V, and a 1 bit is encoded as a +0.85 to -0.85V transition (0V indicates
that the shared wire is idle). The 10Mb/s Ethernet specification required network
hardware to use an oscillator running at 20MHz, because MPE requires two clock
cycles per bit. The bytes 0xAA (10101010 in binary) present in the Ethernet preamble would be a square wave between +0.85 and -0.85V with a frequency of
10MHz. Manchester encoding was replaced with different encodings in other Ethernet standards to improve efficiency.
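This encoding is simple to illustrate (a sketch using the voltage convention given above; the function name is ours):

```python
def manchester_encode(bits):
    """Encode bits as a sequence of Manchester voltage half-intervals.

    Per the convention above: a 0 bit is a -0.85V -> +0.85V transition,
    and a 1 bit is a +0.85V -> -0.85V transition; each bit occupies two
    half-bit intervals.
    """
    wave = []
    for b in bits:
        wave.extend([+0.85, -0.85] if b else [-0.85, +0.85])
    return wave

# The preamble byte 0xAA (10101010) yields a square wave: one full
# cycle per bit time, i.e., 10MHz at 10Mb/s.
preamble_wave = manchester_encode([int(b) for b in format(0xAA, "08b")])
```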
This basic frame format includes 48-bit (6-byte) Destination (DST) and Source
(SRC) Address fields. These addresses are sometimes known by other names such
as “MAC address,” “link-layer address,” “802 address,” “hardware address,” or
“physical address.” The destination address in an Ethernet frame is also allowed
to address more than one station (called “broadcast” or “multicast”; see Chapter 9). The broadcast capability is used by the ARP protocol (see Chapter 4) and
multicast capability is used by the ICMPv6 protocol (see Chapter 8) to convert
between network-layer and link-layer addresses.
Following the source address is a Type field that doubles as a Length field. Ordinarily, it identifies the type of data that follows the header. Popular values used
with TCP/IP networks include IPv4 (0x0800), IPv6 (0x86DD), and ARP (0x0806).
The value 0x8100 indicates a Q-tagged frame (i.e., one that can carry a “virtual
LAN” or VLAN ID according to the 802.1q standard). The size of a basic Ethernet
frame is 1518 bytes, but the more recent standard extended this size to 2000 bytes.
Figure 3-3
The Ethernet (IEEE 802.3) frame format contains source and destination addresses, an overloaded
Length/Type field, a field for data, and a frame check sequence (a CRC32). Additions to the basic
frame format provide for a tag containing a VLAN ID and priority information (802.1p/q) and
more recently for an extensible number of tags. The preamble and SFD are used for synchronizing receivers. When half-duplex operation is used with Ethernet running at 100Mb/s or more,
additional bits may be appended to short frames as a carrier extension to ensure that the collision
detection circuitry operates properly.
The original IEEE (802.3) specification treats the Length/Type field as a Length
field instead of a Type field. The field is thereby overloaded (used for more than
one purpose). The trick is to look at the value of the field. Today, if the value in the
field is greater than or equal to 1536, the field must contain a type value, which
is assigned by standards to have values exceeding 1536. If the value of the field
is 1500 or less, the field indicates the length. The full list of assigned types is maintained by the IEEE Registration Authority.
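The overloaded field can be interpreted with a short sketch (offsets follow the frame layout described above; the function name is ours):

```python
import struct

def classify_type_field(frame: bytes):
    """Interpret the 16-bit Length/Type field of an untagged or Q-tagged frame."""
    (field,) = struct.unpack_from("!H", frame, 12)   # after 6-byte DST + 6-byte SRC
    if field == 0x8100:                              # Q-tagged (VLAN) frame
        tci, inner = struct.unpack_from("!HH", frame, 14)
        return {"vlan_id": tci & 0x0FFF, "priority": tci >> 13, "type": inner}
    if field >= 1536:                                # a type value (e.g., 0x0800 IPv4)
        return {"type": field}
    if field <= 1500:                                # an 802.3 length value
        return {"length": field}
    return {"invalid": field}                        # 1501-1535: not defined
```

For example, a frame whose field holds 0x0806 is classified as carrying ARP, while a value of 46 is read as an 802.3 payload length.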
Following the Destination and Source Address fields, [802.3-2008] provides for
a variable number of tags that contain various protocol fields defined by other
IEEE standards. The most common of these are the tags used by 802.1p and 802.1q,
which provide for virtual LANs and some quality-of-service (QoS) indicators. These
are discussed in Section 3.2.3.
The current [802.3-2008] standard incorporates the frame format modifications
of 802.3as, which provides for a maximum of 482 bytes for holding "tags" to be carried with each Ethernet frame. These larger frames, called envelope frames, may
be up to 2000 bytes in length. Frames containing 802.1p/q tags, called Q-tagged
frames, are also envelope frames. However, not all envelope frames are necessarily Q-tagged frames.
Following the fields discussed so far is the data area or payload portion of the
frame. This is the area where higher-layer PDUs such as IP datagrams are placed.
Traditionally, the payload area for Ethernet has always been 1500 bytes, representing the MTU for Ethernet. Most systems today use the 1500-byte MTU size for
Ethernet, although it is generally possible to configure a smaller value if this is
desired. The payload sometimes is padded (appended) with 0 bytes to ensure that
the overall frame meets the minimum length requirements we discuss below.

Frame Check Sequence/Cyclic Redundancy Check (CRC)
The final field of the Ethernet frame format follows the payload area and provides
an integrity check on the frame. The Cyclic Redundancy Check (CRC) field at the
end includes 32 bits and is sometimes known as the IEEE/ANSI standard CRC32
[802.3-2008]. To use an n-bit CRC to detect errors in data transmission, the
message to be checked is first appended with n 0 bits, forming the augmented message. Then, the augmented message is divided (using modulo-2 division) by an (n
+ 1)-bit value called the generator polynomial, which acts as the divisor. The value
placed in the CRC field of the message is the one’s complement of the remainder of
this division (the quotient is discarded). Generator polynomials are standardized
for a number of different values of n. For Ethernet, which uses n = 32, the CRC32
generator polynomial is the 33-bit binary number 100000100110000010001110110110111.
To get a feeling for how the remainder is computed using long (mod-2)
binary division, we can examine a simpler case using CRC4. The ITU has standardized the value 10011 for the CRC4 generator polynomial in a standard called
G.704 [G704]. If we wish to send the 16-bit message 1001111000101111, we first
begin with the long (mod-2) binary division shown in Figure 3-4.
Figure 3-4
Long (mod-2) binary division demonstrating the computation of a CRC4
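The same mod-2 division is easy to reproduce programmatically with a shift-and-XOR register (a sketch; the function name is ours):

```python
def crc_remainder(bits, poly):
    """Compute the n-bit CRC remainder by shift-and-XOR mod-2 division.

    bits: the message, MSB first, already augmented with n zero bits.
    poly: the (n+1)-bit generator polynomial as a list of bits.
    """
    n = len(poly) - 1
    divisor = int("".join(map(str, poly)), 2)
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg >> n:          # leading bit set: "subtract" (XOR) the divisor
            reg ^= divisor
    return reg                # the n-bit remainder

msg = [int(b) for b in "1001111000101111"] + [0, 0, 0, 0]  # augmented message
rem = crc_remainder(msg, [1, 0, 0, 1, 1])                  # G.704 CRC4 generator
```

A useful property to check: appending the remainder bits to the original (unaugmented) message yields a bit string that divides evenly by the generator, which is exactly what the receiver's verification exploits.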
In this figure, we see that the remainder after division is the 4-bit value 1100.
Ordinarily, the one's complement of this value (0011) would be placed in a CRC or
Frame Check Sequence (FCS) field in the frame. Upon receipt, the receiver performs
the same division and checks whether the value in the FCS field matches the computed remainder. If the two do not match, the frame was likely damaged in transit
and is usually discarded. The CRC family of functions can be used to provide a
strong indicator of corrupted messages because any change in the bit pattern is
highly likely to cause a change in the remainder term.

Frame Sizes
There is both a minimum and a maximum size of Ethernet frames. The minimum
is 64 bytes, requiring a minimum data area (payload) length of 46 bytes (no tags).
In cases where the payload is smaller, pad bytes (value 0) are appended to the end
of the payload portion to ensure that the minimum length is enforced.
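The padding computation is straightforward (a sketch; the constants reflect the untagged frame layout, and the function name is ours):

```python
MIN_FRAME = 64        # minimum frame size: header + payload + CRC
HEADER_LEN = 14       # DST (6) + SRC (6) + Length/Type (2)
CRC_LEN = 4
MIN_PAYLOAD = MIN_FRAME - HEADER_LEN - CRC_LEN   # 46 bytes, untagged frame

def pad_payload(payload: bytes) -> bytes:
    """Append zero-valued pad bytes so the frame meets the 64-byte minimum."""
    shortfall = MIN_PAYLOAD - len(payload)
    return payload + bytes(max(0, shortfall))
```

A 10-byte payload thus picks up 36 zero bytes of padding, while anything of 46 bytes or more is left untouched.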
The minimum was important for the original 10Mb/s Ethernet using CSMA/CD.
In order for a transmitting station to know which frame encountered a collision, a
limit of 2500m (five 500m cable segments with four repeaters) was placed upon
the length of an Ethernet network. Given that the signal propagation rate in
copper is about 0.77c (roughly 2.31 × 10^8 m/s), and given the transmission time of 64 bytes
to be (64 * 8/10,000,000) = 51.2µs at 10Mb/s, a minimum-size frame could consume about 11,000m of cable. With a maximum of 2500m of cable, the maximum
round-trip distance from one station to another is 5000m. The designers of Ethernet included a factor of 2 overdesign in fixing the minimum frame size, so in all
compliant cases (and many noncompliant cases), the last bit of an outgoing frame
would still be in the process of being transmitted after the time required for its signal to arrive at a maximally distant receiver and return. If a collision is detected,
the transmitting station thus knows with certainty which frame collided—the one
it is currently transmitting. In this case, the station sends a jamming signal (high
voltage) to alert other stations, which then initiate a random binary exponential
backoff procedure.
The maximum frame size of conventional Ethernet is 1518 bytes (including
the 4-byte CRC and 14-byte header). This value represents a sort of trade-off: if
a frame contains an error (detected on receipt by an incorrect CRC), only 1.5KB
need to be retransmitted to repair the problem. On the other hand, the size limits
the MTU to not more than 1500 bytes. In order to send a larger message, multiple
frames are required (e.g., 64KB, a common larger size used with TCP/IP networks,
would require at least 44 frames).
The unfortunate consequence of requiring multiple Ethernet frames to hold a
larger upper-layer PDU is that each frame contributes a fixed overhead (14 bytes
header, 4 bytes CRC). To make matters worse, Ethernet frames cannot be squished
together on the network without any space between them, in order to allow the
Ethernet hardware receiver circuits to properly recover data from the network and
to provide the opportunity for other stations to interleave their traffic with the
existing Ethernet traffic. The Ethernet II specification, in addition to specifying a
7-byte preamble and 1-byte SFD that precedes any Ethernet frame, also specifies
an inter-packet gap (IPG) of 12 byte times (9.6µs at 10Mb/s, 960ns at 100Mb/s, 96ns
at 1000Mb/s, and 9.6ns at 10,000Mb/s). Thus, the per-frame efficiency for Ethernet
II is at most 1500/(12 + 8 + 14 + 1500 + 4) = 0.975293, or about 98%. One way to
improve efficiency when moving large amounts of data across an Ethernet would
be to make the frame size larger. This has been accomplished using Ethernet jumbo
frames [JF], a nonstandard extension to Ethernet (in 1000Mb/s Ethernet switches
primarily) that typically allows the frame size to be as large as 9000 bytes. Some
environments make use of so-called super jumbo frames, which are usually understood to carry more than 9000 bytes. Care should be taken when using jumbo
frames, as these larger frames are not interoperable with the smaller 1518-byte
frame size used by most legacy Ethernet equipment.
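These per-frame overheads are simple to quantify (a sketch; the names are ours, and the constants come from the discussion above):

```python
PREAMBLE_AND_SFD = 8   # 7-byte preamble + 1-byte SFD
HEADER = 14            # DST + SRC + Length/Type
CRC = 4
IPG = 12               # inter-packet gap, in byte times

def efficiency(payload_bytes):
    """Fraction of wire time carrying payload for back-to-back frames."""
    overhead = IPG + PREAMBLE_AND_SFD + HEADER + CRC
    return payload_bytes / (overhead + payload_bytes)

def frames_needed(message_bytes, mtu=1500):
    """Minimum number of frames to carry a message (ceiling division)."""
    return -(-message_bytes // mtu)

# efficiency(1500) is about 0.9753; efficiency(9000) (a jumbo frame)
# raises this to about 0.9958; frames_needed(65536) == 44.
```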
3.2.3 802.1p/q: Virtual LANs and QoS Tagging
With the growing use of switched Ethernet, it has become possible to interconnect
every computer at a site on the same Ethernet LAN. The advantage of doing this
is that any host can directly communicate with any other host, using IP and other
network-layer protocols, and requiring little or no administrator configuration. In
addition, broadcast and multicast traffic (see Chapter 9) is distributed to all hosts
that may wish to receive it without having to set up special multicast routing protocols. While these represent some of the advantages of placing many stations on the
same Ethernet, having broadcast traffic go to every computer can create an undesirable amount of network traffic when many hosts use broadcast, and there may
be some security reasons to disallow complete any-to-any station communication.
To address some of these problems with running large, multiuse switched
networks, IEEE extended the 802 LAN standards with a capability called virtual
LANs (VLANs) in a standard known as 802.1q [802.1Q-2005]. Compliant Ethernet
switches isolate traffic among hosts to common VLANs. Note that because of this
isolation, two hosts attached to the same switch but operating on different VLANs
require a router between them for traffic to flow. Combination switch/router
devices have been created to address this need, and ultimately the performance of
routers has been improved to match the performance of VLAN switching. Thus,
the appeal of VLANs has diminished somewhat, in favor of modern high-performance routers. Nonetheless, VLANs remain popular in some environments and are important to understand.
Several methods are used to specify the station-to-VLAN mapping. Assigning VLANs by port is a simple and common method, whereby the switch port
to which the station is attached is assigned a particular VLAN, so any station so
attached becomes a member of the associated VLAN. Other options include MAC-address-based VLANs that use tables within Ethernet switches to map a station's
MAC address to a corresponding VLAN. This can become difficult to manage if
stations change their MAC addresses (which they do sometimes, thanks to the
behavior of some users). IP addresses can also be used as a basis for assigning VLANs.
When stations in different VLANs are attached to the same switch, the switch
ensures that traffic does not leak from one VLAN to another, irrespective of the
types of Ethernet interfaces being used by the stations. When multiple VLANs
must span multiple switches (trunking), it becomes necessary to label Ethernet
frames with the VLAN to which they belong before they are sent to another
switch. Support for this capability uses a tag called the VLAN tag (or header),
which holds 12 bits of VLAN identifier (providing for 4096 VLANs, although VLAN
0 and VLAN 4095 are reserved). It also contains 3 bits of priority for supporting
QoS, defined in the 802.1p standard, as indicated in Figure 3-3. In many cases, the
administrator must configure the ports of the switch to be used to send 802.1p/q
frames by enabling trunking on the appropriate ports. To make this job somewhat
easier, some switches support a native VLAN option on trunked ports, meaning
that untagged frames are by default associated with the native VLAN. Trunking
ports are used to interconnect VLAN-capable switches, and other ports are typically used to attach stations. Some switches also support proprietary methods for
VLAN trunking (e.g., the Cisco Inter-Switch Link (ISL) protocol).
802.1p specifies a mechanism to express a QoS identifier on each frame. The
802.1p header includes a 3-bit-wide Priority field indicating a QoS level. This
standard is an extension of the 802.1q VLAN standard. The two standards work
together and share bits in the same header. With the 3 available bits, eight classes
of service are defined. Class 0, the lowest priority, is for conventional, best-effort
traffic. Class 7 is the highest priority and might be used for critical routing or network management functions. The standards specify how priorities are encoded in
packets but leave the policy that governs which packets should receive which class,
and the underlying mechanisms implementing prioritized services, to be defined
by the implementer. Thus, the way traffic of one priority class is handled relative to
another is implementation- or vendor-defined. Note that 802.1p can be used independently of VLANs if the VLAN ID field in the 802.1p/q header is set to 0.
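Constructing the 4-byte tag shared by the two standards is straightforward (a sketch; the function name is ours):

```python
import struct

TPID = 0x8100   # Length/Type value identifying a Q-tagged frame

def vlan_tag(vid, priority=0):
    """Build an 802.1p/q tag: a 16-bit TPID followed by the 16-bit TCI.

    The TCI holds 3 priority bits, a drop-eligibility bit (left 0 here),
    and a 12-bit VLAN ID. VID 0 means "priority tag only" (802.1p used
    without a VLAN); VID 4095 is reserved.
    """
    if not (0 <= vid <= 4094 and 0 <= priority <= 7):
        raise ValueError("VID or priority out of range")
    tci = (priority << 13) | vid
    return struct.pack("!HH", TPID, tci)
```

For instance, vlan_tag(2) produces the bytes 81 00 00 02, matching the VLAN-2, priority-0 tag seen in the capture discussed below.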
The Linux command for manipulating 802.1p/q information is called vconfig. It can be used to add and remove virtual interfaces associating VLAN IDs with
physical interfaces. It can also be used to set 802.1p priorities, change the way virtual interfaces are identified, and influence the mapping between packets tagged
with certain VLAN IDs and how they are prioritized during protocol processing
in the operating system. The following commands add a virtual interface to interface eth1 with VLAN ID 2, remove it, change the way such virtual interfaces are
named, and add a new interface:
Linux# vconfig add eth1 2
Added VLAN with VID == 2 to IF -:eth1:-
Linux# ifconfig eth1.2
eth1.2 Link encap:Ethernet HWaddr 00:04:5A:9F:9E:80
RX packets:0 errors:0 dropped:0 overruns:0
TX packets:0 errors:0 dropped:0 overruns:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Linux# vconfig rem eth1.2
Removed VLAN -:eth1.2:
Linux# vconfig set_name_type VLAN_PLUS_VID
Set name-type for VLAN subsystem. Should be visible in
Linux# vconfig add eth1 2
Added VLAN with VID == 2 to IF -:eth1:
Linux# ifconfig vlan0002
vlan0002 Link encap:Ethernet HWaddr 00:04:5A:9F:9E:80
RX packets:0 errors:0 dropped:0 overruns:0
TX packets:0 errors:0 dropped:0 overruns:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Here we can see that the default method of naming virtual interfaces in Linux
is based on concatenating the associated physical interface with the VLAN ID. For
example, VLAN ID 2 associated with the interface eth1 is called eth1.2. This
example also shows how an alternative naming method can be used, whereby the
VLANs are enumerated by the names vlan<n> where <n> is the identifier of the
VLAN. Once this is set up, frames sent on the VLAN device are tagged with the
VLAN ID, as expected. We can see this using Wireshark, as shown in Figure 3-5.
Figure 3-5
Frames tagged with the VLAN ID as shown in Wireshark. The default columns and settings have been changed to display the VLAN ID and raw Ethernet addresses.
Link Layer
This figure shows an ARP packet (see Chapter 4) carried on VLAN 2. We can
see that the frame size is 60 bytes (not including CRC). The frame is encapsulated
using the Ethernet II encapsulation with type 0x8100, indicating a VLAN. Other
than the VLAN header, which indicates that this frame belongs to VLAN 2 and
has priority 0, this frame is unremarkable. All the other fields are as we would
expect with a regular ARP packet.
802.1AX: Link Aggregation (Formerly 802.3ad)
Some systems equipped with multiple network interfaces are capable of bonding or
link aggregation. With link aggregation, two or more interfaces are treated as one in
order to achieve greater reliability through redundancy or greater performance by
splitting (striping) data across multiple interfaces. The IEEE Amendment 802.1AX
[802.1AX-2008] defines the most common method for performing link aggregation
and the Link Aggregation Control Protocol (LACP) to manage such links. LACP uses
IEEE 802 frames of a particular format (called LACPDUs).
Using link aggregation on Ethernet switches that support it can be a cost-effective alternative to investing in switches with high-speed network ports. If
more than one port can be aggregated to provide adequate bandwidth, higher-speed ports may not be required. Link aggregation may be supported not only on
network switches but across multiple network interface cards (NICs) on a host computer. Often, aggregated ports must be of the same type, operating in the same
mode (i.e., half- or full-duplex).
Linux has the capability to implement link aggregation (bonding) across different types of devices using the following commands:
Linux# modprobe bonding
Linux# ifconfig bond0 netmask
Linux# ifenslave bond0 eth0 wlan0
This set of commands first loads the bonding driver, which is a special type
of device driver supporting link aggregation. The second command creates the
bond0 interface with the IPv4 address information provided. Although providing
the IP-related information is not critical for creating an aggregated interface, it is
typical. Once the ifenslave command executes, the bonding device, bond0, is
labeled with the MASTER flag, and the eth0 and wlan0 devices are labeled with
the SLAVE flag:
bond0 Link encap:Ethernet HWaddr 00:11:A3:00:2C:2A
inet addr: Bcast: Mask:
inet6 addr: fe80::211:a3ff:fe00:2c2a/64 Scope:Link
RX packets:2146 errors:0 dropped:0 overruns:0 frame:0
TX packets:985 errors:0 dropped:0 overruns:0 carrier:0
collisions:18 txqueuelen:0
RX bytes:281939 (275.3 KiB) TX bytes:141391 (138.0 KiB)
eth0 Link encap:Ethernet HWaddr 00:11:A3:00:2C:2A
RX packets:1882 errors:0 dropped:0 overruns:0 frame:0
TX packets:961 errors:0 dropped:0 overruns:0 carrier:0
collisions:18 txqueuelen:1000
RX bytes:244231 (238.5 KiB) TX bytes:136561 (133.3 KiB)
Interrupt:20 Base address:0x6c00
wlan0 Link encap:Ethernet HWaddr 00:11:A3:00:2C:2A
RX packets:269 errors:0 dropped:0 overruns:0 frame:0
TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:38579 (37.6 KiB) TX bytes:4830 (4.7 KiB)
In this example, we have bonded together a wired Ethernet interface with
a Wi-Fi interface. The master device, bond0, is assigned the IPv4 address information we would typically assign to either of the individual interfaces, and it
receives the first slave’s MAC address by default. When IPv4 traffic is sent out of
the bond0 virtual interface, there are a number of possibilities as to which of the
slave interfaces will carry it. In Linux, the options are selected using arguments
provided when the bonding driver is loaded. For example, a mode option determines whether round-robin delivery is used between the interfaces, one interface
acts as a backup to the other, the interface is selected based on performing an XOR
of the MAC source and destination addresses, frames are copied to all interfaces,
802.3ad standard link aggregation is performed, or more advanced load-balancing
options are used. The second mode is used for high-availability systems that can
fail over to a redundant network infrastructure if one link has ceased functioning (detectable by MII monitoring; see [BOND] for more details). The third mode
is intended to choose the slave interface based on the traffic flow. With enough
different destinations, traffic between the two stations is pinned to one interface.
This can be useful when trying to minimize reordering while also trying to load-balance traffic across multiple slave interfaces. The fourth mode is for fault tolerance. The fifth mode is for use with 802.3ad-capable switches, to enable dynamic
aggregation over homogeneous links.
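As an illustration of the third (XOR) mode, the following Python sketch mimics the simple layer-2 hash the Linux bonding documentation describes: XOR the final bytes of the source and destination MAC addresses, modulo the slave count. The exact hash depends on the driver's configured transmit hash policy, so treat this as an approximation.

```python
def xor_mode_slave(src_mac: bytes, dst_mac: bytes, n_slaves: int) -> int:
    """Pick a slave index; one src/dst pair always maps to the same link,
    which preserves frame ordering within a conversation."""
    return (src_mac[5] ^ dst_mac[5]) % n_slaves

# Two hypothetical stations: the pair is pinned to one of two slaves.
src = bytes.fromhex("00087493c83c")
dst = bytes.fromhex("001422f4195f")
print(xor_mode_slave(src, dst, 2))  # 1
```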
The LACP protocol is designed to make the job of setting up link aggregation
simpler by avoiding manual configuration. Typically the LACP “actor” (client) and
“partner” (server) send LACPDUs every second once enabled. LACP automatically determines which member links can be aggregated into a link aggregation
group (LAG) and aggregates them. This is accomplished by sending a collection of
information (MAC address, port priority, port number, and key) across the link. A
receiving station can compare the values it sees from other ports and perform the
aggregation if they match. Details of LACP are covered in [802.1AX-2008].
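The aggregation decision can be pictured as grouping ports by the identifying values their partner advertises. The following Python sketch (hypothetical data, not the LACP state machine itself) groups local ports by the partner's (system MAC, key) pair; ports in the same group are candidates for one LAG.

```python
from collections import defaultdict

def group_into_lags(ports):
    """ports: iterable of (port_no, partner_system_mac, partner_key).
    Ports whose partner advertises the same (MAC, key) pair are candidates
    for the same link aggregation group (LAG)."""
    lags = defaultdict(list)
    for port_no, mac, key in ports:
        lags[(mac, key)].append(port_no)
    return dict(lags)

# Hypothetical LACPDU contents seen on three local ports:
ports = [(1, "00:11:a3:00:2c:2a", 17),
         (2, "00:11:a3:00:2c:2a", 17),
         (3, "00:11:a3:00:2c:2a", 99)]   # different key: separate group
print(group_into_lags(ports))
```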
Full Duplex, Power Save, Autonegotiation, and 802.1X
Flow Control
When Ethernet was first developed, it operated only in half-duplex mode using
a shared cable. That is, data could be sent only one way at one time, so only one
station was sending a frame at any given point in time. With the development of
switched Ethernet, the network was no longer a single piece of shared wire, but
instead many sets of links. As a result, multiple pairs of stations could exchange
data simultaneously. In addition, Ethernet was modified to operate in full duplex,
effectively disabling the collision detection circuitry. This also allowed the physical length of the Ethernet to be extended, because the timing constraints associated with half-duplex operation and collision detection were removed.
In Linux, the ethtool program can be used to query whether full duplex is
supported and whether it is being used. This tool can also display and set many
other interesting properties of an Ethernet interface:
Linux# ethtool eth0
Settings for eth0:
Supported ports: [ TP MII ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
Advertised auto-negotiation: Yes
Speed: 10Mb/s
Duplex: Half
Port: MII
Transceiver: internal
Auto-negotiation: on
Current message level: 0x00000001 (1)
Link detected: yes
Linux# ethtool eth1
Settings for eth1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
Transceiver: internal
Auto-negotiation: on
Section 3.3 Full Duplex, Power Save, Autonegotiation, and 802.1X Flow Control
Supports Wake-on: umbg
Wake-on: g
Current message level: 0x00000007 (7)
Link detected: yes
In this example, the first Ethernet interface (eth0) is attached to a half-duplex
10Mb/s network. We can see that it is capable of autonegotiation, which is a mechanism originating with 802.3u to enable interfaces to exchange information such
as speed and capabilities such as half- or full-duplex operation. Autonegotiation
information is exchanged at the physical layer using signals sent when data is
not being transmitted or received. We can see that the second Ethernet interface
(eth1) also supports autonegotiation and has set its rate to 100Mb/s and operation
mode to full duplex. The other values (Port, PHYAD, Transceiver) identify the
physical port type, its address, and whether the physical-layer circuitry is internal
or external to the NIC. The current message-level value is used to configure log
messages associated with operating modes of the interface; its behavior is specific to the driver being used. We discuss the wake-on values after the following example.
In Windows, details such as these are available by navigating to Control Panel
| Network Connections and then right-clicking on the interface of interest, selecting Properties, and then clicking the Configure box and selecting the Advanced
tab. This brings up a menu similar to the one shown in Figure 3-6 (this particular
example is from an Ethernet interface on a Windows 7 machine).
Figure 3-6 Advanced tab of network interface properties in Windows (7). This control allows the
user to supply operating parameters to the network device driver.
In Figure 3-6, we can see the special features that can be configured using the
adapter’s device driver. For this particular adapter and driver, 802.1p/q tags can
be enabled or disabled, as can flow control and wake-up capabilities (see Section
3.3.2). The speed and duplex can be set by hand, or to the more typical autonegotiation option.
Duplex Mismatch
Historically, there have been some interoperability problems using autonegotiation, especially when a computer and its associated switch port are configured
using different duplex configurations or when autonegotiation is disabled at one
end of the link but not the other. In this case, a so-called duplex mismatch can occur.
Perhaps surprisingly, when this happens the connection does not completely fail
but instead may suffer significant performance degradation. When the network
has moderate to heavy traffic in both directions (e.g., during a large data transfer), a half-duplex interface can detect incoming traffic as a collision, triggering
the exponential backoff function of the CSMA/CD Ethernet MAC. At the same
time, the data triggering the collision is lost and may require higher-layer protocols such as TCP to retransmit. Thus, the performance degradation may be noticed
only when there is sufficient traffic for the half-duplex interface to be receiving
data at the same time it is sending, a situation that does not generally occur under
light load. Some researchers have attempted to build analysis tools to detect this
unfortunate situation [SC05].
Wake-on LAN (WoL), Power Saving, and Magic Packets
In both the Linux and Windows examples, we saw some indication of power management capabilities. In Windows the Wake-Up Capabilities and in Linux the Wake-On options are used to bring the network interface and/or host computer out of
a lower-power (sleep) state based on the arrival of certain kinds of packets. The
kinds of packets used to trigger the change to full-power state can be configured.
In Linux, the Wake-On values are zero or more bits indicating whether receiving the following types of frames trigger a wake-up from a low-power state: any
physical-layer (PHY) activity (p), unicast frames destined for the station (u), multicast frames (m), broadcast frames (b), ARP frames (a), magic packet frames (g),
and magic packet frames including a password. These can be configured using
options to ethtool. For example, the following command can be used:
Linux# ethtool -s eth0 wol umgb
This command configures the eth0 device to signal a wake-up if any of the
frames corresponding to the types u, m, g, or b is received. Windows provides a
similar capability, but the standard user interface allows only magic packet frames
and a predefined subset of the u, m, b, and a frame types. Magic packets contain
a special repeated pattern of the byte value 0xFF. Often, such frames are sent as a
form of UDP packet (see Chapter 10) encapsulated in a broadcast Ethernet frame.
Several tools are available to generate them, including wol [WOL]:
Linux# wol 00:08:74:93:C8:3C
Waking up 00:08:74:93:C8:3C...
The result of this command is to construct a magic packet, which we can view
using Wireshark (see Figure 3-7).
Figure 3-7
A magic packet frame in Wireshark begins with 6 0xFF bytes and then repeats the MAC
address 16 times.
The packet shown in Figure 3-7 is mostly a conventional UDP packet, although
the port numbers (1126 and 40000) are arbitrary. The most unusual part of the
packet is the data area. It contains an initial 6 bytes with the value 0xFF. The rest
of the data area includes the destination MAC address 00:08:74:93:C8:3C repeated
16 times. This data payload pattern defines the magic packet.
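A magic packet is easy to construct by hand. The following Python sketch builds the payload just described (6 bytes of 0xFF followed by the target MAC repeated 16 times) and, optionally, sends it as a UDP broadcast. The port number is arbitrary, as noted above; the NIC matches on the payload pattern.

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Six 0xFF bytes followed by the target MAC repeated 16 times."""
    raw = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + raw * 16

def send_wol(mac: str, port: int = 40000) -> None:
    """Broadcast the pattern in a UDP datagram (port 40000 mirrors the
    example capture; any port works)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), ("255.255.255.255", port))

pkt = magic_packet("00:08:74:93:C8:3C")
print(len(pkt))  # 102 bytes of payload: 6 + 16 * 6
```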
Link-Layer Flow Control
Operating an extended Ethernet LAN in full-duplex mode and across segments of
different speeds may require the switches to buffer (store) frames for some period
of time. This happens, for example, when multiple stations send to the same destination (called output port contention). If the aggregate traffic rate headed for a
station exceeds the station’s link rate, frames start to be stored in the intermediate
switches. If this situation persists for a long time, frames may be dropped.
One way to mitigate this situation is to apply flow control to senders (i.e., slow
them down). Some Ethernet switches (and interfaces) implement flow control by
sending special signal frames between switches and NICs. Flow control signals to
the sender that it must slow down its transmission rate, although the specification
leaves the details of this to the implementation. Ethernet uses an implementation
of flow control called PAUSE messages (also called PAUSE frames), specified by
802.3x [802.3-2008].
PAUSE messages are contained in MAC control frames, identified by the
Ethernet Length/Type field having the value 0x8808 and using the MAC control
opcode of 0x0001. A receiving station seeing this is advised to slow its rate. PAUSE
frames are always sent to the MAC address 01:80:C2:00:00:01 and are used only
on full-duplex links. They include a hold-off time value (specified in quanta equal
to 512 bit times), indicating how long the sender should pause before continuing
to transmit.
The MAC control frame is a frame format using the regular encapsulation
from Figure 3-3, but with a 2-byte opcode immediately following the Length/Type
field. PAUSE frames are essentially the only type of frames that uses MAC control
frames. They include a 2-byte quantity encoding the hold-off time. Implementation
of the “entire” MAC control layer (basically, just 802.3x flow control) is optional.
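The following Python sketch (an illustration of the layout described above, not driver code) assembles a PAUSE frame and converts a hold-off value from quanta to seconds at a given link rate.

```python
def pause_duration_s(quanta: int, link_bps: int) -> float:
    """Each pause quantum is 512 bit times at the link's own rate."""
    return quanta * 512 / link_bps

def pause_frame(quanta: int, src_mac: bytes) -> bytes:
    """Assemble a PAUSE frame (minus the CRC)."""
    dst = bytes.fromhex("0180c2000001")        # reserved PAUSE destination address
    length_type = (0x8808).to_bytes(2, "big")  # MAC control
    opcode = (0x0001).to_bytes(2, "big")       # PAUSE
    hold_off = quanta.to_bytes(2, "big")
    frame = dst + src_mac + length_type + opcode + hold_off
    return frame + bytes(60 - len(frame))      # zero-pad to the 60-byte minimum

frame = pause_frame(0xFFFF, bytes.fromhex("00087493c83c"))
print(len(frame))                                        # 60
print(round(pause_duration_s(0xFFFF, 10**9) * 1000, 1))  # 33.6 (ms): max pause at 1000Mb/s
```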
Using Ethernet-layer flow control may have a significant negative side effect,
and for this reason it is typically not used. When multiple stations are sending
through a switch (see the next section) that is becoming overloaded, the switch
may naturally send PAUSE frames to all hosts. Unfortunately, the utilization of
the switch’s memory may not be symmetric with respect to the sending hosts, so
some may be penalized (flow-controlled) even though they were not responsible
for much of the traffic passing through the switch.
Bridges and Switches
The IEEE 802.1d standard specifies the operation of bridges, and thus switches,
which are essentially high-performance bridges. A bridge or switch is used to join
multiple physical link-layer networks (e.g., a pair of physical Ethernet segments) or
groups of stations. The most basic setup involves connecting two switches to form
an extended LAN, as shown in Figure 3-8.
Section 3.4 Bridges and Switches
Figure 3-8
A simple extended Ethernet LAN with two switches. Each switch port has a number for
reference, and each station (including each switch) has its own MAC address.
Switches A and B in the figure have been interconnected to form an extended
LAN. In this particular example, client systems are connected to A and servers
to B, and ports are numbered for reference. Note that every network element,
including each switch, has its own MAC address. Nonlocal MAC addresses are
“learned” by each bridge over time so that eventually every switch knows the port
upon which every station can be reached. These lists are stored in tables (called
filtering databases) within each switch on a per-port (and possibly per-VLAN) basis.
As an example, after each switch has learned the location of every station, these
databases would contain the information shown in Figure 3-9.
Switch A’s Database
Switch B’s Database
Figure 3-9 Filtering databases on switches A and B from Figure 3-8 are created over time (“learned”)
by observing the source address on frames seen on switch ports.
When a switch (bridge) is first turned on, its database is empty, so it does
not know the location of any stations except itself. Whenever it receives a frame
destined for a station other than itself, it makes a copy for each of the ports other
than the one on which the frame arrived and sends a copy of the frame out of each
one. If switches (bridges) never learned the location of stations, every frame would
be delivered across every network segment, leading to unwanted overhead. The
learning capability reduces overhead significantly and is a standard feature of
switches and bridges.
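The learning, filtering, and flooding behavior (including the address ageing discussed shortly) fits in a few lines. The following Python sketch is a toy model of a learning bridge, not an implementation of 802.1d.

```python
import time

class LearningBridge:
    """Toy model of learning/forwarding behavior (not production code)."""
    def __init__(self, ports, ageing_s=300.0):
        self.ports = ports
        self.ageing_s = ageing_s
        self.fdb = {}  # filtering database: MAC -> (port, time last seen)

    def receive(self, src_mac, dst_mac, in_port, now=None):
        """Return the list of ports the frame is sent out of."""
        now = time.monotonic() if now is None else now
        self.fdb[src_mac] = (in_port, now)            # learn the sender's port
        entry = self.fdb.get(dst_mac)
        if entry and now - entry[1] <= self.ageing_s:
            # Known destination: forward on its port, or filter if it is local.
            return [entry[0]] if entry[0] != in_port else []
        # Unknown or aged-out destination: flood out every other port.
        return [p for p in self.ports if p != in_port]

b = LearningBridge(ports=[1, 2, 3], ageing_s=300)
print(b.receive("S", "D", in_port=1, now=0.0))    # D unknown: flood to [2, 3]
print(b.receive("D", "S", in_port=2, now=1.0))    # S was learned on port 1: [1]
print(b.receive("S", "D", in_port=1, now=400.0))  # D's entry aged out: flood again
```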
Today, most operating systems support the capability to bridge between network interfaces, meaning that a standard computer with multiple interfaces can
be used as a bridge. In Windows, for example, interfaces may be bridged together
by navigating to the Network Connections menu from the Control Panel, highlighting the interfaces to bridge, right-clicking the mouse, and selecting Bridge
Connections. When this is done, a new icon appears that represents the bridging
function itself. Most of the normal network properties associated with the interfaces are gone and instead appear on the bridge device (see Figure 3-10).
Figure 3-10
In Windows, the bridge device is created by highlighting the network interfaces to be
bridged, right-clicking, and selecting the Bridge Network Interfaces function. Once the
bridge is established, further modifications are made to the bridge device.
Figure 3-10 shows the Properties panels for the network bridge virtual device
on Windows 7. The bridge device’s properties include a list of the underlying
devices being bridged and the set of services running on the bridge (e.g., the
Microsoft Networks client, File and Printer Sharing, etc.). Linux works in a similar
way, using command-line arguments. We use the topology shown in Figure 3-11
for this example.
Figure 3-11
In this simple topology, a Linux-based PC is configured to operate as a bridge between
the two Ethernet segments it interconnects. As a learning bridge, it accumulates tables
of which port should be used to reach the various other systems on the extended LAN.
The simple network in Figure 3-11 uses a Linux-based PC with two Ethernet
ports as a bridge. Attached to port 2 is a single station, and the rest of the network
is attached to port 1. The following commands enable the bridge:
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig eth0 up
ifconfig eth1 up
ifconfig br0 up
This series of commands creates a bridge device br0 and adds the interfaces
eth0 and eth1 to the bridge. Interfaces can be removed using the brctl delif
command. Once the interfaces are established, the brctl showmacs command
can be used to inspect the filter databases (called forwarding databases or fdbs in
Linux terminology):
Linux# brctl show
bridge name     bridge id           STP enabled     interfaces
br0             8000.0007e914a9c1   no              eth0
                                                    eth1
Linux# brctl showmacs br0
port no mac addr is local? ageing timer
1 00:04:5a:9f:9e:80 no 0.79
2 00:07:e9:14:a9:c1 yes 0.00
1 00:08:74:93:c8:3c yes 0.00
2 00:14:22:f4:19:5f no 0.81
1 00:17:f2:e7:6d:91 no 2.53
1 00:90:f8:00:90:b7 no 17.13
The output of this command reveals one other detail about bridges. Because
stations may move around, have their network cards replaced, have their MAC
address changed, or other things, once the bridge discovers that a MAC address
is reachable via a certain port, this information cannot be assumed to be correct
forever. To deal with this issue, each time an address is learned, a timer is started
(commonly defaulted to 5 minutes). In Linux, a fixed amount of time associated
with the bridge is applied to each learned entry. If the address in the entry is not
seen again within the specified "ageing" time, the entry is removed, as indicated here:
Linux# brctl setageing br0 1
Linux# brctl showmacs br0
port no mac addr is local? ageing timer
1 00:04:5a:9f:9e:80 no 0.76
2 00:07:e9:14:a9:c1 yes 0.00
1 00:08:74:93:c8:3c yes 0.00
2 00:14:22:f4:19:5f no 0.78
1 00:17:f2:e7:6d:91 no 0.00
Here, we have set the ageing value unusually low for demonstration purposes. When an entry is removed because of aging, subsequent frames for the
removed destination are once again sent out of every port except the receiving one
(called flooding), and the entry is placed anew into the filtering database. The use
of filtering databases and learning is really a performance optimization—if the
tables are empty, the network experiences more overhead but still functions. Next
we turn our attention to the case where more than two bridges are interconnected
with redundant links. In this situation, flooding of frames could lead to a sort of
flooding catastrophe with frames looping forever. Obviously, we require a way of
dealing with this problem.
Spanning Tree Protocol (STP)
Bridges may operate in isolation, or in combination with other bridges. When more
than two bridges are in use (or in general when switch ports are cross-connected),
the possibility exists for a cascading, looping set of frames to be formed. Consider
the network shown in Figure 3-12.
Assume that the switches in Figure 3-12 have just been turned on and their
filtering databases are empty. When station S sends a frame, switch B replicates
the frame on ports 7, 8, and 9. So far, the initial frame has been “amplified” three
times. These frames are received by switches A, D, and C. Switch A produces copies of the frame on ports 2 and 3. Switches D and C produce more copies on ports
20, 22 and 13, 14, respectively. The amplification factor has grown to 6, with copies
of the frames traveling in both directions among switches A, C, and D. Once these
frames arrive, the forwarding databases begin to oscillate as the bridge attempts to
figure out which port is really the one through which station S should be reached.
Obviously, this situation is intolerable. If it were allowed to occur, bridges used in
such configurations would be useless. Fortunately, there is a protocol that is used
to avoid this situation called the Spanning Tree Protocol (STP). We describe STP in
Figure 3-12
An extended Ethernet network with four switches and multiple redundant links. If
simple flooding were used in forwarding frames through this network, a catastrophe
would occur because of excess multiplying traffic (a so-called broadcast storm). This
type of situation requires the use of the STP.
some detail to explain why some approach to duplicate suppression is needed for
bridges and switches. In the current standard [802.1D-2004], conventional STP is
replaced with the Rapid Spanning Tree Protocol (RSTP), which we describe after the
conventional STP preliminaries.
STP works by disabling certain ports at each bridge so that topological loops
are avoided (i.e., no duplicate paths between bridges are permitted), yet the topology is not partitioned—all stations can be reached. Mathematically, a spanning
tree is a collection of all of the nodes and some of the edges of a graph such that
there is a path or route from any node to any other node (spanning the graph), but
there are no loops (the edge set forms a tree). There can be many spanning trees on
a graph. STP finds one of them for the graph formed by bridges as nodes and links
as edges. Figure 3-13 illustrates the idea.
Figure 3-13 Using STP, the B-A, A-C, and C-D links have become active on the spanning tree. Ports
6, 7, 1, 2, 13, 14, and 20 are in the forwarding state; all other ports are blocked (i.e., not
forwarding). This keeps frames from looping and avoids broadcast storms. If a configuration change occurs or a switch fails, the blocked ports are changed to the forwarding
state and the bridges compute a new spanning tree.
In this figure, the dark lines represent the links in the network selected by STP
for forwarding frames. None of the other links are used—ports 8, 9, 12, 21, 22, and
3 are blocked. With STP, the various problems raised earlier do not occur, as frames
are created only as the result of another frame arriving. There is no amplification.
Furthermore, looping is avoided because there is only one path between any two
stations. The spanning tree is formed and maintained by bridges using a distributed algorithm running in each bridge.
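To illustrate what the distributed algorithm ultimately computes, the following centralized Python sketch elects a root (smallest identifier) and keeps, for each bridge, its least-cost edge toward the root; every other link is blocked. The topology and costs here are hypothetical; real STP arrives at an equivalent result through BPDU exchange rather than a global computation.

```python
import heapq

def spanning_tree(bridges, links):
    """bridges: {name: id} (smaller id wins the root election);
    links: [(a, b, cost), ...]. Returns the root, the set of active
    (forwarding) links, and the blocked links."""
    root = min(bridges, key=bridges.get)
    adj = {name: [] for name in bridges}
    for a, b, c in links:
        adj[a].append((b, c))
        adj[b].append((a, c))
    dist, parent_edge = {root: 0}, {}
    pq = [(0, root)]
    while pq:                                 # Dijkstra from the root
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, c in adj[u]:
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                parent_edge[v] = (u, v)       # v's root port faces u
                heapq.heappush(pq, (d + c, v))
    active = set(parent_edge.values())
    blocked = [(a, b, c) for (a, b, c) in links
               if (a, b) not in active and (b, a) not in active]
    return root, active, blocked

# Hypothetical four-switch topology; costs use the recommended values
# (4 for 1000Mb/s links, 19 for 100Mb/s links).
links = [("A", "B", 4), ("A", "C", 4), ("B", "C", 19),
         ("B", "D", 19), ("C", "D", 4)]
root, active, blocked = spanning_tree({"A": 1, "B": 2, "C": 3, "D": 4}, links)
print(root, sorted(active))  # A [('A', 'B'), ('A', 'C'), ('C', 'D')]
```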
As with forwarding databases, STP must deal with the situation where bridges
are turned off and on, interface cards are replaced, or MAC addresses are changed.
Clearly, such changes could affect the operation of the spanning tree, so the STP
adapts to these changes. The adaptation is implemented using an exchange of
special frames called Bridge Protocol Data Units (BPDUs). These frames are used
for forming and maintaining the spanning tree. The tree is “grown” from a bridge
elected by the others and known as the “root bridge.”
As mentioned previously, there are many possible spanning trees for a given
network. Determining which one might be the best to use for forwarding frames
depends on a set of costs that can be associated with each link and the location of
the root bridge. Costs are simply integers that are (recommended to be) inversely
proportional to the link speeds. For example, a 10Mb/s link has a recommended
cost of 100, and 100Mb/s and 1000Mb/s links have recommended cost values of 19
and 4, respectively. STP operates by computing least-cost paths to the root bridge
using these costs. If multiple links must be traversed, the corresponding cost is
simply the sum of the link costs.
Port States and Roles
To understand the basic operation of STP, we need to understand the operation of
the state machine for each port at each bridge, as well as the contents of BPDUs.
Each port in each bridge may be in one of five states: blocking, listening, learning,
forwarding, and disabled. The relationship among them can be seen in the state
transition diagram shown in Figure 3-14.
The normal transitions for ports on the spanning tree are indicated in Figure
3-14 by solid arrows, and the smaller arrows with dashed lines indicate changes
due to administrative configuration. After initialization, a port enters the blocking
state. In this state, it does not learn addresses, forward frames, or transmit BPDUs,
but it does monitor received BPDUs in case it needs to be included in the future on
a path to the root bridge, in which case the port transitions to the listening state. In
the listening state, the port is now permitted to send as well as receive BPDUs but
not learn addresses or forward data. After a typical forwarding delay timeout of
15s, a port enters the learning state. Here it is permitted to do all procedures except
forward data. It waits another forwarding delay before entering the forwarding
state and commencing to forward frames.
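The timed progression through these states can be sketched as a simple function of how long ago the port was selected for the spanning tree (timer values as in the text; the administratively controlled disabled state is omitted).

```python
FORWARD_DELAY = 15.0  # default forwarding-delay timer, in seconds

def port_state(t: float) -> str:
    """State of a port t seconds after being selected for the spanning tree
    (negative t: not selected, so blocking)."""
    if t < 0:
        return "blocking"
    if t < FORWARD_DELAY:
        return "listening"      # may send and receive BPDUs only
    if t < 2 * FORWARD_DELAY:
        return "learning"       # additionally learns addresses
    return "forwarding"         # normal operation: forwards data frames

for t in (-1, 5, 20, 31):
    print(t, port_state(t))
```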
Related to the port state machine, each port is said to have a role. This terminology becomes more important with RSTP (discussed later). A port may have
the role of root port, designated port, alternate port, or backup port. Root ports are those
Figure 3-14
Ports transition among four major states in normal STP operation. In the blocking state,
frames are not forwarded, but a topology change or timeout may cause a transition to
the listening state. The forwarding state is the normal state for active switch ports carrying data traffic. The state names in parentheses indicate the port states according to
the RSTP.
ports at the end of an edge on the spanning tree headed toward the root. Designated ports are ports in the forwarding state acting as the port on the least-cost
path to the root from the attached segment. Alternate ports are other ports on an
attached segment that could also reach the root but at higher cost. They are not in
the forwarding state. A backup port is a port connected to the same segment as a
designated port on the same bridge. Thus, backup ports could easily take over for
a failing designated port without disrupting any of the rest of the spanning tree
topology but do not offer an alternate path to the root should the entire bridge fail.
BPDU Structure
To determine the links in the spanning tree, STP uses BPDUs that adhere to the
format shown in Figure 3-15.
The format shown in Figure 3-15 applies to both the original STP as well as
the newer RSTP (described later). BPDUs are always sent to the group address
01:80:C2:00:00:00 (see Chapter 9 for details of link-layer group and Internet multicast addressing) and are not forwarded through a bridge without modification. In
the figure, the DST, SRC, and L/T (Length/Type) fields are part of the conventional
Ethernet (802.3) header of the frame carrying the example BPDU. The 3-byte LLC/
SNAP header is defined by 802.1 and for BPDUs is set to the constant 0x424203.
Not all BPDUs are encapsulated using LLC/SNAP, but this is a common option.
Figure 3-15 BPDUs are carried in the payload area of 802 frames and exchanged between bridges to establish the spanning tree. Important fields include the source, root node, cost to root, and topology change indication. With 802.1w and [802.1D-2004] (including Rapid STP or RSTP), additional
fields indicate the state of the ports.
The Protocol (Prot) field gives the protocol ID number, set to 0. The Version
(Vers) field is set to 0 or 2, depending on whether STP or RSTP is in use. The Type
field is assigned similarly. The Flags field contains Topology Change (TC) and Topology Change Acknowledgment (TCA) bits, defined by the original 802.1d standard.
Additional bits are defined for Proposal (P), Port Role (00, unknown; 01, alternate;
10, root; 11, designated), Learning (L), Forwarding (F), and Agreement (A). These are
discussed in the context of RSTP later. The Root ID field gives the identifier of the root bridge in the eyes of the sender of the frame, whose MAC address
is given in the Bridge ID field. Both of these ID fields are encoded in a special way
that includes a 2-byte Priority field immediately preceding the MAC address. The
priority values can be manipulated by management software in order to force the
spanning tree to be rooted at any particular bridge (Cisco, for example, uses a
default value of 0x8000 in its Catalyst switches).
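The ordering that results from this encoding can be sketched in a few lines of Python. This is an illustrative sketch, not code from the book or the standard: the bridge ID is treated as the 2-byte priority followed by the 6-byte MAC address, compared as one big-endian byte string, so a lower priority always beats a lower MAC address.

```python
# Sketch: an STP bridge ID is a 2-byte priority followed by the 6-byte
# MAC address; the numerically smallest ID wins the root election.

def bridge_id(priority, mac):
    """Encode (priority, MAC string) as the 8-byte comparable ID."""
    return priority.to_bytes(2, "big") + bytes.fromhex(mac.replace(":", ""))

def elect_root(bridges):
    """Return the (priority, mac) pair with the smallest encoded ID."""
    return min(bridges, key=lambda b: bridge_id(*b))

# With equal (default 0x8000) priorities, the lowest MAC address wins...
candidates = [(0x8000, "00:07:e9:14:a9:c1"), (0x8000, "00:e0:00:88:ad:d8")]
assert elect_root(candidates) == (0x8000, "00:07:e9:14:a9:c1")

# ...but lowering a bridge's priority forces it to become the root,
# which is exactly what management software manipulates.
candidates = [(0x8000, "00:07:e9:14:a9:c1"), (0x1000, "00:e0:00:88:ad:d8")]
assert elect_root(candidates) == (0x1000, "00:e0:00:88:ad:d8")
```

Because the priority occupies the most significant bytes, it dominates the comparison; the MAC address serves only as a tiebreaker among bridges configured with the same priority.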
The root path cost is the computed cost to reach the bridge specified in the
Root ID field. The PID field is the port identifier and gives the number of the port
from which the frame was sent appended to a 1-byte configurable Priority field
(default 0x80). The Message Age (MsgA) field gives the message age (see the next
paragraph). The Maximum Age (MaxA) field gives the maximum age before timeout (default: 20s). The Hello Time field gives the time between periodic transmissions of configuration frames. The Forward Delay (Forw Delay) field gives the time
spent in the learning and listening states. All of the age and time fields are given
in units of 1/256s.
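The 1/256s encoding is simple enough to verify by hand; the following hypothetical helpers (not from the book) show the conversion and the (MaxA - MsgA) expiry computation described below:

```python
# Sketch: BPDU time fields are carried in units of 1/256 second.

def to_field(seconds):
    """Convert seconds to the on-the-wire 1/256s units."""
    return int(seconds * 256)

def to_seconds(field):
    """Convert an on-the-wire value back to seconds."""
    return field / 256

assert to_field(20) == 5120      # default Maximum Age of 20s
assert to_field(2) == 512        # default Hello Time of 2s
assert to_seconds(3840) == 15.0  # default Forward Delay of 15s

# Stored BPDU information times out after (MaxA - MsgA); e.g., a BPDU
# received with a message age of 3s expires 17s later.
max_a, msg_a = to_field(20), to_field(3)
assert to_seconds(max_a - msg_a) == 17.0
```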
Section 3.4 Bridges and Switches
The Message Age field is not a fixed value like the other time-related fields.
When the root bridge sends a BPDU, it sets this field to 0. Any bridge receiving the
frame emits frames on its non-root ports with the Message Age field incremented by
1. In essence, the field acts as a hop count, giving the number of bridges by which
the BPDU has been processed before being received. When a BPDU is received on
a port, the information it contains is kept in memory and participates in the STP
algorithm until it is timed out, which happens at time (MaxA – MsgA). Should
this time pass on a root port without receipt of another BPDU, the root bridge is
declared “dead” and the bridge starts the root bridge election process over again.

Building the Spanning Tree
The first job of STP is to elect the root bridge. The root bridge is discovered as
the bridge in the network (or VLAN) with the smallest identifier (priority combined with MAC address). When a bridge initializes, it assumes itself to be the
root bridge and sends configuration BPDUs with the Root ID field matching its
own bridge ID, but if it detects a bridge with a smaller ID, it ceases sending its own
frames and instead adopts the frame it received containing the smaller ID to be the
basis for further BPDUs it sends. The port where the BPDU with the smaller root
ID was received is then marked as the root port (i.e., the port on the path to the root
bridge). The remaining ports are placed in either blocked or forwarding states.

Topology Changes
The next important job of STP is to handle topology changes. Although we could
conceivably use the basic database aging mechanism described earlier to adapt to
changing topologies, this is a poor approach because the aging timers can take a
long time (5 minutes) to delete incorrect entries. Instead, STP incorporates a way
to detect topology changes and inform the network about them quickly. In STP, a
topology change occurs when a port has entered the blocking or forwarding states.
When a bridge detects a connectivity change (e.g., a link goes down), the bridge notifies its parent bridges on the tree to the root by sending topology change notification
(TCN) BPDUs out of its root port. The next bridge on the tree to the root acknowledges the TCN BPDUs to the notifying bridge and also forwards them on toward
the root. Once informed of the topology change, the root bridge sets the TC bit field
in subsequent periodic configuration messages. Such messages are relayed by every
bridge in the network and are received by ports in either the blocking or forwarding
states. The setting of this bit field allows bridges to reduce their aging time to that of
the forward delay timer, on the order of seconds instead of the 5 minutes normally
recommended for the aging time. This allows database entries that may now be
incorrect to be purged and relearned more quickly, yet it also allows stations that
are actively communicating to not have their entries deleted erroneously.

Example
In Linux, the bridge function disables STP by default, on the assumption that
topologies are relatively simple in most cases where a regular computer is being
used as a bridge. To enable STP on the example bridge we are using so far, we can
do the following:
Linux# brctl stp br0 on
The consequences of executing this command can be inspected as follows:
Linux# brctl showstp br0
bridge id
designated root
root port                     path cost
max age                       bridge max age
hello time                    bridge hello time
forward delay                 bridge forward delay
ageing time
hello timer                   tcn timer
topology change timer         gc timer

eth0 (1)
 port id
 designated root              path cost
 designated bridge            message age timer
 designated port              forward delay timer 0.00
 designated cost              hold timer

eth1 (2)
 port id
 designated root              path cost
 designated bridge            message age timer
 designated port              forward delay timer 0.00
 designated cost              hold timer
Here we can see the STP setup for a simple bridged network. The bridge
device, br0, holds information for the bridge as a whole. This includes the bridge
ID (8000.0007e914a9c1), derived from the smallest MAC address on the PC-based bridge (port 1) of Figure 3-11. The major configuration parameters (e.g., hello
time, topology change timer, etc.) are given in seconds. The flags values indicate
a recent topology change, which is expected given the fact that the network was
recently connected. The rest of the output describes per-port information for eth0
(bridge port 1) and eth1 (bridge port 2). Note that the path cost for eth0 is about
ten times greater than the cost of eth1. This is consistent with the observation that
eth0 is a 10Mb/s Ethernet network and eth1 is a full-duplex 100Mb/s network.
We can use Wireshark to look at a BPDU. In Figure 3-16 we see the contents
of a 52-byte BPDU. The length of 52 bytes (less than the Ethernet minimum of 64
bytes because the Linux capture facility removed the padding) is derived from
the Length/Type field of the Ethernet header by adding 14, in this case giving the
length of 52. The destination address is the group address, 01:80:C2:00:00:00, as
expected. The payload length is 38 bytes, the value contained in the Length field.
The SNAP/LLC field contains the constant 0x424203, and the encapsulated frame
is a spanning tree (version 0) frame. The rest of the protocol fields indicate that the
station 00:07:e9:14:a9:c1 believes it is the root of the spanning tree, using priority
32768 (a low priority), and the BPDU has been sent from port 2 with priority 0x80.
It also indicates a maximum age of 20s, a hello time of 2s, and a forwarding delay
of 15s.
Figure 3-16
Wireshark showing a BPDU. The Ethernet destination is a group address for bridges
Rapid Spanning Tree Protocol (RSTP) (Formerly 802.1w)
One of the perceived problems with conventional STP is that a change in topology
is detected only by the failure to receive a BPDU in a certain amount of time. If
the timeout is large, the convergence time (time to reestablish data flow along the
spanning tree) could be larger than desired. The IEEE 802.1w standard (now part
of [802.1D-2004]) specifies enhancements to the conventional STP and adopts the
new name Rapid Spanning Tree Protocol (RSTP). The main improvement in RSTP
over STP is to monitor the status of each port and upon indication of failure to
immediately trigger a topology change indication. In addition, RSTP uses all 6 bits
in the Flag field of the BPDU format to support agreements between bridges that
avoid some of the need for timers to initiate protocol operations. It reduces the
normal five STP port states to three (discarding, learning, and forwarding, as
indicated by the state names in parentheses in Figure 3-14). The discarding state
in RSTP absorbs the disabled, blocking, and listening states in conventional STP.
RSTP also creates a new port role called an alternate port, which acts as an immediate backup should a root port cease to operate.
RSTP uses only one type of BPDU, so there are no special topology change
BPDUs, for example. RSTP BPDUs, as they are called, use version and type number 2 instead of 0. In RSTP, any switch detecting a topology change sends BPDUs
indicating a topology change, and any switch receiving them clears its filtering
databases immediately. This change can significantly affect the protocol’s convergence time. Instead of waiting for the topology change to migrate to the root
bridge and back followed by the forwarding delay wait time, entries are cleared
immediately. Overall, convergence time can be cut from tens of seconds down to a
fraction of a second in most cases.
RSTP makes a distinction between edge ports (those attached only to end stations) and normal spanning tree ports and also between point-to-point links and
shared links. Edge ports and ports on point-to-point links do not ordinarily form
loops, so they are permitted to skip the listening and learning states and move
directly to the forwarding state. Of course, the assumption of being an edge port
could be violated if, for example, two ports were cross-connected, but this is handled by reclassifying ports as spanning tree ports if they ever carry any form of
BPDUs (simple end stations do not normally generate BPDUs). Point-to-point links
are inferred from the operating mode of the interface; if the interface is running in
full-duplex mode, the link is classified as a point-to-point link.
In regular STP, BPDUs are ordinarily relayed from a notifying or root bridge.
In RSTP, BPDUs are sent periodically by all bridges as “keepalives” to determine
if connections to neighbors are operating properly. This is what most higher-layer
routing protocols do also. If a bridge fails to receive an updated BPDU within
three times the hello interval, the bridge concludes that it has lost its connection
with its neighbor. Note that in RSTP, topology changes are not induced as a result
of edge ports being connected or disconnected as they are in regular STP. When
a topology change is detected, the notifying bridge sends BPDUs with the TC bit
field set, not only to the root but also to all other bridges. Doing so allows the
entire network to be notified of the topology change much faster than with conventional STP. When a bridge receives these messages, it flushes all table entries
except those associated with edge ports and restarts the learning process.
Many of RSTP’s features were developed by Cisco Systems and other companies that had for some time provided proprietary enhancements to regular STP in
their products. The IEEE committee incorporated many of these enhancements into
the updated 802.1d standard, which covers both types of STP, so extended LANs
can run regular STP on some segments and RSTP on others (although the RSTP
benefits are lost). RSTP has been extended to include VLANs [802.1Q-2005]—a
protocol called the Multiple Spanning Tree Protocol (MSTP). This protocol retains
the RSTP (and hence STP) BPDU format, so backward compatibility is possible,
but it also supports the formation of multiple spanning trees (one for each VLAN).
802.1ak: Multiple Registration Protocol (MRP)
The Multiple Registration Protocol (MRP) provides a general method for registering
attributes among stations in a bridged LAN environment. [802.1ak-2007] defines
two particular “applications” of MRP called MVRP (for registering VLANs) and
MMRP (for registering group MAC addresses). MRP replaces the earlier GARP
framework; MVRP and MMRP replace the older GVRP and GMRP protocols,
respectively. All were originally defined by 802.1q.
With MVRP, once an end station is configured as a member of a VLAN, this
information is communicated to its attached switch, which in turn propagates
the fact of the station’s participation in the VLAN to other switches. This allows
switches to augment their filtering tables based on station VLAN IDs and allows
changes of VLAN topology without necessarily triggering a recalculation of the
existing spanning tree via STP. Avoiding STP recalculation was one of the reasons
for migrating from GVRP to MVRP.
MMRP is a method for stations to register their interest in group MAC
addresses (multicast addresses). This information may be used by switches to
establish the ports through which multicast traffic must be delivered. Without
such a facility, switches would have to broadcast all multicast traffic, potentially
leading to unwanted overhead. MMRP is a layer 2 protocol with similarities to
IGMP and MLD, layer 3 protocols, and the “IGMP/MLD snooping” capability supported in many switches. We discuss IGMP, MLD and snooping in Chapter 9.
Wireless LANs—IEEE 802.11 (Wi-Fi)
One of the most popular technologies being used to access the Internet today is
wireless fidelity (Wi-Fi), also known by its IEEE standard name 802.11, effectively
a wireless version of Ethernet. Wi-Fi has developed to become an inexpensive,
highly convenient way to provide connectivity and performance levels acceptable
for most applications. Wi-Fi networks are easy to set up, and most portable computers and smartphones now include the necessary hardware to access Wi-Fi
infrastructure. Many coffee shops, airports, hotels, and other facilities include
Wi-Fi “hot spots,” and Wi-Fi is even seeing considerable advancement in developing countries where other infrastructure may be difficult to obtain. The architecture of an IEEE 802.11 network is shown in Figure 3-17.
Figure 3-17
The IEEE 802.11 terminology for a wireless LAN. Access points (APs) can be connected
using a distribution service (DS, a wireless or wired backbone) to form an extended
WLAN (called an ESS). Stations include both APs and mobile devices communicating
together that form a basic service set (BSS). Typically, an ESS has an assigned ESSID that
functions as a name for the network.
The network in Figure 3-17 includes a number of stations (STAs). Typically
stations are organized with a subset operating also as access points (APs). An AP
and its associated stations are called a basic service set (BSS). The APs are generally
connected to each other using a wired distribution service (called a DS, basically a
“backbone”), forming an extended service set (ESS). This setup is commonly termed
infrastructure mode. The 802.11 standard also provides for an ad hoc mode. In this
configuration there is no AP or DS; instead, direct station-to-station (peer-to-peer)
communication takes place. In IEEE terminology, the STAs participating in an
ad hoc network form an independent basic service set (IBSS). A WLAN formed from
a collection of BSSs and/or IBSSs is called a service set, identified by a service set
identifier (SSID). An extended service set identifier (ESSID) is an SSID that names a
collection of connected BSSs and is essentially a name for the LAN that can be up
to 32 characters long. Such names are ordinarily assigned to Wi-Fi APs when a
WLAN is first installed.
802.11 Frames
There is one common overall frame format for 802.11 networks but multiple types
of frames. Not all the fields are present in every type of frame. Figure 3-18 shows
the format of the common frame and a (maximal-size) data frame.
Figure 3-18
The 802.11 basic data frame format (as of [802.11n-2009]). The MPDU format resembles that of
Ethernet but has additional fields depending on the type of DS being used among access points,
whether the frame is headed to the DS or from it, and if frames are being aggregated. The QoS
Control field is used for special performance features, and the HT Control field is used for control
of 802.11n’s “high-throughput” features.
The frame shown in Figure 3-18 includes a preamble for synchronization,
which depends on the particular variant of 802.11 being used. Next, the Physical
Layer Convergence Procedure (PLCP) header provides information about the specific physical layer in a somewhat PHY-independent way. The PLCP portion of the
frame is generally transmitted at a lower data rate than the rest of the frame. This
serves two purposes: to improve the probability of correct delivery (lower speeds
tend to have better error resistance) and to provide compatibility with and protection from interference from legacy equipment that may operate in the same area at
slower rates. The MAC PDU (MPDU) corresponds to a frame similar to Ethernet,
but with some additional fields.
At the head of the MPDU is the Frame Control Word, which includes a 2-bit
Type field identifying the frame type. There are three types of frames: management
frames, control frames, and data frames. Each of these can have various subtypes,
depending on the type. The full table of types and subtypes is given in [802.11n-2009, Table 7-1]. The contents of the remaining fields, if present, are determined by the frame type, which we discuss individually.

Management Frames
Management frames are used for creating, maintaining, and ending associations
between stations and access points. They are also used to determine whether
encryption is being used, what the name (SSID or ESSID) of the network is, what
transmission rates are supported, and a common time base. These frames are used
to provide the information necessary when a Wi-Fi interface “scans” for nearby
access points.
Scanning is the procedure by which a station discovers available networks
and related configuration information. This involves switching to each available
frequency and passively listening for traffic to identify available access points. Stations may also actively probe for networks by transmitting a particular management frame (“probe request”) while scanning. There are some limitations on such
probe requests to ensure that 802.11 traffic is not transmitted on a frequency that
is being used for non-802.11 purposes (e.g., medical services). Here is an example
of initiating a scan by hand on a Linux system:
Linux# iwlist wlan0 scan
wlan0 Scan completed :
Cell 01 - Address: 00:02:6F:20:B5:84
ESSID:"Grizzly-5354-Aries-802.11b/g"
Mode:Master
Frequency:2.427 GHz (Channel 4)
Quality=5/100 Signal level=47/100
Encryption key:on
IE: WPA Version 1
Group Cipher : TKIP
Pairwise Ciphers (2) : CCMP TKIP
Authentication Suites (1) : PSK
Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s;
6 Mb/s; 12 Mb/s; 24 Mb/s; 36 Mb/s; 9 Mb/s;
18 Mb/s; 48 Mb/s; 54 Mb/s
Here we see the result of a hand-initiated scan using wireless interface wlan0.
An AP with MAC address 00:02:6F:20:B5:84 is acting as a master (i.e., is acting as an AP in infrastructure mode). It is broadcasting the ESSID "Grizzly-5354-Aries-802.11b/g" on channel 4 (2.427GHz). (See Section 3.5.4 on channels
and frequencies for more details on channel selection.) The quality and signal
level give indications of how well the scanning station is receiving a signal from
the AP, although the meaning of these values varies among manufacturers. WPA
encryption is being used on this link (see Section 3.5.5), and bit rates from 1Mb/s
to 54Mb/s are available. The tsf (time sync function) value indicates the AP’s
notion of time, which is used for synchronizing various features such as power-saving mode (see Section 3.5.2).
When an AP broadcasts its SSID, any station may attempt to establish an
association with the AP. When an association is established, most Wi-Fi networks
today also set up the necessary configuration information to provide Internet
access to the station (see Chapter 6). However, an AP’s operator may wish to control which stations make use of the network. Some operators intentionally make
this more difficult by having the AP not broadcast its SSID, as a security measure.
This approach provides little security, as the SSID may be guessed. More robust
security is provided by link encryption and passwords, which we discuss in Section 3.5.5.

Control Frames: RTS/CTS and ACKs
Control frames are used to handle a form of flow control as well as acknowledgments for frames. Flow control helps ensure that a receiver can slow down a
sender that is too fast. Acknowledgments help a sender know what frames have
been received correctly. These concepts also apply to TCP at the transport layer
(see Chapter 15). 802.11 networks support optional request-to-send (RTS)/clear-to-send (CTS) moderation of transmission for flow control. When these are enabled,
prior to sending a data frame a station transmits an RTS frame, and when the
recipient is willing to receive additional traffic, it responds with a CTS. After the
RTS/CTS exchange, the station has a window of time (identified in the CTS frame)
to transmit data frames that are acknowledged when successfully received. Such
transmission coordination schemes are common in wireless networks and mimic
the flow control signaling that has been used on wired serial lines for years (sometimes called hardware flow control).
The RTS/CTS exchange helps to avoid the hidden terminal problem by instructing each station when it is permitted to transmit, so as to avoid simultaneous
transmissions from stations that cannot hear each other. Because RTS and CTS
frames are short, they do not use the channel for long. An AP generally initiates
an RTS/CTS exchange for a packet if the size of the packet is large enough. Typically, an AP has a configuration option called the packet size threshold (or similar).
Frames larger than the threshold cause an RTS to be sent prior to transmission of
the data. Most vendors use a default setting for this value of approximately 500
bytes if RTS/CTS exchanges are desired. In Linux, the RTS/CTS threshold can be
set in the following way:
Linux# iwconfig wlan0 rts 250
wlan0 IEEE 802.11g ESSID:"Grizzly-5354-Aries-802.11b/g"
Frequency:2.427 GHz
Access Point: 00:02:6F:20:B5:84
Bit Rate=24 Mb/s Tx-Power=0 dBm
Retry min limit:7 RTS thr=250 B Fragment thr=2346 B
Encryption key:xxxx- ... -xxxx [3]
Link Quality=100/100 Signal level=46/100
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:0
The iwconfig command can be used to set many variables, including the RTS
and fragmentation thresholds (discussed later in this section). It can also be used to determine
statistics such as the number of frame errors due to wrong network ID (ESSID) or
wrong encryption key. It also gives the number of excessive retries (i.e., the number of retransmission attempts), a rough indicator of the reliability of the link that
is popular for guiding routing decisions in wireless networks [ETX]. In WLANs
with limited coverage, where hidden terminal problems are unlikely to occur, it
may be preferable to disable RTS/CTS by adjusting the stations’ RTS thresholds to
be a high value (1500 or larger). This avoids the overhead imposed by requiring
RTS/CTS exchanges for each packet.
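The threshold rule itself is trivial, which the following hypothetical helper (not from any driver) makes concrete; the "disable" idiom of raising the threshold above any possible frame size falls out naturally:

```python
# Sketch of the RTS/CTS threshold rule: only frames longer than the
# configured threshold are preceded by an RTS/CTS exchange.

EFFECTIVELY_OFF = 2347  # a threshold above the maximum frame size
                        # disables RTS/CTS entirely (common convention)

def needs_rts(frame_len, rts_threshold):
    """Return True if this frame should be preceded by an RTS."""
    return frame_len > rts_threshold

assert needs_rts(1500, 250)        # large frame, low threshold: send RTS
assert not needs_rts(100, 250)     # short frames skip the exchange
assert not needs_rts(1500, EFFECTIVELY_OFF)  # high threshold = disabled
```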
In wired Ethernet networks, the absence of a collision indicates that a frame
has been received correctly with high probability. In wireless networks, there is
a wider range of reasons a frame may not be delivered correctly, such as insufficient signal or interference. To help address this potential problem, 802.11 extends
the 802.3 retransmission scheme with a retransmission/acknowledgment (ACK)
scheme. An acknowledgment is expected to be received within a certain amount
of time for each unicast frame sent (802.11a/b/g) or each group of frames sent
(802.11n or 802.11e with “block ACKs”). Multicast and broadcast frames do not
have associated ACKs to avoid “ACK implosion” (see Chapter 9). Failure to receive
an ACK within the specified time results in retransmission of the frame(s).
With retransmissions, it is possible to have duplicate frames formed within
the network. The Retry bit field in the Frame Control Word is set when any frame
represents a retransmission of a previously transmitted frame. A receiving station
can use this to help eliminate duplicate frames. Stations are expected to keep a
small cache of entries indicating addresses and sequence/fragment numbers seen
recently. When a received frame matches an entry, the frame is discarded.
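A minimal sketch of such a duplicate cache, assuming (as an illustration, not as the standard's exact algorithm) a bounded most-recently-seen list keyed by transmitter address and sequence/fragment numbers:

```python
# Illustrative duplicate elimination: a receiver caches recently seen
# (address, sequence, fragment) tuples and discards a frame whose Retry
# bit is set when the tuple has already been seen.

class DupCache:
    def __init__(self, size=16):
        self.size = size
        self.seen = []  # most-recent-last list of (addr, seq, frag)

    def is_duplicate(self, addr, seq, frag, retry):
        key = (addr, seq, frag)
        dup = retry and key in self.seen
        if key in self.seen:
            self.seen.remove(key)
        self.seen.append(key)        # refresh recency
        del self.seen[:-self.size]   # bound the cache size
        return dup

cache = DupCache()
assert not cache.is_duplicate("00:07:e9:14:a9:c1", 42, 0, retry=False)
# The same frame retransmitted with the Retry bit set is discarded:
assert cache.is_duplicate("00:07:e9:14:a9:c1", 42, 0, retry=True)
```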
The amount of time necessary to send a frame and receive an ACK for it
relates to the distance of the link and the slot time (a basic unit of time related to
the 802.11 MAC protocol; see Section 3.5.3). The time to wait for an ACK (as well as
the slot time) can be configured in most systems, although the method for doing
so varies. In most cases such as home or office use, the default values are adequate.
When using Wi-Fi over long distances, these values may require adjusting (see, for
example, [MWLD]).

Data Frames, Fragmentation, and Aggregation
Most frames seen on a busy network are data frames, which do what one would
expect—carry data. Typically, there is a one-to-one relationship between 802.11
frames and the link-layer (LLC) frames made available to higher-layer protocols such as IP. However, 802.11 supports frame fragmentation, which can divide
frames into multiple fragments. With the 802.11n specification, it also supports
frame aggregation, which can be used to send multiple frames together with less overhead.
When fragmentation is used, each fragment has its own MAC header and trailing CRC and is handled independently of other fragments. For example, fragments
to different destinations can be interleaved. Fragmentation can help improve performance when the channel has significant interference. Unless block ACKs are
used, each fragment is sent individually, producing one ACK per fragment by the
receiver. Because fragments are smaller than full-size frames, if a retransmission
needs to be invoked, a smaller amount of data will need to be repaired.
Fragmentation is applied only to frames with a unicast (non-broadcast or
multicast) destination address. To enable this capability, the Sequence Control field
contains a fragment number (4 bits) and a sequence number (12 bits). If a frame is fragmented, all fragments contain a common sequence number value, and each adjacent fragment has a fragment number differing by 1. A total of 16 fragments for the
same frame is possible, given the 4-bit-wide field. The More Frag field in the Frame
Control Word indicates that further fragments are yet to come. Terminal fragments
have this bit set to 0. A destination defragments the original frame from fragments
it receives by assembling the fragments in order based on fragment number order
within the frame sequence number. Provided that all fragments constituting a
sequence number have been received and the last fragment has a More Frag field of
0, the frame is reconstructed and passed to higher-layer protocols for processing.
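The Sequence Control encoding and the reassembly rule can be sketched as follows. This is an illustrative sketch: the placement of the fragment number in the low-order 4 bits is an assumption made here for concreteness.

```python
# Sketch of the 16-bit Sequence Control field: a 4-bit fragment number
# (assumed here to occupy the low-order bits) below a 12-bit sequence
# number, plus the reassembly rule described in the text.

def pack_seq_ctrl(seq, frag):
    return ((seq & 0xFFF) << 4) | (frag & 0xF)

def unpack_seq_ctrl(word):
    return (word >> 4) & 0xFFF, word & 0xF

def defragment(fragments):
    """fragments: list of (seq_ctrl, more_frag, payload) for one frame."""
    ordered = sorted(fragments, key=lambda f: unpack_seq_ctrl(f[0])[1])
    seqs = {unpack_seq_ctrl(f[0])[0] for f in ordered}
    assert len(seqs) == 1       # all pieces share one sequence number
    assert ordered[-1][1] == 0  # terminal fragment has More Frag == 0
    return b"".join(f[2] for f in ordered)

# Two fragments of sequence number 9, received out of order:
frags = [(pack_seq_ctrl(9, 1), 0, b"world"),
         (pack_seq_ctrl(9, 0), 1, b"hello ")]
assert defragment(frags) == b"hello world"
```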
Fragmentation is not often used because it does require some tuning. If used
without tuning, it can worsen performance slightly. When smaller frames are
used, the chance of having a bit error (see the next paragraph) can be reduced.
The fragment size can usually be set from 256 bytes to 2KB as a threshold (only
those frames that exceed the threshold in size are fragmented). Many APs default
to not using fragmentation by setting the threshold high (such as 2437 bytes on a
Linksys-brand AP).
The reason fragmentation can be useful is a fairly simple exercise in probability. If the bit error rate (BER) is P, the probability of a bit being successfully
delivered is (1 - P) and the probability that N bits are successfully delivered is
(1 - P)^N. As N grows, this value shrinks. Thus, if we can shorten a frame, we can
in principle improve its error-free delivery probability. Of course, if we limit each
fragment to K bits, we have to send at least ⌈N/K⌉ fragments. As
a concrete example, assume that we wish to send a 1500-byte (12,000-bit) frame.
If we assume P = 10^-4 (a relatively high BER), the probability of successful delivery without fragmentation would be (1 - 10^-4)^12,000 = .301. So we have only about a
30% chance of such a frame being delivered without errors the first time, and on
average we would have to send the frame three or four times for it to be received correctly.
If we use fragmentation for the same example and set the fragmentation threshold to 500, we produce three fragments of about 4000 bits each. The probability of
one such fragment being delivered without error is about (1 - 10^-4)^4000 = .670. Thus,
each fragment has about a 67% chance of being delivered successfully. Of course,
we have to have three of them delivered successfully to reconstruct the whole
frame. The probabilities of 3, 2, 1, and 0 fragments being delivered successfully
are (.67)^3 = .30, 3(.67)^2(.33) = .44, 3(.67)(.33)^2 = .22, and (.33)^3 = .04, respectively.
So, although the chances that all three are delivered successfully without retries
are about the same as for the nonfragmented frame being delivered successfully,
the chances that two or three fragments are delivered successfully are fairly good.
If this should happen, at most a single fragment would have to be retransmitted, which would take significantly less time (about a third) than sending the
original 1500-byte unfragmented frame. Of course, each fragment consumes some
overhead, so if the BER is effectively 0, fragmentation only decreases performance
by creating more frames to handle.
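The probabilities quoted above are easy to reproduce; the following short calculation (using the text's assumed BER of 10^-4) confirms each figure:

```python
# Reproduce the text's fragmentation arithmetic: a 12,000-bit frame
# versus three 4000-bit fragments at a bit error rate of P = 1e-4.
p = 1e-4
whole = (1 - p) ** 12000   # unfragmented delivery probability
frag = (1 - p) ** 4000     # one 4000-bit fragment delivered cleanly
q = 1 - frag               # probability a fragment is damaged

all3 = frag ** 3           # all three fragments arrive intact
two  = 3 * frag**2 * q     # exactly two succeed
one  = 3 * frag * q**2     # exactly one succeeds
zero = q ** 3              # none succeed

assert round(whole, 3) == 0.301
assert round(frag, 3) == 0.670
assert round(all3, 2) == 0.30
assert round(two, 2) == 0.44
assert round(one, 2) == 0.22
assert round(zero, 2) == 0.04
```

Note that all3 + two + one + zero sums to 1, as a sanity check on the binomial expansion.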
One of the enhancements provided by 802.11n is the support of frame
aggregation, in two forms. One form, called the aggregated MAC service data unit
(A-MSDU), allows for multiple complete 802.3 (Ethernet) frames to be aggregated
within an 802.11 frame. The other form, called the aggregated MAC protocol data unit
(A-MPDU), allows multiple MPDUs with the same source, destination, and QoS
settings to be aggregated by being sent in short succession. The two aggregation
types are depicted in Figure 3-19.
Figure 3-19
Frame aggregation in 802.11n includes A-MSDU and A-MPDU. A-MSDU aggregates frames using
a single FCS. A-MPDU aggregation uses a 4-byte delimiter between each aggregated 802.11 frame.
Each A-MPDU subframe has its own FCS and can be individually acknowledged using block
ACKs and retransmitted if necessary.
For a single aggregate, the A-MSDU approach is technically more efficient.
Each 802.3 header is ordinarily 14 bytes, which is relatively small compared to
an 802.11 MAC header that could be as long as 36 bytes. Thus, with only a single
802.11 MAC header for multiple 802.3 frames, a savings of up to 22 bytes per extra
aggregated frame could be achieved. An A-MSDU may be up to 7935 bytes, which
can hold over 100 small (e.g., 50-byte) packets, but only a few (5) larger (1500-byte) data packets. The A-MSDU is covered by a single FCS. This larger size of an
A-MSDU frame increases the chances it will be delivered with errors, and because
there is only a single FCS for the entire aggregate, the entire frame would have to
be retransmitted on error.
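The capacity figures above can be checked with a quick back-of-the-envelope calculation. The 14-byte per-packet subframe header used here is an assumption for illustration (A-MSDU subframes also carry padding, which this sketch ignores):

```python
# Rough check of the A-MSDU numbers: how many packets of a given size
# fit in a maximal 7935-byte A-MSDU, assuming a 14-byte subframe header
# per aggregated packet (padding ignored).
MAX_AMSDU = 7935
SUBHDR = 14  # assumed per-subframe header size

def capacity(pkt_len):
    return MAX_AMSDU // (pkt_len + SUBHDR)

assert capacity(50) > 100   # over 100 small (50-byte) packets fit
assert capacity(1500) == 5  # but only 5 full-size data packets
assert 36 - 14 == 22        # per-frame savings: 802.11 vs. 802.3 header
```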
A-MPDU aggregation is a different form of aggregation whereby multiple (up
to 64) 802.11 frames, each with its own 802.11 MAC header and FCS and up to 4095
bytes each, are sent together. A-MPDUs may carry up to 64KB of data—enough
for more than 1000 small packets and about 40 larger (1.5KB) packets. Because
each constituent frame (subframe) carries its own FCS, it is possible to selectively
retransmit only those subframes received with errors. This is made possible by the
block acknowledgment facility in 802.11n (originating in 802.11e), which is a form of
extended ACK that provides feedback to a transmitter indicating which particular
A-MPDU subframes were delivered successfully. This capability is similar in purpose, but not in its details, to the selective acknowledgments we will see in TCP
(see Chapter 14). So, although the type of aggregation offered by A-MSDUs may
be more efficient for error-free networks carrying large numbers of small packets,
in practice it may not perform as well as A-MPDU aggregation [S08].
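The selective-retransmission idea behind block ACKs can be sketched with a simple bitmap, where bit i indicates that subframe i of the A-MPDU arrived intact (the bitmap representation here is an illustration, not the exact 802.11n frame format):

```python
# Sketch of block-ACK-driven selective retransmission: given the number
# of subframes sent in an A-MPDU and an acknowledgment bitmap (bit i set
# means subframe i was received intact), list the subframes to resend.

def subframes_to_retransmit(sent, ack_bitmap):
    return [i for i in range(sent) if not (ack_bitmap >> i) & 1]

# 4 subframes sent; receiver acknowledges 0, 1, and 3 (bitmap 0b1011),
# so only subframe 2 is queued again:
assert subframes_to_retransmit(4, 0b1011) == [2]

# All acknowledged: nothing to resend.
assert subframes_to_retransmit(4, 0b1111) == []
```

Contrast this with an A-MSDU, whose single FCS forces retransmission of the entire aggregate on any error.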
Power Save Mode and the Time Sync Function (TSF)
The 802.11 specification provides a way for stations to enter a limited power state,
called power save mode (PSM). PSM is designed to save power by allowing an STA’s
radio receive circuitry to be powered down some of the time. Without PSM, the
receiver circuitry would always be running, draining power. When in PSM, an
STA’s outgoing frames have a bit set in the Frame Control Word. A cooperative
AP noticing this bit being set buffers any frames for the station until the station
requests them. APs ordinarily send out beacon frames (a type of management
frame) indicating various things like SSID, channel, and authentication information. When supporting stations that use PSM, APs can also indicate the presence
of buffered frames to a station by setting an indication in the Frame Control Word
of frames it sends. When stations enter PSM, they do so until the next AP beacon
time, when they wake up and determine if there are pending frames stored at the
AP for them.
PSM should be used with care and understanding. Although it may extend battery life, the NIC is not the only module drawing power in most wireless devices.
Other parts of the system such as the screen and hard drive can be significant consumers of power, so overall battery life may not be extended much. Furthermore,
using PSM can affect throughput performance significantly as idle periods are
added between frame transmissions and time is spent switching modes [SHK07].
The ability to awaken an STA to check for pending frames at exactly the correct time (i.e., when an AP is about to send a beacon frame) depends on a common
sense of time at the AP and the PSM stations it serves. Wi-Fi synchronizes time
using the time synchronization function (TSF). Each station maintains a 64-bit counter reference time (in microseconds) that is synchronized with other stations in the
network. Synchronization is maintained to within 4µs plus the maximum propagation delay of the PHY (for PHYs of rate 1Mb/s or more). This is accomplished
by having any station that receives a TSF update (basically, a copy of the 64-bit
counter sent from another station) check to see if the provided value is larger than
its own. If so, the receiving station updates its own notion of time to be the larger
value. This approach ensures that clocks always move forward, but it also raises
some concern that, given stations with slightly differing clock rates, the slower
ones will tend to be synced to the fastest one.
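The "clocks only move forward" update rule just described can be sketched in a few lines. This is a simplification of the 802.11 TSF procedure for illustration; the class and method names are our own.

```python
# Sketch of the TSF update rule: a station adopts a received 64-bit
# timestamp only if it is ahead of its own counter, so local time never
# moves backward.

class TSFClock:
    MASK = 0xFFFFFFFFFFFFFFFF  # 64-bit counter, in microseconds

    def __init__(self, microseconds=0):
        self.time = microseconds & self.MASK

    def tick(self, us):
        """Advance the local free-running counter."""
        self.time = (self.time + us) & self.MASK

    def on_tsf_update(self, received_time):
        """Adopt the larger of the local and received values."""
        if received_time > self.time:
            self.time = received_time

sta = TSFClock(1000)
sta.on_tsf_update(900)    # older timestamp: ignored
assert sta.time == 1000
sta.on_tsf_update(2000)   # newer timestamp: adopted
assert sta.time == 2000
```

Note how a station with a slightly fast clock never accepts a smaller value, which is exactly why the network tends to sync to the fastest clock.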
With the incorporation of 802.11e (QoS) features into 802.11, the basic PSM of
802.11 has been extended to include the ability to schedule periodic batch processing of buffered frames. The frequency is expressed in terms of the number of beacon frames. The capability, called automatic power save delivery (APSD), uses some
of the subfields of the QoS control word. APSD may be especially useful for small
power-constrained devices, as they need not necessarily awaken at each beacon
interval as they do in conventional 802.11 PSM. Instead, they may elect to power
down their radio transceiver circuitry for longer periods of their own choosing.
802.11n also extends the basic PSM by allowing an STA equipped with multiple
radio circuits operating together (see the discussion of MIMO later in this section) to power down all but
one of the circuits until a frame is ready. This is called spatial multiplexing power
save mode. The specification also includes an enhancement to APSD called Power
Save Multi-Poll (PSMP) that provides a way to schedule transmissions of frames in
both directions (e.g., to and from AP) at the same time.
802.11 Media Access Control
In wireless networks, it is much more challenging to detect a “collision” than in
wired networks such as 802.3 LANs. In essence, the medium is effectively simplex, and multiple simultaneous transmitters must be avoided by coordinating
transmissions in either a centralized or a distributed manner. The 802.11 standard has three approaches to control sharing of the wireless medium, called the
point coordination function (PCF), the distributed coordinating function (DCF), and
the hybrid coordination function (HCF). HCF was brought into the 802.11 specification [802.11-2007] with the addition of QoS support in 802.11e and is also used by
802.11n. Implementation of the DCF is mandatory for any type of station or AP, but
implementation of the PCF is optional and not widespread (so we shall not discuss
it in detail). HCF is found in relatively new QoS-capable Wi-Fi equipment, such as
802.11n APs and earlier APs that support 802.11e. We turn our attention to DCF for
now and describe HCF in the context of QoS next.
DCF is a form of CSMA/CA for contention-based access to the medium. It is
used for both infrastructure and ad hoc operation. With CSMA/CA, stations listen
to see if the medium is free and, if so, may have an opportunity to transmit. If not,
they avoid sending for a random amount of time before checking again to see if
the medium is free. This behavior is similar to how a station sensing a collision
backs off when using CSMA/CD on wired LANs. Channel arbitration in 802.11 is
based on CSMA/CA with enhancements to provide priority access to certain stations or frame types.
802.11 carrier sense is performed in both a physical and a virtual way. Generally, stations wait for a period of time when ready to send (called the distributed
inter-frame space or DIFS) to allow higher-priority stations to access the channel.
If the channel becomes busy during the DIFS period, a station starts the waiting
period again. When the medium appears idle, a would-be transmitter initiates
the collision avoidance/backoff procedure described later in this section. This procedure is also initiated after a successful (unsuccessful) transmission is indicated
by the receipt (lack of receipt) of an ACK. In the case of unsuccessful transmission,
the backoff procedure is initiated with different timing (using the extended interframe space or EIFS). We now discuss the implementation of DCF in more detail,
including the virtual and physical carrier sense mechanisms.

Virtual Carrier Sense, RTS/CTS, and the Network Allocation Vector (NAV)
In the 802.11 MAC protocol, a virtual carrier sense mechanism operates by observing the Duration field present in each MAC frame. This is accomplished by a station listening to traffic not destined for it. The Duration field is present in both
RTS and CTS frames optionally exchanged prior to transmission, as well as conventional data frames, and provides an estimate of how long the medium will be
busy carrying the frame.
The transmitter sets the Duration field based on the frame length, transmit
rate, and PHY characteristics. Each station keeps a local counter
called the Network Allocation Vector (NAV) that estimates how long the medium
will be busy carrying the current frame, and consequently how long it will need to
wait before attempting its next transmission. A station overhearing traffic with a
Duration field greater than its NAV updates its NAV to the new value. Because the
Duration field is present in both RTS and CTS frames, if used, any station in range
of either the sender or the receiver is able to ascertain the Duration field value. The
NAV is maintained in time units and decremented based on a local clock. The
medium is considered busy when the local NAV is nonzero. It is reset to 0 upon
receipt of an ACK.

Physical Carrier Sense (CCA)
Each 802.11 PHY specification (e.g., for different frequencies and radio technology)
is required to provide a function for assessing whether the channel is clear based
upon energy and waveform recognition (usually recognition of a well-formed
PLCP). This function is called clear channel assessment (CCA) and its implementation is PHY-dependent. The CCA capability represents the physical carrier sense
capability for the 802.11 MAC to understand whether the medium is currently
busy. It is used in conjunction with the NAV to determine when a station must
defer (wait) prior to transmission.

DCF Collision Avoidance/Backoff Procedure
Upon determining that the channel is likely to be free (i.e., because the NAV duration has been met and CCA does not indicate a busy channel), a station defers
access prior to transmission. Because many stations may have been waiting for
the channel to become free, each station computes and waits for a backoff time prior
to sending. The backoff time is equal to the product of a random number and the
slot time (unless the station attempting to transmit already has a nonzero backoff
time, in which case it is not recomputed). The slot time is PHY-dependent but is
generally a few tens of microseconds. The random number is drawn from a uniform distribution over the interval [0, CW], where the contention window (CW) is
an integer containing a number of time slots to wait, with limits aCWmin ≤ CW
≤ aCWmax defined by the PHY. The set of CW values increases by powers of 2
(minus 1) beginning with the PHY-specific constant aCWmin value and continuing up to and including the constant aCWmax value for each successive transmission attempt. This is similar in effect to Ethernet’s backoff procedure initiated
during a collision detection event.
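The backoff computation just described can be expressed directly. The constants below are assumptions roughly matching an OFDM PHY (real values are PHY-specific), and the sketch omits the rule that an existing nonzero backoff timer is not recomputed.

```python
import random

# Sketch of the DCF backoff computation. Assumed PHY constants:
SLOT_TIME_US = 9   # slot time (tens of microseconds range in general)
ACWMIN = 15        # PHY-specific aCWmin
ACWMAX = 1023      # PHY-specific aCWmax

def contention_window(retry_count):
    """CW grows as powers of 2 (minus 1): 15, 31, 63, ... capped at aCWmax."""
    cw = (ACWMIN + 1) * (2 ** retry_count) - 1
    return min(cw, ACWMAX)

def backoff_time_us(retry_count):
    """Backoff = (uniform random slot count in [0, CW]) * slot time."""
    cw = contention_window(retry_count)
    return random.randint(0, cw) * SLOT_TIME_US

assert contention_window(0) == 15    # first attempt
assert contention_window(1) == 31    # doubled (minus 1) after a failure
assert contention_window(10) == 1023 # capped at aCWmax
```

As with Ethernet's exponential backoff, repeated failures widen the contention window, spreading retries over a longer interval to reduce the chance of repeated collisions.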
In a wireless environment, collision detection is not practical because it is difficult for a transmitter and receiver to operate simultaneously in the same piece of
equipment and hear any transmissions other than its own, so collision avoidance
is used instead. In addition, ACKs are generated in response to unicast frames to
determine whether a frame has been delivered successfully. A station receiving
a correct frame begins transmitting an ACK after waiting a small period of time
(called the Short Interframe Space or SIFS), without regard to the busy/idle state of
the medium. This should not cause a problem because the SIFS value is always
smaller than DIFS, so in effect stations generating ACKs get priority access to the
channel to complete their transactions. The source station waits a certain amount
of time without receiving an ACK frame before concluding that a transmission
has failed. Upon failure, the backoff procedure discussed previously is initiated
and the frame is retried. The same procedure is initiated if a CTS is not received in
response to an earlier RTS within a certain (different) amount of time (a constant
called CTStimeout).

HCF and 802.11e/n QoS
Clauses 5, 6, 7, and 9 of the 802.11 standard [802.11-2007] are based in part on the
work of the 802.11e group within IEEE, and the terms 802.11e, Wi-Fi QoS, and
WMM (for Wi-Fi Multimedia) are often used. They cover the QoS facility—changes
to the 802.11 MAC-layer and system interfaces in support of multimedia applications such as voice over IP (VoIP) and streaming video. Whether the QoS facility
is really necessary or not often depends on the congestion level of the network
and the types of applications to be supported. If utilization of the network tends
to be low, the QoS MAC support may be unnecessary, although some of the other
802.11e capabilities may still be useful (e.g., block ACKs and APSD). In situations
where utilization and congestion are high and there is a need to support a low-jitter delivery capability for services such as VoIP, QoS support may be desirable.
These specifications are relatively new, so QoS-capable Wi-Fi equipment is likely
to be more expensive and complex than non-QoS equipment.
The QoS facility introduces new terminology such as QoS stations (QSTAs),
QoS access points (QAPs), and the QoS BSS (QBSS, a BSS supporting QoS). In general, any of the devices supporting QoS capabilities also support conventional
non-QoS operation. 802.11n “high-throughput” stations (called HT STAs) are
also QSTAs. A new form of coordination function, the hybrid coordination function
(HCF), supports both contention-based and controlled channel access, although
the controlled channel variant is seldom used. Within the HCF, there are two specified channel access methods that can operate together: HCF-controlled channel
access (HCCA) and the more popular enhanced DCF channel access (EDCA), corresponding to reservation-based and contention-based access, respectively. There
is also some support for admission control, which may deny connectivity entirely
under high load.
EDCA builds upon the basic DCF access. With EDCA, there are eight user
priorities (UPs) that are mapped to four access categories (ACs). The user priorities
use the same structure as 802.1d priority tags and are numbered 1 through 7, with
7 being the highest priority. (There is also a 0 priority between 2 and 3.) The four
ACs are nominally intended for background, best-effort, video, and audio traffic.
Priorities 1 and 2 are intended for the background AC, priorities 0 and 3 are for
the best-effort AC, 4 and 5 are for the video AC, and 6 and 7 are for the voice AC.
For each AC, a variant of DCF contends for channel access credits called transmit
opportunities (TXOPs), using alternative MAC parameters that tend to favor the
higher-priority traffic. In EDCA, many of the various MAC parameters from DCF
(e.g., DIFS, aCWmin, aCWmax) become adjustable as configuration parameters.
These values are communicated to QSTAs using management frames.
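The user-priority-to-access-category mapping described above is just a small lookup table. The sketch below encodes it; the AC labels are the conventional names from the text, and the function name is our own.

```python
# The 802.11e mapping from the eight 802.1d user priorities (UPs) to the
# four EDCA access categories (ACs), as described in the text:
# UPs 1,2 -> background; 0,3 -> best-effort; 4,5 -> video; 6,7 -> voice.

UP_TO_AC = {
    1: "background", 2: "background",
    0: "best-effort", 3: "best-effort",
    4: "video", 5: "video",
    6: "voice", 7: "voice",
}

def access_category(user_priority):
    """Return the EDCA access category for an 802.1d user priority (0-7)."""
    return UP_TO_AC[user_priority]

assert access_category(0) == "best-effort"  # note: 0 ranks between 2 and 3
assert access_category(7) == "voice"
```

The oddity that UP 0 maps to a higher category than UPs 1 and 2 reflects the 802.1d priority ordering noted above, where 0 sits between 2 and 3.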
HCCA builds loosely upon PCF and uses polling-controlled channel access.
It is designed for synchronous-style access control and takes precedence ahead of
the contention-based access of EDCA. A hybrid coordinator (HC) is located within
an AP and has priority to allocate channel accesses. Prior to transmission, a station
can issue a traffic specification (TSPEC) for its traffic and use UP values between 8
and 15. The HC can allocate reserved TXOPs to such requests to be used during
short-duration controlled access phases of frame exchange that take place before
EDCA-based frame transmission. The HC can also deny TXOPs to TSPECs based
on admission control policies set by the network administrator. The HCF exploits
the virtual carrier sense mechanism discussed earlier with DCF to keep contention-based stations from interfering with contention-free access. Note that a single
network comprising QSTAs and conventional stations can have both HCF and
DCF running simultaneously by alternating between the two, but ad hoc networks
do not support the HC and thus do not handle TSPECs and do not perform admission control. Such networks might still run HCF, but TXOPs are gained through
EDCA-based contention.
Physical-Layer Details: Rates, Channels, and Frequencies
The [802.11-2007] standard now includes the following earlier amendments:
802.11a, 802.11b, 802.11d, 802.11g, 802.11h, 802.11i, 802.11j, and 802.11e. The 802.11n
standard was adopted as an amendment to 802.11 in 2009 [802.11n-2009]. Most
of these amendments provide additional modulation, coding, and operating
frequencies for 802.11 networks, but 802.11n also adds multiple data streams and
a method for aggregating multiple frames (discussed earlier in this section). We will avoid
detailed discussion of the physical layer, but to appreciate the breadth of options,
Table 3-2 includes those parts of the 802.11 standard that describe this layer.

Table 3-2  Parts of the 802.11 standard that describe the physical layer

802.11a (Clause 17): 6, 9, 12, 18, 24, 36, 48, 54 Mb/s; 5.16–5.35 and 5.725–5.825GHz; OFDM; channels 34–165 (varies by country)
802.11b (Clause 18): 1, 2, 5.5, 11 Mb/s; 2.401–2.495GHz; DSSS; channels 1–14 (varies by country)
802.11g (Clause 19): 1, 2, 5.5, 6, 9, 11, 12, 18, 24, 36, 48, 54 (plus 22, 33) Mb/s; 2.401–2.495GHz; OFDM; channels 1–14 (varies by country)
802.11n: 6.5–600 Mb/s with many options (up to 4 MIMO streams); 2.4 and 5GHz modes with 20MHz- or 40MHz-wide channels; OFDM; channels 1–13 (2.4GHz band), 36–196 (5GHz band) (varies by country)
802.11y: (same speeds as 802.11a); 3.650–3.700GHz (licensed); multiple channel width options; channels 1–25, 36–64, 100–161 (varies by country)
The first column gives the original standard name and its present location in
[802.11-2007], plus details for the 802.11n and 802.11y amendments. It is important
to note from this table that 802.11b/g operate in the 2.4GHz Industrial, Scientific, and
Medical (ISM) band, 802.11a operates only in the higher 5GHz Unlicensed National
Information Infrastructure (U-NII) band, and 802.11n can operate in both. The
802.11y amendment provides for licensed use in the 3.65–3.70GHz band within
the United States. An important practical consequence of the data in this table is
that 802.11b/g equipment does not interoperate or interfere with 802.11a equipment, but 802.11n equipment may interfere with either if not deployed carefully.

Channels and Frequencies
Regulatory bodies (e.g., the Federal Communications Commission in the United
States) divide the electromagnetic spectrum into frequency ranges allocated for
various uses across the world. For each range and use, a license may or may not
be required, depending on local policy. In 802.11, there are sets of channels that
may be used in various ways at various power levels depending on the regulatory domain or country. Wi-Fi channels are numbered in 5MHz units starting at
some base center frequency. For example, channel 36 with a base center frequency
of 5.00GHz gives the frequency 5000 + 36 * 5 = 5180MHz, the center frequency of
channel 36. Although channel center frequencies are 5MHz apart from each other,
channels may be wider than 5MHz (up to 40MHz for 802.11n). Consequently, some
channels within channel sets of the same band usually overlap. Practically speaking, this means that transmissions on one channel might interfere with transmissions on nearby channels.
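The channel-to-frequency arithmetic and the overlap rule can be checked directly. The sketch below uses the 5MHz numbering from the text; the 2407MHz base for the 2.4GHz band and the 22MHz width are the parameters of the 802.11b/g channels, and the function names are our own.

```python
# Channel-number-to-center-frequency conversion, plus a simple overlap
# test for 22MHz-wide channels spaced 5MHz apart.

def center_freq_mhz(channel, base_mhz=5000):
    """Center frequency = base + 5MHz * channel number."""
    return base_mhz + 5 * channel

def channels_overlap(ch1, ch2, width_mhz=22, base_mhz=2407):
    """Two channels overlap if their bands (center +/- width/2) intersect."""
    f1 = center_freq_mhz(ch1, base_mhz)
    f2 = center_freq_mhz(ch2, base_mhz)
    return abs(f1 - f2) < width_mhz

assert center_freq_mhz(36) == 5180   # the worked example from the text
# In the 2.4GHz band, channel 1 overlaps channels 2 through 5 but not 6:
assert channels_overlap(1, 5) and not channels_overlap(1, 6)
```

Running the overlap test across the band reproduces the usual advice: channels 1, 6, and 11 are mutually non-overlapping, which is why they are the common choice for co-located APs in the United States.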
Figure 3-20 presents the channel-to-frequency mapping for the 802.11b/g
channels in the 2.4GHz ISM band. Each channel is 22MHz wide. Not all channels
are available for legal use in every country. For example, channel 14 is authorized
at present for use only in Japan, and channels 12 and 13 are authorized for use in
Europe, while the United States permits channels 1 through 11 to be used. Other
countries may be more restrictive (see Annex J of the 802.11 standard and amendments). Note that policies and licensing requirements may change over time.
Figure 3-20
The 802.11b and 802.11g standards use a frequency band between about 2.4GHz and 2.5GHz. This
band is divided into fourteen 22MHz-wide overlapping channels, of which some subset is generally available for legal use depending on the country of operation. It is advisable to assign nonoverlapping channels, such as 1, 6, and 11 in the United States, to multiple base stations operating
in the same area. Only a single 40MHz 802.11n channel may be used in this band without overlap.
As shown in Figure 3-20, the effect of overlapping channels is now clear. A
transmitter on channel 1, for example, overlaps with channels 2, 3, 4, and 5 but
not higher channels. This becomes important when selecting which channels
to assign for use in environments where multiple access points are to be used
and even more important when multiple access points serving multiple different
networks in the same area are to be used. One common approach in the United
States is to assign up to three APs in an area using nonoverlapping channels 1, 6,
and 11, as channel 11 is the highest-frequency channel authorized for unlicensed
use in the United States. In cases where other WLANs may be operating in the
same bands, it is worth considering jointly planning channel settings with all the
affected WLAN administrators.
As shown in Figure 3-21, 802.11a/n/y share a somewhat more complicated
channel set but offer a larger number of nonoverlapping channels to use (i.e., 12
unlicensed 20MHz channels in the United States).
Figure 3-21
Many of the approved 802.11 channel numbers and center frequencies for 20MHz channels. The most common range for unlicensed use involves the U-NII bands, all above
5GHz. The lower band is approved for use in most countries. The “Europe” band is
approved for use in most European countries, and the high band is approved for use in
the United States and China. Channels are typically 20MHz wide for 802.11a/y but may
be 40MHz wide for 802.11n. Narrower channels and some channels available in Japan
are also available (not shown).
In Figure 3-21, the channels are numbered in 5MHz increments, but different
channel widths are available: 5MHz, 10MHz, 20MHz, and 40MHz. The 40MHz
channel width is an option with 802.11n (discussed below), along with several proprietary Wi-Fi systems that aggregate two 20MHz channels (called channel bonding).
For typical Wi-Fi networks, an AP has its operating channel assigned during
installation, and client stations change channels in order to associate with the AP.
When operating in ad hoc mode, there is no controlling AP, so a station is typically
hand-configured with the operating channel. The sets of channels available and
operating power may be constrained by the regulatory environment, the hardware capabilities, and possibly the supporting driver software.

802.11 Higher Throughput/802.11n
In late 2009, the IEEE standardized 802.11n [802.11n-2009] as an amendment to
[802.11-2007]. It makes a number of important changes to 802.11. To support higher
throughput, it incorporates support for multiple input, multiple output (MIMO) management of multiple simultaneously operating data streams carried on multiple
antennas, called spatial streams. Up to four such spatial streams are supported on a
given channel. 802.11n channels may be 40MHz wide (using two adjacent 20MHz
channels), twice as wide as conventional channels in 802.11a/b/g/y. Thus, there
is an immediate possibility of having up to eight times the maximum data rate of
802.11a/g (54Mb/s), for a total of 432Mb/s. However, 802.11n also improves the
single-stream performance by using a more efficient modulation scheme (802.11n
uses MIMO-orthogonal frequency division multiplexing (OFDM) with up to 52
data subcarriers per 20MHz channel and 108 per 40MHz channel, instead of 48 in
802.11a and 802.11g), plus a more efficient forward error-correcting code (rate 5/6
instead of 3/4), bringing the per-stream performance to 65Mb/s (20MHz channel)
or 135Mb/s (40MHz channel). By also reducing the guard interval (GI, a forced
idle time between symbols) duration to 400ns from the legacy 800ns, the maximum per-stream performance is raised to about 72.2Mb/s (20MHz channel) and
150Mb/s (40MHz channel). With four spatial streams operating in concert perfectly, this provides a maximum of about 600Mb/s.
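The rates quoted above follow from one formula: bits per OFDM symbol (subcarriers times bits per subcarrier times the FEC code rate) divided by the symbol time (a 3.2µs symbol plus the guard interval). The sketch below reproduces the numbers; the function name is our own.

```python
# Worked derivation of the 802.11n per-stream rates quoted in the text:
# rate = subcarriers * bits_per_subcarrier * code_rate / symbol_time.

def stream_rate_mbps(subcarriers, bits_per_subcarrier, code_rate, gi_ns):
    symbol_ns = 3200 + gi_ns                              # 3.2us symbol + GI
    bits_per_symbol = subcarriers * bits_per_subcarrier * code_rate
    return bits_per_symbol * 1000.0 / symbol_ns           # Mb/s

# Top modulation: 64-QAM (6 bits/subcarrier) with rate-5/6 FEC.
r20_long = stream_rate_mbps(52, 6, 5 / 6, 800)    # 20MHz, 800ns GI -> 65
r20_short = stream_rate_mbps(52, 6, 5 / 6, 400)   # 20MHz, 400ns GI -> ~72.2
r40_short = stream_rate_mbps(108, 6, 5 / 6, 400)  # 40MHz, 400ns GI -> 150

assert abs(r20_long - 65.0) < 0.1
assert abs(r20_short - 72.2) < 0.1
assert abs(4 * r40_short - 600.0) < 0.5  # four spatial streams -> 600Mb/s
```

Note how each improvement compounds: more subcarriers (52 vs. 48, 108 in 40MHz), a higher code rate (5/6 vs. 3/4), a shorter guard interval, and finally multiple spatial streams.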
Some 77 combinations of modulation and coding options are supported
by 802.11n, including 8 options for a single stream, 24 using the same or equal
modulation (EQM) on all streams, and 43 using unequal modulation (UEQM) on
multiple streams. Table 3-3 gives some of the combinations for modulation and
coding scheme according to the first 33 values of the modulation and coding scheme
(MCS) value. Higher values (33–76) include combinations for two channels (values 33–38), three channels (39–52), and four channels (53–76). MCS value 32 is a
special combination where the signals in the two halves of the 40MHz channel contain the same information.

Table 3-3  MCS values for 802.11n include combinations of equal and unequal modulation, different FEC coding rates, up to four spatial streams using 20MHz- or 40MHz-wide channels, and an 800ns or 400ns GI. The 77 combinations provide data rates from 6Mb/s to 600Mb/s. [Table body not reproduced; its columns give the MCS value, modulation type, and data rates (Mb/s) for 20MHz and 40MHz channels.]

Each data rate column gives two values, one using
the legacy 800ns GI and one giving the greater data rate available using the shorter
400ns GI. The underlined values, 6Mb/s and 600Mb/s, represent the smallest and
largest throughput rates, respectively.
Table 3-3 shows the various combinations of coding, including binary phase shift
keying (BPSK), quadrature phase shift keying (QPSK), and various levels of quadrature
amplitude modulation (16- and 64-QAM), available with 802.11n. These modulation
schemes provide an increasing data rate for a given channel bandwidth. However,
the more high-performance and complex a modulation scheme, the more vulnerable it tends to be to noise and interference. Forward error correction (FEC) includes
a set of methods whereby redundant bits are introduced at the sender that can be
used to detect and repair bit errors introduced during delivery. For FEC, the code
rate is the ratio of the effective useful data rate to the rate imposed on the underlying communication channel. For example, a ½ code rate would deliver 1 useful
bit for every 2 bits sent.
802.11n may operate in one of three modes. In 802.11n-only environments,
the optional so-called greenfield mode, the PLCP contains special bit arrangements
(“training sequences”) known only to 802.11n equipment and does not interoperate
with legacy equipment. To maintain compatibility, 802.11n has two other interoperable modes. However, both of these impose a performance penalty to native 802.11n
equipment. One mode, called non-HT mode, essentially disables all 802.11n features
but remains compatible with legacy equipment. This is not a very interesting mode,
so we shall not discuss it further. However, a required mode called HT-mixed mode
supports both 802.11n and legacy operation, depending on which stations are communicating. The information required to convey an AP’s 802.11n capability to HT
STAs yet protect legacy STAs is provided in the PLCP, which is augmented to contain both HT and legacy information and is transmitted at a slower rate than in
greenfield mode so that it can be processed by legacy equipment. HT protection also
requires an HT AP to use self-directed CTS frames (or RTS/CTS frame exchanges)
at the legacy rate to inform legacy stations when it will use shared channels. Even
though RTS/CTS frames are short, the requirement to send them at the legacy rate
(6Mb/s) can significantly reduce an 802.11n WLAN’s performance.
When deploying an 802.11n AP, care should be taken to set up appropriate channel assignments. When using 40MHz channels, 802.11n APs should be
operated in the U-NII bands above 5GHz as there is simply not enough useful
spectrum to use these wider channels in the 2.4GHz ISM band. An optional BSS
feature called phased coexistence operation (PCO) allows an AP to periodically switch
between 20MHz and 40MHz channel widths, which can provide better coexistence between 802.11n APs operating near legacy equipment at the cost of some
additional throughput. Finally, it is worth mentioning that 802.11n APs generally
require more power than conventional APs. This higher power level exceeds the
basic 15W provided by 802.3af power-over-Ethernet (PoE) system wiring, meaning
that PoE+ (802.3at, capable of 30W) should be used unless some other form of
power such as a direct external power supply is available.
Wi-Fi Security
There has been considerable evolution in the security model for 802.11 networks.
In its early days, 802.11 used an encryption method known as wired equivalent
privacy (WEP). WEP was later shown to be so weak that some replacement was
required. Industry responded with Wi-Fi protected access (WPA), which replaced
the way keys are used with encrypted blocks (see Chapter 18 for the basics of
cryptography). In WPA, a scheme called the Temporal Key Integrity Protocol (TKIP)
ensures, among other things, that each frame is encrypted with a different encryption key. It also includes a message integrity check, called Michael, that fixed one
of the major weaknesses in WEP. WPA was created as a placeholder that could be
used on fielded WEP-capable equipment by way of a firmware upgrade while the
IEEE 802.11i standards group worked on a stronger standard that was ultimately
absorbed into Clause 8 of [802.11-2007] and dubbed “WPA2” by industry. Both
WEP and WPA use the RC4 encryption algorithm [S96]. WPA2 uses the Advanced
Encryption Standard (AES) algorithm [AES01].
The encryption techniques we just discussed are aimed at providing privacy
between the station and AP, assuming the station has legitimate authorization to
be accessing the network. In WEP, and small-scale environments that use WPA
or WPA2, authorization is typically implemented by pre-placing a shared key
or password in each station as well as in the AP during configuration. A user
knowing the key is assumed to have legitimate access to the network. These keys
are also frequently used to initialize the encryption keys used to ensure privacy.
Using such pre-shared keys (PSKs) has limitations. For example, an administrator
may have considerable trouble in providing keys only to authorized users. If a
user becomes de-authorized, the PSK has to be replaced and all legitimate users
informed. This approach does not scale to environments with many users. As a
result, WPA and later standards support a port-based network access control standard
called 802.1X [802.1X-2010]. It provides a way to carry the Extensible Authentication
Protocol (EAP) [RFC3748] in IEEE 802 LANs (called EAPOL), including 802.3 and
802.11 [RFC4017]. EAP, in turn, can be used to carry many other standard and nonstandard authentication protocols. It can also be used to establish keys, including
WEP keys. Details of these protocols are given in Chapter 18, but we shall also see
the use of EAP when we discuss PPP in Section 3.6.
With the completion of the IEEE 802.11i group’s work, the RC4/TKIP combination in WPA was extended with a new algorithm called CCMP as part of WPA2.
CCMP is based on using the counter mode (CCM [RFC3610]) of the AES for confidentiality with cipher block chaining message authentication code (CBC-MAC; note the
“other” use of the term MAC here) for authentication and integrity. All AES processing is performed using a 128-bit block size and 128-bit keys. CCMP and TKIP
form the basis for a Wi-Fi security architecture named the Robust Security Network
(RSN), which supports Robust Security Network Access (RSNA). Earlier methods,
such as WEP, are called pre-RSNA methods. RSNA compliance requires support
for CCMP (TKIP is optional), and 802.11n does away with TKIP entirely. Table 3-4
provides a summary of this somewhat complicated situation.
Table 3-4 Wi-Fi security has evolved from WEP, which was found to be insecure, to WPA, to the
now-standard WPA2 collection of algorithms.
Name             Key Stream                       Management
WEP (pre-RSNA)   RC4                              PSK, (802.1X/EAP)
WPA              TKIP (RC4-based)                 PSK, 802.1X/EAP
WPA2 (RSNA)      CCMP (AES-based); TKIP optional  PSK, 802.1X/EAP
In all cases, both pre-shared keys and 802.1X can be used for authentication and initial keying. The major attraction of using 802.1X/EAP is that a managed
authentication server can be used to provide access control decisions on a per-user
basis to an AP. For this reason, authentication using 802.1X is sometimes referred to
as “Enterprise” (e.g., WPA-Enterprise). EAP itself can encapsulate various specific
authentication protocols, which we discuss in more detail in Chapter 18.
Wi-Fi Mesh (802.11s)
The IEEE is working on the 802.11s standard, which covers Wi-Fi mesh operation.
With mesh operation, wireless stations can act as data-forwarding agents (like
APs). The standard is not yet complete as of writing (mid-2011). The draft version
of 802.11s defines the Hybrid Wireless Mesh Protocol (HWMP), based in part on the IETF standards for Ad-Hoc On-Demand Distance Vector (AODV) routing [RFC3561] and the Optimized Link State Routing (OLSR) protocol [RFC3626]. Mesh stations (mesh STAs) are a type of QoS STA and may participate in HWMP or other routing protocols, but compliant nodes must include an implementation of HWMP and the
associated airtime link metric. Mesh nodes coordinate using EDCA or may use an
optional coordinating function called mesh deterministic access. Mesh points (MPs)
are those nodes that form mesh links with neighbors. Those that also include AP
functionality are called mesh APs (MAPs). Conventional 802.11 stations can use
either APs or MAPs to access the rest of the wireless LAN.
The 802.11s draft specifies a new optional form of security for RSNA called
Simultaneous Authentication of Equals (SAE) [SAE]. This security protocol is a bit
different from others because it does not require lockstep operation between a
specially designated initiator and responder. Instead, stations are treated as
equals, and any station that first recognizes another may initiate a security
exchange (or two stations may initiate exchanges simultaneously).
Point-to-Point Protocol (PPP)
PPP stands for the Point-to-Point Protocol [RFC1661][RFC1662][RFC2153]. It is a popular method for carrying IP datagrams over serial links—from low-speed dial-up
modems to high-speed optical links [RFC2615]. It is widely deployed by some DSL
service providers, which also use it for assigning Internet system parameters (e.g.,
initial IP address and domain name server; see Chapter 6).
PPP should be considered more of a collection of protocols than a single protocol. It supports a basic method to establish a link, called the Link Control Protocol (LCP), as well as a family of network control protocols (NCPs), used to establish network-layer links for
various kinds of protocols, including IPv4 and IPv6 and possibly non-IP protocols,
after LCP has established the basic link. A number of related standards cover control of compression and encryption for PPP, and a number of authentication methods can be employed when a link is brought up.
Link Control Protocol (LCP)
The LCP portion of PPP is used to establish and maintain a low-level two-party
communication path over a point-to-point link. PPP’s operation therefore need
be concerned only with two ends of a single link; it does not need to handle the
problem of mediating access to a shared resource like the MAC-layer protocols of
Ethernet and Wi-Fi.
PPP generally, and LCP more specifically, imposes minimal requirements on
the underlying point-to-point link. The link must support bidirectional operation
(LCP uses acknowledgments) and operate either asynchronously or synchronously. Typically, LCP establishes a link using a simple bit-level framing format
based on the High-Level Data Link Control (HDLC) protocol. HDLC was already
a well-established framing format by the time PPP was designed [ISO3309]
[ISO4335]. IBM modified it to form Synchronous Data Link Control (SDLC), a protocol used as the link layer in its proprietary System Network Architecture (SNA)
protocol suite. HDLC was also used as the basis for the LLC standard in 802.2 and
ultimately for PPP as well. The format is shown in Figure 3-22.
Figure 3-22 The PPP basic frame format was borrowed from HDLC. It provides a protocol identifier, payload
area, and 2- or 4-byte FCS. Other fields may or may not be present, depending on the compression
options negotiated.
The PPP frame format, in the common case when HDLC-like framing is used
as shown in Figure 3-22, is surrounded by two 1-byte Flag fields containing the
fixed value 0x7E. These fields are used by the two stations on the ends of the
point-to-point link for finding the beginning and end of the frame. A small problem arises if the value 0x7E itself occurs inside the frame. This is handled in one of
two ways, depending on whether PPP is operating over an asynchronous or a synchronous link. For asynchronous links, PPP uses character stuffing (also called byte
stuffing). If the flag character appears elsewhere in the frame, it is replaced with
the 2-byte sequence 0x7D5E (0x7D is known as the “PPP escape character”). If the
escape character itself appears in the frame, it is replaced with the 2-byte sequence
0x7D5D. Thus, the receiver replaces 0x7D5E with 0x7E and 0x7D5D with 0x7D
upon receipt. On synchronous links (e.g., T1 lines, T3 lines), PPP uses bit stuffing.
Noting that the flag character has the bit pattern 01111110 (a contiguous sequence
of six 1 bits), bit stuffing arranges for a 0 bit to be inserted after any contiguous
string of five 1 bits appearing in a place other than the flag character itself. Doing
so implies that bytes may be sent as more than 8 bits, but this is generally OK, as
low layers of the serial processing hardware are able to “unstuff” the bit stream,
restoring it to its prestuffed pattern.
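The byte-stuffing transform just described can be sketched as follows. This is a hypothetical illustration, not code from any PPP implementation; it shows only the escape/unescape logic for asynchronous links.

```python
# PPP character (byte) stuffing for asynchronous links: the Flag byte
# (0x7E) and the escape byte (0x7D) are each replaced by 0x7D followed
# by the original value XORed with 0x20.

FLAG, ESC = 0x7E, 0x7D

def byte_stuff(payload: bytes) -> bytes:
    """Escape Flag and escape bytes before transmission."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # 0x7E -> 0x7D 0x5E, 0x7D -> 0x7D 0x5D
        else:
            out.append(b)
    return bytes(out)

def byte_unstuff(data: bytes) -> bytes:
    """Reverse the stuffing at the receiver."""
    out, i = bytearray(), 0
    while i < len(data):
        if data[i] == ESC:
            out.append(data[i + 1] ^ 0x20)
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)
```

Stuffing b"\x7e\x01\x7d" yields b"\x7d\x5e\x01\x7d\x5d", and unstuffing restores the original, matching the 0x7D5E/0x7D5D substitutions described above.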
After the first Flag field, PPP adopts the HDLC Address (Addr) and Control
fields. In HDLC, the Address field would specify which station is being addressed,
but because PPP is concerned only with a single destination, this field is always
defined to have the value 0xFF (all stations). The Control field in HDLC is used to
indicate frame sequencing and retransmission behavior. As these link-layer reliability functions are not ordinarily implemented by PPP, the Control field is set
to the fixed value 0x03. Because both the Address and Control fields are fixed constants in PPP, they are often omitted during transmission with an option called
Address and Control Field Compression (ACFC), which essentially eliminates the two
fields.
There has been considerable debate over the years as to how much reliability
link-layer networks should provide, if any. With Ethernet, up to 16 retransmission attempts are made before giving up. Typically, PPP is configured to do no
retransmission, although there do exist specifications for adding retransmission
[RFC1663]. The trade-off can be subtle and is dependent on the types of traffic to
be carried. A detailed discussion of the considerations is contained in [RFC3366].
The Protocol field of the PPP frame indicates the type of data being carried.
Many different types of protocols can be carried in a PPP frame. The official list
of protocols and their assigned numbers used in the Protocol field is given by the “Point-to-Point
Protocol Field Assignments” document [PPPn]. In conformance with the HDLC specification, protocol numbers are assigned such that the least significant bit of the
most significant byte equals 0 and the least significant bit of the least significant
byte equals 1. Values in the (hexadecimal) range 0x0000–0x3FFF identify networklayer protocols, and values in the 0x8000–0xBFFF range identify data belonging to
an associated NCP. Protocol values in the range 0x4000–0x7FFF are used for “low-volume” protocols with no associated NCP. Protocol values in the range 0xC000–
0xEFFF identify control protocols such as LCP. In some circumstances the Protocol
field can be compressed to a single byte, if the Protocol Field Compression (PFC)
option is negotiated successfully during link establishment. This is applicable to
protocols with protocol numbers in the range 0x0000–0x00FF, which includes most
of the popular network-layer protocols. Note, however, that LCP packets always
use the 2-byte uncompressed format.
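The numbering rules and ranges above can be captured in a short sketch (hypothetical helper names; only the rules themselves come from the text):

```python
# PPP Protocol field rules: the least significant bit of the most
# significant byte must be 0, and the least significant bit of the
# least significant byte must be 1; value ranges classify the protocol.

def valid_ppp_protocol(p: int) -> bool:
    msb, lsb = (p >> 8) & 0xFF, p & 0xFF
    return (msb & 1) == 0 and (lsb & 1) == 1

def protocol_class(p: int) -> str:
    if 0x0000 <= p <= 0x3FFF:
        return "network-layer"
    if 0x4000 <= p <= 0x7FFF:
        return "low-volume (no NCP)"
    if 0x8000 <= p <= 0xBFFF:
        return "NCP"
    return "control (e.g., LCP)"   # 0xC000-0xEFFF
```

For example, LCP's value 0xC021 passes the parity check (0xC0 is even, 0x21 is odd) and falls in the control range.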
The final portion of the PPP frame contains a 16-bit FCS (a CRC16, with generator polynomial 10001000000100001) covering the entire frame except the FCS field
itself and Flag bytes. Note that the FCS value covers the frame before any byte or
bit stuffing has been performed. With an LCP option (described with the other LCP options below), the CRC
can be extended from 16 to 32 bits. This case uses the same CRC32 polynomial
mentioned previously for Ethernet.

LCP Operation
LCP has a simple encapsulation beyond the basic PPP packet. It is illustrated in
Figure 3-23.
Figure 3-23 The LCP packet is a fairly general format capable of identifying the type of encapsulated data and
its length. LCP frames are used primarily in establishing a PPP link, but this basic format also
forms the basis of many of the various network control protocols.
The PPP Protocol field value for LCP is always 0xC021, which is not eliminated
using PFC, so as to minimize ambiguity. The Ident field is a sequence number
provided by the sender of LCP request frames and is incremented for each subsequent message. When forming a reply (ACK, NACK, or REJECT response), this
field is constructed by copying the value included in the request to the response
packet. In this fashion, the requesting side can identify replies to the appropriate
request by matching identifiers. The Code field gives the type of operation being
either requested or responded to: configure-request (0x01), configure-ACK (0x02),
configure-NACK (0x03), configure-REJECT (0x04), terminate-request (0x05), terminate-ACK (0x06), code-REJECT (0x07), protocol-REJECT (0x08), echo-request
(0x09), echo-reply (0x0A), discard-request (0x0B), identification (0x0C), and time-remaining (0x0D). Generally, ACK messages indicate acceptance of a set of options,
and NACK messages indicate a partial rejection with suggested alternatives. A
REJECT message rejects one or more options entirely. A rejected code indicates
that one of the field values contained in a previous packet is unknown. The Length
field gives the length of the LCP packet in bytes and is not permitted to exceed the
link’s maximum receive unit (MRU), an advisory frame size limit that we
shall discuss later. Note that the Length field is part of the LCP protocol; the PPP
protocol in general does not provide such a field.
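The LCP packet layout just described (1-byte Code, 1-byte Ident, 2-byte Length covering the entire LCP packet) can be sketched with a hypothetical encoder/decoder; this is an illustration of the format, not a full LCP implementation:

```python
import struct

def build_lcp(code: int, ident: int, data: bytes) -> bytes:
    """Build an LCP packet; Length counts the 4-byte header plus data."""
    length = 4 + len(data)
    return struct.pack("!BBH", code, ident, length) + data

def parse_lcp(pkt: bytes):
    """Return (code, ident, data) from a raw LCP packet."""
    code, ident, length = struct.unpack("!BBH", pkt[:4])
    return code, ident, pkt[4:length]
```

A configure-request (Code 0x01) with 4 bytes of option data therefore yields an 8-byte LCP packet, and a replying peer would copy the Ident value into its ACK/NACK/REJECT.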
The main job of LCP is to bring up a point-to-point link to a minimal level.
Configure messages cause each end of the link to start the basic configuration procedure and establish agreed-upon options. Termination messages are used to clear
a link when complete. LCP also provides some additional features mentioned previously. Echo Request/Reply messages may be exchanged anytime a link is active
by LCP in order to verify operation of the peer. The Discard Request message can
be used for performance measurement; it instructs the peer to discard the packet
without responding. The Identification and Time-Remaining messages are used for
administrative purposes: to know the type of the peer system and to indicate the
amount of time allowed for the link to remain established (e.g., for administrative
or security reasons).
Historically, one common problem with point-to-point links occurs if a remote
station is in loopback mode or is said to be “looped.” Telephone company wide area
data circuits are sometimes put into loopback mode for testing—data sent at one
side is simply returned from the other. Although this may be useful for line testing, it is not at all helpful for data communication, so LCP includes ways to send
a magic number (an arbitrary number selected by the sender) to see if it is immediately returned in the same message type. If so, the line is detected as being looped,
and maintenance is likely required.
To get a better feeling for how PPP links are established and options are negotiated, Figure 3-24 illustrates a simplified packet exchange timeline as well as a
simplified state machine (implemented at both ends of the link).
The link is considered to be established once the underlying protocol layer
has indicated that an association has become active (e.g., carrier detected for
modems). Link quality testing, which involves an exchange of link quality reports
and acknowledgments (see the discussion of LCP options below), may also be accomplished during this
period. If the link requires authentication, which is common, for example, when
dialing in to an ISP, a number of additional exchanges may be required to establish the authenticity of one or both parties attached to the link. The link is terminated once the underlying protocol or hardware has indicated that the association
has stopped (e.g., carrier lost) or after having sent a link termination request and
received a termination ACK from the peer.

LCP Options
Several options can be negotiated by LCP as it establishes a link for use by one or
more NCPs. We shall discuss two of the more common ones. The Asynchronous
Control Character Map (ACCM) or simply “asyncmap” option defines which control
characters (i.e., ASCII characters in the range 0x00–0x1F) need to be “escaped” as
PPP operates. Escaping a character means that the true value of the character is
Figure 3-24 LCP is used to establish a PPP link and agree upon options by each peer. The typical
exchange involves a pair of configure requests and ACKs that contain the option list,
an authentication exchange, data exchange (not pictured), and a termination exchange.
Because PPP is such a general-purpose protocol with many parts, many other types of
operations may occur between the establishment of a link and its termination.
not sent, but instead the PPP escape character (0x7D) is stuffed in front of a value
formed by XORing the original control character with the value 0x20. For example, the XOFF character (0x13) would be sent as the 2-byte sequence 0x7D33. ACCM is used in cases
where control characters may affect the operation of the underlying hardware.
For example, if software flow control using XON/XOFF characters is enabled and
the XOFF character is passed through the link unescaped, the data transfer ceases
until the hardware observes an XON character. The asyncmap option is generally
specified as a 32-bit hexadecimal number where a 1 bit in the nth least significant
bit position indicates that the control character with value n should be escaped.
Thus, the asyncmap 0xffffffff would escape all control characters, 0x00000000
would escape none of them, and 0x000A0000 would escape XON (value 0x11) and
XOFF (value 0x13). Although the value 0xffffffff is the specified default, many
links today can operate safely with the asyncmap set to 0x00000000.
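The asyncmap bit test and the escape transform can be combined in a short sketch (hypothetical function names; the bit-position rule and XOR transform are as stated above):

```python
# ACCM (asyncmap) handling: bit n of the 32-bit map set to 1 means the
# control character with value n must be escaped (0x7D, then char ^ 0x20).

ESC = 0x7D

def needs_escape(accm: int, ch: int) -> bool:
    """True if ch is a control character flagged for escaping in accm."""
    return ch < 0x20 and bool((accm >> ch) & 1)

def escape_controls(accm: int, payload: bytes) -> bytes:
    out = bytearray()
    for b in payload:
        if needs_escape(accm, b):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)
```

With the asyncmap 0x000A0000 from the text, XON (0x11) and XOFF (0x13) are escaped but other control characters pass through, and XOFF becomes the sequence 0x7D 0x33.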
Because PPP lacks a Length field and serial lines do not typically provide framing, no immediate hard limit is set on the length of a PPP frame, in theory. In practice, some maximum frame size is typically given by specifying the MRU. When
a host specifies an MRU option (type 0x01), the peer is requested to never send
frames longer than the value provided in the MRU option. The MRU value is the
length of the data field in bytes; it does not count the various other PPP overhead
fields (i.e., Protocol, FCS, Flag fields). Typical values are 1500 or 1492 but may be as
large as 65,535. A minimum of 1280 is required for IPv6 operations. The standard
requires PPP implementations to accept frames as large as 1500 bytes, so the MRU
serves more as advice to the peer in choosing the packet size than as a hard limit
on the size. When small packets are interleaved with larger packets on the same
PPP link, the larger packets may use most of the bandwidth of a low-bandwidth
link, to the detriment of the small packets. This can lead to jitter (delay variance),
negatively affecting interactive applications such as remote login and VoIP. Configuring a smaller MRU (or MTU) can help mitigate this issue at the cost of higher per-frame overhead.
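A back-of-the-envelope calculation makes the jitter concern concrete (illustrative numbers only):

```python
# Serialization delay: the time one frame occupies a slow PPP link,
# which bounds how long a small interactive packet may wait behind it.

def serialization_ms(frame_bytes: int, link_bps: int) -> float:
    """Milliseconds to transmit a frame of the given size on the link."""
    return frame_bytes * 8 / link_bps * 1000

# A 1500-byte frame on a 56,000 bit/s modem link takes about 214 ms,
# so a small VoIP or keystroke packet queued behind it can see that
# much added delay; a 256-byte MRU cuts the wait to roughly 37 ms.
```

This is why shrinking the MRU on low-bandwidth links reduces delay variance for interactive traffic.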
PPP supports a mechanism to exchange link quality reporting information.
During option negotiation, a configuration message including a request for a particular quality protocol may be included. Sixteen bits of the option are reserved to
specify the particular protocol, but the most common is a PPP standard involving
Link Quality Reports (LQRs) [RFC1989], using the value 0xC025 in the PPP Protocol
field. If this is enabled, the peer is asked to provide LQRs at some periodic rate.
The maximum time between LQRs requested is encoded as a 32-bit number present in the configuration option and expressed in 1/100s units. Peers may generate
LQRs more frequently than requested. LQRs include the following information:
a magic number, the number of packets and bytes sent and received, the number
of incoming packets with errors and the number of discarded packets, and the
total number of LQRs exchanged. A typical implementation allows the user to
configure how often LQRs are requested from the peer. Some also provide a way
to terminate the link if the quality history fails to meet some configured threshold.
LQRs may be requested after the PPP link has reached the Establish state. Each
LQR is given a sequence number, so it is possible to determine trends over time,
even in the face of reordering of LQRs.
Many PPP implementations support a callback capability. In a typical callback
setup, a PPP dial-up callback client calls in to a PPP callback server, authentication information is provided, and the server disconnects and calls the client back.
This may be useful in situations where call toll charges are asymmetric or for
some level of security. The protocol used to negotiate callback is an LCP option
with value 0x0D [RFC1570]. If agreed upon, the Callback Control Protocol (CBCP)
completes the negotiation.
Some compression and encryption algorithms used with PPP require a certain minimum number of bytes, called the block size, when operating. When data is
not otherwise long enough, padding may be added to cause the length to become
an even multiple of the block size. If present, padding is included beyond the data
area and prior to the PPP FCS field. A padding method known as self-describing
padding [RFC1570] alters the value of padding to be nonzero. Instead, each byte
gets the value of its offset in the pad area. Thus, the first byte of pad would have
the value 0x01, and the final byte contains the number of pad bytes that were
added. At most, 255 bytes of padding are supported. The self-describing padding
option (type 10) indicates to a peer the ability to understand this form of padding
and includes the maximum pad value (MPV), which is the largest pad value allowed
for this association. Recall that the basic PPP frame lacks an explicit Length field,
so a receiver can use self-describing padding to determine how many pad bytes
should be trimmed from the received data area.
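A simplified sketch of self-describing padding follows (hypothetical helpers; it ignores the MPV bound and assumes the receiver knows padding is in use, as negotiated by the option):

```python
# Self-describing padding [RFC1570]: pad byte i (1-based) holds the
# value i, so the final pad byte tells the receiver how many bytes
# to trim even though PPP frames carry no explicit Length field.

def add_padding(data: bytes, block: int) -> bytes:
    """Pad data up to a multiple of block with the counting pattern."""
    pad = (-len(data)) % block
    return data + bytes(range(1, pad + 1))

def strip_padding(data: bytes) -> bytes:
    """Trim a trailing counting run if one is present."""
    n = data[-1]
    if n and data[-n:] == bytes(range(1, n + 1)):
        return data[:-n]
    return data
```

For a 4-byte block size, a 5-byte payload gains the pad bytes 0x01 0x02 0x03, and the receiver trims 3 bytes because the last byte reads 3.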
To lessen the impact of the fixed costs of sending a header on every frame, a
method has been introduced to multiplex multiple distinct payloads of potentially
different protocols into the same PPP frame, an approach called PPPMux [RFC3153].
The primary PPP header Protocol field is set to multiplexed frame (0x0059), and then
each payload block is inserted into the frame. This is accomplished by introducing a 1- to 4-byte subframe header in front of each payload block. It includes 1 bit
(called PFF) indicating whether a Protocol field is included in the subframe header
and another 1-bit field (called LXT) indicating whether the following Length field
is 1 or 2 bytes. Beyond this, if present, is the 1- or 2-byte Protocol ID using the same
values and same compression approach as with the outer PPP header. A 0 value for
PFF (meaning no PID field is present) is possible when the subframe matches the
default PID established when the configuration state is set up using the PPPMux
Control Protocol (PPPMuxCP).
The PPP frame format in Figure 3-22 indicates that the ordinary PPP/HDLC
FCS can be either 16 or 32 bits. While the default is 16, 32-bit FCS values can be
enabled with the 32-bit FCS option. Other LCP options include the use of PFC and
ACFC, and selection of an authentication algorithm.
Internationalization [RFC2484] provides a way to convey the language and
character set to be used. The character set is one of the standard values from the
“charset registry” [IANA-CHARSET], and the language value is chosen from the
list in [RFC5646][RFC4647].
Multilink PPP (MP)
A special option to PPP called multilink PPP (MP) [RFC1990] can be used to
aggregate multiple point-to-point links to act as one. This idea is similar to link
aggregation, discussed earlier, and has been used for aggregating multiple circuit-switched channels together (e.g., ISDN B channels). MP includes a special
LCP option to indicate multilink support as well as a negotiation protocol to fragment and recombine fragmented PPP frames across multiple links. An aggregated
link, called a bundle, operates as a complete virtual link and can contain its own
configuration information. The bundle comprises a number of member links. Each
member link may also have its own set of options.
The obvious method to implement MP would be to simply alternate packets across the member links. This approach, called the bank teller’s algorithm, may
lead to reordering of packets, which can have undesirable performance impacts on
other protocols. (Although TCP/IP, for example, can function properly with reordered packets, it may not function as well as it could without reordering.) Instead,
MP places a 2- or 4-byte sequencing header in each packet, and the remote MP
receiver is tasked with reconstructing the proper order. The data frame appears as
shown in Figure 3-25.
Figure 3-25
An MP fragment contains a sequencing header that allows the remote end of a multilink bundle
to reorder fragments. Two formats of this header are supported: a short header (2 bytes) and a
long header (4 bytes).
In Figure 3-25 we see an MP fragment with the begin (B) and end (E) fragment
bit fields and Sequence Number field. Note that there is both a long format, in which
4 bytes are used for the fragmentation information, and a short format, in which
only 2 bytes are used. The format being used is selected during option negotiation
using the LCP short sequence number option (type 18). If a frame is not fragmented
but is carried in this format, both the B and E bits are set, indicating that the fragment is the first and last (i.e., it is the whole frame). Otherwise, the first fragment
has the BE bit combination set to 10 and the final fragment has the BE bits set to
01, and all fragments in between have them set to 00. The sequence number then
gives the packet number offset relative to the first fragment.
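The short (2-byte) sequencing header can be sketched as follows, assuming the layout of a B bit, an E bit, 2 reserved bits, and a 12-bit sequence number (hypothetical helper names):

```python
import struct

def mp_short_header(begin: bool, end: bool, seq: int) -> bytes:
    """Pack an MP short header: B, E, 2 reserved bits, 12-bit sequence."""
    word = (int(begin) << 15) | (int(end) << 14) | (seq & 0x0FFF)
    return struct.pack("!H", word)

def parse_mp_short(hdr: bytes):
    """Return (begin, end, seq) from a 2-byte MP short header."""
    (word,) = struct.unpack("!H", hdr)
    return bool(word & 0x8000), bool(word & 0x4000), word & 0x0FFF
```

An unfragmented frame carried in this format sets both bits (B=E=1); a first fragment packs as BE=10, a final fragment as BE=01, and middle fragments as BE=00.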
Use of MP is requested by including an LCP option called the multilink maximum receive reconstructed unit (MRRU, type 17) that can act as a sort of larger MRU
applying to the bundle. Frames larger than any of the member link MRUs may
still be permitted across the MP link, up to the limit advertised in this value.
Because an MP bundle may span multiple member links, a method is needed
to identify member links as belonging to the same bundle. Member links in the
same bundle are identified by the LCP endpoint discriminator option (type 19). The
endpoint discriminator could be a phone number, a number derived from an IP or
MAC address, or some administrative string. Other than being common to each
member link, there are few restrictions on the form of this option.
The basic method of establishing MP as defined in [RFC1990] expects that
member links are going to be used symmetrically—about the same number of
fragments will be allocated to each of a fixed number of links. In order to achieve
more sophisticated allocations than this, the Bandwidth Allocation Protocol (BAP)
and Bandwidth Allocation Control Protocol (BACP) are specified in [RFC2125]. BAP
can be used to dynamically add or remove links from a bundle, and BACP can be
used to exchange information regarding how links should be added or removed
using BAP. This capability can be used to help implement bandwidth on demand
(BOD). In networks where some fixed resource needs to be allocated in order to
meet an application’s need for bandwidth (e.g., by dialing some number of telephone connections), BOD typically involves monitoring traffic and creating new
connections when usage is high and shutting down connections when usage is
low. This is useful, for example, in cases where some monetary charge is associated with the number of connections being used.
BAP/BACP makes use of a new link discriminator LCP option (LCP option type
23). This option contains a 16-bit numeric value that is required to be different for
each member link of a bundle. It is used by BAP to identify which links are to be
added or removed. BACP is negotiated once per bundle during the network phase
of a PPP link. Its main purpose is to identify a favored peer. That is, if more than one
bundle is being set up simultaneously among multiple peers, the favored peer is
preferentially allocated member links.
BAP includes three packet types: request, response, and indication. Requests
are to add a link to a bundle or to request the peer to delete a link from a bundle.
Indications convey the results of attempted additions back to the original requester
and are acknowledged. Responses are either ACKs or NACKs for these requests.
More details can be found in [RFC2125].
Compression Control Protocol (CCP)
Historically, PPP has been the protocol of choice when using relatively slow dial-up modems. As a consequence, a number of methods have been developed to
compress data sent over PPP links. This type of compression is distinct both from
the types of compression supported in modem hardware (e.g., V.42bis, V.44) and
also from protocol header compression, which we discuss later. Today, several compression options are available. To choose among them for each direction on a PPP
link, LCP can negotiate an option to enable the Compression Control Protocol (CCP)
[RFC1962]. CCP acts like an NCP (see Section 3.6.5) but handles the details of configuring compression once the compression option is indicated in the LCP link
establishment exchange.
In behaving like an NCP, CCP can be negotiated only once the link has entered
the Network state. It uses the same packet exchange procedures and formats as
LCP, except the Protocol field is set to 0x80FD, there are some special options, and
in addition to the common Code field values (1–7) two new operations are defined:
reset-request (0x0E) and reset-ACK (0x0F). If an error is detected in a compressed
frame, a reset request can be used to cause the peer to reset compression state
(e.g., dictionaries, state variables, state machines). After resetting, the peer
responds with a reset-ACK.
One or more compressed packets may be carried within the information portion of a PPP frame (i.e., the portion including the LCP data and possibly pad
portions). Compressed frames carry the Protocol field value of 0x00FD, but the
mechanism used to indicate the presence of multiple compressed datagrams is
dependent on the particular compression algorithm used (see Section 3.6.6). When
used in conjunction with MP, CCP may be used either on the bundle or on some
combination of the member links. If used only on member links, the Protocol field
is set to 0x00FB (individual link compressed datagram).
CCP can enable one of about a dozen compression algorithms [PPPn]. Most
of the algorithms are not official standards-track IETF documents, although they
may be described in informational RFCs (e.g., [RFC1977] describes the BSD compression scheme, and [RFC2118] describes the Microsoft Point-to-Point Compression Protocol (MPPC)). If compression is being used, PPP frames are reconstructed
before further processing, so higher-layer PPP operations are not generally concerned with the details of the compressed frames.
PPP Authentication
Before a PPP link becomes operational in the Network state, it is often necessary to
establish the identity of the peer(s) of the link using some authentication (identity
verification) mechanism. The basic PPP specification has a default of no authentication, so the authentication exchange of Figure 3-24 would not be used in such
cases. More often, however, some form of authentication is required, and a number of protocols have evolved over the years to deal with this situation. In this
chapter we discuss them only from a high-level point of view and leave the details
for the chapter on security (Chapter 18). Other than no authentication, the simplest and least secure authentication scheme is called the Password Authentication
Protocol (PAP). This protocol is very simple: one peer requests the other to send a
password, and the other simply provides it. As the password is sent unencrypted
over the PPP link, any eavesdropper on the line can simply capture the password
and use it later. Because of this significant vulnerability, PAP is not recommended
for authentication. PAP packets are encoded as LCP packets with the Protocol field
value set to 0xC023.
A somewhat more secure approach to authentication is provided by the Challenge-Handshake Authentication Protocol (CHAP) [RFC1994]. Using CHAP, a random
value is sent from one peer (called the authenticator) to the other. A response is
formed by using a special one-way (i.e., not easily invertible) function to combine
the random value with a shared secret key (usually derived from a password)
to produce a number that is sent in response. Upon receiving this response, the
authenticator can determine with a very high degree of confidence that its peer
possesses the correct secret key. This protocol never sends the key or password
over the link in a clear (unencrypted) form, so any eavesdropper is unable to learn
the secret. Because a different random value is used each time, the result of the
function changes for each challenge/response, so the values an eavesdropper may
be able to capture cannot be reused (played back) to impersonate the peer. However, CHAP is vulnerable to a “man in the middle” form of attack (see Chapter 18).
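The CHAP exchange can be sketched briefly. Per [RFC1994] the response value is the MD5 hash over the Identifier byte, the shared secret, and the challenge; the helper names here are hypothetical:

```python
import hashlib

def chap_response(ident: int, secret: bytes, challenge: bytes) -> bytes:
    """Peer side: hash Identifier || secret || challenge; the secret
    itself never crosses the link."""
    return hashlib.md5(bytes([ident]) + secret + challenge).digest()

def chap_verify(ident: int, secret: bytes,
                challenge: bytes, response: bytes) -> bool:
    """Authenticator side: recompute and compare."""
    return chap_response(ident, secret, challenge) == response
```

Because the authenticator generates a fresh random challenge for each exchange, a captured response is useless for replay, which is the property the text describes.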
EAP [RFC3748] is an authentication framework available for many different
network types. It also supports many (about 40) different authentication methods,
ranging from simple passwords such as PAP and CHAP to more elaborate types
of authentication (e.g., smart cards, biometrics). EAP defines a message format
for carrying a variety of specific types of authentication formats, but additional
specifications are needed to define how EAP messages are carried over particular
types of links.
When EAP is used with PPP, the basic authentication method discussed so
far is altered. Instead of negotiating a specific authentication method early in the
link establishment (at LCP link establishment), the authentication operation may
be postponed until the Auth state (just before the Network state). This allows for
a greater richness in the types of information that can be used to influence access
control decisions by remote access servers (RASs). When there is a standard protocol
for carrying a variety of authentication mechanisms, a network access server may
not need to process the contents of EAP messages at all but can instead depend on
some other infrastructure authentication server (e.g., a RADIUS server [RFC2865])
to determine access control decisions. This is currently the design of choice for
enterprise networks and ISPs.
Network Control Protocols (NCPs)
Although many different NCPs can be used on a PPP link (even simultaneously),
we shall focus on the NCPs supporting IPv4 and IPv6. For IPv4, the NCP is called
the IP Control Protocol (IPCP) [RFC1332]. For IPv6, the NCP is IPV6CP [RFC5072].
Once LCP has completed its link establishment and authentication, each end of the
link is in the Network state and may proceed to negotiate a network-layer association using zero or more NCPs (one, such as IPCP, is typical).
IPCP, the standard NCP for IPv4, can be used to establish IPv4 connectivity over
a link and configure Van Jacobson header compression (VJ compression) [RFC1144].
IPCP packets may be exchanged after the PPP state machine has reached the Network state. IPCP packets use the same packet exchange mechanism and packet
format as LCP, except the Protocol field is set to 0x8021, and the Code field is limited
to the range 0–7. These values of the Code field correspond to the message types:
vendor-specific (see [RFC2153]), configure-request, configure-ACK, configureREJECT, terminate-request, terminate-ACK, and code-REJECT. IPCP can negotiate
a number of options, including an IP compression protocol (2), the IPv4 address
(3), and Mobile IPv4 [RFC2290] (4). Other options are available for learning the
location of primary and secondary domain name servers (see Chapter 11).
IPV6CP uses the same packet exchange and format as LCP, except it has two
different options: interface-identifier and IPv6-compression-protocol. The interface identifier option is used to convey a 64-bit IID value (see Chapter 2) used as
the basis for forming a link-local IPv6 address. Because it is used only on the local
link, it does not require global uniqueness. This is accomplished using a standard
link-local prefix for the higher-order bits of the IPv6 address and allowing the
lower-order bits to be a function of the interface identifier. This mimics IPv6 autoconfiguration (see Chapter 6).
Header Compression
PPP dial-up lines have historically been comparatively slow (54,000 bits/s or less),
and many small packets are often used with TCP/IP (e.g., for TCP’s acknowledgments; see Chapter 15). Most of these packets contain a TCP and IP header that
changes little from one packet to another on the same TCP connection. Other
higher-layer protocols behave similarly. Thus, it is useful to have a way of compressing the headers of these higher-layer protocols (or eliminating them) so that
fewer bytes need to be carried over relatively slow point-to-point links. The methods employed to compress or eliminate headers have evolved over time. We discuss
them in chronological order, beginning with VJ compression, mentioned earlier.
In VJ compression, portions of the higher-layer (TCP and IP) headers are
replaced with a small, 1-byte connection identifier. [RFC1144] discusses the origin
of this approach, using an older point-to-point protocol called CSLIP (Compressed
Serial Line IP). A typical IPv4 header is 20 bytes, and a TCP header without options
is another 20. Together, a common combined TCP/IPv4 header is thus 40 bytes,
and many of the fields do not change from packet to packet. Furthermore, many
of the fields that do change from packet to packet change only slightly or in a
limited way. When the nonchanging values are sent over a link once (or a small
number of times) and kept in a table, a small index can be used as a replacement
for the constants in subsequent packets. The limited changing values are then
encoded differentially (i.e., only the amount of change is sent). As a result, the
entire 40-byte header can usually be compressed to an effective 3 or 4 bytes. This
can significantly improve TCP/IP performance over slow links.
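The core idea can be sketched with a toy model (this is not the actual VJ encoding, which packs its deltas into a carefully bit-optimized format): cache the last header seen per connection, and for subsequent packets send only a small connection identifier plus the deltas of the fields that changed.

```python
def compress(cache, conn_id, header):
    """Toy differential header compression: emit only fields whose
    values changed since the last header seen on this connection."""
    prev = cache.get(conn_id)
    cache[conn_id] = dict(header)
    if prev is None:
        return ("FULL", conn_id, header)        # first packet: send everything
    deltas = {k: v - prev[k] for k, v in header.items() if v != prev[k]}
    return ("DELTA", conn_id, deltas)           # later packets: tiny deltas

cache = {}
h1 = {"seq": 1000, "ack": 500, "win": 8192, "ipid": 7}
h2 = {"seq": 1536, "ack": 500, "win": 8192, "ipid": 8}  # seq +536, ipid +1
compress(cache, 1, h1)
print(compress(cache, 1, h2))   # only two changed fields need to be sent
```

In the real algorithm, the common case (sequence number advancing by the size of the previous segment) compresses to almost nothing, which is how a 40-byte header shrinks to 3 or 4 bytes.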
The next step in the evolution of header compression is simply called IP header
compression [RFC2507][RFC3544]. It provides a way to compress the headers of
multiple packets using both TCP and UDP transport-layer protocols and either IPv4
or IPv6 network-layer protocols. The techniques are a logical extension and generalization of the VJ compression technique that applies to more protocols, and to
links other than PPP links. [RFC2507] points out the necessity of some strong error
detection mechanism in the underlying link layer because erroneous packets can
be constructed at the egress of a link if compressed header values are damaged in
transit. This is important to recognize when header compression is used on links
that may not have as strong an FCS computation as PPP.
Section 3.6 Point-to-Point Protocol (PPP)
The most recent step in the evolution of header compression is known as
Robust Header Compression (ROHC) [RFC5225]. It further generalizes IP header
compression to cover more transport protocols and allows more than one form
of header compression to operate simultaneously. Like the IP header compression
mentioned previously, it can be used over various types of links, including PPP.
We now look at the debugging output of a PPP server interacting with a client
over a dial-in modem. The dialing-in client is an IPv6-capable Microsoft Windows
Vista machine, and the server is Linux. The Vista machine is configured to negotiate multilink capability even on single links (Properties | Options | PPP Settings),
for demonstration purposes, and the server is configured to require an encryption
protocol negotiated using CCP (see MPPE in the following listing):
data dev=ttyS0, pid=28280, caller='none', conn='38400',
name='',cmd='/usr/sbin/pppd', user='/AutoPPP/'
pppd 2.4.4 started by a_ppp, uid 0
using channel 54
Using interface ppp0
ppp0 <--> /dev/ttyS0
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <auth eap>
<magic 0xa5ccc449><pcomp> <accomp>]
rcvd [LCP ConfNak id=0x1 <auth chap MS-v2>]
sent [LCP ConfReq id=0x2 <asyncmap 0x0> <auth chap MS-v2>
<magic 0xa5ccc449><pcomp> <accomp>]
rcvd [LCP ConfAck id=0x2 <asyncmap 0x0> <auth chap MS-v2>
<magic 0xa5ccc449><pcomp> <accomp>]
rcvd [LCP ConfReq id=0x2 <asyncmap 0x0> <magic 0xa531e06>
<pcomp> <accomp><callback CBCP> <mrru 1614>
<endpoint [local:12.92.67.ef.2f.fe.44.6e.84.f8.
sent [LCP ConfRej id=0x2 <callback CBCP> <mrru 1614>]
rcvd [LCP ConfReq id=0x3 <asyncmap 0x0> <magic 0xa531e06>
<pcomp> <accomp>
<endpoint [local:12.92.67.ef.2f.fe.44.6e.84.f8.
sent [LCP ConfAck id=0x3 <asyncmap 0x0> <magic 0xa531e06>
<pcomp> <accomp>
<endpoint [local:12.92.67.ef.2f.fe.44.6e.84.f8.
sent [CHAP Challenge id=0x1a <4d53c52b8e7dcfe7a9ea438b2b4daf55>,
name = "dialer"]
rcvd [LCP Ident id=0x4 magic=0xa531e06 "MSRASV5.20"]
rcvd [LCP Ident id=0x5 magic=0xa531e06 "MSRAS-0-VISTA"]
rcvd [CHAP Response id=0x1a
00000000033a555d2a77bd1fa692f2a0af707cd 4f0c0072c379c82e0f00>,
name = "dialer"]
sent [CHAP Success id=0x1a
"S=7E0B6B513215C87520BEF6725EF8A9945C28E918M=Access granted"]
sent [CCP ConfReq id=0x1 <mppe +H -M +S +L -D -C>]
rcvd [IPV6CP ConfReq id=0x6 <addr fe80::0000:0000:dead:beef>]
sent [IPV6CP TermAck id=0x6]
rcvd [CCP ConfReq id=0x7 <mppe -H -M -S -L -D +C>]
sent [CCP ConfNak id=0x7 <mppe +H -M +S +L -D -C>]
rcvd [IPCP ConfReq id=0x8 <compress VJ 0f 01> <addr>
<ms-dns1> <ms-wins> <ms-dns3>
sent [IPCP TermAck id=0x8]
rcvd [CCP ConfNak id=0x1 <mppe -H -M +S -L -D -C>]
sent [CCP ConfReq id=0x2 <mppe -H -M +S -L -D -C>]
rcvd [CCP ConfReq id=0x9 <mppe -H -M +S -L -D -C>]
sent [CCP ConfAck id=0x9 <mppe -H -M +S -L -D -C>]
rcvd [CCP ConfAck id=0x2 <mppe -H -M +S -L -D -C>]
MPPE 128-bit stateful compression enabled
sent [IPCP ConfReq id=0x1 <compress VJ 0f 01> <addr>]
sent [IPV6CP ConfReq id=0x1 <addr fe80::0206:5bff:fedd:c5c3>]
rcvd [IPCP ConfAck id=0x1 <compress VJ 0f 01> <addr>]
rcvd [IPV6CP ConfAck id=0x1 <addr fe80::0206:5bff:fedd:c5c3>]
rcvd [IPCP ConfReq id=0xa <compress VJ 0f 01>
<addr> <ms-dns1>
<ms-wins> <ms-dns3> <ms-wins>]
sent [IPCP ConfRej id=0xa <ms-wins> <ms-wins>]
rcvd [IPV6CP ConfReq id=0xb <addr fe80::0000:0000:dead:beef>]
sent [IPV6CP ConfAck id=0xb <addr fe80::0000:0000:dead:beef>]
rcvd [IPCP ConfAck id=0x1 <compress VJ 0f 01> <addr>]
rcvd [IPV6CP ConfAck id=0x1 <addr fe80::0206:5bff:fedd:c5c3>]
local LL address fe80::0206:5bff:fedd:c5c3
remote LL address fe80::0000:0000:dead:beef
rcvd [IPCP ConfReq id=0xc <compress VJ 0f 01>
<addr> <ms-dns1> <ms-dns3>]
sent [IPCP ConfNak id=0xc <addr> <ms-dns1>
sent [IPCP ConfAck id=0xd <compress VJ 0f 01> <addr>
<ms-dns1> <ms-dns3>]
local IP address
remote IP address
... data ...
Here we can see a somewhat involved PPP exchange, as viewed from the
server. The PPP server process creates a (virtual) network interface called ppp0,
which is awaiting an incoming connection on the dial-up modem attached to
serial port ttyS0. Once the incoming connection arrives, the server requests an
asyncmap of 0x0, EAP authentication, PFC, and ACFC. The client refuses EAP
authentication and instead suggests MS-CHAP-v2 (ConfNak) [RFC2759]. The
server then tries again, this time using MS-CHAP-v2, which is then accepted and
acknowledged (ConfAck). Next, the incoming request includes CBCP; an MRRU
of 1614 bytes, which is associated with MP support; and an endpoint ID. The server
rejects the request for CBCP and multilink operation (ConfRej). The endpoint
discriminator is once again sent by the client, this time without the MRRU, and is
accepted and acknowledged. Next, the server sends a CHAP challenge with the
name dialer. Before a response to the challenge arrives, two incoming identity
messages arrive, indicating that the peer is identified by the strings MSRASV5.20
and MSRAS-0-VISTA. Finally, the CHAP response arrives and is validated as correct, and an acknowledgment indicates that access is granted. PPP then moves on
to the Network state.
Once in the Network state, the CCP, IPCP, and IPV6CP NCPs are exchanged.
CCP attempts to negotiate Microsoft Point-to-Point Encryption (MPPE) [RFC3078].
MPPE is somewhat of an anomaly, as it is really an encryption protocol, and rather
than compressing the packet it actually expands it by 4 bytes. It does, however,
provide a relatively simple means of establishing encryption early in the negotiation process. The options +H -M +S +L -D -C indicate whether MPPE stateless
operation is desired (H), what cryptographic key strength is available (secure, S;
medium, M; or low, L), an obsolete D bit, and whether a separate, proprietary compression protocol called MPPC [RFC2118] is desired (C). Eventually the two peers
agree on stateful mode using strong 128-bit keying (-H, +S). Note that during the
middle of this negotiation, the client attempts to send an IPCP request, but the
server responds with an unsolicited TermAck (a message defined within LCP
that IPCP adopts). This is used to indicate to the peer that the server is “in need of
renegotiation” [RFC1661].
After the successful negotiation of MPPE, the server requests the use of VJ
header compression and provides its IPv4 and IPv6 addresses, the latter being
fe80::0206:5bff:fedd:c5c3. This IPv6 address is derived from the server’s
Ethernet MAC address 00:06:5B:DD:C5:C3. The client initially suggests its IPv4
address and name servers using IPCP, but the duplicated WINS options are rejected (ConfRej). The client
then requests to use fe80::0000:0000:dead:beef as its IPv6 address, which
is accepted and acknowledged. Finally, the client ACKs both the IPv4 and IPv6
addresses of the server, and the IPv6 addresses have been established. Next, the
client again requests IPv4 addresses for itself and its name servers; the server
suggests different values (ConfNak), which the client adopts and which are then
accepted and acknowledged (ConfAck).
As we can see from this exchange, the PPP negotiation is both flexible and
tedious. There are many options that can be attempted, rejected, and renegotiated.
While this may not be a big problem on a link with low delay, imagine how long
this exchange could take if each message took a few seconds (or longer) to reach its
destination, as might occur over a satellite link, for example. Link establishment
would be a visibly long procedure for the user.
3.7 Loopback

Although it may seem surprising, in many cases clients may wish to communicate
with servers on the same computer using Internet protocols such as TCP/IP. To
enable this, most implementations support a network-layer loopback capability that
typically takes the form of a virtual loopback network interface. It acts like a real
network interface but is really a special piece of software provided by the operating system to enable TCP/IP and other communications on the same host computer. IPv4 addresses starting with 127 are reserved for this, as is the IPv6 address
::1 (see Chapter 2 for IPv4 and IPv6 addressing conventions). Traditionally, UNIX-like systems including Linux assign the IPv4 address 127.0.0.1 (::1 for IPv6) to the
loopback interface and assign it the name localhost. An IP datagram sent to the
loopback interface must not appear on any network. Although we could imagine
the transport layer detecting that the other end is a loopback address and shortcircuiting some of the transport-layer logic and all of the network-layer logic, most
implementations perform complete processing of the data in the transport layer
and network layer and loop the IP datagram back up in the network stack only
when the datagram leaves the bottom of the network layer. This can be useful for
performance measurement, for example, because the amount of time required to
execute the stack software can be measured without any hardware overheads. In
Linux, the loopback interface is called lo.
Linux% ifconfig lo
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
RX packets:458511 errors:0 dropped:0 overruns:0 frame:0
TX packets:458511 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:266049199 (253.7 MiB)
TX bytes:266049199 (253.7 MiB)
Here we see that the local loopback interface has the IPv4 address 127.0.0.1
and a subnet mask of 255.0.0.0 (corresponding to class A network number 127
in classful addressing). The IPv6 address ::1 has a 128-bit-long prefix, so it represents only a single address. The interface has an MTU of 16KB (this can be configured to a much larger size, up to 2GB). A significant amount of traffic, nearly half a
million packets, has passed through the interface without error since the machine
was initialized two months earlier. We would not expect to see errors on the local
loopback device, given that it never really sends packets on any network.
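A loopback exchange is easy to demonstrate in code. The sketch below (Python, purely for illustration) runs a trivial TCP echo server and client on 127.0.0.1 within a single process; the data makes a full trip down and back up the local stack without ever appearing on a physical network:

```python
import socket, threading

def loopback_echo(msg: bytes) -> bytes:
    """Send msg over TCP to 127.0.0.1 and return the server's
    uppercased echo. The data traverses the full transport and
    network layers but never leaves the host."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024).upper())
        conn.close()

    t = threading.Thread(target=serve)
    t.start()
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(msg)
        reply = cli.recv(1024)
    t.join()
    srv.close()
    return reply

print(loopback_echo(b"loopback test"))   # b'LOOPBACK TEST'
```

Timing such an exchange is exactly the kind of performance measurement mentioned above: any latency observed is stack-execution overhead, with no hardware in the path.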
In Windows, the Microsoft Loopback Adapter is not installed by default, even
though IP loopback is still supported. This adapter can be used for testing various
network configurations even when a physical network interface is not available.
To install it under Windows XP, select Start | Control Panel | Add Hardware |
Select Network Adapters from list | Select Microsoft as manufacturer | Select
Microsoft Loopback Adapter. For Windows Vista or Windows 7, run the program
hdwwiz from the command prompt and add the Microsoft Loopback Adapter
manually. Once this is performed, the ipconfig command reveals the following
(this example is from Windows Vista):
C:\> ipconfig /all
Ethernet adapter Local Area Connection 2:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Microsoft Loopback Adapter
Physical Address. . . . . . . . . : 02-00-4C-4F-4F-50
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . :
Autoconfiguration IPv4 Address. . :
Subnet Mask . . . . . . . . . . . :
Default Gateway . . . . . . . . . :
DHCPv6 IAID . . . . . . . . . . . : 302121036
DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
NetBIOS over Tcpip. . . . . . . . : Enabled
Here we can see that the interface has been created, has been assigned both
IPv4 and IPv6 addresses, and appears as a sort of virtual Ethernet device. Now the
machine has several loopback addresses:
C:\> ping
Pinging with 32 bytes of data:
Reply from bytes=32 time<1ms TTL=128
Reply from bytes=32 time<1ms TTL=128
Reply from bytes=32 time<1ms TTL=128
Reply from bytes=32 time<1ms TTL=128
Ping statistics for
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
C:\> ping ::1
Pinging ::1 from ::1 with 32 bytes of data:
Reply from ::1: time<1ms
Reply from ::1: time<1ms
Reply from ::1: time<1ms
Reply from ::1: time<1ms
Ping statistics for ::1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
C:\> ping
Pinging with 32 bytes of data:
Reply from bytes=32 time<1ms TTL=128
Reply from bytes=32 time<1ms TTL=128
Reply from bytes=32 time<1ms TTL=128
Reply from bytes=32 time<1ms TTL=128
Ping statistics for
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
Here we can see that in IPv4, any destination address starting with 127 is
looped back. For IPv6, however, only the single address ::1 is defined for loopback
operation. We can also see how the loopback adapter
returned data immediately. One subtlety to which we will return in Chapter 9 is
whether multicast or broadcast datagrams should be copied back to the sending
computer (over the loopback interface). This choice can be made by each individual application.
MTU and Path MTU
As we can see from Figure 3-3, there is a limit on the size of the frame available for
carrying the PDUs of higher-layer protocols in many link-layer networks such as
Ethernet. This usually limits the number of payload bytes to about 1500 for Ethernet and often the same amount for PPP in order to maintain compatibility with
Ethernet. This characteristic of the link layer is called the maximum transmission
unit (MTU). Most packet networks (like Ethernet) have a fixed upper limit. Most
stream-type networks (serial links) have a configurable limit that is then used by
framing protocols such as PPP. If IP has a datagram to send, and the datagram is
larger than the link layer’s MTU, IP performs fragmentation, breaking the datagram up into smaller pieces (fragments), so that each fragment is smaller than the
MTU. We discuss IP fragmentation in Chapters 5 and 10.
When two hosts on the same network are communicating with each other, it is
the MTU of the local link interconnecting them that has a direct effect on the size
of datagrams that are used during the conversation. When two hosts communicate across multiple networks, each link can have a different MTU. The minimum
MTU across the network path comprising all of the links is called the path MTU.
The path MTU between any two hosts need not be constant over time. It
depends on the path being used at any time, which can change if the routers or
links in the network fail. Also, paths are often not symmetric (i.e., the path from
host A to B may not be the reverse of the path from B to A); hence the path MTU
need not be the same in the two directions.
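Because the path MTU is simply the minimum of the per-link MTUs along the path currently in use, the direction-dependent computation is trivial to express. The link values below are illustrative, not drawn from any real route:

```python
def path_mtu(link_mtus):
    """The path MTU is the minimum MTU over all links in the path."""
    return min(link_mtus)

# Hypothetical forward and reverse paths between hosts A and B;
# asymmetric routing means the two path MTUs can differ.
forward = [1500, 4352, 1480, 1500]   # A -> B (e.g., one tunneled link of 1480)
reverse = [1500, 9000, 1500]         # B -> A
print(path_mtu(forward), path_mtu(reverse))   # 1480 1500
```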
[RFC1191] specifies the path MTU discovery (PMTUD) mechanism for IPv4,
and [RFC1981] describes it for IPv6. A complementary approach that avoids some
of the issues with these mechanisms is described in [RFC4821]. PMTU discovery is
used to determine the path MTU at a point in time and is required of IPv6 implementations. In later chapters we shall see how this mechanism operates after we
have described ICMP and IP fragmentation. We shall also see what effect it can
have on transport performance when we discuss TCP and UDP.
3.9 Tunneling Basics
In some cases it is useful to establish a virtual link between one computer and
another across the Internet or other network. VPNs, for example, offer this type of
service. The method most commonly used to implement these types of services is
called tunneling. Tunneling, generally speaking, is the idea of carrying lower-layer
traffic in higher-layer (or equal-layer) packets. For example, IPv4 can be carried in
an IPv4 or IPv6 packet; Ethernet can be carried in a UDP or IPv4 or IPv6 packet,
and so on. Tunneling turns the idea of strict layering of protocols on its head and
allows for the formation of overlay networks (i.e., networks where the “links” are
really virtual links implemented in some other protocol instead of physical connections). It is a very powerful and useful technique. Here we discuss the basics
of some of the tunneling options.
There is a great variety of methods for tunneling packets of one protocol
and/or layer over another. Three of the more common protocols used to establish
tunnels include Generic Routing Encapsulation (GRE) [RFC2784], the Microsoft proprietary Point-to-Point Tunneling Protocol (PPTP) [RFC2637], and the Layer 2 Tunneling Protocol (L2TP) [RFC3931]. Others include the earlier nonstandard IP-in-IP
tunneling protocol [RFC1853]. GRE and L2TP were developed to standardize and
replace IP-in-IP and PPTP, respectively, but all of these approaches are still in use.
We shall focus on GRE and PPTP, with more emphasis on PPTP, as it is more visible
to individual users even though it is not an IETF standard. L2TP is often used with
security at the IP layer (IPsec; see Chapter 18) because L2TP by itself does not provide security. Because GRE and PPTP are closely related, we now look at the GRE
header in Figure 3-26, in both its original standard and revised standard forms.
Figure 3-26 The basic GRE header is only 4 bytes but includes the option of a 16-bit checksum (of a type common to many Internet protocols). The header was later extended to include an identifier (Key field)
common to multiple packets in a flow, and a Sequence Number, to help in resequencing packets that
get out of order.
As can be seen from the headers in Figure 3-26, the baseline GRE specification
[RFC2784] is rather simple and provides only a minimal encapsulation for other
packets. The first bit field (C) indicates whether a checksum is present. If it is,
the Checksum field contains the same type of checksum found in many Internet-related protocols (see Section 5.2.2). If the Checksum field is present, the Reserved1
field is also present and is set to 0. [RFC2890] extends the basic format to include
optional Key and Sequence Number fields, present if the K and S bit fields from
Figure 3-26 are set to 1, respectively. If present, the Key field is arranged to be a
common value in multiple packets, indicating that they belong to the same flow
of packets. The Sequence Number field is used in order to reorder packets if they
should become out of sequence (e.g., by going through different links).
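Building such a header is straightforward with the [RFC2890] layout: a 16-bit flags/version word (with C, K, and S in the three relevant high-order bit positions), the 16-bit Protocol Type, then the optional fields in order. A sketch, with checksum handling omitted for brevity and example Key/sequence values chosen arbitrarily:

```python
import struct

GRE_C, GRE_K, GRE_S = 0x8000, 0x2000, 0x1000   # flag bits in the first word

def build_gre(proto, key=None, seq=None):
    """Build an RFC 2890-style GRE header (checksum omitted):
    flags/version word, Protocol Type, then optional Key and
    Sequence Number fields when requested."""
    flags = (GRE_K if key is not None else 0) | (GRE_S if seq is not None else 0)
    hdr = struct.pack("!HH", flags, proto)
    if key is not None:
        hdr += struct.pack("!I", key)       # flow identifier
    if seq is not None:
        hdr += struct.pack("!I", seq)       # for resequencing
    return hdr

hdr = build_gre(0x0800, key=42, seq=7)      # 0x0800 = IPv4 payload
print(len(hdr), hdr.hex())                  # 12 bytes: base 4 + Key 4 + Seq 4
```

With neither option, the header is just the 4-byte baseline of [RFC2784].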
Although GRE forms the basis for and is used by PPTP, the two protocols serve
somewhat different purposes. GRE tunnels are typically used within the network
infrastructure to carry traffic between ISPs or within an enterprise intranet to
serve branch offices and are not necessarily encrypted, although GRE tunnels can
be combined with IPsec. PPTP, conversely, is most often used between users and
their ISPs or corporate intranets and is encrypted (e.g., using MPPE). PPTP essentially combines GRE with PPP, so GRE can provide the virtual point-to-point link
upon which PPP operates. GRE carries its traffic using IPv4 or IPv6 and as such
is a layer 3 tunneling technology. PPTP is more often used to carry layer 2 frames
(such as Ethernet) so as to emulate a direct LAN (link-layer) connection. This can
be used for remote access to corporate networks, for example. PPTP uses a nonstandard variation on the standard GRE header (see Figure 3-27).
Figure 3-27 The PPTP header is based on an older, nonstandard GRE header. It includes a sequence number,
a cumulative packet acknowledgment number, and some identification information. Most of the
fields in the first word are set to 0.
We can see a number of differences in Figure 3-27 from the standard GRE
header, including the extra R, s, and A bit fields, additional Flags field, and Recur
field. Most of these are simply set to 0 and not used (their assignment is based on
an older, nonstandard version of GRE). The K, S, and A bit fields indicate that the
Key, Sequence Number, and Acknowledgment Number fields are present, respectively. If
present, the Acknowledgment Number field holds the largest packet sequence number seen by the peer.
We now turn to the establishment of a PPTP session. We shall conclude later
with a brief discussion of some of PPTP’s other capabilities. The following example
is similar to the PPP link establishment example given earlier, except now instead
of using a dial-up link, PPTP is providing the “raw” link to PPP. Once again, the
client is Windows Vista, and the server is Linux. This output comes from the
/var/log/messages file when the debug option is enabled:
MGR: Manager process started
MGR: Maximum of 100 connections available
MGR: Launching /usr/sbin/pptpctrl to handle client
CTRL: local address =
CTRL: remote address =
CTRL: pppd options file = /etc/ppp/options.pptpd
CTRL: Client control connection started
CTRL: Received PPTP Control Message (type: 1)
CTRL: I wrote 156 bytes to the client.
CTRL: Sent packet to client
CTRL: Received PPTP Control Message (type: 7)
pptpd: CTRL: Set parameters to 100000000 maxbps, 64 window size
pptpd: CTRL: Made a OUT CALL RPLY packet
pptpd: CTRL: Starting call (launching pppd, opening GRE)
pptpd: CTRL: pty_fd = 6
pptpd: CTRL: tty_fd = 7
pptpd: CTRL (PPPD Launcher): program binary = /usr/sbin/pppd
pptpd: CTRL (PPPD Launcher): local address =
pptpd: CTRL (PPPD Launcher): remote address =
pppd: pppd 2.4.4 started by root, uid 0
pppd: using channel 60
pptpd: CTRL: I wrote 32 bytes to the client.
pptpd: CTRL: Sent packet to client
pppd: Using interface ppp0
pppd: Connect: ppp0 <--> /dev/pts/1
pppd: sent [LCP ConfReq id=0x1 <asyncmap 0x0> <auth chap MS-v2>
<magic 0x4e2ca200> <pcomp> <accomp>]
pptpd: CTRL: Received PPTP Control Message (type: 15)
pptpd: CTRL: Got a SET LINK INFO packet with standard ACCMs
pptpd: GRE: accepting packet #0
pppd: rcvd [LCP ConfReq id=0x0 <mru 1400> <magic 0x5e565505>
<pcomp> <accomp>]
pppd: sent [LCP ConfAck id=0x0 <mru 1400> <magic 0x5e565505>
<pcomp> <accomp>]
pppd: sent [LCP ConfReq id=0x1 <asyncmap 0x0> <auth chap MS-v2>
<magic 0x4e2ca200> <pcomp> <accomp>]
pptpd: GRE: accepting packet #1
pppd: rcvd [LCP ConfAck id=0x1 <asyncmap 0x0> <auth chap MS-v2>
<magic 0x4e2ca200> <pcomp> <accomp>]
pppd: sent [CHAP Challenge id=0x3
<eb88bfff67d1c239ef73e98ca32646a5>, name = "dialer"]
pptpd: CTRL: Received PPTP Control Message (type: 15)
pptpd: CTRL: Ignored a SET LINK INFO packet with real ACCMs!
pptpd: GRE: accepting packet #2
pppd: rcvd [CHAP Response id=0x3<276f3678f0f03fa57f64b3c367529565000000
3179160900>, name = "dialer"]
pppd: sent [CHAP Success id=0x3
pppd: sent [CCP ConfReq id=0x1 <mppe +H -M +S +L -D -C>]
pptpd: GRE: accepting packet #3
pppd: rcvd [IPV6CP ConfReq id=0x1 <addr fe80::1cfc:fddd:8e2c:e118>]
pppd: sent [IPV6CP TermAck id=0x1]
pptpd: GRE: accepting packet #4
pppd: rcvd [CCP ConfReq id=0x2 <mppe +H -M -S -L -D -C>]
pppd: sent [CCP ConfNak id=0x2 <mppe +H -M +S +L -D -C>]
pptpd: GRE: accepting packet #5
pptpd: GRE: accepting packet #6
pppd: rcvd [IPCP ConfReq id=0x3 <addr> <ms-dns1>
<ms-wins> <ms-dns3> <ms-wins>]
pptpd: GRE: accepting packet #7
pppd: sent [IPCP TermAck id=0x3]
pppd: rcvd [CCP ConfNak id=0x1 <mppe +H -M +S -L -D -C>]
pppd: sent [CCP ConfReq id=0x2 <mppe +H -M +S -L -D -C>]
pppd: rcvd [CCP ConfReq id=0x4 <mppe +H -M +S -L -D -C>]
pppd: sent [CCP ConfAck id=0x4 <mppe +H -M +S -L -D -C>]
pptpd: GRE: accepting packet #8
pppd: rcvd [CCP ConfAck id=0x2 <mppe +H -M +S -L -D -C>]
pppd: MPPE 128-bit stateless compression enabled
pppd: sent [IPCP ConfReq id=0x1 <addr>]
pppd: sent [IPV6CP ConfReq id=0x1 <addr fe80::0206:5bff:fedd:c5c3>]
pptpd: GRE: accepting packet #9
pppd: rcvd [IPCP ConfAck id=0x1 <addr>]
pptpd: GRE: accepting packet #10
pppd: rcvd [IPV6CP ConfAck id=0x1 <addr fe80::0206:5bff:fedd:c5c3>]
pptpd: GRE: accepting packet #11
pppd: rcvd [IPCP ConfReq id=0x5 <addr>
<ms-dns1> <ms-wins>
<ms-dns3> <ms-wins>]
pppd: sent [IPCP ConfRej id=0x5 <ms-wins> <ms-wins>]
pptpd: GRE: accepting packet #12
pppd: rcvd [IPV6CP ConfReq id=0x6 <addr fe80::1cfc:fddd:8e2c:e118>]
pppd: sent [IPV6CP ConfAck id=0x6 <addr fe80::1cfc:fddd:8e2c:e118>]
pppd: local LL address fe80::0206:5bff:fedd:c5c3
pppd: remote LL address fe80::1cfc:fddd:8e2c:e118
pptpd: GRE: accepting packet #13
pppd: rcvd [IPCP ConfReq id=0x7 <addr>
<ms-dns1> <ms-dns3>]
pppd: sent [IPCP ConfNak id=0x7 <addr>
<ms-dns1> <ms-dns3>]
pptpd: GRE: accepting packet #14
pppd: rcvd [IPCP ConfReq id=0x8 <addr>
<ms-dns1> <ms-dns3>]
pppd: sent [IPCP ConfAck id=0x8 <addr>
<ms-dns1> <ms-dns3>]
pppd: local IP address
pppd: remote IP address
pptpd: GRE: accepting packet #15
pptpd: CTRL: Sending ECHO REQ id 1
pptpd: CTRL: Made a ECHO REQ packet
pptpd: CTRL: I wrote 16 bytes to the client.
pptpd: CTRL: Sent packet to client
This output looks similar to the PPP example we examined earlier, except this
one has output from both the pppd process as well as a pptpd process. These
processes work together to establish PPTP sessions at the server. The setup begins
with pptpd receiving a type 1 control message, indicating that the client wishes
to establish a control connection. PPTP uses a separate control and data stream,
so first the control stream is set up. After responding to this request, the server
receives a type 7 control message indicating an outgoing call request from the peer.
The maximum speed (in bits per second) is set to a large value of 100,000,000, which
effectively means it is unbounded. The window is set to 64, a concept we typically
encounter in transport protocols such as TCP (see Chapter 15). Here the window
is used for flow control. That is, PPTP uses its sequence numbers and acknowledgment numbers to determine how many frames reach the destination successfully. If
too few frames are successfully delivered, the sender slows down. To determine the
amount of time to wait for an acknowledgment for frames it sends, PPTP uses an
adaptive timeout mechanism based on estimating the round-trip time of the link.
We shall see this type of calculation again when we study TCP.
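An adaptive timeout of this kind can be sketched as an exponentially weighted moving average of RTT samples plus a deviation term. The smoothing constants and sample values below follow the familiar scheme later standardized for TCP and are purely illustrative; they are not PPTP's actual parameters:

```python
class RttEstimator:
    """EWMA-based adaptive timeout: srtt tracks the mean RTT,
    rttvar the mean deviation; timeout = srtt + 4 * rttvar."""
    def __init__(self):
        self.srtt = self.rttvar = None

    def sample(self, rtt):
        if self.srtt is None:                 # first measurement
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt
        return self.srtt + 4 * self.rttvar    # current timeout value

est = RttEstimator()
for r in (100, 120, 110, 300):                # RTT samples in milliseconds
    rto = est.sample(r)
print(round(rto, 1))                          # timeout grows after the 300ms spike
```

The deviation term makes the timeout back off quickly when the link's delay becomes erratic, which is exactly the behavior wanted before retransmitting over a tunnel.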
Soon after the window is set, the pppd application begins to run and process
the PPP data as we saw before in the dial-up example. The only real difference
between the two is that pptpd relays packets to the pppd process as they arrive
and depart, and a few special PPTP messages (such as set link info and echo
request) are processed by pptpd itself. This example illustrates how the PPTP
protocol really acts as a GRE tunneling agent for PPP packets. This is convenient
because an existing PPP implementation (here, pppd) can be used as is to process
the encapsulated PPP packets. Note that while GRE is itself ordinarily encapsulated in IPv4 packets, similar functionality is available using IPv6 to tunnel packets [RFC2473].
Unidirectional Links
An interesting issue arises when the link to be used operates in only one direction. Such links are called unidirectional links (UDLs), and many of the protocols
described so far do not operate properly in such circumstances because they
require exchanges of information (e.g., PPP’s configuration messages). To deal
with this situation, a standard has been created whereby tunneling over a second Internet interface can be combined with operation of the UDL [RFC3077]. The
typical situation where this arises is an Internet connection that uses a satellite for
downstream traffic (headed to the user) and a dial-up modem link for upstream
traffic. This setup can be useful in cases where the satellite-connected user’s usage
is dominated by downloading as opposed to uploading and was commonly used
in early satellite Internet installations. It operates by encapsulating link-layer
upstream traffic in IP packets using a GRE encapsulation.
To establish and maintain tunnels automatically at the receiver, [RFC3077]
specifies a Dynamic Tunnel Configuration Protocol (DTCP). DTCP involves sending multicast Hello messages on the downlink so that any interested receiver can
learn about the existence of the UDL and its MAC and IP addresses. In addition,
Hello messages indicate a list of tunnel endpoints within the network that can be
reached by the user’s secondary interface. After the user selects which tunnel endpoint to use, DTCP arranges for return traffic to be encapsulated with the same
MAC type as the UDL in GRE tunnels. The service provider arranges to receive
these GRE-encapsulated layer 2 frames (frequently Ethernet), extract them from
the tunnel, and forward them appropriately. Thus, although the upstream side of
the UDLs (provider’s side) requires manual tunnel configuration, the downstream
side, which includes many more users, has automatically configured tunnels.
Note that this approach to handling UDLs essentially “hides” the link asymmetry from the upper-layer protocols. As a consequence, the performance (latency,
bandwidth) of the “two” directions of the link may be highly asymmetric and may
adversely affect higher-layer protocols [RFC3449].
As the satellite example helps to illustrate, one significant issue with tunnels
is the amount of effort required to configure them, which has traditionally been
done by hand. Typically, tunnel configuration involves selecting the endpoints of
a tunnel and configuring the devices located at the tunnel endpoints with an IP
address of the peer, and perhaps also providing protocol selection and authentication information. A number of techniques have arisen to help in configuring or
using tunnels automatically. One such approach specified for transitioning from
IPv4 to IPv6 is called 6to4 [RFC3056]. In 6to4, IPv6 packets are tunneled over an
IPv4 network using the encapsulation specified in [RFC3056]. A problem with this
approach occurs when corresponding hosts are located behind network address
translators (see Chapter 7). This is common today, especially for home users. Dealing with the IPv6 transition using automatically configured tunnels is specified in
an approach called Teredo [RFC4380]. Teredo tunnels IPv6 packets over UDP/IPv4
packets. Because this approach requires some background in IPv4 and IPv6, as
well as UDP, we postpone any detailed discussion of such tunnel autoconfiguration options to Chapter 10.
3.10 Attacks on the Link Layer
Attacking layers below TCP/IP in order to affect the operations of TCP/IP networks has been a popular approach because much of the link-layer information is
not shared by the higher layers and can therefore be somewhat difficult to detect
and mitigate. Nevertheless, many such attacks are now well understood, and we
mention a few of them here to better understand how problems at the link layer
can affect higher-layer operations.
In conventional wired Ethernet, interfaces can be placed in promiscuous mode,
which allows them to receive traffic even if it is not destined for them. In the early
days of Ethernet, when the medium was literally a shared cable, this capability
allowed anyone with a computer attached to the Ethernet cable to “sniff” anybody
else’s frames and inspect their contents. As many higher-layer protocols at the time
included sensitive information such as passwords, it was nearly trivial to intercept
a person’s password by merely looking at the ASCII decode of a packet trace. Two
factors have affected this approach substantially: the deployment of switches and
the deployment of encryption in higher-layer protocols. With switches, the only
traffic that is provided on a switch port to which an end station is attached is traffic destined for the station itself (or others for which it is bridging) and broadcast/
multicast traffic. As this type of traffic rarely contains information such as passwords, the attack is largely thwarted. Much more effective, however, is simply the
use of encryption at higher layers, which is now common. In this case, sniffing
packets leads to little benefit as the contents are essentially impossible to read.
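What a sniffer actually sees can be illustrated by decoding the 14-byte Ethernet header of a captured frame. The sketch below uses an invented frame (the source MAC is borrowed from an example later in this book; the payload is a placeholder):

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Return (dst_mac, src_mac, ethertype) from the first 14 bytes of a frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), ethertype

# A hypothetical captured frame: broadcast destination, IPv4 EtherType (0x0800),
# followed by a (truncated) payload.
frame = bytes.fromhex("ffffffffffff" "000a9587386a" "0800") + b"...payload..."
dst, src, etype = parse_ethernet_header(frame)
print(dst, src, hex(etype))   # ff:ff:ff:ff:ff:ff 00:0a:95:87:38:6a 0x800
```

Once the header is decoded, everything after it is the higher-layer payload; when that payload is unencrypted, fields such as passwords are directly readable, which is exactly why higher-layer encryption thwarts this attack.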
Another type of attack targets the operation of switches. Recall that switches
hold tables of stations on a per-port basis. If these tables are able to be filled quickly
(e.g., by quickly masquerading as a large number of stations), it is conceivable that
the switch might be forced into discarding legitimate entries, leading to service
interruption for legitimate stations. A related but probably worse attack can be
mounted using the STP. In this case, an attacking station can masquerade as a
switch with a low-cost path to the root bridge and cause traffic to be directed
toward it.
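The table-overflow effect can be shown with a toy model. Real switches use hardware CAM tables with aging timers rather than the FIFO eviction sketched here, so this only illustrates why a flood of forged source addresses can push out legitimate entries:

```python
from collections import OrderedDict

class ForwardingTable:
    """Toy switch MAC table with fixed capacity and oldest-first eviction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # MAC address -> port number

    def learn(self, mac, port):
        if mac not in self.entries and len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry
        self.entries[mac] = port

table = ForwardingTable(capacity=4)
table.learn("00:0a:95:87:38:6a", 1)           # legitimate station on port 1
for i in range(100):                          # attacker floods forged source MACs
    table.learn(f"de:ad:be:ef:00:{i:02x}", 7)
print("00:0a:95:87:38:6a" in table.entries)   # False: legitimate entry evicted
```

After the flood, frames for the legitimate station no longer match a table entry, so the switch floods them out of all ports (enabling sniffing) or service degrades while the entry is relearned.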
With Wi-Fi networks, some of the eavesdropping and masquerading issues
present in wired Ethernet networks are exacerbated, as any station can enter a
monitoring mode and sniff packets from the air (although placing an 802.11 interface into monitoring mode tends to be more challenging than placing an Ethernet
interface into promiscuous mode, as doing so depends on an appropriate device
driver). Some of the earliest “attacks” (which may not really have been attacks,
depending on the relevant legal framework) involved simply roaming about while
scanning, looking for access points providing Internet connectivity (i.e., war driving). Although many access points use encryption to limit access to authorized
users, others are either open or use so-called capturing portals that direct a would-be user to a registration Web page and then filter access based on MAC address.
Capturing portal systems have been subverted by observing a station as it registers and “hijacking” the connection as it is formed by impersonating the legitimate registering user.
A more sophisticated set of attacks on Wi-Fi involves attacking the cryptographic protection, especially the WEP encryption used on many early access
points. Attacks on WEP [BHL06] were sufficiently devastating so as to prod the
IEEE into revising the standard. The more recent WPA2 encryption framework
(and WPA, to a lesser extent) is known to be significantly stronger, and WEP is no
longer recommended for use.
PPP links can be attacked in a number of ways if the attacker can gain access
to the channel between the two peers. For very simple authentication mechanisms
(e.g., PAP), sniffing can be used to capture the password in order to facilitate illegitimate subsequent use. Depending on the type of higher-layer traffic being carried over the PPP link (e.g., routing traffic), additional unwanted behaviors can be induced.
In terms of attacks, tunneling can play the role of both target and tool. In
terms of a target, tunnels pass through a network (often the Internet) and thus are
subject to being intercepted and analyzed. The configured tunnel endpoints can
also be attacked, either by attempting to establish more tunnels than the endpoint
can support (a DoS attack) or by attacking the configuration itself. If the configuration is compromised, it may be possible to open an unauthorized tunnel to an endpoint. At this point the tunnel becomes a tool rather than a target, and protocols
such as L2TP can provide a convenient protocol-independent method of gaining
access to private internal networks at the link layer. In one GRE-related attack, for
example, traffic is simply inserted in a nonencrypted tunnel, where it appears at
the tunnel endpoint and is injected to the attached “private” network as though it
were sent locally.
3.11 Summary
In this chapter we examined the lowest layer in the Internet protocol suite with
which we are concerned—the link layer. We looked at the evolution of Ethernet,
in terms of both its increases in speed from 10Mb/s to 10Gb/s and beyond, as well
as its evolution of capabilities, including VLANs, priorities, link aggregation, and
frame formats. We saw how switches provide improved performance over bridges
by implementing a direct electrical path between multiple independent sets of stations, and how full-duplex operation has largely replaced the earlier half-duplex
operation. We also looked at the IEEE 802.11 wireless LAN “Wi-Fi” standard in
some detail, noting its similarities and differences with respect to Ethernet. It has
become one of the most popular IEEE standards and provides license-free network access across the two primary bands of 2.4GHz and 5GHz. We also looked
at the evolution of the security methods for Wi-Fi, with the evolution from the
relatively weak WEP to the more formidable WPA and WPA2 frameworks. Moving beyond IEEE standards, we discussed point-to-point links and the PPP protocol. PPP can encapsulate essentially any kind of packets used for TCP/IP and
non-TCP/IP networks using an HDLC-like frame format, and it is used on links
ranging from low-speed dial-up modems to high-speed fiber-optic lines. It is a
whole suite of protocols itself, including methods for compression, encryption,
authentication, and link aggregation. Because it supports only two parties, it does
not have to deal with controlling access to a shared medium like the MAC protocols of Ethernet or Wi-Fi.
The loopback interface is provided by most implementations. Access to this
interface is either through the special loopback address, normally for
IPv4 (::1 for IPv6), or by sending IP datagrams to one of a host’s own IP addresses. Loopback
data has been completely processed by the transport layer and by IP when it loops
around to go up the protocol stack. We described an important feature of many
link layers, the MTU, and the related concept of a path MTU.
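The loopback behavior recapped above can be demonstrated in a few lines: data sent to traverses the full TCP and IP processing on the local host but never reaches any network interface hardware. This is only a sketch; the port is chosen by the operating system:

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 0))     # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))     # echo the request back to the sender
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

client = socket.create_connection(("", port))
client.sendall(b"ping")
reply = client.recv(1024)
t.join()
client.close()
server.close()
print(reply)   # b'ping'
```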
We also discussed the use of tunneling, which involves carrying lower-layer
protocols in higher-layer (or equal-layer) packets. This technique allows for the
formation of overlay networks, using tunnels over the Internet as links in another
level of network infrastructure. This technique has become very popular, both for
experimentation with new capabilities (e.g., running an IPv6 network overlay on
an IPv4 internet) and for operational use (e.g., with VPNs).
We concluded the chapter with a brief discussion of the types of attacks
involving the link layer—as either target or tool. Many attacks simply involve
intercepting traffic for analysis (e.g., looking for passwords), but more sophisticated attacks involve masquerading as endpoints or modifying traffic in transit.
Other attacks involve compromising control information such as tunnel endpoints
or the STP to direct traffic to otherwise unintended locations. Access to the link
layer also provides an attacker with a general way to perform DoS attacks. Perhaps
the best-known variant of this is jamming communication signals, an endeavor
undertaken by certain parties since nearly the advent of radio.
This chapter has covered only some of the common link technologies used
with TCP/IP today. One reason for the success of TCP/IP is its ability to work on
top of almost any link technology. In essence, IP requires only that there exists
some path between sender and receiver(s) across a cascade of intermediate links.
Although this is a relatively modest requirement, some research is aimed at
stretching this even farther—to cases where there may never be an end-to-end
path between sender and receiver(s) at any single point in time [RFC4838].
3.12 References
[802.11-2007] “IEEE Standard for Local and Metropolitan Area Networks, Part 11:
Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications,” June 2007.
[802.11n-2009] “IEEE Standard for Local and Metropolitan Area Networks, Part
11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY)
Specifications Amendment 5: Enhancements for Higher Throughput,” Oct. 2009.
[802.11y-2008] “IEEE Standard for Local and Metropolitan Area Networks, Part
11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY)
Specifications Amendment 3: 3650-3700 MHz Operation in USA,” Nov. 2008.
[802.16-2009] “IEEE Standard for Local and Metropolitan Area Networks, Part 16:
Air Interface for Fixed Broadband Wireless Access Systems,” May 2009.
Link Layer
[802.16h-2010] “IEEE Standard for Local and Metropolitan Area Networks, Part
16: Air Interface for Fixed Broadband Wireless Access Systems Amendment 2:
Improved Coexistence Mechanisms for License-Exempt Operation,” July 2010.
[802.16j-2009] “IEEE Standard for Local and Metropolitan Area Networks, Part
16: Air Interface for Fixed Broadband Wireless Access Systems Amendment 1:
Multihop Relay Specification,” June 2009.
[802.16k-2007] “IEEE Standard for Local and Metropolitan Area Networks, Part
16: Air Interface for Fixed Broadband Wireless Access Systems Amendment 5:
Bridging of IEEE 802.16,” Aug. 2007.
[802.1AE-2006] “IEEE Standard for Local and Metropolitan Area Networks
Media Access Control (MAC) Security,” Aug. 2006.
[802.1ak-2007] “IEEE Standard for Local and Metropolitan Area Networks—
Virtual Bridged Local Area Networks—Amendment 7: Multiple Registration
Protocol,” June 2007.
[802.1AX-2008] “IEEE Standard for Local and Metropolitan Area Networks—
Link Aggregation,” Nov. 2008.
[802.1D-2004] “IEEE Standard for Local and Metropolitan Area Networks Media
Access Control (MAC) Bridges,” June 2004.
[802.1Q-2005] “IEEE Standard for Local and Metropolitan Area Networks Virtual
Bridged Local Area Networks,” May 2006.
[802.1X-2010] “IEEE Standard for Local and Metropolitan Area Networks Port-Based Network Access Control,” Feb. 2010.
[802.2-1998] “IEEE Standard for Local and Metropolitan Area Networks Logical
Link Control” (also ISO/IEC 8802-2:1998), May 1998.
[802.21-2008] “IEEE Standard for Local and Metropolitan Area Networks, Part 21:
Media Independent Handover Services,” Jan. 2009.
[802.3-2008] “IEEE Standard for Local and Metropolitan Area Networks, Part
3: Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access
Method and Physical Layer Specifications,” Dec. 2008.
[802.3at-2009] “IEEE Standard for Local and Metropolitan Area Networks—Specific Requirements, Part 3: Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications Amendment
3: Data Terminal Equipment (DTE) Power via the Media Dependent Interface
(MDI) Enhancements,” Oct. 2009.
[802.3ba-2010] “IEEE Standard for Local and Metropolitan Area Networks, Part
3: Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access
Method and Physical Layer Specifications, Amendment 4: Media Access Control Parameters, Physical Layers, and Management Parameters for 40Gb/s and
100Gb/s Operation,” June 2010.
[AES01] U.S. National Institute of Standards and Technology, FIPS PUB 197,
“Advanced Encryption Standard,” Nov. 2001.
[BHL06] A. Bittau, M. Handley, and J. Lackey, “The Final Nail in WEP’s Coffin,”
Proc. IEEE Symposium on Security and Privacy, May 2006.
[ETX] D. De Couto, D. Aguayo, J. Bicket, and R. Morris, “A High-Throughput Path
Metric for Multi-Hop Wireless Routing,” Proc. Mobicom, Sep. 2003.
[G704] ITU, “General Aspects of Digital Transmission Systems: Synchronous
Frame Structures Used at 1544, 6312, 2048, 8448, and 44736 kbit/s Hierarchical
Levels,” ITU-T Recommendation G.704, July 1995.
[IANA-CHARSET] “Character Sets,”
[ISO3309] International Organization for Standardization, “Information Processing Systems—Data Communication High-Level Data Link Control Procedure—
Frame Structure,” IS 3309, 1984.
[ISO4335] International Organization for Standardization, “Information Processing Systems—Data Communication High-Level Data Link Control Procedure—
Elements of Procedure,” IS 4335, 1987.
[JF] M. Mathis, “Raising the Internet MTU,”
[MWLD] “Long Distance Links with MadWiFi,”
[RFC0894] C. Hornig, “A Standard for the Transmission of IP Datagrams over
Ethernet Networks,” Internet RFC 0894/STD 0041, Apr. 1984.
[RFC1042] J. Postel and J. Reynolds, “Standard for the Transmission of IP Datagrams over IEEE 802 Networks,” Internet RFC 1042/STD 0043, Feb. 1988.
[RFC1144] V. Jacobson, “Compressing TCP/IP Headers for Low-Speed Serial
Links,” Internet RFC 1144, Feb. 1990.
[RFC1191] J. Mogul and S. Deering, “Path MTU Discovery,” Internet RFC 1191,
Nov. 1990.
[RFC1332] G. McGregor, “The PPP Internet Protocol Control Protocol,” Internet
RFC 1332, May 1992.
[RFC1570] W. Simpson, ed., “PPP LCP Extensions,” Internet RFC 1570, Jan. 1994.
[RFC1661] W. Simpson, “The Point-to-Point Protocol (PPP),” Internet RFC 1661/
STD 0051, July 1994.
[RFC1662] W. Simpson, ed., “PPP in HDLC-like Framing,” Internet RFC 1662/
STD 0051, July 1994.
[RFC1663] D. Rand, “PPP Reliable Transmission,” Internet RFC 1663, July 1994.
[RFC1853] W. Simpson, “IP in IP Tunneling,” Internet RFC 1853 (informational),
Oct. 1995.
[RFC1962] D. Rand, “The PPP Compression Protocol (CCP),” Internet RFC 1962,
June 1996.
[RFC1977] V. Schryver, “PPP BSD Compression Protocol,” Internet RFC 1977
(informational), Aug. 1996.
[RFC1981] J. McCann and S. Deering, “Path MTU Discovery for IP Version 6,”
Internet RFC 1981, Aug. 1996.
[RFC1989] W. Simpson, “PPP Link Quality Monitoring,” Internet RFC 1989, Aug. 1996.
[RFC1990] K. Sklower, B. Lloyd, G. McGregor, D. Carr, and T. Coradetti, “The PPP
Multilink Protocol (MP),” Internet RFC 1990, Aug. 1996.
[RFC1994] W. Simpson, “PPP Challenge Handshake Authentication Protocol
(CHAP),” Internet RFC 1994, Aug. 1996.
[RFC2118] G. Pall, “Microsoft Point-To-Point Compression (MPPC) Protocol,” Internet RFC 2118
(informational), Mar. 1997.
[RFC2125] C. Richards and K. Smith, “The PPP Bandwidth Allocation Protocol
(BAP)/The PPP Bandwidth Allocation Control Protocol (BACP),” Internet RFC
2125, Mar. 1997.
[RFC2153] W. Simpson, “PPP Vendor Extensions,” Internet RFC 2153 (informational), May 1997.
[RFC2290] J. Solomon and S. Glass, “Mobile-IPv4 Configuration Option for PPP
IPCP,” Internet RFC 2290, Feb. 1998.
[RFC2464] M. Crawford, “Transmission of IPv6 Packets over Ethernet Networks,”
Internet RFC 2464, Dec. 1998.
[RFC2473] A. Conta and S. Deering, “Generic Packet Tunneling in IPv6 Specification,” Internet RFC 2473, Dec. 1998.
[RFC2484] G. Zorn, “PPP LCP Internationalization Configuration Option,”
Internet RFC 2484, Jan. 1999.
[RFC2507] M. Degermark, B. Nordgren, and S. Pink, “IP Header Compression,”
Internet RFC 2507, Feb. 1999.
[RFC2615] A. Malis and W. Simpson, “PPP over SONET/SDH,” Internet RFC
2615, June 1999.
[RFC2637] K. Hamzeh, G. Pall, W. Verthein, J. Taarud, W. Little, and G. Zorn,
“Point-to-Point Tunneling Protocol (PPTP),” Internet RFC 2637 (informational),
July 1999.
[RFC2759] G. Zorn, “Microsoft PPP CHAP Extensions, Version 2,” Internet RFC
2759 (informational), Jan. 2000.
[RFC2784] D. Farinacci, T. Li, S. Hanks, D. Meyer, and P. Traina, “Generic Routing
Encapsulation (GRE),” Internet RFC 2784, Mar. 2000.
[RFC2865] C. Rigney, S. Willens, A. Rubens, and W. Simpson, “Remote Authentication Dial In User Service (RADIUS),” Internet RFC 2865, June 2000.
[RFC2890] G. Dommety, “Key and Sequence Number Extensions to GRE,” Internet RFC 2890, Sept. 2000.
[RFC3056] B. Carpenter and K. Moore, “Connection of IPv6 Domains via IPv4
Clouds,” Internet RFC 3056, Feb. 2001.
[RFC3077] E. Duros, W. Dabbous, H. Izumiyama, N. Fujii, and Y. Zhang, “A Link-Layer Tunneling Mechanism for Unidirectional Links,” Internet RFC 3077, Mar. 2001.
[RFC3078] G. Pall and G. Zorn, “Microsoft Point-to-Point Encryption (MPPE)
Protocol,” Internet RFC 3078 (informational), Mar. 2001.
[RFC3153] R. Pazhyannur, I. Ali, and C. Fox, “PPP Multiplexing,” Internet RFC
3153, Aug. 2001.
[RFC3366] G. Fairhurst and L. Wood, “Advice to Link Designers on Link Automatic Repeat reQuest (ARQ),” Internet RFC 3366/BCP 0062, Aug. 2002.
[RFC3449] H. Balakrishnan, V. Padmanabhan, G. Fairhurst, and M. Sooriyabandara, “TCP Performance Implications of Network Path Asymmetry,” Internet
RFC 3449/BCP 0069, Dec. 2002.
[RFC3544] T. Koren, S. Casner, and C. Bormann, “IP Header Compression over
PPP,” Internet RFC 3544, July 2003.
[RFC3561] C. Perkins, E. Belding-Royer, and S. Das, “Ad Hoc On-Demand Distance Vector (AODV) Routing,” Internet RFC 3561 (experimental), July 2003.
[RFC3610] D. Whiting, R. Housley, and N. Ferguson, “Counter with CBC-MAC
(CCM),” Internet RFC 3610 (informational), Sept. 2003.
[RFC3626] T. Clausen and P. Jacquet, eds., “Optimized Link State Routing Protocol (OLSR),” Internet RFC 3626 (experimental), Oct. 2003.
[RFC3748] B. Aboba et al., “Extensible Authentication Protocol (EAP),” Internet
RFC 3748, June 2004.
[RFC3931] J. Lau, M. Townsley, and I. Goyret, eds., “Layer Two Tunneling Protocol—Version 3 (L2TPv3),” Internet RFC 3931, Mar. 2005.
[RFC4017] D. Stanley, J. Walker, and B. Aboba, “Extensible Authentication Protocol (EAP) Method Requirements for Wireless LANs,” Internet RFC 4017 (informational), Mar. 2005.
[RFC4380] C. Huitema, “Teredo: Tunneling IPv6 over UDP through Network
Address Translations (NATs),” Internet RFC 4380, Feb. 2006.
[RFC4647] A. Phillips and M. Davis, “Matching of Language Tags,” Internet RFC
4647/BCP 0047, Sept. 2006.
[RFC4821] M. Mathis and J. Heffner, “Packetization Layer Path MTU Discovery,”
Internet RFC 4821, Mar. 2007.
[RFC4838] V. Cerf et al., “Delay-Tolerant Networking Architecture,” Internet RFC
4838 (informational), Apr. 2007.
[RFC4840] B. Aboba, ed., E. Davies, and D. Thaler, “Multiple Encapsulation Methods Considered Harmful,” Internet RFC 4840 (informational), Apr. 2007.
[RFC5072] S. Varada, ed., D. Haskins, and E. Allen, “IP Version 6 over PPP,” Internet RFC 5072, Sept. 2007.
[RFC5225] G. Pelletier and K. Sandlund, “RObust Header Compression Version 2
(ROHCv2): Profiles for RTP, UDP, IP, ESP, and UDP-Lite,” Internet RFC 5225, Apr. 2008.
[RFC5646] A. Phillips and M. Davis, eds., “Tags for Identifying Languages,”
Internet RFC 5646/BCP 0047, Sept. 2009.
[S08] D. Skordoulis et al., “IEEE 802.11n MAC Frame Aggregation Mechanisms
for Next-Generation High-Throughput WLANs,” IEEE Wireless Communications,
Feb. 2008.
[S96] B. Schneier, Applied Cryptography, Second Edition (John Wiley & Sons, 1996).
[SAE] D. Harkins, “Simultaneous Authentication of Equals: A Secure, Password-Based Key Exchange for Mesh Networks,” Proc. SENSORCOMM, Aug. 2008.
[SC05] S. Shalunov and R. Carlson, “Detecting Duplex Mismatch on Ethernet,”
Proc. Passive and Active Measurement Workshop, Mar. 2005.
[SHK07] C. Sengul, A. Harris, and R. Kravets, “Reconsidering Power Management,” Invited Paper, Proc. IEEE Broadnets, 2007.
ARP: Address Resolution Protocol
4.1 Introduction
We have seen that the IP protocol is designed to provide interoperability of packet
switching across a large variety of physical network types. Doing so requires,
among other things, converting between the addresses used by the network-layer
software and those interpreted by the underlying network hardware. Generally,
network interface hardware has one primary hardware address (e.g., a 48-bit value
for an Ethernet or 802.11 wireless interface). Frames exchanged by the hardware
must be addressed to the correct interface using the correct hardware addresses;
otherwise, no data can be transferred. But a conventional IPv4 network works
with its own addresses: 32-bit IPv4 addresses. Knowing a host’s IP address is
insufficient for the system to send a frame to that host efficiently on networks
where hardware addresses are used. The operating system software (i.e., the Ethernet driver) must know the destination’s hardware address to send data directly.
For TCP/IP networks, the Address Resolution Protocol (ARP) [RFC0826] provides a
dynamic mapping between IPv4 addresses and the hardware addresses used by
various network technologies. ARP is used with IPv4 only; IPv6 uses the Neighbor Discovery Protocol, which is incorporated into ICMPv6 (see Chapter 8).
It is important to note here that the network-layer and link-layer addresses
are assigned by different authorities. For network hardware, the primary address
is defined by the manufacturer of the device and is stored in permanent memory within the device, so it does not change. Thus, any protocol suite designed to
operate with that particular hardware technology must make use of its particular
types of addresses. This allows network-layer protocols of different protocol suites
to operate at the same time. On the other hand, the IP address assigned to a network
interface is installed by the user or network administrator and selected by that
person to meet his or her needs. The IP addresses assigned to a portable device
may, for example, be changed when it is moved. IP addresses are typically derived
from a pool of addresses maintained near the network attachment point and are
installed when systems are turned on or configured (see Chapter 6). When an Ethernet frame containing an IP datagram is sent from one host on a LAN to another,
it is the 48-bit Ethernet address that determines to which interface(s) the frame is destined.
Address resolution is the process of discovering the mapping from one address
to another. For the TCP/IP protocol suite using IPv4, this is accomplished by running the ARP. ARP is a generic protocol, in the sense that it is designed to support mapping between a wide variety of address types. In practice, however, it
is almost always used to map between 32-bit IPv4 addresses and Ethernet-style
48-bit MAC addresses. This case, the one specified in [RFC0826], is also the one of
interest to us. For this chapter, we shall use the terms Ethernet address and MAC
address interchangeably.
ARP provides a dynamic mapping from a network-layer address to a corresponding hardware address. We use the term dynamic because it happens automatically and adapts to changes over time without requiring reconfiguration
by a system administrator. That is, if a host were to have its network interface
card changed, thereby changing its hardware address (but retaining its assigned
IP address), ARP would continue to operate properly after some delay. ARP
operation is normally not a concern of either the application user or the system administrator.
A related protocol that provides the reverse mapping from ARP, called RARP, was
used by systems lacking a disk drive (normally diskless workstations or X terminals). It is rarely used today and requires manual configuration by the system
administrator. See [RFC0903] for details.
4.2 An Example
Whenever we use Internet services, such as opening a Web page with a browser,
our local computer must determine how to contact the server in which we are
interested. The most basic decision it makes is whether that service is local (part
of the same IP subnetwork) or remote. If it is remote, a router is required to reach
the destination. ARP operates only when reaching those systems on the same IP
subnet. For this example, then, let us assume that we use a Web browser to contact
the following URL:
Note that this URL contains an IPv4 address rather than the more common
domain or host name. The reason for using the address here is to underscore the
fact that our demonstration of ARP is most relevant to systems sharing the same
IPv4 prefix (see Chapter 2). Here, we use a URL containing an address identifying
a local Web server and explore how direct delivery operates. Such local servers are
becoming more common as embedded devices such as printers and VoIP adapters
include built-in Web servers for configuration.
Direct Delivery and ARP
In this section, we enumerate the steps taken in direct delivery, focusing on the
operation of ARP. Direct delivery takes place when an IP datagram is sent to an
IP address with the same IP prefix as the sender’s. It plays an important role in
the general method of forwarding of IP datagrams (see Chapter 5). The following
list captures the basic operation of direct delivery with IPv4, using the previous example:
1. The application, in this case a Web browser, calls a special function to parse
the URL to see if it contains a host name. Here it does not, so the application
uses the 32-bit IPv4 address
2. The application asks the TCP protocol to establish a connection with
3. TCP attempts to send a connection request segment to the remote host by
sending an IPv4 datagram to (We shall see the details of how this is
done in Chapter 15.)
4. Because we are assuming that the address is using the same network prefix as our sending host, the datagram can be sent directly to that
address without going through a router.
5. Assuming that Ethernet-compatible addressing is being used on the IPv4
subnet, the sending host must convert the 32-bit IPv4 destination address
into a 48-bit Ethernet-style address. Using the terminology from [RFC0826],
a translation is required from the logical Internet address to its corresponding physical hardware address. This is the function of ARP. ARP works in
its normal form only for broadcast networks, where the link layer is able to
deliver a single message to all attached network devices. This is an important requirement imposed by the operation of ARP. On non-broadcast networks (sometimes called NBMA for non-broadcast multiple access), other,
more complex mapping protocols may be required [RFC2332].
6. ARP sends an Ethernet frame called an ARP request to every host on the
shared link-layer segment. This is called a link-layer broadcast. We show the
broadcast domain in Figure 4-1 with a crosshatched box. The ARP request
contains the IPv4 address of the destination host ( and seeks an
answer to the following question: “If you are configured with IPv4 address as one of your own, please respond to me with your MAC address.”
Figure 4-1
Ethernet hosts in the same broadcast domain. ARP queries are sent using link-layer
broadcast frames that are received by all hosts. The single host with the assigned
address responds directly to the requesting host. Non-IP hosts must actively discard
ARP queries.
7. With ARP, all systems in the same broadcast domain receive ARP requests.
This includes systems that may not be running the IPv4 or IPv6 protocols at
all but does not include systems on different VLANs, if they are supported
(see Chapter 3 for details on VLANs). Provided there exists an attached system using the IPv4 address specified in the request, it alone responds with
an ARP reply. This reply contains the IPv4 address (for matching with the
request) and the corresponding MAC address. The reply does not ordinarily use broadcast but is directed only to the sender. The host receiving the
ARP request also learns of the sender’s IPv4-to-MAC address mapping at
this time and records it in memory for later use (see Section 4.3).
8. The ARP reply is then received by the original sender of the request, and
the datagram that forced the ARP request/reply to be exchanged can now
be sent.
9. The sender now sends the datagram directly to the destination host by
encapsulating it in an Ethernet frame and using the Ethernet address
learned by the ARP exchange as the destination Ethernet address. Because
the Ethernet address refers only to the correct destination host, no other
hosts or routers receive the datagram. Thus, when only direct delivery is
used, no router is required.
ARP is used in multi-access link-layer networks running IPv4, where each
host has its own primary hardware address. Point-to-point links such as PPP (see
Chapter 3) do not use ARP. When these links are established (normally by action
of the user or a system boot), the system is told of the addresses in use at each
end of the link. Because hardware addresses are not involved, there is no need for
address resolution or ARP.
4.3 ARP Cache
Essential to the efficient operation of ARP is the maintenance of an ARP cache
(or table) on each host and router. This cache maintains the recent mappings
from network-layer addresses to hardware addresses for each interface that uses
address resolution. When IPv4 addresses are mapped to hardware addresses, the
normal expiration time of an entry in the cache is 20 minutes from the time the
entry was created, as described in [RFC1122].
We can examine the ARP cache with the arp command on Linux or in Windows. The -a option displays all entries in the cache for either system. Running
arp on Linux yields the following type of output:
Linux% arp
Address                  HWtype  HWaddress           Flags Mask            Iface
Linux% arp -a
printer.home ( at
00:0A:95:87:38:6A [ether] on eth1
gw.home ( at 00:0D:66:4F:60:00 [ether] on eth1
Running arp on Windows provides output similar to the following:
c:\> arp -a
Interface: --- 0x2
Internet Address      Physical Address      Type
Here we see the IPv4-to-hardware addressing cache. In the first (Linux) case,
each mapping is given by a five-element entry: the host name (corresponding to
an IP address), hardware address type, hardware address, flags, and local network interface for which this mapping is active. The Flags column contains a
symbol: C, M, or P. C-type entries have been learned dynamically by the ARP protocol. M-type entries are entered by hand (by arp -s; see Section 4.9), and P-type
entries mean “publish.” That is, for any P entry, the host responds to incoming
ARP requests with an ARP response. This option is used for configuring proxy
ARP (see Section 4.7). The second Linux example displays similar information
using the “BSD style.” Here, both the host’s name and address are given, along
with the address type (here, [ether] indicates an Ethernet type of address) and
on which interface the mappings are active.
The Windows arp program displays the IPv4 address of the interface, and its
interface number in hexadecimal (0x2 here). The Windows version also indicates
whether the address was entered by hand or learned by ARP. In this example, both
entries are dynamic, meaning they were learned by ARP (they would say static
if entered by hand). Note that the 48-bit MAC addresses are displayed as six hexadecimal numbers separated by colons in Linux and dashes in Windows. Traditionally, UNIX systems have always used colons, whereas the IEEE standards and
other operating systems tend to use dashes. We discuss additional features and
other options of the arp command in Section 4.9.
4.4 ARP Frame Format
Figure 4-2 shows the common format of an ARP request and reply packet, when
used on an Ethernet network to resolve an IPv4 address. (As mentioned previously, ARP is general enough to be used with addresses other than IPv4 addresses,
although this is very rare.) The first 14 bytes constitute the standard Ethernet
header, assuming no 802.1p/q or other tags, and the remaining portion is defined
by the ARP protocol. The first 8 bytes of the ARP frame are generic, and the remaining portion in this example applies specifically when mapping IPv4 addresses to
48-bit Ethernet-style addresses.
Figure 4-2  ARP frame format as used when mapping IPv4 addresses to 48-bit MAC (Ethernet) addresses
In the Ethernet header of the ARP frame shown in Figure 4-2, the first two
fields contain the destination and source Ethernet addresses. For ARP requests, the
special Ethernet destination address of ff:ff:ff:ff:ff:ff (all 1 bits) means the broadcast address—all Ethernet interfaces in the same broadcast domain receive these
frames. The 2-byte Ethernet frame Length or Type field is required to be 0x0806 for
ARP (requests or replies).
The first four fields following the Length/Type field specify the types and sizes
of the final four fields. The values are maintained by the IANA [RFC5494]. The
adjectives hardware and protocol are used to describe the fields in the ARP packets.
For example, an ARP request asks for the hardware address (an Ethernet address
in this case) corresponding to a protocol address (an IPv4 address in this case).
These adjectives are rarely used outside the ARP context. Rather, the more common terminology for the hardware address is MAC, physical, or link-layer address
(or Ethernet address when the network in use is based on the IEEE 802.3/Ethernet series of specifications). The Hard Type field specifies the type of hardware
address. Its value is 1 for Ethernet. The Prot Type field specifies the type of protocol
address being mapped. Its value is 0x0800 for IPv4 addresses. This is purposely
the same value as the Type field of an Ethernet frame containing an IPv4 datagram.
The next two 1-byte fields, Hard Size and Prot Size, specify the sizes, in bytes, of the
hardware addresses and the protocol addresses. For an ARP request or reply for
an IPv4 address on an Ethernet they are 6 and 4, respectively. The Op field specifies whether the operation is an ARP request (a value of 1), ARP reply (2), RARP
request (3), or RARP reply (4). This field is required because the Length/Type field
is the same for an ARP request and an ARP reply.
The next four fields that follow are the Sender’s Hardware Address (an Ethernet
MAC address in this example), the Sender’s Protocol Address (an IPv4 address), the
Target Hardware (MAC/Ethernet) Address, and the Target Protocol (IPv4) Address.
Notice that there is some duplication of information: the sender’s hardware
address is available both in the Ethernet header and in the ARP message. For an
ARP request, all the fields are filled in except the Target Hardware Address (which is
set to 0). When a system receives an ARP request directed to it, it fills in its hardware address, swaps the two sender addresses with the two target addresses, sets
the Op field to 2, and sends the reply.
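To make the layout concrete, here is a small Python sketch that packs the 28-byte ARP payload described above (the sender MAC is borrowed from the trace in Section 4.5; the IPv4 addresses are hypothetical). Adding the 14-byte Ethernet header in front of this payload yields the 42-byte frame.

```python
import struct

def build_arp_request(sender_mac: bytes, sender_ip: bytes, target_ip: bytes) -> bytes:
    """Pack the 28-byte ARP payload for an IPv4-over-Ethernet request."""
    return struct.pack(
        '!HHBBH6s4s6s4s',
        1,               # Hard Type: 1 = Ethernet
        0x0800,          # Prot Type: IPv4, same value as the Ethernet Type field
        6,               # Hard Size: bytes in a MAC address
        4,               # Prot Size: bytes in an IPv4 address
        1,               # Op: 1 = request (a reply would carry 2)
        sender_mac, sender_ip,
        b'\x00' * 6,     # Target Hardware Address: zero; it is what we are asking for
        target_ip)

req = build_arp_request(bytes.fromhex('0000c06f2d40'),   # MAC from the Section 4.5 trace
                        bytes([10, 0, 0, 56]),           # hypothetical sender IPv4 address
                        bytes([10, 0, 0, 1]))            # hypothetical target IPv4 address
assert len(req) == 28    # plus a 14-byte Ethernet header = 42 bytes on the wire
```

A responder would rebuild the same structure with the addresses swapped and the Op field set to 2.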
4.5 ARP Examples
In this section we will use the tcpdump command to see what really happens
with ARP when we execute normal TCP/IP utilities such as Telnet. Telnet is a
simple application that can establish a TCP/IP connection between two systems.
Normal Example
To see the operation of ARP, we will execute the telnet command, connecting to
a Web server on host using TCP port 80 (called www).
C:\> arp -a                       Verify that the ARP cache is empty
No ARP Entries Found
C:\> telnet www                   Connect to the Web server [port 80]
Connecting to
Escape character is '^]'.
                                  Type Control + right bracket to get the Telnet client prompt
Welcome to Microsoft Telnet Client
Escape Character is 'CTRL+]'
Microsoft Telnet> quit            The quit directive exits the program
While this is happening, we run the tcpdump command on another system
that can observe the traffic exchanged. We use the -e option, which displays the
MAC addresses (which in our examples are 48-bit Ethernet addresses).
The following listing contains the output from tcpdump. We have deleted
the final four lines of the output that correspond to the termination of the connection (we cover such details in Chapter 13); they are not relevant to the discussion
here. Note that different versions of tcpdump on different systems may provide
slightly different output details.
tcpdump -e
1  0.0                0:0:c0:6f:2d:40 ff:ff:ff:ff:ff:ff arp 60:
                      arp who-has tell
2  0.002174 (0.0022)  0:0:c0:c2:9b:26 0:0:c0:6f:2d:40 arp 60:
                      arp reply is-at 0:0:c0:c2:9b:26
3  0.002831 (0.0007)  0:0:c0:6f:2d:40 0:0:c0:c2:9b:26 ip 60:
                      > S 596459521:596459521(0) win 4096 <mss 1024> [tos 0x10]
4  0.007834 (0.0050)  0:0:c0:c2:9b:26 0:0:c0:6f:2d:40 ip 60:
                      > S 3562228225:3562228225(0) ack 596459522 win 4096 <mss 1024>
5  0.009615 (0.0018)  0:0:c0:6f:2d:40 0:0:c0:c2:9b:26 ip 60:
                      > . ack 1 win 4096 [tos 0x10]
In packet 1 the hardware address of the source is 0:0:c0:6f:2d:40. The destination hardware address is ff:ff:ff:ff:ff:ff, which is the Ethernet broadcast
address. All Ethernet interfaces in the same broadcast domain (all those on the
same LAN or VLAN, whether or not they are running TCP/IP) receive the frame
and process it, as shown in Figure 4-1. The next output field in packet 1, arp,
means that the Frame Type field is 0x0806, specifying either an ARP request or an
ARP reply. The value 60 printed after the words arp and ip in each of the five
packets is the length of the Ethernet frame. The size of an ARP request or ARP
reply is always 42 bytes (28 bytes for the ARP message, 14 bytes for the Ethernet
header). Each frame has been padded to the Ethernet minimum: 60 bytes of data
plus a 4-byte CRC (see Chapter 3).
The next part of packet 1, arp who-has, identifies the frame as an ARP request
with the IPv4 address of as the target address and the IPv4 address of as the sender’s address. tcpdump prints the host names corresponding
to the IP addresses by default, but here they are not displayed (because no reverse
DNS mappings for them are set up; Chapter 11 explains details of DNS). We will
use the -n option later to see the IP addresses in the ARP request, whether or not
DNS mappings are available.
From packet 2 we see that while the ARP request is broadcast, the destination
address of the ARP reply is the (unicast) MAC address 0:0:c0:6f:2d:40. The
ARP reply is thus sent directly to the requesting host; it is not ordinarily broadcast (see Section 4.8 for some cases where this rule is altered). tcpdump prints
the ARP reply for this frame, along with the IPv4 address and hardware address
of the responder. Line 3 is the first TCP segment requesting that a connection be
established. Its destination hardware address is the destination host (
We shall cover the details of this segment in Chapter 13.
For each packet, the number printed after the packet number is the relative
time (in seconds) when the packet was received by tcpdump. Each packet other
than the first also contains the time difference (in seconds) from the previous time,
in parentheses. We can see in the output that the time between sending the ARP
request and receiving the ARP reply is about 2.2ms. The first TCP segment is sent
0.7ms after this. The overhead involved in using ARP for dynamic address resolution in this example is less than 3ms. Note that if the ARP entry for host
was valid in the ARP cache at, the initial ARP exchange would not have
occurred, and the initial TCP segment could have been sent immediately using the
destination’s Ethernet address.
A subtle point about the tcpdump output is that we do not see an ARP request
from before it sends its first TCP segment to (line 4). While it
is possible that already has an entry for in its ARP cache, normally when a system receives an ARP request addressed to it, in addition to sending the ARP reply, it also saves the requestor’s hardware address and IPv4 address
in its own ARP cache. This is an optimization based on the logical assumption that
if the requestor is about to send it a datagram, the receiver of the datagram will
probably send a reply.
ARP Request to a Nonexistent Host
What happens if the host specified in an ARP request is down or nonexistent? To
see this, we attempt to access a nonexistent local IPv4 address—the prefix corresponds to that of the local subnet, but there is no host with the specified address.
We will use the IPv4 address in this example.
Linux% date ; telnet ; date
Fri Jan 29 14:46:33 PST 2010
telnet: connect to address No route to host
Fri Jan 29 14:46:36 PST 2010                    3s after previous date
Linux% arp -a
? ( at <incomplete> on eth0
Here is the output from tcpdump:
Linux# tcpdump -n
1 21:12:07.440845  arp who-has tell
2 21:12:08.436842  arp who-has tell
3 21:12:09.436836  arp who-has tell
This time we did not specify the -e option because we already know that
the ARP requests are sent using broadcast addressing. The frequency of the ARP
request is very close to one per second, the maximum suggested by [RFC1122].
Testing on a Windows system (not illustrated) reveals a different behavior. Rather
than three requests spaced 1s apart, the spacing varies based on the application
and the other protocols being used. For ICMP and UDP (see Chapters 8 and 10,
respectively), a spacing of approximately 5s is used, whereas for TCP 10s is used.
For TCP, the 10s interval allows two ARP requests to be sent without responses
before TCP gives up trying to establish a connection.
4.6 ARP Cache Timeout
A timeout is normally associated with each entry in the ARP cache. (Later we
shall see that the arp command enables the administrator to place an entry into
the cache that will never time out.) Most implementations have a timeout of 20
minutes for a completed entry and 3 minutes for an incomplete entry. (We saw an
incomplete entry in our previous example where we forced an ARP to a nonexistent host.) These implementations normally restart the 20-minute timeout for an
entry each time the entry is used. [RFC1122], the Host Requirements RFC, says
that this timeout should occur even if the entry is in use, but many implementations do not do this—they restart the timeout each time the entry is referenced.
Note that this is one of our first examples of soft state. Soft state is information
that is discarded if not refreshed before some timeout is reached. Many Internet
protocols use soft state because it helps to initiate automatic reconfiguration if network conditions change. The cost of soft state is that some protocol must refresh the
state to avoid expiration. “Soft state refreshes” are often incorporated in a protocol
design to keep the soft state active.
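As an illustration of this soft-state behavior, the following toy Python cache (its structure and names are ours, not any kernel's) expires completed entries after 20 minutes and incomplete entries after 3 minutes, and restarts the timer on each use, as most implementations do:

```python
import time

COMPLETE_TTL = 20 * 60     # seconds; typical lifetime of a resolved entry
INCOMPLETE_TTL = 3 * 60    # seconds; typical lifetime of a pending entry

class ArpCache:
    """Toy soft-state cache: entries vanish unless refreshed before expiry."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._entries = {}          # ip -> (mac or None, expiry time)

    def update(self, ip, mac):
        """Record a mapping; mac=None marks an incomplete (pending) entry."""
        ttl = COMPLETE_TTL if mac is not None else INCOMPLETE_TTL
        self._entries[ip] = (mac, self._clock() + ttl)

    def lookup(self, ip):
        entry = self._entries.get(ip)
        if entry is None or entry[1] < self._clock():
            self._entries.pop(ip, None)     # expired soft state is discarded
            return None
        mac, _ = entry
        if mac is not None:
            # Common behavior (though not what RFC 1122 prefers):
            # restart the 20-minute timer each time the entry is used.
            self._entries[ip] = (mac, self._clock() + COMPLETE_TTL)
        return mac
```

Injecting the clock makes the timeout behavior easy to exercise without waiting 20 real minutes.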
4.7 Proxy ARP
Proxy ARP [RFC1027] lets a system (generally a specially configured router)
answer ARP requests for a different host. This fools the sender of the ARP request
into thinking that the responding system is the destination host, when in fact the
destination host may be elsewhere (or may not exist). Proxy ARP is not commonly
used and is generally to be avoided if possible.
Proxy ARP has also been called promiscuous ARP or the ARP hack. These
names are from a historical use of proxy ARP: to hide two physical networks from
each other. In this case both physical networks can use the same IP prefix as long
as a router in the middle is configured as a proxy ARP agent to respond to ARP
requests on one network for a host on the other network. This technique can be
used to “hide” one group of hosts from another. In the past, there were two common reasons for doing this: some systems were unable to handle subnetting, and
some used an older broadcast address (a host ID of all 0 bits, instead of the current
standard of a host ID with all 1 bits).
Linux supports a feature called auto-proxy ARP. It can be enabled by writing
the character 1 into the file /proc/sys/net/ipv4/conf/*/proxy_arp, or by
using the sysctl command. This makes it possible to use proxy ARP without
manually entering an ARP entry for every IPv4 address being proxied: an entire
range of addresses, rather than each individual address, is proxied automatically.
4.8 Gratuitous ARP and Address Conflict Detection (ACD)
Another feature of ARP is called gratuitous ARP. It occurs when a host sends an
ARP request looking for its own address. This is usually done when the interface
is configured “up” at bootstrap time. Here is an example trace taken on a Linux
machine showing our Windows host booting up:
tcpdump -e -n arp
0.0 0:0:c0:6f:2d:40 ff:ff:ff:ff:ff:ff arp 60:
arp who-has tell
(We specified the -n flag for tcpdump to always print numeric dotted-decimal addresses instead of host names.) In terms of the fields in the ARP request, the
Sender’s Protocol Address and the Target Protocol Address are identical:
Also, the Source Address field in the Ethernet header, 0:0:c0:6f:2d:40 as shown
by tcpdump, equals the sender’s hardware address. Gratuitous ARP achieves two goals:
1. It lets a host determine if another host is already configured with the same
IPv4 address. The host sending the gratuitous ARP is not expecting a reply
to its request. If a reply is received, however, the error message “Duplicate
IP address sent from Ethernet address . . .” is usually displayed. This is a
warning to the system administrator and user that one of the systems in the
same broadcast domain (e.g., LAN or VLAN) is misconfigured.
2. If the host sending the gratuitous ARP has just changed its hardware
address (perhaps the host was shut down, the interface card was replaced,
and then the host was rebooted), this frame causes any other host receiving
the broadcast that has an entry in its cache for the old hardware address
to update its ARP cache entry accordingly. As mentioned before, if a host
receives an ARP request from an IPv4 address that is already in the receiver’s cache, that cache entry is updated with the sender’s hardware address
from the ARP request. This is done for any ARP request received by the
host; gratuitous ARP happens to take advantage of this behavior.
Although gratuitous ARP provides some indication that multiple stations may
be attempting to use the same IPv4 address, it really provides no mechanism to
react to the situation (other than by printing a message that is ideally acted upon by
a system administrator). To deal with this issue, [RFC5227] describes IPv4 Address
Conflict Detection (ACD). ACD defines ARP probe and ARP announcement packets. An ARP probe is an ARP request packet in which the Sender’s Protocol (IPv4)
Address field is set to 0. Probes are used to see if a candidate IPv4 address is being
used by any other systems in the broadcast domain. Setting the Sender’s Protocol
Address field to 0 avoids cache pollution should the candidate IPv4 address already
be in use by another host, a difference from the way gratuitous ARP works. An
ARP announcement is identical to an ARP probe, except both the Sender’s Protocol
Address and the Target Protocol Address fields are filled in with the candidate IPv4
address. It is used to announce the sender’s intention to use the candidate IPv4
address as its own.
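Assuming 4-byte protocol addresses, the distinction between probes, announcements, and ordinary ARP traffic can be sketched as a few comparisons (a simplification; a real ACD implementation also examines the hardware address fields):

```python
ZERO_IP = b'\x00\x00\x00\x00'

def classify(op: int, sender_ip: bytes, target_ip: bytes) -> str:
    """Classify an incoming ARP packet per the definitions above.
    op is the ARP Op field: 1 = request, 2 = reply."""
    if op == 1 and sender_ip == ZERO_IP:
        return 'probe'          # ACD probe: sender protocol address zeroed
    if op == 1 and sender_ip == target_ip:
        return 'announcement'   # also the classic gratuitous ARP pattern
    return 'ordinary'

assert classify(1, ZERO_IP, b'\x0a\x00\x00\x01') == 'probe'
assert classify(1, b'\x0a\x00\x00\x09', b'\x0a\x00\x00\x09') == 'announcement'
```

Zeroing the sender address in a probe is what keeps the candidate address out of other hosts' caches, the key difference from gratuitous ARP noted above.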
To perform ACD, a host sends an ARP probe when an interface is brought up
or out of sleep, or when a new link is established (e.g., when an association with
a new wireless network is made). It first waits a random amount of time (in the
range 0–1s, distributed uniformly) before sending up to three probe packets. The
delay is used to avoid power-on congestion when multiple systems powered on
simultaneously would otherwise attempt to perform ACD at once, leading to a
network traffic spike. The probes are spaced randomly, with between 1 and 2s of
delay (distributed uniformly) placed between.
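The probe timing just described can be sketched as follows. The constants follow [RFC5227]; scheduling the first announcement one interval after the last probe is our simplifying assumption.

```python
import random

# Timing constants from [RFC5227]
PROBE_WAIT, PROBE_MIN, PROBE_MAX = 1.0, 1.0, 2.0   # seconds
ANNOUNCE_INTERVAL = 2.0                             # seconds

def acd_schedule(rng: random.Random):
    """Return (probe_times, announcement_times) in seconds after link-up."""
    t = rng.uniform(0.0, PROBE_WAIT)       # desynchronize hosts powered on together
    probes = [t]
    for _ in range(2):                     # three probes in total
        t += rng.uniform(PROBE_MIN, PROBE_MAX)
        probes.append(t)
    # Simplifying assumption: first announcement one interval after the last probe.
    announcements = [t + ANNOUNCE_INTERVAL, t + 2 * ANNOUNCE_INTERVAL]
    return probes, announcements
```

Passing in the random number generator keeps the schedule reproducible for testing.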
While sending its probes, a requesting station may receive ARP requests or
replies. A reply to its probe indicates that a different station is already using the
candidate IP address. A request containing the same candidate IPv4 address in the
Target Protocol Address field sent from a different system indicates that the other
system is simultaneously attempting to acquire the candidate IPv4 address. In
either case, the system should indicate an address conflict message and pursue
some alternative address. For example, this is the recommended behavior when
being assigned an address using DHCP (see Chapter 6). [RFC5227] places a limit of
ten conflicts when trying to acquire an address before the requesting host enters a
rate-limiting phase in which it is permitted to perform ACD only once every 60s
until it is successful.
If a requesting host does not discover a conflict according to the procedure
just described, it sends two ARP announcements spaced 2s apart to indicate to systems in the broadcast domain the IPv4 address it is now using. In the announcements, both the Sender’s Protocol Address and the Target Protocol Address fields are
set to the address being claimed. The purpose of sending these announcements is
to ensure that any preexisting cached address mappings are updated to reflect the
sender’s current use of the address.
ACD is considered to be an ongoing process, and in this way it differs from
gratuitous ARP. Once a host has announced an address it is using, it continues
inspecting incoming ARP traffic (requests and replies) to see if its address appears
in the Sender’s Protocol Address field. If so, some other system believes it is rightfully
using the same address. In this case, [RFC5227] provides three possible resolution
mechanisms: cease using the address, keep the address but send a “defensive”
ARP announcement and cease using it if the conflict continues, or continue to
use the address despite the conflict. The last option is recommended only for systems that truly require a fixed, stable address (e.g., an embedded device such as a
printer or router).
[RFC5227] also suggests the potential benefit of having some ARP replies be
sent using link-layer broadcast. Although this has not traditionally been the way
ARP works, there can be some benefit in doing so, at the expense of requiring all
stations on the same segment to process all ARP traffic. Broadcast replies allow
ACD to occur more quickly because all stations will notice the reply and invalidate their caches during a conflict.
4.9 The arp Command
We have used the arp command with the -a flag on Windows and Linux to display all the entries in the ARP cache (on Linux we get similar information without
using -a). The superuser or administrator can specify the -d option to delete an
entry from the ARP cache. (This was used before running a few of the examples,
to force an ARP exchange to be performed.)
Entries can also be added using the -s option. It requires an IPv4 address (or
host name that can be converted to an IPv4 address using DNS) and an Ethernet
address. The IPv4 address and the Ethernet address are added to the cache as an
entry. This entry is made semipermanent (i.e., it does not time out from the cache,
but it disappears when the system is rebooted).
The Linux version of arp provides a few more features than the Windows
version. When the temp keyword is supplied at the end of the command line
when adding an entry using -s, the entry is considered to be temporary and times
out in the same way that other ARP entries do. The keyword pub at the end of a
command line, also used with the -s option, causes the system to act as an ARP
responder for that entry. The system answers ARP requests for the IPv4 address,
replying with the specified Ethernet address. If the advertised address is one of
the system’s own, the system is acting as a proxy ARP agent (see Section 4.7) for
the specified IPv4 address. If arp -s is used to enable proxy ARP, Linux responds
for the address specified even if the file /proc/sys/net/ipv4/conf/*/proxy_
arp contains 0.
4.10 Using ARP to Set an Embedded Device’s IPv4 Address
As more embedded devices are made compatible with Ethernet and the TCP/IP
protocols, it is increasingly common to find network-attached devices that have
no direct way to enter their network configuration information (e.g., they have no
keyboard, so entering an IP address for them to use is not possible). These devices
are typically configured in one of two ways. First, DHCP can be used to automatically assign an address and other information (see Chapter 6). Another way is to
use ARP to set an IPv4 address, although this method is less common.
Using ARP to configure an embedded device’s IPv4 address was not the original intent of the protocol, so it is not entirely automatic. The basic idea is to manually establish an ARP mapping for the device (using the arp -s command), then
send an IP packet to the address. Because the ARP entry is already present, no
ARP request/reply is generated. Instead, the hardware address can be used immediately. Of course, the Ethernet (MAC) address of the device must be known. It is
typically printed on the device itself and sometimes doubles as the manufacturer’s
device serial number. When the device receives a packet destined for its hardware
address, whatever destination address is contained in the datagram is used to
assign its initial IPv4 address. After that, the device can be fully configured using
other means (e.g., by an embedded Web server).
4.11 Attacks Involving ARP
There have been a series of attacks involving ARP. The most straightforward is
to use the proxy ARP facility to masquerade as some host, responding to ARP
requests for it. If the victim host is not present, this is straightforward and may not
be detected. It is considerably more difficult if the host is still running, as more
than one response may be generated per ARP request, which is easily detected.
A more subtle attack has been launched against ARP that involves cases where
a machine is attached to more than one network, and ARP entries from one interface “leak” over into the ARP table of the other, because of a bug in the ARP software. This can be exploited to improperly direct traffic onto the wrong network
segment. Linux provides a way to affect this behavior directly, by modifying the
file /proc/sys/net/ipv4/conf/*/arp_filter. If the value 1 is written into
this file, then when an incoming ARP request arrives over an interface, an IP forwarding check is made. The IP address of the requestor is looked up to determine
which interface would be used to send IP datagrams back to it. If the interface
used by the arriving ARP request is different from the interface that would be
used to return an IP datagram to the requestor, the ARP response is suppressed
(and the triggering ARP request is dropped).
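A sketch of this check in Python, with a toy per-prefix lookup standing in for the kernel's forwarding table (the function and names here are ours, for illustration only):

```python
def should_answer(arrival_iface: str, requestor_ip: str, route) -> bool:
    """arp_filter-style check: answer an ARP request only if the reply
    would leave via the same interface the request arrived on."""
    return route(requestor_ip) == arrival_iface

# Toy forwarding lookup: map a /24 prefix to an outgoing interface.
ROUTES = {'10.0.0': 'eth0', '192.168.1': 'eth1'}

def route(ip: str) -> str:
    return ROUTES.get(ip.rsplit('.', 1)[0])

assert should_answer('eth0', '10.0.0.5', route)        # same interface: reply
assert not should_answer('eth1', '10.0.0.5', route)    # different: suppress
```

The effect is that leaked entries cannot be refreshed via the wrong interface, so traffic is not misdirected onto the wrong segment.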
A somewhat more damaging attack on ARP involves the handling of static
entries. As discussed previously, static entries may be used to avoid the ARP
request/reply when seeking the Ethernet (MAC) address corresponding to a particular IP address. Such static entries have been used in an attempt to enhance
security. The idea is that static entries placed in the ARP cache for important hosts
would soon detect any hosts masquerading with that IP address. Unfortunately,
most implementations of ARP have traditionally replaced even static cache entries
with entries provided by ARP replies. The consequence of this is that a machine
receiving an ARP reply (even if it did not send an ARP request) would be coaxed
into replacing its static entries with those provided by an attacker.
4.12 Summary
ARP is a basic protocol in almost every TCP/IP implementation, but it normally
does its work without the application or user being aware of it. ARP is used to
determine the hardware addresses corresponding to the IPv4 addresses in use on
the locally reachable IPv4 subnet. It is invoked when forwarding datagrams destined for the same subnet as the sending host’s and is also used to reach a router
when the destination of a datagram is not on the subnet (the details of this are
explained in Chapter 5). The ARP cache is fundamental to its operation, and we
have used the arp command to examine and manipulate the cache. Each entry
in the cache has a timer that is used to remove both incomplete and completed
entries. The arp command displays and modifies entries in the ARP cache.
We followed through the normal operation of ARP along with specialized
versions: proxy ARP (when a router answers ARP requests for hosts accessible on
another of the router’s interfaces) and gratuitous ARP (sending an ARP request for
your own IP address, normally when bootstrapping). We also discussed address
conflict detection for IPv4, which uses a continually operating gratuitous ARP-like
exchange to avoid address duplication within the same broadcast domain. Finally,
we discussed a number of attacks that involve ARP. Most of these involve impersonating hosts by fabricating ARP responses for them. This can lead to problems
with higher-layer protocols if they do not implement strong security (see Chapter 18).
4.13 References
[RFC0826] D. Plummer, “Ethernet Address Resolution Protocol: Or Converting
Network Protocol Addresses to 48.bit Ethernet Address for Transmission on Ethernet Hardware,” Internet RFC 0826/STD 0037, Nov. 1982.
[RFC0903] R. Finlayson, T. Mann, J. C. Mogul, and M. Theimer, “A Reverse
Address Resolution Protocol,” Internet RFC 0903/STD 0038, June 1984.
[RFC1027] S. Carl-Mitchell and J. S. Quarterman, “Using ARP to Implement
Transparent Subnet Gateways,” Internet RFC 1027, Oct. 1987.
[RFC1122] R. Braden, ed., “Requirements for Internet Hosts,” Internet RFC 1122/
STD 0003, Oct. 1989.
[RFC2332] J. Luciani, D. Katz, D. Piscitello, B. Cole, and N. Doraswamy, “NBMA
Next Hop Resolution Protocol (NHRP),” Internet RFC 2332, Apr. 1998.
[RFC5227] S. Cheshire, “IPv4 Address Conflict Detection,” Internet RFC 5227,
July 2008.
[RFC5494] J. Arkko and C. Pignataro, “IANA Allocation Guidelines for the
Address Resolution Protocol (ARP),” Internet RFC 5494, Apr. 2009.
The Internet Protocol (IP)
IP is the workhorse protocol of the TCP/IP protocol suite. All TCP, UDP, ICMP, and
IGMP data gets transmitted as IP datagrams. IP provides a best-effort, connectionless datagram delivery service. By “best-effort” we mean there are no guarantees
that an IP datagram gets to its destination successfully. Although IP does not simply drop all traffic unnecessarily, it provides no guarantees as to the fate of the
packets it attempts to deliver. When something goes wrong, such as a router temporarily running out of buffers, IP has a simple error-handling algorithm: throw
away some data (usually the last datagram that arrived). Any required reliability
must be provided by the upper layers (e.g., TCP). IPv4 and IPv6 both use this basic
best-effort delivery model.
The term connectionless means that IP does not maintain any connection state
information about related datagrams within the network elements (i.e., within the
routers); each datagram is handled independently of all others. This also
means that IP datagrams can be delivered out of order. If a source sends two consecutive datagrams (first A, then B) to the same destination, each is routed independently and can take different paths, and B may arrive before A. Other things
can happen to IP datagrams as well: they may be duplicated in transit, and they
may have their data altered as the result of errors. Again, some protocol above IP
(usually TCP) has to handle all of these potential problems in order to provide an
error-free delivery abstraction for applications.
In this chapter we take a look at the fields in the IPv4 (see Figure 5-1) and
IPv6 (see Figure 5-2) headers and describe how IP forwarding works. The official
specification for IPv4 is given in [RFC0791]. A series of RFCs describe IPv6, starting with [RFC2460].
Figure 5-1
The IPv4 datagram. The header is of variable size, limited to fifteen 32-bit words (60
bytes) by the 4-bit IHL field. A typical IPv4 header contains 20 bytes (no options). The
source and destination addresses are 32 bits long. Most of the second 32-bit word is used
for the IPv4 fragmentation function. A header checksum helps ensure that the fields in
the header are delivered correctly to the proper destination but does not protect the data.
Figure 5-2 The IPv6 header is of fixed size (40 bytes) and contains 128-bit source and destination
addresses. The Next Header field is used to indicate the presence and types of additional
extension headers that follow the IPv6 header, forming a daisy chain of headers that may
include special extensions or processing directives. Application data follows the header
chain, usually immediately following a transport-layer header.
5.2 IPv4 and IPv6 Headers
Figure 5-1 shows the format of an IPv4 datagram. The normal size of the IPv4
header is 20 bytes, unless options are present (which is rare). The IPv6 header is
twice as large but never has any options. It may have extension headers, which provide similar capabilities, as we shall see later. In our pictures of headers and datagrams, the most significant bit is numbered 0 at the left, and the least significant
bit of a 32-bit value is numbered 31 on the right.
The 4 bytes in a 32-bit value are transmitted in the following order: bits 0–7
first, then bits 8–15, then 16–23, and bits 24–31 last. This is called big endian byte
ordering, which is the byte ordering required for all binary integers in the TCP/IP
headers as they traverse a network. It is also called network byte order. Computer
CPUs that store binary integers in other formats, such as the little endian format
used by most PCs, must convert the header values into network byte order for
transmission and back again for reception.
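For example, Python's struct module selects network (big endian) byte order with the '!' prefix, and the socket module exposes the traditional htonl()/ntohl() conversions:

```python
import socket
import struct

# 0x0A000001 is the IPv4 address 10.0.0.1 viewed as a 32-bit integer.
wire = struct.pack('!I', 0x0A000001)    # '!' = network (big endian) byte order
assert wire == b'\x0a\x00\x00\x01'      # the most significant byte travels first

# htonl()/ntohl() convert between host and network order; the round trip
# is the identity regardless of the host CPU's native byte order.
assert socket.ntohl(socket.htonl(0x0A000001)) == 0x0A000001
```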
IP Header Fields
The first field (only 4 bits or one nibble wide) is the Version field. It contains the
version number of the IP datagram: 4 for IPv4 and 6 for IPv6. The headers for both
IPv4 and IPv6 share the location of the Version field but no others. Thus, the two
protocols are not directly interoperable—a host or router must handle either IPv4
or IPv6 (or both, called dual stack) separately. Although other versions of IP have
been proposed and developed, only versions 4 and 6 have any significant amount
of use. The IANA keeps an official registry of these version numbers [IV].
The Internet Header Length (IHL) field is the number of 32-bit words in the IPv4
header, including any options. Because this is also a 4-bit field, the IPv4 header is
limited to a maximum of fifteen 32-bit words or 60 bytes. Later we shall see how
this limitation makes some of the options, such as the Record Route option, nearly
useless today. The normal value of this field (when no options are present) is 5.
There is no such field in IPv6 because the header length is fixed at 40 bytes.
Following the header length, the original specification of IPv4 [RFC0791]
specified a Type of Service (ToS) byte, and IPv6 [RFC2460] specified the equivalent
Traffic Class byte. Use of these never became widespread, so eventually this 8-bit
field was split into two smaller parts and redefined by a set of RFCs ([RFC3260]
[RFC3168][RFC2474] and others). The first 6 bits are now called the Differentiated
Services Field (DS Field), and the last 2 bits are the Explicit Congestion Notification
(ECN) field or indicator bits. These RFCs now apply to both IPv4 and IPv6. These
fields are used for special processing of the datagram when it is forwarded. We
discuss them in more detail in Section 5.2.3.
The Total Length field is the total length of the IPv4 datagram in bytes. Using
this field and the IHL field, we know where the data portion of the datagram
starts, and its length. Because this is a 16-bit field, the maximum size of an IPv4
datagram (including header) is 65,535 bytes. The Total Length field is required in
The Internet Protocol (IP)
the header because some lower-layer protocols that carry IPv4 datagrams do not
(accurately) convey the size of encapsulated datagrams on their own. Ethernet,
for example, pads small frames to be a minimum length (64 bytes). Even though
the minimum Ethernet payload size is 46 bytes (see Chapter 3), an IPv4 datagram
can be smaller (as few as 20 bytes). If the Total Length field were not provided, the
IPv4 implementation would not know how much of a 46-byte Ethernet frame was
really an IP datagram, as opposed to padding, leading to possible confusion.
Although it is possible to send a 65,535-byte IP datagram, most link layers
(such as Ethernet) are not able to carry one this large without fragmenting it
(chopping it up) into smaller pieces. Furthermore, a host is not required to be able
to receive an IPv4 datagram larger than 576 bytes. (In IPv6 a host must be able to
process a datagram at least as large as the MTU of the link to which it is attached,
and the minimum link MTU is 1280 bytes.) Many applications that use the UDP
protocol (see Chapter 10) for data transport (e.g., DNS, DHCP, etc.) use a limited
data size of 512 bytes to avoid the 576-byte IPv4 limit. TCP chooses its own datagram size based on additional information (see Chapter 15).
When an IPv4 datagram is fragmented into multiple smaller fragments, each of
which itself is an independent IP datagram, the Total Length field reflects the length
of the particular fragment. Fragmentation is described in detail along with UDP in
Chapter 10. In IPv6, fragmentation is not supported by the header, and the length
is instead given by the Payload Length field. This field measures the length of the
IPv6 datagram not including the length of the header; extension headers, however,
are included in the Payload Length field. As with IPv4, the 16-bit size of the field
limits its maximum value to 65,535. With IPv6, however, it is the payload length that
is limited to 64KB, not the entire datagram. In addition, IPv6 supports a jumbogram
option (see Section 5.3) that provides for the possibility, at least theoretically, of
single packets with payloads as large as 4GB (4,294,967,295 bytes)!
The Identification field helps identify each datagram sent by an IPv4 host. To
ensure that the fragments of one datagram are not confused with those of another,
the sending host normally increments an internal counter by 1 each time a datagram
is sent (from one of its IP addresses) and copies the value of the counter into the IPv4
Identification field. This field is most important for implementing fragmentation, so
we explore it further in Chapter 10, where we also discuss the Flags and Fragment
Offset fields. In IPv6, this field shows up in the Fragmentation extension header, as
we discuss in Section 5.3.3.
The Time-to-Live field, or TTL, sets an upper limit on the number of routers
through which a datagram can pass. It is initialized by the sender to some value
(64 is recommended [RFC1122], although 128 or 255 is not uncommon) and decremented by 1 by every router that forwards the datagram. When this field reaches
0, the datagram is thrown away, and the sender is notified with an ICMP message
(see Chapter 8). This prevents packets from getting caught in the network forever
should an unwanted routing loop occur.
Section 5.2 IPv4 and IPv6 Headers
The TTL field was originally specified to be the maximum lifetime of an IP datagram in seconds, but routers were also always required to decrement the value by
at least 1. Because virtually no routers today hold on to a datagram longer than 1s
under normal operation, the earlier rule is now ignored or forgotten, and in IPv6
the field has been renamed to its de facto use: Hop Limit.
The Protocol field in the IPv4 header contains a number indicating the type of
data found in the payload portion of the datagram. The most common values are
17 (for UDP) and 6 (for TCP). This provides a demultiplexing feature so that the IP
protocol can be used to carry payloads of more than one protocol type. Although
this field originally specified the transport-layer protocol the datagram is encapsulating, it is now understood to identify the encapsulated protocol, which may or
may not be a transport protocol. For example, other encapsulations are possible, such
as IPv4-in-IPv4 (value 4). The official list of the possible values of the Protocol field
is given in the assigned numbers page [AN]. The Next Header field in the IPv6
header generalizes the Protocol field from IPv4. It is used to indicate the type of
header following the IPv6 header. This field may contain any values defined for
the IPv4 Protocol field, or any of the values associated with the IPv6 extension
headers described in Section 5.3.
The Header Checksum field is calculated over the IPv4 header only. This is important to understand because it means that the payload of the IPv4 datagram (e.g.,
TCP or UDP data) is not checked for correctness by the IP protocol. To help ensure
that the payload portion of an IP datagram has been correctly delivered, other
protocols must cover any important data that follows the header with their own
data-integrity-checking mechanisms. We shall see that almost all protocols encapsulated in IP (ICMP, IGMP, UDP, and TCP) have a checksum in their own headers
to cover their header and data and also to cover certain parts of the IP header they
deem important (a form of “layering violation”). Perhaps surprisingly, the IPv6
header does not have any checksum field.
Omitting the checksum field from the IPv6 header was a somewhat controversial
decision. The reasoning behind this action is roughly as follows: Higher-layer protocols requiring correctness in the IP header are required to compute their own
checksums over the data they believe to be important. A consequence of errors
in the IP header is that the data is delivered to the wrong destination, is indicated
to have come from the wrong source, or is otherwise mangled during delivery.
Because bit errors are relatively rare (thanks to fiber-optic delivery of Internet
traffic) and stronger mechanisms are available to ensure correctness of the other
fields (higher-layer checksums or other checks), it was decided to eliminate the
field from the IPv6 header.
The algorithm used in computing a checksum is also used by most of the
other Internet-related protocols that use checksums and is sometimes known as
the Internet checksum. Note that when an IPv4 datagram passes through a router,
its header checksum must change as a result of decrementing the TTL field. We
discuss the methods for computing the checksum in more detail in Section 5.2.2.
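One standardized way for a router to perform this update is incremental recomputation ([RFC1624]): given the old checksum HC, the old 16-bit word m, and the new word m′, the new checksum is HC′ = ~(~HC + ~m + m′) in one's complement arithmetic. The sketch below uses an invented sample header to show that the incremental update matches a full recomputation after the TTL is decremented:

```python
def ones_sum(*words):
    """16-bit one's complement sum with end-around carry."""
    s = 0
    for w in words:
        s += w
        s = (s & 0xFFFF) + (s >> 16)
    return s

def checksum(words):
    """Internet checksum: one's complement of the one's complement sum."""
    return ~ones_sum(*words) & 0xFFFF

# Sample IPv4 header as 16-bit words, Checksum field (index 5) zeroed.
hdr = [0x4500, 0x0054, 0x1C46, 0x4000, 0x4006, 0x0000,
       0xC0A8, 0x0001, 0xC0A8, 0x00C7]
hdr[5] = checksum(hdr)

# Decrement the TTL (high byte of word 4) and update incrementally
# per RFC 1624: HC' = ~(~HC + ~m + m').
old_word, new_word = hdr[4], hdr[4] - 0x0100
hc_new = ~ones_sum(~hdr[5] & 0xFFFF, ~old_word & 0xFFFF, new_word) & 0xFFFF
hdr[4] = new_word

# The incremental result matches a full recomputation over the new header.
full = checksum(hdr[:5] + [0x0000] + hdr[6:])
assert hc_new == full
```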
Every IP datagram contains the Source IP Address of the sender of the datagram
and the Destination IP Address of where the datagram is destined. These are 32-bit
values for IPv4 and 128-bit values for IPv6, and they usually identify a single interface on a computer, although multicast and broadcast addresses (see Chapter 2)
violate this rule. While a 32-bit address can accommodate a seemingly large number of Internet entities (about 4.3 billion), there is widespread agreement that this number is inadequate, a primary motivation for moving to IPv6. The 128-bit address
of IPv6 can accommodate a huge number of Internet entities. As was restated in
[H05], IPv6 has 3.4 × 10^38 (340 undecillion) addresses. Quoting from [H05] and others: “The optimistic estimate would allow for 3,911,873,538,269,506,102 addresses
per square meter of the surface of the planet Earth.” It certainly seems as if this
should last a very, very long time indeed.
The Internet Checksum
The Internet checksum is a 16-bit mathematical sum used to determine, with
reasonably high probability, whether a received message or portion of a message
matches the one sent. Note that the Internet checksum algorithm is not the same as
the common cyclic redundancy check (CRC) [PB61], which offers stronger protection.
To compute the IPv4 header checksum for an outgoing datagram, the value
of the datagram’s Checksum field is first set to 0. Then, the 16-bit one’s complement sum of the header is calculated (the entire header is considered a sequence
of 16-bit words). The 16-bit one’s complement of this sum is then stored in the
Checksum field to make the datagram ready for transmission. One’s complement
addition can be implemented by “end-around-carry addition”: when a carry bit
is produced using conventional (two’s complement) addition, the carry is added
back in as a 1 value. Figure 5-3 presents an example, where the message contents
are represented in hexadecimal.
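The computation just described can be sketched in a few lines of Python; the message used here is the one from Figure 5-3, and the final assertion anticipates the receiver-side check, where summing the message together with its filled-in checksum complements to 0:

```python
def internet_checksum(words):
    """One's complement of the 16-bit one's complement sum (end-around carry)."""
    s = 0
    for w in words:
        s += w
        s = (s & 0xFFFF) + (s >> 16)   # fold any carry bit back in
    return ~s & 0xFFFF

# Message from Figure 5-3, with the Checksum field set to 0000.
msg = [0xE34F, 0x2396, 0x4427, 0x99F3, 0x0000]
cksum = internet_checksum(msg)
assert cksum == 0x1AFF

# Receiver side: the checksum computed over the message *including* the
# Checksum field is 0 when no errors occurred.
msg[4] = cksum
assert internet_checksum(msg) == 0x0000
```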
When an IPv4 datagram is received, a checksum is computed across the whole
header, including the value of the Checksum field itself. Assuming there are no
errors, the computed checksum value is always 0 (a one’s complement of the value
FFFF). Note that for any nontrivial packet or header, the value of the Checksum
field in the packet can never be FFFF. If it were, the sum (prior to the final one’s
complement operation at the sender) would have to have been 0. No sum can ever
be 0 using one’s complement addition unless all the bytes are 0—something that
never happens with any legitimate IPv4 header. When the header is found to be
bad (the computed checksum is nonzero), the IPv4 implementation discards the
received datagram. No error message is generated. It is up to the higher layers to
somehow detect the missing datagram and retransmit if necessary.
Message (Checksum field set to 0000):
E3 4F 23 96 44 27 99 F3 [00 00]
Two’s Complement Sum:   E34F + 2396 + 4427 + 99F3 = 1E4FF
One’s Complement Sum:   E4FF + 1 = E500
One’s Complement:       ~(E500) = ~(1110 0101 0000 0000) = 0001 1010 1111 1111 = 1AFF (the checksum)
Verification:
Message + Checksum:     E34F + 2396 + 4427 + 99F3 + 1AFF = E500 + 1AFF = FFFF
~(Message + Checksum) = 0000
Figure 5-3 The Internet checksum is the one’s complement of a one’s complement 16-bit sum of the
data being checksummed (zero padding is used if the number of bytes being summed is
odd). If the data being summed includes a Checksum field, the field is first set to 0 prior
to the checksum operation and then filled in with the computed checksum. To check
whether an incoming block of data that contains a Checksum field (header, payload, etc.)
is valid, the same type of checksum is computed over the whole block (including the
Checksum field). Because the Checksum field is essentially the inverse of the checksum of
the rest of the data, computing the checksum on correctly received data should produce
a value of 0.
Mathematics of the Internet Checksum
For the mathematically inclined, the set of 16-bit hexadecimal values V = {0001,
. . . , FFFF} and the one’s complement sum operation + together form an Abelian
group. For the combination of a set and an operator to be a group, several properties need to be obeyed: closure, associativity, existence of an identity element, and
existence of inverses. To be an Abelian (commutative) group, commutativity must
also be obeyed. If we look closely, we see that all of these properties are indeed obeyed:
• For any X,Y in V, (X + Y) is in V
• For any X,Y,Z in V, X + (Y + Z) = (X + Y) + Z
• For any X in V, e + X = X + e = X where e = FFFF
• For any X in V, there is an X′ in V such that X + X′ = e
• For any X,Y in V, (X + Y) = (Y + X)
What is interesting about the set V and the group <V,+> is that we have deleted
the number 0000 from consideration. If we put the number 0000 in the set V, then
<V,+> is not a group any longer. To see this, we first observe that 0000 and FFFF
appear to perform the role of zero (additive identity) using the + operation. For
example, AB12 + 0000 = AB12 = AB12 + FFFF. However, in a group there can be
only one identity element. If we have some element 12AB, and assume the identity
element is 0000, then we need some inverse X′ so that (12AB + X′) = 0000, but we
see that no such value of X′ exists in V that satisfies the criteria. Therefore, we need
to exclude 0000 from consideration as the identity element in <V,+> by removing it
from the set V to make this structure a true group. For an introduction to abstract
algebra, the reader may wish to consult a detailed text on the subject, such as the
popular book by Pinter [P90].
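These properties are easy to check by machine for sample values; the short sketch below also shows why 0000 must be excluded: under one's complement addition it would act as a second identity element, which no group can have.

```python
def ones_add(a, b):
    """One's complement addition with end-around carry."""
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

x = 0xAB12
# FFFF is the identity element of <V, +> ...
assert ones_add(x, 0xFFFF) == x
# ... and every X has the inverse ~X, since X + ~X = FFFF.
assert ones_add(x, ~x & 0xFFFF) == 0xFFFF
# If 0000 were admitted to V, it would also behave as an identity,
# giving two identity elements -- hence 0000 is excluded.
assert ones_add(x, 0x0000) == x
```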
DS Field and ECN (Formerly Called the ToS Byte or IPv6 Traffic Class)
The third and fourth fields of the IPv4 header (second and third fields of the IPv6
header) are the Differentiated Services (called DS Field) and ECN fields. Differentiated Services (called DiffServ) is a framework and set of standards aimed at supporting differentiated classes of service (i.e., beyond just best-effort) on the Internet
[RFC2474][RFC2475][RFC3260]. IP datagrams that are marked in certain ways (by
having some of these bits set according to predefined patterns) may be forwarded
differently (e.g., with higher priority) than other datagrams. Doing so can lead
to increased or decreased queuing delay in the network and other special effects
(possibly with associated special fees imposed by an ISP). A number is placed in
the DS Field termed the Differentiated Services Code Point (DSCP). A “code point”
refers to a particular predefined arrangement of bits with agreed-upon meaning.
Typically, datagrams have a DSCP assigned to them when they are given to the
network infrastructure that remains unmodified during delivery. However, policies (such as how many high-priority packets are allowed to be sent in a period of
time) may cause a DSCP in a datagram to be changed during delivery.
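Extracting the two subfields from the former ToS/Traffic Class byte is a matter of shifts and masks. A small sketch (0xB8 carries DSCP 101110, the Expedited Forwarding code point discussed later in this section, with ECN bits 00):

```python
def split_tos_byte(b):
    """Split the 8-bit DS/ECN byte into (DSCP, ECN)."""
    dscp = b >> 2     # high-order 6 bits: Differentiated Services Code Point
    ecn = b & 0x03    # low-order 2 bits: ECN indicator bits
    return dscp, ecn

# 0xB8 = 1011 1000: DSCP 101110 (46), ECN 00.
assert split_tos_byte(0xB8) == (46, 0)
# Best-effort default with both ECN bits set by a congested router:
assert split_tos_byte(0x03) == (0, 3)
```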
The pair of ECN bits in the header is used for marking a datagram with a
congestion indicator when passing through a router that has a significant amount of
internally queued traffic. Both bits are set by persistently congested ECN-aware
routers when forwarding packets. The use case envisioned for this function is
that when a marked packet is received at the destination, some protocol (such as
TCP) will notice that the packet is marked and indicate this fact back to the sender,
which would then slow down, thereby easing congestion before a router is forced
to drop traffic because of overload. This mechanism is one of several aimed at
avoiding or dealing with network congestion, which we explore in more detail in
Chapter 16. Although the DS Field and ECN field are not obviously closely related,
the space for them was carved out of the previously defined IPv4 Type of Service
and IPv6 Traffic Class fields. For this reason, they are often discussed together, and
the terms “ToS byte” and “Traffic Class byte” are still in widespread use.
Although the original uses for the ToS and Traffic Class bytes are not widely
supported, the structure of the DS Field has been arranged to provide some backward compatibility with them. To get a clear understanding of how this has been
accomplished, we first review the original structure of the Type of Service field
[RFC0791] as shown in Figure 5-4.
Figure 5-4
The original IPv4 Type of Service and IPv6 Traffic Class field structures. The Precedence
subfield was used to indicate which packets should receive higher priority (larger values
mean higher priority). The D, T, and R subfields refer to delay, throughput, and reliability. A value of 1 in these fields corresponds to a desire for low delay, high throughput,
and high reliability, respectively.
The D, T, and R subfields are for indicating that the datagram should receive
good treatment with respect to delay, throughput, and reliability. A value of 1 indicates better treatment (low delay, high throughput, high reliability, respectively).
The precedence values range from 000 (routine) to 111 (network control) with
increasing priority (see Table 5-1). They are based on a call preemption scheme
called Multilevel Precedence and Preemption (MLPP) dating back to the U.S. Department of Defense’s AUTOVON telephone system [A92], in which lower-precedence
calls could be preempted by higher-precedence calls. These terms are still in use
and are being incorporated into VoIP systems.
Table 5-1
The original IPv4 Type of Service and IPv6 Traffic Class precedence subfield values
Value   Precedence Name
000     Routine
001     Priority
010     Immediate
011     Flash
100     Flash Override
101     CRITIC/ECP
110     Internetwork Control
111     Network Control
In defining the DS Field, the precedence values have been taken into account
[RFC2474] so as to provide a limited form of backward compatibility. Referring to
Figure 5-5, the 6-bit DS Field holds the DSCP, providing support for 64 distinct
code points. The particular value of the DSCP tells a router the forwarding treatment or special handling the datagram should receive. The various forwarding
treatments are expressed as per-hop behavior (PHB), so the DSCP value effectively
tells a router which PHB to apply to the datagram. The default value for the DSCP
is generally 0, which corresponds to routine, best-effort Internet traffic. The 64
possible DSCP values are broadly divided into a set of pools for various uses, as
given in [DSCPREG] and shown in Table 5-2.
Figure 5-5
The DS Field contains the DSCP in 6 bits (5 bits are currently standardized to indicate
the forwarding treatment the datagram should receive when forwarded by a compliant
router). The following 2 bits are used for ECN and may be turned on in the datagram
when it passes through a persistently congested router. When such datagrams arrive
at their destinations, the congestion indication is sent back to the source in a later datagram to inform the source that its datagrams are passing through one or more congested routers.
Table 5-2
The DSCP values are divided into three pools: standardized, experimental/local use
(EXP/LU), and experimental/local use that is eventually intended for standardization (*).
Pool    Code Point Prefix   Use
1       xxxxx0              Standardized
2       xxxx11              EXP/LU
3       xxxx01              EXP/LU (*)
The arrangement provides for some experimentation and local use by
researchers and operators. DSCPs ending in 0 are subject to standardized use,
and those ending in 11 are for experimental/local use (EXP/LU). Those ending in
01 are intended initially for experimentation or local use but with eventual intent
toward standardization.
Referring to Figure 5-5, the class portion of the DS Field contains the first 3 bits
and is based on the earlier definition of the Precedence subfield of the Type of Service
field. Generally, a router is to first segregate traffic into different classes. Traffic
within a common class may have different drop probabilities, allowing the router
to decide what traffic to drop first if it is forced to discard traffic. The 3-bit class
selector provides for eight defined code points (called the class selector code points)
that correspond to PHBs with a specified minimum set of features providing similar functionality to the earlier IP precedence capability. These are called class selector compliant PHBs. They are intended to support partial backward compatibility
with the original definition given for the IP Precedence subfield given in [RFC0791].
Code points of the form xxx000 always map to such PHBs, although other values
may also map to the same PHBs.
Table 5-3 indicates the class selector DSCP values with their corresponding
terms for the IP Precedence field from [RFC0791]. The Assured Forwarding (AF)
group provides forwarding of IP packets in a fixed number of independent AF
classes, effectively generalizing the precedence concept. Traffic from one class
is forwarded separately from other classes. Within a traffic class, a datagram is
assigned a drop precedence. Datagrams of lower drop precedence in a class are
handled preferentially (i.e., are less likely to be discarded under congestion) than
those with higher drop precedence in the same class. Combining the traffic class and drop
precedence, the name AFij corresponds to assured forwarding class i with drop
precedence j. For example, a datagram marked with AF32 is in traffic class 3 with
drop precedence 2.
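The AFij code points follow a simple pattern: the class i occupies the 3-bit class position and the drop precedence j the next 2 bits, with the final bit 0. A sketch of the mapping (AF32, from the example above, encodes as 011100):

```python
def af_dscp(class_i, drop_j):
    """DSCP value for Assured Forwarding class i, drop precedence j."""
    assert 1 <= class_i <= 4 and 1 <= drop_j <= 3
    # Class in the top 3 bits, drop precedence in the next 2, low bit 0.
    return (class_i << 3) | (drop_j << 1)

assert af_dscp(3, 2) == 0b011100   # AF32
assert af_dscp(1, 1) == 0b001010   # AF11
assert af_dscp(4, 3) == 0b100110   # AF43
```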
Table 5-3
The DS Field values are designed to be somewhat compatible with the IP Precedence
subfield specified for the Type of Service and IPv6 Traffic Class field. AF and EF provide
enhanced services beyond simple best-effort.
Name                                      DSCP Value
Class selector (best-effort/routine)      000000
Class selector (priority)                 001000
Class selector (immediate)                010000
Class selector (flash)                    011000
Class selector (flash override)           100000
Class selector (CRITIC/ECP)               101000
Class selector (internetwork control)     110000
Class selector (network control)          111000
Assured Forwarding (class 1, dp 1)        001010 (AF11)
Assured Forwarding (1,2)                  001100 (AF12)
Assured Forwarding (1,3)                  001110 (AF13)
Assured Forwarding (2,1)                  010010 (AF21)
Assured Forwarding (2,2)                  010100 (AF22)
Assured Forwarding (2,3)                  010110 (AF23)
Assured Forwarding (3,1)                  011010 (AF31)
Assured Forwarding (3,2)                  011100 (AF32)
Assured Forwarding (3,3)                  011110 (AF33)
Assured Forwarding (4,1)                  100010 (AF41)
Assured Forwarding (4,2)                  100100 (AF42)
Assured Forwarding (4,3)                  100110 (AF43)
Expedited Forwarding (EF)                 101110
Capacity-Admitted Traffic (VOICE-ADMIT)   101100
The Expedited Forwarding (EF) service provides the appearance of an uncongested network—that is, EF traffic should receive relatively low delay, jitter, and
loss. Intuitively, this requires the rate of EF traffic going out of a router to be at
least as large as the rate coming in. Consequently, EF traffic will only ever have to
wait in a router queue behind other EF traffic.
Delivering differentiated services in the Internet has been an ongoing effort
for over a decade. Although much of the standardization effort in terms of mechanisms took place in the late 1990s, only in the twenty-first century are some of its
capabilities being realized and implemented. Some guidance on how to configure
systems to take advantage of these capabilities is given in [RFC4594]. The complexity of differentiated services is due, in part, to the linkage between differentiated services and the presumed differentiated pricing structure and consequent
issues of fairness that would go along with it. Such economic relationships can be
complex and are outside the scope of the present discussion. For more information
on this and related topics, please see [MB97] and [W03].
IP Options
IP supports a number of options that may be selected on a per-datagram basis.
Most of these options were introduced in [RFC0791] at the time IPv4 was being
designed, when the Internet was considerably smaller and when threats from
malicious users were less of a concern. As a consequence, many of the options are
no longer practical or desirable because of the limited size of the IPv4 header or
concerns regarding security. With IPv6, most of the options have been removed
or altered and are not an integral part of the basic IPv6 header. Instead, they are
placed after the IPv6 header in one or more extension headers. An IP router that
receives a datagram containing options is usually supposed to perform special
processing on the datagram. In some cases IPv6 routers process extension headers,
but many headers are designed to be processed only by end hosts. In some routers,
datagrams with options or extensions are not forwarded as fast as ordinary datagrams. We briefly discuss the IPv4 options as background and then look at how
IPv6 implements extension headers and options. Table 5-4 shows most of the IPv4
options that have been standardized over the years.
Table 5-4 gives the reserved IPv4 options for which descriptive RFCs can be
found. The complete list is periodically updated and is available online [IPPARAM]. The options area always ends on a 32-bit boundary. Pad bytes with a value
of 0 are added if necessary. This ensures that the IPv4 header is always a multiple
of 32 bits (as required by the IHL field). The “Number” column in Table 5-4 is the
number of the option. The “Value” column indicates the number placed inside the
option Type field to indicate the presence of the option. These values from the two
columns are not necessarily the same because the Type field has additional structure. In particular, the first (high-order) bit indicates whether the option should
be copied into fragments if the associated datagram is fragmented. The next 2 bits
indicate the option’s class. Currently, all options in Table 5-4 use option class 0
(control) except Timestamp and Traceroute, which are both class 2 (debugging and
measurement). Classes 1 and 3 are reserved.
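The relationship between the “Number” and “Value” columns can be expressed directly: the Value is the Copy bit, the 2-bit Class, and the 5-bit Number packed into a single byte. A sketch:

```python
def option_type_value(copy, cls, number):
    """Pack the IPv4 option Type byte: Copy (1 bit), Class (2 bits), Number (5 bits)."""
    return (copy << 7) | (cls << 5) | number

# Timestamp: not copied into fragments, class 2 (debugging and
# measurement), option number 4.
assert option_type_value(0, 2, 4) == 68
# Router Alert: copied into fragments, class 0 (control), number 20.
assert option_type_value(1, 0, 20) == 148
```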
Most of the standardized options are rarely or never used in the Internet today.
Options such as Source and Record Route, for example, require IPv4 addresses to
be placed inside the IPv4 header. Because there is only limited space in the header
Table 5-4 Options, if present, are carried in IPv4 packets immediately after the basic IPv4 header. Options
are identified by an 8-bit option Type field. This field is subdivided into three subfields: Copy (1 bit),
Class (2 bits), and Number (5 bits). Options 0 and 1 are a single byte long, and most others are variable
in length. Variable options consist of 1 byte of type identifier, 1 byte of length, and the option itself.
Number  Value     Name            Description
0       0         End of List     Indicates no more options. Used for padding if required.
1       1         No Op           Indicates no operation to perform (used for padding/alignment if required).
3, 9    131, 137  Source Routing  Sender lists router “waypoints” for packet to traverse when forwarded. Loose (3, 131) means other routers can be included between waypoints. Strict (9, 137) means all waypoints have to be traversed exactly in order. Rare, often filtered.
2       130       Security        Specifies how to include security labels and handling restrictions with IP datagrams in U.S. military environments.
7       7         Record Route    Records the route taken by a packet in its header.
4       68        Timestamp       Records the time of day at a packet’s source and at routers along its path.
8       136       Stream ID       Carries the 16-bit SATNET stream identifier (obsolete).
17      145       EIP             Extended Internet Protocol (an experiment in the early 1990s).
18      82        Traceroute      Adds a route-tracing option and ICMP message (an experiment in the early 1990s).
20      148       Router Alert    Indicates that a router needs to interpret the contents of the datagram.
25      25        Quick-Start     Indicates fast transport protocol start.
(60 bytes total, of which 20 are devoted to the basic IPv4 header), these options are
not very useful in today’s IPv4 Internet where the number of router hops in an
average Internet path is about 15 [LFS07]. In addition, the options are primarily
for diagnostic purposes and make the construction of firewalls more cumbersome
and risky. Thus, IPv4 options are typically disallowed or stripped at the perimeter
of enterprise networks by firewalls (see Chapter 7).
Within enterprise networks, where the average path length is smaller and protection from malicious users may be less of a concern, options can still be useful.
In addition, the Router Alert option represents somewhat of an exception to the
problems with the other options for use on the Internet. Because it is designed
primarily as a performance optimization and does not change fundamental router
behavior, it is permitted more often than the other options. As suggested previously, some router implementations have a highly optimized internal pathway for
forwarding IP traffic containing no options. The Router Alert option informs routers that a packet requires processing beyond the conventional forwarding algorithms. The experimental Quick-Start option at the end of the table is applicable to
both IPv4 and IPv6, and we describe it in the next section when discussing IPv6
extension headers and options.
IPv6 Extension Headers
In IPv6, special functions such as those provided by options in IPv4 can be enabled
by adding extension headers that follow the IPv6 header. The routing and timestamp functions from IPv4 are supported this way, as well as some other functions
such as fragmentation and extra-large packets that were deemed to be rarely used
for most IPv6 traffic (but still desired) and thereby did not justify allocating bits
in the IPv6 header to support them. With this arrangement, the IPv6 header is
fixed at 40 bytes, and extension headers are added only when needed. In choosing
the IPv6 header to be of a fixed size, and requiring that extension headers be processed only by end hosts (with one exception), the designers of IPv6 have made the
design and construction of high-performance routers easier because the demands
on packet processing at routers can be simpler than with IPv4. In practice, packet-processing performance is governed by many factors, including the complexity
of the protocol, the capabilities of the hardware and software in the router, and
traffic load.
Extension headers, along with headers of higher-layer protocols such as TCP
or UDP, are chained together with the IPv6 header to form a cascade of headers
(see Figure 5-6). The Next Header field in each header indicates the type of the
subsequent header, which could be an IPv6 extension header or some other type.
The value of 59 indicates the end of the header chain. The possible values for the
Next Header field are available at [IP6PARAM], and most are provided in Table 5-5.
As we can see from Table 5-5, the IPv6 extension header mechanism distinguishes some functions (e.g., routing and fragmentation) from options. The order
Section 5.3 IPv6 Extension Headers
Figure 5-6
IPv6 headers form a chain using the Next Header field. Headers in the chain
may be IPv6 extension headers or transport headers. The IPv6 header appears
at the beginning of the datagram and is always 40 bytes long.
Table 5-5 The values for the IPv6 Next Header field may indicate extensions or headers for other protocols. The
same values are used with the IPv4 Protocol field, where appropriate.
Header Type                             Order   Value   References
IPv6 header                             1       41
Hop-by-Hop Options (HOPOPT)             2       0       [RFC2460]; must immediately follow IPv6 header
Destination Options                     3, 8    60
Routing                                 4       43
Fragment                                5       44      (see Chapter 10)
Authentication (AH)                     6       51      (see Chapter 18)
Encapsulating Security Payload (ESP)    7       50      (see Chapter 18)
Mobility (MIPv6)                        9       135
(None—no next header)                   last    59
ICMPv6                                  last    58      (see Chapter 8)
UDP                                     last    17      (see Chapter 10)
TCP                                     last    6       (see Chapters 13–17)
Various other upper-layer protocols     last            See [AN] for complete list
of the extension headers is given as a recommendation, except for the location of
the Hop-by-Hop Options, which is mandatory, so an IPv6 implementation must
be prepared to process extension headers in the order in which they are received.
Only the Destination Options header can be used twice—the first time for options
pertaining to the destination IPv6 address contained in the IPv6 header and the
second time (position 8) for options pertaining to the final destination of the datagram. In some cases (e.g., when the Routing header is used), the Destination IP
Address field in the IPv6 header changes as the datagram is forwarded to its ultimate destination.
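Walking the header chain can be sketched in a few lines; this is a simplified illustration that assumes only extension headers whose length field counts 8-byte units beyond the first 8 bytes (Hop-by-Hop, Destination Options, and Routing use this layout; a complete parser needs extra cases, since the Fragment header is fixed at 8 bytes and AH measures its length in 4-byte units). The packet below is contrived: an IPv6 header followed by one 8-byte Hop-by-Hop header that in turn carries TCP, as in Figure 5-6.

```python
# Next Header values for a few extension headers (see Table 5-5).
HOP_BY_HOP, ROUTING, DEST_OPTS = 0, 43, 60
EXT_HEADERS = {HOP_BY_HOP, ROUTING, DEST_OPTS}

def header_chain(packet):
    """Return the list of Next Header values found while walking an IPv6 packet."""
    chain = [packet[6]]              # Next Header field of the 40-byte IPv6 header
    offset = 40
    while chain[-1] in EXT_HEADERS:
        nxt, ext_len = packet[offset], packet[offset + 1]
        chain.append(nxt)
        offset += (ext_len + 1) * 8  # length field excludes the first 8 bytes
    return chain

pkt = bytearray(48)
pkt[6] = HOP_BY_HOP                  # IPv6 header: Next Header = Hop-by-Hop
pkt[40] = 6                          # Hop-by-Hop header: Next Header = TCP
pkt[41] = 0                          # Hdr Ext Len: 0 extra 8-byte units
assert header_chain(pkt) == [0, 6]
```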
IPv6 Options
As we have seen, IPv6 provides a more flexible and extensible way of incorporating extensions and options as compared to IPv4. Those options from IPv4 that
ceased to be useful because of space limitations in the IPv4 header appear in IPv6
as variable-length extension headers or options encoded in special extension
headers that can accommodate today’s much larger Internet. Options, if present,
are grouped into either Hop-by-Hop Options (those relevant to every router along a
datagram’s path) or Destination Options (those relevant only to the recipient). Hop-by-Hop Options (called HOPOPTs) are the only ones that need to be processed
by every router a packet encounters. The format for encoding options within the
Hop-by-Hop and Destination Options extension headers is common.
The Hop-by-Hop and Destination Options headers are capable of holding
more than one option. Each of these options is encoded as type-length-value (TLV)
sets, according to the format shown in Figure 5-7.
Figure 5-7 Hop-by-hop and Destination Options are encoded as TLV sets. The first byte gives
the option type, including subfields indicating how an IPv6 node should behave if the
option is not recognized, and whether the option data might change as the datagram is
forwarded. The Opt Data Len field gives the size of the option data in bytes.
The TLV structure shown in Figure 5-7 includes 2 bytes followed by a variable number of data bytes. The first byte indicates the type of the option and
includes three subfields. The first subfield gives the action to be taken by an IPv6
node that attempts to process the option but does not recognize the 5-bit option Type
subfield. Its possible values are presented in Table 5-6.
Section 5.3 IPv6 Extension Headers
Table 5-6
The 2 high-order bits in an IPv6 TLV option type indicate whether an IPv6 node should
forward or drop the datagram if the option is not recognized, and whether a message
indicating the datagram’s fate should be sent back to the sender.

Value   Action
00      Skip option, continue processing
01      Discard the datagram (silently)
10      Discard the datagram and send an ICMPv6 Parameter Problem message to the source address
11      Same as 10, but send the ICMPv6 message only if the offending packet’s destination was not multicast
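The layout of the option type byte can be sketched as a few lines of bit manipulation. This is an illustrative decoder, not drawn from the book; the subfield widths (2-bit Action, 1-bit Chg, 5-bit Type) follow Figure 5-7:

```python
# Sketch: decoding the first byte of an IPv6 TLV option (Figure 5-7).
# The 2 high-order bits are the Action subfield (Table 5-6), the next bit
# is the Change (Chg) bit, and the low 5 bits are the option Type subfield.

ACTIONS = {
    0b00: "skip option, continue processing",
    0b01: "discard the datagram (silently)",
    0b10: "discard and send ICMPv6 Parameter Problem",
    0b11: "discard; send Parameter Problem only if destination not multicast",
}

def decode_option_type(byte):
    action = (byte >> 6) & 0x3   # 2 high-order bits
    change = (byte >> 5) & 0x1   # set if option data may change in flight
    otype  = byte & 0x1F         # 5-bit option Type subfield
    return action, change, otype

print(decode_option_type(0x00))   # Pad1: (0, 0, 0)
print(decode_option_type(0xC2))   # Jumbo Payload: Action=11, Chg=0, Type=2
```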
If an unknown option were included in a datagram destined for a multicast
destination, a large number of nodes could conceivably generate traffic back to the
source. This can be avoided by use of the bit pattern 11 for the Action subfield. The
flexibility of the Action subfield is useful in the development of new options. A
newly specified option can be carried in datagrams and simply ignored by those
routers that do not understand it, helping to promote incremental deployment of
new options. The Change bit field (Chg in Figure 5-7) is set to 1 when the option data
may be modified as the datagram is forwarded. The options shown in Table 5-7
have been defined for IPv6.
Table 5-7 Options in IPv6 are carried in either Hop-by-Hop (H) or Destination (D) Options extension headers. The option Type field contains the value from the “Type” column with the
Action and Change subfields denoted in binary. The “Length” column contains the value
of the Opt Data Len byte from Figure 5-7. The Pad1 option is the only one lacking this byte.

Option Name                  Header  Type (Action Chg Type)   Length    Reference
Pad1                         H, D      0 (00 0 00000)         (none)    [RFC2460]
PadN                         H, D      1 (00 0 00001)         variable  [RFC2460]
Jumbo Payload                H       194 (11 0 00010)         4         [RFC2675]
Tunnel Encapsulation Limit   D         4 (00 0 00100)         1         [RFC2473]
Router Alert                 H         5 (00 0 00101)         2         [RFC2711]
Quick-Start                  H        38 (00 1 00110)         6         [RFC4782]
CALIPSO                      H         7 (00 0 00111)         8+        [RFC5570]
Home Address                 D       201 (11 0 01001)         16        [RFC6275]

Pad1 and PadN
IPv6 options are aligned to 8-byte offsets, so options that are naturally smaller are
padded with 0 bytes to round out their lengths to the nearest 8 bytes. Two padding
options are available to support this, called Pad1 and PadN. The Pad1 option (type 0)
is the only option that lacks Length and Value fields. It is simply 1 byte long and
contains the value 0. The PadN option (type 1) inserts 2 or more bytes of padding
into the options area of the header using the format of Figure 5-7. For n bytes of
padding, the Opt Data Len field contains the value (n - 2).

IPv6 Jumbo Payload
In some TCP/IP networks, such as those used to interconnect supercomputers,
the normal 64KB limit on the IP datagram size can lead to unwanted overhead
when moving large amounts of data. The IPv6 Jumbo Payload option specifies an
IPv6 datagram with payload size larger than 65,535 bytes, called a jumbogram. This
option need not be implemented by nodes attached to links with MTU sizes below
64KB. The Jumbo Payload option provides a 32-bit field for holding the payload
size for datagrams with payloads of sizes between 65,535 and 4,294,967,295 bytes.
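The choice between the base header’s Payload Length field and the option’s 32-bit length can be sketched as follows. This is an illustrative helper, not drawn from the book; the rule it encodes (Payload Length set to 0 in a jumbogram, length carried in the option) is from [RFC2675]:

```python
# Sketch: choosing the effective payload length of an IPv6 datagram when the
# Jumbo Payload option may be present [RFC2675]. In a jumbogram, the base
# header's Payload Length field is 0 and the option carries a 32-bit length.

def effective_payload_length(payload_length_field, jumbo_option_length=None):
    if payload_length_field == 0 and jumbo_option_length is not None:
        # Jumbogram: the length comes from the 32-bit option field
        assert 65535 < jumbo_option_length <= 4294967295
        return jumbo_option_length
    return payload_length_field

print(effective_payload_length(1448))        # ordinary datagram -> 1448
print(effective_payload_length(0, 100000))   # jumbogram -> 100000
```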
When a jumbogram is formed for transmission, its normal Payload Length field
is set to 0. As we shall see later, the TCP protocol makes use of the Payload Length
field in order to compute its checksum using the Internet checksum algorithm
described previously. When the Jumbo Payload option is used, TCP must be careful to use the length value from the option instead of the regular Length field in
the base header. Although this procedure is not difficult, larger payloads can lead
to an increased chance of undetected error [RFC2675].

Tunnel Encapsulation Limit
Tunneling refers to the encapsulation of one protocol in another that does not conform to traditional layering (see Chapters 1 and 3). For example, IP datagrams may
be encapsulated inside the payload portion of another IP datagram. Tunneling can
be used to form virtual overlay networks, in which one network (e.g., the Internet)
acts as a well-connected link layer for another layer of IP [TWEF03]. Tunnels can
be nested in the sense that datagrams that are in a tunnel may themselves be
placed in a tunnel, in a recursive fashion.
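The encapsulation-limit rule described in this section can be sketched as a small function (the name is illustrative): a tunnel entry point discards a datagram whose limit is 0, and otherwise encapsulates it with a limit one less than the arriving value.

```python
# Sketch: the check a tunnel entry point applies for the Tunnel Encapsulation
# Limit option. Returns the limit to place in the new (encapsulating)
# datagram, or None if the datagram must be discarded (and an ICMPv6
# Parameter Problem sent back to the previous tunnel entry point).

def encapsulation_limit_for_new_datagram(arriving_limit):
    if arriving_limit == 0:
        return None               # discard; no further encapsulation allowed
    return arriving_limit - 1     # new datagram carries one less

# Nested tunnels starting from a limit of 2; each level decrements the limit,
# like a TTL for levels of encapsulation instead of forwarding hops:
limit = 2
while limit is not None:
    print("encapsulating with limit", limit)
    limit = encapsulation_limit_for_new_datagram(limit)
```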
When sending an IP datagram, a sender does not ordinarily have much control over how many tunnel levels are ultimately used for encapsulation. Using this
option, however, a sender can specify this limit. A router intending to encapsulate
an IPv6 datagram into a tunnel first checks for the presence and value of the Tunnel Encapsulation Limit option. If the limit value is 0, the datagram is discarded
and an ICMPv6 Parameter Problem message (see Chapter 8) is sent to the source
of the datagram (i.e., the previous tunnel entry point). If the limit is nonzero, the
tunnel encapsulation is permitted, but the newly formed (encapsulating) IPv6
datagram must include a Tunnel Encapsulation Limit option whose value is 1 less
than the option value in the arriving datagram. In effect, the encapsulation limit
acts like the IPv4 TTL or IPv6 Hop Limit field, but for levels of tunnel encapsulation
instead of forwarding hops.

Router Alert
The Router Alert option indicates that the datagram contains information that
needs to be processed by a router. It is used for the same purpose as the IPv4
Router Alert option. [RTAOPTS] gives the current set of values for the option.
Quick-Start
The Quick-Start (QS) option is used in conjunction with the experimental Quick-Start procedure for TCP/IP specified in [RFC4782]. It is applicable to both IPv4 and
IPv6 but at present is suggested only for private networks and not the global Internet. The option includes a value encoding the sender’s desired transmission rate in
bits per second, a QS TTL value, and some additional information. Routers along
the path may agree that supporting the desired rate is acceptable, in which case
they decrement the QS TTL and leave the rate request unchanged when forwarding the containing datagram. When they disagree (i.e., wish to support a lower
rate), they can reduce the number to an acceptable rate. Routers that do not recognize the QS option do not decrement the QS TTL. A receiver provides feedback to
the sender, including the difference between the received datagram’s IPv4 TTL or
IPv6 Hop Limit field and its QS TTL, along with the resulting rate that may have
been adjusted by the routers along the forward path. This information is used by
the sender to determine its sending rate (which, for example, may exceed the rate
TCP would otherwise use). Comparison of the TTL values is used to ensure that
every router along the path participates in the QS negotiation; if any routers are
found to be decrementing the IPv4 TTL (or IPv6 Hop Limit) field and not modifying the QS TTL value, QS is not enabled.

CALIPSO
This option is used for supporting the Common Architecture Label IPv6 Security
Option (CALIPSO) [RFC5570] in certain private networks. It provides a method to
label datagrams with a security-level indicator, along with some additional information. In particular, it is intended for use in multilevel secure networking environments (e.g., government, military, and banking) where the security level of all
data must be indicated by some form of label.

Home Address
This option holds the “home” address of the IPv6 node sending the datagram
when IPv6 mobility options are in use. Mobile IP (see Section 5.5) specifies a set of
procedures for handling IP nodes that may change their point of network attachment without losing their higher-layer network connections. It has a concept of
a node’s “home,” which is derived from the address prefix of its typical location.
When roaming away from home, the node is generally assigned a different IP
address. This option allows the node to provide its normal home address in addition to its (presumably temporarily assigned) new address while traveling. The
home address can be used by other IPv6 nodes when communicating with the
mobile node. If the Home Address option is present, the Destination Options
header containing it must appear after a Routing header and before the Fragment,
Authentication, and ESP headers (see Chapter 18), if any of them is also present.
We discuss this option in more detail in the context of Mobile IP.
Routing Header
The IPv6 Routing header provides a mechanism for the sender of an IPv6 datagram to control, at least in part, the path the datagram takes through the network.
At present, two different versions of the routing extension header have been specified, called type 0 (RH0) and type 2 (RH2), respectively. RH0 has been deprecated
because of security concerns [RFC5095], and RH2 is defined in conjunction with
Mobile IP. To best understand the Routing header, we begin by discussing RH0
and then investigate why it has been deprecated and how it differs from RH2. RH0
specifies one or more IPv6 nodes to be “visited” as the datagram is forwarded. The
header is shown in Figure 5-8.
Figure 5-8 The now-deprecated Routing header type 0 (RH0) generalizes the IPv4 loose and strict
Source Route and Record Route options. It is constructed by the sender to include IPv6
node addresses that act as waypoints when the datagram is forwarded. Each address can
be specified as a loose or strict address. A strict address must be reached by a single IPv6
hop, whereas a loose address may contain one or more other hops in between. The IPv6
Destination IP Address field in the base header is modified to contain the next waypoint
address as the datagram is forwarded.
The IPv6 Routing header shown in Figure 5-8 generalizes the loose Source
and Record Route options from IPv4. It also supports the possibility of routing on
identifiers other than IPv6 addresses, although this feature is not standardized
and is not discussed further here. For standardized routing on IPv6 addresses,
RH0 allows the sender to specify a vector of IPv6 addresses for nodes to be visited.
The header contains an 8-bit Routing Type identifier and an 8-bit Segments
Left field. The type identifier for IPv6 addresses is 0 for RH0 and 2 for RH2. The
Segments Left field indicates how many route segments remain to be processed—
that is, the number of explicitly listed intermediate nodes still to be visited before
reaching the final destination. The block of addresses starts with a 32-bit reserved
field set by the sender to 0 and ignored by receivers. The addresses are nonmulticast IPv6 addresses to be visited as the datagram is forwarded.
A Routing header is not processed until it reaches the node whose address is
contained in the Destination IP Address field of the IPv6 header. At this time, the
Segments Left field is used to determine the next hop address from the address vector, and this address is swapped with the Destination IP Address field in the IPv6
header. Thus, as the datagram is forwarded, the Segments Left field grows smaller,
and the list of addresses in the header reflects the node addresses that forwarded
the datagram. The forwarding procedure is better understood with an example
(see Figure 5-9).
Figure 5-9
Using an IPv6 Routing header (RH0), the sender (S) is able to direct the datagram
through the intermediate nodes R2 and R3. The other nodes traversed are determined by
the normal IPv6 routing. Note that the destination address in the IPv6 header is updated
at each hop specified in the Routing header.
In Figure 5-9 we can see how the Routing header is processed by intermediate nodes. The sender (S) constructs the datagram with destination address R1
and a Routing header (type 0) containing the addresses R2, R3, and D. The final
destination of the datagram is the last address in the list (D). The Segments Left
field (labeled “Left” in Figure 5-9) starts at 3. The datagram is forwarded toward
R1 automatically by S and R0. Because R0’s address is not present in the datagram,
no modifications of the Routing header or addresses are performed by R0. Upon
reaching R1, the destination address from the base header is swapped with the first
address listed in the Routing header and the Segments Left field is decremented.
As the datagram is forwarded, the process of swapping the destination
address with the next address from the address list in the Routing header repeats
until the last destination listed in the Routing header is reached.
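The per-waypoint swap described above can be sketched as follows. This is a simplified model using symbolic addresses (the function name is illustrative):

```python
# Sketch: RH0 processing at a listed waypoint. The node whose address matches
# the current destination swaps that destination with the next address in the
# address vector and decrements Segments Left.

def process_rh0(dest, addresses, segments_left):
    """dest: current Destination IP Address; addresses: RH0 address vector."""
    if segments_left == 0:
        return dest, addresses, 0          # header fully processed
    i = len(addresses) - segments_left     # index of the next waypoint
    new_dest = addresses[i]
    addresses[i] = dest                    # record the address just visited
    return new_dest, addresses, segments_left - 1

# Sender S builds destination R1 with vector [R2, R3, D], Segments Left = 3,
# as in Figure 5-9. Each waypoint swaps and decrements:
dest, vec, left = "R1", ["R2", "R3", "D"], 3
while left > 0:
    dest, vec, left = process_rh0(dest, vec, left)
    print(dest, vec, left)
```

When the loop finishes, the destination is D and the vector holds the addresses of the nodes that forwarded the datagram, as the text describes.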
We can arrange to include a Routing header with a simple command-line
option to the ping6 command in Windows XP (Windows Vista and later include
only the ping command, which incorporates IPv6 support):
C:\> ping6 -r -s 2001:db8::100 2001:db8::1
This command arranges to use the source address 2001:db8::100 when sending a
ping request to 2001:db8::1. The -r option arranges for a Routing header (RH0)
to be included. We can see the outgoing request using Wireshark (see Figure 5-10).
Figure 5-10
The ping request appears as an ICMPv6 Echo Request in Wireshark. The IPv6 header
includes a Next Header field indicating that the packet contains a type 0 Routing header,
followed by an ICMPv6 header. The number of segments in the RH0 left to be processed
is one (2001:db8::100).
The ping message appears as an ICMPv6 Echo Request packet (see Chapter
8). By following the Next Header field values, we can see that the base header is
followed by a Routing header. In the Routing header, we can see that the type is
0 (indicating an RH0), and there is one segment (hop) left to process. The hop is
specified by the first slot in the address list (number 0): 2001:db8::100.
As mentioned previously, RH0 has been deprecated by [RFC5095] because of
a security concern that allows RH0 to be used to increase the effectiveness of DoS
attacks. The problem is that RH0 allows the same address to be specified in multiple locations within the Routing header. This can lead to traffic being forwarded
many times between two or more hosts or routers along a particular path. The
potentially high traffic loads that can be created along particular paths in the network can cause disruption to other traffic flows competing for bandwidth across
the same path. Consequently, RH0 has been deprecated, and RH2 remains as the
sole Routing header supported by IPv6. RH2 is equivalent to RH0 except it has
room for only a single address and uses a different value in the Routing Type field.
Fragment Header
The Fragment header is used by an IPv6 source when sending a datagram larger
than the path MTU to the datagram’s intended destination. Path MTU and how
it is determined are discussed in more detail in Chapter 13, but 1280 bytes is a
network-wide link-layer minimum MTU for IPv6 (see section 5 of [RFC2460]). In
IPv4, any host or router can fragment a datagram if it is too large for the MTU on
the next hop, and fields within the second 32-bit word of the IPv4 header indicate
the fragmentation information. In IPv6, only the sender of the datagram is permitted to perform fragmentation, and in such cases a Fragment header is added.
The Fragment header includes the same information as is found in the IPv4
header, but the Identification field is 32 bits instead of the 16 that are used for IPv4.
The larger field provides the ability for more fragmented packets to be outstanding in the network simultaneously. The Fragment header uses the format shown
in Figure 5-11.
Figure 5-11
The IPv6 Fragment header contains a 32-bit Identification field (twice as large as the Identification field in IPv4). The M bit field indicates whether the fragment is the last of an
original datagram. As with IPv4, the Fragment Offset field gives the offset of the payload
into the original datagram in 8-byte units.
Referring to Figure 5-11, the Reserved field and 2-bit Res field are both zero
and ignored by receivers. The Fragment Offset field indicates where the data that
follows the Fragment header is located, as a positive offset in 8-byte units, relative
to the “fragmentable part” (see the next paragraph) of the original IPv6 datagram.
The M bit field, if set to 1, indicates that more fragments are contained in the
datagram. A value of 0 indicates that the fragment contains the last bytes of the
original datagram.
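The fixed 8-byte layout of Figure 5-11 can be sketched as a short decoder. This is an illustrative parser, not drawn from the book; the field widths (8-bit Next Header, 8-bit Reserved, 13-bit Fragment Offset, 2-bit Res, 1-bit M, 32-bit Identification) follow the figure:

```python
import struct

# Sketch: decoding the 8-byte IPv6 Fragment header (Figure 5-11).
# Next Header (8 bits), Reserved (8), Fragment Offset (13), Res (2), M (1),
# Identification (32).

def decode_fragment_header(data):
    next_header, _reserved, off_flags, ident = struct.unpack("!BBHI", data[:8])
    fragment_offset = off_flags >> 3      # high 13 bits, in 8-byte units
    more_fragments  = off_flags & 0x1     # low-order bit is the M field
    return next_header, fragment_offset, more_fragments, ident

# A fragment like the second one in the Wireshark example later in the text:
# next header ICMPv6 (58), offset 181 (byte 1448), M = 1, Identification = 2.
hdr = struct.pack("!BBHI", 58, 0, (181 << 3) | 1, 2)
print(decode_fragment_header(hdr))   # (58, 181, 1, 2)
```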
The datagram serving as input to the fragmentation process is called the
“original packet” and consists of two parts: the “unfragmentable part” and the
“fragmentable part.” The unfragmentable part includes the IPv6 header and any
included extension headers required to be processed by intermediate nodes to the
destination (i.e., all headers up to and including the Routing header if present;
otherwise, the Hop-by-Hop Options extension header, if present). The fragmentable
part constitutes the remainder of the datagram (i.e., Destination Options header,
upper-layer headers, and payload data).
When the original packet is fragmented, multiple fragment packets are produced, each of which contains a copy of the unfragmentable part of the original packet, but for which each IPv6 header has the Payload Length field altered to
reflect the size of the fragment packet it describes. Following the unfragmentable
part, each new fragment packet contains a Fragment header with an appropriately
assigned Fragment Offset field (e.g., the first fragment contains offset 0) and a copy
of the original packet’s Identification field. The last fragment has its M (More Fragments) bit field set to 0.
The following example illustrates the way an IPv6 source might fragment a
datagram. In the example shown in Figure 5-12, a payload of 3960 bytes is fragmented such that no fragment’s total packet size exceeds 1500 bytes (a typical MTU
for Ethernet), yet the fragment data sizes still are arranged to be multiples of 8 bytes.
Figure 5-12 An example of IPv6 fragmentation where a 3960-byte payload is split into three fragment packets of size 1448 bytes or less. Each fragment contains a Fragment header with
the identical Identification field. All but the last fragment have the More Fragments field
(M) set to 1. The offset is given in 8-byte units—the last fragment, for example, contains data beginning at offset (362 * 8) = 2896 bytes from the beginning of the original
packet’s data. The scheme is similar to fragmentation in IPv4.
In Figure 5-12 we see how the larger original packet has been fragmented
into three smaller packets, each containing a Fragment header. The IPv6 header’s
Payload Length field is modified to reflect the size of the data and newly formed
Fragment header. The Fragment header in each fragment contains a common Identification field, and the sender ensures that no two distinct original packets are assigned
the same field value within the expected lifetime of a datagram on the network.
The Offset field in the Fragment header is given in 8-byte units, so fragmentation is performed at 8-byte boundaries, which is why the first and second fragments
contain 1448 data bytes instead of 1452. Thus, every fragment except possibly the last carries a multiple of 8 bytes of data. The receiver must ensure that all fragments of an original datagram have been received before performing reassembly. The reassembly procedure
aggregates the fragments, forming the original datagram. As with fragmentation in
IPv4 (see Chapter 10), fragments may arrive out of order at the receiver but are reassembled in order to form a datagram that is given to other protocols for processing.
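The sizing arithmetic behind Figure 5-12 can be sketched as follows (an illustrative helper, assuming a 40-byte IPv6 header and 8-byte Fragment header per fragment):

```python
# Sketch: how a source might size IPv6 fragments, reproducing the numbers in
# Figure 5-12: a 3960-byte payload, 40-byte IPv6 header, 8-byte Fragment
# header, and a 1500-byte MTU.

def plan_fragments(payload_len, mtu=1500, ipv6_hdr=40, frag_hdr=8):
    max_data = (mtu - ipv6_hdr - frag_hdr) // 8 * 8   # round to 8-byte units
    frags, offset = [], 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        more = 1 if offset + size < payload_len else 0
        frags.append((offset // 8, size, more))       # (Offset, data len, M)
        offset += size
    return frags

for offset_units, size, m in plan_fragments(3960):
    print("offset", offset_units, "len", size, "M", m)
```

Running this yields the three fragments in the figure: 1448 bytes at offset 0, 1448 bytes at offset 181, and 1064 bytes at offset 362 with M set to 0.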
We can see the construction of an IPv6 fragment using this command on Windows 7:
C:\> ping -l 3952 ff01::2
Figure 5-13 shows the Wireshark output of the activity on the network as it runs.
Figure 5-13
The ping program generates ICMPv6 packets (see Chapter 8) containing 3960 IPv6
payload bytes in this example. These packets are fragmented to produce three packet
fragments, each of which is small enough to fit in the Ethernet MTU size of 1500 bytes.
In Figure 5-13 we see the fragments constituting four ICMPv6 Echo Request
messages sent to the IPv6 multicast address ff01::2. Each request requires fragmentation because the -l 3952 option indicates that 3952 data bytes are to be carried in the data area of each ICMPv6 message (leading to an IPv6 payload length
of 3960 bytes due to the 8-byte ICMPv6 header). The IPv6 source address is link-local. To determine the target’s link-layer multicast address, a mapping procedure
specific to IPv6 is performed, described in Chapter 9. The ICMPv6 Echo Request
(generated by the ping program) spans several fragments, which Wireshark reassembles to display once it has processed all the constituent fragments. Figure 5-14
shows the second fragment in more detail.
Figure 5-14
The second fragment of an ICMPv6 Echo Request contains 1448 IPv6 payload bytes
including the 8-byte Fragment header. The presence of the Fragment header indicates
that the overall datagram was fragmented at the source, and the Offset field of 181 indicates that this fragment contains data starting at byte offset 1448. The More Fragments
bit field being set indicates that other fragments are needed to reassemble the datagram.
All fragments from the same original datagram contain the same Identification field (2
in this case).
In Figure 5-14 we see the IPv6 header, with payload length 1448 bytes, as
expected. The Next Header field contains the value 44 (0x2c) we saw in Table 5-5,
indicating that a Fragment header follows the IPv6 header. The Fragment header
indicates that the following header is for ICMPv6, meaning there are no more
extension headers. Also, the Offset field is 181, meaning this fragment contains
data at byte offset 1448 in the original datagram. We know it is not the last fragment because the More Fragments field is set (displayed as Yes by Wireshark). Figure 5-15 shows the final fragment of the initial ICMPv6 Echo Request datagram.
Figure 5-15 The last fragment of the first ICMPv6 Echo Request datagram has an offset of 362 * 8
= 2896 and payload length of 1072 bytes (1064 bytes of the original datagram’s payload
plus 8 bytes of Fragment header). The More Fragments bit field being set to 0 indicates
that this is the last fragment, and the original datagram’s total payload length is 2896
+ 1064 = 3960 bytes (3956 bytes of ICMP data plus 8 bytes for the ICMPv6 header; see
Chapter 8).
In Figure 5-15 we see that the Offset field has the value 362, but this is in 8-byte
units, meaning that the byte offset relative to the original datagram is 362 * 8 =
2896. The Payload Length field has the value 1072, which includes 8 bytes for the Fragment header. Wireshark computes the fragmentation pattern for us, indicating
that the first and second fragments contained the first and second sets of 1448
bytes, and the final fragment contained 1064. All in all, the fragmentation process
added 40*2 + 8*3 = 104 bytes to be carried by the network layer (two additional
IPv6 headers plus an 8-byte Fragment header for each fragment). If we add linklayer overhead, the total comes to 104 + (2*18) = 140 bytes. (Each new Ethernet
frame includes a 14-byte header and a 4-byte CRC.)
IP Forwarding
Conceptually, IP forwarding is simple, especially for a host. If the destination is
directly connected to the host (e.g., a point-to-point link) or on a shared network
(e.g., Ethernet), the IP datagram is sent directly to the destination—a router is not
required or used. Otherwise, the host sends the datagram to a single router (called
the default router) and lets the router deliver the datagram to its destination. This
simple scheme handles most host configurations.
In this section we investigate the details of this simple situation and also how
IP forwarding works when the situation is not as simple. We begin by noting that
most hosts today can be configured to be routers as well as hosts, and many home
networks use an Internet-connected PC to act as a router (and also a firewall, as
we discuss in Chapter 7). What differentiates a host from a router to IP is how IP
datagrams are handled: a host never forwards datagrams it does not originate,
whereas routers do.
In our general scheme, the IP protocol can receive a datagram either from
another protocol on the same machine (TCP, UDP, etc.) or from a network interface. The IP layer has some information in memory, usually called a routing table or
forwarding table, which it searches each time it receives a datagram to send. When a
datagram is received from a network interface, IP first checks if the destination IP
address is one of its own IP addresses (i.e., one of the IP addresses associated with
one of its network interfaces) or some other address for which it should receive
traffic such as an IP broadcast or multicast address. If so, the datagram is delivered
to the protocol module specified by the Protocol field in the IPv4 header or Next
Header field in the IPv6 header. If the datagram is not destined for one of the IP
addresses being used locally by the IP module, then (1) if the IP layer was configured to act as a router, the datagram is forwarded (that is, handled as an outgoing
datagram as described in Section 5.4.2); or (2) the datagram is silently discarded.
Under some circumstances (e.g., no route is known in case 1), an ICMP message
may be sent back to the source indicating an error condition.
Forwarding Table
The IP protocol standards do not dictate the precise data required to be in a forwarding table, as this choice is left up to the implementer of the IP protocol. Nevertheless, several key pieces of information are generally required to implement
the forwarding table for IP, and we shall discuss these now. Each entry in the
routing or forwarding table contains at least the following information fields:
Section 5.4 IP Forwarding
• Destination: This contains a 32-bit field (or 128-bit field for IPv6) used for
matching the result of a masking operation (see the next bulleted item).
The destination can be as simple as zero, for a “default route” covering all
destinations, or as long as the full length of an IP address, in the case of a
“host route” that describes only a single destination.
• Mask: This contains a 32-bit field (128-bit field for IPv6) applied as a bitwise
AND mask to the destination IP address of a datagram being looked up in
the forwarding table. The masked result is compared with the set of destinations in the forwarding table entries.
• Next-hop: This contains the 32-bit IPv4 address or 128-bit IPv6 address of
the next IP entity (router or host) to which the datagram should be sent. The
next-hop entity is typically on a network shared with the system performing the forwarding lookup, meaning the two share the same network prefix
(see Chapter 2).
• Interface: This contains an identifier used by the IP layer to reference the
network interface that should be used to send the datagram to its next hop.
For example, it could refer to a host’s 802.11 wireless interface, a wired Ethernet interface, or a PPP interface associated with a serial port. If the forwarding system is also the sender of the IP datagram, this field is used in
selecting which source IP address to use on the outgoing datagram.
IP forwarding is performed on a hop-by-hop basis. As we can see from this
forwarding table information, the routers and hosts do not contain the complete
forwarding path to any destination (except, of course, those destinations that are
directly connected to the host or router). IP forwarding provides the IP address of
only the next-hop entity to which the datagram is sent. It is assumed that the next
hop is really “closer” to the destination than the forwarding system is, and that
the next-hop router is directly connected to (i.e., shares a common network prefix with) the forwarding system. It is also generally assumed that no “loops” are
constructed between the next hops so that a datagram does not circulate around
the network until its TTL or hop limit expires. The job of ensuring correctness of
the routing table is given to one or more routing protocols. Many different routing
protocols are available to do this job, including RIP, OSPF, BGP, and IS-IS, to name a
few (see, for example, [DC05] for more detail on routing protocols).
IP Forwarding Actions
When the IP layer in a host or router needs to send an IP datagram to a next-hop
router or host, it first examines the destination IP address (D) in the datagram.
Using the value D, the following longest prefix match algorithm is executed on the
forwarding table:
1. Search the table for all entries for which the following property holds:
(D AND mj) = dj, where mj is the value of the mask field associated with the forwarding entry ej having index j, and dj is the value of the destination field
associated with ej. This means that the destination IP address D is bitwise
ANDed with the mask in each forwarding table entry (mj), and the result is
compared against the destination in the same forwarding table entry (dj).
If the property holds, the entry (ej here) is a “match” for the destination IP
address. When a match happens, the algorithm notes the entry index (j
here) and how many bits in the mask mj were set to 1. The more bits set to
1, the “better” the match.
2. The best matching entry ek (i.e., the one with the largest number of 1 bits in
its mask mk) is selected, and its next-hop field nk is used as the next-hop IP
address in forwarding the datagram.
If no matches in the forwarding table are found, the datagram is undeliverable.
If the undeliverable datagram was generated locally (on this host), a “host unreachable” error is normally returned to the application that generated the datagram. On
a router, an ICMP message is normally sent back to the host that sent the datagram.
In some circumstances, more than one entry may match an equal number of 1
bits. This can happen, for example, when more than one default route is available
(e.g., when attached to more than one ISP, called multihoming). The end-system
behavior in such cases is not set by standards and is instead specific to the operating system’s protocol implementation. A common behavior is for the system to simply choose the first match. More sophisticated systems may attempt to load-balance
or split traffic across the multiple routes. Studies suggest that multihoming can be
beneficial not only for large enterprises, but also for residential users [THL06].
To get a solid understanding of how IP forwarding works both in the simple local
environment (e.g., same LAN) and in the somewhat more complicated multihop
(global Internet) environment, we look at two cases. The first case, where all systems are using the same network prefix, is called direct delivery, and the other case
is called indirect delivery (see Figure 5-16).

Direct Delivery
First consider a simple example. Our Windows XP host (with IPv4 address S and
MAC address S), which we will just call S, has an IP datagram to send to our Linux
host (IPv4 address D, MAC address D), which we will call D. These systems are
interconnected using a switch. Both hosts are on the same Ethernet (see inside
front cover). Figure 5-16 (top) shows the delivery of the datagram. When the IP
layer in S receives a datagram to send from one of the upper layers such as TCP or
UDP, it searches its forwarding table. We would expect the forwarding table on S
to contain the information shown in Table 5-8.
Section 5.4 IP Forwarding
Figure 5-16 Direct delivery does not require the presence of a router—IP datagrams are encapsulated in a link-layer frame that directly identifies the source and destination. Indirect
delivery involves a router—data is forwarded to the router using the router’s link-layer
address as the destination link-layer address. The router’s IP address does not appear
in the IP datagram (unless the router itself is the source or destination, or when source
routing is used).
In Table 5-8, the destination IPv4 address D matches both the first and second forwarding table entries. Because it matches the second entry better (25 bits instead of none), the "gateway" or next-hop address is the address S itself. Thus, the gateway portion of the entry contains the address of the sending host's own network interface (no router is referenced), indicating that direct delivery is to be used to send the datagram.
Table 5-8
The (unicast) IPv4 forwarding table at host S contains only two entries. Host S is configured with an IPv4 address and a 25-bit subnet mask. Datagrams destined for addresses covered by that prefix use the second forwarding table entry and are sent using direct delivery. All other datagrams use the first entry and are given to router R.
The Internet Protocol (IP)
The datagram is encapsulated in a lower-layer frame destined for the target
host D. If the lower-layer address of the target host is unknown, the ARP protocol
(for IPv4; see Chapter 4) or Neighbor Solicitation (for IPv6; see Chapter 8) operation may be invoked at this point to determine the correct lower-layer address, D.
Once known, the destination address in the datagram remains D's IPv4 address, and the lower-layer address D is placed in the destination address field of the lower-layer header. The
switch delivers the frame to D based solely on the link-layer address D; it pays no
attention to the IP addresses.

Indirect Delivery
Now consider another example. Our Windows host has an IP datagram to send to a host elsewhere on the Internet. Figure 5-16 (bottom) shows the conceptual path of the datagram through four routers. First, the Windows machine searches its forwarding table but does not find a matching prefix on the local network. It uses its default route entry (which matches every destination, but with no 1 bits at all). The default entry indicates that the appropriate next-hop gateway is the "a side" of the router R1. This is a typical scenario for a home network.
Recall that in the direct delivery case, the source and destination IP addresses
correspond to those associated with the source and destination hosts. The same
is true for the lower-layer (e.g., Ethernet) addresses. In indirect delivery, the
IP addresses correspond to the source and destination hosts as before, but the
lower-layer addresses do not. Instead, the lower-layer addresses determine which
machines receive the frame containing the datagram on a per-hop basis. In this
example, the lower-layer address needed is the Ethernet address of the next-hop
router R1's a-side interface (the lower-layer address corresponding to that interface's IPv4 address). This is accomplished by ARP (or a Neighbor Solicitation request if this
example were using IPv6) on the network interconnecting S and R1. Once R1
responds with its a-side lower-layer address, S sends the datagram to R1. Delivery
from S to R1 takes place based on processing only the lower-layer headers (more
specifically, the lower-layer destination address). Upon receipt of the datagram, R1
checks its forwarding table. The information in Table 5-9 would be typical.
Table 5-9
The forwarding table at R1 indicates that address translation should be performed for outgoing traffic. The router has a private address on one side and a public address on the other. Address translation is used to make datagrams originating on the private network appear to the Internet as though they had been sent from the router's public address.
When R1 receives the datagram, it realizes that the datagram’s destination IP
address is not one of its own, so it forwards the datagram. Its forwarding table is
searched and the default entry is used. The default entry in this case has a next hop within the ISP servicing the network (R2's a-side interface). This address happens to lie within SBC's DSL network and carries a somewhat cumbersome DNS name.
Because this router is in the global Internet and the Windows machine’s source
address is a private address, R1 performs Network Address Translation (NAT) on the datagram to make it routable on the Internet. The NAT operation results in the datagram having a new source address corresponding to R1's b-side interface. Networks that do not use private addressing (e.g.,
ISPs and larger enterprises) avoid the last step and the original source address
remains unchanged. NAT is described in more detail in Chapter 7.
When router R2 (inside the ISP) receives the datagram, it goes through the
same steps that the local router R1 did (except for the NAT operation). If the datagram is not destined for one of its own IP addresses, the datagram is forwarded.
In this case, the router usually has not only a default route but several others,
depending on its connectivity to the rest of the Internet and its own local policies.
Note that IPv6 forwarding varies only slightly from conventional IPv4 forwarding. Aside from the larger addresses, IPv6 uses a slightly different mechanism (Neighbor Solicitation messages) to ascertain the lower-layer address of its
next hop. It is described in more detail in Chapter 8, as it is part of ICMPv6. In
addition, IPv6 has both link-local addresses and global addresses (see Chapter 2).
While global addresses behave like regular IP addresses, link-local addresses can
be used only on the same link. In addition, because all the link-local addresses
share the same IPv6 prefix (fe80::/10), a multihomed host may require user input
to determine which interface to use when sending a datagram destined for a link-local destination.
To illustrate the use of link-local addresses, we start with our Windows XP
machine, assuming IPv6 is enabled and operational:
C:\> ping6 fe80::204:5aff:fe9f:9e80
Pinging fe80::204:5aff:fe9f:9e80 with 32 bytes of data:
No route to destination.
Specify correct scope-id or use -s to specify source address.
C:\> ping6 fe80::204:5aff:fe9f:9e80%6
Pinging fe80::204:5aff:fe9f:9e80%6
from fe80::205:4eff:fe4a:24bb%6 with 32 bytes of data:
Reply from fe80::204:5aff:fe9f:9e80%6: bytes=32 time=1ms
Reply from fe80::204:5aff:fe9f:9e80%6: bytes=32 time=1ms
Reply from fe80::204:5aff:fe9f:9e80%6: bytes=32 time=1ms
Reply from fe80::204:5aff:fe9f:9e80%6: bytes=32 time=1ms
Ping statistics for fe80::204:5aff:fe9f:9e80%6:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 1ms, Maximum = 1ms, Average = 1ms
Here we see that failing to specify which interface to use for outbound link-local traffic results in an error. In Windows XP, we can specify either a scope ID or
a source address. In this example we specify the scope ID as an interface number
using the %6 extension to the destination address. This informs the system to use
interface number 6 as the correct interface when sending the ping traffic.
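The same requirement appears in the sockets API: an AF_INET6 socket address carries a scope (interface) ID alongside the address, and for a link-local destination it must be nonzero. A small sketch (the address and interface number below are just examples mirroring the transcript):

```python
import socket

def ipv6_sockaddr(addr: str, port: int, scope_id: int = 0):
    """Build the (addr, port, flowinfo, scope_id) 4-tuple used with
    AF_INET6 sockets, refusing ambiguous link-local destinations."""
    packed = socket.inet_pton(socket.AF_INET6, addr)
    # Link-local addresses all fall in fe80::/10.
    link_local = packed[0] == 0xFE and (packed[1] & 0xC0) == 0x80
    if link_local and scope_id == 0:
        raise ValueError("link-local address needs a scope (interface) ID")
    return (addr, port, 0, scope_id)
```

For example, `ipv6_sockaddr("fe80::204:5aff:fe9f:9e80", 0, 6)` corresponds to the `%6` suffix used above, while a global address needs no scope ID.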
To see the path taken to an IP destination, we can use the traceroute program (called tracert on Windows, which has a slightly different set of options)
with the -n option to not convert IP addresses to names:
Linux% traceroute -n
traceroute to (, 30 hops max, 38 byte packets
1 9.285 ms 8.404 ms 8.887 ms
2 8.412 ms 8.764 ms 8.661 ms
3 8.502 ms 8.995 ms 8.644 ms
4 8.705 ms 8.673 ms 9.014 ms
5 9.149 ms 9.057 ms 9.537 ms
6 9.680 ms 10.389 ms 11.003 ms
7 11.605 ms 37.699 ms 11.374 ms
8 13.449 ms 12.804 ms 13.126 ms
9 15.114 ms 15.020 ms 13.654 ms
MPLS Label=32307 CoS=5 TTL=1 S=0
10 16.011 ms 13.555 ms 13.167 ms
11 15.594 ms 15.497 ms 16.093 ms
12 15.103 ms 14.769 ms 15.128 ms
13 77.501 ms 77.593 ms 76.974 ms
14 77.906 ms 78.101 ms 78.398 ms
15 81.146 ms 81.281 ms 80.918 ms
16 77.988 ms 78.007 ms 77.947 ms
17 81.912 ms 82.231 ms 83.115 ms
This program lists each of the IP hops traversed while sending a series of datagrams to the destination. The traceroute program uses a combination of UDP datagrams (with increasing TTL values) and ICMP messages (used to detect each hop when the UDP datagrams expire) to accomplish its task. Three UDP packets are sent at each TTL value, providing three round-trip-time measurements to each hop. Traditionally, traceroute has carried only IP information, but here we also see the following line:
MPLS Label=32307 CoS=5 TTL=1 S=0
Section 5.5 Mobile IP
This indicates that Multiprotocol Label Switching (MPLS) [RFC3031] is being used
on the path, and the label ID is 32307, class of service is 5, TTL is 1, and the message
is not the bottom of the MPLS label stack (S = 0; see [RFC4950]). MPLS is a form
of link-layer network capable of carrying multiple network-layer protocols. Its
interaction with ICMP is described in [RFC4950], and its handling of IPv4 packets
containing options is described in [RFC6178]. Many network operators use it for
traffic engineering purposes (i.e., controlling where network traffic flows through
their networks).
In the examples we have just seen there are a few key points that should be kept in
mind regarding the operation of IP unicast forwarding:
1. Most of the hosts and routers in this example used a default route consisting of a single forwarding table entry of this form: mask 0, destination 0,
next hop <some IP address>. Indeed, most hosts and most routers at the
edge of the Internet can use a default route for everything other than destinations on local networks because there is only one interface available that
provides connectivity to the rest of the Internet.
2. The source and destination IP addresses in the datagram never change
once in the regular Internet. This is always the case unless either source
routing is used, or when other functions (such as NAT, as in the example)
are encountered along the data path. Forwarding decisions at the IP layer
are based on the destination address.
3. A different lower-layer header is used on each link that uses addressing,
and the lower-layer destination address (if present) always contains the
lower-layer address of the next hop. Therefore, lower-layer headers routinely change as the datagram is moved along each hop toward its destination. In our example, on both Ethernet LANs the datagram was encapsulated in a link-layer header containing the next hop's Ethernet address, but the DSL link did not use such addressing. Lower-layer addresses are normally obtained using ARP (see Chapter
4) for IPv4 and ICMPv6 Neighbor Discovery for IPv6 (see Chapter 8).
Mobile IP
So far we have discussed the conventional ways that IP datagrams are forwarded
through the Internet, as well as private networks that use IP. One assumption of the
model is that a host’s IP address shares a prefix with its nearby hosts and routers. If
such a host should move its point of network attachment, yet remain connected to
the network at the link layer, all of its upper-layer (e.g., TCP) connections would fail
because either its IP address would have to be changed or routing would not deliver
packets to the (moved) host properly. A multiyear (actually, multidecade!) effort
known as Mobile IP addresses this issue. (Other protocols have also been suggested;
see [RFC6301].) Although there are versions of Mobile IP for both IPv4 [RFC5944]
(MIPv4) and IPv6 [RFC6275], we focus on Mobile IPv6 (called MIPv6) because it is
more flexible and somewhat easier to explain. Also, it currently appears more likely
to be deployed in the quickly growing smartphone market. Note that we do not
discuss MIPv6 comprehensively; it is sufficiently complex to merit a book on its own
(e.g., [RC05]). Nonetheless, we will cover its basic concepts and principles.
Mobile IP is based on the idea that a host has a “home” network but may visit
other networks from time to time. While at home, ordinary forwarding is performed, according to the algorithms discussed in this chapter. When away from
home, the host keeps the IP address it would ordinarily use at home, but some
special routing and forwarding tricks are used to make the host appear to the
network, and to the other systems with which it communicates, as though it is
attached to its home network. The scheme depends on a special type of router
called a “home agent” that helps provide routing for mobile nodes.
Most of the complexity in MIPv6 involves signaling messages and how they
are secured. These messages use various forms of the Mobility extension header
(Next Header field value 135 in Table 5-5, often just called the mobility header), so
Mobile IP is, in effect, a special protocol of its own. The IANA maintains a registry
of the various header types (17 are reserved currently), along with many other
parameters associated with MIPv6 [MP]. We shall focus on the basic messages
specified in [RFC6275]. Other messages are used to implement “fast handovers”
[RFC5568], changing of the home agent [RFC5142], and experiments [RFC5096].
To understand MIPv6, we begin by introducing the basic model for IP mobility
and the associated terminology.
The Basic Model: Bidirectional Tunneling
Figure 5-17 shows the entities involved in making MIPv6 work. Much of the terminology also applies to MIPv4 [RFC5944]. A host that might move is called a mobile
node (MN), and the hosts with which it is communicating are called correspondent nodes (CNs). The MN is given an IP address chosen from the network prefix
used in its home network. This address is known as its home address (HoA). When
it travels to a visited network, it is given an additional address, called its care-of
address (CoA). In the basic model, whenever a CN communicates with an MN,
the traffic is routed through the MN’s home agent (HA). HAs are a special type of
router deployed in the network infrastructure like other important systems (e.g.,
routers and Web servers). The association between an MN’s HoA and its CoA is
called a binding for the MN.
The basic model (see Figure 5-17) works in cases where an MN’s CNs do not
engage in the MIPv6 protocol. This model is also used for network mobility (called
“NEMO” [RFC3963]), when an entire network is mobile. When the MN (or mobile
Figure 5-17
Mobile IP supports the ability of nodes to change their point of network attachment and keep
network connections operating. The mobile node’s home agent helps to forward traffic for mobiles
it serves and also plays a role in route optimization, which can substantially improve routing performance by allowing mobile and correspondent nodes to communicate directly.
network router) attaches to a new point in the network, it receives its CoA and
sends a binding update message to its HA. The HA responds with a binding acknowledgment. Assuming that all goes well, traffic between the MN and CNs is thereafter routed through the MN’s HA using a two-way form of IPv6 packet tunneling
[RFC2473] called bidirectional tunneling. These messages are ordinarily protected
using IPsec with the Encapsulating Security Payload (ESP) (see Chapter 18). Doing so
ensures that an HA is not fooled into accepting a binding update from a fake MN.
Route Optimization (RO)
Bidirectional tunneling makes MIPv6 work in a relatively simple way, and with
CNs that are not Mobile-IP-aware, but the routing can be extremely inefficient,
especially if the MN and CNs are near each other but far away from the MN’s HA.
To improve upon the inefficient routing that may occur in basic MIPv6, a process
called route optimization (RO) can be used, provided it is supported by the various
nodes involved. As we shall see, the methods used to ensure that RO is secure and
useful are somewhat complicated. We shall sketch only its basic operations. For a
more detailed discussion, see [RFC6275] and [RFC4866]. For a discussion of the
design rationale behind RO security, see [RFC4225].
When used, RO involves a correspondent registration whereby an MN notifies
its CNs of its current CoA to allow routing to take place without help from the HA.
RO operates in two parts: one part involves establishing and maintaining the registration bindings; another involves the method used to exchange datagrams once
all bindings are in place. To establish a binding with its CNs, an MN must prove
to each CN that it is the proper MN. This is accomplished by a Return Routability
Procedure (RRP). The messages that support RRP are not protected using IPsec as
are the messages between an MN and its HA. Expecting IPsec to work between
an MN and any CN was believed to be too unreliable (IPv6 requires IPsec support but does not require its use). Although the RRP is not as strong as IPsec, it
is simpler and covers most of the security threats of concern to the designers of
Mobile IP.
The RRP uses the following mobility messages, all of which are subtypes of the
IPv6 Mobility extension header: Home Test Init (HoTI), Home Test (HoT), Care-of
Test Init (CoTI), Care-of Test (CoT). These messages verify to a CN that a particular MN is reachable both at its home address (HoTI and HoT messages) and at its
care-of addresses (CoTI and CoT messages). The protocol is shown in Figure 5-18.
Figure 5-18
The return routability check procedure used in sending binding updates from an MN
to a CN in order to enable route optimization. The check aims to demonstrate to a CN
that an MN is reachable at both its home address and its care-of address. In this figure,
messages routed indirectly are indicated with dashed arrows. The numbers indicate
the ordering of messages, although the HoTI and CoTI messages can be sent by an MN
in parallel.
To understand the RRP, we take the simplest case of a single MN, its HA, and
a CN as shown in Figure 5-18. The MN begins by sending both a HoTI and CoTI
message to the CN. The HoTI message is forwarded through the HA on its way
to the CN. The CN receives both messages in some order and responds with a
HoT and CoT message to each, respectively. The HoT message is sent to the MN
via the HA. Inside these messages are random bit strings called tokens, which the
MN uses to form a cryptographic key (see Chapter 18 for a discussion of the basics
of cryptography and keys). The key is then used to form authenticated binding
updates that are sent to the CN. If successful, the route can be optimized and data
can flow directly between an MN and a CN, as shown in Figure 5-19.
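The key-formation step can be sketched as follows. Per [RFC6275], the binding management key Kbm is derived by hashing the concatenation of the home and care-of keygen tokens with SHA-1; the authenticator shown here is simplified (the real MAC input also covers the care-of address, the CN address, and the mobility header data).

```python
import hashlib
import hmac

def binding_mgmt_key(home_token: bytes, careof_token: bytes) -> bytes:
    """Kbm = SHA-1(home keygen token | care-of keygen token) [RFC6275]."""
    return hashlib.sha1(home_token + careof_token).digest()

def binding_update_auth(kbm: bytes, message: bytes) -> bytes:
    """Authenticator for a binding update: first 96 bits of HMAC-SHA1
    keyed with Kbm (simplified input, as noted above)."""
    return hmac.new(kbm, message, hashlib.sha1).digest()[:12]
```

Because the HoT token travels via the HA and the CoT token travels directly, only a node actually reachable at both addresses can compute Kbm, which is what the return routability check relies on.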
Figure 5-19
Once a binding is established between an MN and a CN, data flows directly between
them. The direction from MN to CN uses an IPv6 Home Address Destination option.
The reverse direction uses a type 2 Routing header (RH2).
Once a binding has been established successfully, data may flow directly
between an MN and its CNs without the inefficiency of bidirectional tunneling.
This is accomplished using an IPv6 Destination option for traffic moving from
the MN to a CN and a type 2 Routing header (RH2) for traffic headed in the
reverse direction, as detailed in Figure 5-19. The packets from MN to CN include
the MN's CoA in the Source IP Address field, which avoids problems associated with
ingress filtering [RFC2827] that might cause packets containing the MN’s HoA in
the Source IP Address field to be dropped. The MN’s HoA, contained in the Home
Address option, is not processed by routers, so it passes through to the CN without modification. On the return path, packets are destined for the MN’s CoA. After
successfully receiving a returning packet, the MN processes the extension headers
and replaces the destination IP address with the HoA contained in the RH2. The
resulting packet is delivered to the rest of the MN’s protocol stack, so applications
“believe” they are using the MN’s HoA instead of its CoA for establishing connections and other actions.
There are a number of issues with Mobile IP. It is designed to address a certain
type of mobility in which a node’s IP address may change while the underlying
link layer remains more or less connected. This type of usage is not common for
portable computers, which tend to shut down or be put to sleep when being moved
from place to place. The usage model motivating Mobile IP (and MIPv6 in particular) more likely involves the large number of smartphones that use IP. Such devices
may be running real-time applications (e.g., VoIP) that have latency requirements.
Consequently, several approaches are being explored to reduce the amount of time
required to execute binding updates. These include fast handovers [RFC5568], a
modification to MIPv6 called Hierarchical MIPv6 (HMIPv6) [RFC5380], and a
modification in which the mobile signaling ordinarily required of an MN is performed by a proxy (called proxy MIPv6 or PMIPv6 [RFC5213]).
Host Processing of IP Datagrams
Although routers do not ordinarily have to consider which IP addresses to place
in the Source IP Address and Destination IP Address fields of the packets they forward, hosts must consider both. Applications such as Web browsers may attempt
to make connections to a named host or server that can have multiple addresses.
The client system making such connections may also have multiple addresses.
Thus, there is some question as to which address (and version of IP) should be
used when sending a datagram. A more subtle point we shall explore is whether
to accept traffic destined for a local IP address if it arrives on the wrong interface
(i.e., one that is not configured with the destination address present in a received datagram).
Host Models
Although it may appear to be a straightforward decision to determine whether a
received unicast datagram matches one of a host’s IP addresses and should be processed, this decision depends on the host model of the receiving system [RFC1122]
and is most relevant for multihomed hosts. There are two host models, the strong
host model and the weak host model. In the strong host model, a datagram is accepted
for delivery to the local protocol stack only if the IP address contained in the Destination IP Address field matches one of those configured on the interface upon which
the datagram arrived. In systems implementing the weak host model, the opposite is true—a datagram carrying a destination address matching any of the local
addresses may arrive on any interface and is processed by the receiving protocol
stack, irrespective of the network interface upon which it arrived. Host models also
apply to sending behavior. That is, a host using the strong host model sends datagrams from a particular interface only if one of the interface’s configured addresses
matches the Source IP Address field in the datagram being sent.
Section 5.6 Host Processing of IP Datagrams
Figure 5-20 illustrates a case where the host model becomes important. In
this example, two hosts (A and B) are connected through the global Internet but
also through a local network. If host A is set up to conform to the strong host
model, packets it receives from the Internet destined for its local-network address, or packets from the local network destined for its Internet-facing address, are dropped. This situation can arise, for example, if host B is configured to obey the weak host model. It may choose to send packets to one of A's addresses using the local network (e.g., because doing so may be cheaper or
faster). This situation seems unfortunate, as A receives what appear to be perfectly
legitimate packets, yet drops them merely because it is operating according to the
strong host model. So a reasonable question would be: Why is it ever a good idea
to use the strong host model?
Figure 5-20
Hosts may be connected by more than one interface. In such cases, they must decide
which addresses to use for the Source IP Address and Destination IP Address fields of the
packets they exchange. The addresses used result from a combination of each host's forwarding table, application of an address selection algorithm [RFC3484], and whether
hosts are operating using a weak or strong host model.
The attraction of using the strong host model relates to a security concern.
Referring to Figure 5-20, consider a malicious user on the Internet who injects a
packet destined for one of B's addresses. This packet could also include a forged ("spoofed") source IP address belonging to A. If the Internet cooperates in routing such a packet to B, applications running on B may be tricked into believing
they have received local traffic originating from A. This can have significant negative consequences if such applications make access control decisions based on the
source IP address.
The host model, for both sending and receiving behavior, can be configured
in some operating systems. In Windows (Vista and later), strong host behavior is
the default for sending and receiving for IPv4 and IPv6. In Linux, the IP behavior
defaults to the weak host model. BSD (including Mac OS X) uses the strong host
model. In Windows, the following commands can be used to configure weak host
receive and send behavior, respectively:
C:\> netsh interface ipvX set interface <ifname> weakhostreceive=Yabled
C:\> netsh interface ipvX set interface <ifname> weakhostsend=Yabled
For these commands, <ifname> is replaced with the appropriate interface name;
X is replaced with either 4 or 6, depending on which version of IP is being configured; and Y is replaced with either en or dis, depending on whether weak
behavior is to be enabled or disabled, respectively.
Address Selection
When a host sends an IP datagram, it must decide which of its IP addresses to
place in the Source IP Address field of the outgoing datagram, and which destination address to use for a particular destination host if multiple addresses for it are
known. In some cases the source address is already known because it is provided
by an application or because the packet is being sent in response to a previously
received packet on the same connection (see, for example, Chapter 13 for how
addresses are managed with TCP).
In modern IP implementations, the IP addresses used in the Source IP Address
and Destination IP Address fields of the datagram are selected using a set of procedures called source address selection and destination address selection. Historically,
most Internet hosts had only one IP address for external communication, so selecting the addresses was not terribly difficult. With the advent of multiple addresses
per interface and the use of IPv6 in which simultaneous use of addresses with
multiple scopes is normal, some procedure must be used. The situation is further
complicated when communication is to take place between two hosts that implement both IPv4 and IPv6 (“dual-stack” hosts; see [RFC4213]). Failure to select the
correct addresses can lead to asymmetric routing, unwanted filtering, or discarding of packets. Fixing such problems can be a challenge.
[RFC3484] gives the rules for selecting IPv6 default addresses; IPv4-only hosts
do not ordinarily have such complex issues. In general, applications can invoke
special API operations to override the default behavior, as suggested previously.
Even then, tricky deployment situations may still arise [RFC5220]. The default
rules in [RFC3484] are to prefer source/destination address pairs where the
addresses are of the same scope, to prefer smaller over larger scopes, to avoid the
use of temporary addresses when other addresses are available, and to otherwise
prefer pairs with the longest common prefix. Global addresses are preferred over
temporary addresses when available. The specification also includes a method of
providing "administrative override" to the default rules, but this is deployment-specific and we do not discuss it further.
The selection of default addresses is controlled by a policy table, present (at
least conceptually) in each host. It is a longest-matching-prefix lookup table, similar to a forwarding table used with IP routing. For an address A, a lookup in this
table produces a precedence value for A, P(A), and a label for A, L(A). A higher precedence value indicates greater preference. The labels are used for grouping of
similar address types. For example if L(S) = L(D), the algorithm prefers to use the
pair (S,D) as a source/destination pair. If no other policy is specified, [RFC3484]
suggests that the policy values from Table 5-10 be used.
Table 5-10
The default host policy table, according to [RFC3484]. Higher precedence values indicate
a greater preference.
Prefix           Precedence P()   Label L()
::1/128          50               0
::/0             40               1
2002::/16        30               2
::/96            20               3
::ffff:0:0/96    10               4
This table, or one configured at a site based upon administrative configuration parameters, is used to drive the address selection algorithm. The function
CPL(A,B) or “common prefix length” is the length, in bits, of the longest common prefix between IPv6 addresses A and B, starting from the left-most significant bit. The function S(A) is the scope of IPv6 address A mapped to a numeric
value with larger scopes mapping to larger values. If A is link-scoped and B is
global scope, then S(A) < S(B). The function M(A) maps an IPv4 address A to an
IPv4-mapped IPv6 address. Because the scope properties of IPv4 addresses are
based on the value of the address itself, the following relations need to be defined:
S(M(169.254.x.x)) = S(M(127.x.x.x)) < S(M(private address space)) < S(M(any other
address)). The notation Λ(A) is the lifecycle of the address (see Chapter 6). Λ(A) < Λ(B) if A is a deprecated address (i.e., one whose use is discouraged) and B is a preferred address (i.e., an address preferred for active use). Finally, H(A) is true if A is a home address and C(A) is true if A is a care-of address. These last two terms are used only in the context of Mobile IP.

The Source Address Selection Algorithm
The source address selection algorithm defines a candidate set CS(D) of potential
source addresses based on a particular destination address D. There is a restriction
that anycast, multicast, and the unspecified address are never in CS(D) for any D.
We shall use the notation R(A) to indicate the rank of address A in the set CS(D).
A higher rank (i.e., greater value of R(A)) for A versus B in CS(D), denoted R(A) >
R(B), means that A is preferred to B for use as a source address for reaching the
machine with address D. The notation R(A) *> R(B) means to assign A a higher
rank than B in CS(D). The notation I(D) indicates the interface selected (i.e., by
the forwarding longest matching prefix algorithm described previously) to reach
destination D. The notation @(i) is the set of addresses assigned to interface i. The
notation T(A) is the Boolean true if A is a temporary address (see Chapter 6) and
false otherwise.
The following rules are applied to establish a partial ordering between
addresses A and B in CS(D) for destination D:
1. Prefer same address: if A = D, R(A) *> R(B); if B = D, R(B) *> R(A).
2. Prefer appropriate scope: if S(A) < S(B), { if S(A) < S(D), R(B) *> R(A) else R(A) *> R(B) }; if S(B) < S(A), { if S(B) < S(D), R(A) *> R(B) else R(B) *> R(A) }.
3. Avoid deprecated addresses: if S(A) = S(B), { if Λ(A) < Λ(B), R(B) *> R(A) else
R(A) *> R(B) }.
4. Prefer home address: if H(A) and C(A) and ¬(C(B) and H(B)), R(A) *> R(B);
if H(B) and C(B) and ¬(C(A) and H(A)), R(B) *> R(A); if (H(A) and ¬C(A))
and (¬H(B) and C(B)), R(A) *> R(B); if (H(B) and ¬C(B)) and (¬H(A) and
C(A)), R(B) *> R(A).
5. Prefer outgoing interface: if A ∈ @(I(D)) and B ∉ @(I(D)), R(A) *> R(B); if B ∈ @(I(D)) and A ∉ @(I(D)), R(B) *> R(A).
6. Prefer matching label: if L(A) = L(D) and L(B) ≠ L(D), R(A) *> R(B); if L(B) =
L(D) and L(A) ≠ L(D), R(B) *> R(A).
7. Prefer nontemporary addresses: if T(B) and ¬T(A), R(A) *> R(B); if T(A) and
¬T(B), R(B) *> R(A).
8. Use longest matching prefix: if CPL(A,D) > CPL(B,D), R(A) *> R(B); if
CPL(B,D) > CPL(A,D), R(B) *> R(A).
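The pairwise rules above can be sketched in code. The following fragment is a simplified illustration only, not any operating system's actual implementation: it applies a subset of the rules (1: prefer same address, 5: prefer outgoing interface, 7: prefer nontemporary, 8: longest matching prefix) as successive tie-breakers. The `Candidate` class, `select_source`, and the sample addresses are all invented for this sketch.

```python
# Simplified sketch of source address selection, applying rules 1, 5, 7,
# and 8 above as successive tie-breakers.  Not a complete implementation.
import functools
import ipaddress
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    addr: ipaddress.IPv6Address
    on_outgoing_iface: bool   # is A in @(I(D))?
    temporary: bool           # T(A)

def cpl(a, b):
    """Common prefix length CPL(A,B): number of leading bits A and B share."""
    x = int(a) ^ int(b)
    return 128 - x.bit_length()

def select_source(cands, dst):
    def cmp(a, b):            # positive result means a is preferred to b
        # Rule 1: prefer same address.
        if a.addr == dst != b.addr: return 1
        if b.addr == dst != a.addr: return -1
        # Rule 5: prefer an address assigned to the outgoing interface I(D).
        if a.on_outgoing_iface != b.on_outgoing_iface:
            return 1 if a.on_outgoing_iface else -1
        # Rule 7: prefer nontemporary addresses.
        if a.temporary != b.temporary:
            return 1 if b.temporary else -1
        # Rule 8: use longest matching prefix with D.
        return cpl(a.addr, dst) - cpl(b.addr, dst)
    return max(cands, key=functools.cmp_to_key(cmp)).addr

dst = ipaddress.IPv6Address("2001:db8:1::1")
cands = [
    Candidate(ipaddress.IPv6Address("2001:db8:2::10"), True, False),
    Candidate(ipaddress.IPv6Address("2001:db8:1::10"), True, False),
    Candidate(ipaddress.IPv6Address("2001:db8:1::11"), True, True),
]
print(select_source(cands, dst))   # 2001:db8:1::10 (longest CPL, nontemporary)
```

The second candidate wins: rule 7 eliminates the temporary address, and rule 8 prefers the remaining address sharing the longer prefix with D.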
The partial ordering rules can be used to form a total ordering of all the candidate addresses in CS(D). The one with the largest rank is the selected source address for destination D, denoted Q(D), and is used by the destination address selection algorithm. If Q(D) = Ø (null), no source could be determined for destination D.

The Destination Address Selection Algorithm
We now turn to the problem of default destination address selection. It is specified
in a way similar to source address selection. Recall that Q(D) is the source address selected by the preceding algorithm to reach the destination D. Let U(B) be the Boolean true if destination B is not reachable and E(A) indicate that destination A is
reached using some “encapsulating transport” (e.g., tunneled routing). Using the
same structure as before on pairwise elements A and B of the set SD(S), we have
the following rules:
1. Avoid unusable destinations: if U(B) or Q(B) = Ø, R(A) *> R(B); if U(A) or
Q(A) = Ø, R(B) *> R(A).
2. Prefer matching scope: if S(A) = S(Q(A)) and S(B) ≠ S(Q(B)), R(A) *> R(B); if S(B) = S(Q(B)) and S(A) ≠ S(Q(A)), R(B) *> R(A).
3. Avoid deprecated addresses: if Λ(Q(A)) < Λ(Q(B)), R(B) *> R(A); if Λ(Q(B)) < Λ(Q(A)), R(A) *> R(B).
4. Prefer home address: if H(Q(A)) and C(Q(A)) and ¬(C(Q(B)) and H(Q(B))),
R(A) *> R(B); if H(Q(B)) and C(Q(B)) and ¬(C(Q(A)) and H(Q(A))), R(B) *>
R(A); if (H(Q(A)) and ¬C(Q(A))) and (¬H(Q(B)) and C(Q(B))), R(A) *> R(B);
if (H(Q(B)) and ¬C(Q(B))) and (¬H(Q(A)) and C(Q(A))), R(B) *> R(A).
5. Prefer matching label: if L(Q(A)) = L(A) and L(Q(B)) ≠ L(B), R(A) *> R(B); if
L(Q(A)) ≠ L(A) and L(Q(B)) = L(B), R(B) *> R(A).
6. Prefer higher precedence: if P(A) > P(B), R(A) *> R(B); if P(A) < P(B), R(B) *> R(A).
7. Prefer native transport: if E(A) and ¬E(B), R(B) *> R(A); if E(B) and ¬E(A),
R(A) *> R(B).
8. Prefer smaller scope: if S(A) < S(B), R(A) *> R(B) else R(B) *> R(A).
9. Use longest matching prefix: if CPL(A, Q(A)) > CPL(B, Q(B)), R(A) *> R(B);
if CPL(A, Q(A)) < CPL (B, Q(B)), R(B) *> R(A).
10. Otherwise, leave rank order unchanged.
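The precedence P(A) used in rule 6, and the labels L(A) used by the label-matching rules, come from a configurable policy table. The default table from [RFC3484] can be sketched as follows; the `policy` helper is an illustrative name, and lookup is by longest matching prefix as usual.

```python
# The default policy table from RFC 3484, Section 2.1, supplying the
# precedence P(A) and label L(A) used by the selection rules above.
import ipaddress

# (prefix, precedence, label)
POLICY_TABLE = [
    (ipaddress.IPv6Network("::1/128"),       50, 0),  # loopback
    (ipaddress.IPv6Network("::/0"),          40, 1),  # any other IPv6
    (ipaddress.IPv6Network("2002::/16"),     30, 2),  # 6to4
    (ipaddress.IPv6Network("::/96"),         20, 3),  # IPv4-compatible
    (ipaddress.IPv6Network("::ffff:0:0/96"), 10, 4),  # IPv4-mapped
]

def policy(addr):
    """Return (precedence, label) for the longest matching table prefix."""
    addr = ipaddress.IPv6Address(addr)
    best = max((net for net, _, _ in POLICY_TABLE if addr in net),
               key=lambda net: net.prefixlen)
    for net, prec, label in POLICY_TABLE:
        if net == best:
            return prec, label

print(policy("2001:db8::1"))      # (40, 1): native IPv6 is preferred ...
print(policy("::ffff:10.0.0.1"))  # (10, 4): ... over an IPv4-mapped destination
```

Under rule 6, the higher precedence of the native IPv6 destination ranks it above the IPv4-mapped one.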
As with source address selection, these rules form a partial ordering between
two elements of the set of possible destinations in the set of destinations SD(S) for
source S. The highest-rank address gives the output for the destination address
selection algorithm. As mentioned previously, some issues have been raised
regarding operation of this algorithm (e.g., step 9 of the destination address selection can lead to problems with DNS round-robin; see Chapter 11). As a result,
an update to [RFC3484] is being considered [RFC3484-revise]. Importantly, this
revision addresses how so-called Unique Local IPv6 Unicast Addresses (ULAs)
[RFC4193] are treated by the address selection algorithms. ULAs are globally
scoped IPv6 addresses that are constrained to be used only within a common
(private) network.
The Internet Protocol (IP)
Attacks Involving IP
There have been a number of attacks on the IP protocol over the years, relying primarily on the operation of options or exploiting bugs in specialized code (such
as fragment reassembly). Simple attacks involve trying to get a router to crash or
perform poorly because one or more of the IP header fields is not valid (e.g., bad
header length or version number). Typically, routers in the Internet today ignore
or strip IP options, and the bugs in basic packet processing have been fixed. Thus,
these types of simple attacks are not a big concern. Attacks involving fragmentation can be addressed using other means [RFC1858][RFC3128].
Without authentication or encryption (or when it is disabled for IPv6), IP
spoofing attacks are possible. Some of the earliest attacks involved fabricating
the source IP address. Because early access control mechanisms depended on the
source IP address, many such systems were circumvented. Spoofing would sometimes be combined with source routing options. Under
some circumstances, a remote attacker’s computer would appear to be a host on
the local network (or even the same computer) requesting some sort of service.
Although the spoofing of IP addresses is still a concern today, there are several
approaches to limit its damage, including ingress filtering [RFC2827][RFC3704],
whereby an ISP checks the source addresses of its customers’ traffic to ensure that
datagrams contain source addresses from an assigned IP prefix.
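The ingress filtering check itself is simple to express; the sketch below uses an example customer prefix from the documentation address range.

```python
# A minimal illustration of ingress filtering [RFC2827]: an ISP edge router
# accepts a customer datagram only if its source address falls within the
# prefix assigned to that customer.
import ipaddress

CUSTOMER_PREFIX = ipaddress.ip_network("203.0.113.0/24")  # example assignment

def ingress_ok(src):
    """Accept only datagrams whose source lies in the assigned prefix."""
    return ipaddress.ip_address(src) in CUSTOMER_PREFIX

print(ingress_ok("203.0.113.77"))  # True: legitimate customer source
print(ingress_ok("198.51.100.9"))  # False: spoofed source, dropped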
As IPv6 and Mobile IP are relatively new, at least compared to IPv4, not all of their vulnerabilities have yet been discovered. With the newer
and more flexible types of options headers, an attacker could have considerable
influence on the processing of an IPv6 packet. For example, the Routing header
(type 0) was discovered to have such severe security problems that its use has
been deprecated entirely. Other possibilities include spoofing the source address
and/or Routing header entries to make packets appear as if they have come from
other places. These attacks are avoided by configuring packet-filtering firewalls to
take into account the contents of Routing headers. It is worth noting that simply
filtering out all packets containing extension headers and options in IPv6 would
severely restrict its use. In particular, disabling extension headers would prevent
Mobile IPv6 from functioning.

Summary

In this chapter we started with a description of the IPv4 and IPv6 headers, discussing some of the related functions such as the Internet checksum and fragmentation. We saw how IPv6 increases the size of addresses, improves upon IP’s method of including options in packets by use of the extension headers, and removes several of the noncritical fields from the IPv4 header. With the addition of this functionality, the IP header increases in size by only a factor of 2 even though the size of the addresses has increased fourfold. The IPv4 and IPv6 headers are not directly compatible and share only the 4-bit Version field in common. Because of this, some level of translation is required to interconnect IPv4 and IPv6 nodes. Dual-stack hosts implement both IPv4 and IPv6 but must choose which protocol to use and when.
Since its inception, IP has included a header field to indicate a type of traffic
or service class associated with each datagram. This mechanism has been redefined over the years in hopes of providing mechanisms to support differentiated
services on the Internet. If it is widely implemented, the Internet could potentially
offer improved performance for some traffic or users versus others in a standard
way. To what extent this happens will be based in part on working out the business models surrounding the differentiated services capability.
IP forwarding describes the way IP datagrams are transported through single
and multihop networks. IP forwarding is performed on a hop-by-hop basis unless
special processing takes place. The destination IP address never changes as the
datagram proceeds through all the hops, but the link-layer encapsulation and destination link-layer address change on each hop. Forwarding tables and the longest
prefix match algorithm are used by hosts and routers to determine the best matching forwarding entry and determine the next hops along a forwarding path. In
many circumstances, very simple tables consisting of only a default route, which
matches all possible destinations equally, are adequate.
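The longest matching prefix lookup described above can be sketched directly; the table entries and next-hop names here are invented for illustration.

```python
# Longest matching prefix lookup: among all entries whose prefix contains
# the destination, the most specific (longest) prefix wins.  The 0.0.0.0/0
# default route matches every destination at length 0.
import ipaddress

FORWARDING_TABLE = [
    (ipaddress.ip_network("0.0.0.0/0"),   "default-router"),
    (ipaddress.ip_network("10.0.0.0/8"),  "router-A"),
    (ipaddress.ip_network("10.1.0.0/16"), "router-B"),
]

def next_hop(dst):
    dst = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in FORWARDING_TABLE if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))   # router-B (/16 beats /8 and /0)
print(next_hop("192.0.2.1"))  # default-router (only /0 matches)
```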
Using a special set of protocols for security and signaling, Mobile IP establishes secure bindings between a mobile node’s home address and care-of address.
These bindings may be used to communicate with a mobile node even when it is
not at home. The basic function involves tunneling traffic through a cooperating
home agent, but this may lead to very inefficient routing. A number of additional
features support a route optimization feature that allows a mobile node to talk
directly with other remote nodes and vice versa. This requires a mobile node’s
correspondent hosts to support MIPv6 as well as route optimization, which is an
optional feature. Ongoing work aims at reducing the latency involved in the route
optimization binding update procedure.
We also looked at how the host model, strong or weak, affects how IP datagrams are processed. In the strong model, each interface is permitted to receive
or send only datagrams that use addresses associated with the interface, whereas
the weak model is less restrictive. The weak host model permits communication
in some cases where it would not otherwise be possible but may be more vulnerable to certain kinds of attacks. The host model also relates to how a host chooses
which addresses to use when communicating. Early on, most hosts had only one
IP address so the decision was fairly straightforward. With IPv6, in which a host
may have several addresses, and for multihomed hosts using several network
interfaces, the decision is less straightforward yet nonetheless may have an important impact on routing. A set of address selection algorithms, for both source and
destination addresses, was presented. These algorithms tend to prefer limited-scope, permanent addresses.
We discussed some of the attacks targeted against the IP protocol. Such
attacks have often involved spoofing addresses, including options to alter routing behavior, and attempts to exploit bugs in the implementation of IP, especially
with respect to fragmentation. The protocol implementation bugs have been fixed
in modern operating systems, and in most cases options are disabled at the edge
routers of enterprises. Although spoofing remains somewhat of a concern, procedures such as ingress filtering help to eliminate this problem as well.
References
[A92] P. Mersky, “Autovon: The DoD Phone Company,”
[DC05] J. Doyle and J. Carroll, Routing TCP/IP, Volume 1, Second Edition (Cisco
Press, 2005).
[H05] G. Huston, “Just How Big Is IPv6?—or Where Did All Those Addresses
Go?” The ISP Column, July 2005,
[LFS07] J. Leguay, T. Friedman, and K. Salamatian, “Describing and Simulating
Internet Routes,” Computer Networks, 51(8), June 2007.
[MB97] L. McKnight and J. Bailey, eds., Internet Economics (MIT Press, 1997).
[P90] C. Pinter, A Book of Abstract Algebra, Second Edition (Dover, 2010; reprint of
1990 edition).
[PB61] W. Peterson and D. Brown, “Cyclic Codes for Error Detection,” Proc. IRE,
49(228), Jan. 1961.
[RC05] S. Raab and M. Chandra, Mobile IP Technology and Applications (Cisco
Press, 2005).
[RFC0791] J. Postel, “Internet Protocol,” Internet RFC 0791/STD 0005, Sept. 1981.
[RFC1108] S. Kent, “U.S. Department of Defense Security Options for the Internet
Protocol,” Internet RFC 1108 (historical), Nov. 1991.
[RFC1122] R. Braden, ed., “Requirements for Internet Hosts—Communication
Layers,” Internet RFC 1122/STD 0003, Oct. 1989.
[RFC1385] Z. Wang, “EIP: The Extended Internet Protocol,” Internet RFC 1385
(informational), Nov. 1992.
[RFC1393] G. Malkin, “Traceroute Using an IP Option,” Internet RFC 1393
(experimental), Jan. 1993.
[RFC1858] G. Ziemba, D. Reed, and P. Traina, “Security Considerations for IP Fragment Filtering,” Internet RFC 1858 (informational), Oct. 1995.
[RFC2113] D. Katz, “IP Router Alert Option,” Internet RFC 2113, Feb. 1997.
[RFC2460] S. Deering and R. Hinden, “Internet Protocol, Version 6 (IPv6),” Internet RFC 2460, Dec. 1998.
[RFC2473] A. Conta and S. Deering, “Generic Packet Tunneling in IPv6 Specification,” Internet RFC 2473, Dec. 1998.
[RFC2474] K. Nichols, S. Blake, F. Baker, and D. Black, “Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers,” Internet RFC 2474,
Dec. 1998.
[RFC2475] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, “An
Architecture for Differentiated Services,” Internet RFC 2475 (informational), Dec. 1998.
[RFC2597] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, “Assured Forwarding PHB Group,” Internet RFC 2597, June 1999.
[RFC2675] D. Borman, S. Deering, and R. Hinden, “IPv6 Jumbograms,” Internet
RFC 2675, Aug. 1999.
[RFC2711] C. Partridge and A. Jackson, “IPv6 Router Alert Option,” Internet RFC
2711, Oct. 1999.
[RFC2827] P. Ferguson and D. Senie, “Network Ingress Filtering: Defeating
Denial of Service Attacks Which Employ IP Source Address Spoofing,” Internet
RFC 2827/BCP 0038, May 2000.
[RFC3031] E. Rosen, A. Viswanathan, and R. Callon, “Multiprotocol Label
Switching Architecture,” Internet RFC 3031, Jan. 2001.
[RFC3128] I. Miller, “Protection Against a Variant of the Tiny Fragment Attack,”
Internet RFC 3128 (informational), June 2001.
[RFC3168] K. Ramakrishnan, S. Floyd, and D. Black, “The Addition of Explicit
Congestion Notification (ECN) to IP,” Internet RFC 3168, Sept. 2001.
[RFC3246] B. Davie, A. Charny, J. C. R. Bennett, K. Benson, J. Y. Le Boudec, W.
Courtney, S. Davari, V. Firoiu, and D. Stiliadis, “An Expedited Forwarding PHB
(Per-Hop Behavior),” Internet RFC 3246, Mar. 2002.
[RFC3260] D. Grossman, “New Terminology and Clarifications for Diffserv,”
Internet RFC 3260 (informational), Apr. 2002.
[RFC3484] R. Draves, “Default Address Selection for Internet Protocol Version 6
(IPv6),” Internet RFC 3484, Feb. 2003.
[RFC3484-revise] A. Matsumoto, J. Kato, T. Fujisaki, and T. Chown, “Update to RFC 3484 Default Address Selection for IPv6,” Internet draft-ietf-6man-rfc3484-revise, work in progress, July 2011.
[RFC3704] F. Baker and P. Savola, “Ingress Filtering for Multihomed Hosts,”
Internet RFC 3704/BCP 0084, May 2004.
[RFC3963] V. Devarapalli, R. Wakikawa, A. Petrescu, and P. Thubert, “Network
Mobility (NEMO) Basic Support Protocol,” Internet RFC 3963, Jan. 2005.
[RFC4193] R. Hinden and B. Haberman, “Unique Local IPv6 Unicast Addresses,”
Internet RFC 4193, Oct. 2005.
[RFC4213] E. Nordmark and R. Gilligan, “Basic Transition Mechanisms for IPv6
Hosts and Routers,” Internet RFC 4213, Oct. 2005.
[RFC4225] P. Nikander, J. Arkko, T. Aura, G. Montenegro, and E. Nordmark,
“Mobile IP Version 6 Route Optimization Security Design Background,” Internet
RFC 4225 (informational), Dec. 2005.
[RFC4594] J. Babiarz, K. Chan, and F. Baker, “Configuration Guidelines for
Diffserv Service Classes,” Internet RFC 4594 (informational), Aug. 2006.
[RFC4782] S. Floyd, M. Allman, A. Jain, and P. Sarolahti, “Quick-Start for TCP
and IP,” Internet RFC 4782 (experimental), Jan. 2007.
[RFC4866] J. Arkko, C. Vogt, and W. Haddad, “Enhanced Route Optimization for
Mobile IPv6,” Internet RFC 4866, May 2007.
[RFC4950] R. Bonica, D. Gan, D. Tappan, and C. Pignataro, “ICMP Extensions for
Multiprotocol Label Switching,” Internet RFC 4950, Aug. 2007.
[RFC5095] J. Abley, P. Savola, and G. Neville-Neil, “Deprecation of Type 0 Routing Headers in IPv6,” Internet RFC 5095, Dec. 2007.
[RFC5096] V. Devarapalli, “Mobile IPv6 Experimental Messages,” Internet RFC 5096, Dec. 2007.
[RFC5142] B. Haley, V. Devarapalli, H. Deng, and J. Kempf, “Mobility Header
Home Agent Switch Message,” Internet RFC 5142, Jan. 2008.
[RFC5213] S. Gundavelli, ed., K. Leung, V. Devarapalli, K. Chowdhury, and B.
Patil, “Proxy Mobile IPv6,” Internet RFC 5213, Aug. 2008.
[RFC5220] A. Matsumoto, T. Fujisaki, R. Hiromi, and K. Kanayama, “Problem Statement for Default Address Selection in Multi-Prefix Environments:
Operational Issues of RFC 3484 Default Rules,” Internet RFC 5220 (informational), July 2008.
[RFC5350] J. Manner and A. McDonald, “IANA Considerations for the IPv4 and
IPv6 Router Alert Options,” Internet RFC 5350, Sept. 2008.
[RFC5380] H. Soliman, C. Castelluccia, K. ElMalki, and L. Bellier, “Hierarchical
Mobile IPv6 (HMIPv6) Mobility Management,” Internet RFC 5380, Oct. 2008.
[RFC5568] R. Koodli, ed., “Mobile IPv6 Fast Handovers,” Internet RFC 5568, July 2009.
[RFC5570] M. StJohns, R. Atkinson, and G. Thomas, “Common Architecture Label IPv6 Security Option (CALIPSO),” Internet RFC 5570 (informational), July 2009.
[RFC5865] F. Baker, J. Polk, and M. Dolly, “A Differentiated Services Code Point
(DSCP) for Capacity-Admitted Traffic,” Internet RFC 5865, May 2010.
[RFC5944] C. Perkins, ed., “IP Mobility Support for IPv4, Revised,” Internet RFC
5944, Nov. 2010.
[RFC6178] D. Smith, J. Mullooly, W. Jaeger, and T. Scholl, “Label Edge Router Forwarding of IPv4 Option Packets,” Internet RFC 6178, Mar. 2011.
[RFC6275] C. Perkins, ed., D. Johnson, and J. Arkko, “Mobility Support in IPv6,”
Internet RFC 6275, June 2011.
[RFC6301] Z. Zhu, R. Wakikawa, and L. Zhang, “A Survey of Mobility Support in
the Internet,” Internet RFC 6301 (informational), July 2011.
[THL06] N. Thompson, G. He, and H. Luo, “Flow Scheduling for End-Host Multihoming,” Proc. IEEE INFOCOM, Apr. 2006.
[TWEF03] J. Touch, Y. Wang, L. Eggert, and G. Flinn, “A Virtual Internet Architecture,” Proc. ACM SIGCOMM Future Directions in Network Architecture Workshop,
Mar. 2003.
[W03] T. Wu, “Network Neutrality, Broadband Discrimination,” Journal of Telecommunications and High Technology Law, 2, 2003 (revised 2005).
System Configuration: DHCP and Autoconfiguration
To make use of the TCP/IP protocol suite, each host and router requires a certain
amount of configuration information. Configuration information is used to assign
local names to systems, and identifiers (such as IP addresses) to interfaces. It is
also used to either provide or make use of various network services, such as the
Domain Name System (DNS) and Mobile IP home agents. Over the years there have
been many ways of providing and obtaining this information, but fundamentally there are three approaches: type in the information by hand, have a system
obtain it using a network service, or use some sort of algorithm to automatically
determine it. We shall explore each of these options and see how they are used
with both IPv4 and IPv6. Understanding how configuration works is important,
because it is one of the issues that every system administrator and nearly every
end user must deal with to some extent.
Recall from Chapter 2 that every interface to be used with TCP/IP networking
requires an IP address, subnet mask, and broadcast address (for IPv4). The broadcast address can ordinarily be determined using the address and mask. With this
minimal information, it is generally possible to carry out communication with
other systems on the same subnetwork. To engage in communication beyond the
local subnet, called indirect delivery in Chapter 5, a system requires a routing or
forwarding table that indicates what router(s) are to be used for reaching various destinations. To be able to use services such as the Web and e-mail, the DNS
(see Chapter 11) is used to map user-friendly domain names to the IP addresses
required by the lower-protocol layers. Because the DNS is a distributed service,
any system making use of it must know how to reach at least one DNS server.
All in all, having an IP address, subnet mask, and the IP address of a DNS server
and router are the “bare essentials” to get a system running on the Internet that
is capable of using or providing popular services such as Web and e-mail. To use
Mobile IP, a system also needs to know how to find a home agent.
In this chapter we will focus primarily on the protocols and procedures used
to establish the bare essentials in Internet client hosts: the Dynamic Host Configuration Protocol (DHCP) and stateless address autoconfiguration in IPv4 and IPv6. We
will also discuss how some ISPs use PPP with Ethernet for configuration of client
systems. Servers and routers are more often configured by hand, usually by typing the relevant configuration information into a file or graphical user interface.
There are several reasons for this distinction. First, client hosts are moved around
more often than servers and routers, meaning they should have mechanisms for
flexibly reassigning their configuration information. Second, server hosts and
routers are expected to be “always available” and relatively autonomous. As such,
having their configuration information not depend on other network services can
lead to greater confidence in their reliability. Third, there are often far more clients
in an organization than servers or routers, so it is simpler and less error-prone to
use a centralized service to dynamically assign configuration information to client hosts. Fourth, the operators of clients often have less system administration
experience than server and router administrators, so it is once again less error-prone to have most clients configured by a centralized service administered by an
experienced staff.
Beyond the bare essentials, there are numerous other bits of configuration
information a host or router may require, depending on the types of services it
uses or provides. These may include the locations of home agents, multicast routers, VPN gateways, and Session Initiation Protocol (SIP)/VoIP gateways. Some of
these services have standardized mechanisms and supporting protocols to obtain
the relevant configuration information; others do not and instead require the user
to type in the necessary information.
Dynamic Host Configuration Protocol (DHCP)
DHCP [RFC2131] is a popular client/server protocol used to assign configuration
information to hosts (and, less frequently, to routers). DHCP is very widely used,
in both enterprises and home networks. Even the most basic home router devices
support embedded DHCP servers. DHCP clients are incorporated into all common
client operating systems and a large number of embedded devices such as network printers and VoIP phones. Such devices usually use DHCP to acquire their IP
address, subnet mask, router IP address, and DNS server IP address. Information
pertaining to other services (e.g., SIP servers used with VoIP) may also be conveyed
using DHCP. DHCP was originally conceived for use with IPv4, so references to
it or its relationship with IP in this chapter will refer to IPv4 unless otherwise
specified. IPv6 can also use a version of DHCP called DHCPv6 [RFC3315], which
we discuss in Section 6.2.5, but IPv6 also supports its own automatic processes to
determine configuration information. In a hybrid configuration, IPv6 automatic
configuration can be combined with the use of DHCPv6.
The design of DHCP is based on an earlier protocol called the Internet Bootstrap Protocol (BOOTP) [RFC0951][RFC1542], which is now effectively obsolete.
BOOTP provides limited configuration information to clients and does not have
a mechanism to support changing that information after it has been provided.
DHCP extends the BOOTP model with the concept of leases [GC89] and can provide all information required for a host to operate. Leases allow clients to use
the configuration information for an agreed-upon amount of time. A client may
request to renew the lease and continue operations, subject to agreement from
the DHCP server. BOOTP and DHCP are backward-compatible in the sense that
BOOTP-only clients can make use of DHCP servers and DHCP clients can make
use of BOOTP-only servers. BOOTP, and therefore DHCP as well, is carried using
UDP/IP (see Chapter 10). Clients use port 68 and servers use port 67.
DHCP comprises two major parts: address management and delivery of
configuration data. Address management handles the dynamic allocation of IP
addresses and provides address leases to clients. Configuration data delivery
includes the DHCP protocol’s message formats and state machines. A DHCP
server can be configured to provide three levels of address allocation: automatic
allocation, dynamic allocation, and manual allocation. The differences among the
three have to do with whether the addresses assigned are based on the identity of
the client and whether such addresses are subject to being revoked or changed.
The most commonly used method is dynamic allocation, whereby a client is given
a revocable IP address from a pool (usually a predefined range) of addresses configured at the server. In automatic allocation, the same method is used but the
address is never revoked. In manual allocation, the DHCP protocol is used to convey the address, but the address is fixed for the requesting client (i.e., it is not part
of an allocatable pool maintained by the server). In this last mode, DHCP acts like
BOOTP. We shall focus on dynamic allocation, as it is the most interesting and
common case.
Address Pools and Leases
In dynamic allocation, a DHCP client requests the allocation of an IP address.
The server responds with one address selected from a pool of available addresses.
Typically, the pool is a contiguous range of IP addresses allocated specifically for
DHCP’s use. The address given to the client is allocated for only a specific amount
of time, called the lease duration. The client is permitted to use the IP address until
the lease expires, although it may request extension of the lease as required. In
most situations, clients are able to renew leases they wish to extend.
The lease duration is an important configuration parameter of a DHCP server.
Lease durations can range from a few minutes to days or more (“infinite” is possible but not recommended for anything but simple networks). Determining the
best value to use for leases is a trade-off between the number of expected clients,
the size of the address pool, and the desire for the stability of addresses. Longer
lease durations tend to deplete the available address pool faster but provide greater
stability in addresses and somewhat reduced network overhead (because there
are fewer requests to renew leases). Shorter leases tend to keep the pool available
for other clients, with a consequent potential decrease in stability and increase in
network traffic load. Common defaults include 12 to 24 hours, depending on the
particular DHCP server being used. Microsoft, for example, recommends 8 days
for small networks and 16 to 24 days for larger networks. Clients begin trying to
renew leases after half of the lease duration has passed.
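The renewal schedule can be made concrete with a small worked example. By default a client attempts renewal at T1 = 0.5 × lease and, failing that, rebinding at T2 = 0.875 × lease (the values carried by options 58 and 59, per [RFC2131]); the function name below is illustrative.

```python
# Compute the default DHCP renewal (T1) and rebinding (T2) offsets for a
# given lease duration, per the RFC 2131 defaults of 0.5 and 0.875.
def lease_timers(lease_seconds, t1_factor=0.5, t2_factor=0.875):
    """Return (renewal T1, rebinding T2) offsets in seconds."""
    return lease_seconds * t1_factor, lease_seconds * t2_factor

t1, t2 = lease_timers(24 * 3600)   # a 24-hour lease
print(t1 / 3600, t2 / 3600)        # 12.0 21.0 (hours)
```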
When making a DHCP request, a client is able to provide information to the
server. This information can include the name of the client, its requested lease
duration, a copy of the address it is already using or last used, and other parameters. When the server receives such a request, it can make use of whatever information the client has provided (including the requesting MAC address) in addition
to other exogenous information (e.g., the time of day, the interface on which the
request was received) to determine what address and configuration information
to provide in response. In providing a lease to a client, a server stores the lease
information in persistent memory, typically in nonvolatile memory or on disk. If
the DHCP server restarts and all goes well, leases are maintained intact.
DHCP and BOOTP Message Format
DHCP extends BOOTP, DHCP’s predecessor. Compatibility is maintained between
the protocols by defining the DHCP message format as an extension to BOOTP’s
in such a way that BOOTP clients can be served by DHCP servers, and BOOTP
relay agents (see Section 6.2.6) can be used to support DHCP use, even on networks
where DHCP servers do not reside. The message format includes a fixed-length
initial portion and a variable-length tail portion (see Figure 6-1).
The message format of Figure 6-1 is defined by BOOTP and DHCP in several
RFCs ([RFC0951][RFC1542][RFC2131]). The Op (Operation) field identifies the message as either a request (1) or a reply (2). The HW Type (htype) field is assigned
based on values used with ARP (see Chapter 4) and defined in the corresponding
IANA ARP parameters page [IARP], with the value 1 (Ethernet) being very common. The HW Len (hlen) field gives the number of bytes used to hold the hardware
(MAC) address and is commonly 6 for Ethernet-like networks. The Hops field is
used to store the number of relays through which the message has traveled. The
sender of the message sets this value to 0, and it is incremented at each relay.
The Transaction ID is a (random) number chosen by the client and copied into
responses by the server. It is used to match replies with requests.
The Secs field is set by the client with the number of seconds that have elapsed
since the first attempt to establish or renew an address. The Flags field currently
contains only a single defined bit called the broadcast flag. Clients may set this bit
in requests if they are unable or unwilling to process incoming unicast IP datagrams but can process incoming broadcast datagrams (e.g., because they do not yet have an IP address). Setting the bit informs the server and relays that broadcast addressing should be used for replies.

Figure 6-1 The BOOTP message format, including field names from [RFC0951], [RFC1542], and [RFC2131]. The BOOTP message format is used to hold DHCP messages by appropriate assignment of options. In this way, BOOTP relay agents can process DHCP messages, and BOOTP clients can use DHCP servers. The Server Name and Boot File Name fields can be used to carry DHCP options if necessary.
There has been some difficulty in Windows environments regarding the use of
the broadcast flag. Windows XP and Windows 7 DHCP clients do not set the
flag, but Windows Vista clients do. Some DHCP servers in use do not process
the flag properly, leading to apparent difficulties in supporting Vista clients, even
though the Vista implementation is RFC-compliant. See [MKB928233] for more information.
The next four fields are various IP addresses. The Client IP Address (ciaddr)
field includes a current IP address of the requestor, if known, and is 0 otherwise.
The “Your” IP Address (yiaddr) field is filled in by a server when providing an
address to a requesting client. The Next Server IP Address (siaddr) field gives the IP
address of the next server to use for the client’s bootstrap process (e.g., if the client
needs to download an operating system image that may be accomplished from a
server other than the DHCP server). The Gateway (or Relay) IP Address (giaddr) field
is filled in by a DHCP or BOOTP relay with its address when forwarding DHCP
(BOOTP) messages. The Client Hardware Address (chaddr) field holds a unique
identifier of the client and can be used in various ways by the server, including
arranging for the same IP address to be given each time a particular client makes
an address request. This field has traditionally held the client’s MAC address,
which has been used as an identifier. Nowadays, the Client Identifier, an option
described in Sections 6.2.3 and 6.2.4, is preferred for this use.
The remaining fields include the Server Name (sname) and Boot File Name (file)
fields. These fields are not always filled in, but if they are, they contain 64 or 128
bytes, respectively, of ASCII characters indicating the name of the server or path to
the boot file. Such strings are null-terminated, as in the C programming language.
They can also be used instead to hold DHCP options if space is tight (see Section
6.2.3). The final field, originally known as the Vendor Extensions field in BOOTP
and fixed in length, is now known as the Options field and is variable in length. As
we shall see, options are used extensively with DHCP and are required to distinguish DHCP messages from legacy BOOTP messages.
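The fixed-length portion of Figure 6-1 maps directly onto a 236-byte structure. The sketch below is illustrative only: `build_request` and its sample MAC address and transaction ID are invented, and a real DHCP message would additionally append the magic cookie and options (including the DHCP Message Type) after this fixed portion.

```python
# Pack and unpack the 236-byte fixed BOOTP/DHCP header of Figure 6-1.
import struct

# op, htype, hlen, hops, xid, secs, flags, ciaddr, yiaddr, siaddr, giaddr,
# chaddr (16), sname (64), file (128)
BOOTP_FMT = "!BBBBIHH4s4s4s4s16s64s128s"   # 236 bytes total

def build_request(xid, mac, broadcast=False):
    flags = 0x8000 if broadcast else 0      # only defined bit: broadcast flag
    chaddr = mac.ljust(16, b"\x00")         # hlen=6 MAC, padded to 16 bytes
    zero4 = b"\x00" * 4
    return struct.pack(BOOTP_FMT,
                       1,             # op: request
                       1,             # htype: Ethernet
                       6,             # hlen: MAC address length
                       0,             # hops: set to 0 by the client
                       xid,           # transaction ID, echoed by the server
                       0,             # secs
                       flags,
                       zero4, zero4, zero4, zero4,  # ciaddr/yiaddr/siaddr/giaddr
                       chaddr,
                       b"\x00" * 64,  # sname
                       b"\x00" * 128) # file

msg = build_request(0x12345678, bytes.fromhex("001122334455"), broadcast=True)
print(len(msg))                            # 236
op, htype, hlen, hops, xid = struct.unpack_from("!BBBBI", msg)
print(op, htype, hlen, hops, hex(xid))     # 1 1 6 0 0x12345678
```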
DHCP and BOOTP Options
Given that DHCP extends BOOTP, any fields needed by DHCP that were not present when BOOTP was designed are carried as options. Options take a standard
format beginning with an 8-bit tag indicating the option type. For some options,
a fixed number of bytes following the tag contain the option value. All others
consist of the tag followed by 1 byte containing the length of the option value (not
including the tag or length), followed by a variable number of bytes containing the
option value itself.
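The tag/length/value layout just described can be sketched in Python (the helper names here are hypothetical; Pad and End are the fixed single-byte cases):

```python
def encode_option(tag, value):
    # Pad (0) and End (255) are single bytes with no length or value.
    if tag in (0, 255):
        return bytes([tag])
    # All other options: 1-byte tag, 1-byte length, then the value.
    return bytes([tag, len(value)]) + value

def parse_options(data):
    # Walk an options buffer, yielding (tag, value) pairs.
    opts, i = [], 0
    while i < len(data):
        tag = data[i]
        if tag == 0:            # Pad: skip a single byte
            i += 1
            continue
        if tag == 255:          # End: stop processing
            break
        length = data[i + 1]
        opts.append((tag, data[i + 2:i + 2 + length]))
        i += 2 + length
    return opts
```

Round-tripping a DHCP Message Type option (tag 53) followed by Pad and End bytes recovers the single (tag, value) pair.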
A large number of options are available with DHCP, some of which are also
supported by BOOTP. The current list is given by the BOOTP/DHCP parameters
page [IBDP]. The first 77 options, including the most common ones, are specified in [RFC2132]. Common options include Pad (0), Subnet Mask (1), Router
Address (3), Domain Name Server (6), Domain Name (15), Requested IP Address
(50), Address Lease Time (51), DHCP Message Type (53), Server Identifier (54),
Parameter Request List (55), DHCP Error Message (56), Lease Renewal Time (58),
Lease Rebinding Time (59), Client Identifier (61), Domain Search List (119), and
End (255).
The DHCP Message Type option (53) is a 1-byte-long option that is always used
with DHCP messages and has the following possible values: DHCPDISCOVER (1), DHCPOFFER (2), DHCPREQUEST (3), DHCPDECLINE (4), DHCPACK (5), DHCPNAK (6), DHCPRELEASE (7), DHCPINFORM (8), DHCPFORCERENEW (9), DHCPLEASEQUERY (10), DHCPLEASEUNASSIGNED (11), DHCPLEASEUNKNOWN (12), and DHCPLEASEACTIVE (13). The last four values are defined by [RFC4388].
Options may be carried in the Options field of a DHCP message, as well as in
the Server Name and Boot File Name fields mentioned previously. When options are
carried in either of these latter two places, called option overloading, a special Overload option (52) is included to indicate which fields have been appropriated for
holding options. For options whose lengths exceed 255 bytes, a special long options
mechanism has been defined [RFC3396]. In essence, if the same option is repeated
multiple times in the same message, the contents are concatenated in the order in
which they appear in the message, and the result is processed as a single option. If
a long option also uses option overloading, the order of processing is last to first:
Options field, Boot File Name field, and then Server Name field.
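The concatenation rule of [RFC3396] can be illustrated with a small sketch (hypothetical function name; it assumes the options have already been parsed into (tag, value) pairs in processing order, including any overloaded fields):

```python
def reassemble_long_options(parsed):
    # parsed: list of (tag, value) pairs in the order they are processed.
    # Repeated occurrences of the same tag are concatenated in order of
    # appearance and treated as one logical option ([RFC3396]).
    merged, order = {}, []
    for tag, value in parsed:
        if tag not in merged:
            merged[tag] = b''
            order.append(tag)
        merged[tag] += value
    return [(t, merged[t]) for t in order]
```

A Domain Search List option (119) split across two fragments, with another option in between, is reassembled into one value.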
Options tend to either provide relatively simple configuration information or
be used in supporting some other agreement protocol. For example, [RFC2132]
specifies options for most of the traditional configuration information a TCP/IP
node requires (addressing information, server addresses, Boolean assignments of
configuration information such as enabling IP forwarding, initial TTL values).
Subsequent specifications describe simple configuration information for NetWare
[RFC2241][RFC2242], user classes [RFC3004], FQDN [RFC4702], Internet Storage
Name Service server (iSNS, used in storage networks) [RFC4174], Broadcast and
Multicast Service controller (BCMCS, used with 3G cellular networks) [RFC4280],
time zone [RFC4833], autoconfiguration [RFC2563], subnet selection [RFC3011],
name service selection (see Chapter 11) [RFC2937], and servers for the Protocol
for Carrying Authentication for Network Access (PANA) (see Chapter 18) [RFC5192].
Those options defined for use in support of other protocols and functions are
described later, starting with Section 6.2.7.
DHCP Protocol Operation
DHCP messages are essentially BOOTP messages with a special set of options.
When a new client attaches to a network, it first discovers what DHCP servers are
available and what addresses they are offering. It then decides which server to
use and which address it desires and requests it from the offering server (while
informing all the servers of its choice). Unless the server has given away the
address in the meantime, it responds by acknowledging the address allocation
to the requesting client. The time sequence of events between a typical client and
server is depicted in Figure 6-2.
Requesting clients set the BOOTP Op field to BOOTREQUEST and the first
4 bytes of the Options field to the decimal values 99, 130, 83, and 99, respectively
(the magic cookie value from [RFC2132]). Messages from client to server are sent as
UDP/IP datagrams containing a BOOTP BOOTREQUEST operation and an appropriate DHCP message type (usually DHCPDISCOVER or DHCPREQUEST). Such
messages are sent from address (port 68) to the limited broadcast address (port 67). Messages traveling in the other direction (from server to
System Configuration: DHCP and Autoconfiguration
Figure 6-2
[Message flow: DISCOVER, OFFER (other servers may also OFFER), REQUEST, ACK, and DECLINE if the address proves unusable.]
A typical DHCP exchange. A client discovers a set of servers and addresses they are
offering using broadcast messages, requests the address it desires, and receives an
acknowledgment from the selected server. The transaction ID (xid) allows requests and
responses to be matched up, and the server ID (an option) indicates which server is providing and committing the address binding with the client. If the client already
knows the address it desires, the protocol can be simplified to include use of only the
REQUEST and ACK messages.
client) are sent from the IP address of the server and port 67 to the limited broadcast address ( and port 68 (see Chapter 10 for details on UDP).
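As an illustration only (hypothetical helper; the field layout and magic cookie follow the BOOTP/DHCP description above), a minimal DHCPDISCOVER datagram payload could be assembled like this:

```python
import os
import struct

def build_discover(mac, xid=None):
    # Fixed BOOTP header: op=1 (BOOTREQUEST), htype=1 (Ethernet), hlen=6,
    # hops=0, transaction ID, secs=0, broadcast flag set, and four
    # all-zero address fields (ciaddr, yiaddr, siaddr, giaddr).
    xid = xid if xid is not None else struct.unpack('!I', os.urandom(4))[0]
    header = struct.pack('!BBBBIHH4s4s4s4s', 1, 1, 6, 0, xid, 0, 0x8000,
                         b'\0' * 4, b'\0' * 4, b'\0' * 4, b'\0' * 4)
    header += mac + b'\0' * 10           # chaddr (16 bytes total)
    header += b'\0' * 64 + b'\0' * 128   # sname and file fields, unused
    cookie = bytes([99, 130, 83, 99])    # magic cookie from [RFC2132]
    options = bytes([53, 1, 1])          # Message Type = DHCPDISCOVER
    options += bytes([55, 4, 1, 3, 6, 15])  # Parameter Request List
    options += bytes([255])              # End
    return header + cookie + options
```

To actually transmit it, this payload would be sent as UDP from port 68 to port 67 on a socket with the SO_BROADCAST option enabled.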
In a typical exchange, a client first broadcasts a DHCPDISCOVER message. Each
server receiving the request, either directly or through a relay, may respond with a
DHCPOFFER message, including an offered IP address in the “Your” IP Address
field. Other configuration options (e.g., IP address of DNS server, subnet mask) are
often included. The offer message includes the lease time (T), which provides the
upper bound on the amount of time the address can be used if it is not renewed. The
message also contains the renewal time (T1), which is the amount of time before the
client should attempt to renew its lease with the server from which it acquired its
lease, and the rebinding time (T2), which bounds the time in which it should attempt
to renew its address with any DHCP server. By default, T1 = (T/2) and T2 = (7T/8).
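The default timer rule is simple arithmetic; applied to a 12-hour lease, it yields 6-hour renewal and 10.5-hour rebinding times (a sketch with a hypothetical function name):

```python
def dhcp_timers(lease_seconds):
    # Default renewal (T1) and rebinding (T2) times derived from the
    # total lease time T: T1 = T/2 and T2 = 7T/8.
    t1 = lease_seconds // 2
    t2 = lease_seconds * 7 // 8
    return t1, t2
```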
After receiving one or more DHCPOFFER messages from one or more servers,
the client determines which offer it will accept and broadcasts a DHCPREQUEST
message including the Server Identifier option. The Requested IP Address option is
set to the address received in the selected DHCPOFFER message. Multiple servers
may receive the broadcast DHCPREQUEST message, but only the server identified
within the DHCPREQUEST message acts by committing the address binding to
persistent storage; the others clear any state regarding the request. After handling
the binding, the selected server responds with a DHCPACK message, indicating to
the client that the address binding can now be used. In the case where the server
cannot allocate the address contained in the DHCPREQUEST message (e.g., it has
been allocated in some other way or is not available), the server responds with a
DHCPNAK message.
Once the client receives the DHCPACK message and other associated configuration information, it may probe the network to ensure that the address provided
is not in use (e.g., by sending an ARP request for the address to perform ACD,
described in Chapter 4). Should the client determine that the address is already in
use, the client ceases using the address and sends a DHCPDECLINE message to
the server to indicate that the address cannot be used. After a recommended 10s
delay, the client is able to retry. If a client elects to relinquish its address before its
lease expires, it sends a DHCPRELEASE message.
In circumstances where a client already has an IP address and wishes only
to renew its lease, the initial DHCPDISCOVER/DHCPOFFER messages can be
skipped. Instead, the protocol begins with the client requesting the address it
is currently using with a DHCPREQUEST message. At this point, the protocol
works as already described: the server will likely grant the request (with a DHCPACK) or deny the request by issuing a DHCPNAK. Another circumstance arises
when a client already has an address, does not need to renew it, but requires other
(non-address) configuration information. In this case, it can use a DHCPINFORM
message in place of a DHCPREQUEST message to indicate its use of an existing
address and desire to obtain additional information. Such messages elicit a DHCPACK message from the server, which includes the requested additional configuration information.

Example
To see DHCP in action, we now inspect the packets exchanged when a Microsoft
Vista laptop attaches to a wireless LAN supported by a Linux-based DHCP server
(Windows 7 systems are nearly identical). The client was recently associated with a
different wireless network, using a different IP prefix, and is now being connected
to the new network. Because it remembers the address it had from the previous network, the client first tries to continue using that address with a DHCPREQUEST
message (see Figure 6-3).
There is now an agreed-upon procedure for detecting network attachment (DNA),
specified in [RFC4436] for IPv4 and [RFC6059] for IPv6. These specifications do
not contain new protocols but instead suggest how unicast ARP (for IPv4) and
a combination of unicast and multicast Neighbor Solicitation/Router Discovery
messages (for IPv6; see Chapter 8) can be used to reduce the latency of acquiring configuration information when a host switches network links. As these specifications are relatively new (especially for IPv6), not all systems implement them.
Figure 6-3
A client has switched networks and attempts to request its old address from
a DHCP server on the new network using a DHCPREQUEST message.
In Figure 6-3 we can see a DHCP request sent in a link-layer broadcast frame
(destination ff:ff:ff:ff:ff:ff) using the unspecified source IP address ( and the
limited broadcast destination address ( Because the client does not
yet know if the address it is requesting will be successfully allocated and does
not know the network prefix used on the network to which it is attaching, it has
little alternative to using these addresses. The message is a UDP/IP datagram sent
from the BOOTP client port 68 (bootpc) to the server port 67 (bootps). As DHCP
is really part of BOOTP, the protocol is the Bootstrap Protocol and the message
type is a BOOTREQUEST (1), with hardware type set to 1 (Ethernet) and address
length of 6 bytes. The transaction ID is 0xdb23147d, a random number chosen by
the client. The BOOTP broadcast flag is set in this message, meaning responses
should be sent using broadcast addressing. The requested (old) address
is contained in one of several options. We shall have a closer look at the types of
options that appear in DHCP messages beginning in Section 6.2.9.
The nearby DHCP server receives the client's DHCPREQUEST message, including the requested IP address. However, the server is unable to allocate that address because its prefix is not in use on the current network. Consequently, the server refuses the client's request by sending a DHCPNAK message
(see Figure 6-4).
Figure 6-4 A DHCPNAK message is sent by the DHCP server, indicating that the client should not
attempt to use the requested IP address. The transaction ID allows the client to know that
the message corresponds to its address request.
The DHCPNAK message shown in Figure 6-4 is sent as a broadcast BOOTP
reply from the server. It includes the message type of DHCPNAK, a transaction ID
matching the client's request, a Server Identifier option containing the server's address, a copy
of the client’s identifier (MAC address in this case), and a textual string indicating
the form of error, "wrong address". At this point the client ceases trying to use
its old address and instead starts over, looking for whatever servers
and addresses it can find, using a DHCPDISCOVER message (see Figure 6-5).
Figure 6-5 The DHCPDISCOVER message indicates that the client is retrying its attempt to obtain
an address after the previous failure of its DHCPREQUEST message.
The DHCPDISCOVER message sent by the client and shown in Figure 6-5
is similar to the DHCPREQUEST message, including the requested IP address it
used before (it does not have any other address to request), but it contains a richer
list of options and a new transaction ID (0x3a681b0b). Most of the rest of the primary BOOTP fields are left empty and set to 0, except the client MAC address,
which appears in the Client Hardware Address (chaddr) field. Note that this address
matches the Ethernet frame source MAC address, as expected, because the packet
was not forwarded through a BOOTP relay agent. The rest of the DISCOVER message contains eight options, most of which are expanded in the screen shot in
Figure 6-6 so that the various option subtypes can be seen.
Figure 6-6 details the options included in the BOOTP request message. The first
option indicates that the message is a DHCPDISCOVER message. The second option
indicates a client’s desire to know whether to use address autoconfiguration [RFC2563]
(described in Section 6.3). If the client is unable to obtain an address using DHCP, it may determine one itself, provided the DHCP server permits this.
Figure 6-6 The DHCPDISCOVER message may contain a rich list of parameter requests, indicating
what configuration information the client seeks.
The next option indicates that the Client Identifier (ID) option is set to
0100130220B918 (not shown). The DHCP server can use the client ID to determine
if there is any special configuration information to be given to the particular
requesting client. Most operating systems now allow the user to specify the client
ID for the DHCP client to use when obtaining an address. Generally, however, it is
better to allow the client ID to be chosen automatically, as the use of the same client ID by multiple clients can lead to DHCP problems. The automatically selected
client ID is generally based on the MAC address of the client. In the case of Windows, it is the MAC address with a 1-byte hardware type identifier prepended to
it (in this case, the value of the byte is 1, indicating Ethernet).
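The Windows construction just described (a 1-byte hardware type prepended to the MAC address) can be sketched as a Client Identifier option, tag 61 (hypothetical helper name):

```python
def windows_style_client_id(mac):
    # Option 61 value as Windows builds it: a 1-byte hardware type
    # (1 = Ethernet) followed by the interface's MAC address.
    value = bytes([1]) + mac
    return bytes([61, len(value)]) + value
```

With the MAC address from this example (00:13:02:20:b9:18), the option value comes out as 0100130220B918, matching the client ID shown in the trace.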
There has been a move to use client identifiers that are not based on MAC
addresses. This is motivated by the desire to have a persistent identifier for a client for use with IPv4 or IPv6 that remains consistent even if the system’s network
interface hardware changes (which usually causes its MAC address to change).
[RFC4361] specifies node-specific identifiers for IPv4, using a scheme originally
defined for IPv6. It involves using a DHCP Unique Identifier (DUID) in combination
with an Identity Association Identifier (IAID) as specified for DHCPv6 [RFC3315]
(also see the IA and DUID discussions later in this chapter), but with conventional DHCPv4. It also
deprecates the use of the Client Hardware Address (chaddr) field in DHCP messages. However, it is not yet widely deployed.
The next (Requested IP Address) option indicates that the client is requesting
the IP address it was using when associated with the
previous wireless network. As mentioned before, this address is not available on
the new network because a different network prefix is being used.
Other options indicate a configured host name of “vista,” a vendor class ID
of “MSFT 5.0” (for Microsoft Windows 2000 and later systems), and a parameter
request list. The Parameter Request List option provides an indication to the DHCP
server of what sort of configuration information the client is requesting. It consists of a string of bytes in which each byte indicates a particular option number.
Here we can see that it includes conventional Internet information (subnet mask,
domain name, DNS server, default router) but also a number of other options common to Microsoft systems (i.e., NetBIOS options). It also includes an indication
that the client is interested in knowing whether to perform ICMP Router Discovery (see Chapter 8) and whether any static forwarding table entries should be
placed in the client’s forwarding table when starting up (see Chapter 5).
The reason there are three different types of static route parameters listed is
a consequence of the history of addressing. Before the full adoption of subnet
masks and network prefixes, the network portion of an address was known by
inspection of the address alone (“classful addressing”), and this is the form of
route used with the Static Route (33) parameter. With the adoption of classless
routes, DHCP was updated to hold a mask that could be applied, resulting in the
so-called Classless Static Route (CSR) parameter (121) defined in [RFC3442].
Microsoft’s variant (using code 249) is similar.
The last parameter request (43) is for vendor-specific information. It is ordinarily used in conjunction with the Vendor-Class Identifier option (60), to allow
clients to receive nonstandard information, although another proposal combines
the vendor’s identity with the vendor-specific information [RFC3925], providing
a method to determine the vendor given any vendor-specific information, even
for a single client. In the case of Microsoft systems, vendor-specific information
is used for selecting the use of NetBIOS, indicating whether a DHCP lease should
be released on shutdown, and how the metric (preference) of a default route in the
forwarding table should be processed. It is also used by Microsoft’s Network Access
Protection (NAP) system [MS-DHCPN]. Mac OS systems use vendor-specific information in supporting Apple’s NetBoot service and Boot Server Discovery Protocol
(BSDP) [F07].
Upon receipt of the DHCPDISCOVER message, a DHCP server responds with
an offer of an IP address, lease, and additional configuration information contained in a DHCPOFFER message. In the example shown in Figure 6-7, there is
only one DHCP server (which is also a router and DNS server).
Figure 6-7
The DHCPOFFER sent from the DHCP server offers an IP address for up to 12 hours. Additional information includes the address of a DNS server, domain
name, default router IP address, subnet mask, and broadcast address. In this example,
a single system acts as the default router, DHCP server, and DNS server.
In the DHCPOFFER message shown in Figure 6-7 we again see that the message
format includes a BOOTP portion as well as a set of options that relate to its DHCP
address handling. The BOOTP message type is BOOTREPLY. The client IP address
provided by the server is located in the "Your" [client] IP Address field. Note
that this address does not match the requested value contained in the
DHCPDISCOVER message, as the 172.16/12 prefix is not in use on the local network.
Additional information contained in the set of options includes the server’s
IP address, the lease time of the offered IP address (12 hours), and the T1
(renewal) and T2 (rebinding) timeouts of 6 and 10.5 hours, respectively. In addition,
the server provides the subnet mask for the client to use, the proper broadcast
address, and the default router and DNS server (all the same
as the DHCP server in this case), and a default domain name of "home". The domain
name home is not standardized in any way and would not be used outside of a private
network. This example is a home network, so by the author’s convention the names
of machines used on it have the form <name>.home. Once the client has collected a
DHCPOFFER message and decided to attempt leasing the IP address it has
been offered, it continues with a second DHCPREQUEST message (see Figure 6-8).
Figure 6-8
The second DHCPREQUEST indicates that the client wishes to be assigned the offered IP address. The message is sent to the broadcast address and includes the server's address
in the Server ID option. This allows any other servers that may receive the broadcast to
know which DHCP server and address the client has selected.
The second DHCPREQUEST message, shown in Figure 6-8, is similar to the
DHCPDISCOVER message, except the requested IP address is now set to the offered address,
the DHCP message type is set to DHCPREQUEST, the DHCP autoconfiguration
option is not present, and the Server Identifier option is now filled in with the
address of the server. Note that this message, like the DHCPDISCOVER
message, is sent using broadcast, so any server or client present on the local network receives it. The Server Identifier option field is used to keep unselected
servers from committing the address binding. When the selected server receives
the DHCPREQUEST and commits the binding, it ordinarily responds with a
DHCPACK message, as we see in Figure 6-9.
Figure 6-9
The DHCPACK message verifies to the client (and other servers) the allocation of the address for up to 12 hours.
The DHCPACK message shown in Figure 6-9 is very similar to the DHCPOFFER
message we have seen before. However, now the client’s FQDN option is included
as well. In this case (not shown), it is set to vista.home. At this point, the client
is free to use the address, as far as the DHCP server is concerned. It is still
advised to use techniques such as ACD, described in Chapter 4, to ensure that its
address is not used by some other host.
The DHCP messages exchanged in this example are typical of a system when
it boots or is attached to a new network. It is also possible to induce a system to
perform the release or acquisition of DHCP configuration information by hand.
For example, in Windows the following command will release the data acquired
using DHCP:
C:\> ipconfig /release
and the following command will acquire it:
C:\> ipconfig /renew
In Linux, the following commands can be used to achieve the same results:
Linux# dhclient -r
to release a DHCP lease, and
Linux# dhclient
to renew one.
The type of information acquired by DHCP and assigned to the local system
can be ascertained with a variant of the ipconfig command on Windows. Here
is an excerpt from its output:
C:\> ipconfig /all
Wireless LAN adapter Wireless Network Connection:
Connection-specific DNS Suffix . : home
Description . . . . . . . . . . . : Intel(R) PRO/Wireless 3945ABG
Network Connection
Physical Address. . . . . . . . . : 00-13-02-20-B9-18
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
IPv4 Address. . . . . . . . . . . :
Subnet Mask . . . . . . . . . . . :
Lease Obtained. . . . . . . . . . : Sunday, December 21, 2008
11:31:48 PM
Lease Expires . . . . . . . . . . : Monday, December 22, 2008
11:31:40 AM
Default Gateway . . . . . . . . . :
DHCP Server . . . . . . . . . . . :
DNS Servers . . . . . . . . . . . :
NetBIOS over Tcpip. . . . . . . . : Enabled
DNS Suffix Search List. . . . . . : home
This command is very useful to see what configuration information has been
assigned to a host using DHCP or other means.

The DHCP State Machine
The DHCP protocol operates a state machine at the clients and servers. The states
dictate which types of messages the protocol is expecting to process next. The client state machine is illustrated in Figure 6-10. Transitions between states (arrows)
occur because of messages that are received and sent or when timers expire.
[State diagram: transitions are labeled with actions such as Send DISCOVER, Send REQUEST, and Send DECLINE.]
Figure 6-10
The DHCP client state machine. The boldface states and transitions are typical for a
client first acquiring a leased address. The dashed line and INIT state are where the
protocol begins.
As shown in Figure 6-10, a client begins in the INIT state when it has no information and broadcasts the DHCPDISCOVER message. In the Selecting state, it collects DHCPOFFER messages until it decides which address and server it wishes
to use. Once its selection has been made, it responds with a DHCPREQUEST message and enters the Requesting state. At this point it may receive ACKs for other
addresses it does not want. If it finds no address it wants, it sends a DHCPDECLINE
and reverts to the INIT state. More likely, however, it receives a DHCPACK message for an address it wants, accepts it, obtains the timeout values T1 and T2,
and enters the Bound state, where it is able to use the address until expiration.
Upon the first timer expiration (timer T1), the client enters the Renewing state and
attempts to reestablish its lease. This succeeds if a fresh DHCPACK is received
(returning the client to the Bound state). If not, T2 ultimately expires, causing the
client to attempt to reacquire an address from any server. If the lease time finally
expires, the client must give up the leased address and becomes disconnected if it
has no alternative address or network connection to use.
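The main path through Figure 6-10 can be approximated as a transition table (a simplification; timer expirations and message sends are folded into the event labels, and retransmission details are omitted):

```python
# (state, event) -> next state, following the client state machine.
TRANSITIONS = {
    ('INIT', 'send DISCOVER'): 'SELECTING',
    ('SELECTING', 'recv OFFER / send REQUEST'): 'REQUESTING',
    ('REQUESTING', 'recv ACK'): 'BOUND',
    ('REQUESTING', 'send DECLINE'): 'INIT',
    ('BOUND', 'T1 expires / send REQUEST'): 'RENEWING',
    ('RENEWING', 'recv ACK'): 'BOUND',
    ('RENEWING', 'T2 expires'): 'REBINDING',
    ('REBINDING', 'recv ACK'): 'BOUND',
    ('REBINDING', 'lease expires'): 'INIT',
}

def step(state, event):
    # Unrecognized events leave the state unchanged in this sketch.
    return TRANSITIONS.get((state, event), state)
```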
Although the IPv4 and IPv6 DHCP protocols achieve conceptually similar
goals, their respective protocol designs and deployment options differ. DHCPv6
[RFC3315] can be used in either a “stateful” mode, in which it works much like
DHCPv4, or in a "stateless" mode in conjunction with stateless address autoconfiguration (see Section 6.3). In the stateless mode, IPv6 clients are assumed to self-configure their IPv6 addresses but require additional information (e.g., DNS server
address) obtained using DHCPv6. Another option exists for deriving the location
of a DNS server using ICMPv6 Router Advertisement messages (see Chapters 8
and 11 and [RFC6106]).

IPv6 Address Lifecycle
IPv6 hosts usually operate with multiple addresses per interface, and each address
has a set of timers indicating how long and for what purposes the corresponding
address can be used. In IPv6, addresses are assigned with a preferred lifetime and
valid lifetime. These lifetimes are used to form timeouts that move an address
from one state to another in an address’s state machine (see Figure 6-11).
Figure 6-11
The lifecycle of an IPv6 address. Tentative addresses are used only for DAD until verified as unique. After that, they become preferred and can be used without restriction
until an associated timeout changes their state to deprecated. Deprecated addresses are
not to be used for initiating new connections and may not be used at all after the associated valid timeout expires.
Figure 6-11 shows the lifecycle of an IPv6 address. An address is in the preferred state when it is available for general use and is available as either a source
or destination IPv6 address. A preferred address becomes deprecated when its
preferred timeout occurs. When it becomes deprecated, it may still be used for
existing transport (e.g., TCP) connections but is not to be used for initiating new connections.
When an address is first selected for use, it enters a tentative or optimistic state.
When in the tentative state, it may be used only for the IPv6 Neighbor Discovery
protocol (see Chapter 8). It is not used as a source or destination address for any
other purposes. While in this state the address is being checked for duplication,
to see if any other nodes on the same network are already using the address. The
procedure for doing this is called duplicate address detection (DAD) and is described
in more detail later in this chapter. An alternative to conventional DAD is called optimistic DAD [RFC4429], whereby a selected address is used for a limited set of
purposes until DAD completes. Because an optimistic use of an address is really
just a special set of rules for DAD, it is not a truly complete state itself. Optimistic
addresses are treated as deprecated for most purposes. In particular, an address
may be both optimistic and deprecated simultaneously, depending on the preferred and valid lifetimes.

DHCPv6 Message Format
DHCPv6 messages are encapsulated as UDP/IPv6 datagrams, with client port 546
and server port 547 (see Chapter 10). Messages are sent using a host’s link-scoped
source address to either relay agents or servers. There are two message formats,
one used directly between a client and a server, and another when a relay is used
(see Figure 6-12).
Figure 6-12 The basic DHCPv6 message format (left) and relay agent message format (right). Most interesting
information in DHCPv6 is carried in options.
The primary DHCPv6 message format is given in Figure 6-12 on the left and
an extended version, which includes the Link Address and Peer Address fields, is
given on the right. The format on the right is used between a DHCPv6 relay agent
and a DHCPv6 server. The Link Address field gives the global IPv6 address used
by the server to identify the link on which the client is located. The Peer Address
field contains the address of the relay agent or client from which the message to be
relayed was received. Note that relaying may be chained, so a relay may be relaying a message received from another relay. Relaying, for DHCPv4 and DHCPv6, is
described in Section 6.2.6.
The message types for messages in the format on the left include typical DHCP-style messages (REQUEST, REPLY, etc.), whereas the message types for messages
in the format on the right include RELAY-FORW and RELAY-REPL, to indicate a
message forwarded from a relay or destined to a relay, respectively. The Options
field for the format on the right always includes a Relay Message option, which
includes the complete message being forwarded by the relay. Other options may
also be included.
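The client/server message layout (a 1-byte message type, a 3-byte transaction ID, then options carrying 2-byte code and length fields) can be sketched as follows (hypothetical helper names; SOLICIT is message type 1 in [RFC3315]):

```python
import struct

def dhcpv6_client_header(msg_type, transaction_id):
    # Client/server format: 1-byte message type followed by a
    # 3-byte transaction ID; options follow this header.
    return bytes([msg_type]) + transaction_id.to_bytes(3, 'big')

def dhcpv6_option(code, data):
    # Unlike DHCPv4's 1-byte tag and length, DHCPv6 options use
    # 2-byte option-code and option-length fields.
    return struct.pack('!HH', code, len(data)) + data
```

A complete SOLICIT would be sent as UDP from client port 546 to server port 547, addressed to the All DHCP Relay Agents and Servers multicast address (ff02::1:2).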
One of the differences between DHCPv4 and DHCPv6 is how DHCPv6 uses
IPv6 multicast addressing. Clients send requests to the All DHCP Relay Agents and
Servers multicast address (ff02::1:2). Source addresses are of link-local scope. In
IPv6, there is no legacy BOOTP message format. The message semantics, however,
are similar. Table 6-1 gives the types of DHCPv6 messages, their values, defining
RFCs, and the roughly equivalent message and defining RFC for DHCPv4.
Table 6-1 DHCPv6 message types, values, and defining standards. The approximately equivalent message
types for DHCPv4 are given to the right.
In DHCPv6, most interesting information, including addresses, lease times,
location of services, and client and server identifiers, is carried in options. Two of
the more important concepts used with these options are called the Identity Association (IA) and the DHCP Unique Identifier (DUID). We discuss them next.

Identity Association (IA)
An Identity Association (IA) is an identifier used between a DHCP client and server
to refer to a collection of addresses. Each IA comprises an IA identifier (IAID)
and associated configuration information. Each client interface that requests a
DHCPv6-assigned address requires at least one IA. Each IA can be associated with
only a single interface. The client chooses the IAID to uniquely identify each IA,
and this value is then shared with the server.
The configuration information associated with an IA includes one or more
addresses and associated lease information (T1, T2, and total lease duration values). Each address in an IA has both a preferred and a valid lifetime [RFC4862],
which define the address’s lifecycle. The types of addresses requested may be
regular addresses or temporary addresses [RFC4941]. Temporary addresses are
derived in part from random numbers to help improve privacy by frustrating the
tracking of IPv6 hosts based on IPv6 addresses. Temporary addresses are ordinarily assigned at the same time nontemporary addresses are assigned but are regenerated using a different random number more frequently.
When responding to a request, a server assigns one or more addresses to a
client’s IA based on a set of address assignment policies determined by the server’s
administrator. Generally, such policies depend on the link on which the request
arrived, standard information about the client (see the DUID description later in this section), and
other information supplied by the client in DHCP options. The formats of the IA
option for nontemporary and temporary addresses are as shown in Figure 6-13.
Figure 6-13 The format for a DHCPv6 IA for nontemporary addresses (left) and temporary addresses (right).
Each option may include additional options describing particular IPv6 addresses and corresponding leases.
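As a byte-level sketch of the two layouts in Figure 6-13, using the [RFC3315] option codes (IA_NA is option 3, IA_TA is option 4); the helper names here are ours, not from any standard library:

```python
import struct

OPTION_IA_NA, OPTION_IA_TA = 3, 4  # [RFC3315] option codes

def encode_ia_na(iaid, t1, t2, sub_options=b""):
    """IA for nontemporary addresses: IAID, T1, T2, then embedded options."""
    body = struct.pack("!III", iaid, t1, t2) + sub_options
    return struct.pack("!HH", OPTION_IA_NA, len(body)) + body

def encode_ia_ta(iaid, sub_options=b""):
    """IA for temporary addresses: IAID only; no T1/T2, as noted in the text."""
    body = struct.pack("!I", iaid) + sub_options
    return struct.pack("!HH", OPTION_IA_TA, len(body)) + body

# The client in the later example chooses IAID 09001302 and leaves T1/T2 at 0:
opt = encode_ia_na(0x09001302, 0, 0)
assert len(opt) == 4 + 12  # 4-byte option header + IAID/T1/T2
```

The 4-byte difference between the two encodings is exactly the missing T1/T2 pair discussed below.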
The main difference between a nontemporary and a temporary address IA
option, as shown in Figure 6-13, is the inclusion of the T1 and T2 values in the
nontemporary case. These values are expected, as they are also the values used in
DHCPv4. For temporary addresses, the lack of T1 and T2 is made possible because
the lifetimes are generally determined based upon the T1 and T2 values assigned
to a nontemporary address that has been acquired previously. Details of temporary addresses are given in [RFC4941].

DHCP Unique Identifier (DUID)
A DHCP Unique Identifier (DUID) identifies a single DHCPv6 client or server and
is designed to be persistent over time. It is used by servers to identify clients for
the selection of addresses (as part of IAs) and configuration information, and
by clients to identify the server in which they are interested. DUIDs are variable
in length and are treated as opaque values by both clients and servers for most purposes.
DUIDs are supposed to be globally unique yet easy to generate. To satisfy
these concerns simultaneously, [RFC3315] defines three different types of possible
DUIDs but also mentions that these are not the only three types that might ever be
created. The three types of DUIDs are as follows:
1. DUID-LLT: a DUID based on link-layer address plus time
2. DUID-EN: a DUID based on enterprise number and vendor assignment
3. DUID-LL: a DUID based on link-layer address only
Section 6.2 Dynamic Host Configuration Protocol (DHCP)
The standard format for encoding a DUID begins with a 2-byte identifier indicating which type of DUID is being expressed. The current list is maintained by
the IANA [ID6PARAM]. This is followed by a 16-bit hardware type derived from
[RFC0826] in the cases of DUID-LLT and DUID-LL, and a 32-bit Private Enterprise
Number in the case of DUID-EN.
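A minimal sketch of constructing a DUID-LLT (type 1), assuming the Ethernet hardware type (1) from [RFC0826] and a timestamp counted in seconds since January 1, 2000 (UTC), as described below; the function name is ours:

```python
import struct
from datetime import datetime, timezone

DUID_EPOCH = datetime(2000, 1, 1, tzinfo=timezone.utc)

def encode_duid_llt(mac, when=None):
    """Build a DUID-LLT: 2-byte type (1), 2-byte hardware type (1 = Ethernet),
    4-byte timestamp (seconds since 2000-01-01 UTC, mod 2**32), link-layer address."""
    when = when or datetime.now(timezone.utc)
    secs = int((when - DUID_EPOCH).total_seconds()) % 2**32  # rolls over in 2136
    lladdr = bytes.fromhex(mac.replace(":", ""))
    return struct.pack("!HHI", 1, 1, secs) + lladdr

duid = encode_duid_llt("00:14:22:f4:19:5f")
assert len(duid) == 2 + 2 + 4 + 6  # type, hardware type, timestamp, MAC
```

A DUID-LL would be the same sketch without the timestamp field (and type code 3).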
A Private Enterprise Number (PEN) is a 32-bit value given out by the IANA to an
enterprise. It is usually used in conjunction with the SNMP protocol for network
management purposes. About 38,000 of them have been assigned as of mid-2011. The current list is available from the IANA [IEPARAM].
The first form of DUID, DUID-LLT, is the recommended form. Following
the hardware type, it includes a 32-bit timestamp containing the number of seconds since midnight (UTC), January 1, 2000 (mod 2^32). This rolls over (returns
to zero) in the year 2136. The last portion is a variable-length link-layer address.
The link-layer address can be selected from any of the host’s interfaces, and the
same DUID should be used, once selected, for traffic on any interface. This form of
DUID is required to be stable even if the network interface from which the DUID
was derived is removed. Thus, it requires the host system to maintain stable storage. The DUID-LL form is very similar but is recommended for systems lacking
stable storage (but having a stable link-layer address). The RFC says that a DUID-LL must not be used by clients or servers that cannot determine if the link-layer
address they are using is associated with a removable interface.

Protocol Operation
The DHCPv6 protocol operates much like its DHCPv4 counterpart. Whether or
not a client initiates the use of DHCP is dependent on configuration options carried in an ICMPv6 Router Advertisement message the host receives (see Chapter
8). Router advertisements include two important bit fields. The M field is the Managed Address Configuration flag and indicates that IPv6 addresses can be obtained
using DHCPv6. The O field is the Other Configuration flag and indicates that information other than IPv6 addresses is available using DHCPv6. Both fields, along
with several others, are specified in [RFC5175]. Any combination of the M and O
bit fields is possible, although having M on and O off is probably the least useful
combination. If both are off, DHCPv6 is not used, and address assignment takes
place using stateless address autoconfiguration, described in Section 6.3. Having
M off and O on indicates that clients should use stateless DHCPv6 and obtain their
addresses using stateless address autoconfiguration. The DHCPv6 protocol operates using the messages defined in Table 6-1 and illustrated in Figure 6-14.
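The four M/O combinations just described can be expressed as a simple decision function; this is a sketch of the logic, not code from any implementation:

```python
def configuration_method(m: bool, o: bool) -> str:
    """Map the RA's Managed (M) and Other (O) flags to the client's behavior."""
    if m and o:
        return "stateful DHCPv6 for addresses and other configuration"
    if m and not o:
        return "DHCPv6 addresses only (the least useful combination)"
    if not m and o:
        return "SLAAC for addresses, stateless DHCPv6 for other configuration"
    return "SLAAC only; DHCPv6 not used"
```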
Typically, a client starting out first determines what link-local address to use
and performs an ICMPv6 Router Discovery operation (see Chapter 8) to determine
if there is a router on the attached network. A router advertisement includes the M
and O bit fields mentioned previously. If DHCPv6 is in use, at least the M bit field
Figure 6-14
Basic operation of DHCPv6. A client determines whether or not to use DHCPv6 from
information carried in ICMPv6 router advertisements. If used, DHCPv6 operations are
similar to those in DHCPv4 but differ significantly in the details.
is set and the client multicasts (see Chapter 9) the DHCPv6 SOLICIT message to find DHCPv6 servers. A response comes in the form of one or more ADVERTISE
messages, indicating the presence of at least one DHCPv6 server. These messages
constitute two of the so-called four-message exchange operations of DHCPv6.
In cases where the location of a DHCPv6 server is already known or an address
need not be allocated (e.g., stateless DHCPv6 or the Rapid Commit option is being
used—see Section 6.2.9), the four-message exchange can be shortened to become
a two-message exchange, in which case only the REQUEST and REPLY messages
are used. A DHCPv6 server commits a binding formed from the combination of
a DUID, IA type (temporary, nontemporary, or prefix—see the prefix delegation discussion below), and
IAID. The IAID is a 32-bit number chosen by the client. Each binding can have
one or more leases, and one or more bindings can be manipulated using a single
DHCPv6 transaction.

Extended Example
Figure 6-15 shows an example of a Windows Vista (Service Pack 1) machine attaching to a wireless network. Its IPv4 stack has been disabled. It begins by assigning
its link-local address and checking to see if that address is already being used.
Figure 6-15
DAD for the client system’s link-local address is a Neighbor Solicitation for its own IPv6 address.
In Figure 6-15 we see the ICMPv6 Neighbor Solicitation (DAD) for the client’s
optimistic address fe80::fd26:de93:5ab7:405a. (DAD is described in more detail
when we discuss stateless address autoconfiguration in Section 6.3.) The packet
is sent to the corresponding solicited-node address ff02::1:ffb7:405a. It optimistically assumes that this address is not otherwise in use on the link, so it continues
on immediately with a Router Solicitation (RS) (see Figure 6-16).
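The solicited-node address seen here can be derived from the unicast address by appending its low-order 24 bits to the well-known prefix ff02::1:ff00:0/104; a sketch:

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """Form the solicited-node multicast address for an IPv6 unicast address."""
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF  # low-order 24 bits
    return str(ipaddress.IPv6Address(base | low24))

print(solicited_node("fe80::fd26:de93:5ab7:405a"))  # ff02::1:ffb7:405a
```

This reproduces the mapping in the trace: fe80::fd26:de93:5ab7:405a solicits ff02::1:ffb7:405a.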
The RS shown in Figure 6-16 is sent to the All Routers multicast address ff02::2.
It induces each router on the network to respond with a Router Advertisement
(RA), which carries the important M and O bits the client requires to determine
what to do next.
This example shows a router solicitation being sent from an optimistic address
including a source link-layer address option (SLLAO), in violation of [RFC4429].
The problem here is potential pollution of neighbor caches in any listening IPv6
routers. They will process the option and establish a mapping in their neighbor
caches between the tentative address and the link-layer address that may be a
duplicate. However, this is very unlikely and is probably not of significant concern.
Nonetheless, a pending “optimistic” option [IDDN], if standardized, will allow a
router solicitation to include an SLLAO that avoids this issue.
The RA in Figure 6-17 indicates the presence of a router, including its SLLAO
of 00:04:5a:9f:9e:80, which will be useful to the client for encapsulating subsequent
link-layer frames destined for the router. The Flags field indicates that the M and
O bit fields are both enabled (set to 1), so the client should proceed with DHCPv6,
both for obtaining its addresses as well as for obtaining other configuration information. This is accomplished by soliciting a DHCPv6 server (see Figure 6-18).
The DHCPv6 SOLICIT message shown in Figure 6-18 includes a transaction
ID (as in DHCPv4), an elapsed time (0, not shown), and the DUID consisting of a
time and 6-byte MAC address. In this example, the MAC address 00:14:22:f4:19:5f
is the MAC address of the wired Ethernet interface on this client, which is not the
interface being used to send the SOLICIT message. Recall that for DUID-LL and
DUID-LLT types of DUIDs the link-layer information should be the same across
interfaces. The IA is for a nontemporary address, and the client has selected the
IAID 09001302. The time values are left at 0 in the request, meaning that the client
is not expressing a particular desire; they will be determined by the server.
The next option is the FQDN option specified by [RFC4704]. It is used to carry
the FQDN of the client but also to affect how DHCPv6 and DNS interact (see Section 6.4 on DHCP and DNS interaction). This option is used to enable dynamic
updates to FQDN-to-IPv6 address mapping by client or server. (The reverse is
generally handled by the server.) The first portion of this option contains three
Figure 6-16
The Router Solicitation induces a nearby router to provide a Router Advertisement. The solicitation message is sent to the All Routers address ff02::2.
Figure 6-17
A Router Advertisement indicates that addresses are managed (available by assignment using
DHCPv6) and that other information (e.g., DNS server) is also available using DHCPv6. This network uses stateful DHCPv6. IPv6 Router Advertisement messages use ICMPv6 (see Chapter 8).
bit fields: N (server should not perform update), O (client request overridden by
server), and S (server should perform update). The second portion of the option
contains a domain name, which may be fully qualified or not.
The Wireshark tool indicates that the FQDN name record in Figure 6-18 is malformed and speculates that the packet may have been generated by a MS Vista
client, which indeed it was. The field is malformed because the original specification for this option allowed a simple domain name encoding using
ASCII characters. This method has been deprecated by [RFC4704], and the two
encodings are not directly compatible. Microsoft provides a “hotfix” to address
this issue for Vista systems. Microsoft Windows 7 systems exhibit behavior compliant with [RFC4704].
Other information in the solicitation message includes the identification of the
vendor class and requested option list. In this case, the vendor class data includes
the string "MSFT 5.0", which can be used by a DHCPv6 server to determine what
types of processing the client is capable of doing. In response to the client’s solicitation, the server responds with an ADVERTISE message (see Figure 6-19).
Figure 6-18
The DHCPv6 SOLICIT message requests the location of one or more DHCPv6 servers and includes
information identifying the client and the options in which it is interested.
The ADVERTISE message shown in Figure 6-19 provides a wealth of information to the client. The Client Identifier option echoes the client’s configuration
information. The Server Identifier option gives the time plus a link-layer address
of 10:00:00:00:09:20 to identify the server. The IA has the value IAID 09001302
(provided by the client) and includes the global address 2001:db8:0:f101::10fd with
preferred lifetime and valid lifetime of 130 and 200s, respectively (fairly short
timeouts). The status code of 0 indicates success. Also provided with the DHCPv6
Figure 6-19
The DHCPv6 ADVERTISE message includes an address and lease, plus DNS server IPv6 address
and domain search list.
advertisement is the DNS Recursive Name Server option [RFC3646] indicating a
server address of 2001:db8:0:f101::1 and a Domain Search List option containing
the string home. Note that the server does not include an FQDN option, as it does
not implement that option.
The next two packets are conventional Neighbor Solicitation and Neighbor Advertisement messages between the client and the router, which we do not detail
further. That exchange is followed by the client’s request for a commitment of the
global nontemporary address 2001:db8:0:f101::10fd (see Figure 6-20).
The REQUEST message shown in Figure 6-20 is very similar to the SOLICIT
message but includes the information carried in the ADVERTISE message from
the server (address, T1, and T2 values). The transaction ID remains the same for
all of the DHCPv6 messages we have seen. The exchange is completed with the
REPLY message, which is identical to the ADVERTISE message except for the different message type and therefore is not detailed.
Figure 6-20
The DHCPv6 REQUEST message is similar to a SOLICIT message but includes information
learned from the server’s ADVERTISE message.
The DHCPv6 messages exchanged in this example are typical of a system
when it boots or is attached to a new network. As with DHCPv4, it is possible to
induce a system to perform the release or acquisition of this information by hand.
For example, in Windows the following command will release the data acquired
using DHCPv6:
C:\> ipconfig /release6
and the following command will acquire it:
C:\> ipconfig /renew6
The type of information acquired by DHCP and assigned to the local interface can be ascertained with another variant of this command that we have seen
before. Here is an excerpt of its output:
C:\> ipconfig /all
Wireless LAN adapter Wireless Network Connection:
Connection-specific DNS Suffix . : home
Description . . . . . . . . . . . : Intel(R) PRO/Wireless 3945ABG
Network Connection
Physical Address. . . . . . . . . : 00-13-02-20-B9-18
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
IPv6 Address. . . . . . . . . . . : 2001:db8:0:f101::12cd(Preferred)
Lease Obtained. . . . . . . . . . : Sunday, December 21, 2008
11:30:45 PM
Lease Expires . . . . . . . . . . : Sunday, December 21, 2008
11:37:04 PM
Link-local IPv6 Address . . . . . :
Default Gateway . . . . . . . . . : fe80::204:5aff:fe9f:9e80%9
DHCPv6 IAID . . . . . . . . . . . : 150999810
DHCPv6 Client DUID. . . . . . . . :
DNS Servers . . . . . . . . . . . : 2001:db8:0:f101::1
NetBIOS over Tcpip. . . . . . . . : Disabled
Connection-specific DNS Suffix Search List :
Here we can see the link-layer address of the system (00:13:02:20:b9:18).
Note how this address was never used as a basis for forming the IPv6 addresses
in this example.

DHCPv6 Prefix Delegation (DHCPv6-PD and 6rd)
Although the discussion so far has revolved around configuring hosts, DHCPv6
can also be used to configure routers. This works by having one router delegate a
range of address space to another router. The range of addresses is described by
an IPv6 address prefix. The prefix is carried in a DHCP Prefix option, defined by
[RFC3633]. This is used in situations where the delegating router, which now acts
as a DHCPv6 server as well, does not require detailed topology information about
the network to which the prefix is being delegated. Such a situation can arise, for
example, when an ISP gives out a range of IP addresses to be used and potentially
reassigned by a customer. In such a circumstance, the ISP may choose to delegate
a prefix to the customer’s premises equipment using DHCPv6-PD.
With prefix delegation, a new form of IA called an IA_PD is defined. Each
IA_PD consists of an IAID and associated configuration information and is similar to an IA for addresses, as discussed previously. DHCPv6-PD is useful not only
for prefix delegation for fixed routers, but is also suggested to be used when routers (and their attached subnets) can be mobile [RFC6276].
A special form of PD (6rd, described in [RFC5569]) has been created for supporting IPv6 rapid deployment by service providers. The OPTION_6RD (212) option
[RFC5969] holds the IPv6 6rd prefix that is used in assigning IPv6 addresses at a
customer’s site based on the customer’s assigned IPv4 address. IPv6 addresses are
algorithmically assigned by taking the service provider’s provisioned 6rd prefix as
the first n bits, with n being recommended as less than 32. A customer’s assigned
unicast IPv4 address is then appended as the next 32 (or fewer) bits, resulting in an
IPv6 6rd delegated prefix that is handled identically to DHCPv6-PD and is recommended to be 64 bits or shorter in length to allow automatic address configuration
(see Section 6.4) to operate without problems.
The OPTION_6RD option is variable in length and includes the following values: the IPv4 mask length, 6rd prefix length, 6rd prefix, and a list of 6rd
relay addresses (IPv4 addresses of relays that provide 6rd). The IPv4 mask length
gives the number of bits from the IPv4 address to use in assigning IPv6 addresses
(counted from the left).
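The derivation just described can be sketched as follows; the provider prefix, mask length, and customer address below are illustrative values, not taken from the chapter's traces:

```python
import ipaddress

def sixrd_delegated_prefix(sixrd_prefix: str, v4_mask_len: int, v4_addr: str):
    """Derive a 6rd delegated prefix: the provider's 6rd prefix followed by
    the low (32 - IPv4MaskLen) bits of the customer's IPv4 address."""
    net = ipaddress.IPv6Network(sixrd_prefix)
    v4_bits = int(ipaddress.IPv4Address(v4_addr)) & ((1 << (32 - v4_mask_len)) - 1)
    plen = net.prefixlen + (32 - v4_mask_len)           # delegated prefix length
    base = int(net.network_address) | (v4_bits << (128 - plen))
    return ipaddress.IPv6Network((base, plen))

# e.g., provider 6rd prefix 2001:db8::/32, all 32 IPv4 bits appended:
print(sixrd_delegated_prefix("2001:db8::/32", 0, "192.0.2.1"))
# -> 2001:db8:c000:201::/64
```

With an IPv4 mask length of 0 and a /32 provider prefix, the result is the recommended-maximum /64 delegated prefix; a larger mask length shortens it further.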
Using DHCP with Relays
In most simple networks, a single DHCP server is made available directly to clients on the same LAN. However, in more complicated enterprises it may be necessary or convenient to relay DHCP traffic through one or more DHCP relay agents,
as illustrated in Figure 6-21.
Figure 6-21
A DHCP relay agent extends the operation of DHCP beyond a single network segment.
Information carried only between relays and DHCPv4 servers can be carried in the
Relay Agent Information option. Relaying in DHCPv6 works in a similar fashion but
with a different set of options.
A relay agent is used to extend the operation of DHCP across multiple network
segments. In Figure 6-21 the relay between network segments A and B forwards
DHCP messages and may annotate the messages with additional information
using options or by filling in empty fields. Note that in ordinary circumstances,
a relay does not participate in all DHCP traffic exchanged between a client and
a server. Rather, it relays only those messages that are broadcast (or multicast in
IPv6). Such messages are usually exchanged when a client is obtaining its address
for the first time. Once a client has acquired an IP address and the server’s IP
address using the Server Identification option, it can carry out a unicast conversation with the server that does not involve the relay. Note that relay agents have traditionally been layer 3 devices and tend to incorporate routing capabilities. After
discussing the basics of layer 3 relays, we will look briefly at alternatives that operate (mostly) at layer 2.
Relay Agent Information Option
In the original concept of a BOOTP or DHCP relay [RFC2131], a relay agent served
the purpose only of relaying a message from one subnet to another that would
otherwise not be passed on by a router. This allowed systems that could not
yet perform indirect delivery to acquire an address from a centralized location.
This is sensible for a network operating in an enterprise under one administrative authority, but in cases where DHCP is used at a subscriber’s premises and
the DHCP infrastructure is provided elsewhere (e.g., an ISP), more information
may be required. There are a number of possible reasons. For example, the ISP
may not trust the subscriber completely, or billing and logging may be associated
with other information not available in the basic DHCP protocol. It has therefore
become useful to include extra information in the messages that pass between the
relay and the server. The Relay Agent Information option (for DHCPv4, abbreviated RAIO) [RFC3046] provides ways to include such information for IPv4 networks. IPv6 works somewhat differently, and we cover it in the following section.
The RAIO for DHCPv4 specified in [RFC3046] is really a meta-option, in
the sense that it specifies a framework in which a number of suboptions can be
defined. Many such suboptions have been defined, including several that are used
by ISPs to identify from which user, circuit, or network a request is coming. In
many cases we shall see that a suboption of the DHCPv4 information option has a
corresponding IPv6 option.
Because some of the information conveyed between a relay and a server may
be important to secure, the DHCP Authentication suboption of the RAIO has been
defined in [RFC4030]. It provides a method to ensure data integrity of the messages exchanged between relay and server. The approach is very similar to the
DHCP deferred authentication method (see Section 6.2.7), except the SHA-1 algorithm is used instead of the MD5 algorithm (see Chapter 18).

Relay Agent Remote-ID Suboption and IPv6 Remote-ID Option
One common requirement placed upon a relay is to identify the client making a
DHCP request with information beyond what the client itself provides. A suboption of the Relay Agent Information option, called the Remote-ID suboption,
provides a way to identify the requesting DHCP client using a number of naming approaches that are locally interpreted (e.g., caller ID, user name, modem ID,
remote IP address of a point-to-point link). The DHCPv6 Relay Agent Remote-ID
option [RFC4649] provides the same capability but also includes an extra field,
the enterprise number, which indicates the vendor associated with the identifying information. This format of the Remote-ID information is then specified in a
vendor-specific way based on the enterprise number. A common method is to use
a DUID for the remote ID.

Server Identifier Override
In some cases a relay may wish to interpose itself for processing between a
DHCP client and server. This can be accomplished with a special Server Identifier
Override suboption [RFC5107]. The suboption is a variant of the RAIO mentioned previously.
Ordinarily, a relay forwards SOLICIT messages and may append options to
these messages as they pass from client to server. Relays are necessary in this circumstance because the client is likely to not yet have an acceptable IP address and
only sends its messages to the local subnet using broadcast or multicast addressing. Once a client receives and selects its address, it can talk directly to the DHCP
server based upon the server’s identity carried in the Server Identifier option. In
effect, this cuts the relay out of subsequent transactions between client and server.
It is often useful to allow the relay to include a variety of options (e.g., RAIO
carrying a circuit ID) for other types of messages, such as REQUEST, in addition
to SOLICIT. This option includes a 4-byte value specifying the IPv4 address to use
in the Server Identifier option present in DHCP reply messages formed by servers. The Server Identifier Override suboption is supposed to be used in conjunction
with the Relay Agents Flag suboption [RFC5010]. This suboption of the RAIO is a
set of flags that carry information from relay to server. So far, only one such flag
is defined: whether the destination address on the initial message from the client
used broadcast or unicast addressing. The server may make different address allocation decisions based upon the setting of this flag.

Lease Query and Bulk Lease Query
In some environments it is useful to allow a third-party system (such as a relay
or access concentrator) to learn the address bindings for a particular DHCP client.
This facility is provided by DHCP leasequery ([RFC4388][RFC6148] for DHCPv4
and [RFC5007] for DHCPv6). In the case of DHCPv6, it can also provide lease
information for delegated prefixes. In Figure 6-21, the relay agent may “glean”
information from DHCP packets that pass through it in order to influence what
information is provided to the DHCP server. Such information may be kept by the
relay but may be lost upon relay failure. The DHCPLEASEQUERY message allows
such an agent to reacquire this type of information on demand, usually when
relaying traffic for which it has lost a binding. The DHCPLEASEQUERY message
supports four types of queries for DHCPv4: IPv4 address, MAC address, Client
Identifier, and Remote ID. For DHCPv6, it supports two: IPv6 address and Client
Identifier (DUID).
DHCPv4 servers may respond to lease queries with one of the following types of messages: DHCPLEASEUNASSIGNED, DHCPLEASEACTIVE, or
DHCPLEASEUNKNOWN. The first message indicates that the responding server
is authoritative for the queried value but no current associated lease is assigned.
The second form indicates that a lease is active, and the lease parameters (including T1 and T2) are provided. There is no particular presumed use for this information; it is made available to the requestor for whatever purposes it desires.
DHCPv6 servers respond with a LEASEQUERY-REPLY message that contains
a Client Data option. This option, in turn, includes a collection of the following
options: Client ID, IPv6 Address, IPv6 Prefix, and Client Last Transaction Time.
The last value is the time (in seconds) since the server last communicated with
the client in question. A LEASEQUERY-REPLY message may also contain the following two options: Relay Data and Client Link. The first includes the data last
sent from a relay about the associated query, and the second indicates the link on
which the subject client has one or more address bindings. Once again, this information is used for whatever purposes the requestor desires.
An extension to lease query called Bulk Leasequery (BL) [RFC5460][ID4LQ]
allows multiple bindings to be queried simultaneously, uses TCP/IP rather than
UDP/IP, and supports a wider range of query types. BL is designed as a special
service for obtaining binding information and is not really part of conventional
DHCP. Thus, clients wishing to obtain conventional configuration information do
not use BL. One particular use of BL is when DHCP is being used for prefix delegation. In this case, it is common for a router to be acting as a DHCP-PD client. It
obtains a prefix and then provides an address from the address range represented
by the prefix as an assignment to conventional DHCP clients. However, if such a
router fails or reboots, it may lose the prefix information and have a difficult time
recovering because the conventional lease query mechanism requires an identifier
for the binding in order to form the query. BL helps this situation, and others, by
generalizing the set of possible query types.
BL provides several extensions to basic lease query. First, it uses TCP/IP (port
547 for IPv6 and port 67 for IPv4) instead of UDP/IP. This change allows for large
amounts of query information to be returned for a single query, as may be necessary when retrieving a large number of delegated prefixes. BL also provides a Relay
Identifier option to allow queries to identify the querier more easily. A BL query can then be based on relay identifier, link address (network segment), or remote ID.
The Relay ID DHCPv6 option and Relay ID DHCPv4 suboption [ID4RI] may
include a DUID that identifies the relay agent. Relays can insert this option in messages they forward, and the server can use it to associate bindings it receives with
the particular relay providing them. BL supports queries by address and DUID
specified in [RFC5007] and [RFC4388] but also queries by relay ID, link address,
and remote ID. These newer queries are supported only on TCP/IP-based servers
that support BL. Conversely, BL servers support only LEASEQUERY messages, not
the full set of ordinary DHCP messages.
BL extends the basic lease query mechanism with the LEASEQUERY-DATA
and LEASEQUERY-DONE messages. When responding successfully to a query, a
server first includes a LEASEQUERY-REPLY message. If additional information is
available, it includes a set of LEASEQUERY-DATA messages, one per binding, and
completes the set with a LEASEQUERY-DONE message. All messages pertaining
to the same group of bindings share a common transaction ID, the same one provided in the initial LEASEQUERY-REQUEST message.

Layer 2 Relay Agents
In some network environments, there are layer 2 devices (e.g., switches, bridges)
that are located near end systems that relay and process DHCP requests. These
layer 2 devices do not have a full TCP/IP implementation stack and are not addressable using IP. As a result, they cannot act as conventional relay agents. To deal with
this issue, [IDL2RA] and [RFC6221] specify how layer 2 “lightweight” DHCP relay
agents (LDRAs) should behave, for IPv4 and IPv6, respectively. When referring to
relay behaviors, interfaces are labeled as client-facing or network-facing, and as
either trusted or untrusted. Network-facing interfaces are topologically closer to
DHCP servers, and trusted interfaces are those where it is assumed that arriving
packets are not spoofed.
The primary issue for IPv4 LDRAs is how to handle the DHCP giaddr field and
insert a RAIO when the LDRA itself has no IP layer information. The approach
recommended by [IDL2RA] is to have LDRAs insert the RAIO into DHCP requests
received from clients but not fill in the giaddr field. The resulting DHCP message
is sent in a broadcast fashion to one or more DHCP servers, as well as any other
receiving LDRAs. Such messages are flooded (i.e., sent on all interfaces except
the one upon which the message was received) unless received on an untrusted
interface. LDRAs receiving such a message already including a RAIO do not add
another such option but perform flooding. Responses (e.g., DHCPOFFER messages) sent using broadcast may be intercepted by the LDRA, which in turn strips
the RAIO and uses its information to forward the response to the original requesting client. Many LDRAs also intercept unicast DHCP traffic. In these cases, the
RAIO is also created or stripped as necessary. Note that compatible DHCP servers must support the ability to process and return DHCP messages containing
RAIOs without a valid giaddr field, whether such messages are sent using unicast
or broadcast.
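The request-path behavior just described (trust check, RAIO insertion without touching giaddr, flooding) might be sketched as follows; the data structures and names are ours, purely illustrative:

```python
def ldra_handle_client_request(msg: dict, in_iface: dict, ifaces: list) -> list:
    """Decide what an IPv4 LDRA does with a DHCP request from a client.
    Returns the list of interfaces on which to flood the (possibly annotated)
    message; an empty list means the message is dropped."""
    if not in_iface["trusted"]:
        return []  # discard requests arriving on untrusted interfaces
    if "raio" not in msg:
        # insert a RAIO identifying the client-facing circuit;
        # the giaddr field is deliberately left unfilled (no IP layer here)
        msg["raio"] = in_iface["circuit_id"]
    # flood on every interface except the one the message arrived on;
    # if a RAIO was already present, we did not add another but still flood
    return [i for i in ifaces if i is not in_iface]
```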
IPv6 LDRAs process DHCPv6 traffic by creating RELAY-FORW and RELAY-REPL messages. ADVERTISE, REPLY, RECONFIGURE, and RELAY-REPL messages received on client-facing interfaces are discarded. In addition, RELAY-FORW
messages received on untrusted client-facing interfaces are also discarded as a
security precaution. RELAY-FORW messages are built containing options that
identify the client-facing interface (i.e., Link-Address field, Peer-Address field, and
Interface-ID option). The Link-Address field is set to 0, the Peer-Address field is set
to the client’s IP address, and the Interface-ID option is set to a value configured
in the LDRA. When receiving a RELAY-REPL message containing a Link-Address
field with value 0, the LDRA decapsulates the included message and sends it toward the client on the interface specified in the received Interface-ID option (provided by the server). RELAY-FORW messages received on client-facing interfaces are modified by incrementing the hop count. Messages other than RELAY-REPL messages received on network-facing interfaces are dropped.
DHCP Authentication
While we ordinarily discuss various security vulnerabilities at the end of each
chapter (as we do in this one), for DHCP it is worth mentioning them here. It
should be apparent that if the smooth operation of DHCP is interfered with, hosts
are likely to be configured with erroneous information and significant disruption
could result. Unfortunately, as we have discussed so far, DHCP has no provision
for security, so it is possible for unauthorized DHCP clients or servers to be set
up, either intentionally or accidentally, that could cause havoc with an otherwise
functioning network.
In an attempt to mitigate these problems, a method to authenticate DHCP
messages is specified in [RFC3118]. It defines a DHCP option, the Authentication
option, with the format shown in Figure 6-22.
Figure 6-22 The DHCP Authentication option includes replay detection and can use various methods for authentication. Specified back in 2001, this option is not widely used today.
The purpose of the Authentication option is to help determine whether a
DHCP message has come from an authorized sender. The Code field is set to 90,
and the Length field gives the number of bytes in the option (not including the
Code or Length fields). If the Protocol and Algorithm fields have the value 0, the
Authentication Information field holds a simple shared configuration token. As long as
the configuration token matches at the client and server, the message is accepted.
This could be used, for example, to hold a password or similar text string, but such
traffic could be intercepted by an attacker, so this method is not very secure. It
might help to fend off accidental DHCP problems, however.
A somewhat more secure method involves so-called deferred authentication,
indicated if the Protocol and Algorithm fields are set to 1. In this case, the client’s
DHCPDISCOVER or DHCPINFORM message includes an Authentication option,
and the server responds with authentication information included in its DHCPOFFER or DHCPACK message. The authentication information includes a message
authentication code (MAC; see Chapter 18), which provides authentication of the
sender and an integrity check on the message contents. Assuming that the server
and client have a shared secret, the MAC can be used to ensure that the client is
trusted by the server and vice versa. It can also be used to ensure that the DHCP
messages exchanged between them have not been modified or replayed from an
earlier DHCP exchange. The replay detection method (RDM) is determined by
the value of the RDM field. For RDM set to 0, the Replay Detection field contains a
monotonically increasing value (e.g., timestamp). Received messages are checked
to ensure that this value always increases. If the value does not increase, it is
likely that an earlier DHCP message is simply being replayed (captured, stored,
and played back later). It is conceivable that the value in the Replay Detection field
could fail to advance in a situation where packets are reordered, but this is highly
unlikely in a LAN (where DHCP is most prevalent) because only a single routing
path is ordinarily used between the DHCP client and server.
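A receiver-side sketch of these two checks, assuming HMAC-MD5 as the MAC algorithm for deferred authentication and RDM 0; the function name and arguments are illustrative, not from any particular implementation:

```python
import hashlib
import hmac

def authenticate(msg_bytes, shared_key, received_mac, replay_value,
                 last_replay_value):
    """Receiver-side sketch of the deferred-authentication checks.

    1. Replay detection (RDM 0): the Replay Detection value must be
       strictly greater than the last value seen from this peer.
    2. Origin/integrity: recompute the HMAC-MD5 over the message and
       compare it to the received MAC.  (A real implementation
       computes the HMAC with the option's MAC field zeroed out.)
    """
    if replay_value <= last_replay_value:
        return False   # likely a captured-and-replayed message
    expected = hmac.new(shared_key, msg_bytes, hashlib.md5).digest()
    return hmac.compare_digest(expected, received_mac)
```

A message failing either check is discarded; only the shared key, never the MAC itself, lets a peer produce a MAC that verifies.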
There are (at least) two reasons why DHCP authentication has not seen widespread use. First, the approach requires shared keys to be distributed between a
DHCP server and each client requiring authentication. Second, the Authentication
option was specified after DHCP was already in relatively widespread use. Nonetheless, [RFC4030] builds upon this specification to help secure DHCP messages
passed through relay agents (see Section 6.2.6).
Reconfigure Extension
In ordinary operation, a DHCP client initiates the renewal of address bindings.
[RFC3203] defines the reconfigure extension and associated DHCPFORCERENEW
message. This extension allows a server to cause a single client to change to the
Renewing state and attempt to renew its lease by an otherwise ordinary operation (i.e., DHCPREQUEST). A server that does not wish to renew the lease for the
requested address may respond with a DHCPNAK, causing the client to restart
in the INIT state. The client would then begin again using a DHCPDISCOVER message.
The purpose of this extension is to cause the client to reestablish an address or
to cause it to lose its address as the result of some significant change of state within
the network. This could happen, for example, if the network is being administratively taken down or renumbered. Because this message is such an obvious
candidate for a DoS attack, it must be authenticated using DHCP authentication.
Because DHCP authentication is not in widespread use, neither is the reconfigure extension.
Rapid Commit
The DHCP Rapid Commit option [RFC4039] allows a DHCP server to respond
to the DHCPDISCOVER message with a DHCPACK, effectively skipping the
DHCPREQUEST message and ultimately using a two-message exchange instead
of a four-message exchange. The motivation for this option is to quickly configure
hosts that may change their point of network attachment frequently (i.e., mobile
hosts). When only a single DHCP server is available and addresses are plentiful,
this option should be of no significant concern.
To use rapid commit, a client includes the option in a DHCPDISCOVER message; it is not permitted to include it in any other message. Similarly, a server uses
this option only in DHCPACK messages. When a server responds with this option,
the receiving client knows that the returned address may be used immediately. If
it should determine later that the address is already in use by another system (e.g.,
via ARP), the client sends a DHCPDECLINE message and abandons the address. It
may also voluntarily relinquish the address it has received using a DHCPRELEASE message.
6.2.10 Location Information (LCI and LoST)
In some cases, it is useful for a host being configured to become aware of its location in the world. Such information may be encoded using, for example, latitude,
longitude, and altitude. An IETF effort known as Geoconf (“Geographic configuration”) resulted in [RFC6225], which specifies how to provide such geospatial
Location Configuration Information (LCI) to clients using the GeoConf (123) and
GeoLoc (144) DHCP options. Geospatial LCI includes not only the value of the latitude, longitude, and altitude coordinates, but also resolution indicators for each.
LCI can be used for a number of purposes, including emergency services. If a
caller using an IP phone requests emergency assistance, LCI can be used to indicate where the emergency is taking place.
Although the physical location information just mentioned is useful to locate
a particular individual or system, sometimes it is important to know the civic
location of an entity. The civic location expresses location in terms of geopolitical institutions such as country, city, district, street, and other such parameters.
Civic location information can be provided using DHCP in the same way a physical location can, using the same LCI structure as is used with geospatial LCI.
[RFC4776] defines the GEOCONF_CIVIC (99) option for carrying civic location
LCI. This form of LCI is trickier than the geospatial information because the geopolitical method for naming locations varies by country. An additional complexity
arises because such names may also require languages and character sets beyond
the English and ASCII language and characters ordinarily used with DHCP. There
is also a concern regarding the privacy of location in general, not just with respect
to DHCP. The IETF is undertaking this issue in a framework called “Geopriv.” See,
for example, [RFC3693] for more information.
An alternative high-layer protocol known as the HTTP-Enabled Location Delivery (HELD) protocol [RFC5985] may also be used to provide location information.
Instead of encoding the LCI directly in DHCP messages, the DHCP options OPTION_V4_ACCESS_DOMAIN and OPTION_V6_ACCESS_DOMAIN provide
the FQDN of a HELD server for IPv4 and IPv6, respectively [RFC5986].
Once a host knows its location, it may need to contact services associated with
the location (e.g., the location of the nearest hospital). The IETF Location-to-Service
Translation (LoST) framework [RFC5222] accomplishes this using an applicationlayer protocol accessed using a location-dependent URI. The DHCP options
OPTION_V4_LOST (137) and OPTION_V6_LOST (51) provide for variable-length
encodings of an FQDN specifying the name of a LoST server for DHCPv4 and
DHCPv6, respectively [RFC5223]. The encoding is in the same format used by
DNS for encoding domain names (see Chapter 11).
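As an illustration, this DNS-style encoding prefixes each label with its length and terminates the name with a zero byte. A minimal sketch:

```python
def encode_fqdn(name):
    """Encode an FQDN as DNS-style length-prefixed labels, the format
    carried in the OPTION_V4_LOST/OPTION_V6_LOST option values."""
    out = bytearray()
    for label in name.rstrip('.').split('.'):
        if not 1 <= len(label) <= 63:
            raise ValueError('DNS labels must be 1 to 63 bytes long')
        out.append(len(label))
        out += label.encode('ascii')
    out.append(0)   # zero-length root label terminates the name
    return bytes(out)

encode_fqdn('lost.example.com')   # b'\x04lost\x07example\x03com\x00'
```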
6.2.11 Mobility and Handoff Information (MoS and ANDSF)
In response to the increased use of mobile computers and smartphones accessing
the Internet with cellular technology, frameworks and related DHCP options have
been specified to convey information about the cellular configuration and handovers between different wireless networks. At present, there are two sets of DHCP
options relating to this information: IEEE 802.21 Mobility Services (MoS) Discovery
and Access Network Discovery and Selection Function (ANDSF). The latter framework
is being standardized by the 3rd Generation Partnership Project (3GPP), one of the
organizations responsible for creating cellular data communications standards.
The IEEE 802.21 standard [802.21-2008] specifies a framework for media-independent handoff (MIH) services between various network types, including
those defined by IEEE (802.3, 802.11, 802.16), those defined by 3GPP, and those
defined by 3GPP2. A design of such a framework in the IETF context is provided
in [RFC5677]. MoS provides three types of services known as information services, command services, and event services. Roughly speaking, these services
provide information about available networks, functions for controlling link
parameters, and notification of link status changes. The MoS Discovery DHCP
options [RFC5678] provide a means for a mobile node to acquire the addresses or
domain names of servers providing each of these services using either DHCPv4
or DHCPv6. For IPv4, the OPTION-IPv4_Address-MoS option (139) contains a
vector of suboptions containing IP addresses for servers providing each of the
services. A suboption of the OPTION-IPv4_FQDN-MoS option (140) provides a
vector of FQDNs for servers for each of the services. Similar options, OPTION-IPv6_Address-MoS (54) and OPTION-IPv6_FQDN-MoS (55), provide equivalent capabilities for IPv6.
Based upon 3GPP’s ANDSF specification, [RFC6153] defines DHCPv4 and
DHCPv6 options for carrying ANDSF information. In particular, it defines options
for mobile devices to discover the address of an ANDSF server. ANDSF servers
are configured by cellular infrastructure operators and may hold information
such as the availability and access policies of multiple transport networks (e.g.,
simultaneous use of 3G and Wi-Fi).
The ANDSF IPv4 Address Option (142) contains a vector of IPv4 addresses for
ANDSF servers. The addresses are provided in preference order (first is most preferred). The ANDSF IPv6 Address Option (143) contains a vector of IPv6 addresses
for ANDSF servers. To request ANDSF information using DHCPv4, the mobile node
includes an ANDSF IPv4 Address option in the Parameter Request List. To request
ANDSF information using DHCPv6, the client includes an ANDSF IPv6 Address
option in the Option Request Option (ORO) (see Section 22.7 of [RFC3315]).
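Parsing the value of such an option is straightforward, since it is simply a concatenation of 4-byte addresses in preference order. A sketch (the option value below is illustrative, using documentation addresses):

```python
import socket

def parse_andsf_v4(option_value):
    """Parse the value of an ANDSF IPv4 Address Option (142): a
    concatenation of 4-byte IPv4 addresses in preference order
    (the first is the most preferred)."""
    if len(option_value) % 4 != 0:
        raise ValueError('option value must be a multiple of 4 bytes')
    return [socket.inet_ntoa(option_value[i:i + 4])
            for i in range(0, len(option_value), 4)]

parse_andsf_v4(bytes([192, 0, 2, 1, 198, 51, 100, 7]))
# ['192.0.2.1', '198.51.100.7']
```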
6.2.12 DHCP Snooping
DHCP “snooping” is a capability that some switch vendors offer in their products that inspects the contents of DHCP messages and ensures that only those
addresses listed on an access control list are able to exchange DHCP traffic. This
can help to protect against two potential problems. First, a “rogue” DHCP server is
limited in the damage it can do because other hosts are not able to hear its DHCP
address offers. Also, the technique can limit the allocation of addresses to a particular set of MAC addresses. While this provides some protection, MAC addresses
can be changed in a system fairly easily using operating system commands, so
this technique offers only limited protection.
6.3 Stateless Address Autoconfiguration (SLAAC)
While most routers have their addresses configured manually, hosts can be
assigned addresses manually, using an assignment protocol like DHCP, or automatically using some sort of algorithm. There are two forms of automatic assignment, depending on what type of address is being formed. For addresses that are
to be used only on a single link (link-local addresses), a host need only find some
appropriate address not already in use on the link. For addresses that are to be
used for global connectivity, however, some portion of the address must generally
be managed. There are mechanisms in both IPv4 and IPv6 for link-local address
autoconfiguration, whereby a host determines its address(es) largely without help.
This is called stateless address autoconfiguration (SLAAC).
Dynamic Configuration of IPv4 Link-Local Addresses
In cases where a host without a manually configured address attaches to a network
lacking a DHCP server, IP-based communication is unable to take place unless
the host somehow generates an IP address to use. [RFC3927] describes a mechanism whereby a host can automatically generate its own IPv4 address from the
link-local range 169.254/16 (excluding the first and last 256 addresses), using the 16-bit subnet mask 255.255.0.0 (see [RFC5735]). This method is known as dynamic link-local address
configuration or Automatic Private IP Addressing (APIPA). In essence, a host selects
a random address in the range to use and checks to see if that address is already
in use by some other system on the subnetwork. This check is implemented using
IPv4 ACD (see Chapter 4).
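The selection step can be sketched as follows, assuming the conflict check itself (ARP-based ACD) is performed separately; the function name is illustrative:

```python
import random

def pick_apipa_candidate(rng=random):
    """Choose a candidate IPv4 link-local address.

    [RFC3927] reserves the first and last 256 addresses of
    169.254/16, so the third octet is drawn from 1..254.  The
    candidate is only tentative: it must still be probed for
    conflicts with ARP-based ACD before being used.
    """
    return '169.254.%d.%d' % (rng.randint(1, 254), rng.randint(0, 255))
```

If ACD reports a conflict, the host simply picks another candidate and probes again.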
IPv6 SLAAC for Link-Local Addresses
The goal of IPv6 SLAAC is to allow nodes to automatically (and autonomously)
self-assign link-local IPv6 addresses. IPv6 SLAAC is described in [RFC4862]. It
involves three major steps: obtaining a link-local address, obtaining a global
address using stateless autoconfiguration, and detecting whether the link-local
address is already in use on the link. Stateless autoconfiguration can be used without routers, in which case only link-local addresses are assigned. When routers are
present, a global address is formed using a combination of the prefix advertised
by a router and locally generated information. SLAAC can also be used in conjunction with DHCPv6 (or manual address assignment) to allow a host to obtain
information in addition to its address (called “stateless” DHCPv6). Hosts that perform SLAAC can be used on the same network as those configured using stateful
or stateless DHCPv6. Generally, stateful DHCPv6 is used when finer control is
required in assigning address to hosts, but it is expected that stateless DHCPv6 in
combination with SLAAC will be the most common deployment option.
In IPv6, tentative (or optimistic) link-local addresses are selected using procedures specified in [RFC4291] and [RFC4941]. They apply only to multicast-capable
networks and are assigned infinite preferred and valid lifetimes once established.
To form the numeric address, a unique number is appended to the well-known
link-local prefix fe80::0 (of appropriate length). This is accomplished by setting
the right-most N bits of the address to be equal to the (N-bit-long) number, the
left-most bits equal to the 10-bit link-local prefix 1111111010, and the rest to 0. The
resulting address is placed into the tentative (or optimistic) state and checked for
duplicates (see the next section).
IPv6 Duplicate Address Detection (DAD)
IPv6 DAD uses ICMPv6 Neighbor Solicitation and Neighbor Advertisement messages (see Chapter 8) to determine if a particular (tentative or optimistic) IPv6
address is already in use on the attached link. For purposes of this discussion,
we refer only to tentative addresses, but it is understood that DAD applies to optimistic addresses as well. DAD is specified in [RFC4862] and is recommended to
be used every time an IPv6 address is assigned to an interface manually, using
autoconfiguration, or using DHCPv6. If a duplicate address is discovered, the procedure causes the tentative address to not be used. If DAD succeeds, the tentative
address transitions to the preferred state and can be used without restriction.
DAD is performed as follows: A node first joins the All Nodes multicast address
and the Solicited-Node multicast address of the tentative address (see Chapter 9).
To check whether an address is already in use, a node sends one or more ICMPv6 Neighbor Solicitation messages. The source and destination IPv6 addresses of these messages are the unspecified address and the Solicited-Node address of the target address
being checked, respectively. The Target Address field is set to the address being
checked (the tentative address). If a Neighbor Advertisement message is received
in response, DAD has failed, and the address being checked is abandoned.
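The address formation and DAD addressing just described can be illustrated with Python's ipaddress module; the 64-bit interface identifier below is illustrative:

```python
import ipaddress

def link_local_from_iid(iid64):
    """Form a tentative link-local address: the fe80::/64 prefix with
    a 64-bit interface identifier in the low-order bits."""
    return ipaddress.IPv6Address((0xfe80 << 112) | iid64)

def solicited_node(addr):
    """Solicited-Node multicast address: ff02::1:ff00:0/104 plus the
    low-order 24 bits of the unicast address (see Chapter 9)."""
    low24 = int(ipaddress.IPv6Address(str(addr))) & 0xFFFFFF
    base = int(ipaddress.IPv6Address('ff02::1:ff00:0'))
    return ipaddress.IPv6Address(base | low24)

# A DAD Neighbor Solicitation for a tentative address is addressed as:
#   source      = ::  (the unspecified address)
#   destination = solicited_node(tentative)
#   target      = tentative
tentative = link_local_from_iid(0xfd26de935ab7405a)  # illustrative IID
```

Note that a global address formed from the same interface identifier shares the same low-order 24 bits, and therefore the same Solicited-Node multicast address.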
As a consequence of joining multicast groups, MLD messages are sent (see
Chapter 9), but their transmission is delayed by a random interval according to
[RFC4862] to avoid congesting the network when many nodes simultaneously
join the All Hosts group (e.g., after a restoration of power). For DAD, these MLD
messages are used to inform MLD-snooping switches to forward multicast traffic
as necessary.
When an address has not yet successfully completed DAD, any received
neighbor solicitations for it are treated in a special way, as this is indicative of
some other host’s intention to use the same address. If such messages are received,
they are dropped, the current tentative address is abandoned, and DAD fails.
If DAD fails, by receiving a similar neighbor solicitation from another node
or a neighbor advertisement for the target address, the address is not assigned to
an interface and does not become a preferred address. If the address is a link-local
address being configured based on an interface identifier derived from a local
MAC address, it is unlikely that the same procedure will ultimately produce a
nonconflicting address, so the use of this address is abandoned and administrator
input is required. If the address is based on a different form of interface identifier, IPv6 operations may be retried using another address based on an alternative
tentative address.
IPv6 SLAAC for Global Addresses
Once a node has acquired a link-local address, it is likely to require one or more
global addresses as well. Global addresses are formed using a process similar to
that for link-local SLAAC but using a prefix provided by a router. Such prefixes
are carried in the Prefix option of a router advertisement (see Chapter 8), and a
flag indicates whether the prefix should be used in forming global addresses with
SLAAC. If so, the prefix is combined with an interface identifier (e.g., the same one
used in forming a link-local address if the privacy extension is not being used) to
form a global address. The preferred and valid lifetimes of such addresses are also
determined by information present in the Prefix option.
Example
The trace in Figure 6-23 shows the series of events an IPv6 (Windows Vista/SP1)
host uses when allocating its addresses with SLAAC. The system first selects a
link-local address based on the link-local prefix of fe80::/64 and a random number.
This method is designed to enhance the privacy of a user by making the address of
the host system change over time [RFC4941]. The other common method involves
using the bits of the MAC address in forming the link-local address. It performs
DAD on this address (fe80::fd26:de93:5ab7:405a) to look for conflicts.
Figure 6-23 During SLAAC, a host begins by performing DAD on the tentative link-local address it wishes to use by sending an ICMPv6 Neighbor Solicitation message for this address from the unspecified address.
Figure 6-23 shows the operation of DAD, which involves the host sending an
NS to see if its selected link-local address is in use. It then quickly performs an RS
to determine how to proceed (see Figure 6-24).
Figure 6-24 The ICMPv6 RS message induces a nearby router to supply configuration information such as the
global network prefix in use on the attached network.
The Router Solicitation message shown in Figure 6-24 is sent to the All Routers multicast address (ff02::2) using the autoconfigured link-local IPv6 address as
a source address. The response is given in an RA sent to the All Systems multicast
address (ff02::1), so that all attached systems can see it (see Figure 6-25).
The RA shown in Figure 6-25 is sent from fe80::204:5aff:fe9f:9e80, the link-local address of the router, to the All Systems multicast address ff02::1. The Flags
field in the RA, which may contain several configuration options and extensions
[RFC5175], is set to 0, indicating that addresses are not “managed” on this link
by DHCPv6. The Prefix option indicates that the global prefix 2001:db8::/64 is
in use on the link. The prefix length of 64 is not carried but is instead defined
according to [RFC4291]. The Flags field value of 0xc0 associated with the Prefix option indicates that the prefix is on-link (can be used in conjunction with a
router) and the auto flag is set, meaning that the prefix can be used by the host
to configure other addresses automatically. It also includes the Recursive DNS
Server (RDNSS) option [RFC6106], which indicates that a DNS server is available
at the address 2001:db8::1. The SLLAO indicates that the router’s MAC address is
00:04:5a:9f:9e:80. This information is made available for any node to populate its
neighbor cache (the IPv6 equivalent of the IPv4 ARP cache; Neighbor Discovery is
discussed in Chapter 8).
Figure 6-25 An ICMPv6 RA message provides the location and availability of a default router plus the global
address prefix in use on the network. It also includes the location of a DNS server and indicates
whether the router sending the advertisement can also act as a Mobile IPv6 home agent (no in this
case). The client may use some or all of this information in configuring its operation.
After an exchange of Neighbor Solicitation and Neighbor Advertisement messages between the client and the router, the client performs another DAD operation on the new (global) address it selects (see Figure 6-26).
The address 2001:db8::fd26:de93:5ab7:405a has been chosen by the client
based on the prefix 2001:db8::/64 carried in the router advertisement it received earlier. The low-order bits of this address are based on the same random number as
was used to configure its link-local address. As such, the Solicited-Node multicast
address ff02::1:ffb7:405a is the same for DAD for both addresses. After this address
has been tested for duplication, the client allocates another address and applies
DAD to it (see Figure 6-27).
Figure 6-26 DAD for the global address derived from the prefix 2001:db8::/64 is sent to the same
Solicited-Node multicast address as the first packet.
Figure 6-27 DAD for the address 2001:db8::9cf4:f812:816d:5c97.
The DAD operation in Figure 6-27 is for the address 2001:db8::9cf4:f812:816d:
5c97. This address is a temporary IPv6 address, generated using a different random number for its lower-order bits for privacy reasons. The difference between
the two global addresses here is that the temporary address has a shorter lifetime.
Lifetimes are computed as the lower (smaller) of the following two values: the lifetimes included in the Prefix Information option received in the RA and a local pair
of defaults. In the case of Windows Vista, the default valid lifetime is one week and
the default preferred lifetime is one day. Once this message has completed, the client has performed SLAAC for its link-local address, plus two global addresses.
This is enough addressing information to perform local or global communication.
The temporary address will change periodically to help enhance privacy. In cases
where privacy protection is not desired, the following command can be employed
to disable this feature in Windows:
C:\> netsh interface ipv6 set privacy state=disabled
In Linux, temporary addresses can be enabled using this set of commands:
Linux# sysctl -w net.ipv6.conf.all.use_tempaddr=2
Linux# sysctl -w net.ipv6.conf.default.use_tempaddr=2
and disabled using these commands:
Linux# sysctl -w net.ipv6.conf.all.use_tempaddr=0
Linux# sysctl -w net.ipv6.conf.default.use_tempaddr=0
Stateless DHCP
We have mentioned that DHCPv6 can be used in a “stateless” mode where the
DHCPv6 server does not assign addresses (or keep any per-client state) but
does provide other configuration information. Stateless DHCPv6 is specified in
[RFC3736] and combines SLAAC with DHCPv6. It is believed that this combination is an attractive deployment option because network administrators need
not be directly concerned with address pools as they have been when deploying stateful DHCP.
In a stateless DHCPv6 deployment, nodes are assumed to have obtained their
addresses using some method other than DHCPv6. Thus, the DHCPv6 server does
not need to handle any of the address management messages specified in Table
6-1. In addition, it does not need to handle any of the options required for establishing IA bindings. This simplifies the server software and server configuration
considerably. The operation of relay agents is unchanged.
Stateless DHCPv6 clients use the DHCPv6 INFORMATION-REQUEST message to request information that is provided in REPLY messages from servers. The
INFORMATION-REQUEST message includes an Option Request option listing
the options about which the client wishes to know more. The INFORMATION-REQUEST may include a Client Identifier option, which allows answers to be customized for particular clients.
To be a compliant stateless DHCPv6 server, a system must implement the following messages: INFORMATION-REQUEST, REPLY, RELAY-FORW, and RELAY-REPL. It also must implement the following options: Option Request, Status Code,
Server Identifier, Client Message, Server Message, Interface-ID. The last three
are used when relay agents are involved. To be a useful stateless DHCPv6 server,
several other options will likely be necessary: DNS Server, DNS Search List, and
possibly SIP Servers. Other potentially useful, but not required, options include
Preference, Elapsed Time, User Class, Vendor Class, Vendor-Specific Information,
Client Identifier, and Authentication.
The Utility of Address Autoconfiguration
The utility of address autoconfiguration for IP is typically limited because routers
that may be on the same network as the client are configured with particular IP
address ranges in use that differ from the addresses a client is likely to autoconfigure. This is especially true for the IPv4 (APIPA) case, as the private link-local prefix
169.254/16 is very unlikely to be used by a router. Therefore, the consequence of
self-assigning an IP address is that local subnet access may work, but Internet
routing and name services (DNS) are likely to fail. When DNS fails, much of the
common Internet “experience” fails with it. Thus, it is often more useful to have a
client fail to get an IP address (which is relatively easily detected) than to allow it
to obtain one that cannot really be used effectively.
There are name services other than conventional DNS that may be of use for
link-local addressing, including Bonjour/ZeroConf (Apple), LLMNR, and NetBIOS
(Microsoft). Because these have evolved over time from different vendors, and
are not established IETF standards, the exact behavior involved when mapping
names to addresses in the local environment varies considerably. See Chapter 11
for more details on local alternatives to DNS.
The use of APIPA can be disabled, which prevents a system from self-assigning an IP address. In Windows, this is accomplished by creating the following
registry key (the key is a single line but is wrapped here for illustration):
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\
Parameters\IPAutoconfigurationEnabled
This REG_DWORD value may be set to 0 to disable APIPA for all network interfaces. In Linux, the file /etc/sysconfig/network can be modified to include
the following directive:
NOZEROCONF=yes
This disables the use of APIPA for all network interfaces. It is also possible to
disable APIPA for specific interfaces by modifying the per-interface configuration files (e.g., /etc/sysconfig/network-scripts/ifcfg-eth0 for the first
Ethernet device).
In the case of IPv6 SLAAC, it is relatively easy to obtain a global IPv6 address,
but the relationship between a name and its address is not secured, leading to a
potential set of unpleasant consequences (see Chapters 11 and 18). Thus, it may
still be desirable to avoid SLAAC in deployments for the time being. To disable
SLAAC for IPv6 global addresses, there are two methods. First, the Router Advertisement messages provided by the local router can be arranged to turn off the
“auto” flag in the Prefix option (or configure it to not provide a Prefix option, as
illustrated in the preceding example). In addition, a local configuration setting
causes a client to avoid autoconfiguration of global addresses.
To disable SLAAC in a Linux client, the following command may be given:
Linux# sysctl -w net.ipv6.conf.all.autoconf=0
To do so on a Mac OS or FreeBSD system, at least for link-local addresses, the following command should be used:
FreeBSD# sysctl -w net.inet6.ip6.auto_linklocal=0
And, finally, for Windows:
C:\> netsh
netsh> interface ipv6
netsh interface ipv6> set interface {ifname} managedaddress=disabled
where {ifname} should be replaced with the appropriate interface name (e.g.,
“Wireless Network Connection”). Note that the behavior of these
configuration commands sometimes changes over time. Please check the operating system documentation for the current method if these changes do not perform
as expected.
6.4 DHCP and DNS Interaction
One of the important parts of the configuration information a DHCP client typically receives when obtaining an IP address is the IP address of a DNS server. This
allows the client system to convert DNS names to the IPv4 and/or IPv6 addresses
required by the protocol implementation to make transport-layer connections.
Without a DNS server or other way to map names to addresses, most users would
find the system nearly useless for accessing the Internet. If the local DNS is working properly, it should be able to provide address mappings for the Internet as a
whole, but also for local private networks (like .home mentioned earlier), if properly configured.
Because DNS mappings for local private networks are cumbersome to manage
by hand, it is convenient to couple the act of providing a DHCP-assigned address
with a method for updating the DNS mappings corresponding to that address.
This can be done either using a combined DHCP/DNS server or with dynamic DNS
(see Chapter 11).
A combined DNS/DHCP server (such as the Linux dnsmasq package) is a
server program that can be configured to give out IP address leases and other
information but that also reads the Client Identifier or Domain Name present in a
DHCPREQUEST and updates an internal DNS database with the name-to-address
binding before responding with the DHCPACK. In doing so, any subsequent DNS
requests initiated either by the DHCP client or by other systems interacting with
the same DNS server are able to convert between the name of the client and its
freshly assigned IP address.
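The coupling just described can be sketched as follows; this is an illustrative model of the behavior, not dnsmasq's actual internals:

```python
class MiniDhcpDns:
    """Sketch of a combined DHCP/DNS server in the style of dnsmasq:
    granting a lease also installs the name-to-address mapping, so
    later DNS queries resolve the client's fresh address."""

    def __init__(self):
        self.dns = {}                 # hostname -> IP address

    def handle_dhcprequest(self, hostname, offered_ip):
        # Update the internal DNS database before the DHCPACK is sent.
        self.dns[hostname] = offered_ip
        return ('DHCPACK', offered_ip)

    def resolve(self, hostname):
        return self.dns.get(hostname)

srv = MiniDhcpDns()
srv.handle_dhcprequest('laptop.home', '10.0.0.23')
srv.resolve('laptop.home')   # '10.0.0.23'
```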
6.5 PPP over Ethernet (PPPoE)
For most LANs and some WAN connections, DHCP provides the most common method for configuring client systems. For WAN connections such as DSL,
another method based on PPP is often used instead. This method involves carrying PPP on Ethernet and is called PPP over Ethernet (PPPoE). PPPoE is used in cases
where the WAN connection device (e.g., DSL modem) acts as a switch or bridge
instead of a router. PPP is preferred as a basis for establishing connectivity by
some ISPs because it may provide finer-grain configuration control and audit logs
than other configuration options such as DHCP. To provide Internet connectivity,
some device such as a user’s PC must implement the IP routing and addressing
functions. Figure 6-28 shows the typical use case.
Figure 6-28
A simplified view of DSL service using PPPoE as provided to a customer. The home PC
implements the PPPoE protocol and authenticates the subscriber with the ISP. It may
also act as a router, DHCP server, DNS server, and/or NAT device for the home LAN.
Section 6.5 PPP over Ethernet (PPPoE)
The figure shows an ISP providing services to many customers using DSL.
DSL provides a point-to-point digital link that can operate simultaneously with a
conventional analog telephone line (called plain old telephone service or POTS). This
simultaneous use of the customer’s physical phone wires is accomplished using
frequency division multiplexing—the DSL information is carried on higher frequencies than POTS. A filter is required when attaching conventional telephone
handsets to avoid interference from the higher DSL frequencies. The DSL modem
effectively provides a bridged service to a PPP port on the ISP’s access concentrator
(AC), which interconnects the customer’s modem line and the ISP’s networking
equipment. The modem and AC also support the PPPoE protocol, which the user
has elected in this example to configure on a home PC attached to the DSL modem
using a point-to-point Ethernet network (i.e., an Ethernet LAN using only a single pair of stations).
Once the DSL modem has successfully established a low-layer link with the
ISP, the PC can begin the PPPoE exchange, as defined in the informational document [RFC2516] and shown in Figure 6-29.
Figure 6-29
The PPPoE message exchange starts in a Discovery stage and establishes a PPP Session
stage. Each message is a PAD message. PADI requests responses from PPPoE servers.
PADO offers connectivity. PADR expresses the client’s selection among multiple possible servers. PADS provides an acknowledgment to the client from the selected server.
After the PAD exchanges, a PPP session begins. The PPP session can be terminated by
either side sending a PADT message or when the underlying link fails or is shut down.
The protocol includes a Discovery phase and a PPP Session phase. The Discovery phase involves the exchange of several PPPoE Active Discovery (PAD) messages:
PADI (Initiation), PADO (Offer), PADR (Request), PADS (Session Confirmation).
Once the exchange is complete, an Ethernet-encapsulated PPP session proceeds
and ultimately concludes with either side sending a PADT (Termination) message.
The session also concludes if the underlying connection is broken. PPPoE messages use the format shown in Figure 6-30 and are encapsulated in the Ethernet
payload area.
Figure 6-30
PPPoE messages are carried in the payload area of Ethernet frames. The Ethernet Type
field is set to 0x8863 during the Discovery phase and 0x8864 when carrying PPP session
data. For PAD messages, a TLV scheme is used for carrying configuration information,
similar to DHCP options. The PPPoE Session ID is chosen by the server and conveyed
in the PADS message.
In Figure 6-30, the PPPoE Ver and Type fields are both 4 bits long and contain
the value 0x1 for the current version of PPPoE. The Code field contains an indication of the PPPoE message type, as shown in the lower right part of Figure 6-30.
The Session ID field contains the value 0x0000 for PADI, PADO, and PADR messages and contains a unique 16-bit number in subsequent messages. The same
value is maintained during the PPP Session phase. PAD messages contain one
or more tags, which are TLVs arranged as a 16-bit TAG_TYPE field followed by a
16-bit TAG_LENGTH field and a variable amount of tag value data. The values and
meanings of the TAG_TYPE field are given in Table 6-2.
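A minimal sketch of building and parsing this message format, assuming the RFC 2516 layout just described (a Ver/Type byte of 0x11, then Code, Session ID, and Length, followed by TLV tags; 0x09 is the PADI code):

```python
import struct

# A few TAG_TYPE constants from RFC 2516
TAG_END_OF_LIST = 0x0000
TAG_SERVICE_NAME = 0x0101
TAG_HOST_UNIQ = 0x0103

def build_pad(code, session_id, tags):
    """Build a PPPoE PAD message: Ver=1 and Type=1 share one byte
    (0x11), then Code, Session ID, Length, and the TLV tags."""
    payload = b"".join(struct.pack("!HH", t, len(v)) + v for t, v in tags)
    return struct.pack("!BBHH", 0x11, code, session_id, len(payload)) + payload

def parse_pad(frame):
    """Parse the 6-byte header, then walk the TAG_TYPE/TAG_LENGTH/value
    tags in the payload."""
    vt, code, session_id, length = struct.unpack("!BBHH", frame[:6])
    assert vt == 0x11  # Ver and Type are each 0x1 for current PPPoE
    tags, off = [], 6
    while off < 6 + length:
        t, l = struct.unpack("!HH", frame[off:off + 4])
        tags.append((t, frame[off + 4:off + 4 + l]))
        off += 4 + l
    return code, session_id, tags

# A PADI (code 0x09) carries Session ID 0x0000 and, here, a Host-Uniq
# tag matching the value seen in the trace of Figure 6-31.
padi = build_pad(0x09, 0x0000, [(TAG_HOST_UNIQ, b"\x9c\x3a\x00\x00"),
                                (TAG_SERVICE_NAME, b"")])
```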
Table 6-2 PPPoE TAG_TYPE values, names, and purposes. PAD messages may contain one or more tags.

TAG_TYPE  Tag Name            Purpose
0x0000    End-Of-List         Indicates that no further tags are present. TAG_LENGTH must be 0.
0x0101    Service-Name        Contains a UTF-8-encoded service name (for ISP use).
0x0102    AC-Name             Contains a UTF-8-encoded string identifying the access concentrator.
0x0103    Host-Uniq           Binary data used by client to match messages; not interpreted by AC.
0x0104    AC-Cookie           Binary data used by AC for DoS protection; echoed by client.
0x0105    Vendor-Specific     Not recommended; see [RFC2516] for details.
0x0110    Relay-Session-Id    May be added by a relay relaying PAD traffic.
0x0201    Service-Name-Error  The requested Service-Name tag cannot be honored by the AC.
0x0202    AC-System-Error     The AC experienced an error in performing a requested action.
0x0203    Generic-Error       Contains a UTF-8 string describing an unrecoverable error.
To see PPPoE in action, we can monitor the exchange between a home system
such as the home PC from Figure 6-28 and an access concentrator. The Discovery
phase and first PPP session packet are shown in Figure 6-31.
Figure 6-31 shows the expected exchange of PADI, PADO, PADR, and PADS
messages. Each contains the Host-Uniq tag with value 9c3a0000. Messages coming
from the concentrator also include the value 90084090400368-rback37.snfcca in the
AC-Name tag. The PADS message can be seen in more detail in Figure 6-32.
In Figure 6-32, the PADS message indicates the establishment of a PPP session for the client and the use of the session ID 0xecbd. The AC-Name tag is also
maintained to indicate the originating AC. The Discovery phase is now complete,
and a regular PPP session (see Chapter 3) can commence. Figure 6-33 shows the
first PPP session packet.
The figure indicates the beginning of the PPP Session phase within the PPPoE
exchange. The PPP session begins with link configuration (PPP LCP) by the client
sending a Configuration Request (see Chapter 3). It indicates that the client wishes
to use the Password Authentication Protocol, a relatively insecure method, for
authenticating itself to the AC. Once the authentication exchange is complete and
various link parameters are exchanged (e.g., MRU), IPCP is used to obtain and
configure the assigned IP address. Note that additional configuration information
(e.g., IP addresses of the ISP’s DNS servers) may need to be obtained separately
and, depending on the ISP’s configuration, configured by hand.
Figure 6-31 The PPPoE exchange begins with a PADI message sent to the Ethernet broadcast address. Subsequent messages use unicast addressing. In this
exchange, only the Host-Uniq and AC-Name tags are used. The PPP session begins with the fifth packet, which begins a PPP link configuration
exchange that ultimately assigns the system’s IPv4 address using the IPCP (see Chapter 3).
Figure 6-32 The PPPoE PADS message confirms the association between the client and the access concentrator.
This message also defines the session ID as 0xecbd, which is used in subsequent PPP session packets.
Figure 6-33 The first PPP message of the PPPoE session is a Configuration Request. The Ethernet type has changed
to 0x8864 to indicate an active PPP session, and the Session ID is set to 0xecbd. In this case, the PPP
client wishes to authenticate using the (relatively insecure) Password Authentication Protocol.
Attacks Involving System Configuration
A wide variety of attacks can be mounted relating to system and network configuration. They range from deploying unauthorized clients or unauthorized servers
that interfere with DHCP to various forms of DoS attacks that involve resource
exhaustion, such as requesting all possible IP addresses a server may have to give
out. Many of these problems are widespread because the older IPv4-based protocols used for address configuration were designed for networks where trust was
assumed, and the newer ones have seen little deployment to date. (Secured deployments are even rarer.) Therefore, none of these attacks are directly addressed by
typical DHCP deployments, although link-layer authentication (e.g., WPA2 as
used with Wi-Fi networks) helps to limit the number of unauthorized clients that
are able to attach to a particular network.
An effort is under way within the IETF to provide security for IPv6 Neighbor
Discovery, which, when or if it is deployed, would directly impact the security
of operating networks using SLAAC. The trust and threat assumptions are outlined in [RFC3756] from 2004, and the Secure Neighbor Discovery (SEND) protocol
is defined in [RFC3971]. SEND applies IPsec (see Chapter 18) to Neighbor Discovery packets, in combination with cryptographically generated addresses (CGAs)
[RFC3972]. Such addresses are derived from a keyed hash function, so they can be
generated only by a system holding the appropriate key material.
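The CGA idea can be illustrated with a simplified sketch. The real RFC 3972 algorithm also involves a modifier, a collision count, and a Sec parameter, all omitted here for brevity:

```python
import hashlib

def cga_interface_id(public_key: bytes, prefix: bytes) -> bytes:
    """Simplified CGA-style derivation: hash the key material together
    with the 64-bit prefix and take 64 bits as the interface identifier.
    A verifier holding the public key can recompute the same hash to
    check the claimed address. (RFC 3972 adds a modifier, collision
    count, and Sec parameter, omitted here.)"""
    digest = hashlib.sha1(prefix + public_key).digest()
    return digest[:8]  # 64-bit interface identifier

iid = cga_interface_id(b"example-public-key", bytes.fromhex("20010db800000000"))
```

The key property shown is determinism: the same key and prefix always yield the same identifier, while a different key yields a different one, so an attacker without the key material cannot claim the address.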
A basic set of configuration information is required for a host or router to operate
on the Internet or on a private network using Internet protocols. At a minimum,
routers typically require the assignment of addressing information, whereas hosts
require addresses, a next-hop router, and the location of a DNS server. DHCP is
available for both IPv4 and IPv6, but the two are not directly interoperable.
DHCP allows appropriately configured servers to lease one or more addresses to
requesting clients for a defined period of time. Clients renew their leases if they
require ongoing use. DHCP can also be used by the client to acquire additional
information, such as the subnet mask, default routers, vendor-specific configuration information, DNS server, home agents, and default domain name. DHCP can
be used through relay agents when a client and server are located on different networks. Several extensions to DHCP allow for additional information to be carried
between a relay agent and server when this is used. DHCPv6 can also be used to
delegate a range of IPv6 address space to a router.
With IPv6, a host typically uses multiple addresses. An IPv6 client is able to
generate its link-local address autonomously by combining a special link-local
IPv6 prefix with other local information such as bits derived from one of its
MAC addresses or from a random number to help promote privacy. To form a global address, it can obtain a global address prefix from either ICMP Router
Advertisement messages or from a DHCPv6 server. DHCPv6 servers may operate
in a “stateful” mode, in which they lease IPv6 addresses to requesting clients, or a
“stateless” mode, in which they provide configuration information other than the addresses themselves.
PPPoE carries PPP messages over Ethernet to establish Internet connectivity with ISPs, especially those ISPs that provide service using DSL. When using
PPPoE, a user usually has a DSL modem with an Ethernet port acting as a bridge
or switch. PPPoE first exchanges a set of Discovery messages to determine the
identity of an access concentrator and establish a PPP session. After the Discovery
phase is successfully completed, PPP traffic, which can be encapsulated in Ethernet and carry various protocols such as IP, may continue until the PPPoE association is terminated, either intentionally or as a result of disconnection of the
underlying link. When PPPoE is used, the PPP protocol’s configuration capabilities such as IPCP (discussed in Chapter 3) are ultimately responsible for assigning
the IP address to the client system.
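The autonomous link-local address formation summarized earlier (combining the fe80::/64 prefix with bits derived from a MAC address) can be sketched using the modified EUI-64 method; the helper name is illustrative:

```python
def link_local_from_mac(mac: str) -> str:
    """Form an IPv6 link-local address from a 48-bit MAC using the
    modified EUI-64 method: insert the bytes ff:fe in the middle,
    invert the universal/local bit of the first byte, and prepend
    the fe80::/64 link-local prefix."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    groups = [f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

addr = link_local_from_mac("00:11:22:33:44:55")
```

The privacy-oriented alternative mentioned above simply substitutes random bits for the MAC-derived interface identifier.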
DHCP and the ICMPv6 router advertisements used with IPv6 stateless autoconfiguration are ordinarily deployed without security mechanisms being applied
to them. Because of this, they are susceptible to a number of attacks, including network access by unauthorized clients, operation of rogue DHCP servers that give
out bogus addresses and cause various forms of denial of service, and resource
exhaustion attacks in which a client may request more addresses than are available. Most of these attacks can be mitigated by security mechanisms that have
been added to DHCP such as DHCP authentication and the relatively recent SEND
protocol. However, these are not commonly found in operation today.
[802.21-2008] “IEEE Standard for Local and Metropolitan Area Networks—Part
21: Media Independent Handover Services,” Nov. 2008.
[F07] R. Faas, “Hands On: Configuring Apple’s NetBoot Service, Part 1,” Computerworld, Sept. 2007.
[GC89] C. Gray and D. Cheriton, “Leases: An Efficient Fault-Tolerant Mechanism
for Distributed File Cache Consistency,” Proc. ACM Symposium on Operating System Principles (SOSP), 1989.
[ID4LQ] K. Kinnear, B. Volz, M. Stapp, D. Rao, B. Joshi, N. Russell, and P. Kurapati,
“Bulk DHCPv4 Lease Query,” Internet draft-ietf-dhc-dhcpv4-bulk-leasequery,
work in progress, Apr. 2011.
[ID4RI] B. Joshi, R. Rao, and M. Stapp, “The DHCPv4 Relay Agent Identifier Suboption,” Internet draft-ietf-dhc-relay-id-suboption, work in progress, June 2011.
[IDDN] G. Daley, E. Nordmark, and N. Moore, “Tentative Options for LinkLayer Addresses in IPv6 Neighbor Discovery,” Internet draft-ietf-dna-tentative
(expired), work in progress, Oct. 2009.
[IDL2RA] B. Joshi and P. Kurapati, “Layer 2 Relay Agent Information,” Internet
draft-ietf-dhc-l2ra, work in progress, Apr. 2011.
[MKB928233] Microsoft Knowledge Base Article 928233 at http://support
[MS-DHCPN] Microsoft Corporation, “[MS-DHCPN]: Dynamic Host Configuration Protocol (DHCP) Extensions for Network Access Protection (NAP),” http://, Oct. 2008.
[RFC0826] D. Plummer, “Ethernet Address Resolution Protocol: Or Converting
Network Protocol Addresses to 48.bit Ethernet Address for Transmission on
Ethernet Hardware,” Internet RFC 0826/STD 0037, Nov. 1982.
[RFC0951] W. J. Croft and J. Gilmore, “Bootstrap Protocol,” Internet RFC 0951,
Sept. 1985.
[RFC1542] W. Wimer, “Clarifications and Extensions for the Bootstrap Protocol,”
Internet RFC 1542, Oct. 1993.
[RFC2131] R. Droms, “Dynamic Host Configuration Protocol,” Internet RFC 2131,
Mar. 1997.
[RFC2132] S. Alexander and R. Droms, “DHCP Options and BOOTP Vendor
Extensions,” Internet RFC 2132, Mar. 1997.
[RFC2241] D. Provan, “DHCP Options for Novell Directory Services,” Internet
RFC 2241, Nov. 1997.
[RFC2242] R. Droms and K. Fong, “NetWare/IP Domain Name and Information,” Internet RFC 2242, Nov. 1997.
[RFC2516] L. Mamakos, K. Lidl, J. Evarts, D. Carrel, D. Simone, and R. Wheeler,
“A Method for Transmitting PPP over Ethernet (PPPoE),” Internet RFC 2516
(informational), Feb. 1999.
[RFC2563] R. Troll, “DHCP Option to Disable Stateless Auto-Configuration in
IPv4 Clients,” Internet RFC 2563, May 1999.
[RFC2937] C. Smith, “The Name Service Search Option for DHCP,” Internet RFC
2937, Sept. 2000.
Section 6.8 References
[RFC3004] G. Stump, R. Droms, Y. Gu, R. Vyaghrapuri, A. Demirtjis, B. Beser, and
J. Privat, “The User Class Option for DHCP,” Internet RFC 3004, Nov. 2000.
[RFC3011] G. Waters, “The IPv4 Subnet Selection Option for DHCP,” Internet RFC
3011, Nov. 2000.
[RFC3046] M. Patrick, “DHCP Relay Agent Information Option,” Internet RFC
3046, Jan. 2001.
[RFC3118] R. Droms and W. Arbaugh, eds., “Authentication of DHCP Messages,”
Internet RFC 3118, June 2001.
[RFC3203] Y. T’Joens, C. Hublet, and P. De Schrijver, “DHCP Reconfigure Extension,” Internet RFC 3203, Dec. 2001.
[RFC3315] R. Droms, ed., J. Bound, B. Volz, T. Lemon, C. Perkins, and M. Carney,
“Dynamic Host Configuration Protocol for IPv6 (DHCPv6),” Internet RFC 3315,
July 2003.
[RFC3396] T. Lemon and S. Cheshire, “Encoding Long Options in the Dynamic
Host Configuration Protocol (DHCPv4),” Internet RFC 3396, Nov. 2002.
[RFC3442] T. Lemon, S. Cheshire, and B. Volz, “The Classless Static Route Option
for Dynamic Host Configuration Protocol (DHCP) Version 4,” Internet RFC 3442,
Dec. 2002.
[RFC3633] O. Troan and R. Droms, “IPv6 Prefix Options for Dynamic Host Configuration Protocol (DHCP) Version 6,” Internet RFC 3633, Dec. 2003.
[RFC3646] R. Droms, ed., “DNS Configuration Options for Dynamic Host Configuration Protocol for IPv6 (DHCPv6),” Internet RFC 3646, Dec. 2003.
[RFC3693] J. Cuellar, J. Morris, D. Mulligan, J. Peterson, and J. Polk, “Geopriv
Requirements,” Internet RFC 3693 (informational), Feb. 2004.
[RFC3736] R. Droms, “Stateless Dynamic Host Configuration Protocol (DHCP)
Service for IPv6,” Internet RFC 3736, Apr. 2004.
[RFC3756] P. Nikander, ed., J. Kempf, and E. Nordmark, “IPv6 Neighbor Discovery (ND) Trust Models and Threats,” Internet RFC 3756 (informational), May 2004.
[RFC3925] J. Littlefield, “Vendor-Identifying Vendor Options for Dynamic Host
Configuration Protocol Version 4 (DHCPv4),” Internet RFC 3925, Oct. 2004.
[RFC3927] S. Cheshire, B. Aboba, and E. Guttman, “Dynamic Configuration of IPv4 Link-Local Addresses,” Internet RFC 3927, May 2005.
[RFC3971] J. Arkko, ed., J. Kempf, B. Zill, and P. Nikander, “SEcure Neighbor Discovery (SEND),” Internet RFC 3971, Mar. 2005.
[RFC3972] T. Aura, “Cryptographically Generated Addresses (CGA),” Internet
RFC 3972, Mar. 2005.
[RFC4030] M. Stapp and T. Lemon, “The Authentication Suboption for the
Dynamic Host Configuration Protocol (DHCP) Relay Agent Option,” Internet
RFC 4030, Mar. 2005.
[RFC4039] S. Park, P. Kim, and B. Volz, “Rapid Commit Option for the Dynamic
Host Configuration Protocol Version 4 (DHCPv4),” Internet RFC 4039, Mar. 2005.
[RFC4174] C. Monia, J. Tseng, and K. Gibbons, “The IPv4 Dynamic Host Configuration Protocol (DHCP) Option for the Internet Storage Name Service,” Internet
RFC 4174, Sept. 2005.
[RFC4280] K. Chowdhury, P. Yegani, and L. Madour, “Dynamic Host Configuration Protocol (DHCP) Options for Broadcast and Multicast Control Servers,”
Internet RFC 4280, Nov. 2005.
[RFC4291] R. Hinden and S. Deering, “IP Version 6 Addressing Architecture,”
Internet RFC 4291, Feb. 2006.
[RFC4361] T. Lemon and B. Sommerfeld, “Node-Specific Client Identifiers for
Dynamic Host Configuration Protocol Version Four (DHCPv4),” Internet RFC
4361, Feb. 2006.
[RFC4388] R. Woundy and K. Kinnear, “Dynamic Host Configuration Protocol
(DHCP) Leasequery,” Internet RFC 4388, Feb. 2006.
[RFC4429] N. Moore, “Optimistic Duplicate Address Detection (DAD) for IPv6,”
Internet RFC 4429, Apr. 2006.
[RFC4436] B. Aboba, J. Carlson, and S. Cheshire, “Detecting Network Attachment
in IPv4 (DNAv4),” Internet RFC 4436, Mar. 2006.
[RFC4649] B. Volz, “Dynamic Host Configuration Protocol (DHCPv6) Relay
Agent Remote-ID Option,” Internet RFC 4649, Aug. 2006.
[RFC4702] M. Stapp, B. Volz, and Y. Rekhter, “The Dynamic Host Configuration
Protocol (DHCP) Client Fully Qualified Domain Name (FQDN) Option,” Internet
RFC 4702, Oct. 2006.
[RFC4704] B. Volz, “The Dynamic Host Configuration Protocol for IPv6 (DHCPv6) Client Fully Qualified Domain Name (FQDN) Option,” Internet RFC 4704, Oct. 2006.
[RFC4776] H. Schulzrinne, “Dynamic Host Configuration Protocol (DHCPv4 and
DHCPv6) Option for Civic Addresses Configuration Information,” Internet RFC
4776, Nov. 2006.
[RFC4833] E. Lear and P. Eggert, “Timezone Options for DHCP,” Internet RFC
4833, Apr. 2007.
[RFC4862] S. Thomson, T. Narten, and T. Jinmei, “IPv6 Stateless Address Autoconfiguration,” Internet RFC 4862, Sept. 2007.
[RFC4941] T. Narten, R. Draves, and S. Krishnan, “Privacy Extensions for Stateless Address Autoconfiguration in IPv6,” Internet RFC 4941, Sept. 2007.
[RFC5007] J. Brzozowski, K. Kinnear, B. Volz, and S. Zeng, “DHCPv6 Leasequery,” Internet RFC 5007, Sept. 2007.
[RFC5010] K. Kinnear, M. Normoyle, and M. Stapp, “The Dynamic Host Configuration Protocol Version 4 (DHCPv4) Relay Agent Flags Suboption,” Internet RFC
5010, Sept. 2007.
[RFC5107] R. Johnson, J. Kumarasamy, K. Kinnear, and M. Stapp, “DHCP Server
Identifier Override Suboption,” Internet RFC 5107, Feb. 2008.
[RFC5175] B. Haberman, ed., and R. Hinden, “IPv6 Router Advertisement Flags
Option,” Internet RFC 5175, Mar. 2008.
[RFC5192] L. Morand, A. Yegin, S. Kumar, and S. Madanapalli, “DHCP Options
for Protocol for Carrying Authentication for Network Access (PANA) Authentication Agents,” Internet RFC 5192, May 2008.
[RFC5222] T. Hardie, A. Newton, H. Schulzrinne, and H. Tschofenig, “LoST: A
Location-to-Service Translation Protocol,” Internet RFC 5222, Aug. 2008.
[RFC5223] H. Schulzrinne, J. Polk, and H. Tschofenig, “Discovering Location-to-Service Translation (LoST) Servers Using the Dynamic Host Configuration Protocol (DHCP),” Internet RFC 5223, Aug. 2008.
[RFC5460] M. Stapp, “DHCPv6 Bulk Leasequery,” Internet RFC 5460, Feb. 2009.
[RFC5569] R. Despres, “IPv6 Rapid Deployment on IPv4 Infrastructures (6rd),”
Internet RFC 5569 (informational), Jan. 2010.
[RFC5677] T. Melia, ed., G. Bajko, S. Das, N. Golmie, and JC. Zuniga, “IEEE 802.21
Mobility Services Framework Design (MSFD),” Internet RFC 5677, Dec. 2009.
[RFC5678] G. Bajko and S. Das, “Dynamic Host Configuration Protocol (DHCPv4
and DHCPv6) Options for IEEE 802.21 Mobility Services (MoS) Discovery,” Internet RFC 5678, Dec. 2009.
[RFC5735] M. Cotton and L. Vegoda, “Special-Use IPv4 Addresses,” Internet RFC
5735/BCP 0153, Jan. 2010.
[RFC5969] W. Townsley and O. Troan, “IPv6 Rapid Deployment on IPv4 Infrastructures (6rd)—Protocol Specification,” Internet RFC 5969, Aug. 2010.
[RFC5985] M. Barnes, ed., “HTTP-Enabled Location Delivery (HELD),” Internet
RFC 5985, Sept. 2010.
[RFC5986] M. Thomson and J. Winterbottom, “Discovering the Local Location
Information Server (LIS),” Internet RFC 5986, Sept. 2010.
[RFC6059] S. Krishnan and G. Daley, “Simple Procedures for Detecting Network
Attachment in IPv6,” Internet RFC 6059, Nov. 2010.
[RFC6106] J. Jeong, S. Park, L. Beloeil, and S. Madanapalli, “IPv6 Router Advertisement Options for DNS Configuration,” Internet RFC 6106, Nov. 2010.
[RFC6148] P. Kurapati, R. Desetti, and B. Joshi, “DHCPv4 Lease Query by Relay
Agent Remote ID,” Internet RFC 6148, Feb. 2011.
[RFC6153] S. Das and G. Bajko, “DHCPv4 and DHCPv6 Options for Access Network Discovery and Selection Function (ANDSF) Discovery,” Internet RFC 6153,
Feb. 2011.
[RFC6221] D. Miles, ed., S. Ooghe, W. Dec, S. Krishnan, and A. Kavanagh, “Lightweight DHCPv6 Relay Agent,” Internet RFC 6221, May 2011.
[RFC6225] J. Polk, M. Linsner, M. Thomson, and B. Aboba, ed., “Dynamic Host
Configuration Protocol Options for Coordinate-Based Location Configuration
Information,” Internet RFC 6225, Mar. 2011.
[RFC6276] R. Droms, P. Thubert, F. Dupont, W. Haddad, and C. Bernardos,
“DHCPv6 Prefix Delegation for Network Mobility (NEMO),” Internet RFC 6276,
July 2011.
Firewalls and Network Address
Translation (NAT)
During the early years of the Internet and its protocols, most network designers
and developers were from universities or other entities engaged in research. These
researchers were generally friendly and cooperative, and the Internet system was
not especially resilient to attack, but not many people were interested in attacking it, either. By the late 1980s and especially the early to mid-1990s the Internet
had gained the interest of the mass population and ultimately people interested
in compromising its security. Successful attacks became commonplace, and many
problems were caused by bugs or unplanned protocol operations in the software
implementations of Internet hosts. Because some sites had a large number of end
systems with various versions of operating system software, it became very difficult for system administrators to ensure that all the various bugs in these end
systems had been fixed. Furthermore, for obsolete systems, this task was all but
impossible. Fixing the problem would have required a way to control the Internet
traffic to which the end hosts were exposed. Today, this is provided by a firewall—
a type of router that restricts the types of traffic it forwards.
As firewalls were being deployed to protect enterprises, another problem
was becoming important: the number of available IPv4 addresses was diminishing, with a threat of exhaustion. Something would have to be done with the
way addresses were allocated and used. One of the most important mechanisms
developed to deal with this, aside from IPv6, is called Network Address Translation
(NAT). With NAT, Internet addresses need not be globally unique, and as a consequence they can be reused in different parts of the Internet, called address realms.
Allowing the same addresses to be reused in multiple realms greatly eased the
problem of address exhaustion. As we shall see, NAT can also be synergistically
combined with firewalls to produce combination devices that have become the
most popular types of routers used to connect end users, including home networks and small enterprises, to the Internet. We shall now explore both firewalls
and NATs in further detail.
Firewalls
Given the enormous management problems associated with trying to keep end
system software up-to-date and bug-free, the focus of resisting attacks expanded
from securing end systems to restricting the Internet traffic allowed to flow to end
systems by filtering out some traffic using firewalls. Today, firewalls are common,
and several different types have evolved.
The two major types of firewalls commonly used include proxy firewalls and
packet-filtering firewalls. The main difference between them is the layer in the protocol stack at which they operate, and consequently the way IP addresses and port
numbers are used. The packet-filtering firewall is an Internet router that drops
datagrams that (fail to) meet specific criteria. The proxy firewall operates as a
multihomed server host from the viewpoint of an Internet client. That is, it is the
endpoint of TCP and UDP transport associations; it does not typically route IP
datagrams at the IP protocol layer.
Packet-Filtering Firewalls
Packet-filtering firewalls act as Internet routers and filter (drop) some traffic. They
can generally be configured to discard or forward packets whose headers meet
(or fail to meet) certain criteria, called filters. Simple filters include range comparisons on various parts of the network-layer or transport-layer headers. The most
popular filters involve undesired IP addresses or options, types of ICMP messages, and various UDP or TCP services, based on the port numbers contained in
each packet. As we shall see, the simplest packet-filtering firewalls are stateless,
whereas the more sophisticated ones are stateful. Stateless packet-filtering firewalls treat each datagram individually, whereas stateful firewalls are able to associate packets with either previously observed packets or packets that arrive in the
future to make inferences about datagrams or streams—either those belonging to
a single transport association or those IP fragments that constitute an IP datagram
(see Chapter 10). IP fragmentation can significantly complicate a firewall’s job, and
stateless packet-filtering firewalls are easily confused by fragments.
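The first-match filtering idea behind a stateless packet filter can be sketched as follows; the rule set and field names are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    proto: str  # "tcp", "udp", or "icmp"
    dport: int  # destination port number

# Each rule is a (predicate, action) pair applied to inbound traffic;
# the first matching rule wins, and the final catch-all implements a
# conservative default-deny policy. Illustrative rule set only.
rules = [
    (lambda p: p.proto == "tcp" and p.dport == 80, "forward"),  # web to DMZ
    (lambda p: p.proto == "tcp" and p.dport == 25, "forward"),  # mail to DMZ
    (lambda p: p.proto == "udp" and p.dport == 53, "forward"),  # DNS queries
    (lambda p: True, "drop"),                                   # default deny
]

def filter_packet(p: Packet) -> str:
    """Return the action of the first rule matching the packet."""
    for predicate, action in rules:
        if predicate(p):
            return action
```

A stateless filter like this examines each packet in isolation, which is exactly why fragments (whose later pieces lack the transport header) can confuse it.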
A typical packet-filtering firewall is shown in Figure 7-1. Here, the firewall is
an Internet router with three network interfaces: an “inside,” an “outside,” and a
third “DMZ” interface. The DMZ subnet provides access to an extranet or DMZ
where servers are deployed for Internet users to access. Network administrators
install filters or access control lists (ACLs, basically policy lists indicating what
types of packets to discard or forward) in the firewall. Typically, these filters conservatively block traffic from the outside that may be harmful and liberally allow
traffic to travel from inside to outside.
Section 7.2 Firewalls
Figure 7-1 A typical packet-filtering firewall configuration. The firewall acts as an IP router between
an “inside” and an “outside” network, and sometimes a third “DMZ” or extranet network, allowing only certain traffic to pass through it. A common configuration allows
all traffic to pass from inside to outside but only a small subset of traffic to pass in
the reverse direction. When a DMZ is used, only certain services are permitted to be
accessed from the Internet.
Proxy Firewalls
Packet-filtering firewalls act as routers that selectively drop packets. Other types
of firewalls, called proxy firewalls, are not really Internet routers in the true sense.
Instead, they are essentially hosts running one or more application-layer gateways
(ALGs)—hosts with more than one network interface that relay traffic of certain
types between one connection/association and another at the application layer.
They do not typically perform IP forwarding as routers do, although more sophisticated proxy firewalls are now available that combine various functions.
Figure 7-2 illustrates a proxy firewall. For this type of firewall, clients on the
inside of the firewall are usually configured in a special way to associate (or connect) with the proxy instead of the actual end host providing the desired service.
(Applications capable of operating with proxy firewalls in this way include configuration options for specifying the proxy.) These firewalls act as multihomed hosts, and their IP
forwarding capability, if present, is typically disabled. As with packet-filtering
firewalls, a common configuration is to have an “outside” interface assigned a
globally routable IP address and for its “inner” interface to be configured with a
private IP address. Thus, proxy firewalls support the use of private address realms.
Figure 7-2
The proxy firewall acts as a multihomed Internet host, terminating TCP connections and
UDP associations at the application layer. It does not act as a conventional IP router but
rather as an ALG. Individual applications or proxies for each service supported must be
enabled for communication to take place through the proxy firewall.
While this type of firewall can be quite secure (some people believe this type
is fundamentally more secure than packet-filtering firewalls), this security comes
at a cost of brittleness and lack of flexibility. In particular, because this style of
firewall must contain a proxy for each transport-layer service, any new services
to be used must have a corresponding proxy installed and operated for connectivity to take place through the proxy. In addition, each client must typically be
configured to find the proxy (e.g., using the Web Proxy Auto-Discovery Protocol,
or WPAD [XIDAD], although there are some alternatives—so-called capturing
proxies that catch all traffic of a certain type regardless of destination address).
With respect to deployment, these firewalls work well in environments where all
types of network services being accessed are known with certainty in advance,
but they may require significant intervention from network operators to support
additional services.
The two most common forms of proxy firewalls are HTTP proxy firewalls
[RFC2616] and SOCKS firewalls [RFC1928]. The first type, also called Web proxies,
works only for the HTTP and HTTPS (Web) protocols, but because these protocols
are so popular, such proxies are commonly used. These proxies act as Web servers for internal clients and as Web clients when accessing external Web sites. Such
proxies often also operate as Web caches. These caches save copies of Web pages so
that subsequent accesses can be served directly from the cache instead of from the
originating Internet Web server. Doing so can reduce latency to display Web pages
and improve the experience of users accessing the Web. Some Web proxies are
also used as content filters, which attempt to block access to certain Web sites based
on a “blacklist” of prohibited sites. Conversely, a number of so-called tunneling
proxy servers are available on the Internet. These servers (e.g., psiphon, CGIProxy)
essentially perform the opposite function—to allow users to avoid being blocked
by content filters.
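One concrete difference a Web proxy introduces is visible in the request itself: per [RFC2616], a client talking to a proxy places an absolute URI in the request line rather than just a path, so the proxy knows which origin server to contact (or which cache entry to serve). A small sketch, with illustrative host and path:

```python
# Contrast the request line a client sends directly to an origin server
# with the one it sends to a Web proxy. Host and path are illustrative.

def direct_request(host: str, path: str) -> str:
    """Request line used when connecting straight to the origin server."""
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n"

def proxied_request(host: str, path: str) -> str:
    """Request sent to the proxy's address: the request line carries an
    absolute URI so the proxy can locate the origin server."""
    return f"GET http://{host}{path} HTTP/1.1\r\nHost: {host}\r\n\r\n"

req = proxied_request("www.example.com", "/index.html")
```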
Section 7.3 Network Address Translation (NAT)
The SOCKS protocol is more generic than HTTP for proxy access and is applicable to more services than just the Web. Two versions of SOCKS are currently
in use: version 4 and version 5. Version 4 provides the basic support for proxy
traversal, and version 5 adds strong authentication, UDP traversal, and IPv6
addressing. To use a SOCKS proxy, an application must be written to use SOCKS
(it must be “socksified”) and configured to know about the location of the proxy
and which version of SOCKS to use. Once this is accomplished, the client uses the
SOCKS protocol to request the proxy to perform network connections and, optionally, DNS lookups.
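As a concrete illustration, the first bytes a "socksified" client sends are easy to construct by hand. The following Python sketch builds the version/method-selection message and a CONNECT request using the DOMAINNAME address type from [RFC1928]; the helper names are ours, not part of any standard API.

```python
import struct

def socks5_greeting():
    # VER=5, NMETHODS=1, METHODS=[0x00] (no authentication required)
    return b"\x05\x01\x00"

def socks5_connect_request(host, port):
    # VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (DOMAINNAME), so the proxy
    # rather than the client resolves the name; this is one way SOCKS
    # can perform DNS lookups on the client's behalf.
    name = host.encode("ascii")
    if len(name) > 255:
        raise ValueError("hostname too long for SOCKS5")
    return (struct.pack("!BBBBB", 5, 1, 0, 3, len(name))
            + name + struct.pack("!H", port))
```

A real client would send these bytes over a TCP connection to the proxy and parse the proxy's replies before relaying application data.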
7.3 Network Address Translation (NAT)
NAT is essentially a mechanism for allowing the same sets of IP addresses to be
reused in different parts of the Internet. The primary motivation for the creation
of NAT was the limited and diminishing availability of IP address space. The most
common use case for a NAT is when a site with a single Internet connection is
assigned a small range of IP addresses (perhaps only a single address), but there
are multiple computers requiring Internet access. When all incoming and outgoing traffic passes through a single NAT device that partitions the inside (private)
address realm from the global Internet address realm, all the internal systems
can be provided Internet connectivity as clients using locally assigned, private IP
addresses. Allowing privately addressed systems to offer services on the Internet,
however, is somewhat more complicated. We discuss this case in Section 7.3.4.
NAT was introduced to solve two problems: address depletion and concerns regarding the scalability of routing. At the time of its introduction (early
1990s), NAT was suggested as a stopgap, temporary measure to be used until the
deployment of some protocol with a larger number of addresses (ultimately, IPv6)
became widespread. Routing scalability was being tackled with the development
of Classless Inter-Domain Routing (CIDR; see Chapter 2). NAT is popular because
it reduces the need for globally routable Internet addresses but also because it
offers some degree of natural firewall capability and requires little configuration.
Perhaps ironically, the development and eventual widespread use of NAT has contributed significantly to slowing the adoption of IPv6. Among its other benefits, IPv6
was intended to make NAT unnecessary [RFC4864].
Despite its popularity, NAT has several drawbacks. The most obvious is that
offering Internet-accessible services from the private side of a NAT requires special configuration because privately addressed systems are not directly reachable from the Internet. In addition, for a NAT to work properly, every packet in
both directions of a connection or association must pass through the same NAT.
This is because the NAT must actively rewrite the addressing information in each
packet in order for communication between a privately addressed system and a
conventionally addressed Internet host to work. In many ways, NATs run counter
to a fundamental tenet of the Internet protocols: the “smart edge” and “dumb
middle.” To do their job, NATs require connection state on a per-association (or
per-connection) basis and must operate across multiple protocol layers, unlike conventional routers. Modifying an address at the IP layer also requires modifying
checksums at the transport layer (see Chapters 10 and 13 regarding the pseudo-header checksum to see why).
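To see why rewriting is more than a simple header swap, consider the incremental checksum update technique of [RFC1624], which lets a NAT fix a checksum without recomputing it over the whole packet. The Python sketch below is illustrative only; the function names are ours.

```python
def cksum16(data):
    # Internet checksum (RFC 1071): one's-complement sum of 16-bit
    # words with end-around carry, then complemented.
    if len(data) % 2:
        data += b"\x00"
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def fixup(old_cksum, old_word, new_word):
    # RFC 1624 eqn. 3: HC' = ~(~HC + ~m + m'), applied when one
    # 16-bit word m of the covered data changes to m'.
    s = (~old_cksum & 0xFFFF) + (~old_word & 0xFFFF) + new_word
    s = (s & 0xFFFF) + (s >> 16)
    s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF
```

A NAT applies such an update once per 16-bit word of each address (and port) it rewrites, touching both the IP header checksum and the transport checksum that covers the pseudo-header.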
NAT poses problems for some application protocols, especially those that
send IP addressing information inside the application-layer payload. The File
Transfer Protocol (FTP) [RFC0959] and SIP [RFC5411] are among the best-known
protocols of this type. They require either a special application-layer gateway function that rewrites the application content so that it works unmodified through the NAT, or some other NAT traversal method that allows the application to discover and adapt to the specific NAT it is using. A more complete list of considerations
regarding NAT appears in [RFC3027]. Despite its numerous problems, NAT is very widely used, and most network routers (including essentially all low-end home routers) support it. Today, NATs are so prevalent that application designers
are encouraged to make their applications “NAT-friendly” [RFC3235]. It is worth
mentioning that despite its shortcomings, NAT supports the basic protocols (e.g.,
e-mail, Web browsing) that are needed by millions of client systems accessing the
Internet every day.
A NAT works by rewriting the identifying information in packets transiting through a router. Most commonly this happens for two directions of a data
transfer. In its most basic form, NAT involves rewriting the source IP address of
packets as they are forwarded in one direction and the destination IP addresses of
packets traveling in the reverse direction (see Figure 7-3). This allows the source
IP address in outgoing packets to become one of the NAT router’s Internet-facing
interfaces instead of the originating host’s. Thus, to a host on the Internet, packets
coming from any of the hosts on the privately addressed side of the NAT appear
to be coming from a globally routable IP address of the NAT router.
Figure 7-3 A NAT isolates private addresses and the systems using them from the Internet. Packets with private addresses are not routed by the Internet directly but instead must be translated as they enter and leave the private network through the NAT router. Internet hosts see traffic as coming from a public IP address of the NAT.
Most NATs perform both translation and packet filtering, and the packet-filtering
criteria depend on the dynamics of the NAT state. The choice of packet-filtering
policy may have a different granularity—for example, the treatment of unsolicited packets (those not associated with packets originating from behind the NAT)
received by the NAT may depend on source and destination IP address and/or
source and destination port number. The behavior may vary between NATs or in
some cases vary over time through the same NAT. This presents challenges for
applications that must operate behind a wide variety of NATs.
7.3.1 Traditional NAT: Basic NAT and NAPT
The precise behavior of a NAT remained unspecified for many years. Nonetheless,
a taxonomy of NAT types has emerged, based largely on observing how different
implementations of the NAT idea behave. The so-called traditional NAT includes
both basic NAT and Network Address Port Translation (NAPT) [RFC3022]. Basic NAT
performs rewriting of IP addresses only. In essence, a private address is rewritten
to be a public address, often from a pool or range of public addresses supplied
by an ISP. This type of NAT is not the most popular because it does not help to
dramatically reduce the need for IP addresses—the number of globally routable
addresses must equal or exceed the number of internal hosts that wish to access
the Internet simultaneously. A much more popular approach, NAPT involves
using the transport-layer identifiers (i.e., ports for TCP and UDP, query identifiers
for ICMP) to differentiate which host on the private side of the NAT is associated
with a particular packet (see Figure 7-4). This allows a large number of internal
hosts (i.e., multiple thousands) to access the Internet simultaneously using a limited number of public addresses, often only a single one. We shall ordinarily use
the term NAT to include both traditional NAT and NAPT unless the distinction is
important in a particular context.
Figure 7-4 A basic IPv4 NAT (left) rewrites IP addresses from a pool of addresses and leaves port numbers unchanged. NAPT (right), also known as IP masquerading, usually rewrites all addresses to a single external address. NAPT must sometimes rewrite port numbers in order to avoid collisions. In this case, the second instance of port number 23479 was rewritten to use port number 3000 so that returning traffic for the two internal hosts could be distinguished.
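The collision handling in the figure can be modeled as a simple external-port allocator. This toy Python sketch (names are ours) tries port preservation first and probes for the next free port on collision; real NATs use a variety of policies, and the NAT in the figure happened to choose 3000.

```python
class PortAllocator:
    def __init__(self):
        self.in_use = set()

    def allocate(self, internal_port):
        # Try to preserve the internal source port; on collision,
        # probe upward, wrapping within 1..65535.
        port = internal_port
        while port in self.in_use:
            port = port % 65535 + 1
            if port == internal_port:
                raise RuntimeError("no free external ports")
        self.in_use.add(port)
        return port
```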
The addresses used in a private addressing realm “behind” or “inside” a NAT
are not enforced by anyone other than the local network administrator. Thus, it is
possible for a private realm to make use of global address space. In principle, this
is acceptable. However, when such global addresses are owned and being used
by another entity on the Internet, local systems in the private realm would most
likely be unable to reach the public systems using the same addresses because the
close proximity of the local systems would effectively “mask” the visibility of the
farther-away systems using the same addresses. To avoid this undesirable situation, there are three IPv4 address ranges reserved for use with private addressing realms [RFC1918]:,, and These address
ranges are often used as default values for address pools in embedded DHCP
servers (see Chapter 6).
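Checking whether an address falls in one of the [RFC1918] private-use blocks is straightforward with Python's standard ipaddress module; the helper below is our own illustration, not NAT code.

```python
import ipaddress

# The three IPv4 blocks reserved for private use by [RFC1918]
PRIVATE_NETS = [ipaddress.ip_network(n)
                for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr):
    # True if addr lies in any of the private-use ranges
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_NETS)
```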
As suggested earlier, a NAT provides some degree of security similar to that
of a firewall. By default, all systems on the private side of the NAT cannot be
reached from the Internet. In most NAT deployments, the internal systems use
private addresses. Consequently, communications between hosts in the private
addressing realm and those in the public realm can be facilitated only with participation from the NAT, according to its usage policies and behavior. While a large
variety of policies may be used in practice, a common policy allows almost all
outgoing and returning traffic (associated with outgoing traffic) to pass through
the NAT but blocks almost all incoming new connection requests. This behavior inhibits “probing” attacks that attempt to ascertain which IP addresses have
active hosts available to exploit. In addition, a NAT (especially a NAPT) “hides”
the number and configuration of internal addresses from the outside. Some users
feel this topology information is proprietary and should remain confidential. NAT
helps by providing so-called topology hiding.
As we shall now explore, NATs are tailored to the protocols and applications
that they need to support, so it is difficult to discuss NAT behavior without also
mentioning the particular protocol(s) it is being asked to handle. Thus, we now
turn to how NAT behaves with each major transport protocol and how it may be
used in mixed IPv4/IPv6 environments. Many of the behavioral specifics for NATs
have been the subject of the IETF Behavior Engineering for Hindrance Avoidance
(BEHAVE) working group. BEHAVE has produced a number of documents, starting in 2007, that clarify consistent behaviors for NATs. These documents are useful
for application writers and NAT developers so that a consistent expectation can be
established as to how NATs should operate.

NAT and TCP
Recall from Chapter 1 that the primary transport-layer protocol for the Internet,
TCP, uses an IP address and port number to identify each end of a connection. A
connection is identified by the combination of two ends; each unique TCP connection is identified by two IP addresses and two port numbers. When a TCP
connection starts, an “active opener” or client usually sends a synchronization
(SYN) packet to a “passive opener” or server. The server responds with its own
SYN packet, which also includes an acknowledgment (ACK) of the client’s SYN.
The client then responds with an ACK to the server. This “three-way handshake”
establishes the connection. A similar exchange with finish (FIN) packets is used
to gracefully close a connection. The connection can also be forcefully closed right
away using a reset (RST) packet. (See Chapter 13 for more detail on TCP connections.) The behavioral requirements for traditional NAT with TCP are defined in
[RFC5382] and relate primarily to the TCP three-way handshake.
Referring to the example home network in Figure 7-3, consider a TCP connection initiated by the wireless client, destined for port 80 on a Web server in the Internet. Using the notation (source IP:source port; destination IP:destination port), the packet initiating the connection on the private segment is addressed as (client private address:client port; server address:80). The NAT/firewall device, acting as the default router for the client, receives the first packet. The NAT notices that the incoming packet is a new connection (because the SYN bit in the TCP header is turned on; see Chapter 13). If policy permits (which it typically does because this is an outgoing connection), the source IP address in the packet is modified to reflect the routable IP address of the NAT router’s external interface. Thus, when the NAT forwards this packet, the addressing is (NAT external address:client port; server address:80). In addition to forwarding the packet, the NAT creates internal state to remember the fact that a new connection is being handled by the NAT (called a NAT session). At a minimum, this state includes an entry (called a NAT mapping) containing the source port number and IP address of the client. This becomes useful when the Internet server replies. The server replies to the NAT’s external address, using the port number chosen initially by the client. This behavior is called port preservation. By matching the destination port number on the received datagram against the appropriate NAT mapping, the NAT is able to ascertain the internal IP address of the client that made the initial request. It rewrites the response packet’s destination from (NAT external address:client port) to (client private address:client port) and forwards it. The client then receives a
response to its request and for most purposes is now connected to the server.
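The mapping logic of this example can be summarized in a short Python sketch. The class and method names are hypothetical, port preservation is assumed to succeed, and the addresses in the test below are drawn from the [RFC1918] and documentation ranges rather than from the figure.

```python
# Hypothetical sketch of NAPT session creation and rewriting; a real
# NAT also tracks TCP state, timers, and the checksum fix-up.
class NaptSessions:
    def __init__(self, external_ip):
        self.external_ip = external_ip
        self.mappings = {}      # external port -> (internal ip, internal port)

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        # Create the NAT mapping and rewrite the packet's source to the
        # NAT's external address (port preservation assumed).
        self.mappings[src_port] = (src_ip, src_port)
        return (self.external_ip, src_port, dst_ip, dst_port)

    def inbound(self, src_ip, src_port, dst_ip, dst_port):
        # Match the returning packet's destination port against the
        # mapping and rewrite the destination back to the inside host.
        int_ip, int_port = self.mappings[dst_port]
        return (src_ip, src_port, int_ip, int_port)
```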
This example conveys how a basic NAT session is established in the normal case, but not how the session is cleared. Session state is removed if FINs
are exchanged, but not all TCP connections are cleared gracefully. Sometimes a
computer is simply turned off, which can leave stale NAT mappings in the NAT’s
memory. Thus, a NAT must also remove mappings thought to have “gone dead”
because of a lack of traffic (or because an RST segment indicates some other form of abnormal connection termination).
Most NATs include a simplified version of the TCP connection establishment
procedures and can distinguish between connection success and failure. In particular, when an outgoing SYN segment is observed, a connection timer is activated, and if no ACK is seen before the timer expires, the session state is cleared.
If an ACK does arrive, the timer is canceled and a session timer is created, with a
considerably longer timeout (e.g., hours instead of minutes). When the session timer expires, the NAT may send an additional packet to the internal endpoint, just to double-check whether the session is indeed dead (called probing). If it receives an ACK, the NAT
realizes that the connection is still active, resets the session timer, and does not
delete the session. If it receives either no response (after a close timer has expired)
or an RST segment, the connection has gone dead, and the state is cleared.
[RFC5382], a product of the BEHAVE working group, notes that a TCP connection can be configured to send “keepalive” packets (see Chapter 17), and the
default rate is one packet every 2 hours, if enabled. Otherwise, a TCP connection
can remain established indefinitely. While a connection is being set up or cleared,
however, the maximum idle time is 4 minutes. Consequently, [RFC5382] requires
(REQ-5) that a NAT wait at least 2 hours and 4 minutes before concluding that
an established connection is dead and at least 4 minutes before concluding that a
partially opened or closed connection is dead.
One of the tricky problems for a TCP NAT is handling peer-to-peer applications operating on hosts residing on the private sides of multiple NATs [RFC5128].
Some of these applications use a simultaneous open whereby each end of the connection acts as a client and sends SYN packets more or less simultaneously. TCP is
able to handle this case by responding with SYN + ACK packets that complete the
connection faster than with the three-way handshake, but many existing NATs do
not handle it properly. [RFC5382] addresses this by requiring (REQ-2) that a NAT
handle all valid TCP packet exchanges, and simultaneous opens in particular.
Some peer-to-peer applications (e.g., network games) use this behavior. In addition, [RFC5382] specifies that an inbound SYN for a connection about which the
NAT knows nothing should be silently discarded. This can occur when a simultaneous open is attempted but the external host’s SYN arrives at the NAT before the
internal host’s SYN. Although this may seem unlikely, it can happen as a result
of clock skew, for example. If the incoming external SYN is dropped, the internal
SYN has time to establish a NAT mapping for the same connection represented by
the external SYN. If no internal SYN is forthcoming within 6 seconds, the NAT may signal an error to the external host.

NAT and UDP
The NAT behavioral requirements for unicast UDP are defined in [RFC4787].
Most of the same issues arise when performing NAT on a collection of UDP datagrams as arise when performing NAT on TCP. UDP is somewhat different, however, because there are no connection establishment and clearing procedures as
there are in TCP. More specifically, there are no indicators such as the SYN, FIN,
and RST bits to indicate that a session is being created or destroyed. Furthermore,
the participants in an association may not be completely clear. UDP does not use
a 4-tuple to identify a connection like TCP; instead, it can rely on only the two
endpoint address/port number combinations. To handle these issues, UDP NATs
use a mapping timer to clear NAT state if a binding has not been used “recently.”
There is considerable variation in the values used for this timer to determine what
“recently” means, but [RFC4787] requires the timer to be at least 2 minutes and recommends that it be 5 minutes. A related consideration is when the timer should be
considered refreshed. Timers can be refreshed when packets travel from the inside
to the outside of the NAT (NAT outbound refresh behavior) or vice versa (NAT
inbound refresh behavior). [RFC4787] requires NAT outbound refresh behavior to
be true. Inbound behavior may or may not be true.
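A toy Python model of such a mapping timer, with outbound-only refresh and the recommended 5-minute default, might look like the following (the names are hypothetical; a real NAT would also maintain the external mapping itself).

```python
class UdpMappingTable:
    """Toy UDP NAT mapping timer. [RFC4787] requires a timeout of at
    least 2 minutes, recommends 5, and requires outbound refresh;
    inbound refresh is optional and disabled here."""
    def __init__(self, timeout=300.0):
        self.timeout = timeout
        self.last_used = {}     # (internal ip, internal port) -> timestamp

    def outbound(self, key, now):
        # Create or refresh the mapping on outbound traffic
        self.last_used[key] = now

    def inbound_allowed(self, key, now):
        t = self.last_used.get(key)
        if t is None or now - t > self.timeout:
            self.last_used.pop(key, None)   # expired or unknown mapping
            return False
        return True                         # note: no inbound refresh
```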
As we discussed in Chapter 5 (and will see again in Chapter 10), UDP and IP
packets can be fragmented. Fragmentation allows for a single IP packet to span
multiple chunks (fragments), each of which is treated as an independent datagram. However, because of the layering of UDP above IP, an IP fragment other
than the first one does not contain the port number information needed by NAPT
to operate properly. This also applies to TCP and ICMP. Thus, in general, fragments cannot be handled properly by simple NATs or NAPTs.

NAT and Other Transport Protocols (DCCP, SCTP)
Although TCP and UDP are by far the most widely used Internet transport protocols, there are two other protocols for which NAT behaviors have been defined
or are being defined. The Datagram Congestion Control Protocol (DCCP) [RFC4340]
provides a congestion-controlled datagram service. [RFC5597] gives NAT behavioral requirements with respect to DCCP, and [RFC5596] gives a modification to
DCCP to support a TCP-like simultaneous open procedure for use with DCCP. The
Stream Control Transmission Protocol (SCTP) [RFC4960] provides a reliable messaging service that can accommodate hosts with multiple addresses. Considerations
for NAT with SCTP are given in [HBA09] and [IDSNAT].

NAT and ICMP
ICMP, the Internet Control Message Protocol, is detailed in Chapter 8. It provides
status information about IP packets and can also be used for making certain measurements and gathering information about the state of the network. The NAT
behavioral requirements for ICMP are defined in [RFC5508]. There are two issues
involved when NAT is used for ICMP. ICMP has two categories of messages: informational and error. Error messages generally contain a (partial or full) copy of the
IP packet that induced the error condition. They are sent from the point where
an error was detected, often in the middle of the network, to the original sender.
Ordinarily, this presents no difficulty, but when an ICMP error message passes
through a NAT, the IP addresses in the included “offending datagram” need to
be rewritten by the NAT in order for them to make sense to the end client (called
ICMP fix-up). For informational messages, the same issues arise, but in this case
most message types are of a query/response or client/server nature and include
a Query ID field that is handled much like port numbers for TCP or UDP. Thus,
a NAT handling these types of messages can recognize outgoing informational
requests and set a timer in anticipation of a returning response.
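The Query ID handling for informational messages parallels NAPT port rewriting, as this hypothetical sketch suggests (timers and the error-message fix-up are omitted).

```python
# Illustrative only: rewriting ICMP echo Query IDs much as NAPT
# rewrites TCP/UDP port numbers.
class IcmpQueryMap:
    def __init__(self):
        self.next_id = 1
        self.by_external = {}   # external id -> (internal ip, internal id)

    def outbound(self, int_ip, int_id):
        ext_id = self.next_id
        self.next_id += 1
        self.by_external[ext_id] = (int_ip, int_id)
        return ext_id           # id placed in the outgoing request

    def inbound(self, ext_id):
        # None means no matching outstanding request
        return self.by_external.get(ext_id)
```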
NAT and Tunneled Packets
In some cases, tunneled packets (see Chapter 3) are to be sent through a NAT.
When this happens, not only must a NAT rewrite the IP header, but it may also
have to rewrite the headers or payloads of other packets that are encapsulated in
them. One example of this is the Generic Routing Encapsulation (GRE) header
used with the Point-to-Point Tunneling Protocol (PPTP; see Chapter 3). When
the GRE header is passed through a NAT, its Call-ID field could conflict with the
NAT’s (or with other hosts’ tunneled connections). If the NAT fails to handle this
mapping appropriately, communication is not possible. As we might imagine,
additional levels of encapsulation serve only to complicate a NAT’s job further.

NAT and Multicast
So far we have discussed only unicast IP traffic with NATs. NATs can be configured to support multicast traffic (see Chapter 9), although this is rare. [RFC5135]
gives the requirements for handling multicast traffic through NATs. In effect, to
support multicast traffic a NAT is augmented with an IGMP proxy (see [RFC4605]
and Chapter 9). In addition, the destination IP addresses and port numbers of
packets traveling from a host on the outside to the inside of the NAT are not modified.
For traffic flowing from inside to outside, the source addresses and port numbers
may be modified according to the same behaviors as with unicast UDP.

NAT and IPv6
Given the tremendous popularity of NAT for IPv4, it is natural to wonder whether
NAT will be used with IPv6. At present, this is a contentious issue [RFC5902].
To many protocol designers, NAT arose as a necessary but undesirable “wart”
that has added a tremendous amount of complexity to the design of every other
protocol. Consequently, there is staunch resistance to supporting the use of NAT
with IPv6 based on the idea that saving address space is unnecessary with IPv6
and that other desirable NAT features (e.g., firewall-like functionality, topology
hiding, and privacy) can be better achieved using Local Network Protection (LNP)
[RFC4864]. LNP represents a collection of techniques with IPv6 that match or
exceed the properties of NATs.
Aside from its packet-filtering properties, NAT supports the coexistence of
multiple address realms and thereby helps to avoid the problem of a site having
to change its IP addresses when it switches ISPs. For example, [RFC4193] defines
Unique Local IPv6 Unicast Addresses (ULAs) that could conceivably be used with an
experimental version of IPv6-to-IPv6 prefix translation called NPTv6 [RFC6296]. It
uses an algorithm instead of a table to translate IPv6 addresses to (different) IPv6
addresses (e.g., in different realms) based on their prefix and as a result does not
require keeping per-connection state as with conventional NAT. In addition, the
algorithm modifies addresses in such a way that the resulting checksum computation for common transport protocols (TCP, UDP) remains the same. This significantly reduces the complexity of NAT because it does not have to modify the
data in a packet beyond the network layer and does not require access to transport layer port numbers in order to operate properly. However, applications that
require access to a NAT’s external address must still use a NAT traversal method
or depend on an ALG. In addition, NPTv6 does not by itself offer the packet-filtering capabilities of a firewall, so additional deployment considerations must be taken into account when it is used.
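The checksum-neutral property of NPTv6 can be demonstrated with 16-bit one's-complement arithmetic. The sketch below translates a /48 prefix and lets one interior word absorb the difference, in the spirit of [RFC6296]; the names are ours, and the special cases the RFC defines (e.g., an adjusted word equal to 0xFFFF) are ignored.

```python
def ones_sum(words):
    # 16-bit one's-complement sum with end-around carry
    s = 0
    for w in words:
        s += w
        s = (s & 0xFFFF) + (s >> 16)
    return s

def nptv6_translate(addr, old_prefix, new_prefix):
    # addr: 8 16-bit words; prefixes: the first 3 words (/48).
    # Word 3 absorbs the difference so the one's-complement sum of the
    # whole address, and hence any TCP/UDP checksum covering it, is
    # unchanged. In one's complement, -x is represented as 0xFFFF - x.
    out = list(new_prefix) + list(addr[3:])
    out[3] = ones_sum([addr[3], ones_sum(old_prefix),
                       0xFFFF - ones_sum(new_prefix)])
    return out
```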
7.3.2 Address and Port Translation Behavior
There has been considerable variation in the way NATs operate. Most of the details
relate to the specifics of the address and port mappings. One of the primary goals
of the BEHAVE working group in IETF was to clarify the common behaviors and
set guidelines as to which are the most appropriate. To better understand the
issues involved, we begin with a generic NAT mapping example (see Figure 7-5).
Figure 7-5 A NAT’s address and port behavior is characterized by what its mappings depend on.
The inside host uses IP address:port X:x to contact Y1:y1 and then Y2:y2. The address and
port used by the NAT for these associations are X1′:x1′ and X2′:x2′, respectively. If X1′:x1′
equals X2′:x2′ for any Y1:y1 or Y2:y2, the NAT has endpoint-independent mappings. If
X1′:x1′ equals X2′:x2′ if and only if Y1 equals Y2, the NAT has address-dependent mappings. If X1′:x1′ equals X2′:x2′ if and only if Y1:y1 equals Y2:y2, the NAT has addressand port-dependent mappings. A NAT with multiple external addresses (i.e., where X1′
may not equal X2′) has an address pooling behavior of arbitrary if the outside address is
chosen without regard to inside or outside address. Alternatively, it may have a pooling
behavior of paired, in which case the same X1 is used for any association with Y1.
In Figure 7-5, we use the notation X:x to indicate that a host in the private
addressing realm (inside host) uses IP address X with port number x (for ICMP,
the query ID is used instead of the port number). The IP address X is ordinarily
chosen from the private IPv4 address space defined in [RFC1918]. To reach the
remote address/port combination Y:y, the NAT establishes a mapping using an
external (usually a public, globally routable) address X1′ and port number x1′.
Assuming that the internal host contacts Y1:y1 followed by Y2:y2, the NAT establishes mappings X1′:x1′ and X2′:x2′, respectively. In most cases, X1′ equals X2′
because most sites use only a single globally routable IP address. The mapping is
said to be reused if x1′ equals x2′. If x1′ and x2′ equal x, the NAT implements port
preservation, mentioned earlier. In some cases, port preservation is not possible,
so the NAT must deal with port collisions as suggested by Figure 7-4.
Table 7-1 and Figure 7-5 summarize the various NAT port and address behaviors based on definitions from [RFC4787]. Table 7-1 also gives filtering behaviors
that use similar terminology and that we discuss in Section 7.3.3. For all common
transports, including TCP and UDP, the required NAT address- and port-handling
behavior is endpoint-independent (a similar behavior is recommended for ICMP).
The purpose of this requirement is to help applications that attempt to determine
the external addresses used for their traffic to work more reliably. We discuss this
in more detail in Section 7.4 when we discuss NAT traversal.
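The translation behaviors just defined amount to choosing which fields key the NAT's mapping table. A hypothetical sketch, using the X:x and Y:y notation from Figure 7-5:

```python
# Illustrative only: src = (X, x) inside endpoint; dst = (Y, y) remote.
def mapping_key(policy, src, dst):
    if policy == "endpoint-independent":
        return src              # X':x' reused for any Y:y (required)
    if policy == "address-dependent":
        return (src, dst[0])    # new mapping per remote address Y
    return (src, dst)           # address- and port-dependent
```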
Table 7-1 A NAT’s overall behavior is defined by both its translation and filtering behaviors. Each of these may be independent of host address, dependent on address, or dependent on both address and port.

Endpoint-independent
  Translation behavior: X1′:x1′ = X2′:x2′ for all Y1:y1 and Y2:y2 (required)
  Filtering behavior: allows any packets for X1:x1 as long as any mapping X1′:x1′ exists (recommended for greatest transparency)

Address-dependent
  Translation behavior: X1′:x1′ = X2′:x2′ iff Y1 = Y2
  Filtering behavior: allows packets for X1:x1 from Y1:y1 as long as X1 has previously contacted Y1 (recommended for more stringent filtering)

Address- and port-dependent
  Translation behavior: X1′:x1′ = X2′:x2′ iff Y1:y1 = Y2:y2
  Filtering behavior: allows packets for X1:x1 from Y1:y1 as long as X1 has previously contacted Y1:y1
As stated previously, a NAT may have several external addresses available to
use. The set of addresses is typically called the NAT pool or NAT address pool. Most
moderate to large-scale NATs use address pools. Note that NAT address pools are
distinct from the DHCP address pools discussed in Chapter 6, although a single
device may need to handle both NAT and DHCP address pools. One question in such environments is whether, when a single host behind the NAT opens multiple simultaneous connections, each is assigned the same external IP address (called address pairing). A NAT’s IP address pooling behavior is said to be arbitrary
if there is no restriction on which external address is used for any association. It
is said to be paired if it implements address pairing. Pairing is the recommended
NAT behavior for all transports. If pairing is not used, the communication peer
of an internal host may erroneously conclude that it is communicating with different hosts. For NATs with only a single external address, this is obviously not a concern.
A very brittle type of NAT overloads not only addresses but also ports (called
port overloading). In this case, the traffic of multiple internal hosts may be rewritten to the same external IP address and port number. This is a dangerous prospect
because if multiple hosts associate with a service on the same external host, it is
no longer possible to determine the appropriate destination for traffic returning
from the external host. For TCP, this is a consequence of all four elements of the
connection identifier (source and destination address and port numbers) being
identical in the external network among the various connections. Such behavior
is now disallowed.
Some NATs implement a special feature called port parity. Such NATs attempt
to preserve the “parity” (evenness or oddness) of port numbers. Thus, if x1 is even,
x1′ is even and vice versa. Although not as strong as port preservation, such behavior is sometimes useful for specific application protocols that use special port
numberings (e.g., the Real-time Transport Protocol, abbreviated RTP, has traditionally used
multiple ports, but there are proposed methods for avoiding this issue [RFC5761]).
Port parity preservation is a recommended NAT feature but not a requirement. It
is also expected to become less important over time as more sophisticated NAT
traversal methods become widespread.
7.3.3 Filtering Behavior
When a NAT creates a binding for a TCP connection, UDP association, or various forms of ICMP traffic, not only does it establish the address and port mappings, but it must also determine its filtering behavior for the returning traffic if
it acts as a firewall, which is the common case. The type of filtering a NAT performs, although logically distinct from its address- and port-handling behavior, is
often related. In particular, the same terminology is used: endpoint-independent,
address-dependent, and address- and port-dependent.
A NAT’s filtering behavior is usually related to whether it has established an
address mapping. Clearly, a NAT lacking any form of address mapping is unable
to forward any traffic it receives from the outside to the inside because it would
not know which internal destination to use. For the most common case of outgoing traffic, when a binding is established, filtering is disabled for relevant return
traffic. For NATs with endpoint-independent behavior, as soon as any mapping
is established for an internal host, any incoming traffic is permitted, regardless
of source. For address-dependent filtering behavior, traffic destined for X1:x1
is permitted from Y1:y1 only if Y1 had been previously contacted by X1:x1. For
those NATs with address- and port-dependent filtering behavior, traffic destined
for X1:x1 is permitted from Y1:y1 only if Y1:y1 had been previously contacted
by X1:x1. The difference between the last two is that the last form takes the port
number y1 into account.
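These three filtering policies can be expressed as a single decision function. In this hypothetical sketch, the NAT records the set of remote endpoints that the internal mapping X1:x1 has previously contacted.

```python
# Illustrative only: may a packet from remote (Y1, y1) be forwarded to
# the internal mapping, given the endpoints it has contacted?
def inbound_allowed(policy, contacted, remote):
    if not contacted:
        return False            # no mapping exists at all
    if policy == "endpoint-independent":
        return True
    if policy == "address-dependent":
        return any(r[0] == remote[0] for r in contacted)
    return remote in contacted  # address- and port-dependent
```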
7.3.4 Servers behind NATs
One of the most obvious problems with NATs is that a system wishing to provide
a service from behind a NAT is not directly reachable from the outside. Consider
the example in Figure 7-3 once again. If the host with the private address shown there is to provide a service to the Internet, it cannot be reached without participation from the
NAT, for at least two reasons. First, the NAT is acting as the Internet router, so it
must agree to forward the incoming traffic destined for the server. Second, and more
important, the server's private IP address is not routable through the Internet and cannot
be used to identify the server by hosts in the Internet. Instead, the external address
of the NAT must be used to find the server, and the NAT must arrange to properly
rewrite and forward the appropriate traffic to the server so that it can operate. This
process is most often called port forwarding or port mapping.
With port forwarding, incoming traffic to a NAT is forwarded to a specific
configured destination behind the NAT. By employing NAT with port forwarding, it is possible to allow servers to provide services to the Internet even though
they may be assigned private, nonroutable addresses. Port forwarding typically
requires static configuration of the NAT with the address of the server and the
associated port number whose traffic should be forwarded. The port forwarding
directive acts like an always-present static NAT mapping. If the server’s IP address
is changed, the NAT must be updated with the new addressing information. Port
forwarding also has the limitation that it has only one set of port numbers for each
of its (IP address, transport protocol) combinations. Thus, if the NAT has only a
single external IP address, it can forward only a single port of the same transport
protocol to at most one internal machine (e.g., it could not support two independent Web servers on the inside being remotely accessible using TCP port 80 from
the outside).
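As a sketch of the idea, a port-forwarding directive can be modeled as a static lookup table keyed by transport protocol and external port number. The entries, names, and addresses below are hypothetical:

```python
# Hypothetical static port-forwarding table for a NAT with a single
# external IP address. Each (protocol, external port) pair maps to at
# most one internal endpoint.

FORWARDS = {
    # (transport protocol, external port) -> (internal ip, internal port)
    ("tcp", 80):   ("", 80),   # web server
    ("tcp", 2222): ("", 22),   # ssh to a second internal host
}

def forward_target(proto, ext_port):
    """Return the internal destination for incoming traffic, or None if
    no port-forwarding directive matches (the packet is not forwarded)."""
    return FORWARDS.get((proto, ext_port))
```

Because each (protocol, external port) pair maps to at most one internal endpoint, a second internal Web server would have to be reached through a different external port number, illustrating the limitation described above.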
Hairpinning and NAT Loopback
An interesting issue arises when a client wishes to reach a server and both reside on
the same, private side of the same NAT. NATs that support this scenario implement
so-called hairpinning or NAT loopback. Referring to Figure 7-6, assume that host X1
attempts to establish a connection to host X2. If X1 knows the private addressing information, X2:x2, there is no problem because the connection can be made
directly. However, in some cases X1 knows only the public address information,
X2′:x2′. In these cases, X1 attempts to contact X2 using the NAT with destination
X2′:x2′. The hairpinning process takes place when the NAT notices the existence of
the mapping between X2′:x2′ and X2:x2 and forwards the packet to X2:x2 residing
on the private side of the NAT. At this point a question arises as to which source
address is contained in the packet heading to X2:x2—X1:x1 or X1′:x1′?
If the NAT presents the hairpinned packet to X2 with source addressing
information X1′:x1′, the NAT is said to have “external source IP address and port”
hairpinning behavior. This behavior is required for TCP NAT [RFC5382]. The justification for requiring this behavior is for applications that identify their peers using globally routable addresses. In our example, X2 may be expecting an incoming connection from X1′ (e.g., because of coordination from a third-party system).

Section 7.3 Network Address Translation (NAT)

Figure 7-6  A NAT that implements hairpinning or NAT loopback allows a client to reach a server on the same side of the NAT using the server's external IP address and port numbers. That is, X1 can reach X2:x2 using the addressing information X2′:x2′.
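The following sketch models hairpin forwarding with "external source IP address and port" behavior. The binding table, function name, and addresses are hypothetical, for illustration only:

```python
# Hypothetical NAT binding table: external (ip, port) -> internal (ip, port).
BINDINGS = {
    ("", 5000): ("", 5000),   # X2':x2' -> X2:x2
    ("", 5001): ("", 6000),   # X1':x1' -> X1:x1
}

def hairpin(src, dst):
    """Given a packet from the private side (src = X1:x1) addressed to an
    external mapping (dst = X2':x2'), return the rewritten (src, dst) pair
    delivered back onto the private side, or None if dst is not a mapping."""
    internal_dst = BINDINGS.get(dst)
    if internal_dst is None:
        return None     # not one of our mappings: route toward the Internet
    # Present the sender in its external form (X1':x1'), the behavior
    # [RFC5382] requires for TCP.
    external_src = next(
        (ext for ext, inside in BINDINGS.items() if inside == src), src)
    return external_src, internal_dst
```

In the sketch, X2 sees the connection arrive from X1′:x1′, so an application on X2 expecting a peer identified by its globally visible address works unmodified.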
NAT Editors
Together, packets using the UDP and TCP transport protocols account for most
of the IP traffic carried on the Internet. These transport protocols, by themselves,
can be supported by NAT without additional complexity because their formats are
well understood. When application-layer protocols used in conjunction with them
carry transport-layer or lower-layer information such as IP addresses, the NAT
problem becomes considerably more complicated. The most common example is
FTP [RFC0959]. In normal operation, it communicates transport- and network-layer
endpoint information (an IP address and port number) so that additional connections can be made when bulk data is to be transferred. This requires a NAT to
rewrite not only the IP addresses and port numbers in the IP and TCP portions of a
datagram, but also some of the application payload itself. NATs with this capability
are sometimes called NAT editors. If a NAT changes the size of a packet’s application payload, considerable work may be required. For example, TCP numbers
every byte in the data transfer using sequence numbers (see Chapter 15), so if the
size of a packet is changed, the sequence numbers also require modification. PPTP
[RFC2637] also requires a NAT editor for transparent operation (see Chapter 3).
Service Provider NAT (SPNAT) and Service Provider IPv6 Transition
A relatively recent development involves the idea of moving NATs from the
customer premises into the ISP. This is sometimes called service provider NAT
(SPNAT), carrier-grade NAT (CGN), or large-scale NAT (LSN) and is intended to
further mitigate the IPv4 address depletion problem. With SPNAT, it is conceivable
that many ISP customers could share a single global IPv4 address. In effect, this
moves the point of aggregation from the edge of the customer to the edge of the
ISP. In its basic form, there is no functional difference between conventional NAT
and SPNAT; the difference is really in the proposed domain of use. However, moving the NAT function from customer to ISP raises security concerns and brings
into question whether individual end users are able to deploy Internet servers
and control firewall policy [MBCB08]. A study from 2009 found that a significant
number of users accept incoming connections, largely because of peer-to-peer
programs [ANM09].
SPNAT can help with the IPv4 address depletion problem, but IPv6 is viewed
as the ultimate solution. For a number of reasons already discussed, however, IPv6
deployment has lagged expectations. Originally, a scheme known as dual-stack
(see [RFC4213]), whereby each system uses both IPv6 and IPv4 addresses, was
intended to support transition to IPv6, but this approach was anticipated to be
temporary and rendered unnecessary long before the depletion of IPv4 addresses.
An arguably more pragmatic approach is now being undertaken that combines
tunneling, address translation, and dual-stack systems in various configurations.
We’ll discuss some of these in Section 7.6 after exploring the methods that have
been developed for dealing with existing NATs.
NAT Traversal
As an alternative to the complexity of placing ALGs and NAT editors in NAT
devices, an application may attempt to perform its own NAT traversal. Usually
this involves the application trying to ascertain the external IP address and port
numbers that will be used when its traffic passes through a NAT and modifying
its protocol operations accordingly. If an application is distributed across the network (e.g., has multiple clients and servers, some of which are not behind NATs),
the servers can be used to shuttle (copy) data between the clients that connect
from behind NATs or enable such clients to discover each other’s NAT bindings
and possibly facilitate direct communication. Using a server to copy data between
clients is usually a last-resort option because of the overheads involved and potential for abuse. Consequently, most approaches attempt to provide for some method
that allows direct communication.
Direct methods have been popular for peer-to-peer file sharing, games, and
communication applications. However, such techniques are often confined to a
particular application, meaning that each new distributed application requiring
NAT traversal tends to implement its own method(s). This can lead to redundancy
and interoperability problems, ultimately increasing users’ frustration and cost.
To combat this situation, a standard approach for handling NAT traversal has
been established, and it depends on a collection of several distinct, subordinate
protocols that we discuss in the following sections. For now, we begin with one of
the more robust yet nonstandard approaches used by distributed applications. We
then move on to standardized frameworks for NAT traversal.
Section 7.4 NAT Traversal
Pinholes and Hole Punching
As discussed previously, a NAT typically includes both traffic rewriting and filtering capabilities. When a NAT mapping is established, traffic for a particular
application is usually permitted to traverse the NAT in both directions. Such mappings are narrow; they usually apply only to a single application for its duration of
execution. These types of mappings are called pinholes, because they are designed
to permit only a certain temporary traffic flow (e.g., a pair of IP address and port
number combinations). Pinholes are usually established and removed dynamically as a consequence of communication between programs.
A method that attempts to allow two or more systems, each behind a NAT,
to communicate directly using pinholes is called hole punching. It is described for
UDP in Section 3.3 of [RFC5128] and for TCP in Section 3.4. To punch a hole, a
client contacts a known server using an outgoing connection that establishes a
mapping in its local NAT. When another client contacts the same server, the server
has connections to each of the clients and knows their external addressing information. It then exchanges the client external addressing information between the
clients. Once this information is known, a client can attempt a direct connection to
the other client(s). The popular Skype peer-to-peer application uses this approach
(and some others).
Referring to Figure 7-7, assume client A contacts server S1 followed by client B.
S1 will have learned A's and B's external addressing information (their server-reflexive IPv4 addresses and port numbers). By sending B's information to A and vice
versa, A can attempt to contact B directly at its external address (and vice versa).
Whether this will work depends on the type of NATs that have been deployed.
NAT state for the (A,S1) connection lives in N1 and NAT state for (B,S1) lives in
both N2 and N3. If all NATs are endpoint-independent, this is sufficient for direct
connections to be possible. Any other type of NAT will not accept traffic from
other than S1 and will thus prohibit direct communication. Said another way, this
approach fails if both hosts are behind NATs with address-dependent or address- and port-dependent mapping behavior.
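A minimal UDP hole-punching sketch follows. The rendezvous server address is hypothetical, and the exchange format (the server replying with the peer's external "ip:port" as text) is an assumption made for illustration; real applications define their own rendezvous protocols.

```python
import socket

# Hypothetical rendezvous server: any host reachable by both clients
# that is not itself behind a NAT.
RENDEZVOUS = ("rendezvous.example.com", 9000)

def punch(sock, peer_addr):
    """Send toward the peer's external address to open a pinhole in the
    local NAT, then wait briefly for the peer's datagram to arrive."""
    sock.sendto(b"punch", peer_addr)   # creates/refreshes local NAT state
    sock.settimeout(2.0)
    try:
        _, addr = sock.recvfrom(1500)
        return addr                    # direct path appears to work
    except socket.timeout:
        return None                    # e.g., both NATs filter by address

def rendezvous_and_punch():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.sendto(b"register", RENDEZVOUS)  # server learns our reflexive address
    blob, _ = s.recvfrom(1500)         # assumed reply: peer's "ip:port"
    ip, port = blob.decode().split(":")
    return punch(s, (ip, int(port)))
```

As the text notes, this succeeds only when the NATs involved are endpoint-independent; with address-dependent (or stricter) filtering, the peers' datagrams are discarded and `punch` times out.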
UNilateral Self-Address Fixing (UNSAF)
Applications employ a number of methods to determine the addresses their
traffic will use when passed through a NAT. This is called fixing (learning and
maintaining) the addressing information. There are indirect and direct methods
for address fixing. The indirect methods involve inferring a NAT's behavior by
exchanging traffic through the NAT. The direct methods involve a direct conversation between the application and the NAT itself using one or more special protocols (that are not currently IETF standards). Considerable effort within IETF has
gone into development of the indirect methods, and they are widely supported in
certain applications, with VoIP applications being the most popular. Some of the
direct methods are now supported by some NATs. These methods also provide for
Figure 7-7 Applications running on clients behind a NAT may require help from a server to engage
in direct communication. In hole punching, a server, often specialized for a specific
application, provides rendezvous information among clients that first establish NAT
state and then perform direct communication, if possible. Some applications attempt to
“fix” (determine and maintain) the addresses (and port numbers) their traffic will be
assigned when passing through a NAT using standard generic protocols. These methods
may encounter troubles in certain situations such as environments with multiple levels
of NAT. In this example, client A and client B each have an external address visible at S1. At S2, however, B's external address differs from the one seen at S1, because B sits behind two levels of NAT.
basic configuration of NATs, so we discuss them later in the context of NAT setup
and configuration.
An application attempting to fix its address without help from the NAT performs the address fixing in a so-called unilateral fashion. Applications that do
so are said to perform UNilateral Self-Address Fixing (UNSAF) [RFC3424]. As the
name suggests, such methods are considered to be undesirable in the long run
but a necessary evil for the time being. UNSAF involves a set of heuristics and
is not guaranteed to work in all cases, especially because NAT behaviors vary
significantly based on vendor and particular circumstance. The BEHAVE documents mentioned earlier are aimed at specifying more consistent NAT behavior. If
widely adopted, UNSAF methods will work more reliably.
In most cases of interest, UNSAF methods operate in a client/server fashion
similar to hole punching, but with added generality. Figure 7-7 illustrates some
of the hazards that can arise in this situation. One issue is the lack of a single
“outside” address realm for every NAT. In this example, there are two levels of
NAT between client B and server S1. This situation can cause complications. For
example, if an application on B wishes to obtain its “outside” address by using
UNSAF with a server, it receives different answers depending on whether it contacts server S1 or S2. Finally, because UNSAF uses servers that are distinct from
the NATs, there is always the possibility that the NAT behavior reported will
change over time or become inconsistent with what the UNSAF approach reports.
Given the various problems with NATs and UNSAF, the IAB, an elected group
of architectural advisers within the IETF, has indicated that UNSAF protocol proposals must include responses to the concerns in their specifications:
1. Define a limited-scope problem that the “short-term” UNSAF proposal addresses.
2. Define an exit strategy/transition plan.
3. Discuss what design decisions make the approach “brittle.”
4. Identify requirements for longer-term, sound technical solutions.
5. Discuss any noted practical issues or experiences known.
This is an unusual list of requirements imposed on a protocol specification,
but it results from long-standing interoperability problems between different
NATs and NAT traversal techniques. Despite all the aforementioned problems,
UNSAF methods are commonly used, partly because a wide range of NATs are
found in operation today with little consistent behavior. We now look at how these
methods are used as building blocks to form robust, general-purpose NAT traversal techniques to maximize the chances that communication among systems
behind NATs, even between systems across multiple NATs such as the one illustrated in Figure 7-7, will be possible.
Session Traversal Utilities for NAT (STUN)
One of the primary workhorses for UNSAF and NAT traversal is called Session
Traversal Utilities for NAT (STUN) [RFC5389]. STUN has evolved from a previous version called Simple Traversal of UDP through NATs, now known as “classic
STUN.” Classic STUN has been used with VoIP/SIP applications for some time
but has been revised to be a tool that can be used by other protocols for performing NAT traversal. Applications requiring a complete solution for NAT traversal
are recommended to begin with other mechanisms we discuss in Section 7.4.5
(e.g., ICE and SIP-Outbound). These frameworks may make use of STUN in one
or more particular ways called STUN usages. Usages may extend the set of basic
STUN operations, message types, or error codes defined in [RFC5389].
STUN is a relatively simple client/server protocol that is able to ascertain
the external IP address and port numbers being used on a NAT in most circumstances. It can also keep NAT bindings current by using keepalive messages. It
requires a cooperating server on the “other” side of a NAT to be effective, and several public STUN servers are configured with globally reachable IP addresses and
are available for use on the Internet. The main job of a STUN server is to echo back
STUN requests sent to it in a way that allows the client addressing information to
be fixed. As with UNSAF methods in general, the approach is not foolproof. However, the attraction of STUN is that it does not require modification of network
routers, application protocols, or servers. It requires only that clients implement
the STUN request protocol, and that at least one STUN server be available in an
appropriate location. STUN was envisioned as a “temporary” measure (as were
many standard protocols now in widespread use a decade or more after their creation) until a more sophisticated direct protocol was developed and implemented,
or NATs became obsolete because of the adoption of IPv6.
STUN operates using UDP, TCP, or TCP with Transport Layer Security (TLS; see
Chapter 18). STUN usage specifications define which transport protocols are supported for the particular usage. It uses port 3478 for UDP and TCP, and 5349 for
TCP/TLS. The STUN base protocol has two types of transactions: request/response
transactions and indication transactions. Indications do not require a response and
can be generated by either the client or the server. All messages include a type,
length, magic cookie with value 0x2112A442, and a random 96-bit transaction ID
used for matching requests with responses or for debugging. Each message begins
with two 0 bits and may contain zero or more attributes. STUN message types are
defined in the context of methods that support a particular STUN usage. The various STUN parameters, including method and attribute numbers, are maintained
by the IANA [ISP]. Attributes have their own types and can vary in length. The
basic STUN header, most often located immediately following a UDP transport
header in an IP packet, is shown in Figure 7-8.
The basic STUN header is 20 bytes in length (see Figure 7-8), and the Message Length field provides for an entire STUN message length of 2^16 - 1 bytes (the
20-byte header length is not included in the Message Length field), although messages are always padded to a multiple of 4 bytes so this field always has its 2
low-order bits set to 0. STUN messages sent over UDP/IP are supposed to form IP
datagrams less than the path MTU, if known, to avoid fragmentation (see Chapter
10). If not known, the entire datagram length (including IP and UDP headers and
any options) should be less than 576 bytes (IPv4) or 1280 bytes (IPv6). STUN has
no provision for cases where a response might exceed the path MTU in the reverse
direction, so servers should arrange to use messages of appropriate size.
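The 20-byte header just described can be produced with a few lines of code. This sketch builds only the header for a binding request with no attributes (message type 0x0001, length 0, the magic cookie, and a random 96-bit transaction ID); it is not a full STUN implementation.

```python
import os
import struct

MAGIC_COOKIE = 0x2112A442

def binding_request(txid=None):
    """Build a 20-byte STUN binding request carrying no attributes."""
    txid = txid if txid is not None else os.urandom(12)  # random 96-bit ID
    # Message Type 0x0001 (binding request) begins with two 0 bits;
    # Message Length is 0 because no attributes follow the header.
    return struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txid
```

Sending the resulting datagram to a STUN server's UDP port 3478 should elicit a binding response.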
STUN messages carried over UDP/IP are not reliable, so STUN applications
are required to implement their own reliability. This is accomplished by resending messages thought to be lost. The retransmission interval is based on the estimated time to send and receive a message from the peer called the round-trip time
(RTT). RTT computation and setting retransmission timers will be a major consideration when we discuss TCP (see Chapter 14). STUN uses a similar approach,
but with minor modifications to the standard TCP values. See [RFC5389] for more
details. Reliability issues for STUN over TCP/IP or TCP-with-TLS/IP are handled
by TCP. Multiple pending STUN transactions can be supported over TCP-based STUN.
STUN attributes are encoded in a TLV (type, length, value) arrangement, a technique used by several other Internet protocols. The type and length portions of a TLV are each 16
Figure 7-8
STUN messages always begin with two 0 bits and are usually encapsulated in UDP, although
TCP is also allowed. The Message Type field gives both the method (e.g., binding) as well as class
(request, response, error, or success). The Transaction ID is a random 96-bit number used to match
requests with responses, or for debugging in the case of indications. Each STUN message can hold
zero or more attributes, depending on the particular usage of STUN.
bits, and the value portion is variable-length (up to 64KB, if supported), but padded to the next multiple of 4 bytes (padding bits may be any value). The same
attribute type may appear more than once in the same STUN message, although
only the first is necessarily processed by a receiver. Attributes with type numbers
below 0x8000 are called comprehension-required attributes, and the others are called
comprehension-optional attributes. If a STUN agent receives a message containing
comprehension-required attributes it does not know how to process, it generates
an error. Most of the attributes defined to date are comprehension-required [ISP].
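The TLV walk can be sketched as follows. This is a minimal illustration rather than a complete STUN parser, and the function names are illustrative:

```python
import struct

def attributes(body):
    """Yield (type, value) pairs from the attribute region of a STUN
    message (everything after the 20-byte header)."""
    off = 0
    while off + 4 <= len(body):
        atype, alen = struct.unpack_from("!HH", body, off)
        value = body[off + 4:off + 4 + alen]
        yield atype, value
        off += 4 + alen + (-alen % 4)   # skip value plus pad to 4 bytes

def comprehension_required(atype):
    # Type numbers below 0x8000 must be understood by the receiver.
    return atype < 0x8000
```

Note how a 10-byte value is skipped as 12 bytes because of the pad to the next 4-byte boundary, while the length field itself still records 10.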
[RFC5389] defines a single STUN method called binding, which can be used in
either request/response or indication transactions for address fixing and keeping
NAT bindings current. It also defines 11 attributes, given in Table 7-2.
Table 7-2  STUN attributes. STUN, defined in [RFC5389] and sometimes called STUN2, replaces classic STUN. These 11 attributes may be used by a STUN2-compliant client or server. (The attribute names are those assigned in [RFC5389].)

MAPPED-ADDRESS        Contains an address family indicator and the reflexive transport address (IPv4 or IPv6)
USERNAME              User name and password; used for message integrity checks (up to 513 bytes)
MESSAGE-INTEGRITY     Message authentication code value on the STUN message (see Chapter 18 and [RFC5389])
ERROR-CODE            Contains 3-bit error class, 8-bit error code value, and variable-length textual description of error
UNKNOWN-ATTRIBUTES    Used with error messages to indicate the unknown attributes (one 16-bit value per attribute)
REALM                 Indicates the authentication “realm” name for long-term credentials
NONCE                 Nonrepeated value optionally carried in requests and responses (see Chapter 18) to prevent replay
SOFTWARE              Textual description of the software that sent the message (e.g., manufacturer and version number)
ALTERNATE-SERVER      Provides an alternate IP address for a client to use; encoded as with MAPPED-ADDRESS
XOR-MAPPED-ADDRESS    As MAPPED-ADDRESS, but XORed with the magic cookie (IPv4) or the magic cookie concatenated with the transaction ID (IPv6)
FINGERPRINT           CRC-32 of message XORed with 0x5354554E; must be last attribute if used (optional)
Referring to Figure 7-5, a STUN client with addressing information X1:x1 is
often interested in determining X1′:x1′, called the reflexive transport address or
mapped address. A STUN server at Y1:y1 includes the reflexive transport address
in a MAPPED-ADDRESS attribute in a STUN message returned to the client. The
MAPPED-ADDRESS attribute holds an 8-bit Address Family field, a 16-bit Port
Number field, and either a 32-bit or 128-bit Address field, depending on whether
IPv4 or IPv6 is indicated by the Address Family field (0x01 for IPv4; 0x02 for IPv6).
This attribute is included to remain backward-compatible with classic STUN. The
more important attribute is the XOR-MAPPED-ADDRESS attribute, which holds
exactly the same value as the MAPPED-ADDRESS attribute, but XORed with the
magic cookie value (for IPv4) or a concatenation with the magic cookie and transaction ID values (for IPv6). The reason for using XORed values in this way is to
detect and bypass generic ALGs that look through packets and rewrite whatever
IP addresses they find. Such ALGs are very brittle because they may rewrite information that protocols such as STUN require. Experience has shown that XORing
IP addresses in the packet payload is usually sufficient to bypass such ALGs.
A STUN client, including most VoIP devices and “softphone” applications such
as pjsua [PJSUA], is initially configured with the IP address(es) or names of one or
more STUN servers. It is desirable to use STUN servers that are likely to “see” the
same IP addresses as the peer to which the application ultimately wishes to talk,
although that may be difficult to determine. Using STUN servers located on the
public Internet is usually adequate. Some servers may be discovered using DNS Service (SRV)
records (see Chapter 11). An example STUN binding request is given in Figure 7-9.
Figure 7-9 A STUN binding request. The request contains a 96-bit transaction ID and the SOFTWARE attribute that identifies the client making the request. The attribute contains 10
characters, but this value is rounded up to the next multiple of 4, giving an attribute value length of 12. The message length of 16 also includes the 4 bytes used to hold the attribute's type and length (the STUN header is not included).
The sample STUN binding request in Figure 7-9 is initiated from a client.
The transaction ID has been selected randomly, and the request is sent to the numb server, which is
both a STUN and a TURN server (see Section 7.4.4). The request contains the
SOFTWARE attribute that identifies the client application. In this case, the request
was initiated by pjnath-1.6. This is the “PJSIP NAT helper” application included
with pjsua. The message length includes 4 bytes for the attribute type and length,
plus 12 bytes used to hold the attribute. The length of pjnath-1.6 is only 10 bytes,
but attribute lengths are always rounded up to the nearest 4-byte multiple. After
passing through a NAT, the response is given as shown in Figure 7-10.
Figure 7-10
A STUN binding response containing four attributes. The MAPPED-ADDRESS and XORMAPPED-ADDRESS attributes contain the server-reflexive addressing information. The other
attributes are used with an experimental NAT behavior discovery mechanism [RFC5780].
The binding response shown in Figure 7-10 gives useful information to the client,
encoded as a collection of attributes. The MAPPED-ADDRESS and XOR-MAPPED-ADDRESS attributes indicate the server-reflexive addressing information the STUN server determined for the client. Two additional attributes are used by an experimental facility for discovering NAT behavior [RFC5780]. The first gives the communication endpoint used to send the STUN message (which matches the sending IPv4 address and UDP port number). The second attribute indicates which source IPv4 address and port number would have been used if the client requested “change address” or “change port” behavior. This latter attribute is equivalent to the now-deprecated CHANGED-ADDRESS attribute in classic STUN. If a change address or
port is specified in a request, a cooperating STUN server attempts to use a different
address when responding to the client, if possible.
STUN can be used to perform address fixing as well as a number of other
functions called mechanisms, including DNS discovery, a method to redirect to an
alternate server, and message integrity exchanges. Mechanisms are selected in
the context of a particular STUN usage, so in general they are considered optional
STUN features. One of the more important mechanisms provides authentication
and message integrity. It has two forms: the short-term credential mechanism and the
long-term credential mechanism.
Short-term credentials are intended to last for a single session; the particular
duration is defined by the STUN usage. Long-term credentials last across sessions;
they correspond to a login ID or account. Short-term credentials are often used
in particular message exchanges, and long-term credentials are used when some
particular resource is to be allocated (e.g., with TURN; see Section 7.4.4). Passwords are never sent in the clear where they could be intercepted.
The short-term credential mechanism uses the USERNAME and MESSAGE-INTEGRITY attributes. Both are required on any request. The USERNAME gives
an indication of which credentials are required and allows the message sender to
use the appropriate shared password in forming an integrity check on the message (a MAC computed on the message contents; see Chapter 18). When using
short-term credentials, it is assumed that some form of credential information
(e.g., user name and password) has been exchanged earlier. The credential is used
for forming an integrity check on STUN messages that is encoded in the MESSAGE-INTEGRITY attribute. The ability to form a valid MESSAGE-INTEGRITY
attribute value is an indication that the sender holds a current (“fresh”) copy of the
appropriate credential.
The long-term credential mechanism ensures freshness in a different way,
using a digest challenge. When using this mechanism, a client initially makes a
request without any authentication material. The server rejects the request but provides a REALM attribute in response. This can be used by the client to determine
which credential is needed to provide adequate authentication, as the client may
have credentials for various services (e.g., multiple VoIP accounts). Along with the
REALM, the server provides a never-reused NONCE value, which the client uses in
forming a subsequent request. This mechanism also uses a MESSAGE-INTEGRITY
attribute, but its integrity function is computed by including the NONCE value.
Thus, it is difficult for an eavesdropper that overheard a previous long-term credential exchange to simply replay a validated request (i.e., because the NONCE value
is different). The use of NONCE values in authentication and related concerns are
discussed in more detail in Chapter 18. The long-term credential mechanism cannot be used to protect STUN indications, as these transactions do not operate as
request/response pairs.
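The two keying arrangements can be sketched as follows. The helper names are illustrative, and SASLprep processing of the password is omitted for brevity:

```python
import hashlib
import hmac

def short_term_key(password):
    # Short-term credentials: the shared password keys the MAC directly.
    return password.encode()

def long_term_key(username, realm, password):
    # Long-term credentials: MD5 of "username:realm:password" [RFC5389].
    # (SASLprep processing of the password is omitted in this sketch.)
    return hashlib.md5(f"{username}:{realm}:{password}".encode()).digest()

def message_integrity(key, message_so_far):
    # HMAC-SHA1 over the portion of the STUN message preceding the
    # MESSAGE-INTEGRITY attribute (with the length field already adjusted).
    return hmac.new(key, message_so_far, hashlib.sha1).digest()
```

Because the long-term key folds in the REALM, a client holding accounts in several realms derives a distinct key for each, and the server-supplied NONCE that is mixed into the protected message prevents replay of an old, validated request.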
Traversal Using Relays around NAT (TURN)
Traversal Using Relays around NAT (TURN) [RFC5766] provides a way for two or
more systems to communicate even if they are located behind relatively uncooperative NATs. As a last-resort method to support communication in such circumstances, it involves a relay server that shuttles data between systems that could
otherwise not communicate. Using extensions to STUN and some TURN-specific
messages, it supports communication even when most other approaches have
failed, provided a common server that is not behind a NAT can be reached by each
client. If all NATs were compliant with the BEHAVE specifications, TURN would
not be necessary. Direct communication methods (i.e., that do not use TURN) are
almost always preferable to using TURN servers.
Referring to Figure 7-11, a TURN client behind a NAT contacts a TURN server,
usually on the public Internet, and indicates the other systems (called peers) with
which it wishes to communicate. Finding the server’s address and the appropriate
protocol to use for communication is accomplished using a special DNS NAPTR
record (see Chapter 11 and [RFC5928]) or by manual configuration. The client
obtains address and port information, called the relayed transport address, from the
server, which are the address and port number used by the TURN server to communicate with the peers. The client also obtains its own server-reflexive transport
address. Peers also have server-reflexive transport addresses that represent their
external addresses. These addresses are needed by the client and server to perform
the “plumbing” necessary to interconnect the client and its peers. The method
used to exchange this addressing information is not defined within the scope of
TURN. Instead, this information must be exchanged using some other mechanism
(e.g., ICE; see Section 7.4.5) in order for TURN servers to be used effectively.
The client uses TURN commands to create and maintain allocations on the
server. An allocation resembles a multiway NAT binding and includes the (unique)
relayed transport address that each peer can use to reach the client. Server/peer
data is sent using straightforward TURN messages traditionally carried in UDP/
IPv4. Enhancements support TCP [RFC6062] and IPv6 (and also relaying between
IPv4 and IPv6) [RFC6156]. Server/client data is encapsulated with an indication
of corresponding peer(s) that sent or should receive the associated data. The client/server connection has been specified for UDP/IPv4, TCP/IPv4, and TCP/IPv4
Figure 7-11
Based on [RFC5766], a TURN server helps clients behind “bad” NATs to communicate by relaying traffic. Traffic flowing between client and server may use TCP, UDP, or TCP with TLS. Traffic
between the server and one or more peers uses UDP. Relaying is a last-resort measure for communication; direct methods are preferred if available.
with TLS. Establishing an allocation requires the client to be authenticated, usually using the STUN long-term credential mechanism.
TURN supports two methods for copying data between a client and its peers.
The first encodes data using STUN methods called Send and Data, defined in
[RFC5766], which are STUN indications and therefore not authenticated. The other
uses a TURN-specific concept called channels. Channels are communication paths
between a client and a peer that have less overhead than the Send and Data methods. Messages carried over channels use a smaller, 4-byte header that is incompatible with the larger STUN-formatted messages ordinarily used by TURN. Up to
16K channels can be associated with an allocation. Channels were developed to
help some applications such as VoIP that prefer to use relatively small packets to
reduce latency and overhead.
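The channel framing described above is simple enough to sketch. The following illustrative Python fragment (not from the book; the helper name is invented) builds a ChannelData message with its 4-byte header; channel numbers are restricted to the range 0x4000-0x7FFF by [RFC5766] so that the first byte can never be confused with that of a STUN-formatted message:

```python
import struct

def make_channel_data(channel_number: int, data: bytes) -> bytes:
    """Build a TURN ChannelData message: a 2-byte channel number and a
    2-byte length, followed by the application data."""
    if not 0x4000 <= channel_number <= 0x7FFF:
        raise ValueError("channel numbers must be in 0x4000-0x7FFF")
    header = struct.pack("!HH", channel_number, len(data))
    return header + data

# A 10-byte payload costs only 4 bytes of overhead, versus the larger
# STUN header plus XOR-PEER-ADDRESS and DATA attributes of a Send/Data
# indication.
msg = make_channel_data(0x4000, b"hello peer")
```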
In operation, the client makes a request to obtain an allocation using a TURN-defined STUN Allocate method. If successful, the server responds with a success
indicator and the allocated relayed transport address. A request might be denied
if the client fails to provide adequate authentication to the server. The client must
now send refresh messages to keep the allocation alive. Allocations expire in 10
minutes if not refreshed, unless the client included an alternate lifetime value,
encoded as a STUN LIFETIME attribute, in the allocation request. Allocations
may be deleted by requesting an allocation with zero lifetime. When an allocation
expires, so do all of its associated channels.
Firewalls and Network Address Translation (NAT)
Allocations are represented using a “5-tuple.” At the client, the 5-tuple includes
the client’s host transport address and port number, server transport address and
port number, and the transport protocol used to communicate with the server. At
the server, the same 5-tuple is used, except the client’s host transport address and
port are replaced with its server-reflexive address and port. An allocation may
have zero or more associated permissions, to limit the patterns of connectivity that
are permitted through the TURN server. Each permission includes an IP address
restriction such that only packets with the matching source address received at
the TURN server have their data payloads forwarded to the corresponding client.
Permissions are deleted if not refreshed within 5 minutes.
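The allocation and permission state just described can be modeled roughly as follows. This is an illustrative Python sketch, not an excerpt of any TURN server; the class and method names are invented for the example:

```python
import time

PERMISSION_LIFETIME = 300  # permissions expire after 5 minutes

class Allocation:
    """Toy model of a TURN server allocation: keyed by the 5-tuple,
    carrying the relayed transport address and per-IP permissions."""

    def __init__(self, five_tuple, relayed_addr):
        # (client IP, client port, server IP, server port, protocol);
        # at the server, the client entries are the server-reflexive ones
        self.five_tuple = five_tuple
        self.relayed_addr = relayed_addr   # (IP, port) used toward peers
        self.permissions = {}              # peer IP -> expiry time

    def add_permission(self, peer_ip, now=None):
        now = time.time() if now is None else now
        self.permissions[peer_ip] = now + PERMISSION_LIFETIME

    def permits(self, peer_ip, now=None):
        """Only packets whose source IP holds a live permission have
        their payloads forwarded to the client."""
        now = time.time() if now is None else now
        return self.permissions.get(peer_ip, 0) > now

alloc = Allocation(("203.0.113.5", 40000, "198.51.100.9", 3478, "UDP"),
                   ("198.51.100.9", 49152))
alloc.add_permission("192.0.2.20")
```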
TURN enhances STUN with six methods, nine attributes, and six error response
codes. These can be partitioned roughly into support for establishing and maintaining allocations, authentication, and manipulating channels. The six methods and
their method numbers are as follows: Allocate (3), Refresh (4), Send (6), Data (7),
CreatePermission (8), and ChannelBind (9). The first two establish and keep allocations alive. Send and Data use STUN messages to encapsulate data from client
to server and vice versa, respectively. CreatePermission establishes or refreshes a
permission, and ChannelBind associates a particular peer with a 16-bit channel
number. The error messages indicate problems with TURN features such as authentication failure or running out of resources (e.g., channel numbers). The nine STUN
attribute names, values, and purposes defined by TURN are given in Table 7-3.
Table 7-3
STUN attributes defined by TURN
Name                   Value   Purpose
CHANNEL-NUMBER         0x000C  Indicates what channel associated data belongs to
LIFETIME               0x000D  Requested allocation timeout (seconds)
XOR-PEER-ADDRESS       0x0012  A peer’s address and port, using XORed encoding
DATA                   0x0013  Holds data for a Send or Data indication
XOR-RELAYED-ADDRESS    0x0016  Server’s address and port allocated for a client
EVEN-PORT              0x0018  Requests that the relayed transport addressing
                               information use an even port; optionally requests
                               allocation of the next port in sequence
REQUESTED-TRANSPORT    0x0019  Used in a client to request that a specific transport
                               be used in forming the transport address; values are
                               drawn from the IPv4 Protocol or IPv6 Next Header
                               field values
DONT-FRAGMENT          0x001A  Requests that the server set the “don’t fragment” bit in
                               the IPv4 header in packets sent to peers
RESERVATION-TOKEN      0x0022  Unique identifier for a relayed transport address held
                               by the server; the value is provided to the client as a
                               token
Figure 7-12 A TURN allocation request is a STUN message using message type 0x0003. This request
also includes the REQUESTED-TRANSPORT and SOFTWARE attributes. It does not
include authentication information. According to STUN long-term credentials, this
request will fail.
A TURN request takes the form of a STUN message whose message type is
an allocation request. Figure 7-12 shows an example. According to the STUN long-term credential mechanism, the initial allocation request shown in Figure 7-12 did
not include authentication information, so it is rejected by the server. The rejection
is indicated by an allocation error response, shown in Figure 7-13.
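The layout of such a request can be sketched in Python. This illustrative fragment (the message type and attribute constants come from [RFC5389] and [RFC5766]; the helper name is invented) builds an unauthenticated Allocate request like the one in Figure 7-12, carrying a REQUESTED-TRANSPORT attribute asking for UDP relaying:

```python
import os
import struct

STUN_MAGIC_COOKIE = 0x2112A442
ALLOCATE_REQUEST = 0x0003      # message type seen in Figure 7-12
REQUESTED_TRANSPORT = 0x0019   # TURN attribute type
UDP_PROTOCOL_NUMBER = 17

def make_allocate_request() -> bytes:
    """Sketch of a minimal, unauthenticated TURN Allocate request."""
    # REQUESTED-TRANSPORT value: protocol number, then 3 reserved bytes
    attr_value = struct.pack("!B3x", UDP_PROTOCOL_NUMBER)
    attr = struct.pack("!HH", REQUESTED_TRANSPORT, len(attr_value)) + attr_value
    txn_id = os.urandom(12)  # 96-bit transaction ID
    # STUN header: type, length of attributes, magic cookie
    header = struct.pack("!HHI", ALLOCATE_REQUEST, len(attr), STUN_MAGIC_COOKIE)
    return header + txn_id + attr

req = make_allocate_request()
```

As the text notes, a server enforcing long-term credentials rejects such a request, returning the REALM and NONCE the client needs for its authenticated retry.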
The error message in Figure 7-13 provides the REALM attribute (viagenie.
ca) and the NONCE value the client requires to form its next request. The message also includes the MESSAGE-INTEGRITY attribute so the client can check that
the message has not been modified and the requested REALM and NONCE are
correct. A subsequent request includes the USERNAME, NONCE, and MESSAGE-INTEGRITY attributes. See Figure 7-14.
After receiving the request including long-term credentials, as shown in Figure 7-14, the server computes its own version of the message integrity value and
compares the result against the MESSAGE-INTEGRITY attribute value. If they
match, this is sufficient information for the TURN server to conclude that the client must hold the appropriate password. It then permits the allocation and indicates the result to the client (see Figure 7-15).
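The integrity check can be sketched as follows. Per [RFC5389], the long-term credential key is the MD5 hash of username:realm:password, and MESSAGE-INTEGRITY is an HMAC-SHA1 computed over the message; this illustrative Python sketch omits SASLprep of the password and the length-adjustment details, and the username and password values are invented:

```python
import hashlib
import hmac

def long_term_key(username: str, realm: str, password: str) -> bytes:
    """STUN long-term credential key: MD5('username:realm:password')."""
    joined = ":".join([username, realm, password]).encode("utf-8")
    return hashlib.md5(joined).digest()

def message_integrity(key: bytes, stun_message: bytes) -> bytes:
    """HMAC-SHA1 over the message up to (but not including) the
    MESSAGE-INTEGRITY attribute itself."""
    return hmac.new(key, stun_message, hashlib.sha1).digest()

# The server recomputes this value and compares it against the received
# MESSAGE-INTEGRITY attribute; a match implies the client knows the
# password without the password ever crossing the network.
key = long_term_key("toto", "viagenie.ca", "secret")  # illustrative credentials
sample = b"\x00\x03" + b"\x00" * 18                   # stand-in for a real STUN message
mac = message_integrity(key, sample)
```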
Figure 7-13
A TURN allocation error response includes the ERROR-CODE attribute with value
401 (Unauthorized). The message is integrity-protected and includes the REALM and
NONCE attributes required by the client in forming another, authenticated allocation request.
The allocation request is successful, as shown in Figure 7-15, and the server returns the relayed transport address (note that Wireshark performs the XOR operation to display the decoded address). At this point, the client can proceed
to use the TURN server for relaying to peers. Once this is finished, the allocation
can be removed. About 4s later, packets 5 and 6 in Figure 7-15 indicate the client’s
request to remove the allocation. The request is expressed as a refresh with lifetime set to 0. The server responds with a success indicator and removes the allocation. Note that the BANDWIDTH attribute has been included in the allocation and
refresh success indicators. This attribute, defined by a draft version of [RFC5766]
but ultimately deprecated, was intended to hold the peak bandwidth, in kilobytes
per second, permitted on the allocation. This attribute may be redefined in the future.
Figure 7-14
A second TURN allocation request includes the USERNAME, REALM, NONCE, and
MESSAGE-INTEGRITY attributes. These are used by the server to verify integrity of the
message and the identity of the client. If successful, the server authenticates the request
and performs the allocation.
As suggested previously, TURN has the disadvantage that traffic must be
relayed through the TURN server, and this can lead to inefficient routing (i.e.,
the TURN server may be far away from a client and peer that are proximal). In
addition, certain other traffic contents are not passed through from peer to client
using TURN. This includes ICMP values (see Chapter 8), TTL (Hop Limit) field
values, and IP DS Field values. Also, a requesting TURN client must implement the
STUN long-term credential mechanism and have some form of login credential or
account assigned by the TURN server operator. This helps to avoid uncontrolled
use of open TURN servers but creates somewhat greater configuration complexity.
Figure 7-15 A TURN allocation success response. The message is integrity-protected and includes
the XOR-RELAYED-ADDRESS attribute, identifying the port and address allocated by
the TURN server. The allocation is deleted if not refreshed.
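The XOR decoding that Wireshark performs on such addresses is mechanical: per [RFC5389], the encoded port is XORed with the most significant 16 bits of the STUN magic cookie, and an IPv4 address is XORed with the full cookie. A minimal sketch (handling IPv4 only; the function name is invented):

```python
import ipaddress
import struct

STUN_MAGIC_COOKIE = 0x2112A442

def decode_xor_address(xor_value: bytes) -> tuple:
    """Decode the value of an IPv4 XOR-MAPPED-ADDRESS or
    XOR-RELAYED-ADDRESS attribute into (address string, port)."""
    # value layout: 1 reserved byte, 1-byte family, 2-byte XORed port
    family, xport = struct.unpack("!xBH", xor_value[:4])
    assert family == 0x01, "this sketch handles IPv4 only"
    port = xport ^ (STUN_MAGIC_COOKIE >> 16)
    xaddr, = struct.unpack("!I", xor_value[4:8])
    addr = ipaddress.IPv4Address(xaddr ^ STUN_MAGIC_COOKIE)
    return str(addr), port
```

The XOR encoding exists because some NATs rewrite addresses they find in packet payloads; obscuring the address prevents such rewriting.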
Interactive Connectivity Establishment (ICE)
Given the large variety of NATs deployed and the various mechanisms that may
be necessary to traverse them, a generic facility called Interactive Connectivity
Establishment (ICE) [RFC5245] has been developed to help UDP-based applications
hosted behind a NAT establish connectivity. ICE is a set of heuristics by which an
application can perform UNSAF in a relatively predictable fashion. In its operation, ICE makes use of other protocols such as TURN and STUN. A proposal
extends the use of ICE to TCP-based applications [IDTI].
ICE works with and extends “offer/answer” protocols, such as the Session
Description Protocol (SDP) used with unicast SIP connection establishment
[RFC3264]. These protocols involve an offer of service with an accompanying set
of service parameters followed by an answer that also includes a set of selected
options. It is increasingly common to find ICE clients incorporated into VoIP
applications that use SDP/SIP for establishing communications. However, in
such circumstances, ICE is used for establishing NAT traversal for media streams
(such as the audio or video portion of a call carried using RTP [RFC3550] or SRTP
[RFC3711]), while another mechanism, called SIP Outbound [RFC5626], handles
the SIP signaling information such as who is being called. Although in practice
ICE has been used primarily with SIP/SDP-based applications, it can also be used
as a generic NAT traversal mechanism for other applications. One such example
is the use of ICE (over UDP) with Jingle [XEP-0176], defined as an extension to the
core Extensible Messaging and Presence Protocol (XMPP) [RFC6120].
Ordinarily, ICE works to establish communication between two SDP entities
(called agents) by first determining a set of candidate transport addresses that each
agent might use for communicating with the other. Referring to Figure 7-11, these
addresses could be host transport, server-reflexive, or relayed addresses. ICE
may make use of both STUN and TURN to determine the candidate transport
addresses. ICE then orders these addresses according to a priority assignment
algorithm. The algorithm arranges for addresses that provide direct connectivity
to receive greater priority than those that require data relaying. ICE then provides
the set of prioritized addresses to its peer agent, which engages in a similar behavior. Ultimately, two agents agree on the best set of usable address pairs and indicate
the selected results to the other peer. Determination of which candidate transport
addresses are available is accomplished using a sequence of checks encoded as
STUN messages. ICE has several optimizations to decrease the latency of agreeing
on the selected candidate, which are beyond the scope of this discussion.
ICE begins by attempting to discover all available candidate addresses.
Addresses may be locally assigned transport addresses (multiple if the agent is
multihomed), server-reflexive addresses, or relayed addresses determined by
TURN. After assigning each address a priority, an agent sends the prioritized list
to its peer using SDP. The peer performs the same operation, resulting in each
agent having two prioritized lists. Each agent then forms an identical set of prioritized candidate pairs by pairing up the two lists. A set of checks are performed on
the candidate pairs in a particular order to determine which addresses will ultimately be selected. Generally, the priority ordering prefers candidate pairs with
fewer NATs or relays. The candidate pair ultimately selected is determined by a
controlling agent assigned by ICE. The controlling agent nominates which valid candidate pairs are to be used, according to its order of preference. The controlling
agent may try all pairs and subsequently make its choice (called regular nomination) or may use the first viable pair (called aggressive nomination). A nomination
is expressed as a flag in a STUN message referring to a particular pair; aggressive
nomination is performed by setting the nominate flag in every request.
Checks are sent as STUN binding request messages exchanged between the
two agents using the addressing information being checked. Checks are initiated
by timer, or scheduled as a result of an incoming check from a peer (called a triggered check). Responses arrive in the form of STUN binding responses that contain
addressing information. In some circumstances this may reveal a new serverreflexive address to the agent (e.g., because a different NAT is used between agents
from the one that was used when the candidate addresses were first determined
using STUN or TURN servers). Should this happen, the agent gains a new address
called a peer-reflexive candidate, which ICE adds to the set of candidate addresses.
ICE checks are integrity-checked using STUN’s short-term credential mechanism
and use the STUN FINGERPRINT attribute. When TURN is used, the ICE client uses TURN permissions to limit the TURN binding to the remote candidate
address of interest.
ICE incorporates the concept of different implementations. Lite implementations are designed for deployment in systems that do not employ NAT. They do
not ever act as a controlling agent unless interacting with another Lite implementation. They also do not perform the checks mentioned earlier as do full implementations. The type of an ICE implementation is indicated in the STUN messages
it sends. All ICE implementations must comply with STUN [RFC5389], but Lite
implementations will only ever act as STUN servers. ICE extends STUN with the
attributes described in Table 7-4.
Table 7-4
STUN attributes defined by ICE
Name             Value   Purpose
PRIORITY         0x0024  Computed priority of associated candidate address
USE-CANDIDATE    0x0025  Indicates selection of candidate by controlling agent
ICE-CONTROLLED   0x8029  Indicates sender of message is controlled agent
ICE-CONTROLLING  0x802A  Indicates sender of message is controlling agent
A check is a STUN binding request containing the PRIORITY attribute. The
value is equal to the value assigned by the algorithm described in Section 4.1.2
of [RFC5245]. The ICE-CONTROLLING and ICE-CONTROLLED attributes are
included in STUN requests when the sender is the controlling or controlled agent,
respectively. A controlling agent may also include a USE-CANDIDATE attribute.
If present, this attribute indicates which candidate the controlling agent wishes to
select for subsequent use.
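The candidate priority computation from Section 4.1.2 of [RFC5245], and the pair-priority formula from its Section 5.7.2, can be sketched as follows; the type-preference values are the RFC's recommended ones:

```python
# Recommended type preferences from [RFC5245]: host candidates sort
# ahead of peer-reflexive, server-reflexive, and relayed candidates.
TYPE_PREFERENCE = {"host": 126, "prflx": 110, "srflx": 100, "relay": 0}

def candidate_priority(cand_type: str, local_pref: int, component_id: int) -> int:
    """RFC 5245, Section 4.1.2.1: 32-bit candidate priority."""
    return ((2 ** 24) * TYPE_PREFERENCE[cand_type]
            + (2 ** 8) * local_pref          # local preference, 0-65535
            + (256 - component_id))          # component ID, 1-256

def pair_priority(g: int, d: int) -> int:
    """RFC 5245, Section 5.7.2: G is the controlling agent's candidate
    priority, D the controlled agent's.  Both agents compute the same
    value, so their candidate-pair orderings agree."""
    return (2 ** 32) * min(g, d) + 2 * max(g, d) + (1 if g > d else 0)

host = candidate_priority("host", 65535, 1)
relay = candidate_priority("relay", 65535, 1)
# host > relay: pairs offering direct connectivity are tried first.
```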
Configuring Packet-Filtering Firewalls and NATs
Although NATs frequently require little configuration (unless port forwarding is
being used), firewalls usually do, and sometimes they require extensive configuration. In most home networks the same device is providing NAT, IP routing, and
firewall capabilities and may require some configuration. Although the configuration is logically separate for each of these, they are sometimes merged, either in
configuration files, command-line interfaces, Web page controls, or other network
management tools.
Section 7.5 Configuring Packet-Filtering Firewalls and NATs
Firewall Rules
A packet-filtering firewall must be given a set of instructions indicating criteria
for selecting traffic to be dropped or forwarded. Nowadays when configuring a
router, the network administrator usually configures a set of one or more ACLs.
Each ACL consists of a list of rules, and each rule typically contains pattern-matching criteria and an action. The matching criteria generally allow the rule to express
the values of packet fields at either the network or transport layer (e.g., source
and destination IP addresses, port numbers, ICMP type field, etc.) and a direction
specification. The direction pattern matches traffic in a direction-dependent manner and allows for a different set of rules to apply for incoming versus outgoing
traffic. Many firewalls also allow the rules to be applied at a certain point in the
order of processing within the firewall. Examples of this include the ability to
specify an ACL to be checked prior to or after the IP routing decision process. In
some circumstances (especially when more than one interface is used), this flexibility becomes important.
When a packet arrives, the matching criteria in the appropriate ACL are consulted in order. For most firewalls, the first matching rule is acted upon. Typical
actions include a specification to block or forward the traffic and may also adjust
a counter or write a log entry. Some firewalls may include additional features as
well, such as having some packets directed to applications or other hosts. Each
firewall vendor usually has its own method for specifying rules, although Cisco
Systems’ ACL format has emerged as a popular format supported by many vendors of enterprise-class routers. ACLs for home users are typically configured
using a simple Web interface.
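First-match ACL processing can be sketched in a few lines. The rule representation below is an illustrative model invented for the example, not any vendor's syntax:

```python
# Each rule pairs matching criteria with an action; the first rule whose
# criteria all match determines the packet's fate, and a default policy
# applies when no rule matches.

def evaluate_acl(rules, packet, default="drop"):
    for criteria, action in rules:
        if all(packet.get(field) == value for field, value in criteria.items()):
            return action
    return default

rules = [
    # block incoming connections to port 22, allow web traffic
    ({"proto": "tcp", "dport": 22, "direction": "in"}, "drop"),
    ({"proto": "tcp", "dport": 80}, "forward"),
]
verdict = evaluate_acl(rules, {"proto": "tcp", "dport": 80, "direction": "in"})
```

Because evaluation stops at the first match, rule order matters: a broad "forward" rule placed above a narrow "drop" rule silently disables the drop.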
One of the popular systems for building firewalls is included with modern
versions of Linux and is called iptables, built using a network filtering capability called NetFilter [NFWEB]. It is the evolution of an earlier facility called
ipchains and provides stateless and stateful packet-filtering support as well as
NAT and NAPT. We shall examine how it works to get a better understanding of
the types of capabilities a firewall and modern NAT provide.
iptables includes the concepts of filter tables and filter chains. A table contains several predefined chains and may contain zero or more user-defined
chains. Three predefined tables are named as follows: filter, nat, and mangle.
The default filter table is for basic packet filtering and contains the predefined
chains INPUT, FORWARD, and OUTPUT. These chains correspond to packets
destined for programs running on the firewall router itself, those passing through
it while being routed, and those originating at the firewall machine. The nat table
contains the chains PREROUTING, OUTPUT, and POSTROUTING. The mangle
table has all five chains. It is used for arbitrary rewriting of packets.
Each filter chain is a list of rules, and each rule has matching criteria and an
action. The action (called a target) may be to execute a special user-defined chain
or to perform one of the following predefined actions: ACCEPT, DROP, QUEUE,
and RETURN. A packet matching a rule with one of these targets is immediately
acted on. ACCEPT (DROP) means the packet is forwarded (dropped). QUEUE
means the packet is delivered to a user program for arbitrary processing, and
RETURN means that processing continues in a previously invoked chain, which
forms a sort of packet filter chain subroutine call.
The design of a complete firewall configuration can be fairly complex and is
specific to the needs of particular users and the types of services they require, so
we will not attempt to give one here. Instead, the following examples illustrate
only a small number of the possible uses for iptables. The following gives an
example Linux firewall configuration file. It is invoked by a shell such as bash:
INTIF=eth0    # internal interface (illustrative name)
# set default filter table policies to drop
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP
# all local traffic OK
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# accept incoming DHCP requests on internal interface
iptables -A INPUT -i $INTIF -p udp -s 0.0.0.0 \
    --sport 67 -d 255.255.255.255 --dport 68 -j ACCEPT
# drop unusual/suspect TCP traffic with no flags set
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
This example illustrates some of the flexibility one can employ in setting up a
filter criteria list. Initially, the chains are given a default policy (-P option), which
affects packets that fail to match any rules. Next, traffic to or from the local computer (which is delivered using the pseudo interface lo) is given to the ACCEPT
target (i.e., it is allowed) for the INPUT and OUTPUT chains in the default filter
table. The -j option indicates "jump" to a particular processing target. Next,
incoming UDP broadcast traffic originating from IPv4 address 0.0.0.0 and destined for the local/subnet broadcast address (255.255.255.255) using the DHCP port numbers (67, 68) is allowed
in via the internal interface. Next, the Flags field of incoming TCP segments (see
Chapter 13) is ANDed with all 1s (ALL) and compared against zero (NONE). A
match occurs only if all the flag bits are 0, which is not a very useful TCP segment (ordinarily all TCP segments after the first one contain a valid ACK bit, and
the first one contains a SYN).
While the syntax illustrated by this example is specific to the iptables facility,
its capabilities are not. Most filtering firewalls are capable of performing similar
types of checks and actions.
NAT Rules
In most simple routers, NAT can be configured in conjunction with firewall rules.
In basic Windows systems, NAT is called Internet Connection Sharing (ICS), and in
Linux it is called IP masquerading. On Windows XP, for example, ICS has a number
of special characteristics. It assigns the “internal” IPv4 address 192.168.0.1 to the
machine running ICS and starts a DHCP server and DNS server. Other computers
are assigned addresses in the 192.168.0/24 subnet, with the ICS machine as DNS
server. Therefore, ICS should not be enabled on networks where these services are
already being provided by another computer or router, or where the addresses
might conflict. A registry setting can be used to change the default address range.
Enabling ICS for an Internet connection on Windows XP can be accomplished
by using the Network Setup Wizard, or by changing the Advanced properties on
an already-operating Internet connection (under Settings | Network Connections).
At this point, the user may also decide to allow other users to control or disable the
shared Internet connection. This facility, known as Internet Gateway Device Discovery and Control (IGDDC), uses the Universal Plug and Play framework, described
in Section 7.5.3, for controlling a local Internet gateway from a client. The functions
supported include connect and disconnect, along with reading various status messages. The Windows firewall facility, which works in conjunction with ICS, supports the creation of service definitions. Service definitions are equivalent to port
forwarding, as defined previously. To enable it, the Advanced property tab on the
Internet connection is selected and a new service may be added (or an existing
one edited). The user is then given the opportunity to fill in the appropriate TCP
and UDP port numbers, both at the external interface and at the internal server
machine. It thus works as a way to configure NAPT for incoming connections.
As with Windows, Linux combines the masquerade capability with its firewall implementation. The following script configures masquerading in a simple
manner. Note that this script is only for illustration and is not recommended for
production use.
EXTIF=eth1           # external interface (illustrative name)
INTNET=10.0.0.0/24   # internal subnet (illustrative value)
echo "Default FORWARD policy: DROP"
iptables -P FORWARD DROP
echo "Enabling NAT on $EXTIF for hosts on $INTNET"
iptables -t nat -A POSTROUTING -o $EXTIF -s $INTNET \
    -j MASQUERADE
echo "FORWARD policy: DROP unknown traffic"
iptables -A INPUT -i $EXTIF -m state --state NEW,INVALID -j DROP
iptables -A FORWARD -i $EXTIF -m state --state NEW,INVALID -j DROP
Here, the default policy for the FORWARD chain in the filter table is
set to DROP. The next item arranges for hosts with IPv4 addresses assigned from
the internal subnet to have their addresses rewritten for any IPv4 traffic (via
NAT, implemented by the nat table and -t nat options) after routing has determined the external interface to be the appropriate one. Because of the stateful way
that NAT works, it is now possible to adjust the filter table’s rules to allow only
traffic associated with a connection known to NAT. The last two lines adjust the
INPUT and FORWARD chains so that any incoming traffic that is either invalid or
unknown (NEW) is dropped. The special operators NEW and INVALID are defined
within the iptables command.
Direct Interaction with NATs and Firewalls: UPnP, NAT-PMP, and PCP
In many cases, a client system wishes to or needs to interact directly with its firewall. For example, a firewall may need to be configured or reconfigured for different services by allowing traffic destined for a particular port to not be dropped
(establishing a “pinhole”). In cases where a proxy firewall is in use, each client
must be informed of the proxy’s identity. Otherwise, communication beyond the
firewall is not possible. A number of protocols have been developed for supporting communication between clients and firewalls. The two most prevalent ones
are called Universal Plug and Play (UPnP) and the NAT Port Mapping Protocol (NATPMP). The standards for UPnP are developed by an industry group called the
UPnP Forum [UPNP]. NAT-PMP is currently an expired draft document within
the IETF [XIDPMP]. NAT-PMP is supported by most Mac OS X systems. UPnP
has native support on Windows systems and can be added to Mac OS and Linux
systems. UPnP is also used in support of consumer electronics device discovery
protocols for home networks being developed by the Digital Living Network Alliance (DLNA) [DLNA].
With UPnP, controlled devices are configured with IP addresses based first
upon DHCP and using dynamic link-local address configuration (see Chapter 6)
if DHCP is not available. Next, the Simple Service Discovery Protocol (SSDP) [XIDS]
announces the presence of the device to control points (e.g., client computers) and
allows the control points to query the devices for additional information. SSDP
uses two variants of HTTP with UDP instead of the more standard TCP. They are
called HTTPU and HTTPMU [XIDMU], and the latter uses multicast addressing
(IPv4 address 239.255.255.250, port 1900). For SSDP carried on IPv6, the following
addresses are used: ff01::c (node-local), ff02::c (link-local), ff05::c (site-local), ff08::c
(organization-local), and ff0e::c (global).
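An SSDP discovery request is an ordinary HTTP-style message carried over UDP. The following sketch builds the conventional M-SEARCH request a control point multicasts to 239.255.255.250, port 1900; this is illustrative (the helper name is invented), with headers following the UPnP device architecture conventions:

```python
SSDP_GROUP = ("239.255.255.250", 1900)   # HTTPMU multicast address and port

def make_msearch(search_target="ssdp:all", mx=2) -> bytes:
    """Build an SSDP M-SEARCH discovery request."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        "HOST: %s:%d" % SSDP_GROUP,
        'MAN: "ssdp:discover"',
        "MX: %d" % mx,                # maximum wait time for replies (seconds)
        "ST: %s" % search_target,     # search target: everything, or one device type
        "", "",                       # blank line ends the header block
    ]
    return "\r\n".join(lines).encode("ascii")

# A control point would transmit this with a UDP socket:
#   sock.sendto(make_msearch(), SSDP_GROUP)
# Devices reply with unicast HTTP-style responses describing themselves.
```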
Subsequent control and event notification (“eventing”) is controlled by the
General Event Notification Architecture (GENA), which uses the Simple Object Access
Protocol (SOAP). SOAP supports a client/server remote procedure call (RPC) mechanism and uses messages encoded in the Extensible Markup Language (XML), which is
commonly used for Web pages. UPnP is used for a wide variety of consumer electronic devices, including audio and video playback and storage devices. NAT/firewall devices are controlled using the Internet Gateway Device (IGD) protocol [IGD].
IGD supports a variety of capabilities, including the ability to learn NAT mappings
and configure port forwarding. The interested reader may obtain a simple IGD
Section 7.6 NAT for IPv4/IPv6 Coexistence and Transition
client useful for experimentation from the MiniUPnP Project HomePage [UPNPC].
A second version of UPnP IGD [IGD2] adds general IPv6 support to UPnP.
While UPnP is a broad framework that includes NAT control and several other
unrelated specifications, NAT-PMP provides an alternative specifically targeted at
programmatic communications with NAT devices. NAT-PMP is part of Apple’s set
of Bonjour specifications for zero configuration networking. NAT-PMP does not
use a discovery process, as the device being managed is usually a system’s default
gateway as learned by DHCP. NAT-PMP uses UDP port 5351. NAT-PMP supports
a simple request/response protocol for learning a NAT’s outside address and configuring port mappings. It also supports a basic eventing mechanism that notifies
listeners when a NAT outside address changes. This is accomplished using a UDP
multicast message sent to address 224.0.0.1 (the All Hosts address) when the outside address changes. NAT-PMP uses UDP port 5351 for client/server interactions
and port 5350 for multicast event notification. The idea of NAT-PMP can be extended
for use with SPNAT, as proposed by the Port Control Protocol (PCP) [IDPCP].
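The NAT-PMP request formats are compact and easy to construct. This sketch follows the packet layout in the NAT-PMP specification (an 8-bit version, an 8-bit opcode, and, for mapping requests, a 16-bit reserved field, the internal port, a suggested external port, and a requested lifetime); the function names are invented:

```python
import struct

NATPMP_VERSION = 0
OP_EXTERNAL_ADDRESS = 0   # learn the NAT's outside IPv4 address
OP_MAP_UDP = 1            # request a UDP port mapping
OP_MAP_TCP = 2            # request a TCP port mapping

def make_external_address_request() -> bytes:
    """Ask the NAT for its outside (external) IPv4 address."""
    return struct.pack("!BB", NATPMP_VERSION, OP_EXTERNAL_ADDRESS)

def make_mapping_request(opcode, internal_port, external_port, lifetime) -> bytes:
    """Request a port mapping; the NAT replies with the mapping it
    actually created, which may differ from the suggested external port."""
    return struct.pack("!BBxxHHI", NATPMP_VERSION, opcode,
                       internal_port, external_port, lifetime)

# e.g., map inside UDP port 5060 for an hour
req = make_mapping_request(OP_MAP_UDP, 5060, 5060, 3600)
```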
NAT for IPv4/IPv6 Coexistence and Transition
With the depletion of the last top-level unicast IPv4 address prefixes early in
2011, the embracing of IPv6 is beginning to accelerate. It was thought that hosts
could be equipped with dual-stack functionality (i.e., each implements a complete
IPv4 and IPv6 stack) [RFC4213] and network services would transition over to
IPv6-only operation. It is now understood that IPv4 and IPv6 are likely to coexist for an extended period of time, perhaps indefinitely, and that for various economic reasons network infrastructure may operate using either IPv4 or IPv6 or
both. Assuming that this is true, there will be an ongoing need to support communications between IPv4 and IPv6 systems, whether they are dual-stack or not.
The two major approaches that have been used to support combinations of IPv4
and IPv6 are tunneling and translation. The tunneling approaches include Teredo
(see Chapter 10), Dual-Stack Lite (DS-Lite), and IPv6 Rapid Deployment (6rd).
Although DS-Lite involves SPNAT as part of its architecture, a purer translation
approach is given by the framework described in [RFC6144], which uses the IPv4-embedded IPv6 addresses we saw in Chapter 2. We will discuss both DS-Lite and
the translation framework in more detail in this section.
Dual-Stack Lite (DS-Lite)
DS-Lite [RFC6333] is an approach to make transition to IPv6 (and support for
legacy IPv4 users) easier for service providers that wish to run IPv6 internally. In
essence, it allows providers to focus on deploying an operational IPv6 core network yet provide IPv4 and IPv6 connectivity to their customers using a small number of IPv4 addresses. The approach combines IPv4-in-IPv6 “softwire” tunneling
[RFC5571] with SPNAT. Figure 7-16 shows the type of deployment envisioned.
Figure 7-16 DS-Lite allows service providers to support IPv4 and IPv6 customer networks using
an IPv6-only infrastructure. IPv4 address usage is minimized by using SPNAT at the
provider’s edge.
In Figure 7-16, each customer network operates with any combination of IPv4
and IPv6. The service provider’s network is assumed to be managed using only
IPv6. Customer access to the IPv6 Internet is provided using conventional IPv6
routing. For IPv4 access, each customer uses a special “before” gateway (labeled
“B4” in Figure 7-16). A B4 element provides basic IPv4 services (e.g., DHCP service,
a DNS proxy, etc.) but also encapsulates the customer’s IPv4 traffic in multipoint-to-point tunnels terminated at the “after” element (labeled “AFTR” in Figure 7-16).
The AFTR element performs decapsulation of traffic headed to the IPv4 Internet
and encapsulation in the reverse direction. AFTR also performs NAT and acts as a
form of SPNAT. More specifically, the AFTR may use the identity of the customer’s
tunnel endpoint for disambiguating traffic returning to the AFTR from the IPv4
Internet. This allows multiple customers to use the same IPv4 address space. A B4
element can learn the name of its corresponding AFTR element using a DHCPv6
option called AFTR-Name [RFC6334].
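The disambiguation role of the customer's tunnel endpoint can be illustrated with a toy model (invented for this sketch; it is not part of [RFC6333]): the AFTR's NAT keys its bindings on the B4's IPv6 address in addition to the inside IPv4 address and port, so overlapping customer address space causes no conflict:

```python
class AftrNat:
    """Toy AFTR NAT: bindings are keyed by the softwire (tunnel)
    endpoint as well as the inside IPv4 address and port."""

    def __init__(self):
        self.bindings = {}   # (b4_ipv6, inside_ipv4, inside_port) -> outside port
        self.reverse = {}    # outside port -> (b4_ipv6, inside_ipv4, inside_port)
        self.next_port = 40000

    def outbound(self, b4_ipv6, inside_ipv4, inside_port):
        key = (b4_ipv6, inside_ipv4, inside_port)
        if key not in self.bindings:
            self.bindings[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return self.bindings[key]

    def inbound(self, outside_port):
        # Returning traffic is steered back into the right customer tunnel.
        return self.reverse.get(outside_port)

nat = AftrNat()
# Two customers may both use 192.168.1.2 internally; the tunnel endpoint
# keeps their bindings distinct.
p1 = nat.outbound("2001:db8::b4:1", "192.168.1.2", 1234)
p2 = nat.outbound("2001:db8::b4:2", "192.168.1.2", 1234)
```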
It is instructive to recall the discussion of IPv6 rapid deployment (6rd) from
Chapter 6. Whereas DS-Lite provides IPv4 access to customers over a service provider’s IPv6 network, 6rd aims to provide IPv6 access to customers over a service
provider’s IPv4 network. In essence, they take opposite approaches with similar
architectural components. However, with 6rd, mapping from an IPv6 address to
the address of the corresponding IPv4 tunnel endpoint (and vice versa) is computed in a stateless fashion using an address-mapping algorithm. Stateless address
translation is also used in the framework for full protocol translation between
IPv4 and IPv6, which we discuss next.
IPv4/IPv6 Translation Using NATs and ALGs
The biggest disadvantage of using tunneling techniques for supporting IPv4/IPv6
coexistence is that network services running on hosts using one address family
cannot be reached directly by the hosts using the other. Thus, an IPv6-only host
can communicate only with other IPv6-capable systems. This is an undesirable situation because many valuable services offered on the legacy IPv4 Internet would
remain unavailable to new systems that may support only IPv6. To address this
concern, a significant effort was undertaken between 2008 and 2010 to develop a
framework to provide direct translation between IPv4 and IPv6. This effort was
informed by poor experiences with NAT-PT [RFC2766], which was ultimately
determined to be too brittle and unscalable for ongoing use and was deprecated [RFC4966].
The IPv4/IPv6 translation framework is given in [RFC6144]. The basic translation architecture involves both stateful and stateless methods to convert between
IPv4 and IPv6 addresses, translations for DNS (see Chapter 11), and the definition
of any additional behaviors or ALGs in cases where they are necessary (including
for ICMP and FTP). In this section, we will discuss the basics of the stateless and
stateful address translation for IP based on [RFC6145], [RFC6146], and the addressing from [RFC6052] we discussed in Chapter 2. Other protocol-specific translation
issues will be covered in subsequent chapters.

IPv4-Converted and IPv4-Translatable Addresses
In Chapter 2, we discussed the structure of IPv4-embedded IPv6 addresses. Such
addresses are IPv6 addresses that can be used as input to a function that produces
a corresponding IPv4 address. The function is also easily inverted. There are two
important types of IPv4-embedded IPv6 addresses, called IPv4-converted addresses
and IPv4-translatable addresses. Each of these types is a subset of the type mentioned before it. That is, if we treat each address category as a set, then (IPv4-translatable) ⊂ (IPv4-converted) ⊂ (IPv4-embedded) ⊂ (IPv6). IPv4-translatable addresses are IPv6 addresses for which an IPv4 address can be determined in a stateless fashion (see the Stateless Translation discussion later in this section).
Algorithmic translation between IPv4 and IPv6 addresses involves the use of
a prefix, as described in Chapter 2. The prefix may be either the Well-Known Prefix (WKP) 64:ff9b::/96 or another Network-Specific Prefix that is ordinarily owned
by a service provider and used specifically with its translators. The WKP is used
only in representing ordinary globally routable IPv4 addresses; private addresses
[RFC1918] are not to be used with the WKP. In addition, the WKP is not to be
used for creating IPv4-translatable addresses. Such addresses are intended to be
defined within the scope of a provider’s network, so it is not appropriate to use
them at a global scope.
The WKP is interesting because it is checksum-neutral with respect to the Internet checksum. Recall the Internet checksum calculation from Chapter 5. If we treat
the prefix 64:ff9b::/96 as being composed of the hexadecimal values 0064, ff9b,
0000, 0000, 0000, 0000, 0000, 0000, the sum of these values is ffff, which is equal
to 0 in one’s complement. Consequently, when an IPv4 address has the WKP prepended, the associated Internet checksums in packets created as a result of translation (e.g., in the IPv4 header, TCP, or UDP checksum) are unaffected. Naturally,
an appropriately chosen Network-Specific Prefix can also be checksum-neutral.
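The checksum neutrality of the WKP is easy to verify, and the /96 embedding behind the To6/To4 notation used in this chapter can be sketched in a few lines. This is only a sketch: [RFC6052] also defines other prefix lengths, and Python’s standard ipaddress module is used here purely for parsing.

```python
import ipaddress

# Well-Known Prefix for IPv4/IPv6 translation [RFC6052]
WKP = ipaddress.IPv6Network("64:ff9b::/96")

def ones_complement_sum(words):
    """One's-complement sum of 16-bit words, as used by the Internet checksum."""
    s = sum(words)
    while s >> 16:                       # fold carries back in
        s = (s & 0xFFFF) + (s >> 16)
    return s

# The WKP as 16-bit words: 0x0064 0xff9b 0x0000 0x0000 0x0000 0x0000.
# Their sum is 0xffff (i.e., "minus zero" in one's complement), so prepending
# the prefix leaves translated checksums unchanged.
assert ones_complement_sum([0x0064, 0xFF9B, 0, 0, 0, 0]) == 0xFFFF

def to6(a4, prefix=WKP):
    """To6(A4, P): place the IPv4 address in the low-order 32 bits of a /96 prefix."""
    return ipaddress.IPv6Address(
        int(prefix.network_address) | int(ipaddress.IPv4Address(a4)))

def to4(a6):
    """To4(A6, P): recover the embedded IPv4 address from the low-order 32 bits."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(a6)) & 0xFFFFFFFF)

addr6 = to6("198.51.100.7")
print(addr6)          # 64:ff9b::c633:6407
print(to4(addr6))     # 198.51.100.7
```

Note that the mapping is trivially invertible, which is what makes stateless translation possible.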
Firewalls and Network Address Translation (NAT)
In the following two subsections, we will use the notation To4(A6, P) to represent the IPv4 address derived from IPv6 address A6 in conjunction with prefix
P. P is either the WKP or some Network-Specific Prefix. We will use the notation
To6(A4, P) to represent the IPv6 address derived from IPv4 address A4 in conjunction with prefix P. Note that, with a few special exceptions, A6 = To6(To4(A6,P),P)
and A4 = To4(To6(A4,P),P).

Stateless Translation
Stateless IP/ICMP Translation (SIIT) refers to a method of translating between IPv4
and IPv6 packets without using state tables [RFC6145]. The translation is performed without table lookups and uses IPv4-translatable addresses along with
a defined scheme to translate IP headers. For the most part, IPv4 options are not
translated (they are ignored), nor are IPv6 extension headers (except the Fragment header). The exception is an unexpired IPv4 Source Route option. If such an
option is present, the packet is dropped and a corresponding ICMP error message
(Destination Unreachable, Source Route Failed; see Chapter 8) is generated. Table
7-5 describes how the IPv6 header fields are assigned when translating an IPv4
datagram to IPv6.
Table 7-5 Methods for creating an IPv6 header when translating IPv4 to IPv6

Version: Set to 6.
DS Field/ECN: Copied from the same values in the IPv4 header.
Flow Label: Set to 0.
Payload Length: Set to the IPv4 Total Length minus the length of the IPv4 header (including options).
Next Header: Set to the IPv4 Protocol field (or 58 if the Protocol field had value 1). Set to 44 to indicate a Fragment header if the IPv6 datagram being created is a fragment or the DF bit is not set.
Hop Limit: Set to the IPv4 TTL field minus 1 (if this value is 0, the packet is discarded and an ICMP Time Exceeded message is generated; see Chapter 8).
Source IP Address: Set to To6(IPv4 Source IP Address, P).
Destination IP Address: Set to To6(IPv4 Destination IP Address, P).
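The assignments in Tables 7-5 and 7-6 can be sketched roughly as follows. Headers are represented as plain dictionaries, to6() is a stand-in for the To6(A4, P) mapping, and the field names are illustrative; the sketch glosses over details of [RFC6145] (for instance, the 8 bytes of an added Fragment header are not reflected in the Payload Length).

```python
# Sketch of the IPv4 -> IPv6 header assignments in Tables 7-5 and 7-6.
# Not an implementation of [RFC6145]; field names and to6() are illustrative.

def to6(a4, prefix="64:ff9b::"):
    return prefix + a4                      # placeholder for the real embedding

def translate_v4_to_v6(v4):
    if v4["ttl"] - 1 == 0:
        # Discard and generate an ICMP Time Exceeded message (see Chapter 8).
        raise ValueError("TTL expired")
    is_fragment = v4["frag_offset"] != 0 or v4["more_fragments"]
    add_frag_hdr = is_fragment or not v4["df"]       # DF = 0 -> Fragment header
    next_header = 58 if v4["protocol"] == 1 else v4["protocol"]  # ICMP -> ICMPv6
    v6 = {
        "version": 6,
        "ds_ecn": v4["ds_ecn"],                      # copied
        "flow_label": 0,
        # Simplification: the 8 bytes of the Fragment header are ignored here.
        "payload_length": v4["total_length"] - v4["header_length"],
        "next_header": 44 if add_frag_hdr else next_header,
        "hop_limit": v4["ttl"] - 1,
        "src": to6(v4["src"]),
        "dst": to6(v4["dst"]),
    }
    if add_frag_hdr:                                 # Table 7-6 assignments
        v6["fragment_header"] = {
            "next_header": next_header,
            "frag_offset": v4["frag_offset"],
            "more_fragments": v4["more_fragments"],
            "identification": v4["identification"] & 0xFFFF,  # low 16 bits
        }
    return v6

# A UDP datagram with DF = 0: a Fragment header is included even though the
# datagram is not itself a fragment (the sender is likely not using PMTUD).
v4_hdr = {"ds_ecn": 0, "total_length": 120, "header_length": 20, "protocol": 17,
          "ttl": 64, "df": False, "more_fragments": False, "frag_offset": 0,
          "identification": 0x12345, "src": "192.0.2.1", "dst": "198.51.100.2"}
v6_hdr = translate_v4_to_v6(v4_hdr)
```

In this example the resulting Next Header is 44 (Fragment header), the Hop Limit is 63, and the 17-bit sample Identification value is truncated to its low-order 16 bits.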
During the translation process, the IPv4 header is stripped and replaced with
an IPv6 header. If the arriving IPv4 datagram is too large to fit in the MTU for the
next link and the DF bit field in its header is not set, multiple IPv6 fragment packets
may be produced, each containing a Fragment header. This also occurs when the
arriving IPv4 datagram is a fragment. [RFC6145] recommends a Fragment header
be included in the resulting IPv6 datagram whenever the arriving IPv4 datagram’s
DF bit field has value zero, whether or not the translator needs to perform fragmentation or the arriving datagram is a fragment. This allows the IPv6 receiver to
know that the IPv4 sender was likely not using PMTUD. When a Fragment header
is included, its fields are set according to the methods listed in Table 7-6.
Table 7-6 Methods for assigning fields of the Fragment header, if used, during IPv4-to-IPv6 translation

Next Header: Set to the IPv4 Protocol field.
Fragment Offset: Copied from the IPv4 Fragment Offset field.
More Fragments Bit: Copied from the IPv4 More Fragments (M) bit field.
Identification: The low-order 16 bits are set from the IPv4 Identification field. The high-order 16 bits are set to 0.
The reverse direction (IPv6-to-IPv4 translation) involves creating an IPv4
datagram with header field values based on fields in the arriving IPv6 header.
Obviously the much larger IPv6 address space does not allow an IPv4-only host to
access every host on the IPv6 Internet. Table 7-7 gives the methods used to assign
the fields in the outgoing IPv4 datagram’s header when an unfragmented IPv6
datagram arrives.
Table 7-7 Methods for creating an IPv4 header when translating unfragmented IPv6 to IPv4

Version: Set to 4.
IHL: Set to 5 (no IPv4 options).
DS Field/ECN: Copied from the same values in the IPv6 header.
Total Length: The value of the IPv6 Payload Length field plus 20.
Identification: Set to 0 (with the option to set to some other predetermined value).
Flags: More Fragments (M) is set to 0. Don’t Fragment (DF) is set to 1.
Fragment Offset: Set to 0.
TTL: The value of the IPv6 Hop Limit field minus 1 (must be at least 1).
Protocol: Copied from the first IPv6 Next Header field that does not refer to a Fragment header, HOPOPT, IPv6-Route, or IPv6-Opts. Value 58 is changed to 1 to support ICMP (see Chapter 8).
Header Checksum: Computed for the newly created IPv4 header.
Source IP Address: Set to To4(IPv6 Source IP Address, P).
Destination IP Address: Set to To4(IPv6 Destination IP Address, P).
If the arriving IPv6 datagram includes a Fragment header, the outgoing IPv4
datagram uses field values based on assignment methods modified from those in
Table 7-7. Table 7-8 gives this case.
Table 7-8 Methods for creating an IPv4 header when translating fragmented IPv6 to IPv4

Total Length: The value of the IPv6 Payload Length field minus 8 plus 20.
Identification: Copied from the low-order 16 bits in the Identification field of the IPv6 Fragment header.
Flags: More Fragments (M) is copied from the M bit field in the IPv6 Fragment header. Don’t Fragment (DF) is set to 0 to allow fragmentation in the IPv4 network.
Fragment Offset: Copied from the Fragment Offset field of the IPv6 Fragment header.
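The IPv6-to-IPv4 assignments in Tables 7-7 and 7-8 can be sketched the same way, with headers as plain dictionaries and to4() standing in for the To4(A6, P) mapping; again the field names are illustrative and many [RFC6145] details are glossed over.

```python
# Sketch of the IPv6 -> IPv4 header assignments in Tables 7-7 and 7-8.
# Not an implementation of [RFC6145]; field names and to4() are illustrative.

SKIPPED = {44, 0, 43, 60}   # Fragment header, HOPOPT, IPv6-Route, IPv6-Opts

def to4(a6):
    return a6.rsplit(":", 1)[-1]            # placeholder for the real extraction

def translate_v6_to_v4(v6):
    ttl = v6["hop_limit"] - 1
    if ttl < 1:                              # must be at least 1 (Table 7-7)
        raise ValueError("Hop Limit expired")
    # Protocol: first Next Header value not naming a skipped extension header,
    # with 58 (ICMPv6) rewritten as 1 (ICMP); see Chapter 8.
    proto = next(nh for nh in v6["next_headers"] if nh not in SKIPPED)
    if proto == 58:
        proto = 1
    frag = v6.get("fragment_header")         # present -> Table 7-8 applies
    return {
        "version": 4,
        "ihl": 5,                            # no IPv4 options
        "ds_ecn": v6["ds_ecn"],              # copied
        "total_length": v6["payload_length"] + 20 - (8 if frag else 0),
        "identification": (frag["identification"] & 0xFFFF) if frag else 0,
        "df": 0 if frag else 1,              # DF = 0 allows IPv4 fragmentation
        "more_fragments": frag["more_fragments"] if frag else 0,
        "frag_offset": frag["frag_offset"] if frag else 0,
        "ttl": ttl,
        "protocol": proto,
        "header_checksum": "computed over the new IPv4 header",
        "src": to4(v6["src"]),
        "dst": to4(v6["dst"]),
    }

# A fragmented ICMPv6 datagram: Identification is truncated to 16 bits, and
# DF comes out 0 so the IPv4 network may fragment further.
v6_hdr = {"ds_ecn": 0, "payload_length": 108, "hop_limit": 64,
          "next_headers": [44, 58],
          "fragment_header": {"identification": 0xABCD1234,
                              "more_fragments": 1, "frag_offset": 0},
          "src": "64:ff9b::192.0.2.1", "dst": "64:ff9b::198.51.100.2"}
v4_hdr = translate_v6_to_v4(v6_hdr)
```

Here the fragment’s Total Length is 108 minus 8 plus 20 = 120, and the 32-bit Identification value collapses to its low-order 16 bits, which is the source of the (minor) reassembly ambiguity discussed next.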
In the case of fragmented IPv6 datagrams, the translator produces fragmented
IPv4 datagrams. Note that in IPv6 the Identification field is larger, so there is a possibility that certain fragments could fail to be reassembled properly if multiple
distinct IPv6 datagrams from the same host are fragmented in such a way that the
Identification field values they use share the same low-order 16 bits. However,
this situation is no more risky than having the conventional IPv4 Identification
field wrap. Furthermore, integrity checks at higher layers make this issue nothing
much to worry about.

Stateful Translation
In stateful translation, NAT64 [RFC6146] is used to support IPv6-only clients communicating with IPv4 servers. This is expected to be important during the period
when many important services continue to be offered using only IPv4. The translation method for headers is nearly identical to the methods described for stateless translation earlier in this section. As a NAT, NAT64 complies with the BEHAVE specifications and supports only endpoint-independent mappings, along with both endpoint-independent and address-dependent filtering. Thus, it is compatible with
the NAT traversal techniques (e.g., ICE, STUN, TURN) we discussed previously.
Lacking these additional protocols, NAT64 supports dynamic translation only for
IPv6 hosts initiating communications with IPv4 hosts.
NAT64 works much like conventional NAT (NAPT) across address families,
except translations in the IPv4-to-IPv6 direction are simpler than in the reverse
direction. A NAT64 device is assigned an IPv6 prefix, which can be used to form a
valid IPv6 address directly from an IPv4 address using the mechanism described
in Chapter 2 and [RFC6052]. Because of the comparative scarcity of the IPv4
address space, translations in the IPv6-to-IPv4 direction make use of a pool of
IPv4 addresses that are ordinarily managed dynamically. This requires NAT64 to
support NAPT functionality, whereby multiple distinct IPv6 addresses may map
Section 7.7 Attacks Involving Firewalls and NATs
to the same IPv4 address. NAT64 currently defines methods for translation of TCP,
UDP, and ICMP messages initiated by IPv6 nodes. (In the case of ICMP queries
and responses, the ICMP Identifier field is used instead of the transport-layer port
number; see Chapter 8.)
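A stateful NAT64 binding can be pictured as a small table keyed by the internal IPv6 transport address. The toy sketch below illustrates endpoint-independent mapping, in which the external IPv4 address and port assigned to an internal endpoint never depend on the destination; the class name, addresses, and ports are invented for illustration.

```python
import itertools

class Nat64Bindings:
    """Toy endpoint-independent mapping table for a stateful NAT64."""
    def __init__(self, v4_pool_addr="203.0.113.1"):
        self.v4_addr = v4_pool_addr           # single address from the IPv4 pool
        self.ports = itertools.count(40000)   # naive external port allocator
        self.by_internal = {}                 # (v6 addr, port) -> (v4 addr, port)

    def map_outbound(self, v6_src, src_port, dst):
        # Endpoint-independent mapping: the external pair depends only on the
        # internal (address, port), never on the destination, so every IPv4
        # peer sees the same external address and port.
        key = (v6_src, src_port)
        if key not in self.by_internal:
            self.by_internal[key] = (self.v4_addr, next(self.ports))
        return self.by_internal[key]

nat = Nat64Bindings()
a = nat.map_outbound("2001:db8::1", 5000, "192.0.2.10")
b = nat.map_outbound("2001:db8::1", 5000, "198.51.100.99")   # different peer
print(a == b)    # True: same external mapping regardless of destination
```

Reusing one external pair for all destinations is what makes STUN-style address discovery, and thus the traversal techniques mentioned above, workable.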
NAT64 handles fragments differently from its stateless counterpart. For arriving TCP or UDP fragments where the transport checksum is nonzero (see Chapter
10), the NAT64 may either queue the fragments and translate them together or
translate them individually. A NAT64 must handle fragments, even those arriving
out of order. A NAT64 may be configured with a time limit (at least 2s) bounding
the time during which fragments will be cached. Otherwise, the NAT could be
subject to a DoS attack resulting from the exhaustion of the packet buffers holding fragments.
Attacks Involving Firewalls and NATs
Given that the primary purpose of deploying firewalls is to reduce the exposure
to attacks, it is not surprising that firewalls have fewer obvious shortcomings than
end hosts or routers. That said, they are not without their faults. The most common types of firewall problems result from incomplete or incorrect configuration.
Configuring firewalls is not a trivial task, especially for large enterprises where
many services may be employed on a daily basis. Other forms of attacks exploit
the weaknesses of some firewalls, including the inability of many of them (especially older ones) to deal with IP fragments.
One type of problem arises when a NAT/firewall can be hijacked from outside
to provide a masquerading capability for an attacker. If the firewall is configured
with NAT enabled, traffic arriving at its external interface may be rewritten so as
to appear to have come from the NAT device, thereby hiding an attacker’s actual
address. What is worse, this is “normal” behavior from the NAT’s point of view;
it just happens to be getting its input packets from outside rather than inside.
This has been a particular problem with ipchains-based NAT/firewall rules on
Linux. The simplest configuration for setting up masquerading:
allows this attack to take place and is therefore not recommended. As we can see,
it sets the default forwarding policy to masquerade, which potentially applies to
any IP forwarding.
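The rule itself does not survive in this copy of the text. Based on the surrounding description (a default forward policy of masquerade), the classic insecure ipchains setup is along the following lines; treat it as a reconstruction rather than the book’s exact listing.

```shell
# Insecure: makes masquerading the *default* policy of the forward chain,
# so any forwarded packet, including one arriving from outside, is
# rewritten to appear to come from the NAT device.
ipchains -P forward MASQ

# Safer: deny forwarding by default and masquerade only traffic whose
# source address is internal (10.0.0.0/8 is an illustrative prefix).
ipchains -P forward DENY
ipchains -A forward -s 10.0.0.0/8 -j MASQ
```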
Another type of problem that can arise with firewall and NAT rules is that
they may be stale. In particular, they may contain port forwarding entries or other
so-called holes that allow traffic through for services that are no longer used. A
related problem is that some routers keep more than one copy of the firewall rules
in memory, and the router must be specifically instructed when to enable which
rules. Finally, another common configuration problem is that many routers merge
new firewall rules with the existing set when new ones are added. This can potentially lead to undesired results if the operator is unaware of this behavior.
The problem with fragments is related to how IP fragments are constructed.
When an IP datagram is fragmented (see Chapter 10), the transport header, which
contains the port numbers, appears only in the first fragment and in none of the
others. This is a direct result of the layering and encapsulation of the TCP/IP protocol architecture. Unfortunately for a firewall, receiving a fragment other than
the first provides little information about the transport layer or service to which
the packet relates. The only obvious way to make this association is to find the
first fragment (if there ever was one), and this obviously requires a stateful firewall capability, which might be subject to resource exhaustion attacks. Even
stateful firewalls could fall short: if the first fragment arrives after subsequent
fragments, the firewall may not be smart enough to perform reassembly prior to
its filtering operation. In some cases, the firewall simply drops fragments it cannot
fully identify, which could pose problems for legitimate traffic that happens to use
large datagrams.
Summary

Firewalls provide a mechanism for network administrators to restrict the flow
of information that may be harmful to end systems. The two major types of firewalls are packet-filtering firewalls and proxy firewalls. Packet-filtering firewalls
may be further separated into the stateful and stateless varieties, and they usually
act as IP routers. The stateful variety is more sophisticated and supports successful operation of a wider variety of application-layer protocols (and might do more
sophisticated logging or filtering across multiple packets in a packet stream). Proxy
firewalls usually act as a form of application-layer gateway. For these firewalls,
each application-layer service must have its own proxy handler on the firewall,
but this does allow handlers to make modifications even to the data portion of the
transiting traffic. Protocols such as SOCKS support proxy firewalls in a standardized way.
Network Address Translation (NAT) is a mechanism whereby a relatively
large number of end hosts can share one or more globally routable IP address(es).
NAT is used extensively for this purpose but can also be used in conjunction with
firewall rules to form a NAT/firewall combination. In this popular configuration,
computers “behind” the NAT are allowed to send traffic out to the global Internet,
but only traffic returning in response to the outgoing traffic is ordinarily admitted back. This presents a problem for implementing services behind a NAT, which is handled by port forwarding: the NAT passes incoming traffic for a service on to designated end hosts inside the NAT. NAT is also being proposed for
helping the transition from IPv4 to IPv6 by translating addresses between the two
realms. In addition, NAT is being considered for use within ISPs to further allay
IPv4 address depletion concerns. If this happens on a large scale, it may become (even more) difficult for ordinary users to offer Internet services from their home networks.
Some applications use a set of heuristics in order to determine what addresses
are used on the outside of the NATs they are behind. Many of these operate unilaterally, without direct help from the NAT. Such applications are said to use
UNSAF (pronounced “unsafe”) methods and may not be completely reliable. A
set of documents (developed by the IETF BEHAVE working group) specifies the
proper behavior of NATs for different protocols, but not all NATs implement these
specifications. Consequently, NAT traversal techniques may need to be employed
to ensure that connectivity can take place.
NAT traversal involves determining a set of addresses and port numbers that
can be used to support communications even when one or more NATs must be
used. STUN is the primary workhorse protocol for determining addresses. TURN
is a particular STUN usage that relays traffic through a specially configured
TURN server, usually located in the Internet. Deciding which addresses or relays
to use can be accomplished using a complete NAT traversal protocol such as ICE.
ICE determines all possible addresses that can be used between a pair of communicating endpoints using local information, plus addresses determined using
STUN and TURN. It then selects the “best” addresses for subsequent communication. Mechanisms such as ICE have received the most attention for supporting
VoIP services that use the SIP protocol for signaling.
Firewalls and NATs may require configuration. The basic settings are adequate for many home users, but firewalls may require modifications to allow
certain services to work. In addition, if a user behind a NAT wishes to offer an
Internet service, port forwarding will likely have to be configured on the NAT
device. Some applications support configuration by performing direct communication with a NAT using protocols such as UPnP and NAT-PMP. When supported
and enabled, these allow a NAT to have its port forwarding and binding data
accessed and modified by the application automatically, without user intervention. For a home user to run a Web server behind a dynamically provisioned NAT
(i.e., one with an Internet-facing IP address that changes), additional services such
as dynamic DNS (see Chapter 11) may also be important.
References

[ANM09] S. Alcock, R. Nelson, and D. Miles, “Investigating the Impact of Service
Provider NAT on Residential Broadband Users,” University of Waikato, unpublished technical report, 2009.
[HBA09] D. Hayes, J. But, and G. Armitage, “Issues with Network Address Translation for SCTP,” Computer Communications Review, Jan. 2009.
[IDPCP] D. Wing, ed., S. Cheshire, M. Boucadair, R. Penno, and P. Selkirk, “Port
Control Protocol (PCP),” Internet draft-ietf-pcp-base, work in progress, July 2011.
[IDSNAT] R. Stewart, M. Tuexen, and I. Ruengeler, “Stream Control Transmission Protocol (SCTP) Network Address Translation,” Internet draft-ietf-behave-sctpnat, work in progress, June 2011.
[IDTI] J. Rosenberg, A. Keranen, B. Lowekamp, and A. Roach, “TCP Candidates with Interactive Connectivity Establishment (ICE),” Internet draft-ietf-mmusic-ice-tcp, work in progress, Sep. 2011.
[IGD] UPnP Forum, “Internet Gateway Devices (IGD) Standardized Device
Control Protocol V 1.0,” Nov. 2001.
[IGD2] UPnP Forum, “IGD:2 Improvements over IGD:1,” Mar. 2009.
[MBCB08] O. Maennel, R. Bush, L. Cittadini, and S. Bellovin, “A Better Approach
to Carrier-Grade-NAT,” Columbia University Technical Report CUCS-041-08,
Sept. 2008.
[RFC0959] J. Postel and J. Reynolds, “File Transfer Protocol,” Internet RFC 0959/
STD 0009, Oct. 1985.
[RFC1918] Y. Rekhter, B. Moskowitz, D. Karrenberg, G. J. de Groot, and E. Lear,
“Address Allocation for Private Internets,” Internet RFC 1918/BCP 0005, Feb. 1996.
[RFC1928] M. Leech, M. Ganis, Y. Lee, R. Kuris, D. Koblas, and L. Jones, “SOCKS
Protocol Version 5,” Internet RFC 1928, Mar. 1996.
[RFC2616] R. Fielding, J. Gettys, J. Mogul, H. Frystyk, L. Masinter, P. Leach, and
T. Berners-Lee, “Hypertext Transfer Protocol—HTTP/1.1,” Internet RFC 2616,
June 1999.
[RFC2637] K. Hamzeh, G. Pall, W. Verthein, J. Taarud, W. Little, and G. Zorn,
“Point-to-Point Tunneling Protocol (PPTP),” Internet RFC 2637 (informational),
July 1999.
[RFC2766] G. Tsirtsis and P. Srisuresh, “Network Address Translation—Protocol
Translation (NAT-PT),” Internet RFC 2766 (obsoleted by [RFC4966]), Feb. 2000.
[RFC3022] P. Srisuresh and K. Egevang, “Traditional IP Network Address Translator (Traditional NAT),” Internet RFC 3022 (informational), Jan. 2001.
[RFC3027] M. Holdrege and P. Srisuresh, “Protocol Complications with the IP
Network Address Translator,” Internet RFC 3027 (informational), Jan. 2001.
[RFC3235] D. Senie, “Network Address Translator (NAT)-Friendly Application
Design Guidelines,” Internet RFC 3235 (informational), Jan. 2002.
[RFC3264] J. Rosenberg and H. Schulzrinne, “An Offer/Answer Model with
Session Description Protocol (SDP),” Internet RFC 3264, June 2002.
[RFC3424] L. Daigle, ed., and IAB, “IAB Considerations for UNilateral Self-Address Fixing (UNSAF) across Network Address Translation,” Internet RFC
3424 (informational), Nov. 2002.
[RFC3550] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson, “RTP: A
Transport Protocol for Real-Time Applications,” Internet RFC 3550/STD 0064,
July 2003.
[RFC3711] M. Baugher, D. McGrew, M. Naslund, E. Carrara, and K. Norrman,
“The Secure Real-Time Transport Protocol (SRTP),” Internet RFC 3711, Mar. 2004.
[RFC4193] R. Hinden and B. Haberman, “Unique Local IPv6 Unicast Addresses,”
Internet RFC 4193, Oct. 2005.
[RFC4213] E. Nordmark and R. Gilligan, “Basic Transition Mechanisms for IPv6
Hosts and Routers,” Internet RFC 4213, Oct. 2005.
[RFC4340] E. Kohler, M. Handley, and S. Floyd, “Datagram Congestion Control
Protocol (DCCP),” Internet RFC 4340, Mar. 2006.
[RFC4605] B. Fenner, H. He, B. Haberman, and H. Sandick, “Internet Group Management Protocol (IGMP)/Multicast Listener Discovery (MLD)-Based Multicast
Forwarding (IGMP/MLD Proxying),” Internet RFC 4605, Aug. 2006.
[RFC4787] F. Audet, ed., and C. Jennings, “Network Address Translation (NAT)
Behavioral Requirements for Unicast UDP,” Internet RFC 4787/BCP 0127, Jan. 2007.
[RFC4864] G. Van de Velde, T. Hain, R. Droms, B. Carpenter, and E. Klein, “Local
Network Protection for IPv6,” Internet RFC 4864 (informational), May 2007.
[RFC4960] R. Stewart, ed., “Stream Control Transmission Protocol,” Internet RFC
4960, Sept. 2007.
[RFC4966] C. Aoun and E. Davies, “Reasons to Move the Network Address
Translator-Protocol Translator (NAT-PT) to Historic Status,” Internet RFC 4966
(informational), July 2007.
[RFC5128] P. Srisuresh, B. Ford, and D. Kegel, “State of Peer-to-Peer (P2P) Communication across Network Address Translators (NATs),” Internet RFC 5128
(informational), Mar. 2008.
[RFC5135] D. Wing and T. Eckert, “IP Multicast Requirements for a Network
Address Translator (NAT) and a Network Address Port Translator (NAPT),”
Internet RFC 5135/BCP 0135, Feb. 2008.
[RFC5245] J. Rosenberg, “Interactive Connectivity Establishment (ICE): A
Protocol for Network Address Translator (NAT) Traversal for Offer/Answer
Protocols,” Internet RFC 5245, Apr. 2010.
[RFC5382] S. Guha, ed., K. Biswas, B. Ford, S. Sivakumar, and P. Srisuresh, “NAT
Behavioral Requirements for TCP,” Internet RFC 5382/BCP 0142, Oct. 2008.
[RFC5389] J. Rosenberg, R. Mahy, P. Matthews, and D. Wing, “Session Traversal
Utilities for NAT (STUN),” Internet RFC 5389, Oct. 2008.
[RFC5411] J. Rosenberg, “A Hitchhiker’s Guide to the Session Initiation Protocol
(SIP),” Internet RFC 5411 (informational), Feb. 2009.
[RFC5508] P. Srisuresh, B. Ford, S. Sivakumar, and S. Guha, “NAT Behavioral
Requirements for ICMP,” Internet RFC 5508/BCP 0148, Apr. 2009.
[RFC5571] B. Storer, C. Pignataro, ed., M. Dos Santos, B. Stevant, ed., L. Toutain,
and J. Tremblay, “Softwire Hub and Spoke Deployment Framework with Layer
Two Tunneling Protocol Version 2 (L2TPv2),” Internet RFC 5571, June 2009.
[RFC5596] G. Fairhurst, “Datagram Congestion Control Protocol (DCCP) Simultaneous-Open Technique to Facilitate NAT/Middlebox Traversal,” Internet RFC
5596, Sept. 2009.
[RFC5597] R. Denis-Courmont, “Network Address Translation (NAT) Behavioral
Requirements for the Datagram Congestion Control Protocol,” Internet RFC
5597/BCP 0150, Sept. 2009.
[RFC5626] C. Jennings, R. Mahy, and F. Audet, eds., “Managing Client-Initiated
Connections in the Session Initiation Protocol (SIP),” Internet RFC 5626, Oct. 2009.
[RFC5761] C. Perkins and M. Westerlund, “Multiplexing RTP Data and Control
Packets on a Single Port,” Internet RFC 5761, Apr. 2010.
[RFC5766] R. Mahy, P. Matthews, and J. Rosenberg, “Traversal Using Relays
around NAT (TURN): Relay Extensions to Session Traversal Utilities for NAT
(STUN),” Internet RFC 5766, Apr. 2010.
[RFC5780] D. MacDonald and B. Lowekamp, “NAT Behavior Discovery Using
Session Traversal Utilities for NAT (STUN),” Internet RFC 5780 (experimental),
May 2010.
[RFC5902] D. Thaler, L. Zhang, and G. Lebovitz, “IAB Thoughts on IPv6 Network
Address Translation,” Internet RFC 5902 (informational), July 2010.
[RFC5928] M. Petit-Huguenin, “Traversal Using Relays around NAT (TURN)
Resolution Mechanism,” Internet RFC 5928, Aug. 2010.
[RFC6052] C. Bao, C. Huitema, M. Bagnulo, M. Boucadair, and X. Li, “IPv6
Addressing of IPv4/IPv6 Translators,” Internet RFC 6052, Oct. 2010.
[RFC6062] S. Perreault, ed., and J. Rosenberg, “Traversal Using Relays around
NAT (TURN) Extensions for TCP Allocations,” Internet RFC 6062, Nov. 2010.
[RFC6120] P. Saint-Andre, “Extensible Messaging and Presence Protocol (XMPP):
Core,” Internet RFC 6120, Mar. 2011.
[RFC6144] F. Baker, X. Li, C. Bao, and K. Yin, “Framework for IPv4/IPv6 Translation,” Internet RFC 6144 (informational), Apr. 2011.
[RFC6145] X. Li, C. Bao, and F. Baker, “IP/ICMP Translation Algorithm,” Internet
RFC 6145, Apr. 2011.
[RFC6146] M. Bagnulo, P. Matthews, and I. van Beijnum, “Stateful NAT64:
Network Address and Protocol Translation from IPv6 Clients to IPv4 Servers,”
Internet RFC 6146, Apr. 2011.
[RFC6156] G. Camarillo, O. Novo, and S. Perreault, ed., “Traversal Using Relays
around NAT (TURN) Extension for IPv6,” Internet RFC 6156, Apr. 2011.
[RFC6296] M. Wasserman and F. Baker, “IPv6-to-IPv6 Network Prefix Translation,” Internet RFC 6296 (experimental), June 2011.
[RFC6333] A. Durand, R. Droms, J. Woodyatt, and Y. Lee, “Dual-Stack Lite Broadband Deployments Following IPv4 Exhaustion,” Internet RFC 6333, Aug. 2011.
[RFC6334] D. Hankins and T. Mrugalski, “Dynamic Host Configuration Protocol
for IPv6 (DHCPv6) Option for Dual-Stack Lite,” Internet RFC 6334, Aug. 2011.
[XEP-0176] J. Beda, S. Ludwig, P. Saint-Andre, J. Hildebrand, S. Egan, and R.
McQueen, “XEP-0176: Jingle ICE-UDP Transport Method,” XMPP Standards
Foundation, June 2009.
[XIDAD] P. Gauthier, J. Cohen, M. Dunsmuir, and C. Perkins, “Web Proxy
Auto-Discovery Protocol,” Internet draft-ietf-wrec-wpad-01, work in progress
(expired), June 1999.
[XIDMU] Y. Goland, “Multicast and Unicast UDP HTTP Messages,” Internet
draft-goland-http-udp-01.txt, work in progress (expired), Nov. 1999.
[XIDPMP] S. Cheshire, M. Krochmal, and K. Sekar, “NAT Port Mapping Protocol
(NAT-PMP),” Internet draft-cheshire-nat-pmp-03.txt, work in progress (expired),
Apr. 2008.
[XIDS] Y. Goland, T. Cai, P. Leach, Y. Gu, and S. Albright, “Simple Service Discovery Protocol/1.0 Operating without an Arbiter,” Internet draft-cai-ssdp-v1-03.txt,
work in progress (expired), Oct. 1999.
ICMPv4 and ICMPv6: Internet Control Message Protocol
The IP protocol alone provides no direct way for an end system to learn the fate
of IP packets that fail to make it to their destinations. In addition, IP provides no
direct way of obtaining diagnostic information (e.g., which routers are used along
a path or a method to estimate the round-trip time). To address these deficiencies,
a special protocol called the Internet Control Message Protocol (ICMP) [RFC0792]
[RFC4443] is used in conjunction with IP to provide diagnostics and control information related to the configuration of the IP protocol layer and the disposition of
IP packets. ICMP is often considered part of the IP layer itself, and it is required
to be present with any IP implementation. It uses the IP protocol for transport.
So, strictly speaking, it is neither a network nor a transport protocol but lies somewhere
between the two.
ICMP provides for the delivery of error and control messages that may require
attention. ICMP messages are usually acted on by the IP layer itself, by higher-layer transport protocols (e.g., TCP or UDP), and in some cases by user applications. Note that ICMP does not provide reliability for IP. Rather, it indicates certain
classes of failures and configuration information. The most common cause of
packet drops (buffer overrun at a router) does not elicit any ICMP information.
Other protocols, such as TCP, handle such situations.
Because of the ability of ICMP to affect the operation of important system
functions and obtain configuration information, hackers have used ICMP messages in a large number of attacks. As a result of concerns about such attacks,
network administrators often arrange to block ICMP messages with firewalls,
especially at border routers. If ICMP is blocked, however, a number of common
diagnostic utilities (e.g., ping, traceroute) do not work properly [RFC4890].
When discussing ICMP, we shall use the term ICMP to refer to ICMP in general, and the terms ICMPv4 and ICMPv6 to refer specifically to the versions of
ICMP used with IPv4 and IPv6, respectively. As we shall see, ICMPv6 plays a far
more important role in the operation of IPv6 than ICMPv4 does for IPv4.
[RFC0792] contains the official base specification of ICMPv4, which is refined
and clarified in [RFC1122] and [RFC1812]. [RFC4443] provides the base specification for ICMPv6. [RFC4884] provides a method to add extension objects to certain ICMP messages. This facility is used for holding Multiprotocol Label Switching
(MPLS) information [RFC4950] and for indicating which interface and next hop
a router would use in forwarding a particular datagram [RFC5837]. [RFC5508]
gives standard behavioral characteristics of ICMP through NATs (also discussed
in Chapter 7). In IPv6, ICMPv6 is used for several purposes beyond simple error
reporting and signaling. It is used for Neighbor Discovery (ND) [RFC4861], which
plays the same role as ARP does for IPv4 (see Chapter 4). It also includes the
Router Discovery function used for configuring hosts (see Chapter 6) and multicast
address management (see Chapter 9). Finally, it is also used to help manage handoffs in Mobile IPv6.
Encapsulation in IPv4 and IPv6
ICMP messages are encapsulated for transmission within IP datagrams, as shown
in Figure 8-1.
Figure 8-1
Encapsulation of ICMP messages in IPv4 and IPv6. The ICMP header contains a checksum covering the ICMP data area. In ICMPv6, the checksum also covers the Source and
Destination IPv6 Address, Length, and Next Header fields in the IPv6 header.
Section 8.2 ICMP Messages
In IPv4, a Protocol field value of 1 indicates that the datagram carries ICMPv4.
In IPv6, the ICMPv6 message may begin after zero or more extension headers. The
last extension header before the ICMPv6 header includes a Next Header field with
value 58. ICMP messages may be fragmented like other IP datagrams (see Chapter
10), although this is not common.
Figure 8-2 shows the format of both ICMPv4 and ICMPv6 messages. The first
4 bytes have the same format for all messages, but the remainder differ from one
message to the next.
Figure 8-2
All ICMP messages begin with 8-bit Type and Code fields, followed by a 16-bit Checksum
that covers the entire message. The type and code values are different for ICMPv4 and
ICMPv6.
In ICMPv4, 42 different values are reserved for the Type field [ICMPTYPES],
which identify the particular message. Only about 8 of these are in regular
use, however. We will show the exact format of each commonly used message
throughout the chapter. Many types of ICMP messages also use different values
of the Code field to further specify the meaning of the message. The Checksum
field covers the entire ICMPv4 message; in ICMPv6 it also covers a pseudo-header
derived from portions of the IPv6 header (see Section 8.1 of [RFC2460]). The algorithm used for computing the checksum is the same as that used for the IP header
checksum defined in Chapter 5. Note that this is our first example of an end-to-end
checksum. It is carried all the way from the sender of the ICMP message to the
final recipient. In contrast, the IPv4 header checksum discussed in Chapter 5 is
changed at every router hop. If an ICMP implementation receives an ICMP message with a bad checksum, the message is discarded; there is no ICMP message
to indicate a bad checksum in a received ICMP message. Recall that the IP layer
has no protection on the payload portion of the datagram. If ICMP did not include
a checksum, the contents of the ICMP message might not be correct, leading to
incorrect system behavior.
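The ones'-complement checksum algorithm can be sketched in a few lines of Python (a simplified illustration only; the ICMPv6 pseudo-header contribution described above is omitted here):

```python
import struct

def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum of ones'-complement 16-bit words."""
    if len(data) % 2:                    # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

# Build an ICMPv4 Echo Request header: Type=8, Code=0, Checksum=0 (for now),
# Identifier=0x1234 and Sequence=1 (arbitrary example values).
header = struct.pack("!BBHHH", 8, 0, 0, 0x1234, 1)
csum = internet_checksum(header)
packet = struct.pack("!BBHHH", 8, 0, csum, 0x1234, 1)
# A correctly checksummed message sums to 0 when the Checksum field is included.
assert internet_checksum(packet) == 0
```

Note the useful property exploited in the last line: summing a message that already contains its own checksum yields 0, which is how receivers validate an incoming ICMP message.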
ICMP Messages
We now look at ICMP messages in general and the most commonly used ones
in more detail. ICMP messages are grouped into two major categories: those
messages relating to problems with delivering IP datagrams (called error messages), and those related to information gathering and configuration (called query
or informational messages).
ICMPv4 Messages
For ICMPv4, the informational messages include Echo Request and Echo Reply
(types 8 and 0, respectively), and Router Advertisement and Router Solicitation
(types 9 and 10, respectively, together called Router Discovery). The most common
error message types are Destination Unreachable (type 3), Redirect (type 5), Time
Exceeded (type 11), and Parameter Problem (type 12). Table 8-1 lists the message
types defined for standard ICMPv4 messages.
Table 8-1  The standard ICMPv4 message types, as determined by the Type field*

Type       Official Name            Reference  E/I  Description/Use
0 (*)      Echo Reply               [RFC0792]  I    Echo (ping) reply; returns data
3 (*)(+)   Destination Unreachable  [RFC0792]  E    Unreachable host/protocol
4          Source Quench            [RFC0792]  E    Indicates congestion (deprecated)
5 (*)      Redirect                 [RFC0792]  E    Indicates alternate router should be used
8 (*)      Echo                     [RFC0792]  I    Echo (ping) request (data optional)
9          Router Advertisement     [RFC1256]  I    Indicates router addresses/preferences
10         Router Solicitation      [RFC1256]  I    Requests Router Advertisement
11 (*)(+)  Time Exceeded            [RFC0792]  E    Resource exhausted (e.g., IPv4 TTL)
12 (*)(+)  Parameter Problem        [RFC0792]  E    Malformed packet or header

*Types marked with asterisks (*) are the most common. Those marked with a plus (+) may contain [RFC4884]
extension objects. In the fourth column, E is for error messages and I indicates query/informational messages.
For the commonly used messages (those with the asterisks next to the type
number in Table 8-1), the code numbers shown in Table 8-2 are used. Some messages are capable of carrying extended information [RFC4884] (those marked in
Table 8-1 with the plus sign).
The official list of message types is maintained by IANA [ICMPTYPES].
Many of these message types were defined by the original ICMPv4 specification [RFC0792] in 1981, prior to any significant experience using them. Additional
experience and the development of other protocols (e.g., DHCP) have caused
many of the messages defined at that time to fall out of use. When IPv6 (and ICMPv6)
was designed, this fact was understood, so a somewhat more rational arrangement of types and codes has been defined for ICMPv6.
Table 8-2  Common ICMPv4 message types that use code numbers in addition to 0. Although all of
these message types are relatively common, only a few of the codes are commonly used.

Type   Code  Official Name                                   Description/Use
3 (*)  0     Net Unreachable                                 No route (at all) to destination
3 (*)  1     Host Unreachable                                Known but unreachable host
3      2     Protocol Unreachable                            Unknown (transport) protocol
3 (*)  3     Port Unreachable                                Unknown/unused (transport) port
3 (*)  4     Fragmentation Needed and Don't                  Needed fragmentation prohibited by DF
             Fragment Was Set (PTB message)                  bit; used by PMTUD [RFC1191]
3      5     Source Route Failed                             Intermediary hop not reachable
3      6     Destination Network Unknown                     Deprecated [RFC1812]
3      7     Destination Host Unknown                        Destination does not exist
3      8     Source Host Isolated                            Deprecated [RFC1812]
3      9     Communication with Destination                  Deprecated [RFC1812]
             Network Administratively Prohibited
3      10    Communication with Destination                  Deprecated [RFC1812]
             Host Administratively Prohibited
3      11    Destination Network Unreachable                 Type of service not available (net)
             for Type of Service
3      12    Destination Host Unreachable for                Type of service not available (host)
             Type of Service
3 (*)  13    Communication Administratively                  Communication prohibited by filtering
             Prohibited
3      14    Host Precedence Violation                       Precedence disallowed for src/dest/port
3      15    Precedence Cutoff in Effect                     Below minimum ToS [RFC1812]
5 (*)  0     Redirect Datagram for the Network               Indicates alternate router
             (or Subnet)
5      1     Redirect Datagram for the Host                  Indicates alternate router (host)
5      2     Redirect Datagram for the Type of               Indicates alternate router (ToS/net)
             Service and Network
5      3     Redirect Datagram for the Type of               Indicates alternate router (ToS/host)
             Service and Host
9      0     Normal Router Advertisement                     Router's address and configuration
9      16    Does Not Route Common Traffic                   With Mobile IP [RFC5944], router does not
                                                             route ordinary packets
11 (*) 0     Time to Live Exceeded in Transit                Hop limit/TTL exceeded
11     1     Fragment Reassembly Time Exceeded               Not all fragments of datagram arrived
                                                             before reassembly timer expired
12 (*) 0     Pointer Indicates the Error                     Byte offset (pointer) indicates first problem
12     1     Missing a Required Option                       A required option is missing
12     2     Bad Length                                      Packet had invalid Total Length field
ICMPv6 Messages
Table 8-3 shows the message types defined for ICMPv6. Note that ICMPv6 is
responsible not only for error and informational messages but also for a great deal
of IPv6 router and host configuration.
Table 8-3  In ICMPv6, error messages have message types from 0 to 127. Informational messages
have message types from 128 to 255. The plus (+) notation indicates that the message may contain
an extension structure. Reserved, unassigned, experimental, and deprecated values are not shown.

Type      Official Name                              Reference  Description/Use
1 (+)     Destination Unreachable                    [RFC4443]  Unreachable host, port, protocol
2         Packet Too Big (PTB)                       [RFC4443]  Fragmentation required
3 (+)     Time Exceeded                              [RFC4443]  Hop limit exhausted or
                                                                reassembly timed out
4         Parameter Problem                          [RFC4443]  Malformed packet or header
100, 101  Reserved for private experimentation       [RFC4443]  Reserved for experiments
127       Reserved for expansion of ICMPv6           [RFC4443]  Hold for more error messages
          error messages
128       Echo Request                               [RFC4443]  ping request; may contain data
129       Echo Reply                                 [RFC4443]  ping response; returns data
130       Multicast Listener Query                   [RFC2710]  Queries multicast subscribers
131       Multicast Listener Report                  [RFC2710]  Multicast subscriber report (v1)
132       Multicast Listener Done                    [RFC2710]  Multicast unsubscribe
                                                                message (v1)
133       Router Solicitation (RS)                   [RFC4861]  IPv6 RS with Mobile IPv6
134       Router Advertisement (RA)                  [RFC4861]  IPv6 RA with Mobile IPv6
135       Neighbor Solicitation (NS)                 [RFC4861]  IPv6 Neighbor Discovery
136       Neighbor Advertisement (NA)                [RFC4861]  IPv6 Neighbor Discovery
137       Redirect Message                           [RFC4861]  Use alternative next-hop router
141       Inverse Neighbor Discovery                 [RFC3122]  Inverse Neighbor Discovery
          Solicitation Message                                  request: requests IPv6 addresses
                                                                given link-layer address
142       Inverse Neighbor Discovery                 [RFC3122]  Inverse Neighbor Discovery
          Advertisement Message                                 response: reports IPv6 addresses
                                                                given link-layer address
143       Version 2 Multicast Listener Report        [RFC3810]  Multicast subscriber report (v2)
144       Home Agent Address Discovery               [RFC6275]  Requests Mobile IPv6 HA
          Request Message                                       address; sent by mobile node
145       Home Agent Address Discovery Reply         [RFC6275]  Contains MIPv6 HA address;
                                                                sent by eligible HA on home network
146       Mobile Prefix Solicitation                 [RFC6275]  Request home prefix while away
147       Mobile Prefix Advertisement                [RFC6275]  Provides prefix from HA to
                                                                mobile node
148       Certification Path Solicitation Message    [RFC3971]  Secure Neighbor Discovery
                                                                (SEND) request for a
                                                                certification path
149       Certification Path Advertisement Message   [RFC3971]  SEND response to certification
                                                                path request
151       Multicast Router Advertisement             [RFC4286]  Provides address of multicast router
152       Multicast Router Solicitation              [RFC4286]  Requests address of multicast router
153       Multicast Router Termination               [RFC4286]  Done using multicast router
154       FMIPv6 Messages                            [RFC5568]  MIPv6 fast handover messages
200, 201  Reserved for private experimentation       [RFC4443]  Reserved for experiments
255       Reserved for expansion of ICMPv6           [RFC4443]  Hold for more informational
          informational messages                                messages
Immediately apparent in this list is the separation between the first set of message types and the second set (i.e., those messages with types below 128 and those
at or above). In ICMPv6, as in ICMPv4, messages are grouped into the informational and error classes. In ICMPv6, however, all the error messages have a 0 in the
high-order bit of the Type field. Thus, ICMPv6 types 0 through 127 are all errors,
and types 128 through 255 are all informational. Many of the informational messages are request/reply pairs.
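The high-order-bit rule makes message classification trivial; as a one-line sketch (the function name is ours):

```python
def icmpv6_is_error(msg_type: int) -> bool:
    """ICMPv6 error messages have a 0 in the high-order bit of the Type field."""
    return (msg_type & 0x80) == 0

assert icmpv6_is_error(1)          # Destination Unreachable: error
assert not icmpv6_is_error(128)    # Echo Request: informational
```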
In comparing the common ICMPv4 messages with the ICMPv6 standard messages, we conclude that some of the effort in designing ICMPv6 was to eliminate
the unused messages from the original specification while retaining the useful
ones. Following this approach, ICMPv6 also makes use of the Code field, primarily
to refine the meanings of certain error messages. In Table 8-4 we list those standard ICMPv6 message types (i.e., Destination Unreachable, Time Exceeded, and
Parameter Problem) for which more than the code value 0 has been defined.
Table 8-4  ICMPv6 standard message types with codes in addition to 0 assigned

Type  Code  Official Name                          Description/Use
1     0     No Route to Destination                Route not present
1     1     Communication with Destination         Policy (e.g., firewall) prohibited
            Administratively Prohibited
1     2     Beyond Scope of Source Address         Destination scope exceeds source's
1     3     Address Unreachable                    Used if codes 0–2 are not appropriate
1     4     Port Unreachable                       No transport entity listening on port
1     5     Source Address Failed Ingress/         Ingress/egress policy violation
            Egress Policy
1     6     Reject Route to Destination            Specific reject route to destination
3     0     Hop Limit Exceeded in Transit          Hop Limit field decremented to 0
3     1     Reassembly Time Exceeded               Unable to reassemble in limited time
4     0     Erroneous Header Field Found           General header processing error
4     1     Unrecognized Next Header               Unknown Next Header field value
4     2     Unrecognized IPv6 Option               Unknown Hop-by-Hop or Destination option
In addition to the Type and Code fields that define basic functions in ICMPv6, a
large number of standard options are also supported, some of which are required.
This distinguishes ICMPv6 from ICMPv4 (ICMPv4 does not have options). Currently, standard ICMPv6 options are defined for use only with the ICMPv6 ND
messages (types 135 and 136) using the Option Format field discussed in [RFC4861].
We discuss these options when exploring ND in more detail in Section 8.5.
Processing of ICMP Messages
In ICMP, the processing of incoming messages varies from system to system. Generally speaking, the incoming informational requests are handled automatically
by the operating system, and the error messages are delivered to user processes
or to a transport protocol such as TCP [RFC5461]. The processes may choose to
act on them or ignore them. Exceptions to this general rule include the Redirect
message and the Destination Unreachable—Fragmentation Required messages.
The former results in an automatic update to the host’s routing table, whereas the
latter is used in the path MTU discovery (PMTUD) mechanism, which is generally
implemented by the transport-layer protocols such as TCP. In ICMPv6 the handling of messages has been tightened somewhat. The following rules are applied
when processing incoming ICMPv6 messages [RFC4443]:
1. Unknown ICMPv6 error messages must be passed to the upper-layer process that produced the datagram causing the error (if possible).
2. Unknown ICMPv6 informational messages are dropped.
Section 8.3 ICMP Error Messages
3. ICMPv6 error messages include as much of the original (“offending”) IPv6
datagram that caused the error as will fit without making the error message datagram exceed the minimum IPv6 MTU (1280 bytes).
4. When processing ICMPv6 error messages, the upper-layer protocol type is
extracted from the original or “offending” packet (contained in the body of
the ICMPv6 error message) and used to select the appropriate upper-layer
process. If this is not possible, the error message is silently dropped after
any IPv6-layer processing.
5. There are special rules for handling errors (see Section 8.3).
6. An IPv6 node must limit the rate of ICMPv6 error messages it sends. There
are a variety of ways of implementing the rate-limiting function, including
the token bucket approach mentioned in Section 8.3.
ICMP Error Messages
The distinction between the error and informational classes of ICMP messages mentioned in the previous section is important because certain restrictions are placed
on the generation of ICMPv4 error messages by [RFC1812] and on the generation
of ICMPv6 error messages by [RFC4443] that do not apply to queries. In particular,
an ICMP error message is not to be sent in response to any of the following messages: another ICMP error message, datagrams with bad headers (e.g., bad checksum), IP-layer broadcast/multicast datagrams, datagrams encapsulated in link-layer
broadcast or multicast frames, datagrams with an invalid or network zero source
address, or any fragment other than the first. The reason for imposing these restrictions on the generation of ICMP errors is to limit the creation of so-called broadcast
storms, a scenario in which the generation of a small number of messages creates an
unwanted traffic cascade (e.g., by generating error responses in response to error
responses, indefinitely). These rules can be summarized as follows:
An ICMPv4 error message is never generated in response to
• An ICMPv4 error message. (An ICMPv4 error message may, however, be
generated in response to an ICMPv4 query message.)
• A datagram destined for an IPv4 broadcast address or an IPv4 multicast
address (formerly known as a class D address).
• A datagram sent as a link-layer broadcast.
• A fragment other than the first.
• A datagram whose source address does not define a single host. This means
that the source address cannot be a zero address, a loopback address, a
broadcast address, or a multicast address.
ICMPv6 is similar. An ICMPv6 error message is never generated in response to
• An ICMPv6 error message
• An ICMPv6 Redirect message
• A packet destined for an IPv6 multicast address, with two exceptions:
– The Packet Too Big (PTB) message
– The Parameter Problem message (code 2)
• A packet sent as a link-layer multicast (with the exceptions noted previously)
• A packet sent as a link-layer broadcast (with the exceptions noted previously)
• A packet whose source address does not uniquely identify a single node.
This means that the source address cannot be an unspecified address, an
IPv6 multicast address, or any address known by the sender to be an anycast address.
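The ICMPv6 rules above can be sketched as a single predicate. This is an illustrative simplification with parameter names of our own choosing; a real stack consults its neighbor and address state rather than receiving boolean flags:

```python
def may_send_icmpv6_error(is_icmpv6_error_msg: bool,
                          is_redirect: bool,
                          dst_is_multicast: bool,
                          link_layer_bcast_or_mcast: bool,
                          src_is_unspecified: bool,
                          src_is_multicast: bool,
                          src_known_anycast: bool,
                          is_ptb: bool = False,
                          is_param_problem_code2: bool = False) -> bool:
    """Sketch of the [RFC4443] rules for when an ICMPv6 error may be sent."""
    if is_icmpv6_error_msg or is_redirect:
        return False                       # never respond to errors/Redirects
    # PTB and Parameter Problem (code 2) are the two multicast exceptions.
    exception = is_ptb or is_param_problem_code2
    if (dst_is_multicast or link_layer_bcast_or_mcast) and not exception:
        return False
    if src_is_unspecified or src_is_multicast or src_known_anycast:
        return False                       # source must identify a single node
    return True
```

For example, a too-large packet sent to a multicast group may still elicit a PTB message, which is essential for path MTU discovery with multicast destinations.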
In addition to the rules governing the conditions under which ICMP messages
are generated, there is also a rule that limits the overall ICMP traffic level from a
single sender. In [RFC4443], a recommendation for rate-limiting ICMP messages
is to use a token bucket. With a token bucket, a “bucket” holds a maximum number
(B) of “tokens,” each of which allows a certain number of messages to be sent.
The bucket is periodically filled with new tokens (at rate N) and drained by 1 for
each message sent. Thus, a token bucket (or token bucket filter, as it is often called)
is characterized by the parameters (B, N). For small or midsize devices, [RFC4443]
provides an example token bucket using the parameters (10, 10). Token buckets
are a common mechanism used in protocol implementations to limit bandwidth
utilization, and in many cases B and N are in byte units rather than message units.
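A token bucket limiter of this kind can be sketched as follows. This is an illustrative user-level sketch using the (10, 10) example parameters; real implementations live in the kernel and, as noted, often count bytes rather than messages:

```python
import time

class TokenBucket:
    """Token bucket with capacity B tokens and refill rate N tokens/second."""
    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity          # B: maximum tokens the bucket holds
        self.rate = rate                  # N: tokens added per second
        self.tokens = capacity            # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1              # spend one token per message sent
            return True
        return False                      # bucket empty: suppress the message

limiter = TokenBucket(capacity=10, rate=10)   # the (B, N) = (10, 10) example
sent = sum(limiter.allow() for _ in range(100))
# A burst of 100 candidate messages drains the bucket after about 10 sends.
```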
When an ICMP error message is sent, it contains a copy of the full IP header
from the “offending” or “original” datagram (i.e., the IP header of the datagram
that caused the error to be generated, including any IP options), plus any other
data from the original datagram’s IP payload area such that the generated IP/
ICMP datagram’s size does not exceed a specific value. For IPv4 this value is 576
bytes, and for IPv6 it is the IPv6 minimum MTU, which is at least 1280 bytes.
Including a portion of the payload from the original IP datagram lets the receiving ICMP module associate the message with one particular protocol (e.g., TCP
or UDP) from the Protocol or Next Header field in the IP header and one particular
user process (from the TCP or UDP port numbers that are in the TCP or UDP
header contained in the first 8 bytes of the IP datagram payload area).
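The demultiplexing this enables can be sketched as a short parse. This is an illustration only: it assumes an ICMPv4 error whose second 4 bytes are unused, an embedded IPv4 header, and an embedded TCP or UDP header (whose port numbers occupy the first 4 bytes); the example message bytes are fabricated:

```python
import struct

def demux_icmpv4_error(icmp_msg: bytes):
    """Recover (protocol, src_port, dst_port) from an ICMPv4 error message."""
    inner_ip = icmp_msg[8:]              # skip 4-byte ICMP header + 4 unused bytes
    ihl = (inner_ip[0] & 0x0F) * 4       # embedded IPv4 header length, in bytes
    protocol = inner_ip[9]               # Protocol field of the offending datagram
    src_port, dst_port = struct.unpack("!HH", inner_ip[ihl:ihl + 4])
    return protocol, src_port, dst_port

# Fabricated example: Port Unreachable (type 3, code 3) quoting a UDP datagram.
inner = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 44, 0, 0, 128, 17, 0,
                    b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02")  # 20-byte IPv4 hdr
inner += struct.pack("!HHHH", 3871, 69, 24, 0)                 # 8-byte UDP hdr
msg = struct.pack("!BBHI", 3, 3, 0, 0) + inner                 # ICMP hdr + unused
assert demux_icmpv4_error(msg) == (17, 3871, 69)   # UDP, port 3871 -> 69 (tftp)
```

The recovered protocol number and ports are exactly what the receiving host needs to hand the error to the right transport and, from there, the right user process.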
Before the publication of [RFC1812], the ICMP specification required only the
first 8 bytes of the offending IP datagram to be included (because this is enough
to determine the port number for UDP and TCP; see Chapters 10 and 12), but as
more complex protocol layerings have become popular (such as IP being encapsulated in IP), additional information is now needed for the effective diagnosis of
problems. In addition, several error messages may include extensions. We begin
by briefly discussing the extension method, and then we discuss each of the more
important ICMP error messages.
Extended ICMP and Multipart Messages
[RFC4884] specifies a method for extending the utility of ICMP messages by allowing an extension data structure to be appended to them. The extension structure
includes an extension header and extension objects that may contain a variable
amount of data, as illustrated in Figure 8-3.
Figure 8-3
Extended ICMPv4 and ICMPv6 messages include a 32-bit extension header and zero or more
associated objects. Each object includes a fixed-size header and a variable-length data area. For
compatibility, the primary ICMP payload area is at least 128 bytes.
The Length field is repurposed from the sixth byte of the ICMPv4 header and
the fifth byte of the ICMPv6 header. (These bytes had previously been reserved
with value 0.) In ICMPv4, it indicates the offending datagram size in 32-bit word
units. For ICMPv6, it is in 64-bit units. These datagram portions are padded with
zeros as necessary to be 32-bit- and 64-bit-aligned, respectively. When extensions
are used, the ICMP payload area containing the original datagram must be at least
128 bytes long.
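The Length computation can be sketched as follows (a hypothetical helper; the function name and return convention are ours):

```python
def icmp_length_field(original_bytes_included: int, is_icmpv6: bool):
    """Return (Length field value, padded size in bytes) for extended ICMP.

    ICMPv4 counts the included original datagram in 32-bit words; ICMPv6
    uses 64-bit units. The included portion is zero-padded to that alignment.
    """
    unit = 8 if is_icmpv6 else 4
    padded = -(-original_bytes_included // unit) * unit   # round up to unit
    return padded // unit, padded

# 131 bytes of offending datagram: ICMPv4 -> Length 33 (132 bytes after padding),
# ICMPv6 -> Length 17 (136 bytes after padding).
```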
The extension structure may be used with ICMPv4 Destination Unreachable,
Time Exceeded, and Parameter Problem messages as well as ICMPv6 Destination
Unreachable and Time Exceeded messages. We will look at each of these in some
detail in the following sections.
Destination Unreachable (ICMPv4 Type 3, ICMPv6 Type 1) and Packet Too Big
(ICMPv6 Type 2)
We now look more closely at one of the more common ICMP message types, Destination Unreachable. Messages of this type are used to indicate that a datagram
could not be delivered all the way to its destination because of either a problem in
transit or the lack of a receiver interested in receiving it. Although 16 different codes
are defined for this message in ICMPv4, only 4 are commonly used. These include
Host Unreachable (code 1), Port Unreachable (code 3), Fragmentation Required/
Don’t-Fragment Specified (code 4), and Communication Administratively Prohibited (code 13). In ICMPv6, the Destination Unreachable message is type 1 with
seven possible code values. In ICMPv6, as compared with IPv4, the Fragmentation
Required message has been replaced by an entirely different type (type 2), but the
usage is very similar to the corresponding ICMP Destination Unreachable message,
so we discuss it here. In ICMPv6 this is called the Packet Too Big (PTB) message. We
will use the simpler ICMPv6 PTB terminology from here onward to refer to either
the ICMPv4 (type 3, code 4) message or the ICMPv6 (type 2, code 0) message.
The formats for all of the Destination Unreachable messages specified for
ICMPv4 and ICMPv6 are shown in Figure 8-4. For Destination Unreachable messages, the Type field is 3 for ICMPv4 and 1 for ICMPv6. The Code field indicates the
particular item or reason for the reachability failure. We now look at each of these
messages in detail.

ICMPv4 Host Unreachable (Code 1) and ICMPv6 Address Unreachable (Code 3)
This form of the Destination Unreachable message is generated by a router or
host when it is required to send an IP datagram to a host using direct delivery
(see Chapter 5) but for some reason cannot reach the destination. This situation
may arise, for example, because the last-hop router is attempting to send an ARP
request to a host that is either missing or down. This situation is explored in Chapter 4, which describes ARP. For ICMPv6, which uses a somewhat different mechanism for detecting unresponsive hosts, this message can be the result of a failure
in the ND process (see Section 8.5).
Figure 8-4
The ICMP Destination Unreachable messages in ICMPv4 (left) and ICMPv6 (right). The Length
field, present in extended ICMP implementations that conform to [RFC4884], gives the number of
words used to hold the original datagram measured in 4-byte units (IPv4) or 8-byte units (IPv6).
An optional extension structure may be included. The ICMP field labeled various is used to hold
the next-hop MTU when the code value is 4, which is used by PMTUD. ICMPv6 uses a different
ICMPv6 PTB message (ICMPv6 type 2) for this purpose.

ICMPv6 No Route to Destination (Code 0)
This message refines the Host Unreachable message from ICMPv4 to differentiate those hosts not reachable because of failure of direct delivery and those that
cannot be reached because no route is present. This message is generated only in
cases where an arriving datagram must be forwarded without using direct delivery, but where no route entry exists to indicate what router to use as a next hop. As
we have seen, IP routers must have a valid next-hop forwarding entry for the
destination of any packet they receive if they are to forward it successfully.

ICMPv4 Communication Administratively Prohibited (Code 13) and ICMPv6
Communication with Destination Administratively Prohibited (Code 1)
In ICMPv4 and ICMPv6, these Destination Unreachable messages provide the ability to indicate that an administrative prohibition is preventing successful communication with the destination. This is typically the result of a firewall (see Chapter 7)
that intentionally drops traffic that fails to comply with some operational policy
enforced by the router that sent the ICMP error. In many cases, the fact that there is
a special policy to drop traffic should not be advertised, so it is generally possible
to disable the generation of these messages by either silently discarding incoming
packets or generating some other ICMP error message instead.

ICMPv4 Port Unreachable (Code 3) and ICMPv6 Port Unreachable (Code 4)
The Port Unreachable message is generated when an incoming datagram is destined for an application that is not ready to receive it. This occurs most commonly
in conjunction with UDP (see Chapter 10), when a message is sent to a port number
that is not in use by any server process. If UDP receives a datagram and the destination port does not correspond to a port that some process has in use, UDP
responds with an ICMP Port Unreachable message.
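On Linux, a connected UDP socket surfaces the returned ICMP Port Unreachable as an ECONNREFUSED error, so the effect is easy to observe from user code. The following is a sketch (the function name is ours, and the behavior varies by OS; unconnected sockets never see the error, a point we return to below):

```python
import socket

def udp_port_refused(host: str, port: int, timeout: float = 1.0):
    """Probe a UDP port; True means an ICMP Port Unreachable came back.

    Sketch only: relies on systems (e.g., Linux) that map the ICMP error
    to ECONNREFUSED on a connected UDP socket. Returns None on timeout
    (no answer: filtered, rate-limited, or the error was simply lost).
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.connect((host, port))   # connect() enables ICMP error delivery
        s.send(b"probe")
        try:
            s.recv(1)             # the queued error is reported on this call
        except ConnectionRefusedError:
            return True           # ICMP Port Unreachable was received
        except socket.timeout:
            return None           # no response within the timeout
        return False              # something actually answered the probe
```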
We can illustrate the operation of ICMPv4 Port Unreachable messages using
the Trivial File Transfer Protocol (TFTP) [RFC1350] client on Windows or Linux while
watching the packet exchange using tcpdump. The well-known UDP port for the
TFTP service is 69. However, while the TFTP client is available on many systems,
most systems do not run TFTP servers. Therefore, it is easy to see what happens
when we try to access a nonexistent server. In the example shown in Listing 8-1,
we execute the TFTP client, called tftp, on a Windows machine and attempt to
fetch a file from a Linux machine. The –s option for tcpdump causes 1500 bytes
to be captured per packet; the –i eth1 option tells tcpdump to monitor traffic on
the Ethernet interface named eth1; the –vv option causes additional descriptive
output to be included; and the expression icmp or port tftp causes traffic
matching either the TFTP port (69) or the ICMPv4 protocol to be included in the output.
Listing 8-1
TFTP client demonstrating an application timeout and ICMP rate limiting
C:\> tftp get /foo
Timeout occurred
try to fetch file "/foo" from
timeout occurred after about 9 seconds
Linux# tcpdump -s 1500 -i eth1 -vv icmp or port tftp
1 09:45:48.974812 IP (tos 0x0, ttl 128, id 9914, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
2 09:45:48.974812 IP (tos 0xc0, ttl 255, id 43734, offset 0, flags
[none], length: 72) > icmp 52: udp port tftp unreachable
for IP (tos 0x0, ttl 128, id 9914, offset 0,
flags [none], length: 44) > [udp sum ok] 16
RRQ "/foo" netascii
3 09:45:49.014812 IP (tos 0x0, ttl 128, id 9915, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
4 09:45:49.014812 IP (tos 0xc0, ttl 255, id 43735, offset 0, flags
[none], length: 72) > icmp 52: udp port tftp unreachable
for IP (tos 0x0, ttl 128, id 9915, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
5 09:45:49.014812 IP (tos 0x0, ttl 128, id 9916, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
6 09:45:49.014812 IP (tos 0xc0, ttl 255, id 43736, offset 0, flags
[none], length: 72) > icmp 52: udp port tftp unreachable
for IP (tos 0x0, ttl 128, id 9916, offset 0,
flags [none], length: 44) > [udp sum ok] 16
RRQ "/foo" netascii
7 09:45:49.024812 IP (tos 0x0, ttl 128, id 9917, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
8 09:45:49.024812 IP (tos 0xc0, ttl 255, id 43737, offset 0,
flags [none], length: 72) > icmp 52: udp port tftp unreachable
for IP (tos 0x0, ttl 128, id 9917, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
9 09:45:49.024812 IP (tos 0x0, ttl 128, id 9918, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
10 09:45:49.024812 IP (tos 0xc0, ttl 255, id 43738, offset 0,
flags [none], length: 72) > icmp 52: udp port tftp unreachable
for IP (tos 0x0, ttl 128, id 9918, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
11 09:45:49.034812 IP (tos 0x0, ttl 128, id 9919, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
12 09:45:49.034812 IP (tos 0xc0, ttl 255, id 43739, offset 0,
flags [none], length: 72) > icmp 52: udp port tftp unreachable
for IP (tos 0x0, ttl 128, id 9919, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
13 09:45:49.034812 IP (tos 0x0, ttl 128, id 9920, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
14 09:45:57.054812 IP (tos 0x0, ttl 128, id 22856, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
15 09:45:57.054812 IP (tos 0xc0, ttl 255, id 43740, offset 0,
flags [none], length: 72) > icmp 52: udp port tftp unreachable
for IP (tos 0x0, ttl 128, id 22856, offset 0,
flags [none], length: 44) > [udp sum ok]
RRQ "/foo" netascii
16 09:45:57.064812 IP (tos 0x0, ttl 128, id 22906, offset 0,
flags [none], length: 51) > [udp sum ok]
23 ERROR EUNDEF timeout on receive"
17 09:45:57.064812 IP (tos 0xc0, ttl 255, id 43741, offset 0,
flags [none], length: 79) > icmp 59: udp port tftp unreachable
for IP (tos 0x0, ttl 128, id 22906, offset 0,
flags [none], length: 51) > [udp sum ok]
23 ERROR EUNDEF timeout on receive"
Here we see a set of seven requests grouped very close to each other in time.
The initial request (identified as RRQ for file /foo) comes from UDP port 3871,
destined for the TFTP service (port 69). An ICMPv4 Port Unreachable message is
immediately returned (packet 2), but the TFTP client appears to ignore the message, sending another UDP datagram right away. This continues immediately
six more times. After waiting about another 8s, the client tries one last time and
finally gives up.
Note that the ICMPv4 messages are sent without any port number designation, and each 16-byte TFTP packet is from a specific port (3871) and to a specific
port (TFTP, equal to 69). The number 16 at the end of each TFTP read request
(RRQ) line is the length of the data in the UDP datagram. In this example, 16 is
the sum of the TFTP’s 2-byte opcode, the 5-byte null-terminated name /foo, and
the 9-byte null-terminated string netascii. The full ICMPv4 Unreachable message is depicted in Figure 8-5. It is 52 bytes long (not including the IPv4 header):
4 bytes for the basic ICMPv4 header, followed by 4 unused bytes (see Figure 8-5;
this implementation does not use [RFC4884] extensions), the 20-byte offending
IPv4 header, 8 bytes for the UDP header, and finally the remaining 16 bytes from
the original tftp application request (4 + 4 + 20 + 8 + 16 = 52).
Figure 8-5
An ICMPv4 Destination Unreachable – Port Unreachable error message contains as
much of the offending IPv4 datagram as possible such that the overall IPv4 datagram
does not exceed 576 bytes. In this example, there is enough room to include the entire
TFTP request message.
As mentioned previously, one reason ICMP includes the offending IP header
in error messages is that it helps the receiving system interpret the bytes that
follow the encapsulated IP header (the UDP header in this example). Because a copy of
the offending UDP header is included in the returned ICMP message, the source
and destination port numbers can be learned. It is this destination port number
(tftp, 69) that caused the ICMP Port Unreachable message to be generated. The
source port number (3871) can be used by the system receiving the ICMP error to
associate the error with a particular user process (the TFTP client in this example,
although we saw that this client does not make much use of the indication).
Note that after the seventh request (packet 13), no error is returned for some
time. The reason for this is that the Linux-based server performs rate limiting. That
is, it limits the number of ICMP messages of the same type that can be generated
in a period of time, as suggested by [RFC1812]. If we look at the elapsed time
between the initial error message (packet 2, with timestamp 48.974812) and the
final message before the 8s gap (packet 12, with timestamp 49.034812), we compute
that 60ms have elapsed. If we count the number of ICMP messages over this time,
we conclude that (6 messages/.06s) = 100 messages/s is the rate limit. This can be
verified by inspecting the values of the ICMPv4 rate mask and rate limit in Linux:
Linux% sysctl -a | grep icmp_rate
net.ipv4.icmp_ratemask = 6168
net.ipv4.icmp_ratelimit = 100
Here we see that several ICMPv4 messages are to be rate-limited, and that the
rate limit for all of them is 100 (measured in messages per second). The ratemask
variable indicates which messages have the limit applied to them, by turning on
the kth bit in the mask if the message with code number k is to be limited, starting
from 0. In this case, codes 3, 4, 11, and 12 are being limited (because 6168 = 0x1818
= 0001100000011000, where bits 3, 4, 11, and 12 from the right are turned on). If we
were to set the rate limit to 0 (meaning no limit), we would find that Linux returns
nine ICMPv4 messages, one corresponding to each tftp request packet, and the
tftp client times out almost immediately. This behavior also occurs when trying
to access a Windows XP machine, which does not perform ICMP rate limiting.
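The mask arithmetic and the observed rate can be checked with a short sketch (a plain Python illustration, not part of any tool):

```python
# Decode the Linux icmp_ratemask sysctl: bit k of the mask is set when
# ICMP messages numbered k (here 3, 4, 11, and 12) are rate-limited.
def rate_limited_types(mask):
    return [k for k in range(32) if mask & (1 << k)]

print(rate_limited_types(6168))            # [3, 4, 11, 12]

# The observed rate: six error messages between the two trace timestamps.
print(round(6 / (49.034812 - 48.974812)))  # 100 (messages/s)
```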
Why does the TFTP client keep retransmitting its request when the error messages are being returned? A detail of network programming is revealed here.
Most systems do not notify user processes using UDP that ICMP messages for
them have arrived unless the process calls a special function (i.e., connect on the
UDP socket). Common TFTP clients do not call this function, so they never receive
the ICMP error notification. Without hearing any response regarding the fate of
its TFTP protocol requests, the TFTP client tries again and again to retrieve its file.
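On Linux (and BSD-derived systems), this detail can be demonstrated with a small Python sketch: only a *connected* UDP socket has the returning ICMP Port Unreachable matched to it and reported through the sockets API. The loopback setup here is illustrative; the port number is whatever ephemeral port the probe socket happened to release.

```python
import socket

# Find a UDP port on the loopback that is (momentarily) closed: bind a
# socket to an ephemeral port, note the number, and close it again.
probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

# A connected UDP socket: the kernel matches the returning ICMP Port
# Unreachable to it and reports ECONNREFUSED on a later socket call --
# the notification that typical TFTP clients never ask for.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(2)
s.connect(("127.0.0.1", closed_port))
s.send(b"request")
refused = False
try:
    s.recv(512)                # the ICMP error is delivered here
except ConnectionRefusedError:
    refused = True
s.close()
print("ICMP error seen by application:", refused)
```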
This is an example of a poor request and retry mechanism. Although TFTP does
have extensions for adjusting this behavior (see [RFC2349]), we shall see later (in
Chapter 16) that a more sophisticated transport protocol such as TCP has a much
better algorithm.

ICMPv4 PTB (Code 4)
If an IPv4 router receives a datagram that it intends to forward, and if the datagram does not fit into the MTU in use on the selected outgoing network interface,
the datagram must be fragmented (see Chapter 10). If the arriving datagram has
the Don’t Fragment bit field set in its IP header, however, it is not forwarded but
instead is dropped, and this ICMPv4 Destination Unreachable (PTB) message is
generated. Because the router sending this message knows the MTU of the next
hop, it is able to include the MTU value in the error message it generates.
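As a sketch of how a receiver might pull the next-hop MTU out of such a message (field offsets per [RFC1191]; the sample message is hand-built, with the checksum left zero for illustration):

```python
import struct

# Parse the Next-Hop MTU from an ICMPv4 Destination Unreachable, code 4
# ("fragmentation needed") message: the MTU occupies bytes 6-7, after
# the type, code, checksum, and two unused bytes.
def ptb_next_hop_mtu(icmp_msg):
    icmp_type, code = icmp_msg[0], icmp_msg[1]
    if icmp_type != 3 or code != 4:
        raise ValueError("not an ICMPv4 PTB message")
    (mtu,) = struct.unpack_from("!H", icmp_msg, 6)
    return mtu

# A hand-built message advertising a 1400-byte next-hop MTU:
msg = bytes([3, 4, 0, 0, 0, 0]) + struct.pack("!H", 1400)
print(ptb_next_hop_mtu(msg))   # 1400
```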
This message was originally intended to be used for network diagnostics but
has since been used for path MTU discovery. PMTUD is used to determine an
appropriate packet size to use when communicating with a particular host, on the
assumption that avoiding packet fragmentation is desirable. It is used most commonly with TCP, and we cover it in more detail in Chapter 14.
Section 8.3 ICMP Error Messages
ICMPv6 PTB (Type 2, Code 0)
In ICMPv6, a special message and type code combination is used to indicate that
a packet is too large for the MTU of the next hop (see Figure 8-6).
Figure 8-6
The ICMPv6 Packet Too Big message (type 2) works like the corresponding ICMPv4
Destination Unreachable message. The ICMPv6 variant includes 32 bits to hold the nexthop MTU.
This message is not a Destination Unreachable message. Recall that in IPv6,
packet fragmentation is performed only by the sender of a datagram and that
MTU discovery is always supposed to be used. Thus, this message is used primarily by the IPv6 PMTUD mechanism, but also in the (rare) circumstances that
a packet arrives that is too large to be carried over the next hop. Because routes
may change after the operation of PMTUD and after a packet is injected into the
network, it is always possible that a packet arriving at a router is too large for the
outgoing MTU. As is the case with modern implementations of ICMPv4 Destination Unreachable code 4 (PTB) messages, the suggested MTU size of the packet,
based on the MTU of the egress link of the router generating the ICMP message,
is carried in the indication.

ICMPv6 Beyond Scope of Source Address (Code 2)
As we saw in Chapter 2, IPv6 uses addresses of different scopes. Thus, it is possible to construct a packet with source and destination addresses of different
scopes. Furthermore, it is possible that the destination address may not be reachable within the same scope. For example, a packet with a source address using
link-local scope may be destined for a globally scoped destination that requires
traversal of more than one router. Because the source address is of insufficient
scope, the packet is dropped by a router, and this form of ICMPv6 error is produced to indicate the problem.
ICMPv6 Source Address Failed Ingress/Egress Policy (Code 5)
Code 5 is a more refined version of code 1, to be used when a particular ingress
or egress filtering policy is the reason for prohibiting the successful delivery of a
datagram. This might be used, for example, when a host attempts to send traffic
using a source IPv6 address from an unexpected network prefix [RFC3704].

ICMPv6 Reject Route to Destination (Code 6)
A reject or blocking route is a special routing or forwarding table entry (see Chapter
5), which indicates that matching packets should be dropped and an ICMPv6 Destination Unreachable Reject Route message should be generated. (A similar type of
entry called a blackhole route also causes matching packets to be dropped, but usually without generating the Destination Unreachable message.) Such routes may
be installed in a router’s forwarding table to prevent leakage of packets sent to
unwanted destinations. Unwanted destinations may include martian routes (prefixes not used on the public Internet) and bogons (valid prefixes not yet allocated).
Redirect (ICMPv4 Type 5, ICMPv6 Type 137)
If a router receives a datagram from a host and can determine that it is not the correct next hop for the host to have used to deliver the datagram to its destination,
the router sends a Redirect message to the host and sends the datagram on to the
correct router (or host). That is, if it can determine that there is a better next hop
than itself for the given datagram, it redirects the host to update its forwarding
table so that future traffic for the same destination will be directed toward the
new node. This facility provides a crude form of routing protocol by indicating to
the IP forwarding function where to send its packets. The process of IP forwarding is discussed in detail in Chapter 5.
In Figure 8-7, a network segment has a host and two routers, R1 and R2. When
the host sends a datagram incorrectly through router R2, R2 responds by sending
the Redirect message to the host, while forwarding the datagram to R1. Although
hosts may be configured to update their forwarding tables based on ICMP redirects, routers are discouraged from doing so under the assumption that routers should already know the best next-hop nodes for all reachable destinations
because they are using dynamic routing protocols.
The ICMP Redirect message includes the IP address of the router (or destination host, if it is reachable using direct delivery) a host should use as a next hop for
the destination specified in the ICMP error message (see Figure 8-8). Originally
the redirect facility supported a distinction between a redirect for a host and a
redirect for a network, but once classless addressing was used (CIDR; see Chapter
2), the network redirect form effectively vanished. Thus, when a host receives a
host redirect, it is effective only for that single IP destination address. A host that
consistently chooses the wrong router can wind up with a forwarding table entry
for every destination it contacts outside its local subnet, each of which has been
added as the result of receiving a Redirect message from its configured default
router. The format of the ICMPv4 Redirect message is shown in Figure 8-8.
Figure 8-7
The host incorrectly sends a datagram via R2 toward its destination. R2 realizes the
host’s mistake and sends the datagram to the proper router, R1. It also informs the host
of the error by sending an ICMP Redirect message. The host is expected to adjust its forwarding tables so that future datagrams to the same destination go through R1 without
bothering R2.
Figure 8-8
The ICMPv4 Redirect message includes the IPv4 address of the correct router to use as a
next hop for the datagram included in the payload portion of the message. A host typically checks the IPv4 source address of the incoming Redirect message to verify that it is
coming from the default router it is currently using.
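The host-side acceptance rule can be sketched as follows (a simplified illustration; the class and method names are invented, not taken from any real stack):

```python
# A host accepts a Redirect only if it comes from the router it is
# currently using as the next hop for that destination, and the result
# is a per-destination host route (network redirects being historic).
class ForwardingTable:
    def __init__(self, default_router):
        self.default_router = default_router
        self.host_routes = {}              # destination IP -> next hop

    def next_hop(self, dst):
        return self.host_routes.get(dst, self.default_router)

    def handle_redirect(self, icmp_src, dst, new_gateway):
        # Ignore redirects not sent by the router we actually used.
        if icmp_src != self.next_hop(dst):
            return False
        self.host_routes[dst] = new_gateway   # host, not network, route
        return True

ft = ForwardingTable("R2")
ft.handle_redirect("R2", "D", "R1")   # R2 says: use R1 for destination D
print(ft.next_hop("D"))               # R1
```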
We can examine the behavior of a Redirect message by changing our host to
use an incorrect router (another host on the same network) as its default next hop.
As an example, we first change our default route and then attempt to contact a
remote server. Our system will mistakenly attempt to forward its outgoing packets to the specified host:
C:\> netstat -rn
Network Dest
C:\> route delete
delete default
C:\> route add mask
add new
C:\> ping
sends thru
Pinging [] with 32 bytes of data:
Ping statistics for
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 1ms, Maximum = 5ms, Average = 2ms
While this is taking place, we can run tcpdump to observe the activities (some
lines have been wrapped for clarity):
Linux# tcpdump host
1 20:27:00.759340 IP > icmp 40:
echo request seq 15616
2 20:27:00.759445 IP > icmp 68:
redirect to host
3 20:27:00.759468 IP > icmp 40:
echo request seq 15616
Here our host ( sends an ICMPv4 Echo Request (ping) message
to the host After the name is resolved by DNS (see
Chapter 11) to the IPv4 address, the Request message is sent to the
first hop,, rather than the correct default router, Because
the system with IPv4 address is properly configured, it understands that the original sending host should have used the router As
expected, it responds with an ICMPv4 Redirect message toward the host, indicating that in the future, any traffic destined for should
go through the router
In ICMPv6, the Redirect message (type 137) contains the target address and
the destination address (see Figure 8-9), and it is defined in conjunction with the
ND process (see Section 8.5). The Target Address field contains the correct node’s
link-local IPv6 address that should be used for the next hop. The Destination
Address is the destination IPv6 address in the datagram that evoked the redirect.
In the particular situation where the destination is an on-link neighbor to the host
receiving the redirect, the Target Address and Destination Address fields are identical. This provides a method for informing a host that another host is on the same
link, even if the two hosts do not share a common address prefix [RFC5942].
Figure 8-9
The ICMPv6 Redirect message. The target address indicates the IPv6 address of a better
next-hop router for the node identified by the destination address. This message can also
be used to indicate that the destination address is an on-link neighbor to the node sending the message that induced the error message. In this case, the destination and target
addresses are the same.
As with other ND messages in ICMPv6, this message can include options. The
types of options include the Target Link-Layer Address option and the Redirected
Header option. The Target Link-Layer Address option is required in cases where
the Redirect message is used on a non-broadcast multiple access (NBMA) network,
because in such cases there may be no other efficient way for the host receiving
the Redirect message to determine the link-layer address for the new next hop.
The Redirected Header option holds a portion of the IPv6 packet that caused the
Redirect message to be generated. We discuss the format of these options and others in Section 8.5 when exploring IPv6 Neighbor Discovery.
ICMP Time Exceeded (ICMPv4 Type 11, ICMPv6 Type 3)
Every IPv4 datagram has a Time-to-Live (TTL) field in its IPv4 header, and every
IPv6 datagram has a Hop Limit field in its header (see Chapter 5). As originally
conceived, the 8-bit TTL field was to hold the number of seconds a datagram was
allowed to remain active in the network before being forcibly discarded (a good
thing if forwarding loops are present). Because of an additional rule that said that
any router must decrement the TTL field by at least 1, combined with the fact that
datagram forwarding times grew to be small fractions of a second, the TTL field
has been used in practice as a limitation on the number of hops an IPv4 datagram
is allowed to take before it is discarded by a router. This usage was formalized and
ultimately adopted in IPv6. ICMP Time Exceeded (code 0) messages are generated
when a router discards a datagram because the TTL or Hop Limit field is too low
(i.e., arrives with value 0 or 1 and must be forwarded). This message is important
for the proper operation of the traceroute tool (called tracert on Windows).
Its format, for both ICMPv4 and ICMPv6, is given in Figure 8-10.
Figure 8-10
The ICMP Time Exceeded message format for ICMPv4 and ICMPv6. The message is
standardized both for the TTL or hop count being exceeded (code 0) and for the time for reassembling fragments exceeding some preconfigured threshold (code 1).
Another less common variant of this message is when a fragmented IP datagram only partially arrives at its destination (i.e., all its fragments do not arrive
after a period of time). In such cases, a variant of the ICMP Time Exceeded message (code 1) is used to inform the sender that its overall datagram has been discarded. Recall that if any fragment of a datagram is dropped, the entire datagram
is lost.

Example: The traceroute Tool
The traceroute tool is used to determine the routers used along a path from
a sender to a destination. We shall discuss the operation of the IPv4 version. The
approach involves sending datagrams first with an IPv4 TTL field set to 1 and
allowing the expiring datagrams to induce routers along the path to send ICMPv4
Time Exceeded (code 0) messages. Each round, the sending TTL value is increased
by 1, causing the routers that are one hop farther to expire the datagrams and
generate ICMP messages. These messages are sent from the router’s primary IPv4
address “facing” the sender. Figure 8-11 shows how this approach works.
Figure 8-11
The traceroute tool can be used to determine the routing path, assuming it does not
fluctuate too quickly. When using traceroute, routers are typically identified by the
IP addresses assigned to the interfaces “facing” or nearest to the host performing the trace.
In this example, traceroute is used to send UDP datagrams (see Chapter
10) from the laptop to the host (an Internet host with
IPv4 address, not shown in Figure 8-11). This is accomplished
using the following command:
Linux% traceroute –m 2
traceroute to (, 2 hops max,
52 byte packets
1 gw ( 3.213 ms 0.839 ms 0.920 ms
2 ( 1.524 ms 1.221 ms 9.176 ms
The –m option instructs traceroute to perform only two rounds: one using
TTL = 1 and one using TTL = 2. Each line gives the information found at the corresponding TTL. For example, line 1 indicates that one hop away a router with IPv4
address was found and that three independent round-trip-time measurements (3.213, 0.839, and 0.920ms) were taken. The difference between the
first and subsequent times relates to additional work that is involved in the first
measurement (i.e., an ARP transaction). Figures 8-12 and 8-13 show Wireshark
packet captures indicating how the outgoing datagrams and returning ICMPv4
messages are structured.
Figure 8-12
traceroute using IPv4 begins by sending a UDP/IPv4 datagram with TTL = 1 to destination port
number 33435. Each TTL value is tried three times before being incremented by 1 and retried. Each
expiring datagram causes the router at the appropriate hop distance to send an ICMPv4 Time Exceeded
message back to the source. The message’s source address is that of the router “facing” the sender.
Looking at Figure 8-12, we can see that traceroute sends six datagrams, and
that each datagram is sent to a destination port number in sequence, starting with
33435. If we look more closely, we can see that the first three datagrams are sent
with TTL = 1 and the second set of three are sent with TTL = 2. Figure 8-12 shows
the first one. Each datagram causes an ICMPv4 Time Exceeded (code 0) message
to be sent. The first three are sent from router N3 (IPv4 address, and
the next three are sent from router N2 (IPv4 address Figure 8-13 shows the
last ICMP message in more detail.
Figure 8-13
The final ICMPv4 Time Exceeded message of the trace is sent by N2 (IPv4 address It includes a copy of the original datagram that caused the Time Exceeded
message to be generated. The TTL of the inner IPv4 header is 0 because N2 decremented
it from 1.
This is the final Time Exceeded message of the trace. It contains the original
IPv4 datagram (packet 11), as seen by N2 upon receipt. This datagram arrives with
TTL = 1, but after being decremented is too small for N2 to perform additional
forwarding to Consequently, N2 sends a Time Exceeded message
back to the source of the original datagram.
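The sending side of this procedure can be sketched in Python. Receiving the returning ICMP messages requires a privileged raw socket, so only the per-round TTL control on the UDP probes is shown; port 33435 is the traditional starting destination port:

```python
import socket

# Each traceroute probe is an ordinary UDP datagram sent with an
# explicit IP TTL, so that the router at that hop count discards it
# and returns an ICMPv4 Time Exceeded (code 0) message.
def make_probe_socket(ttl):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    return s

for ttl in (1, 2):
    s = make_probe_socket(ttl)
    # s.sendto(b"", (destination, 33435 + probe_number)) would go here;
    # a raw ICMP socket would then collect the Time Exceeded replies.
    print("probe TTL:", s.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))
    s.close()
```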
Parameter Problem (ICMPv4 Type 12, ICMPv6 Type 4)
ICMP Parameter Problem messages are generated by a host or router receiving
an IP datagram containing some problem in its IP header that cannot be repaired.
When a datagram cannot be handled and no other ICMP message adequately
describes the problem, this message acts as a sort of “catchall” error condition
indicator. In both ICMPv4 and ICMPv6, if there is an error in the header such that
some field is out of acceptable range, a special ICMP error message Pointer field
indicates the byte offset of the field where the error was found, relative to the
beginning of the offending IP header. With ICMPv4, for example, a value of 1 in
the Pointer field indicates a bad IPv4 DS Field or ECN field (together, these fields
used to be called the IPv4 Type of Service or ToS Byte which has since been redefined and renamed; see Chapter 5). The format of the ICMPv4 Parameter Problem
message is shown in Figure 8-14.
Figure 8-14
The ICMPv4 Parameter Problem message is used when no other message applies. The
Pointer field indicates the byte index of the problematic value in the offending IPv4
header. Code 0 is most common. Code 1 was formerly used to indicate that a required
option was missing but is now historic. Code 2 indicates that the offending IPv4 datagram has a bad IHL or Total Length field.
Code 0 is the most common variant of the ICMPv4 Parameter Problem messages and is used when there is almost any problem with the IPv4 header, although
problems with the header or datagram Total Length fields may instead generate
code 2 messages. Code 1 was once used to indicate missing options such as security labels on packets but is now historic. Code 2, a more recently defined code,
indicates a bad length in the IHL or Total Length fields (see Chapter 5). The ICMPv6
version of this error message is shown in Figure 8-15.
In ICMPv6, the treatment of this error has been refined somewhat, relative to
the ICMPv4 version, into three cases: erroneous header field encountered (code
0), unrecognized Next Header type encountered (code 1), and unrecognized IPv6
option encountered (code 2). As with the corresponding error message in ICMPv4,
the ICMPv6 parameter problem Pointer field gives the byte offset into the offending IPv6 header that caused the problem. For example, a Pointer field of 40 would
indicate a problem with the first IPv6 extension header.
Figure 8-15
The ICMPv6 Parameter Problem message. The Pointer field gives the byte offset into
the original datagram where an error was encountered. Code 0 indicates a bad header
field. Code 1 indicates an unrecognized Next Header type, and Code 2 indicates that an
unknown IPv6 option was encountered.
The erroneous header (code 0) error occurs when a field in one of the IPv6
headers contains an illegal value. A code 1 error occurs when an IPv6 Next Header
(header chaining) field contains a value corresponding to a header type that the
IPv6 implementation does not support. Finally, code 2 is used when an IPv6 header
option is received but not recognized by the implementation.
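A small sketch of how such a Pointer value might be interpreted against the fixed IPv6 base header layout (a hypothetical helper, not from any implementation):

```python
# Map an ICMPv6 Parameter Problem Pointer (byte offset from the start
# of the offending IPv6 header) to the field it falls within. The base
# header is 40 bytes; anything beyond it is in an extension header.
def ipv6_pointer_field(pointer):
    if pointer < 4:
        return "Version/Traffic Class/Flow Label"
    if pointer < 6:
        return "Payload Length"
    if pointer == 6:
        return "Next Header"
    if pointer == 7:
        return "Hop Limit"
    if pointer < 24:
        return "Source Address"
    if pointer < 40:
        return "Destination Address"
    return "extension header (offset %d past the base header)" % (pointer - 40)

print(ipv6_pointer_field(40))   # the first extension header
```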
ICMP Query/Informational Messages
Although ICMP defines a number of query messages such as Address Mask
Request/Reply (types 17/18), Timestamp Request/Reply (types 13/14), and Information Request/Reply (types 15/16), these functions have been replaced by other,
more purpose-specific protocols (including DHCP; see Chapter 6). The only
remaining popular ICMP query/informational messages are the Echo Request/
Response messages, more commonly called ping, and the Router Discovery messages. Even the Router Discovery mechanism is not in wide use with IPv4, but its
analog (part of Neighbor Discovery) in IPv6 is fundamental. In addition, ICMPv6
has been extended to support Mobile IPv6 and the discovery of multicast-capable
routers. In this section, we investigate the Echo Request/Reply functions and the
messages used for basic router and Multicast Listener Discovery (also see Chapters 6 and 9). In the subsequent section, we explore the operation of Neighbor
Discovery in IPv6.
Echo Request/Reply (ping) (ICMPv4 Types 0/8, ICMPv6 Types 129/128)
One of the most commonly used ICMP message pairs is Echo Request and Echo
Response (or Reply). In ICMPv4 these are types 8 and 0, respectively, and in
ICMPv6 they are types 128 and 129, respectively. ICMP Echo Request messages
may be of nearly arbitrary size (limited by the ultimate size of the encapsulating
IP datagram). With ICMP Echo Reply messages, the ICMP implementation is
required to return any data received back to the sender, even if multiple IP fragments are involved. The ICMP Echo Request/Response message format is shown
in Figure 8-16.
As with other ICMP query/informational messages, the server must echo the
Identifier and Sequence Number fields back in the reply.
Figure 8-16
Format of the ICMPv4 and ICMPv6 Echo Request and Echo Reply messages. Any
optional data included in a request must be returned in a reply. NATs use the Identifier
field to match requests with replies, as discussed in Chapter 7.
These messages are sent by the ping program, which is commonly used to
quickly determine if a computer is reachable on the Internet. At one time, if you
could “ping” a host, you could almost certainly reach it by other means (remote
login, other services, etc.). With firewalls in common use, however, this is now far
from certain.
The name ping is taken from the sonar operation to locate objects. The ping program was written by Mike Muuss, who maintained an amusing Web page describing its history [PING].
Implementations of ping set the Identifier field in the ICMP message to some
number that the sending host can use to demultiplex returned responses. In
UNIX-based systems, for example, the process ID of the sending process is typically placed in the Identifier field. This allows the ping application to identify the
returned responses if there are multiple instances of ping running at the same
time on the same host, because the ICMP protocol does not have the benefit of
transport-layer port numbers. This field is often known as the Query Identifier field
when referring to firewall behavior (see Chapter 7).
When a new instance of the ping program is run, the Sequence Number field
starts with the value 0 and is increased by 1 every time a new Echo Request
message is sent. ping prints the sequence number of each returned packet, allowing the user to see if packets are missing, reordered, or duplicated. Recall that IP
(and consequently ICMP) is a best-effort datagram delivery service, so any of these
three conditions can occur. ICMP does, however, include a data checksum not
provided by IP.
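A sketch of both conventions: building an Echo Request with the process ID in the Identifier field and computing the ICMP checksum over it, using the Internet checksum arithmetic of [RFC1071] (the helper names are illustrative):

```python
import os
import struct

# The Internet checksum: a one's-complement sum of the message taken in
# 16-bit words, with the final result complemented.
def internet_checksum(data):
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length input
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# An ICMPv4 Echo Request (type 8, code 0) with the process ID in the
# Identifier field, as the text describes for UNIX-like ping programs.
def echo_request(seq, payload=b""):
    ident = os.getpid() & 0xFFFF
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

msg = echo_request(0, b"hello")
# A correctly checksummed message verifies to zero under the same fold:
print(hex(internet_checksum(msg)))   # 0x0
```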
The ping program also typically includes a copy of the local time in the
optional data area of outgoing echo requests. This time, along with the rest of the
contents of the data area, is returned in an Echo Response message. The ping program notes the current time when a response is received and subtracts the time
in the reply from the current time, giving an estimate of the RTT to reach the host
that was “pinged.” Because only the original sender’s notion of the current time is
used, this feature does not require any synchronization between the clocks at the
sender and receiver. A similar approach is used by the traceroute tool for its
RTT measurements.
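The timing trick can be sketched in a few lines (an illustration of the idea, not ping's actual payload format):

```python
import struct
import time

# The sender stamps its current time into the echo payload; whatever
# comes back is subtracted from the time of receipt. Only the sender's
# clock is involved, so no synchronization with the remote end is needed.
def stamp_payload():
    return struct.pack("!d", time.time())

def rtt_from_reply(payload):
    (sent,) = struct.unpack("!d", payload[:8])
    return time.time() - sent

payload = stamp_payload()        # would be carried in the Echo Request
rtt = rtt_from_reply(payload)    # the reply echoes the payload back
print(rtt >= 0)
```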
Early versions of the ping program operated by sending an Echo Request
message once per second, printing each returning echo reply. Newer implementations, however, have increased the variability in output formats and behaviors.
On Windows, the default is to send four echo requests, one per second, print some
statistics, and exit; the -t option is required to allow the Windows ping application to continue until stopped by the user. On Linux, the behavior is the traditional
one—the default is to run until interrupted by the user, sending an echo request
each second and printing any responses. Many other variants of ping have been
developed over the years, and there are several other standard options. With some
versions of the application, a large packet can be constructed to contain special
data patterns. This has been used to look for data-dependent errors in network
communications equipment.
In the following example, we send an ICMPv4 Echo Request to the subnet
broadcast address. This particular version of the ping application (Linux) requires
us to specify the -b flag to indicate that it is indeed our intention (and it gives us
a warning regarding this, because it can generate a substantial volume of network
traffic) to use the broadcast address:
Linux% ping -b
WARNING: pinging broadcast address
PING ( from : 56(84) bytes of data.
64 bytes from icmp_seq=0 ttl=255 time=1.290 msec
64 bytes from icmp_seq=0 ttl=64 time=1.853 msec (DUP!)
64 bytes from icmp_seq=0 ttl=64 time=2.311 msec (DUP!)
64 bytes from icmp_seq=1 ttl=255 time=382 usec
64 bytes from icmp_seq=1 ttl=64 time=1.587 msec (DUP!)
64 bytes from icmp_seq=1 ttl=64 time=2.406 msec (DUP!)
64 bytes from icmp_seq=2 ttl=255 time=380 usec
64 bytes from icmp_seq=2 ttl=64 time=1.573 msec (DUP!)
64 bytes from icmp_seq=2 ttl=64 time=2.394 msec (DUP!)
64 bytes from icmp_seq=3 ttl=255 time=389 usec
64 bytes from icmp_seq=3 ttl=64 time=1.583 msec (DUP!)
64 bytes from icmp_seq=3 ttl=64 time=2.403 msec (DUP!)
--- ping statistics ---
4 packets transmitted, 4 packets received,
+8 duplicates, 0% packet loss
round-trip min/avg/max/mdev = 0.380/1.545/2.406/0.765 ms
Here, 4 outgoing Echo Request messages are sent and we see 12 responses.
This behavior is typical of using the broadcast address: all receiving nodes are
compelled to respond. We therefore see the sequence numbers 0, 1, 2, and 3, but
for each one we see 3 responses. The (DUP!) notation indicates that an Echo Reply
has been received containing a Sequence Number field identical to a previously
received one. Observe that the TTL values are different (255 and 64), suggesting
that different kinds of computers are responding.
Note that this procedure (sending echo requests to the IPv4 broadcast address)
can be used to quickly populate the local system’s ARP table (see Chapter 4).
Those systems responding to the Echo Request message form an Echo Reply message directed at the sender of the request. When the reply is destined for a system
on the same subnet, an ARP request is issued looking for the link-layer address
of the originator of the request. In so doing, ARP is exchanged between every
responder and the request sender. This causes the sender of the Echo Request
message to learn the link-layer addresses of all the responders. In this example,
even if the local system had no link-layer address mappings for the addresses,, and, they would all be present in the ARP table
after the broadcast. Note that returning Echo Reply messages to requests sent to
the broadcast address is optional. By default, Linux systems return such replies
and Windows XP systems do not.
Router Discovery: Router Solicitation and Advertisement (ICMPv4 Types 9, 10)
In Chapter 6, we looked at how DHCP can be used for a host to acquire an IP
address and learn about the existence of nearby routers. An alternative option we
mentioned for learning about routers is called Router Discovery (RD). Although
specified for configuring both IPv4 and IPv6 hosts, it is not widely used with IPv4
because of widespread preference for DHCP. However, it is now specified for use
in conjunction with Mobile IP, so we provide a brief description. The IPv6 version
forms part of the IPv6 SLAAC function (see Chapter 6) and is logically part of
IPv6 ND. Therefore, we shall return to discussing it in the broader context of ND
in Section 8.5.
Router Discovery for IPv4 is accomplished using a pair of ICMPv4 informational messages [RFC1256]: Router Solicitation (RS, type 10) and Router Advertisement (RA, type 9). The advertisements are sent by routers in two ways. First, they
are periodically multicast on the local network (using TTL = 1) to the All Hosts
multicast address (, and they are also provided to hosts on demand that
ask for them using RS messages. RS messages are sent using multicast to the All
Routers multicast address ( The primary purpose of Router Discovery
is for a host to learn about all the routers on its local subnetwork, so that it can
choose a default route among them. It is also used to discover the presence of routers that are willing to act as Mobile IP home agents. See Chapter 9 for details on
local network multicast. Figure 8-17 shows the ICMPv4 RA message format, which
includes a list of the IPv4 addresses that can be used by a host as a default router.
Figure 8-17
The ICMPv4 Router Advertisement message includes a list of IPv4 addresses of routers that can
be used as default next hops. The preference level lets network operators arrange for some ordering of preferences to be applied with respect to the list (higher is more preferred). Mobile IPv4
[RFC5944] augments RA messages with extensions in order to advertise MIPv4 mobility agents
and the prefix lengths of the advertised router addresses.
In Figure 8-17, the Number of Addresses field gives the number of router address
blocks in the message. Each block contains an IPv4 address and accompanying
preference level. The Address Entry Size field gives the number of 32-bit words per
block (two in this case). The Lifetime field gives the number of seconds for which
the list of addresses should be considered valid. The preference level is a 32-bit
signed two’s-complement integer for which higher values indicate greater preference. The default preference level is 0; the special value 0x80000000 indicates an
address that should not be used as a valid default router.
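A sketch of parsing the address list of such a message (the sample message is hand-built, and the address shown is illustrative):

```python
import socket
import struct

# After the type, code, and checksum come Number of Addresses (1 byte),
# Address Entry Size in 32-bit words (1 byte; 2 here), and Lifetime in
# seconds (2 bytes), followed by (router address, preference) pairs.
# Preference is a signed 32-bit two's-complement value; 0x80000000
# (-2147483648) marks an address not to be used as a default router.
def parse_ra(msg):
    num, size, lifetime = struct.unpack_from("!BBH", msg, 4)
    routers = []
    off = 8
    for _ in range(num):
        addr = socket.inet_ntoa(msg[off:off + 4])
        (pref,) = struct.unpack_from("!i", msg, off + 4)
        routers.append((addr, pref))
        off += 4 * size
    return lifetime, routers

# One advertised router, preference 0, 1800-second lifetime:
msg = struct.pack("!BBHBBH", 9, 0, 0, 1, 2, 1800) + \
      socket.inet_aton("10.0.0.1") + struct.pack("!i", 0)
print(parse_ra(msg))   # (1800, [('10.0.0.1', 0)])
```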
RA messages are also used by Mobile IP [RFC5944] for a node to locate a
mobility (i.e., home and/or foreign) agent. Figure 8-17 depicts a Router Advertisement message including a Mobility Agent Advertisement extension. This extension follows the conventional RA information and includes a Type field with value
16 and a Length field giving the number of bytes in the extension area (not including the Type and Length fields). Its value is equal to (6 + 4K), assuming that K care-of addresses are included. The Sequence Number field gives the number of such
extensions produced by the agent since initialization. The Registration Lifetime gives the
maximum number of seconds during which the sending agent is willing to accept
MIPv4 registrations (0xFFFF indicates infinity). There are a number of Flags bit
fields with the following meanings: R (registration required for MIP services), B
(agent is too busy to accept new registrations), H (agent is willing to act as home
agent), F (agent is willing to act as foreign agent), M (the minimum encapsulation
format [RFC2004] is supported), G (the agent supports GRE tunnels for encapsulated datagrams), r (reserved zero), T (reverse tunneling [RFC3024] is supported),
U (UDP tunneling [RFC3519] is supported), X (registration revocation [RFC3543]
is supported), and I (foreign agent supports regional registration [RFC4857]).
In addition to the Mobility Agent Advertisement extension, one other extension has been designed to help mobile nodes. The Prefix-Lengths extension may
follow a Mobility Agent Advertisement extension and indicates the prefix length
of each corresponding router address provided in the base router advertisement.
The format is shown in Figure 8-18.
Figure 8-18
The ICMPv4 optional RA Prefix-Lengths extension gives the number of significant prefix bits for
each of the N router addresses present in the basic Router Advertisement portion of the message.
This extension follows the Mobility Agent Advertisement extension, if present.
In Figure 8-18, the Length field is set equal to N, the Number of Addresses field
from the basic RA message. Each 8-bit Prefix Length field gives the number of bits
in the corresponding Router Address field (see Figure 8-17) in use on the local subnetwork. This extension can be used by a mobile node to help determine whether
it has moved from one network to another. Using algorithm 2 of [RFC5944], a
mobile node may cache the set of prefixes available on a particular link. A move
can be detected if the set of network prefixes has changed.
ICMPv4 and ICMPv6: Internet Control Message Protocol
Home Agent Address Discovery Request/Reply (ICMPv6 Types 144/145)
[RFC6275] defines four ICMPv6 messages used in support of MIPv6. Two of the
ICMPv6 messages are used for dynamic home agent address discovery, and the
other two are used for renumbering and mobile configuration. The Home Agent
Address Discovery Request message is used by an MIPv6 node when visiting a
new network to dynamically discover a home agent (see Figure 8-19).
Figure 8-19
The MIPv6 Home Agent Address Discovery Request message contains an identifier that
is returned in the response. It is sent to the Home Agents anycast address for the mobile
node’s home prefix.
The message is sent to the MIPv6 Home Agents anycast address for its home
prefix. The IPv6 source address is typically the care-of address—the address a
mobile node has acquired on the network it is currently visiting (see Chapter 5). A
Home Agent Address Discovery Response message (see Figure 8-20) is sent by a
node willing to act as a home agent for the given node and its home prefix.
Figure 8-20
The MIPv6 Home Agent Address Discovery Reply message contains the identifier from
the corresponding request and one or more addresses of a home agent willing to forward packets for the mobile node.
The home agent address is provided directly to the mobile node’s unicast
address, which is most likely a care-of address. These messages are intended to
handle cases where a mobile node’s HA has changed while transitioning between
networks. After reestablishing an appropriate HA, the mobile may initiate MIPv6
binding updates (see Chapter 5).
Mobile Prefix Solicitation/Advertisement (ICMPv6 Types 146/147)
The Mobile Prefix Solicitation message (see Figure 8-21) is used to solicit a routing
prefix update from an HA when a node’s home address is about to become invalid.
The mobile includes a Home Address option (IPv6 Destination Options; see Chapter 5) and protects the solicitation using IPsec (see Chapter 18).
Figure 8-21
The MIPv6 Mobile Prefix Solicitation message is sent by a mobile node when away to
request a home agent to provide a mobile prefix advertisement.
The solicitation message includes a random value in the Identifier field, used
to match requests with replies. It is similar to a Router Solicitation message but is
sent to a mobile node’s HA instead of to the local subnetwork. In the advertisement form of this message (see Figure 8-22), the encapsulating IPv6 datagram
must include a type 2 routing header (see Chapter 5). The Identifier field contains
a copy of the identifier provided in the solicitation message. The M (Managed
Address) field indicates that hosts should use stateful address configuration and
avoid autoconfiguration. The O (Other) field indicates that information other than
the address is provided by a stateful configuration method. The advertisement
then contains one or more Prefix Information options.
Figure 8-22 The MIPv6 Mobile Prefix Advertisement message. The Identifier field matches the corresponding field in the solicitation. The M (Managed) flag indicates that the address is
provided by a stateful configuration mechanism. The O (Other) flag indicates that other
information beyond the address is supplied by stateful mechanisms.
The Mobile Prefix Advertisement message is designed to inform a traveling
mobile node that its home prefix has changed. This message is normally secured
using IPsec (see Chapter 18) in order to help a mobile node protect itself from
spoofed prefix advertisements. The Prefix Information option, which uses the
format described in [RFC4861], contains the prefix(es) the mobile node should use
for configuring its home address(es).
Mobile IPv6 Fast Handover Messages (ICMPv6 Type 154)
A variant of MIPv6 defines fast handovers [RFC5568] for MIPv6 (called FMIPv6). It
specifies methods for improving the IP-layer handoff latency when a mobile node
moves from one network access point (AP) to another. This is accomplished by
predicting the routers and addressing information that will be used prior to the
handoff taking place. The protocol involves the discovery of so-called proxy routers, which behave like routers a mobile is likely to encounter after it is handed off
to a new network. There are corresponding ICMPv6 Proxy Router Solicitation and
Advertisement messages (called RtSolPr and PrRtAdv, respectively). The basic format of the RtSolPr and PrRtAdv messages is given in Figure 8-23.
Figure 8-23 The common ICMPv6 message type used for FMIPv6 messages. The Code and Subtype
fields give further information. Solicitation messages use code 0 and subtype 2 and may
include the sender’s link-layer address and the link-layer address of its preferred next
access point (if known) as options. Advertisements use codes 0–5 and subtype 3. The different code values indicate the presence of various options, whether the advertisement
was solicited, if the prefix or router information has changed, and the handling of DHCP.
A mobile node may have some information available regarding the addresses
or identifiers of APs it will use in the future (e.g., by “scanning” for 802.11 networks). A RtSolPr message uses code 0 and subtype 2 and must contain at least
one option, the New Access Point Link-Layer Address option. This is used to indicate which AP the mobile is requesting information about. The RtSolPr message
may also contain a Link-Layer Address option identifying the source, if known.
These options use the IPv6 ND option format, so we shall defer discussion of them
until we look at ND in detail.
Multicast Listener Query/Report/Done (ICMPv6 Types 130/131/132)
Multicast Listener Discovery (MLD) [RFC2710][RFC3590] provides management of
multicast addresses on links using IPv6. It is similar to the IGMP protocol used by
IPv4, described in Chapter 9. That chapter deals with the operation of IGMP and the
use of this ICMPv6 message in detail; here we describe the message formats that
constitute MLD (version 1), including the Multicast Listener Query, Report, and Done
messages. The basic format is given in Figure 8-24. These messages are sent with an
IPv6 Hop Limit field value of 1 and the Router Alert Hop-by-Hop IPv6 option.
Figure 8-24
ICMPv6 MLD version 1 messages are all of this form. Queries (type 130) are either
general or multicast-address-specific. General queries ask hosts to report which multicast addresses they have in use, and address-specific queries are used to determine
if a specific address is (still) in use. The maximum response time gives the maximum
number of milliseconds a host may delay sending a report in response to a query. The
destination multicast address is 0 for general queries and the multicast address in question for specific reports. For Report (type 131) and Done messages (type 132), it includes
the address related to the report or what address is no longer of interest, respectively.
The main purpose of MLD is for multicast routers to learn the multicast
addresses used by the hosts on each link to which they are mutually attached.
MLDv2 (described in the next section) extends this capability by allowing hosts
to specify particular hosts from which they wish to (or not to) receive traffic. Two
forms of MLD queries are sent by multicast routers: general queries and multicast-address-specific queries. Generally, routers send the query messages and hosts
respond with reports, either in response to the queries, or unsolicited if a host’s
multicast address membership changes.
The Maximum Response Time field, nonzero only in queries, gives the maximum number of milliseconds a host may delay sending a report in response
to a query. Because the multicast router need only know that at least one host is
interested in traffic destined for a particular multicast address (because link-layer
multicast support allows the router to not have to replicate the message for each
destination), nodes may intentionally and randomly delay their reports, suppressing them entirely if they notice that another neighbor has responded already.
This field provides an upper bound on how long this delay may be. The Multicast Address field is 0 for general queries and the address for which the router is
interested in reports otherwise. For MLD Report messages (type 131) and MLD
Done messages (type 132) it includes the address related to the report or what
address is no longer of interest, respectively.
Version 2 Multicast Listener Discovery (MLDv2) (ICMPv6 Type 143)
[RFC3810] defines extensions to the MLD facility described in [RFC2710]. In particular, it defines a way for a multicast listener to specify a desire to hear from only one
specific set of senders (or, alternatively, to exclude one specific set). It is therefore
useful in supporting source-specific multicast (SSM; see Chapter 9 and [RFC4604]
[RFC4607]). It is basically a translation of the IGMPv3 protocol used with IPv4 for use
with IPv6, which uses ICMPv6 for most multicast address management. Therefore,
we will describe the message format here, but the detailed operation of multicast
address dynamics is covered in Chapter 9. MLDv2 extends the MLD Query message
with additional information pertaining to specific sources (see Figure 8-25). The
first 24 bytes of the message are identical to the common MLD format.
The Maximum Response Code field specifies the maximum time allowed before
sending an MLD Response message. The value of this field is special and therefore
is interpreted slightly differently than in MLDv1: if it is less than 32,768, the maximum response delay is set equal to the value (in milliseconds) as in MLDv1. If the
value is equal to or greater than 32,768, the field encodes a floating-point number
using the format shown in Figure 8-26.
In this case, the maximum response delay is set equal to ((mant | 0x1000) <<
(exp + 3)) ms. The reason for this seemingly complex encoding strategy is to allow
small and large values of the response delay to be encoded in this field and retain
some compatibility with MLDv1. In particular, it allows for carefully adjusting the
leave latency and affecting the report burstiness (see Chapter 9).
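The decoding rule can be written out in a few lines (an illustrative helper, not an API from any particular stack):

```python
def max_response_delay_ms(mrc):
    """Decode the 16-bit MLDv2 Maximum Response Code into a delay in
    milliseconds. Values below 32,768 are used directly (as in MLDv1);
    otherwise bits 12-14 hold a 3-bit exponent and the low 12 bits a
    mantissa, giving ((mant | 0x1000) << (exp + 3)) ms."""
    if mrc < 0x8000:
        return mrc
    exp = (mrc >> 12) & 0x7
    mant = mrc & 0x0FFF
    return (mant | 0x1000) << (exp + 3)
```

The smallest encoded floating-point value (0x8000) decodes to 32,768ms, continuing where the direct encoding leaves off, while the largest (0xFFFF) decodes to roughly 8,388 seconds.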
The Multicast Address field is set to 0 for a general query. For a multicast-address-specific query or multicast-address- and source-specific query it is set
to the multicast address being queried. The S field indicates whether router-side
processing should be suppressed. When set, it indicates to any receiving multicast
router that it must suppress the normal timer updates computed when hearing a
query. It does not indicate that querier election or normal “host-side” processing
should be suppressed if the router is itself a multicast listener.
The QRV (Querier Robustness Variable) field, if set, contains a value of no more
than 7. If the sender’s internal QRV value exceeds 7, this field is set to 0. Robustness
variables, described in Chapter 9, are used to fine-tune the rate of MLD updates
based on an expectation of packet loss on a subnetwork. The QQIC (Querier’s
Query Interval Code) field encodes the query interval and is shown in Figure 8-27.
The query interval, measured in seconds, is computed from the QQIC field as follows: if QQIC < 128, then QQI = QQIC; otherwise, QQI = ((mant | 0x10) << (exp + 3)).
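Expressed as code (again an illustrative helper, mirroring the maximum-response-delay encoding but with a 4-bit mantissa):

```python
def querier_query_interval(qqic):
    """Decode the 8-bit QQIC field into the QQI in seconds: taken
    directly below 128, otherwise bits 4-6 hold a 3-bit exponent and
    the low 4 bits a mantissa, giving ((mant | 0x10) << (exp + 3))."""
    if qqic < 128:
        return qqic
    exp = (qqic >> 4) & 0x7
    mant = qqic & 0x0F
    return (mant | 0x10) << (exp + 3)
```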
The Number of Sources (N) field indicates the number of source addresses
present in the query. This field contains 0 for a general query or for a multicast-address-specific query. It is nonzero for multicast-address- and source-specific
query messages.
Figure 8-25
The MLDv2 Query message format, which is compatible with the MLD version 1 message common format. The major difference is the capability to limit or exclude specific
multicast sources from the host’s list of interests.
Figure 8-26 Floating-point format used with MLDv2 Query messages when the Maximum Response Code
value is at least 32,768. In these cases, the delay is set to ((mant | 0x1000) << (exp + 3)) ms.
Figure 8-27
The MLDv2 Querier’s Query Interval Code encodes the interval between MLDv2 queries.
The (unencoded) version of this value is called the Querier’s Query Interval and is measured in seconds. The QQI is computed as follows: QQI = QQIC (if QQIC < 128) and QQI
= ((mant | 0x10) << (exp + 3)) otherwise.
The multicast address records used in the MLDv2 reports (see Figures 8-28
and 8-29) contain indicators of modifications to the source address filter being
used by an IPv6 node (see Chapter 9 on multicast for more information on the
operation of such filters, which describe sets of sending hosts that are or are not of
interest to a particular receiving host).
Figure 8-28
The MLDv2 Report message includes a vector of multicast address records.
Figure 8-29
A multicast address (group) record. Multiple such records may be present in an MLDv2
Report message. The Record Type field is one of the following: MODE_IS_INCLUDE, MODE_IS_EXCLUDE, CHANGE_TO_INCLUDE_MODE, CHANGE_TO_EXCLUDE_MODE, ALLOW_NEW_SOURCES, or BLOCK_OLD_SOURCES. A "lightweight" variant [RFC5790] simplifies
MLDv2 by removing the EXCLUDE modes. The Aux Data Len field contains the amount
of auxiliary data present in the record, in 32-bit-word units. For MLDv2, as specified in
[RFC3810], this field must contain the value 0, indicating no auxiliary data.
The record types fall into three primary categories: current state records, filter mode change records, and source list change records. The first category includes
the MODE_IS_INCLUDE (IS_IN) and MODE_IS_EXCLUDE (IS_EX) types, which
indicate that the filter mode for the address is “include” or “exclude,” respectively,
for the specified sources (at least one of which must be present). The filter mode change records include the CHANGE_TO_INCLUDE_MODE (TO_IN) and CHANGE_TO_EXCLUDE_MODE
(TO_EX) types, which are similar to the current state records but are sent when there is a
change and need not include a nonempty source list. The source list change types,
ALLOW_NEW_SOURCES (ALLOW) and BLOCK_OLD_SOURCES (BLOCK), are used when the filter state (include/exclude) is not changed but only the list
of sources is modified. A modification to MLDv2 (and IGMPv3) removes the
EXCLUDE modes in order to simplify the operation of MLDv2 [RFC5790]. This
“lightweight” approach, called LW-MLDv2 (and LW-IGMPv3), uses the same
previously defined message formats but removes support for the seldom-used
EXCLUDE directives that require multicast routers to keep additional state.
Multicast Router Discovery (MRD) (IGMP Types 48/49/50, ICMPv6 Types 151/152/153)
[RFC4286] describes Multicast Router Discovery (MRD), a method defining special
messages that can be used with ICMPv6 and IGMP to discover the presence of
routers capable of forwarding multicast packets and some of their configuration
parameters. It is envisioned primarily for use in conjunction with “IGMP/MLD
snooping.” IGMP/MLD snooping is a mechanism by which systems other than
hosts and routers (e.g., layer 2 switches) can also learn about the location of network layer multicast routers and interested hosts. We discuss it in more detail in
the context of IGMP in Chapter 9. MRD messages are always sent with the IPv4
TTL or IPv6 Hop Limit field set to 1 with a Router Alert option and may be one of
the following types: Advertisement (151), Solicitation (152), or Termination (153).
Advertisements are sent periodically at a configured interval to indicate a router’s
willingness to forward multicast traffic. The Termination message indicates the
cessation of such willingness. Solicitation messages may be used to induce routers
to produce Advertisement messages. The Advertisement message format is shown
in Figure 8-30.
The Advertisement message is sent from the router’s IP address (a link-local
address for IPv6) to the All Snoopers IP address: for IPv4 and the link-local multicast address ff02::6a for IPv6. A receiver is able to learn the router's
advertising interval and MLD parameters (QQI and QRV, described in more detail
in Chapter 9). Note that the QQI value is the query interval (in seconds), and not
the QQIC (encoded version of the QQI value) as previously described for MLDv2.
The formats of Solicitation and Termination messages are nearly the same (see
Figure 8-31), differing only in the value of the Type field.
Section 8.5 Neighbor Discovery in IPv6
Figure 8-30
The MRD Advertisement message (ICMPv6 type 151; IGMP type 48) contains the
advertisement interval (in seconds) indicating how often unsolicited advertisements
are sent, the sender’s query interval (QQI), and the robustness variable as defined by
MLD. The IP address of the sender is used to indicate to a receiver the router that is able
to forward multicast traffic. The message is sent to the All Snoopers multicast address
(IPv4,; IPv6, ff02::6a).
Figure 8-31 The ICMPv6 MRD Solicitation (ICMPv6 type 152; IGMP type 49) and Termination
(ICMPv6 type 153; IGMP type 50) messages use a common format. MRD messages set
the IPv6 Hop Limit field or IPv4 TTL field to 1 and include the Router Alert option.
Solicitations are sent to the All Routers multicast address (IPv4,; IPv6, ff02::2).
Figure 8-31 shows the (nearly) common format used for Solicitation and Termination messages. The Solicitation message induces a multicast router to send
an Advertisement message on demand. Such messages are sent to the All Routers address: for IPv4 and the link-local multicast address ff02::2 for IPv6.
Termination messages are sent to the All Snoopers IP address to indicate that the
sending router is no longer willing to forward multicast traffic.
8.5 Neighbor Discovery in IPv6
The Neighbor Discovery Protocol in IPv6 (sometimes abbreviated as NDP or
ND) [RFC4861] brings together the Router Discovery and Redirect mechanisms
of ICMPv4 with the address-mapping capabilities provided by ARP. It is also
specified for use in supporting Mobile IPv6. In contrast to ARP and IPv4, which
generally use broadcast addressing (except for Router Discovery), ICMPv6 makes
extensive use of multicast addressing, at both the network and link layers. (Recall
from Chapters 2 and 5 that IPv6 does not even have broadcast addresses.)
ND is designed to allow nodes (routers and hosts) on the same link or segment to find each other, determine if they have bidirectional connectivity, and
determine if a neighbor has become inoperative or unavailable. It also supports
ICMPv4 and ICMPv6: Internet Control Message Protocol
stateless address autoconfiguration (see Chapter 6). All of the ND functionality is
provided by ICMPv6 at or above the network layer, making it largely independent
of the particular link-layer technology employed underneath. However, ND does
prefer to make use of link-layer multicast capabilities (see Chapter 9), and for this
reason operation on non-broadcast- and non-multicast-capable link layers (called
non-broadcast multiple access or NBMA links) may differ somewhat.
The two main parts of ND are Neighbor Solicitation/Advertisement (NS/NA),
which provides the ARP-like function of mapping between network- and linklayer addresses, and Router Solicitation/Advertisement (RS/RA), which provides
the functions of router discovery, Mobile IP agent discovery, and redirects, as
well as some support for autoconfiguration. A secure variant of ND called SEND
[RFC3971] adds authentication and special forms of addressing, primarily by
introducing additional ND options.
ND messages are ICMPv6 messages sent using an IPv6 Hop Limit field value
of 255. Receivers verify that incoming ND messages have this value to protect
against off-link senders that may attempt to spoof local ICMPv6 messages (such
messages would arrive with values less than 255). ND has a rich set of options that
messages may carry. First we discuss the primary message types and then detail
the available options.
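The Hop Limit check can be sketched as follows (a hypothetical helper; real implementations also validate the checksum, length, and options as specified in [RFC4861]):

```python
# ICMPv6 type values for the five core ND messages
ND_TYPES = {133, 134, 135, 136, 137}  # RS, RA, NS, NA, Redirect

def nd_message_plausible(hop_limit, icmp_type, icmp_code):
    """Accept an ND message only if the IPv6 Hop Limit is still 255.
    A packet that has been forwarded by any router (and so could come
    from an off-link spoofer) arrives with a smaller value. ND
    messages also carry ICMP code 0."""
    return icmp_type in ND_TYPES and icmp_code == 0 and hop_limit == 255
```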
ICMPv6 Router Solicitation and Advertisement (ICMPv6 Types 133, 134)
Router Advertisement (RA) messages indicate the presence and capabilities of a
nearby router. They are sent periodically by routers, or in response to a Router
Solicitation (RS) message. The RS message (see Figure 8-32) is used to induce
on-link routers to send RA messages. RS messages are sent to the All Routers
multicast address, ff02::2. A Source Link-Layer Address option is supposed to be
included if the sender of the message is using an IPv6 address other than the
unspecified address (used during autoconfiguration). It is the only valid option
for such messages as of [RFC4861].
Figure 8-32 The ICMPv6 Router Solicitation message is very simple but ordinarily contains a Source
Link-Layer Address option (unlike its ICMPv4 counterpart). It may also contain an
MTU option if an unusual MTU value is in use on the link.
The Router Advertisement (RA) message (see Figure 8-33) is sent by routers to
the All Nodes multicast address (ff02::1) or the unicast address of the requesting
host, if the advertisement is sent in response to a solicitation. RA messages inform
local hosts and other routers of configuration details relevant to the local link.
Figure 8-33
An ICMPv6 Router Advertisement message is sent to the All Nodes multicast address
(ff02::1). Receiving nodes check to make sure that the Hop Limit field is 255, ensuring
that the packet has not been forwarded through a router. The message includes three
flags: M (Managed address configuration), O (Other stateful configuration), and H
(Home Agent).
The Current Hop Limit field specifies the default hop limit hosts are supposed
to use for sending IPv6 datagrams. A value of 0 indicates that the sending router
does not care. The next byte contains a number of bit fields, as summarized and
extended in [RFC5175]. The M (Managed) field indicates that the local assignment
of IPv6 addresses is handled by stateful configuration, and that hosts should avoid
using stateless autoconfiguration. The O (Other) field indicates that other stateful information (that is, other than IPv6 addresses) uses a stateful configuration
mechanism (see Chapter 6). The H (Home Agent) field indicates that the sending
router is willing to act as a home agent for Mobile IPv6 nodes. The Pref (Preference) field gives the level of preference for the sender of the message to be used
as a default router as follows: 01, high; 00, medium (default); 11, low; 10, reserved
(not used). More details about this field are given in [RFC4191]. The P (Proxy) flag
is used in conjunction with the experimental ND proxy facility [RFC4389]. It provides a proxy-ARP-like capability (see Chapter 4) for IPv6.
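The 2-bit Pref encoding can be written as a small lookup (an illustrative helper reflecting [RFC4191], which directs receivers to treat the reserved value as medium):

```python
def router_preference(pref_bits):
    """Map the 2-bit Pref field of an RA to a default-router
    preference: 01 high, 00 medium (default), 11 low. The reserved
    value 10 is treated as medium by receivers, per [RFC4191]."""
    return {0b01: 'high', 0b00: 'medium', 0b11: 'low'}.get(pref_bits & 0b11,
                                                           'medium')
```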
The Router Lifetime field indicates the amount of time during which the sending router can be used as a default next hop, in seconds. If it is set to 0, the sending
router should never be used as a default router. This field applies only to the use of
the sending router as a default router; it does not affect other options carried in the
same message. The Reachable Time field gives the number of milliseconds in which
a node is to assume that another is reachable, assuming mutual communications
have taken place. This is used by the Neighbor Unreachability Detection mechanism
(see Section 8.5.4). The Retransmission Timer field dictates the time, in milliseconds,
during which hosts delay sending successive ND messages.
This message usually includes the Source Link-Layer option (if applicable)
and should include an MTU option if variable-length MTUs are used on the link.
The router should also include Prefix Information options that indicate which
IPv6 prefixes are in use on the local link. Chapter 6 includes an example of how
RS and RA messages are used (e.g., see Figures 6-24 and 6-25).
ICMPv6 Neighbor Solicitation and Advertisement (ICMPv6 Types 135, 136)
The Neighbor Solicitation (NS) message in ICMPv6 (see Figure 8-34) effectively
replaces the ARP Request messages used with IPv4. Its primary purpose is to convert IPv6 addresses to link-layer addresses. However, it is also used for detecting
whether nearby nodes can be reached, and if they can be reached bidirectionally
(that is, whether the nodes can talk to each other). When used to determine address
mappings, it is sent to the Solicited-Node multicast address corresponding to the
IPv6 address contained in the Target Address field (prefix ff02::1:ff00:0/104, combined
with the low-order 24 bits of the solicited IPv6 address). For more details on how
Solicited-Node multicast addressing is used, see Chapter 9. When this message
is used to determine connectivity to a neighbor, it is sent to that neighbor’s IPv6
unicast address instead of the Solicited-Node address.
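The Solicited-Node address construction can be sketched directly (a hypothetical helper using Python's ipaddress module):

```python
import ipaddress

SOLICITED_NODE_PREFIX = int(ipaddress.IPv6Address('ff02::1:ff00:0'))

def solicited_node_address(unicast):
    """Form the Solicited-Node multicast address for an IPv6 address:
    the ff02::1:ff00:0/104 prefix combined with the low-order 24 bits
    of the target address."""
    low24 = int(ipaddress.IPv6Address(unicast)) & 0xFFFFFF
    return ipaddress.IPv6Address(SOLICITED_NODE_PREFIX | low24)
```

For instance, the address fe80::210:18ff:fe00:100b that appears in the example later in this section maps to ff02::1:ff00:100b.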
Figure 8-34
The ICMPv6 Neighbor Solicitation message is similar to the RS message but contains
a target IPv6 address. These messages are sent to Solicited-Node multicast addresses
to provide ARP-like functionality and to unicast addresses to test reachability to other
nodes. NS messages contain a Source Link-Layer Address option on links that use
lower-layer addressing.
The NS message contains the IPv6 address for which the sender is trying
to learn the link-layer address. The message may contain the Source Link-Layer
Address option. This option must be included in networks that use link-layer
addressing when the solicitation is sent to a multicast address and should be
included for unicast solicitations. If the sender of the message is using the unspecified address as its source address (e.g., during duplicate address detection), this
option is not to be included.
The ICMPv6 Neighbor Advertisement (NA) message (see Figure 8-35) serves
the purpose of the ARP Response message in IPv4 in addition to helping with
neighbor unreachability detection (see Section 8.5.4). It is either sent as a response
to an NS message or sent asynchronously when a node’s IPv6 address changes. It is
sent either to the unicast address of the soliciting node, or to the All Nodes multicast
address if the soliciting node used the unspecified address as its source address.
Figure 8-35 The ICMPv6 Neighbor Advertisement message contains the following flags: R indicates
that the sender is a router, S indicates that the advertisement is a response to a solicitation, and O indicates that the message contents should override other cached address
mappings. The Target Address field contains the IPv6 address of the sender of the message (generally, the unicast address of the solicited node from the ND solicitation). A
Target Link-Layer Address option is included to enable ARP-like functionality for IPv6.
The R (Router) field indicates that the sender of the message is a router. This
could change, for example, if a router ceases being a router and becomes only a
host instead. The S (Solicited) field indicates that the advertisement is in response to
a solicitation received earlier. This field is used to verify that bidirectional connectivity between neighbors has been achieved. The O (Override) field indicates that
information in the advertisement should override any previously cached information the receiver of the message has. It is not supposed to be set for solicited
advertisements, for anycast addresses, or in solicited proxy advertisements. It is
supposed to be set in other (solicited or unsolicited) advertisements.
For solicited advertisements, the Target Address field is the IPv6 address being
looked up. For unsolicited advertisements, it is the IPv6 address that corresponds to a
link-layer address that has changed. This message must contain the Target Link-Layer
Address option on networks that support link-layer addressing when the advertisement was solicited via a multicast address. We will now look at a simple example.

Example
Here we see the results of using ICMPv6 Echo Request/Reply, in conjunction with
NDP. The sender is a Windows XP system with IPv6 enabled, and a packet trace
is captured on a nearby Linux system. Some lines have been wrapped for clarity.
C:\> ping6 -s fe80::210:18ff:fe00:100b fe80::211:11ff:fe6f:c603
Pinging fe80::211:11ff:fe6f:c603
from fe80::210:18ff:fe00:100b with 32 bytes of data:
Ping statistics for fe80::211:11ff:fe6f:c603:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
Linux# tcpdump -i eth0 -s1500 -vv -p ip6
tcpdump: listening on eth0,
link-type EN10MB (Ethernet), capture size 1500 bytes
1 21:22:01.389656 fe80::211:11ff:fe6f:c603 > ff02::1:ff00:100b:
[icmp6 sum ok] icmp6: neighbor sol: who has fe80::210:18ff:fe00:100b
(src lladdr: 00:11:11:6f:c6:03)
(len 32, hlim 255)
2 21:22:01.389845 fe80::210:18ff:fe00:100b > fe80::211:11ff:fe6f:c603:
[icmp6 sum ok] icmp6: neighbor adv: tgt is fe80::210:18ff:fe00:100b
(tgt lladdr: 00:10:18:00:10:0b)
(len 32, hlim 255)
3 21:22:02.390713 fe80::210:18ff:fe00:100b > fe80::211:11ff:fe6f:c603:
[icmp6 sum ok] icmp6: echo request seq 18
(len 40, hlim 128)
4 21:22:02.390780 fe80::211:11ff:fe6f:c603 > fe80::210:18ff:fe00:100b:
[icmp6 sum ok] icmp6: echo reply seq 18
(len 40, hlim 64)
... continues ...
The ping6 program is available on Windows XP and Linux. (Later versions
of Windows incorporate the IPv6 functionality into the regular ping program.)
The -s option tells it which source address to use. Recall that with IPv6 a host
may have multiple addresses from which to choose, and here we have chosen one
of its link-local addresses, fe80::211:11ff:fe6f:c603. The trace shows the NS/
NA exchange and an ICMP Echo Request/Reply pair. Observe that all of the ND
messages use IPv6 Hop-Limit field values of 255, and the ICMPv6 Echo Request
and Echo Reply messages use a value of 128 or 64.
The NS message is sent to the multicast address ff02::1:ff00:100b, which
is the Solicited-Node multicast address corresponding to the IPv6 address being
solicited (fe80::210:18ff:fe00:100b). We see that the soliciting node also
includes its own link-layer address, 00:11:11:6f:c6:03, in a Source Link-Layer
Address option.
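The mapping from a unicast address to its Solicited-Node multicast address is mechanical: the low-order 24 bits of the unicast address are appended to the prefix ff02::1:ff00:0/104. A small sketch using Python's standard ipaddress module illustrates the computation for the addresses above:

```python
# Derive the Solicited-Node multicast address for a unicast IPv6 address:
# append the low-order 24 bits of the address to the prefix ff02::1:ff00:0/104.
import ipaddress

def solicited_node(addr: str) -> str:
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF   # last 24 bits
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))   # /104 prefix
    return str(ipaddress.IPv6Address(base | low24))

print(solicited_node("fe80::210:18ff:fe00:100b"))  # ff02::1:ff00:100b
```

Note that many unicast addresses can map to the same Solicited-Node address, which is why the NS message still carries the full target address.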
The NA response message is sent using link-layer (and IP-layer) unicast
addressing back to the soliciting node. The Target Address field contains the value
requested in the solicitation: fe80::210:18ff:fe00:100b. In addition, we see that
the S and O flag fields are set, indicating that the advertisement is in response to
the earlier solicitation and that the information it provides should
override any other information the soliciting node may have cached. The R flag
field is unset, indicating that the responding host is not acting as a router. Finally,
the solicited node includes the most important information in a Target Link-Layer
Address option: the solicited node’s link-layer address of 00:10:18:00:10:0b.
ICMPv6 Inverse Neighbor Discovery Solicitation/Advertisement (ICMPv6 Types 141/142)
The Inverse Neighbor Discovery (IND) facility in IPv6 [RFC3122] originated from a
need to determine IPv6 addresses given link-layer addresses on Frame Relay networks. It resembles reverse ARP, a protocol once used with IPv4 networks primarily
for supporting diskless computers. Its main function is to ascertain the network-layer address(es) corresponding to a known link-layer address. Figure 8-36 shows
the basic format of IND Solicitation and Advertisement messages.
Figure 8-36  The ICMPv6 IND Solicitation (type 141) and Advertisement (type 142) messages have
the same basic format. They are used to map known link-layer addresses to IPv6
addresses in environments where this is useful.
ICMPv4 and ICMPv6: Internet Control Message Protocol
The IND Solicitation message is sent to the All Nodes multicast address at
the IPv6 layer but is carried in a frame addressed to a unicast link-layer address (the one being
looked up). It must contain both a Source Link-Layer Address option and a Target Link-Layer Address option. It may also contain a Source/Target Address
List option and/or an MTU option.
Neighbor Unreachability Detection (NUD)
One of the important features of ND is to detect when reachability between two
systems on the same link has become lost or asymmetric (i.e., is not available in
both directions). This is accomplished using the Neighbor Unreachability Detection
(NUD) algorithm. It is used to manage the neighbor cache present on each node.
The neighbor cache is analogous to the ARP cache described in Chapter 4; it is
a (conceptual) data structure that holds the IPv6-to-link-layer-address mapping
information required to perform direct delivery of IPv6 datagrams to on-link
neighbors as well as information regarding the state of the mapping. Figure 8-37
shows how NUD maintains entries in the neighbor cache.
Figure 8-37  Neighbor Unreachability Detection helps maintain the neighbor cache consisting of
several neighbor entries. Each entry is in one of five states at any given time. Confirmations of reachability are accomplished by receiving Neighbor Advertisement messages
or using other higher-layer protocol information, if available. Unsolicited evidence
includes unsolicited Neighbor and Router Advertisement messages.
Each mapping may be in one of five states: INCOMPLETE, REACHABLE,
STALE, DELAY, or PROBE. The transition diagram in Figure 8-37 shows the initial states to be either INCOMPLETE or STALE. When an IPv6 node has a unicast
datagram to send to a destination, it checks its destination cache to see if an entry
corresponding to the destination is present. If so, and the destination is on-link,
the neighbor cache is consulted to see if the neighbor’s state is REACHABLE.
If so, the datagram is sent using direct delivery (see Chapter 5). If no neighbor
cache entry is present but the destination appears to be on-link, NUD enters the
INCOMPLETE state and sends an NS message. Successful receipt of a solicited NA
message provides confirmation that the node is reachable, and the entry enters
the REACHABLE state. The STALE state corresponds to apparently valid entries
that have not yet been confirmed. This state is entered either when a previously
REACHABLE entry has gone unconfirmed for some time, or when
unsolicited information is received (e.g., a node has changed its address and sent
an unsolicited NA message). These cases suggest that reachability is possible, but
confirmation in the form of a valid NA is still required.
The other states, DELAY and PROBE, are temporary states. DELAY is used when
a packet is sent but ND has no current evidence to suggest that reachability is possible. The state gives upper-layer protocols an opportunity to provide additional evidence. If after DELAY_FIRST_PROBE_TIME seconds (the constant 5) no evidence is
received, the state changes to PROBE. In the PROBE state, ND sends periodic NS
messages (every RetransTimer milliseconds, with constant default value RETRANS_
TIMER equal to 1000). If no evidence has been received after sending MAX_UNICAST_SOLICIT NS messages (default 3), the entry is deleted.
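The state transitions just described can be modeled as a small event-driven state machine. The sketch below is illustrative only: the event names are invented for this model (a real stack drives the transitions from timers and the IPv6 output path), while the constants come from the NUD description above.

```python
# Illustrative sketch of the five NUD states and the transitions described
# above (RFC 4861). Event names are invented for this model.
from enum import Enum, auto

class State(Enum):
    INCOMPLETE = auto()
    REACHABLE = auto()
    STALE = auto()
    DELAY = auto()
    PROBE = auto()

DELAY_FIRST_PROBE_TIME = 5   # seconds spent in DELAY before probing
RETRANS_TIMER = 1000         # ms between NS probes while in PROBE
MAX_UNICAST_SOLICIT = 3      # NS probes sent before the entry is deleted

class NeighborEntry:
    def __init__(self):
        self.state = State.INCOMPLETE  # created when the first NS is sent
        self.probes = 0

    def on_event(self, event):
        s = self.state
        if event == "solicited_na":           # solicited NA confirms reachability
            self.state, self.probes = State.REACHABLE, 0
        elif event == "unsolicited_info":     # e.g., unsolicited NA or RA seen
            self.state = State.STALE
        elif event == "reachable_timeout" and s is State.REACHABLE:
            self.state = State.STALE          # entry not confirmed recently
        elif event == "packet_sent" and s is State.STALE:
            self.state = State.DELAY          # give upper layers a chance
        elif event == "delay_timeout" and s is State.DELAY:
            self.state = State.PROBE          # begin unicast NS probes
        elif event == "probe_timeout" and s is State.PROBE:
            self.probes += 1
            if self.probes >= MAX_UNICAST_SOLICIT:
                self.state = None             # no evidence; entry is deleted
        return self.state
```

A typical lifetime walks the entry through REACHABLE, STALE, DELAY, and PROBE before it is either reconfirmed or deleted.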
Secure Neighbor Discovery (SEND)
SEND [RFC3971] is a special set of enhancements aimed at providing additional
security for ND messages. This is to help resist various spoofing attacks in which
one host or router might masquerade as another (see Section 8.6, Chapter 18, and
[RFC3756] for additional details). It specifically aims to protect against nodes masquerading as others when responding to NS messages. SEND does not use IPsec
(see Chapter 18) but instead its own special mechanism. This mechanism is also
used for securing FMIPv6 handoffs [RFC5269].
SEND operates in a framework with a set of assumptions. First, each SEND-capable router has a certificate, or cryptographic credential, that it can use to prove
its identity to a host. Next, each host is also equipped with a trust anchor—configuration information enabling the credential to be verified. Finally, each node
generates a public/private key pair when configuring the IPv6 addresses it will
use. Details of credentials, trust anchors, key pairs, and other associated security
techniques are given in Chapter 18.
Cryptographically Generated Addresses (CGAs)
Perhaps SEND's most interesting feature is its use of an entirely different type of IPv6
address, the cryptographically generated address (CGA) [RFC3972][RFC4581]
[RFC4982]. This type of address is based on a node’s public key information, thereby
linking the address to the node’s credential. Consequently, a node or address owner
in possession of the corresponding private key is able to prove it is the authorized
user of a particular CGA. CGAs also encode the subnet prefix with which they
are associated so they cannot be moved trivially from one subnet to another. This
approach is quite different from how addresses are typically assigned.
An IPv6 CGA is generated by concatenating a 64-bit subnet prefix with a specially
constructed interface identifier. The CGA interface identifier is computed using
a secure hash function (a hash function believed difficult to invert; see Chapter 18)
called Hash1 with inputs derived from the node’s public key and a special CGA
parameters data structure. These parameters are also used as input to another
secure hash function, Hash2, which provides a hash extension technique that effectively extends the number of bits of output for the hash function, increasing its
security (i.e., strength against an adversary producing a different input resulting
in the same hash value) [A03][RFC6273]. The CGA technique allows for the address
owner’s public key to be self-generated, so this approach can be used without an
accompanying public key infrastructure (PKI) or other trusted third party.
The CGA parameters data structure is shown in Figure 8-38. The Modifier field
is initialized with a random value, and the Collision Count field is initialized to 0.
The structure includes an Extension Fields area that can be adapted for future uses.
Figure 8-38  The SEND method for computing CGAs. The CGA parameters data structure is used as input to
two cryptographic hash functions, Hash1 and Hash2. The Hash2 value must have (16*Sec) initial
0 bits, where Sec is a 3-bit parameter. The Modifier is changed until Hash2 computes appropriately.
The resulting values are used to compute Hash1, which is combined with Sec and the subnet
prefix to produce the CGA.
A 3-bit unsigned parameter called Sec influences how resistant the approach
is to mathematical compromise, which secure hash function is used [RFC4982],
and how computationally expensive the computations are (they are exponential
in the Sec value). The IANA maintains a registry for Sec values [SI]. The Hash1
and Hash2 functions operate on the same CGA parameter block in conjunction
with the Sec value. The address owner begins by picking a random value for the
Modifier field, treating the subnet prefix field as 0, and computing the Hash2 value.
The result is required to have (16*Sec) initial 0 bits, so the input is modified by
incrementing the modifier value by 1 and recomputing Hash2 until the condition
is satisfied. This computation has time complexity O(2^(16*Sec)) and therefore becomes
much more expensive as Sec increases. However, this computation is required
only when the address is initially established.
Once the proper modifier has been found, 59 bits of the Hash1 value are used
in forming the low-order 59 bits of the interface identifier. The top 3 bits constitute
the 3-bit Sec value, and bits 6–7 (from the left) contain two 0 bits (corresponding
to the u and g address bits described in Chapter 2). If the address is found to be
in conflict (e.g., using duplicate address detection, described in Chapter 6), the
Collision Count field is incremented and Hash1 is recomputed. The collision count
value is not permitted to grow beyond 2. Given that address collisions are unlikely
to begin with, multiple such collisions should be considered evidence of a configuration error or attack. Once all the necessary calculations are complete, the full CGA is formed by concatenating the subnet prefix, Sec value, and Hash1
value. Note that if the subnet prefix changes, only Hash1 needs to be recomputed
as the modifier value can remain the same. (The reader interested in alternatives
to CGAs should consult [RFC5535], which describes hash-based addresses, or HBAs.
HBAs are used for multihomed hosts using multiple prefixes in a somewhat different context and with a different form of cryptography that is less computationally expensive, although HBA-CGA-compatible options have also been defined.)
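The generation procedure just described can be sketched in code. The sketch below is simplified and non-interoperable: a raw byte string stands in for the DER-encoded public key that the real CGA parameters data structure carries, the extension fields are omitted, and SHA-1 is assumed (as for Sec values 0 through 2).

```python
# Simplified sketch of CGA generation (RFC 3972). A raw byte string stands
# in for the DER-encoded public key of the real CGA Parameters structure,
# so the output is illustrative rather than interoperable.
import hashlib
import ipaddress
import os

def find_modifier(pubkey: bytes, sec: int) -> bytes:
    """Increment the Modifier until Hash2 has 16*sec leading zero bits."""
    modifier = int.from_bytes(os.urandom(16), "big")   # random starting value
    while True:
        m = (modifier % (1 << 128)).to_bytes(16, "big")
        # Hash2: Subnet Prefix (8 bytes) and Collision Count (1 byte) are zero
        h2 = hashlib.sha1(m + bytes(8) + bytes(1) + pubkey).digest()
        if h2[:2 * sec] == bytes(2 * sec):             # 16*sec leading zero bits
            return m
        modifier += 1                                  # O(2^(16*sec)) expected work

def make_cga(prefix: str, pubkey: bytes, sec: int, collisions: int = 0) -> str:
    modifier = find_modifier(pubkey, sec)
    pfx = ipaddress.IPv6Address(prefix).packed[:8]     # 64-bit subnet prefix
    # Hash1 uses the real subnet prefix and collision count
    h1 = hashlib.sha1(modifier + pfx + bytes([collisions]) + pubkey).digest()
    iid = int.from_bytes(h1[:8], "big")
    iid = (iid & ((1 << 61) - 1)) | (sec << 61)        # top 3 bits carry Sec
    iid &= ~(0b11 << 56)                               # clear the u and g bits
    return str(ipaddress.IPv6Address((int.from_bytes(pfx, "big") << 64) | iid))
```

With Sec = 1, the modifier search performs roughly 2^16 SHA-1 computations before an acceptable Hash2 is found; with larger Sec values the search quickly becomes impractical on ordinary hardware, which is why it is done only once per address.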
At this point we have seen how a CGA is generated but not how it is used for
security. Note that anyone can generate a CGA given a subnet prefix, Sec value,
and their own (or someone else’s) public key. To ensure that a CGA is well formed
and is using an appropriate subnet prefix, it must be verified, a process called CGA
verification. A verifier requires knowledge of the CGA and CGA parameters. The
verification process involves ensuring all of the following: the collision count is
not greater than 2, the CGA’s subnet prefix matches that in the CGA parameters,
Hash1 computed on the CGA parameters matches the interface identifier portion
of the CGA (where the first 3 bits and bits 6 and 7 are “don’t cares”), and the value
of Hash2 computed on the CGA parameters with the Subnet Prefix and Collision
Count fields set to 0 has (16*Sec) initial 0 bits. If all of these checks are successful,
the CGA is a legitimate one for the corresponding subnet prefix. This computation
involves at most two hash functions; it is far simpler than the address generation procedure.
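The four verification checks can be sketched directly. As before, this uses a simplified stand-in for the CGA parameters data structure (a raw public key plus explicit modifier, prefix, and collision-count values rather than the RFC 3972 wire encoding), so it is illustrative only.

```python
# Sketch of the CGA verification checks listed above (RFC 3972), using a
# simplified stand-in for the CGA Parameters structure.
import hashlib
import ipaddress

def verify_cga(addr: str, modifier: bytes, prefix: str,
               collision_count: int, pubkey: bytes) -> bool:
    a = int(ipaddress.IPv6Address(addr))
    iid = a & ((1 << 64) - 1)
    sec = iid >> 61                                   # Sec from top 3 bits
    if collision_count > 2:                           # check 1
        return False
    pfx = ipaddress.IPv6Address(prefix).packed[:8]
    if a >> 64 != int.from_bytes(pfx, "big"):         # check 2: prefix matches
        return False
    h1 = hashlib.sha1(modifier + pfx + bytes([collision_count]) + pubkey).digest()
    mask = ((1 << 61) - 1) & ~(0b11 << 56)            # Sec, u, g are don't-cares
    if int.from_bytes(h1[:8], "big") & mask != iid & mask:    # check 3
        return False
    # check 4: Hash2 with prefix and collision count zeroed has 16*sec zero bits
    h2 = hashlib.sha1(modifier + bytes(8) + bytes(1) + pubkey).digest()
    return h2[:2 * sec] == bytes(2 * sec)
```

Because the Sec value is read from the address itself, an attacker cannot claim a lower security level than the address encodes without failing the Hash2 check.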
To verify that a CGA is being used by its authorized address owner, called signature verification, the owner forms a typed message and attaches a CGA signature
that can be computed only with knowledge of the private key corresponding to
the public key used with the CGA. A verifier forms a data block by concatenating
a special 128-bit type tag with the message. The CGA ownership is verified using
an RSA signature (RSASSA-PKCS1-v1_5 [RFC3447]) with the public key (extracted
from the CGA parameters), data block, and signature as parameters. Generally, a
CGA and its user are considered valid only if both the CGA verification and signature verification processes have completed successfully.
The handling of CGAs and verification is accomplished using two ICMPv6
messages and six options defined in [RFC3971]. The RFC also defines two IANA-managed registries for holding Name Type fields in the Trust Anchor option and
the Cert Type field in the Certificate option. [RFC3972] defines
the CGA Message Type registry, with the 128-bit value 0x086FCA5E10B200C99C8
CE00164277C08 given in [RFC3971] (other values are defined for uses other than
SEND). A registry for Sec values is defined by [RFC4982] but at present provides
only for values 0, 1, and 2, which correspond to use of the SHA-1 secure hash
function using 0, 16, or 32 initial 0 bits for the Hash2 function, respectively. An
extension format defined in [RFC4581] supports TLV encodings that can be used
for future standard extensions, but only one has been defined to date [RFC5535].
We will now describe the two ICMPv6 messages used with SEND and defer discussion of the options until we cover all of the ICMPv6 options in the next section.
Certification Path Solicitation/Advertisement (ICMPv6 Types 148/149)
SEND defines Solicitation and Advertisement messages to help hosts determine
certificates constituting a certification path. This is used for a host to verify the
authenticity of router advertisements. Figure 8-39 shows the Solicitation message.
Figure 8-39 The Certification Path Solicitation message. The sender requests a particular certificate by position index, provid