This course will help prepare you to take the Ethernet Access Network Specialist certification exam.
The EANS certification exam is the first exam you must pass in order to achieve a Calix certification. The test is 50 questions, and you must answer 80% correctly to pass.
You must take the test in a single session. You cannot stop and re‐start the test. After 90 minutes your test will automatically be submitted and scored. Before starting the test make sure you will not be interrupted for the entire duration of the test. This policy is strictly enforced. Once you start the test, you cannot stop until you complete the test or your time is up.
You are allowed one re‐test. If you do not pass on the second attempt, you must re‐register for the test and pay the testing fee again.
Access Network Overview
A communication service provider network has two major sections: the core network and the access network. The access network connects subscribers to the service infrastructure in the core network. Using this network, the service provider delivers voice, video, and data services to residential and business subscribers. This course focuses on building secure, scalable Layer 2 access networks.
The Customer Premises Equipment (or CPE) and the nearest access node form the subscriber service network. These two devices form the "first mile" in the network. Depending on the technology in use and the layout of the network, the first mile can range from a few hundred feet to a hundred kilometers. The access link may be a DSL, active Ethernet, or passive optical network connection.
The CPE, also called the subscriber demarcation device, resides either at the subscriber premises or right outside the subscriber premises. The service provider network and customer premises network meet at the CPE, where the user network interface (or UNI) is located. This device separates the customer network from the
service provider network. An access node directly interfaces with the subscriber demarcation device. The CPE is considered a semi-trusted or untrusted device. The access node uses extra security measures to ensure that the network is not affected by a CPE that is not behaving as expected. Third-party modems serve as the CPE for xDSL access technology, and Calix P-Series ONTs serve as the CPE for GPON and Active Ethernet access technologies. The subscriber may have additional devices beyond the CPE, which can be integrated by enabling the CPE to act as a residential gateway. The subscriber may also use a third-party residential gateway. Residential gateways are discussed later in this course.
The transport network aggregates traffic from access nodes toward an edge or aggregation router. This segment of the network is commonly referred to as the “second mile”. Access nodes integrate transport capability via bridges that use Ethernet switches to transport packet traffic. Access nodes can connect directly to the edge router, or can connect through aggregation nodes in the transport network. The transport function is a logical component and the access node may also integrate the transport capability. Transport networks are designed to deliver traffic with low latency and reduced jitter. Latency is the delay between when the source device sends a packet and when the destination device receives the packet. Jitter is the variation in that delay. As traffic increases on the network, latency and jitter can increase. Service providers must design their transport networks to efficiently deliver services to subscribers.
Beyond the transport network is the core network. The transport network typically
connects to the core network through an edge router, which is also sometimes called an aggregation router. A Broadband Network Gateway, or BNG, can also be used. The functionality of a BNG differs from an edge router, but these specifics are not discussed in this course. Both devices perform the key function of terminating the access network and connecting the access network to the core network and services, such as voice, video, and data services. Edge routers are used more often in Layer 2 access networks and will be used in the examples in this course. The edge router is the barrier between the routed core network and the Ethernet switched access network. Layer 3 IP awareness exists on access nodes and CPE, but the access network does not route traffic using Layer 3 protocols. Some access nodes can include layer 3 routing capabilities. This course focuses on Layer 2 technologies.
Service providers deploy other components in their networks, such as DHCP servers, TFTP servers, domain name servers, IPTV middleware, video encoders, video acquisition equipment, class 5 switches, and element and network management systems. In some cases the video or voice components could be maintained by another provider. All these components are deployed outside the access network and are attached to the core network via a router. The key thing to remember is that all services enter the access network via the edge router.
Ethernet Fundamentals
This module will cover the OSI model and an overview of Ethernet and layer 2 networks.
The Open Systems Interconnection (or OSI) basic reference model describes a network architecture composed of seven layers. The OSI model describes how applications in networks communicate with each other. Each layer reflects different functionality. Collectively, the seven layers are called the stack.
Access nodes often operate at Layer 2 but have Layer 3 awareness. As data descends through the OSI model, a segment, which is the product of the Transport layer, arrives at layer three, the Network layer. Network-relevant information (like the source and destination IP address) is added to the segment, at which point it becomes known as a packet.
The network function that operates on packets is called routing. Routers divide networks into collision domains.
The basic purpose of segmenting devices (such as computers or printers) into collision domains is to better manage the traffic and reduce collisions between devices. Collisions are a fact of life in an Ethernet network – more traffic means more collisions and a less efficient network (meaning slow data delivery).
Layer 2 is the Data Link layer. This is where a packet changes its identity and becomes a frame.
Bridges communicate in terms of MAC (Media Access Control) addresses. A MAC address uniquely identifies every device connected to the network, whether that connection is over copper wire, fiber optic, or WiFi. Bridges are commonly referred to as MAC layer bridges, because that is the component of the Ethernet frame that they use. A bridge also performs a simple form of error checking using the CRC - cyclic redundancy check.
Ethernet hubs connect devices in much the same places switches do, but a hub operates as a simple repeater at the Physical layer, without the "intelligence" or error checking of a switch. Although a hub offers comparable basic connectivity, nearly all mainstream home network equipment today uses switch technology instead of hubs because of the performance benefits of switches. A hub can still be useful for temporarily replacing a broken network switch or when performance is not a critical factor for the network.
Layer 1 defines the path for moving data around a network. This is also where the physical components are defined - the cable size, twist rate for copper cable, and types of connectors - and where wireless standards such as 802.11 take effect.
Ethernet defines signaling and wiring standards for the physical layer of the OSI basic reference model. Originally designed to transmit over coaxial cable at 3 Mbps, Ethernet now runs over many media (including twisted pair per 10BASE-T, 100BASE-T, and 1000BASE-T, as well as fiber optics and wireless) and at various speeds (including 100 Mbps, 1 Gbps, and 10 Gbps).
Ethernet devices communicate with each other by sending small blocks of data inside structured frames. These frames use a specific format that includes the Media Access Control (MAC) addresses that uniquely identify the source and destination devices. The Ethernet frame also contains a header, the payload data, and an error-checking segment. All segments of the Ethernet frame are of a fixed length except the payload, which can be as large as 1500 bytes.
Ethernet Layer 2 standards define framing, addressing, and encapsulation of data on Ethernet links.
Ethernet devices communicate with each other by sending small blocks of data inside structured Ethernet frames. These frames use a specific format that includes address information, called a Media Access Control (or MAC) address. The frame includes the MAC address of the device sending the frame as well as the address of the destination device. The standard also defines how IP packets are encapsulated in Ethernet frames for transmission on the physical medium.
Historically, several different formats have been used for Ethernet frames; most access networks use Ethernet II as defined in IEEE 802.3. The preamble is used for synchronization and consists of eight bytes, the last of which is the start-of-frame delimiter. The frame header contains the destination MAC address, source MAC address, and EtherType. The EtherType identifies which protocol is encapsulated in the payload of the frame. For example, Internet Protocol uses an EtherType value of 0x0800.
The payload is the actual data being transmitted in the frame. The length of the payload varies. The frame checksum is a 32‐bit cyclic redundancy check value used to detect corrupted data within the entire frame.
After a frame has been sent, devices transmit a minimum of 12 bytes of idle line state before transmitting the next frame.
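The field layout described above can be made concrete with a short sketch. The following Python fragment (the sample frame bytes are invented for illustration) unpacks the header of an untagged Ethernet II frame into its destination MAC, source MAC, and EtherType, and separates the payload from the frame checksum.

```python
import struct

def parse_ethernet_ii(frame: bytes):
    """Split a raw Ethernet II frame into header fields, payload, and FCS."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    payload, fcs = frame[14:-4], frame[-4:]        # last 4 bytes are the CRC-32 checksum
    mac = lambda b: ":".join(f"{x:02x}" for x in b)
    return mac(dst), mac(src), hex(ethertype), payload, fcs

# Invented sample: broadcast destination, EtherType 0x0800 (IPv4), dummy payload and FCS
frame = (bytes.fromhex("ffffffffffff") + bytes.fromhex("001c2300aabb") +
         struct.pack("!H", 0x0800) + b"payload-bytes" + b"\x00\x00\x00\x00")
print(parse_ethernet_ii(frame))
```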
The Maximum Transmission Unit (or MTU) is the maximum frame size a device can transmit or receive. The Ethernet standard defines the payload as up to 1500 bytes, but many networks use larger frames, called jumbo frames. Jumbo frames can be up to 9000 bytes. In an access network, the MTU size can vary between devices. Ideally, all devices in a transport network should have a consistent MTU size that is larger than the MTU of the CPE. When a host receives a frame that is larger than its MTU, it discards the frame because it appears corrupt. If a subscriber sends a frame larger than the MTU of the access node or transport network, the frame is dropped.
A MAC address uniquely identifies each node in a network. That node may be an individual computer, a printer, switch, or customer CPE. Ethernet MAC addresses are 48 bits, or 6 bytes. Every Ethernet port on any device should have a unique MAC address that is hard‐
coded by the hardware manufacturer. The IEEE 802 standards stipulate that the MAC address format is six octets, each written as two hexadecimal digits (nibbles). The first three octets represent the vendor's (or manufacturer's) unique identification code, and the last three uniquely identify the device's Network Interface Controller. In this example, the vendor's identification code is 00-1C-23. Vendor codes are regulated by the IEEE, and a manufacturer must submit an application for its code. You can look up vendor codes on the Internet at the address located on the screen here.
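As a quick illustration of the two halves of a MAC address, the sketch below splits an address into its vendor OUI and its device-specific portion; the device portion shown is invented, while the OUI matches the 00-1C-23 example from the narration.

```python
def split_mac(mac: str):
    """Return (vendor OUI, device-specific NIC identifier) for a colon- or dash-separated MAC."""
    octets = mac.replace("-", ":").split(":")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, nic = split_mac("00-1C-23-4A-BC-DE")   # 4A-BC-DE is an invented device identifier
print(oui, nic)                             # 00:1C:23  4A:BC:DE
```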
Ethernet Networking
As discussed earlier, switches keep track of where hosts are connected by reading the source MAC address of incoming frames. Traffic is switched to interfaces based on the destination MAC address. There are three types of MAC addresses: unicast, multicast, and broadcast.
A unicast address identifies a single unique Ethernet port. Multicast addresses are used to transmit to a group of devices for a common purpose, such as multicast video. Multicast addresses have a “1” in the last bit of the first byte of the MAC address.
A broadcast address is sent to all devices on the network. A broadcast address has a “1” in every bit of the MAC address.
A source MAC address in a frame MUST be a unicast address. Only the destination MAC address can be a broadcast or multicast address.
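A short sketch of the classification rule described above: the broadcast address has every bit set, and a multicast address has the least-significant bit of the first byte set to 1. The sample addresses are illustrative.

```python
def mac_type(mac: str) -> str:
    """Classify a MAC address as unicast, multicast, or broadcast."""
    octets = [int(o, 16) for o in mac.replace("-", ":").split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"            # every bit of the address is 1
    if octets[0] & 0x01:
        return "multicast"            # least-significant bit of the first byte is 1
    return "unicast"

for addr in ("00:1C:23:4A:BC:DE", "01:00:5E:01:02:03", "FF:FF:FF:FF:FF:FF"):
    print(addr, "->", mac_type(addr))
```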
Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. In a small, two‐host network, the source and destination for data packets is simple. Computer A sends data to Computer B, and vice versa.
Adding more hosts (such as computers or other devices) to the network increases the level of complexity in the network. As hosts are added, the likelihood of data collisions increases. This example shows devices connected in a network using shared media, such as coaxial cable or a hub. Collisions occur when two hosts send data at the same time. Multiple access in a shared-media network uses half-duplex mode with Carrier Sense Multiple Access with Collision Detection, or CSMA/CD, to help avoid collisions. When a device has data to send, it "listens" for activity on the media. If it senses no traffic, it transmits the data. In the event that two devices happen to transmit simultaneously (thus causing a collision), they both wait for a short random backoff period and then retransmit the data.
As networks grew, hubs were introduced. Ethernet hubs were used in 10‐Mbps and 100‐
Mbps twisted‐pair Ethernet networks. A hub acts as a simple repeater, repeating the signal from each port onto every other port. All frames are transmitted to all ports, regardless of where the intended recipient is located. Devices determine who the data is for by examining the destination address within the data frame. Only the intended recipients keep the frames. All other devices discard the frames. Since each transmission is repeated to all devices, a hub and its attached devices operate in Half Duplex mode and require the use of CSMA/CD in order to allow multiple access to the media.
Ethernet hubs were used in 10‐Mbps and 100‐Mbps networks, but because they flood data to all ports, they are not used in larger networks or access networks.
A switch provides a point-to-point connection between each host and the switch. This means the two can operate in full duplex mode without collisions. In full duplex mode only two endpoints share the media, and each endpoint has a dedicated transmit path to the other. Both endpoints can transmit at the full bandwidth of the media simultaneously. CSMA/CD is disabled because only two devices are sending and receiving data. The switch manages all traffic between connected hosts. Today hubs are rarely used and have been replaced by switches in most networks.
When a switch receives a frame, it looks up the destination MAC address in its bridge table. A destination lookup failure occurs if the address isn't found in the table. When this happens the frame is flooded out all ports, except the port it came from. Even though it is called a failure, this is a normal switching operation and is not an error condition. For example, if host A sends data destined for host C, the switch floods the frame out all other ports because it does not know where host C is connected.
Ethernet switches process and forward data at layer 2 of the OSI model. A switch is a learning bridge that switches frames based on the destination MAC address. Switches operate through a store-and-forward mechanism. As unicast frames arrive on a port, the switch learns the source address of the attached device, or devices. Once a switch has learned the location of a host, it will only transmit, or switch, frames destined for that address out the associated port. Switches maintain a table of ports and associated MAC addresses. For example, as Host A transmits data, the switch learns that it is connected to port 1. If Host B sends data destined for Host A, the switch will only forward the data to port 1 where Host A is connected. Likewise, the switch learns that Host B is connected to port 8.
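The learn-and-forward behavior described in the last two slides can be summarized in a few lines of Python. This is a simplified sketch (no aging, no VLANs); the port numbers and MAC labels are invented.

```python
class LearningBridge:
    def __init__(self):
        self.table = {}                              # learned MAC address -> port

    def receive(self, src_mac, dst_mac, in_port, all_ports):
        self.table[src_mac] = in_port                # learn the source MAC on the arrival port
        if dst_mac in self.table:                    # known destination: switch to one port
            return [self.table[dst_mac]]
        # destination lookup failure: flood to every port except the arrival port
        return [p for p in all_ports if p != in_port]

bridge = LearningBridge()
ports = [1, 2, 3, 4]
print(bridge.receive("MAC-A", "MAC-B", in_port=1, all_ports=ports))   # B unknown -> flood [2, 3, 4]
print(bridge.receive("MAC-B", "MAC-A", in_port=3, all_ports=ports))   # A already learned -> [1]
```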
A Virtual Local Area Network (or VLAN) is a collection of devices in a larger network grouped together to segment traffic. For example, a corporate network might separate Accounting from Human Resources using VLANs. Multiple VLANs may be transmitted on the same LAN segment, but the traffic on one VLAN is logically isolated from the traffic on another VLAN. In this example, traffic in the accounting VLAN is separated from traffic in the HR VLAN. A VLAN has the same attributes as a physical LAN, but it allows for hosts to be grouped together even if they are not connected to the same switch. VLAN membership can be configured through software instead of physically relocating devices or connections.
VLANs limit broadcast traffic by limiting broadcast domains to VLAN members. In a physical LAN, all connected devices are part of the broadcast domain, meaning all devices receive broadcast traffic. By using VLANs, switches limit broadcast domains and only transmit broadcast traffic to members of the VLAN.
The purpose of IEEE 802.1ad is to define how service providers can separate customer traffic using one or more VLAN tags. The standard defines two tags: the S-Tag and the C-Tag. In an Ethernet frame the outer tag is the S-Tag, or service tag. The second VLAN tag is the C-Tag, or customer tag. Although more than two tags can be added to frames, this practice is rare in access networks, so this course focuses on the use of the S- and C-tags.
The term Provider Bridge is used to refer to Ethernet devices that perform switching functions in a service provider network. This includes equipment that connects subscribers, provides transport, or connects to the edge router and core network. All Calix access platforms - B6, C7, and E-Series products - act as a provider bridge. Provider Edge Bridges connect directly to customers, or subscribers. They are at the subscriber edge of the access network. These are commonly called access nodes. TR-101 defines an access node as being able to terminate the ATM layer of a DSL connection, but the term "access node" also refers to equipment used to connect GPON and Active Ethernet subscribers. TR-101 also defines aggregation nodes, which aggregate traffic from access nodes. "Aggregation node" is a functional description, and in many cases an access node aggregates traffic from subtended nodes while simultaneously providing subscriber connections. The important thing to keep in mind is that all layer 2 access nodes are provider bridges, and some operate at the edge of the network. These are functional descriptions and a single device may act in more than one role.
Ethernet interfaces that connect to other access nodes, transport nodes, and core networks are called Provider Network Ports. They are also called "trunk" ports and are considered trusted interfaces. Trunk ports are only S-tag aware; they only switch frames based on the S-tag - they do not process C-tags.
Ethernet interfaces that connect to subscribers and receive and transmit frames for a single customer are referred to as Customer Edge Ports. These interfaces are at the edge of the access network and are sometimes simply called edge interfaces. They can be associated with physical ports, as is sometimes the case with ONT data service, but are often provisioning objects within the access node, as with xDSL service. Edge ports are considered “untrusted” interfaces. Edge interfaces receive tagged, untagged, or priority‐tagged frames from the subscriber. Based on rules, the edge interface can add or change VLAN tags, including C‐tags and S‐
tags.
In this example, Internet data service is transported through the access network using VLAN 4081 in a VLAN‐per‐service model. As untagged frames arrive at the edge interface from the subscriber, the access node inserts a VLAN tag with VLAN ID 4081. Because Internet data is considered “best‐effort” traffic, the P‐bit is set to zero. The access node then sends the frames out to the transport network.
The process for a VLAN‐per‐port model is essentially the same, except that each edge interface would add a different VLAN ID to uniquely identify each subscriber.
In the downstream direction, the opposite action is performed. The access node switches the frames to the edge interface based on destination MAC address. The VLAN tag is removed and the frame is sent to the subscriber.
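Here is a minimal sketch of the tag push and pop just described: an 802.1Q tag carrying VLAN ID 4081 and P-bit 0 is inserted after the MAC addresses on the way upstream and removed again downstream. An S-tag is built the same way but uses TPID 0x88A8; the frame bytes below are dummies.

```python
import struct

def push_vlan(frame: bytes, vid: int, pbit: int = 0, tpid: int = 0x8100) -> bytes:
    """Insert a VLAN tag after the destination and source MAC addresses."""
    tci = (pbit << 13) | vid                       # priority (3 bits), DEI (1 bit), VLAN ID (12 bits)
    return frame[:12] + struct.pack("!HH", tpid, tci) + frame[12:]

def pop_vlan(frame: bytes) -> bytes:
    """Remove the outermost VLAN tag before sending the frame toward the subscriber."""
    return frame[:12] + frame[16:]

untagged = bytes(12) + struct.pack("!H", 0x0800) + b"ip-payload"   # dummy MACs + EtherType
upstream = push_vlan(untagged, vid=4081, pbit=0)                   # best-effort Internet data
assert pop_vlan(upstream) == untagged
```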
VLAN double‐tagging adds an inner VLAN tag (C‐tag) and an outer VLAN tag (S‐tag) to Ethernet traffic from downstream subscribers. The C‐tag is typically used on a per‐port basis. An edge interface can add both an S‐tag and C‐tag to frames. In this example 4081 is added as the S‐VLAN and 101 is added as the customer VLAN. As frames arrive at the edge interface, the VLAN tags are inserted into the frame. In the downstream direction, the access node removes the VLAN tags.
Here is a capture of an Ethernet frame taken from Wireshark. Let’s zoom into the frame.
This is the Ethernet header summary. Notice that this frame, like most Ethernet frames today, follows the Ethernet II format, and that it has a VLAN tag. Underneath, you can see the source and destination MAC addresses.
Next, the VLAN tag summary shows the VLAN ID, 1666, and categorized priority, best effort.
The identifier shows this is an 802.1Q VLAN‐tagged frame.
Here we see the numerical priority, which has a P‐bit value of 0.
Finally, we again see the VLAN ID of 1666; notice that it also appears in binary format.
The transport network aggregates traffic from access nodes toward an edge router. Access nodes integrate transport capability via bridges that use Ethernet switches to transport packet traffic. This section of the course explains transport protocols and discusses common transport topologies.
This section includes an overview of transport networks, RSTP, ERPS, Link Aggregation, and network topologies.
In a Layer 2 access network, the transport network should efficiently transport traffic between the subscriber and the core network. It should provide redundancy to minimize service disruptions if a link goes down. The physical and logical design of the network can help minimize service disruptions. Rapid Spanning Tree Protocol (RSTP) and Ethernet Ring Protection Switching (ERPS) are topology protocols used in the transport network. These protocols allow loop‐
free logical paths in a ring with redundant physical links. Link Aggregation Groups can be used to provide point-to-point link protection in a network. Network topology protocols such as RSTP and ERPS are only required when access nodes are connected in redundant topologies. Most access networks have multiple physical paths through the network and use one or more protocols to manage traffic flows.
Different network protocols can be used in the transport network to provide link protection and rapid switchover. Protocols used in ring topologies include Ethernet Ring Protection Switching, Rapid Spanning Tree Protocol, and Ethernet Protection Switching, or EPS. EPS is only used in the Calix B6 products, so this training focuses on ERPS and RSTP since they are more widely deployed. Protocols used in point-to-point topologies include link aggregation and RSTP.
Rapid Spanning Tree Protocol
In an Ethernet network, broadcast frames are sent out all other ports. If devices are configured in a ring, as shown here, broadcast frames would loop continuously. Eventually there would be so much traffic that the network would go down. This is called a broadcast storm. Spanning Tree Protocol (or STP) was developed to allow link redundancy without creating forwarding loops. It is standardized as IEEE 802.1D. STP disables the forwarding of frames on selected interfaces within the network to create a loop‐free forwarding topology.
STP works by creating a tree topology. At the base of the tree is a "root" bridge (or switch). Each bridge in the network determines the "best" path to reach the root bridge. Alternate paths to reach the root bridge are disabled. The root bridge is determined by a user-
configured bridge priority. The lowest bridge priority in the network is selected as the root bridge. In the event of a tie, the bridge with the lowest MAC address wins the tie.
Spanning Tree Protocol uses an algorithm to determine the correct topology at all times. The port with the shortest path to the root is enabled; all other paths to the root are disabled. The port facing the root is called the root port. The active port facing the "leaf" nodes is called the designated port. Alternate ports sit on the links that STP has disabled to prevent loops. If there is a link failure, alternate ports start forwarding traffic.
In STP, ports in an Ethernet switch have different roles. Root ports carry traffic to the root bridge. Designated ports carry traffic away from the root bridge. Alternate ports are on the links that STP has blocked. If there is a failure in the network, an alternate port may become a root port.
STP automatically determines which Ethernet switch becomes the root bridge. Each Ethernet device has a bridge priority value. This is set by default, but network engineers often change the bridge priority value to force a certain device to become the root bridge. The device with the lowest bridge priority value is elected the root bridge. Values from 0 to 61,440 are valid bridge priority values, but only in increments of 4096. For example, 24,576 is a valid bridge priority value, but 24,000 is not. If two or more devices have the same bridge priority value, then the device with the lowest MAC address is elected root. In an access network, Calix recommends making the device closest to the edge router the root bridge.
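The election rule - lowest bridge priority wins, with the lowest MAC address breaking ties - can be expressed in a few lines. The bridge names, priorities, and MAC addresses below are invented.

```python
bridges = [
    {"name": "edge-node", "priority": 24576, "mac": "00:1c:23:00:00:10"},
    {"name": "access-1",  "priority": 32768, "mac": "00:1c:23:00:00:02"},
    {"name": "access-2",  "priority": 32768, "mac": "00:1c:23:00:00:01"},
]

def bridge_id(b):
    # the lower (priority, MAC) pair wins; the MAC is compared numerically
    return (b["priority"], int(b["mac"].replace(":", ""), 16))

root = min(bridges, key=bridge_id)
print(root["name"])   # edge-node: its priority (24,576) is the lowest in the network
```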
The recommended path cost values for STP are based on the port rate used in the link. In
general, the higher the port speed, the lower the link cost value. STP considers the best path back to the root bridge as the path having the lowest link cost value. RSTP and STP use different link cost values.
When a new link in the network is enabled, the entire tree reconverges. This is called a Topology Change. Switches send Bridge Protocol Data Units to inform adjacent switches of the topology change. The path to the root from each node is based on the lowest total link cost, which is based on link speed.
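As a sketch of the path selection just described, the fragment below sums the per-link costs of two candidate paths back to the root and keeps the cheaper one. The topology is invented, and the costs use the RSTP-style recommended values of 20,000 for a 1 Gbps link and 200,000 for a 100 Mbps link.

```python
# Candidate paths from one bridge back to the root, expressed as lists of link costs.
candidate_paths = {
    "via port 1 (two 1 Gbps hops)":  [20_000, 20_000],
    "via port 2 (one 100 Mbps hop)": [200_000],
}

best_name, best_costs = min(candidate_paths.items(), key=lambda item: sum(item[1]))
print(best_name, "-> root path cost", sum(best_costs))   # port 1 becomes the root port
```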
STP uses bridge protocol data unit (or BPDU) messages to update adjacent nodes about topology changes. BPDU messages are sent periodically to keep the topology fresh.
There are three types of BPDUs:
‐ Configuration BPDU, which is used for Spanning Tree computations,
‐ Topology Change Notification, which is used to announce a change in network topology, and
‐ Topology Change Notification Acknowledgement, which is sent to acknowledge other notifications.
By default, BPDUs are sent on each port every two seconds. A bridge sends a BPDU frame using the unique MAC address of the port itself as the source address, and the STP multicast address 01:80:C2:00:00:00 as the destination address. Ethernet bridges process BPDUs when they are received on a port; BPDUs are not forwarded to other ports. BPDUs are sent untagged.
When it was introduced, STP was effective at preventing loops and providing a backup path in case a link fails. But the original version of STP has some limitations. The biggest limitation is convergence time. Convergence time is the time it takes to reconfigure after a topology change, which is necessary when a link goes down. When there is a change in a network, STP can take from 30 to 50 seconds to reconverge. Generally, the more nodes there are in a network, the longer the convergence time. Backup paths are computed only after a failure - if possible backup paths were precomputed before a failure, convergence time would be shorter. Finally, there can be only one instance of STP in a network, so all VLANs must use the same spanning tree.
Ethernet Ring Protection Switching
With ERPS, you designate one node in the ring as the master node and all others as transit nodes. The master node protects the Ethernet ring from network loops and coordinates bridge table flushing. You designate a primary port and secondary port on each node. Primary ports forward traffic while one of the secondary ports in the ring blocks traffic to prevent loops. If there is a break along the ring the blocking port is made forwarding.
Create a control VLAN on each node in a ring. The control VLAN passes control packets between all nodes in the ring. The master node uses this control VLAN to send health packets around the ring. Health packets are used to determine if a ring failure has occurred. Once the control VLAN is provisioned and the ring is operational, any change to the control VLAN affects service. A master node normally blocks data VLANs on the secondary port, but the control VLAN is not blocked since the master node needs it to verify the health of the ring. Control VLAN frames received on the secondary port are not forwarded.
If there is a break along the ring the blocking port is made forwarding. This happens automatically in under 100 milliseconds. The master node periodically sends a health protocol data unit, or PDU, out the primary ring port. The master node then expects to receive the health PDU on the secondary port. If the health PDU is not received by the designated timeframe, a ring failure is declared. When a ring failure occurs, the master node flushes its bridging table for the ring‐based VLANs, except for the control VLAN. The master node then enables the secondary port for the ring‐based VLANs and sends a RING_DOWN PDU out both the primary and secondary ports, instructing the transit nodes to flush their local bridging tables and begin learning the new topology. The transit nodes also immediately send RING_DOWN PDUs out the ports that are up.
There are other ways the master node can detect a link failure. A loss of signal from the link will generate a link down message. Also, IEEE 802.1ag connectivity check can generate a link down message.
Link Aggregation
Link aggregation “bundles” or “bonds” multiple physical Ethernet ports into a single pipe. Link aggregation provides link redundancy in case one of the links fails. Link aggregation is managed by Link Aggregation Control Protocol (or LACP). LACP sends
link aggregation protocol data units down each link in the group, which helps detect link failures. LACP is also able to provide some load-balancing capability to the group. The link used for each frame is decided based on an algorithm involving the source and destination MAC addresses. A LAG may run in either Active or Passive mode. A group in Active mode negotiates links via LACP and actively attempts to start the LACP conversation. A group in Passive mode negotiates links using LACP but does not attempt to start the conversation; the far end must start it. At least one end (or both) must be provisioned in Active mode for the group to ever come up.
Link aggregation is defined in more detail in IEEE 802.3ad (now part of IEEE 802.1AX).
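Hashing algorithms vary by vendor, but the idea of keeping a conversation on one member link can be sketched as follows: the source and destination MAC addresses are hashed, and the result selects a link, so frames for a given MAC pair always take the same path. The addresses are invented.

```python
def lag_member(src_mac: str, dst_mac: str, active_links: int) -> int:
    """Pick a LAG member link by hashing the MAC pair; a given flow always maps to the same link."""
    key = int(src_mac.replace(":", ""), 16) ^ int(dst_mac.replace(":", ""), 16)
    return key % active_links

print(lag_member("00:1c:23:00:00:01", "00:1c:23:00:00:99", active_links=2))
```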
Subscriber Connections
This section covers Internet Protocol version 4 addressing, Address Resolution Protocol, Dynamic Host Configuration Protocol, and Point‐to‐Point Protocol.
Internet Protocol Version 4 Addressing
IP addresses have 32 bits. IP addresses are binary, but are often expressed in base‐10 numerals because it is easier to read. For added convenience, the address is presented in four groups of eight bits separated by a dot. Each host in a network must have a unique IP address.
An IP address has two parts: a network ID and a host ID. The network ID comes first, followed by the host ID. How much of the address is the network portion and how much is the host portion depends on the subnet mask that is used. When a subnet mask is applied to an IP address, the "1" bits in the mask - shown here as "255" in base-10 - mark the portion of the IP address that is the network ID.
Here is a recap of how the IP address and subnet mask work together to determine the network and host. Here is an IP address represented in binary, and here is the subnet mask that is applied to it. The portions of the subnet mask that are 1s indicate the network ID; the 0s indicate the host portion. Therefore, the network number would be 43.56.33.0. For more information on IP subnetting, visit subnet-calculator.com.
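The same calculation can be checked with Python's standard ipaddress module. The host portion of the address below is invented; the mask and the resulting network (43.56.33.0) follow the narration's example.

```python
import ipaddress

# Host address is illustrative; a /24 mask on 43.56.33.x yields network 43.56.33.0.
iface = ipaddress.IPv4Interface("43.56.33.5/255.255.255.0")
print(iface.network)            # 43.56.33.0/24

# The same result, done by hand with a bitwise AND of the address and the mask:
addr = int(ipaddress.IPv4Address("43.56.33.5"))
mask = int(ipaddress.IPv4Address("255.255.255.0"))
print(ipaddress.IPv4Address(addr & mask))   # 43.56.33.0
```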
There are five classes of IP address, labeled class A through class E. In a class A address, the first binary bit in the address is always 0. In numeric notation, this encompasses the range of addresses from 0.0.0.0 through 127.255.255.255. In a class A address, the first octet is the network number, and the last three octets are used for hosts. This means that there are only 128 class A networks, but each network is very large, containing up to 16 million hosts.
In class B, the first two bits of a binary IP address are always 1 0. In numeric form this represents addresses 128.0.0.0 through 191.255.255.255. In class B, the first two octets are used for the network, and the last two octets are used for the host. Approximately 16,000 class B networks are available, each providing approximately 65,000 hosts.
In class C, the first three bits of a binary IP address are always 1 1 0. This represents an address range from 192.0.0.0 through 223.255.255.255. In class C, the first three octets represent the network number, so approximately 2 million class C networks are available. Since the last octet is used for hosts, each class C network has only 256 addresses (254 usable hosts).
Class D is used for multicast traffic. Access networks often use class D addresses for IP video. Class E is experimental, and is generally not used. Some of the IP address ranges are not available for use. For instance, the 127 network range is reserved.
Address Resolution Protocol
Address Resolution Protocol (or ARP) is an automatic process that matches layer‐3 IP
addresses with layer-2 MAC addresses. ARP uses a request-and-reply protocol.
For a layer 3 device to send a packet to another layer 3 device, it must know the layer 2 address of the destination.
An ARP request is used by the sender to learn the MAC address of the destination. An ARP request is a layer 2 broadcast Ethernet frame with a destination MAC address of FF:FF:FF:FF:FF:FF. An ARP reply is sent in response to an ARP request. It is a unicast frame sent to the Source MAC address of the ARP request. Only the host with the requested IP address should reply. This tells the source of the ARP request what the MAC address is of the host with the requested IP address.
In this example, Host A and Host B are directly connected to the same Ethernet switch and are configured with IP addresses in the same subnet. The ARP cache of all devices is empty. Host A wants to send a packet to Host B. To do so, Host A must learn the layer 2 (MAC) address of Host B, so Host A sends an ARP request. Because the ARP request is a broadcast, the switch sends it to all ports, and upon receiving the request, the switch and Host B each record Host A's address in their tables.
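A toy sketch of the request/reply exchange: the requester broadcasts, every receiver learns the sender's IP-to-MAC mapping, and only the host that owns the requested IP replies. All addresses are invented.

```python
arp_cache = {}                                   # IP address -> MAC address (Host A's cache)

def resolve(target_ip, my_ip, my_mac, hosts_on_segment):
    """Return the MAC for target_ip, broadcasting an ARP request if it is not cached."""
    if target_ip in arp_cache:
        return arp_cache[target_ip]
    # ARP request: broadcast "who has target_ip? tell my_ip" to every host on the segment
    for host in hosts_on_segment:
        host.setdefault("arp_cache", {})[my_ip] = my_mac   # receivers learn the sender's mapping
        if host["ip"] == target_ip:                        # only the owner sends a unicast reply
            arp_cache[target_ip] = host["mac"]
    return arp_cache.get(target_ip)

host_b = {"ip": "10.1.1.20", "mac": "00:1c:23:00:00:0b"}
print(resolve("10.1.1.20", "10.1.1.10", "00:1c:23:00:00:0a", [host_b]))
```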
Dynamic Host Configuration Protocol
Dynamic Host Configuration Protocol (or DHCP) is a way to automatically assign IP addresses to devices. A pool of IP addresses is kept in a DHCP server. When a device comes online, it knows to request an IP address from the DHCP server. DHCP provides boot-time information to the requesting device using a simple handshake.
The device, called a client, broadcasts a DISCOVER message to the server. The server checks its database and then broadcasts an OFFER of lease terms to the client. The client, if the terms are acceptable, broadcasts a request to the server for the offered lease terms. The server sets up the lease and broadcasts an ACK to the client. At this point, the client can use the address assigned by the DHCP server.
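The four-step handshake (often remembered as DORA: Discover, Offer, Request, Acknowledge) can be sketched as follows; the address pool and client MAC are invented.

```python
pool = ["192.168.1.10", "192.168.1.11"]       # addresses held by the DHCP server (illustrative)
leases = {}

def dhcp_handshake(client_mac: str) -> str:
    print("DISCOVER  client", client_mac, "broadcasts, looking for a server")
    offer = pool.pop(0)
    print("OFFER     server proposes", offer, "plus lease terms")
    print("REQUEST   client asks to use", offer)
    leases[client_mac] = offer
    print("ACK       server records the lease; client may now use", offer)
    return offer

dhcp_handshake("00:1c:23:4a:bc:de")
```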
DHCP Option number 82 allows for a set of parameters (called suboptions) to be added by a relay agent to DHCP messages.
Suboption 1 is the Circuit ID. It defines the device and port on which the DHCP traffic entered the access network.
Suboption 2 is the Remote ID. This provides additional information about the arrival port and sometimes the remote device connected to it. It includes the MAC address of the arrival port and a description, which is a text string associated with the port. This allows operators to carry subscriber-specific information such as a phone number or street address.
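Both suboptions are simple type-length-value fields inside option 82. The sketch below encodes them; the circuit ID and remote ID strings are purely illustrative, since the actual formats vary by access node and operator policy.

```python
def option82(circuit_id: str, remote_id: str) -> bytes:
    """Encode DHCP option 82 with suboption 1 (Circuit ID) and suboption 2 (Remote ID)."""
    def suboption(code: int, value: str) -> bytes:
        data = value.encode()
        return bytes([code, len(data)]) + data       # type, length, value
    payload = suboption(1, circuit_id) + suboption(2, remote_id)
    return bytes([82, len(payload)]) + payload        # outer option 82 wrapper

# Illustrative identifiers only; real formats depend on the relay agent's configuration.
print(option82("access-node-1 eth 1/1/3:4081", "00:1c:23:4a:bc:de").hex())
```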
Point‐to‐Point Protocol Over Ethernet and ATM
Point-to-Point Protocol is a layer 2 protocol that sets up and maintains a connection between a DSL access concentrator and a subscriber's DSL modem or PC. Point-to-Point Protocol is responsible for triple-A functions: authentication, authorization, and accounting.
DSL services may run over ATM in parts of a network, and over Ethernet in other parts. In this example, Ethernet is used for transport and ATM is used from the access interface to the subscriber. At this interface, PPPoA to PPPoE conversion is used to convert the PPP packets from ATM to Ethernet, and vice versa in the opposite direction. Nowadays, PPPoE
to PPPoA conversion is enabled by default in most access products. This is just one example; conversion can happen elsewhere in a network depending on where Ethernet and ATM converge.
To reduce PPPoE discovery traffic, an access device can act as an intermediate agent. An intermediate agent filters PPP packets and blocks unnecessary PPPoE Active Discovery Initiation (PADI) and PPPoE Active Discovery Offer (PADO) packets. It can send PADI packets directly to a single access concentrator and forward the PADO response directly to the subscriber device.
Managing Layer 2 Traffic
This module covers quality of service, VLAN models, and subscriber edge models.
Quality of Service
Different types of traffic in an access network can be given different priorities using a priority bit, or P-bit. In cases of network congestion, traffic with a higher P-bit value is given priority over traffic with a lower P-bit; in other words, lower-priority traffic may get dropped or delayed in buffers. The P-bit value is part of the Ethernet frame header. This chart shows the recommended P-bit values for different types of traffic. Data traffic has the lowest priority because it usually is not time-sensitive and can be retransmitted if some data is lost. Video and voice traffic are time-sensitive, since data loss is perceived as an interruption in service, so they are given higher priorities.
P-bit assignment occurs at layer 2, but layer 3 also has a traffic priority system called Differentiated Services Code Point (or DSCP). The DSCP field is located in the header of an IP packet. You can assign a DSCP priority to different traffic types. The main challenge in an access network is to make sure that the layer-2 P-bit and layer-3 IP priorities match up as traffic leaves the access network and reaches the edge router, and vice versa.
In this example we show voice traffic in a network. In the core, voice traffic is given a DSCP value of 40 and P‐bit of 5. At the edge router, these values are mapped to each other, so as layer 3 voice traffic leaves the router and moves to layer 2, it maintains a high priority. Now, it is the job of the access network to maintain this priority. Since voice traffic leaves the edge router on VLAN 4078, we just need to add P‐bit 5 to all traffic entering the access network on VLAN 4078. On ONTs, we know that traffic coming in on voice ports is voice traffic, so we assign both VLAN 4078 and P‐bit 5 to traffic entering on these ports. As part of the same action, in the opposite direction, the VLAN tag and P‐bit value is removed as voice traffic leaves the access network heading toward the subscriber.
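A minimal sketch of the mapping performed at the edge: the voice row (DSCP 40 to P-bit 5) comes from the narration's example, while the other entry is a typical best-effort default rather than a value taken from this course.

```python
# Illustrative DSCP-to-P-bit mapping applied as traffic crosses between layer 3 and layer 2.
DSCP_TO_PBIT = {0: 0, 40: 5}          # best-effort data, voice

def pbit_for(dscp: int) -> int:
    return DSCP_TO_PBIT.get(dscp, 0)  # unknown markings fall back to best effort

print(pbit_for(40))   # 5 - voice keeps its high priority as it enters the layer 2 access network
```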
VLAN Models
In a VLAN-per-service model, voice, video, and data services are transported through the network on separate service VLANs. For example, voice over IP service may be assigned to VLAN 4074, IPTV to VLAN 4070, and Internet data to VLAN 4081. As traffic from subscribers enters the access network, it is assigned to the service VLAN and transported upstream to the core network. This is sometimes referred to as an N:1 VLAN because multiple subscribers share a VLAN.
In a VLAN‐per‐port model each subscriber is assigned a VLAN. This is typically done for Internet data service. As frames from the subscriber enter the access network, a unique tag is added to the frames. For larger networks, a single VLAN tag is not scalable because you are limited to 4094 VLANs.
A more scalable version of the VLAN‐per‐port model uses two tags. As frames from the subscriber enter the access network, each subscriber is assigned a unique C‐VLAN ID. An S‐
tag is added to group subscribers. In this example, subscribers from each access node are grouped into unique S-tags. C-VLAN IDs can be re-used with subscribers on different access devices, because the addition of the S-VLAN ID creates a unique S-tag/C-tag combination. This maintains subscriber traffic isolation and allows for scalability.
Service providers offer Transparent LAN service to businesses that have remote offices that need to be networked together. The service provider transports the business network through the access network, but uses VLANs to separate business traffic from other subscriber traffic. At the customer site, a switch connects to the access network. This can be through an ONT, or directly to an Ethernet port in the access node. The customer switch is configured to send tagged frames, for example using VLAN 100. As frames arrive at the access node, the service provider adds another VLAN tag, for example VLAN 200. The frames are transported across the provider's network using VLAN 200 to the other site. When the traffic reaches the far end of the service provider network, the switch strips the VLAN 200 tag from the frame before transmitting it to the customer switch, resulting in a single-tagged frame with the tag of 100. In essence, the service provider's network is transparent to the customer network.
Transparent LAN services can be provided to multiple customers. In this example, customer 1 is assigned VLAN 200. The customer switch is configured to send tagged frames using VLAN 100. The service provider network adds VLAN 200 as an outer tag and transports the traffic across the network. Additional customers can be assigned a different VLAN in the service provider network. Even though Customer 2 is sending tagged frames using VLAN 100, the traffic is isolated because the service provider network is adding an outer tag with VLAN 250. Because the service provider assigns each customer a unique Transparent LAN VLAN, customers are able to use whatever VLAN ID they wish for their traffic.
Subscriber Edge Models
There are three subscriber edge models discussed in this section: bridged with layer 3 awareness, residential gateway with layer 3 protection, and a hybrid model which is a mixture of the previous two.
In the bridged model the demarcation device is configured as a layer‐2 bridge –
there is no layer 3 isolation between the subscriber network and the service provider network. The service provider may provide multiple services to the subscriber through a single interface or multiple interfaces. When using a fully bridged subscriber edge, the common practice is to provide public addresses to data devices and private addresses to set‐top boxes – thus the set‐top boxes and data devices are assigned to different VLANs. The demarcation device ensures that the video and data traffic is segregated into different VLANs. This could be accomplished by either using a different port for each service or using a MAC OUI or DHCP option 60 to segregate the set‐top box traffic from the data traffic. MAC OUI classification has been used commonly, but this method is becoming harder to deploy as the line between set‐top box and data device becomes blurred. Some equipment vendors produce set‐top boxes and data devices, so it is no longer easy to cleanly separate customer traffic using an organizational identifier. Service providers prefer to use different ports on the demarcation device to isolate video traffic from data traffic. This ensures that a misbehaving data device does not impact the video service quality. The demarcation device supports port‐based traffic prioritization, which helps ensure that video traffic is provided a higher quality of service.
A residential gateway (or RG) is a type of broadband router that connects devices in a home or business to the Internet or some other WAN, and may be integrated with a modem or ONT. An RG provides an IP layer demarcation, allowing the LAN ports to operate as a single Ethernet bridge/switch so that value-added services such as media sharing can be enabled within the home only. Additionally, the RG provides enhanced QoS control between the home and the WAN, and an advanced firewall capability that provides enhanced security and protection between the home and the WAN. For IPv4, the RG supports Network Address Translation (NAT), which allows all the devices in the home to receive unique private addresses while sharing a smaller number of public addresses on the WAN. For IPv6, the RG advertises one or more subnets to the home, which are assigned by the service provider.
The hybrid model is a mixture of the residential gateway and bridged models. In the hybrid model, the demarcation device provides network address translation (NAT) capability for data devices, and set‐top boxes are bridged. The data devices and set‐
top boxes are connected to different ports on the demarcation device. A service provider may have to deploy a hybrid model if its video middleware requires IP visibility to each and every set-top box. Although less common, it is possible to deploy multiple NATed domains on a residential gateway. The demarcation-point devices are capable of segregating the traffic on their subscriber ports and creating one or more bridged or layer 3 domains. A small number of customers still use PPPoE for data devices and a bridged deployment model for set-top boxes, although use of this model is on the decline; we classify this as a hybrid model as well. The demarcation device is considered untrusted or semi-trusted. If the demarcation device marks the traffic per service (using priority bits or tags), these markings are always modified by the access node. Beyond security, the second reason for this is that operators generally shy away from managing and configuring the demarcation device, so they prefer demarcation devices to have an identical configuration for all customers; clearly all subscribers cannot be put into the same VLANs, so the VLAN markings must be modified in line with the appropriate policy for that access network segment.
Securing Subscriber Connections
This module covers ways to secure a layer 2 access network, including IEEE 802.1x port authentication, broadcast control, match lists, MAC address per port, MAC forced forwarding, and DHCP security.
Port authentication allows an access node to limit access from an interface until the attached device is authenticated using subscriber credentials. This prevents an unauthorized device from accessing the network, and avoids interoperability issues. IEEE 802.1X enables support for Remote Authentication Dial In User Service (or RADIUS), which validates users from a central location such as an external server. A RADIUS server stores acceptable login credentials in its database. The RADIUS server can also store login and logout information such as timestamps. The subscriber device must support the IEEE 802.1X protocol.
Certain protocols (like ARP, DHCP, and IGMP) produce broadcast frames that must be processed by the CPU, also known as "slow path" processing. Too much broadcast traffic can reduce network performance. Excess broadcast traffic can result from several factors, including:
• Too many devices in a broadcast domain
• A device on the network is generating excessive broadcast frames
• A Denial of Service attack has been initiated by a subscriber in the access network, or by any entity in the routed network
• A loop has been created by a configuration error or excessive host processor load, or
• A mass outage due to power failure in an area has resulted in many devices coming on line at once and making DHCP requests
Calix products support broadcast control for broadcast, multicast, and DLF traffic on network interfaces and subscriber interfaces. Broadcast control limits the ingress rate of traffic at the packet level (in packets per second) or at the bandwidth level (in bits per second). The implementation of broadcast control varies by product. Enabling broadcast control on edge ports protects the access node and ensures fair distribution of resources among subscribers. Factors such as network size and node configuration (for example, whether MACFF is enabled) contribute to selecting appropriate broadcast rate limiting values. Note that you should not enable broadcast control on an interface using transparent LAN service, since the customer will want to pass broadcast traffic between their sites. Also, do not enable broadcast control for multicast traffic on network interfaces; instead, design the network to distribute and manage multicast content using IGMP.
Packet filtering restricts network use by certain users or devices. Access Control Lists (ACLs) and match lists are a collection of deny and permit conditions used to filter traffic as it passes through a router or switch, and to permit or deny packets crossing specified interfaces or VLANs. This capability is also referred to as Access Control Logic. ACLs and match lists can be used to limit traffic based on different Ethernet fields, including:
• Source MAC
• Destination MAC
• VLAN ID
• P-bits
• Organizationally Unique Identifier (OUI), and
• EtherType
MAC address limiting protects against bridge table flooding by setting a limit on the number of MAC addresses that can be learned on a port. The system dynamically learns new MAC addresses until it reaches a limit that you set; after that, incoming packets with new MAC addresses are dropped. Learned MAC addresses age out based on a configurable timer. DHCP lease limiting achieves similar results to MAC address limiting, and is typically used for residential services. MAC address limiting provides an important security feature for business services that might not use DHCP. You can also configure a bridge table not to learn MAC addresses.
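A sketch of the per-port limiting behavior described above (the limit value is illustrative, and address aging is omitted):

```python
MAC_LIMIT = 4                                     # illustrative per-port limit
port_macs = {}                                    # port -> set of learned MAC addresses

def admit(port: int, src_mac: str) -> bool:
    """Return True if a frame with this source MAC may be accepted on the port."""
    learned = port_macs.setdefault(port, set())
    if src_mac in learned:
        return True
    if len(learned) < MAC_LIMIT:
        learned.add(src_mac)                      # learn the new address (aging not modeled)
        return True
    return False                                  # limit reached: frame with a new MAC is dropped

print(admit(1, "00:1c:23:00:00:01"))              # True - first address learned on port 1
```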
MAC forced forwarding sends all subscriber traffic within a particular VLAN directly to the upstream access router, regardless of the destination IP address. The access router then forwards the traffic to its destination. With MAC forced forwarding enabled, a Calix ONT learns host IP and MAC addresses through DHCP and static provisioning, and learns router addresses by snooping ARP requests. The ONT can then filter upstream packets that are not destined for the upstream access router. Also, the ONT responds to upstream ARP requests on behalf of clients. This prevents the client from learning the source IP of the ARP requestor.
The IP source verify feature prevents subscribers from statically assigning an IP address to a device and passing traffic on it. IP source verify also prevents passage of any traffic not known to the system. The system is aware of permitted traffic either learned through DHCP or static assignments.
DHCP snooping allows an access node to identify where an IP address and MAC address exist on a system. DHCP snooping listens for address requests and offers, and builds a persistent DHCP lease table which you can view, filter, and search. DHCP snooping also drops DHCP offers that come from ONTs.
A DHCP lease limit specifies the maximum number of DHCP messages and leases that are allowed on a subscriber port. A DHCP lease limit is a security feature that prevents the subscriber from flooding the network with new MAC addresses using DHCP. An access node enforces a limit on the number of DHCP leases on a per-port basis. A unique MAC address can only have one DHCP lease assigned by the server at a time. VLANs with DHCP snooping enabled on the subscriber port are subject to the DHCP lease limit. The DHCP lease limit is enforced on ports with DHCP proxy enabled. DHCP lease limiting achieves similar results to MAC address limiting, and is typically used for residential services. For most access nodes, a learned MAC address is retained for a specified period of time (for example, five minutes) before aging out. The specifics of how lease limits are applied, such as the number of supported leases per port, depend on the type of access node.
A DHCP server is not always in the same subnet or broadcast domain as a DHCP client; if it had to be, a service provider might need hundreds of DHCP servers. To pass DHCP traffic across a routed network, a DHCP relay agent must be enabled, typically on the router that is connected to the subnet. The DHCP relay agent is provisioned with the IP addresses of DHCP servers in the network. Any DHCP messages sent on a local subnet are then forwarded by the relay agent to the DHCP server. For example, the device on port 4 broadcasts a DHCP request, which is received by the router acting as the DHCP relay agent. The relay agent places its own IP address in the gateway address field of the DHCP request. The server sends a response directly to the relay agent, which then forwards the response from the server to the locally attached clients.
Delivering IPTV Services
The video head end is where video content, such as a television channel, is distributed into the IPTV network. The head end includes middleware servers, encoders, video‐on‐demand servers, and encryption. Middleware servers run software that manages subscriber information, including billing, service packages, and account information. Middleware servers also store and distribute the software that runs on set‐top boxes at the subscriber premises. They send electronic programming guides and emergency service announcements to set‐top boxes. They also act as a DHCP server that hands out IP addresses to set‐top boxes.
Encoders receive video signals in different formats and encode them into MPEG2 or MPEG4 format, then send the signals out into the core network as a multicast stream. Encoders are not needed if service providers already receive video signals in MPEG2 or MPEG4 format.
Video-on-demand, or VOD, servers allow subscribers to view a catalog of video content, such as movies or TV shows, and select one to watch, typically for a fee. For VOD, the MPEG video stream is sent directly to the subscriber as unicast traffic. VOD content is typically stored on servers at the service provider's head end, and is integrated with the middleware service for billing and account information.
Since content providers only want permitted set‐top boxes to watch video streams, video content is encrypted at the head end and is decrypted at the subscriber’s set‐top box. Set‐top boxes receive encryption keys from the head end in order to decrypt the video content.
The core network is typically composed of routers and switches that are capable of processing IGMP or PIM packets to carry multicast video traffic to various access networks.
Access networks have ATM or Ethernet uplinks that connect to the core network. Access networks use DSL or PON to extend video to subscriber premises. Access networks may be intelligent, meaning they can intercept and process IGMP packets sent from the network (called queries) or subscriber (called joins and leaves). Access networks can also be pass‐through, meaning they merely forward IGMP packets to the core for processing.
Equipment at the customer premises typically includes either a DSL modem, in the case of a copper-loop xDSL line, or an optical network terminal (ONT), in the case of a passive optical network, plus a set-top box and video monitor. The set-top box decodes the MPEG video streams into a signal appropriate for a TV set. The set-top box also sends joins and leaves when the subscriber turns the TV on or off or changes channels. Set-top boxes connect to Ethernet ports on the DSL modem or ONT.
IGMP is a communications protocol used by hosts and adjacent routers on IP networks to establish multicast group memberships. This allows Ethernet switches to be more intelligent and efficient when forwarding multicast traffic. Typically, networks with multicast traffic have routers configured with Protocol-Independent Multicast to control multicast traffic in the network. The IGMP router is also sometimes called a querier because it periodically sends IGMP queries to clean up multicast groups. IGMP uses several protocol packet types. A membership report, also called a Join, is sent by a device that wants to join a multicast group; the destination IP address of a join request is the IP address of the multicast group itself. A leave group message, also called a Leave, is sent when a device wants to leave a group; the destination IP address of a leave request is always 224.0.0.2, also called the "all routers group". Sometimes a leave request is lost or dropped due to network congestion or a dropped connection, and meanwhile multicast traffic still flows to the device, wasting bandwidth. So, an IGMP router periodically sends membership queries to determine whether any devices are still members of multicast groups. An IGMP router can send two types of queries: a general query and a group-specific query. A group-specific query targets members of a specific multicast group, while a general query is sent to all members of all groups. When IGMP routers send queries, the destination IP address is always 224.0.0.1, also called the "all hosts group". If no devices respond to a query for a multicast channel, the router stops transmitting that channel.
IGMP packets have a time‐to‐live value of 1, minimizing the amount of resource consumption by IGMP packets.
105
Here are some packet captures of IGMP traffic. The top picture is a membership report packet. The destination addresses are those of the multicast groups being joined.
The second picture is a leave group packet. A leave group packet always has a destination IP address of 224.0.0.2, the “all routers group”. The group address field inside the packet identifies the multicast group that the host is leaving.
106
The top picture is a general query packet, and the bottom is a group‐specific query packet. The type field shows that the packet is a query, and the maximum response time gives members ten seconds to respond. The destination address for a general query packet is 224.0.0.1. The group address field of all zeros indicates that the query applies to all multicast groups.
This query packet is directed to a specific group and gives members one second to respond. The destination address in this example is 236.200.200.1, which is the address of the group being queried.
107
IGMP snooping allows Ethernet switches to be more efficient when handling multicast traffic. With IGMP snooping, a switch forwards multicast traffic only to ports that have established membership in a multicast group. The switch looks for IGMP membership reports and leave requests to determine which ports have devices requesting multicast traffic; multicast packets are sent only to ports where an IGMP membership report was received. To do this, the switch maintains a multicast forwarding table that tracks multicast group members and the ports they are connected to. If no devices attached to a switch have sent an IGMP membership report, all multicast traffic is dropped until a membership report is received.
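The snooping behavior described here, and in the walkthroughs that follow, can be summarized in a short Python sketch. This is a simplified illustration only; the class and method names are invented, and real switch firmware is considerably more involved.

    # Simplified sketch of an IGMP snooping switch's multicast forwarding table.
    class SnoopingSwitch:
        def __init__(self, uplink_port):
            self.uplink_port = uplink_port      # port facing the IGMP router
            self.groups = {}                    # group address -> set of member ports

        def on_membership_report(self, group, port):
            """Snooped Join: start forwarding this group to the reporting port."""
            self.groups.setdefault(group, set()).add(port)

        def on_leave_group(self, group, port):
            """Snooped Leave: stop forwarding this group to the leaving port."""
            members = self.groups.get(group, set())
            members.discard(port)
            if not members:
                self.groups.pop(group, None)    # no members left, drop the group

        def forward_multicast(self, group):
            """Return the ports that should receive traffic for this group."""
            return self.groups.get(group, set())  # empty set means drop the traffic

    # Example matching the walkthrough below: the host on port 4 joins 225.1.1.123.
    sw = SnoopingSwitch(uplink_port=1)
    sw.on_membership_report("225.1.1.123", port=4)
    print(sw.forward_multicast("225.1.1.123"))    # {4}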
108
Once a device sends a membership report, the switch updates its multicast forwarding table and begins forwarding multicast traffic from that group to the port, while continuing to drop traffic for other groups. In this example, host 10.1.1.30 sends a membership report for multicast group 225.1.1.123. The switch records in its multicast forwarding table that 10.1.1.30 on port 4 has requested traffic for 225.1.1.123. The membership report is flooded out all ports. The IGMP router receives the membership report, updates its multicast forwarding table, and sends 225.1.1.123 traffic out interface E1. The switch sees the traffic arriving on port 1 and knows to forward it to port 4.
109
When a device no longer wants to receive multicast traffic, it sends a leave group packet. The switch snoops this packet, removes the IP host from the multicast forwarding table, and floods the packet out all other ports. In this example, host 10.1.1.30 is actively listening to multicast group 225.1.1.123 but wants to leave, so it sends an IGMP leave group message. The switch updates its table to show that 10.1.1.30 on port 4 is no longer a member of group 225.1.1.123, and the leave group message is sent to all other ports. The IGMP router receives the leave group message and sends a group‐specific query to see if any other hosts are still actively listening to the multicast group. If there is no response within a specified time period, the IGMP router updates its multicast forwarding table and stops sending traffic for multicast group 225.1.1.123 out interface E1.
110
IGMP routers occasionally send general query messages to all multicast devices to make sure that they still intend to be members of multicast groups. This cleans up multicast group membership and helps reduce unnecessary traffic. Sending general queries is needed because, for whatever reason, a leave request may not reach the IGMP router. In this example, host 10.1.1.30 sends a leave request, but due to network congestion the packet is lost, so the IGMP router never receives the request; meanwhile, bandwidth is still being used to deliver multicast traffic that is not wanted. Fortunately, the IGMP router sends a general query every 60 seconds. The general query gives devices a certain amount of time, usually 10 seconds, to confirm they are indeed members of a multicast group. If a device is still part of a multicast group, it sends a membership report. If the router does not receive any membership reports within the query period for a particular multicast channel, it stops sending that channel.
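The cleanup that general queries provide can be sketched the same way. Again, this is only an illustration; the data structures are invented and the interval values are the example values used above.

    # Hypothetical sketch of query-based membership aging on the IGMP router.
    import time

    QUERY_INTERVAL = 60     # seconds between general queries (example value from the text)
    RESPONSE_WINDOW = 10    # seconds hosts are given to answer a general query

    members = {"225.1.1.123": {"10.1.1.30": time.time()}}   # group -> {host: last report time}

    def on_membership_report(group, host):
        """A host answered a query (or joined): refresh its membership."""
        members.setdefault(group, {})[host] = time.time()

    def age_out_after_query(query_sent_at):
        """Call once the response window has closed: drop members that never answered."""
        for group in list(members):
            for host, last_report in list(members[group].items()):
                if last_report <= query_sent_at:    # no report received after the query
                    del members[group][host]
            if not members[group]:
                del members[group]                  # no listeners left: stop sending the channel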
111
IGMP proxy allows Ethernet switches to be even more efficient in handling multicast traffic. When configured for IGMP proxy, an Ethernet switch acts on behalf of multicast receivers. It terminates IGMP packets from devices, such as joins and leaves, and sends its own IGMP packets using its own IP address. To do this, the switch must be assigned a Layer 3 IP address. The switch also learns which interface is attached to the IGMP router.
112
With IGMP proxy, when a switch receives a membership report from a device, it terminates the packet, updates its multicast forwarding table, and sends its own IGMP membership report using its own IP address as the source. In this example, the switch has already learned which port is connected to the IGMP router. Host 10.1.1.30 sends a membership report requesting multicast group 225.1.1.123. The switch updates its multicast forwarding table, terminates the membership report, and sends a new membership report to the IGMP router using its own IP address of 10.1.1.2. It does not need to flood the report to all ports. The IGMP router receives the membership report and sends multicast traffic for 225.1.1.123, which the switch forwards to host 10.1.1.30. If another device on the switch requests the same multicast channel, the switch simply forwards the traffic to that port; it does not have to send another membership report to the IGMP router. This reduces the amount of IGMP traffic in the network.
113
With IGMP proxy, when a device sends a leave group message, the switch checks its forwarding table. If that device is the last active listener for the multicast group, the switch sends a group‐specific query, known in this case as a last member query. If there are no responses, the switch removes the multicast group from its forwarding table and sends a leave group message to the IGMP router. If there are responses to the query, or if there are still other active listeners on that multicast group, the switch removes only that one host from its multicast forwarding table and continues forwarding to the other active listeners; it does not forward the leave group message to the IGMP router. When the IGMP router receives a leave group message, it also sends its own group‐specific query to validate that no other hosts are listening to the multicast group.
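As a rough illustration of this proxy decision logic, a Python sketch might look like the following. The class and method names are invented, the packet‐sending methods are placeholders, and waiting for query responses is omitted.

    # Simplified sketch of IGMP proxy join/leave handling on an access switch.
    class IgmpProxy:
        def __init__(self, own_ip, router_port):
            self.own_ip = own_ip            # source IP the proxy uses toward the IGMP router
            self.router_port = router_port
            self.groups = {}                # group -> set of subscriber ports

        def on_join(self, group, port):
            first_listener = group not in self.groups
            self.groups.setdefault(group, set()).add(port)
            if first_listener:
                # Only the first join is re-originated upstream, using the proxy's own IP.
                self.send_membership_report(group, source_ip=self.own_ip)

        def on_leave(self, group, port):
            members = self.groups.get(group, set())
            members.discard(port)
            if not members:
                # Last member left: send a group-specific (last member) query,
                # then tell the router. (Waiting for query responses is omitted here.)
                self.send_group_specific_query(group)
                self.groups.pop(group, None)
                self.send_leave_group(group, source_ip=self.own_ip)

        # Placeholders for the actual packet transmission (not implemented in this sketch).
        def send_membership_report(self, group, source_ip): ...
        def send_group_specific_query(self, group): ...
        def send_leave_group(self, group, source_ip): ...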
114
Securing Subscriber Connections
115
Voice over IP, or VoIP, is a way to digitize and send analog voice signals across an IP data network. Let’s look at some of the components that comprise a VoIP network.
With VoIP you can use a standard analog telephone, but the signal must be digitized to work with the data network. The analog telephone adapter, or ATA, performs the analog‐to‐digital conversion and packetizes the voice signal for transport across the IP network. In most consumer applications, a customer plugs their analog phone into a VoIP router, which also has the ATA function. An ATA is not needed for a pure IP or VoIP phone. Many businesses are now using pure VoIP phones.
Telephones on a VoIP network are also called VoIP clients or VoIP endpoints. VoIP endpoints are connected to a media gateway. On a basic level, the media gateway is responsible for managing the VoIP endpoints connected to it, and for interfacing with the media gateway controller. The media gateway interprets incoming IP flows and determines what to do with the data. The media gateway performs voice compression and may have echo cancellation capabilities. An example of a media gateway is a VoIP‐enabled DSLAM.
The media gateway controller, or MGC, supervises all of the VoIP devices assigned to it. The media gateway controller is responsible for setting up and tearing down calls, managing traffic flow between networks, and controlling the media gateways. The MGC is also sometimes called a call agent or softswitch. The MGC interfaces with application gateways that provide necessary services for the VoIP network, such as an application server that provides call messaging or voice‐mail services, or an account server that provides account and billing services. Increasingly the trend is for more services to reside in software on the same physical device rather than separate servers for each application. Finally, the MGC interfaces with a signaling gateway or PSTN switch, which allows the VoIP network to communicate with the PSTN.
Media gateways do not necessarily need to be located close to an MGC; oftentimes they are located across the Internet. Calls themselves do not always travel through the MGC. The MGC provides routing information for calls and communicates with media gateways using control links. A great deal of engineering work goes into the proper placement of VoIP devices while maintaining a high quality of service.
116
A VoIP network has two “planes” of operation, the control plane and the bearer plane. The control plane is responsible for the setup and teardown of calls. The bearer plane carries the data for actual phone calls. In the bearer plane, the packetized voice signal is carried as prioritized data through the IP network between the calling party and called party.
117
H.248, SIP, and MGCP are used on control links to set up and tear down calls, and for other communications between media gateways and MGCs. They are control‐plane protocols.
RTP carries the actual calls between VoIP endpoints, that is, between the calling party and the called party. RTP is primarily a bearer‐plane protocol.
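For reference, the fixed 12‐byte RTP header defined in RFC 3550 can be built with a few lines of Python. This is an illustrative sketch; the sequence number, timestamp, and SSRC values are arbitrary examples.

    # Minimal RTP fixed-header construction (RFC 3550), illustrative values only.
    import struct

    def rtp_header(seq, timestamp, ssrc, payload_type=0, marker=0):
        version, padding, extension, csrc_count = 2, 0, 0, 0
        byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
        byte1 = (marker << 7) | payload_type
        # ! = network byte order; B,B = two single bytes; H = 16-bit sequence number;
        # I,I = 32-bit timestamp and 32-bit SSRC
        return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

    header = rtp_header(seq=1, timestamp=160, ssrc=0x12345678, payload_type=0)  # PT 0 = PCMU
    print(len(header))   # 12 bytes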
118
Most VoIP protocols use UDP as the layer 4 transport protocol rather than TCP. TCP retransmits packets that it thinks have been lost. This is great for data, but voice traffic is a real‐time application, so the delay introduced by retransmission would cause latency and reduce call quality. For voice, it is better to drop a few packets than to wait to make sure all packets arrive.
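As a simple illustration of this trade‐off, a sender can hand each packetized voice frame to a UDP socket and move on; if a datagram is lost, nothing is retransmitted. The address, port, and frame size below are example values only.

    # Illustrative UDP voice sender: fire-and-forget, no retransmission (unlike TCP).
    import socket, time

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    destination = ("192.0.2.10", 5004)         # example address and port

    def send_voice_frame(frame_bytes):
        sock.sendto(frame_bytes, destination)  # if this datagram is lost, it is simply gone

    # 20 ms of G.711 audio at 8 kHz is 160 bytes; send one frame every 20 ms.
    for _ in range(5):
        send_voice_frame(b"\x00" * 160)
        time.sleep(0.020)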
119
Business Services
120
This module covers how business services are deployed in an access network, and includes topics on the Metro Ethernet Forum and transparent LAN service, pseudowire applications, and mobile backhaul.
121
One of the main challenges in providing MEF‐compliant business Ethernet service is keeping the subscriber’s and service provider’s Layer 2 networks separate. The point of separation between the two networks is called the demarcation point. The physical manifestation of the demarcation point is the Ethernet interface that the service provider presents to the subscriber, referred to in MEF terminology as a user‐network interface. Transparent LAN service is often used to connect a subscriber’s remote networks together using the service provider’s network as a bridge in between. With transparent LAN service, the customer perceives the network as a direct connection between sites, even though there may be many devices and great distances in between. The customer should not notice the service provider network. More importantly, subscriber traffic must not interfere with the service provider network.
122
The Metro Ethernet Forum standardizes specifications and terminology regarding business Ethernet interfaces and performance monitoring. Part of this terminology is the concept of an Ethernet Virtual Circuit, or EVC, which is simply an Ethernet connection between sites. A point‐to‐point EVC between two sites is called an E‐Line service, and a network of EVCs connecting multiple sites is called an E‐LAN service.
123
Pseudowire
124
A pseudowire connects two time‐division multiplexing circuits over an Ethernet or packet‐switched network. In this example, two T1 endpoints are connected together using a pseudowire. The endpoints convert a T1 signal into Ethernet frames for transport across an Ethernet network, and convert the frames back into a T1 signal at the opposite endpoint. This technology is called pseudowire emulation edge‐to‐edge, shortened to PWE3.
125
The primary challenge of pseudowire emulation is timing. TDM signals include timing information which helps ensure that bits arrive at their destination at precise intervals. A T1 signal is called plesiochronous since although it is timed, it doesn’t use the more advanced timing mechanisms employed by fully synchronous protocols such as SONET or SDH. By contrast, an Ethernet or packet‐switched network is asynchronous, meaning it is not timed. Frames travelling across an Ethernet network arrive at their destination with random amounts of delay. This is not acceptable for a timed T1 signal, and would cause timing slips that ultimately result in lost data. However, there are ways to recover this timing information.
126
Structure‐Agnostic TDM over Packet, or SAToP, is a protocol that converts a TDM signal into UDP segments for transport over a packet‐switched network. SAToP inserts the raw bit stream from a T1 into UDP segments. The segments are encapsulated into packets, then frames, as they move down the OSI layers. Since a packet‐switched network is asynchronous, the segments arrive at random intervals. SAToP reconverts the UDP segments into a timed TDM bit stream with the help of a jitter buffer and timing algorithms at the opposite endpoint. For more information, SAToP is defined in RFC 4553.
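A highly simplified sketch of the SAToP idea follows. It ignores the control word and RTP details of RFC 4553 and simply slices a raw T1 byte stream into fixed‐size UDP payloads, with a jitter buffer at the far end to absorb the random arrival delays. All sizes, addresses, and ports are illustrative assumptions.

    # Simplified SAToP-style transport sketch: raw T1 bytes in UDP, jitter buffer at the receiver.
    import collections, socket

    PAYLOAD_BYTES = 193      # example: 1 ms of T1 (8 frames x 193 bits = 1544 bits = 193 bytes)

    def packetize(t1_bytes):
        """Slice the raw T1 byte stream into fixed-size UDP payloads."""
        for i in range(0, len(t1_bytes), PAYLOAD_BYTES):
            yield t1_bytes[i:i + PAYLOAD_BYTES]

    def send(t1_bytes, dest=("192.0.2.20", 2142)):     # hypothetical destination
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for payload in packetize(t1_bytes):
            sock.sendto(payload, dest)

    # Receiver side: a jitter buffer absorbs the random arrival delays so that
    # payloads can be played out toward the T1 interface at a constant rate.
    jitter_buffer = collections.deque()

    def on_packet_received(payload):
        jitter_buffer.append(payload)

    def playout_one_millisecond():
        """Called by the recovered T1 clock once per millisecond."""
        return jitter_buffer.popleft() if jitter_buffer else b"\xff" * PAYLOAD_BYTES  # filler on underrun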
127
Adaptive timing takes the average difference in the arrival time of incoming frames and uses this average to maintain the timing. Frames are sent at regular intervals since their source is timed, but they arrive with random delay due to the asynchronous nature of a packet‐switched network. The average difference between the arrival times of frames is calculated, and algorithms use this information to recreate the timing for the outgoing T1 signal. Adaptive timing is usually used when no external reference timing source is available. If you use adaptive timing on one end, use loopback timing at the opposite end. We recommend using adaptive timing on the CPE side and loopback timing on the CO side.
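Here is a toy numeric illustration of the adaptive approach; the arrival times are invented. Packets are assumed to be sourced every 1 ms, and the receiver averages the drift between expected and actual arrival times to decide how to trim its recovered clock.

    # Toy adaptive-timing sketch: average the arrival-time drift to steer the playout clock.
    SEND_INTERVAL = 0.001          # packets are sourced from a timed T1, e.g. one per 1 ms

    def average_drift(arrival_times):
        """Average difference between actual arrivals and ideal, evenly spaced arrivals."""
        first = arrival_times[0]
        drifts = [t - (first + i * SEND_INTERVAL) for i, t in enumerate(arrival_times)]
        return sum(drifts) / len(drifts)

    # Invented arrivals: random-looking network delay on top of the 1 ms spacing.
    arrivals = [0.0000, 0.0012, 0.0021, 0.0033, 0.0039]
    drift = average_drift(arrivals)
    # A positive average drift means packets arrive later than expected on average,
    # so the recovered clock is trimmed accordingly (and vice versa).
    print(f"average drift: {drift * 1000:.3f} ms")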
128
Differential timing uses an external clock to help maintain timing. With differential timing, both pseudowire endpoints use the same primary reference source. The endpoints receive RTP timestamps, and they compute the difference between those timestamps and the actual arrival times of the packets. The endpoints use this difference to calculate the timing for the outgoing T1.
129
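Differential timing can be illustrated the same way, again with invented numbers. Because both endpoints share the same reference clock, the receiver can compare the spacing of the RTP timestamps with the spacing of the measured arrival times.

    # Toy differential-timing sketch: compare RTP timestamp spacing with measured arrival spacing.
    RTP_CLOCK_HZ = 8000            # assumed RTP clock rate for this example

    def timestamp_vs_arrival_offsets(rtp_timestamps, arrival_times):
        """Difference between sender spacing (from RTP timestamps) and receiver spacing
        (from arrival times measured against the shared primary reference)."""
        offsets = []
        for i in range(1, len(rtp_timestamps)):
            sender_spacing = (rtp_timestamps[i] - rtp_timestamps[i - 1]) / RTP_CLOCK_HZ
            receiver_spacing = arrival_times[i] - arrival_times[i - 1]
            offsets.append(receiver_spacing - sender_spacing)
        return offsets

    # Invented samples: timestamps advance by 8 ticks (1 ms) per packet.
    ts = [0, 8, 16, 24]
    arrivals = [0.0000, 0.0011, 0.0019, 0.0031]
    print(timestamp_vs_arrival_offsets(ts, arrivals))
    # A consistent trend in these offsets tells the endpoint how to adjust the outgoing T1 clock.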
Loopback timing takes the timing information received from a T1 and uses the same timing information to transmit a T1. It may seem like a good idea to use loopback timing at both endpoints, but this is not recommended unless both T1s are timed from the same source. Loopback timing is usually used along with adaptive or differential timing on the opposite end. In these situations loopback timing is recommended for the CO side.
130
There are multiple ways to configure timing over two pseudowire endpoints. Only some of these timing combinations are recommended. One recommendation is to use loopback timing on the CO end and adaptive timing on the CPE end. Another recommendation is to use loopback timing on the CO end and differential timing on the CPE end. You can also use differential timing on both ends. All other timing combinations are not recommended, although you can use loopback timing on both ends if both T1s are timed from the same source. For more information on deploying timing, consult the Calix T1 PseudoWire Applications Guide.
131
Mobile Backhaul
132
Access networks can aggregate voice and data services at cell towers and cost‐effectively transport this traffic to the nearest gateway or CO. Historically, cell towers have used T1s to trunk data to a gateway or CO. An Ethernet access network can use pseudowires to emulate the transport of T1 or E1 signals. Access nodes packetize the T1 or E1 signals and transmit the data over a pseudowire. This saves money over using dedicated T1 or E1 equipment. In MEF terminology this remote site is called a radio access network customer edge, or RAN CE.
133
134