Lecture 4
The Network Layer
The Network layer (also called layer 3) manages device addressing, tracks the location of devices on the
network, and determines the best way to move data, which means that the Network layer must transport traffic
between devices that aren’t locally attached. Routers (layer 3 devices) are specified at the Network layer and
provide the routing services within an internetwork. It happens like this: First, when a packet is received on a
router interface, the destination IP address is checked. If the packet isn't destined for that particular router, the
router looks up the destination network address in its routing table. Once the router chooses an exit interface, the
packet is sent to that interface to be framed and sent out on the local network.
In Figure 1.13, I’ve given you an example of a routing table. The routing table used in a router includes the
following information:
Network addresses Protocol-specific network addresses. A router must maintain a routing table for individual
routing protocols because each routing protocol keeps track of a network with a different addressing scheme.
Interface The exit interface a packet will take when destined for a specific network.
Metric The distance to the remote network. Different routing protocols use different ways of computing this
distance. Know that some routing protocols use something called a hop count (the number of routers a packet
passes through en route to a remote network), while others use bandwidth or the delay of the line.
And as I mentioned earlier, routers break up broadcast domains, which means that by default, broadcasts aren't
forwarded through a router. Routers also break up collision domains, but you can also do that using layer 2
(Data Link layer) switches. Because each interface in a router represents a separate network, each interface must be
assigned a unique network identification number, and each host on the network connected to that interface must
use the same network number. Figure 1.14 shows how a router works in an internetwork.
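To make the forwarding decision concrete, here is a minimal Python sketch of the routing table just described. The network prefixes, interface names, and metric values are made up for illustration (they are not taken from Figure 1.13), and a real router does this lookup in optimized forwarding hardware, not in a script.

```python
# A minimal, illustrative routing-table lookup. Each entry holds the three
# fields described above: network address, exit interface, and metric.
import ipaddress

ROUTING_TABLE = [
    # (network address, exit interface, metric) -- hypothetical values
    (ipaddress.ip_network("10.1.1.0/24"), "FastEthernet0/0", 0),
    (ipaddress.ip_network("10.1.2.0/24"), "Serial0/0", 1),
    (ipaddress.ip_network("0.0.0.0/0"),   "Serial0/1", 5),   # default route
]

def choose_exit_interface(destination_ip: str):
    """Return the exit interface for the most specific matching network."""
    dest = ipaddress.ip_address(destination_ip)
    matches = [entry for entry in ROUTING_TABLE if dest in entry[0]]
    if not matches:
        return None  # no route: the packet would be dropped
    # Prefer the most specific network; break ties with the lower metric.
    net, iface, metric = max(matches, key=lambda e: (e[0].prefixlen, -e[2]))
    return iface

print(choose_exit_interface("10.1.2.7"))   # -> Serial0/0
print(choose_exit_interface("192.0.2.9"))  # -> Serial0/1 (default route)
```

Once the exit interface is chosen, the packet is handed down to be framed and sent out on that local network, which is exactly the hand-off to the Data Link layer covered next.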
The Data Link Layer
The Data Link layer provides the physical transmission of the data and handles error notification, network
topology, and flow control. This means that the Data Link layer will ensure that messages are delivered to the
proper device on a LAN using hardware addresses and will translate messages from the Network layer into bits
for the Physical layer to transmit. The Data Link layer formats the message into pieces, each called a data
frame, and adds a customized header containing the hardware destination and source addresses. Think of the
equipment bolted onto a multistage rocket: the various pieces were useful only during certain stages of the space
flight and were stripped off the module and discarded when their designated stage was complete. Data traveling
through networks is treated in a similar way.
Figure 1.15 shows the Data Link layer with the Ethernet and IEEE specifications. When you check it out,
notice that the IEEE 802.2 standard is used in conjunction with and adds functionality to the other IEEE
standards.
It’s important for you to understand that routers, which work at the Network layer, don’t care at all about
where a particular host is located. They’re only concerned about where networks are located and the best way
to reach them—including remote ones. Routers are totally obsessive when it comes to networks. And for once,
this is a good thing! It’s the Data Link layer that’s responsible for the actual unique identification of each
device that resides on a local network. For a host to send packets to individual hosts on a local network as well
as transmit packets between routers, the Data Link layer uses hardware addressing.
Each time a packet is sent between routers, it’s framed with control information at the Data Link layer, but that
information is stripped off at the receiving router and only the original packet is left completely intact. This
framing of the packet continues for each hop until the packet is finally delivered to the correct receiving host.
It’s really important to understand that the packet itself is never altered along the route; it’s only encapsulated
with the type of control information required for it to be properly passed on to the different media types.
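Here is a rough Python sketch of that per-hop re-framing. The frame fields and MAC addresses are simplified, hypothetical stand-ins (this is not the real Ethernet frame format); the point is only to show that the packet rides along untouched while the Data Link header is stripped and rebuilt at every hop.

```python
# Simplified sketch: the packet (Network layer PDU) stays intact while the
# frame (Data Link header) is stripped and rebuilt at each hop.

def frame(packet: dict, src_mac: str, dst_mac: str) -> dict:
    """Encapsulate a packet with hop-specific hardware addresses."""
    return {"dst_mac": dst_mac, "src_mac": src_mac, "payload": packet}

def deframe(f: dict) -> dict:
    """Strip the Data Link header; the original packet is untouched."""
    return f["payload"]

packet = {"src_ip": "10.1.1.10", "dst_ip": "10.1.2.20", "data": "hello"}

# Hop 1: host -> first router (made-up MAC addresses)
f1 = frame(packet, src_mac="AA:AA:AA:AA:AA:AA", dst_mac="BB:BB:BB:BB:BB:BB")
# The router strips the frame, routes on dst_ip, then re-frames for hop 2.
p = deframe(f1)
f2 = frame(p, src_mac="CC:CC:CC:CC:CC:CC", dst_mac="DD:DD:DD:DD:DD:DD")

assert deframe(f2) == packet   # the packet was never altered along the route
```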
The IEEE Ethernet Data Link layer has two sub-layers:
Media Access Control (MAC) 802.3 Defines how packets are placed on the media. Contention media access is
“first come/first served” access where everyone shares the same bandwidth—hence the name. Physical
addressing is defined here, as well as logical topologies. What’s a logical topology? It’s the signal path through
a physical topology. Line discipline, error notification (not correction), ordered delivery of frames, and
optional flow control can also be used at this sub-layer.
Logical Link Control (LLC) 802.2 Responsible for identifying Network layer protocols and then
encapsulating them. An LLC header tells the Data Link layer what to do with a packet once a frame is
received. It works like this: A host will receive a frame and look in the LLC header to find out where the
packet is destined—say, the IP protocol at the Network layer. The LLC can also provide flow control and
sequencing of control bits.
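A tiny sketch of that demultiplexing idea follows. It assumes the standard 802.2 layout of a 1-byte DSAP, a 1-byte SSAP, and a Control byte for unnumbered frames; the sample header bytes are made up, and the DSAP-to-protocol table is just a few well-known values for illustration.

```python
# Minimal sketch of LLC demultiplexing: the receiver looks at the DSAP in the
# LLC header to learn which Network layer protocol should get the payload.
DSAP_PROTOCOLS = {
    0x06: "IP",                      # historical LLC SAP for IP
    0xE0: "IPX",
    0x42: "Spanning Tree BPDU",
    0xAA: "SNAP (real protocol type follows in the SNAP header)",
}

def llc_dispatch(llc_bytes: bytes) -> str:
    """Return the Network layer protocol the frame's payload is destined for."""
    dsap, ssap, control = llc_bytes[0], llc_bytes[1], llc_bytes[2]
    # ssap and control are present in the header but aren't needed to pick the protocol here.
    return DSAP_PROTOCOLS.get(dsap, f"unknown DSAP 0x{dsap:02X}")

# Example: a made-up LLC header addressed to the IP SAP.
print(llc_dispatch(bytes([0x06, 0x06, 0x03])))   # -> IP
```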
The switches and bridges I talked about near the beginning of the chapter both work at the Data Link layer and
filter the network using hardware (MAC) addresses. We will look at these in the following section.
Switches and Bridges at the Data Link Layer
Layer 2 switching is considered hardware-based bridging because it uses specialized hardware called an
application-specific integrated circuit (ASIC). ASICs can run up to gigabit speeds with very low latency rates.
NOTE: Latency is the time measured from when a frame enters a port to the time it exits a port.
Bridges and switches read each frame as it passes through the network. The layer 2 device then puts the source
hardware address in a filter table and keeps track of which port the frame was received on. This information
(logged in the bridge’s or switch’s filter table) is what helps the machine determine the location of the specific
sending device. Figure 1.16 shows a switch in an internetwork.
The real estate business is all about location, location, location, and it’s the same way for both layer 2 and layer
3 devices. Though both need to be able to negotiate the network, it’s crucial to remember that they’re
concerned with very different parts of it. Primarily, layer 3 machines (such as routers) need to locate specific
networks, whereas layer 2 machines (switches and bridges) need to eventually locate specific devices. So,
networks are to routers as individual devices are to switches and bridges. And routing tables that “map” the
internetwork are for routers as filter tables that “map” individual devices are for switches and bridges. After a
filter table is built on the layer 2 device, it will forward frames only to the segment where the destination
hardware address is located. If the destination device is on the same segment as the frame, the layer 2 device
will block the frame from going to any other segments. If the destination is on a different segment, the frame
can be transmitted only to that segment. This is called transparent bridging.
When a switch interface receives a frame with a destination hardware address that isn’t found in the device’s
filter table, it will forward the frame to all connected segments. If the unknown device that was sent the
“mystery frame” replies to this forwarding action, the switch updates its filter table regarding that device’s
location. But if the destination address of the transmitted frame is a broadcast address, the switch will forward
the frame to every connected segment by default.
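Here is a compact Python sketch of that learn/filter/forward/flood behavior. The port numbers and MAC addresses are hypothetical, and a real switch does all of this in ASIC hardware at wire speed rather than in software like this.

```python
# Sketch of transparent bridging: learn source addresses, filter or forward
# known destinations, and flood unknown unicasts and broadcasts.
BROADCAST = "FF:FF:FF:FF:FF:FF"

class Layer2Switch:
    def __init__(self, ports):
        self.ports = ports            # e.g. [1, 2, 3, 4]
        self.filter_table = {}        # MAC address -> port it was learned on

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is sent out of."""
        self.filter_table[src_mac] = in_port          # learn the sender's location
        if dst_mac == BROADCAST:
            return [p for p in self.ports if p != in_port]   # flood broadcasts
        out_port = self.filter_table.get(dst_mac)
        if out_port is None:
            return [p for p in self.ports if p != in_port]   # flood unknown unicast
        if out_port == in_port:
            return []                                        # same segment: filter (block)
        return [out_port]                                    # forward to that one segment

sw = Layer2Switch(ports=[1, 2, 3, 4])
sw.receive(1, "AA:AA:AA:AA:AA:01", BROADCAST)                   # floods to ports 2, 3, 4
print(sw.receive(2, "AA:AA:AA:AA:AA:02", "AA:AA:AA:AA:AA:01"))  # -> [1] (learned earlier)
```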
All devices that the broadcast is forwarded to are considered to be in the same broadcast domain. This can be a
problem; layer 2 devices propagate layer 2 broadcast storms that choke performance, and the only way to stop
a broadcast storm from propagating through an internetwork is with a layer 3 device—a router.
The biggest benefit of using switches instead of hubs in your internetwork is that each switch port is actually its
own collision domain. (Conversely, a hub creates one large collision domain.) But even armed with a switch,
you still can’t break up broadcast domains. Neither switches nor bridges will do that. They’ll typically simply
forward all broadcasts instead. Another benefit of LAN switching over hub-centered implementations is that
each device on every segment plugged into a switch can transmit simultaneously—at least, they can as long as
there is only one host on each port and a hub isn’t plugged into a switch port. As you might have guessed, hubs
allow only one device per network segment to communicate at a time.
The Physical Layer
Finally arriving at the bottom, we find that the Physical layer does two things: It sends bits and receives bits.
Bits come only in values of 1 or 0. The Physical layer communicates directly with the various types of actual
communication media. Different kinds of media represent these bit values in different ways. Some use audio
tones, while others employ state transitions—changes in voltage from high to low and low to high.
Specific protocols are needed for each type of media to describe the proper bit patterns to be used, how data is
encoded into media signals, and the various qualities of the physical media’s attachment interface.
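As one example of representing bits with state transitions, here is a rough sketch of a Manchester-style encoding of the kind used by 10Mbps Ethernet. The exact transition polarity is a detail of the specification; the convention below is an assumption chosen just to show the idea of one mid-bit voltage change per bit.

```python
# Rough sketch: each bit becomes a pair of signal levels with a transition
# in the middle of the bit period (Manchester-style encoding).
# The polarity convention here is illustrative, not quoted from the standard.
def manchester_encode(bits: str):
    signal = []
    for b in bits:
        # assumed convention: '1' -> low-to-high transition, '0' -> high-to-low
        signal.extend((0, 1) if b == "1" else (1, 0))
    return signal

print(manchester_encode("1011"))   # -> [0, 1, 1, 0, 0, 1, 0, 1]
```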
Hubs at the Physical Layer
A hub is really a multiple-port repeater. A repeater receives a digital signal and re-amplifies or regenerates
that signal and then forwards the digital signal out all active ports without looking at any data. An active hub
does the same thing. Any digital signal received from a segment on a hub port is regenerated or re-amplified
and transmitted out all ports on the hub. This means all devices plugged into a hub are in the same collision
domain as well as in the same broadcast domain. Figure 1.17 shows a hub in a network.
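Contrast this with the switch sketch earlier: a hub keeps no filter table at all. The toy function below (with a hypothetical 8-port hub) is all the "logic" there is, which is exactly why everything attached shares one collision domain.

```python
# A hub does no learning and no filtering: every received signal is simply
# regenerated out all other ports (hypothetical 8-port hub).
def hub_repeat(in_port, ports=range(1, 9)):
    """Return the ports the signal is repeated out of."""
    return [p for p in ports if p != in_port]

print(hub_repeat(3))   # -> [1, 2, 4, 5, 6, 7, 8]
```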
Hubs, like repeaters, don’t examine any of the traffic as it enters and is then transmitted out to the other parts of
the physical media. Every device connected to the hub, or hubs, must listen if a device transmits. A physical
star network—where the hub is a central device and cables extend in all directions out from it—is the type of
topology a hub creates. Visually, the design really does resemble a star, whereas Ethernet networks run a
logical bus topology, meaning that the signal has to run through the network from end to end.
NOTE: Hubs and repeaters can be used to enlarge the area covered by a single LAN segment, although I do
not recommend this. LAN switches are affordable for almost every situation.
Ethernet Networking
Ethernet is a contention media access method that allows all hosts on a network to share the same bandwidth
of a link. Ethernet is popular because it’s readily scalable, meaning that it’s comparatively easy to integrate
new technologies, such as Fast Ethernet and Gigabit Ethernet, into an existing network infrastructure. It’s also
relatively simple to implement in the first place, and with it, troubleshooting is reasonably straightforward.
Ethernet uses both Data Link and Physical layer specifications. Ethernet networking uses Carrier Sense
Multiple Access with Collision Detection (CSMA/CD), a protocol that helps devices share the bandwidth
evenly without having two devices transmit at the same time on the network medium. CSMA/CD was created
to overcome the problem of those collisions that occur when packets are transmitted simultaneously from
different nodes. And trust me—good collision management is crucial, because when a node transmits in a
CSMA/CD network, all the other nodes on the network receive and examine that transmission. Only bridges
and routers can effectively prevent a transmission from propagating throughout the entire network!
So, how does the CSMA/CD protocol work? Let’s start by taking a look at Figure 1.18.
When a host wants to transmit over the network, it first checks for the presence of a digital signal on the wire.
If all is clear (no other host is transmitting), the host will then proceed with its transmission. But it doesn’t stop
there. The transmitting host constantly monitors the wire to make sure no other hosts begin transmitting. If the
host detects another signal on the wire, it sends out an extended jam signal that causes all nodes on the segment
to stop sending data (think busy signal). The nodes respond to that jam signal by waiting a while before
attempting to transmit again. Backoff algorithms determine when the colliding stations can retransmit. If
collisions keep occurring after 15 tries, the nodes attempting to transmit will then time out. Pretty clean! When
a collision occurs on an Ethernet LAN, the following happens:
· A jam signal informs all devices that a collision occurred.
· The collision invokes a random backoff algorithm.
· Each device on the Ethernet segment stops transmitting for a short time until the timers expire.
· All hosts have equal priority to transmit after the timers have expired.
The following are the effects of having a CSMA/CD network sustaining heavy collisions:
· Delay
· Low throughput
· Congestion
NOTE: Backoff on an 802.3 network is the retransmission delay that's enforced when a collision occurs.
When a collision occurs, a host will resume transmission after the forced time delay has expired. After this
backoff delay period has expired, all stations have equal priority to transmit data.
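A highly simplified Python sketch of that carrier sense, collision detection, and backoff loop is shown below. The helper names (wire_is_idle, transmit, collision_detected, send_jam, wait) are made up and would be supplied by the caller; real CSMA/CD runs inside the NIC hardware and measures backoff in slot times, so treat this purely as pseudologic for the steps just described.

```python
import random

MAX_ATTEMPTS = 16   # stations give up after repeated collisions

def csma_cd_send(frame, wire_is_idle, transmit, collision_detected, send_jam, wait):
    """Simplified CSMA/CD: listen before talking, jam and back off on collision."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while not wire_is_idle():        # carrier sense: wait for a quiet wire
            pass
        transmit(frame)
        if not collision_detected():     # the wire is monitored while transmitting
            return True                  # success
        send_jam()                       # tell every node on the segment to stop sending
        # Truncated binary exponential backoff: wait a random number of slot times.
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        wait(slots)
    return False                         # too many collisions: the station times out
```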
In the following sections, I am going to cover Ethernet in detail at both the Data Link layer (layer 2) and the
Physical layer (layer 1).
Half- and Full-Duplex Ethernet
Half-duplex Ethernet is defined in the original 802.3 Ethernet specification. The IEEE specifications discuss the
half-duplex process somewhat differently, but what Cisco describes is the general sense of what is happening
with Ethernet.
It also uses the CSMA/CD protocol to help prevent collisions and to permit retransmitting if a collision does
occur. If a hub is attached to a switch, it must operate in half-duplex mode because the end stations must be
able to detect collisions. Half-duplex Ethernet—typically 10BaseT—is only about 30 to 40 percent efficient as
Cisco sees it because a large 10BaseT network will usually only give you 3 to 4Mbps, at most.
But full-duplex Ethernet uses two pairs of wires instead of one wire pair like half duplex. And full duplex uses
a point-to-point connection between the transmitter of the transmitting device and the receiver of the receiving
device. This means that with full-duplex data transfer, you get a faster data transfer compared to half duplex.
And because the transmitted data is sent on a different set of wires than the received data, no collisions will
occur.
The reason you don’t need to worry about collisions is because now it’s like a freeway with multiple lanes
instead of the single-lane road provided by half duplex. Full-duplex Ethernet is supposed to offer 100 percent
efficiency in both directions—for example, you can get 20Mbps with a 10Mbps Ethernet running full duplex or
200Mbps for Fast Ethernet. But this rate is something known as an aggregate rate, which translates as “you’re
supposed to get” 100 percent efficiency. No guarantees, in networking as in life.
Full-duplex Ethernet can be used in three situations:
· With a connection from a switch to a host
· With a connection from a switch to a switch
· With a connection from a host to a host using a crossover cable
NOTE: Full-duplex Ethernet requires a point-to-point connection when only two nodes are present. You can
run full-duplex with just about any device except a hub.
Now, if it’s capable of all that speed, why wouldn’t it deliver? Well, when a full-duplex Ethernet port is
powered on, it first connects to the remote end and then negotiates with the other end of the Fast Ethernet link.
This is called an auto-detect mechanism. This mechanism first decides on the exchange capability, which
means it checks to see if it can run at 10 or 100Mbps. It then checks to see if it can run full duplex, and if it
can’t, it will run half duplex.
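Here is a small sketch of that decision order: settle the speed first, then the duplex mode. The capability sets are hypothetical, and the real mechanism exchanges link pulses defined by the IEEE autonegotiation specification rather than comparing Python sets.

```python
# Simplified sketch of the auto-detect outcome: pick the highest speed both
# ends support, then full duplex if both ends can do it, otherwise half duplex.
def negotiate(local_caps, remote_caps):
    common = local_caps & remote_caps
    speed = max(s for s, _ in common)                 # 100 beats 10
    duplex = "full" if (speed, "full") in common else "half"
    return speed, duplex

host_nic    = {(10, "half"), (10, "full"), (100, "half"), (100, "full")}
switch_port = {(10, "half"), (10, "full"), (100, "half")}   # can't do 100 full
print(negotiate(host_nic, switch_port))   # -> (100, 'half')
```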
NOTE: Remember that half-duplex Ethernet shares a collision domain and provides a lower effective
throughput than full-duplex Ethernet, which typically has a private collision domain and a higher effective
throughput.
Lastly, remember these important points:
· There are no collisions in full-duplex mode.
· A dedicated switch port is required for each full-duplex node.
· The host network card and the switch port must be capable of operating in full-duplex mode.
Now let’s take a look at how Ethernet works at the Data Link layer.
Ethernet at the Data Link Layer
Ethernet at the Data Link layer is responsible for Ethernet addressing, commonly referred to as hardware
addressing or MAC addressing. Ethernet is also responsible for framing packets received from the Network
layer and preparing them for transmission on the local network through the Ethernet contention media access
method.
Ethernet Addressing
Here’s where we get into how Ethernet addressing works. It uses the Media Access Control (MAC) address
burned into each and every Ethernet network interface card (NIC). The MAC, or hardware, address is a 48-bit
(6-byte) address written in a hexadecimal format.
Figure 1.19 shows the 48-bit MAC addresses and how the bits are divided.
FIGURE 1.19 Ethernet addressing using MAC addresses
The organizationally unique identifier (OUI) is assigned by the IEEE to an organization. It’s composed of 24
bits, or 3 bytes. The organization, in turn, assigns a globally administered address (24 bits, or 3 bytes) that is
unique (supposedly, again—no guarantees) to each and every adapter it manufactures. Look closely at the
figure. The high-order bit is the Individual/Group (I/G) bit. When it has a value of 0, we can assume that the
address is the MAC address of a device and may well appear in the source portion of the MAC header. When it
is a 1, we can assume that the address represents either a broadcast or multicast address in Ethernet or a
broadcast or functional address in Token Ring and FDDI (who really knows about FDDI?).
The next bit is the global/local bit, or just G/L bit (also known as U/L, where U means universal). When set
to 0, this bit represents a globally administered address (as by the IEEE). When the bit is a 1, it represents a
locally governed and administered address (as in what DECnet used to do).
The low-order 24 bits of an Ethernet address represent a locally administered or manufacturer-assigned code.
This portion commonly starts with 24 0s for the first card made and continues in order until there are 24 1s for
the last (16,777,216th) card made. You’ll find that many manufacturers use these same six hex digits as the last
six characters of their serial number on the same card.
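To tie the bit definitions together, here is a small Python sketch that pulls the OUI, the vendor-assigned portion, and the I/G and G/L (U/L) bits out of a MAC address. One note on where the bits live: because Ethernet transmits each byte least significant bit first, the I/G bit that the figure shows in the first transmitted position is the least significant bit of the first octet as the address is normally written, and the G/L bit is the next one up. The sample addresses are made up.

```python
def parse_mac(mac: str):
    """Split a MAC address into OUI, vendor-assigned part, and the I/G and G/L bits."""
    octets = [int(x, 16) for x in mac.split(":")]
    first = octets[0]
    return {
        "oui": mac.upper()[:8],                       # first 3 bytes (24 bits)
        "vendor_assigned": mac.upper()[9:],           # last 3 bytes (24 bits)
        "group_address": bool(first & 0x01),          # I/G bit: 1 = broadcast/multicast
        "locally_administered": bool(first & 0x02),   # G/L (U/L) bit: 1 = locally administered
    }

print(parse_mac("00:1A:2B:3C:4D:5E"))   # a hypothetical globally administered unicast address
print(parse_mac("FF:FF:FF:FF:FF:FF"))   # the broadcast address: the I/G bit is 1
```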