8.6.1 IPsec
IETF has known for years that security was lacking in the Internet. Adding it
was not easy because a war broke out about where to put it. Most security experts
believe that to be really secure, encryption and integrity checks have to be end to
end (i.e., in the application layer). That is, the source process encrypts and/or integrity protects the data and sends them to the destination process where they are
decrypted and/or verified. Any tampering done in between these two processes,
including within either operating system, can then be detected. The trouble with
this approach is that it requires changing all the applications to make them security aware. In this view, the next best approach is putting encryption in the transport layer or in a new layer between the application layer and the transport layer,
making it still end to end but not requiring applications to be changed.
The opposite view is that users do not understand security and will not be capable of using it correctly and nobody wants to modify existing programs in any
way, so the network layer should authenticate and/or encrypt packets without the
users being involved. After years of pitched battles, this view won enough support that a network layer security standard was defined. In part, the argument was
that having network layer encryption does not prevent security-aware users from
doing it right and it does help security-unaware users to some extent.
The result of this war was a design called IPsec (IP security), which is described in RFCs 2401, 2402, and 2406, among others. Not all users want encryption (because it is computationally expensive). Rather than make it optional,
it was decided to require encryption all the time but permit the use of a null algorithm. The null algorithm is described and praised for its simplicity, ease of implementation, and great speed in RFC 2410.
The complete IPsec design is a framework for multiple services, algorithms,
and granularities. The reason for multiple services is that not everyone wants to
pay the price for having all the services all the time, so the services are available a
la carte. The major services are secrecy, data integrity, and protection from
replay attacks (where the intruder replays a conversation). All of these are based
on symmetric-key cryptography because high performance is crucial.
The reason for having multiple algorithms is that an algorithm that is now
thought to be secure may be broken in the future. By making IPsec algorithm-independent, the framework can survive even if some particular algorithm is later broken.
The reason for having multiple granularities is to make it possible to protect a
single TCP connection, all traffic between a pair of hosts, or all traffic between a
pair of secure routers, among other possibilities.
One slightly surprising aspect of IPsec is that even though it is in the IP layer,
it is connection oriented. Actually, that is not so surprising because to have any
security, a key must be established and used for some period of time—in essence,
a kind of connection by a different name. Also, connections amortize the setup
costs over many packets. A ‘‘connection’’ in the context of IPsec is called an SA
(Security Association). An SA is a simplex connection between two endpoints
and has a security identifier associated with it. If secure traffic is needed in both
directions, two security associations are required. Security identifiers are carried
in packets traveling on these secure connections and are used to look up keys and
other relevant information when a secure packet arrives.
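The SPI-based lookup can be sketched in a few lines. The record layout and names below are invented for illustration, not taken from any real IPsec implementation; the point is only that the SPI in an arriving packet indexes the receiver's SA database.

```python
# Hypothetical sketch of a receiver's SA database, keyed by the
# Security Parameters Index (SPI) carried in each secure packet.
# Two simplex SAs are needed for bidirectional traffic.
sa_database = {
    0x1001: {"peer": "10.0.0.2", "key": b"k-outbound", "seq": 0},
    0x2002: {"peer": "10.0.0.2", "key": b"k-inbound",  "seq": 0},
}

def lookup_sa(spi):
    """Return the SA record for an incoming packet's SPI, or None."""
    return sa_database.get(spi)
```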
Technically, IPsec has two principal parts. The first part describes two new
headers that can be added to packets to carry the security identifier, integrity control data, and other information. The other part, ISAKMP (Internet Security
Association and Key Management Protocol), deals with establishing keys.
ISAKMP is a framework. The main protocol for carrying out the work is IKE
(Internet Key Exchange). Version 2 of IKE as described in RFC 4306 should be
used, as the earlier version was deeply flawed, as pointed out by Perlman and
Kaufman (2000).
IPsec can be used in either of two modes. In transport mode, the IPsec
header is inserted just after the IP header. The Protocol field in the IP header is
changed to indicate that an IPsec header follows the normal IP header (before the
TCP header). The IPsec header contains security information, primarily the SA
identifier, a new sequence number, and possibly an integrity check of the payload.
In tunnel mode, the entire IP packet, header and all, is encapsulated in the
body of a new IP packet with a completely new IP header. Tunnel mode is useful
when the tunnel ends at a location other than the final destination. In some cases,
the end of the tunnel is a security gateway machine, for example, a company firewall. This is commonly the case for a VPN (Virtual Private Network). In this
mode, the security gateway encapsulates and decapsulates packets as they pass
through it. By terminating the tunnel at this secure machine, the machines on the
company LAN do not have to be aware of IPsec. Only the security gateway has
to know about it.
Tunnel mode is also useful when a bundle of TCP connections is aggregated
and handled as one encrypted stream because it prevents an intruder from seeing
who is sending how many packets to whom. Sometimes just knowing how much
traffic is going where is valuable information. For example, if during a military
crisis, the amount of traffic flowing between the Pentagon and the White House
were to drop sharply, but the amount of traffic between the Pentagon and some
military installation deep in the Colorado Rocky Mountains were to increase by
the same amount, an intruder might be able to deduce some useful information
from these data. Studying the flow patterns of packets, even if they are encrypted,
is called traffic analysis. Tunnel mode provides a way to foil it to some extent.
The disadvantage of tunnel mode is that it adds an extra IP header, thus increasing
packet size substantially. In contrast, transport mode does not affect packet size
as much.
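The size difference comes down to simple arithmetic, sketched below assuming a 20-byte IPv4 header and an 8-byte fixed IPsec header (real packets also carry an IV, padding, and an integrity trailer, so actual overheads are larger):

```python
# Illustrative only: compare per-packet overhead of transport mode
# versus tunnel mode under simplifying assumptions.
IP_HDR = 20    # assumed IPv4 header size, no options
IPSEC_HDR = 8  # assumed fixed IPsec header (SPI + sequence number)

def transport_mode_size(payload):
    # [IP][IPsec][payload] -- IPsec header inserted after the IP header
    return IP_HDR + IPSEC_HDR + payload

def tunnel_mode_size(payload):
    # [new IP][IPsec][old IP][payload] -- whole original packet wrapped
    return IP_HDR + IPSEC_HDR + IP_HDR + payload
```

Tunnel mode always costs one extra IP header per packet relative to transport mode.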
The first new header is AH (Authentication Header). It provides integrity
checking and antireplay security, but not secrecy (i.e., no data encryption). The
use of AH in transport mode is illustrated in Fig. 8-27. In IPv4, it is interposed
between the IP header (including any options) and the TCP header. In IPv6, it is
just another extension header and is treated as such. In fact, the format is close to
that of a standard IPv6 extension header. The payload may have to be padded out
to some particular length for the authentication algorithm, as shown.
[Figure: the packet layout is IP header, AH header, TCP header, then payload plus padding; the AH header's 32-bit words hold the Next header, Payload len, Security parameters index, Sequence number, and Authentication data (HMAC) fields.]
Figure 8-27. The IPsec authentication header in transport mode for IPv4.
Let us now examine the AH header. The Next header field is used to store the
value that the IP Protocol field had before it was replaced with 51 to indicate that
an AH header follows. In most cases, the code for TCP (6) will go here. The
Payload length is the number of 32-bit words in the AH header minus 2.
The Security parameters index is the connection identifier. It is inserted by
the sender to indicate a particular record in the receiver’s database. This record
contains the shared key used on this connection and other information about the
connection. If this protocol had been invented by ITU rather than IETF, this field
would have been called Virtual circuit number.
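The fixed part of the AH header can be sketched with Python's struct module. The field order follows the description above; the 12-byte integrity check value is only an example length:

```python
import struct

def build_ah_header(next_header, spi, seq, icv):
    """Sketch of the AH header framing (simplified).

    Payload len counts the 32-bit words in the AH header minus 2,
    so a 12-byte fixed part plus a 12-byte ICV gives 24/4 - 2 = 4.
    """
    payload_len = (12 + len(icv)) // 4 - 2
    # Next header (1 byte), Payload len (1 byte), Reserved (2 bytes),
    # SPI (4 bytes), Sequence number (4 bytes)
    fixed = struct.pack("!BBHLL", next_header, payload_len, 0, spi, seq)
    return fixed + icv

# Example: an AH header carrying TCP (protocol 6) on SA 0x1001
hdr = build_ah_header(next_header=6, spi=0x1001, seq=1, icv=b"\x00" * 12)
```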
The Sequence number field is used to number all the packets sent on an SA.
Every packet gets a unique number, even retransmissions. In other words, the retransmission of a packet gets a different number here than the original (even
though its TCP sequence number is the same). The purpose of this field is to
detect replay attacks. These sequence numbers may not wrap around. If all 2^32 of them are exhausted, a new SA must be established to continue communication.
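The anti-replay logic can be sketched as follows. This simplified version accepts only strictly increasing sequence numbers; real IPsec implementations use a sliding window so that modestly reordered packets are not rejected:

```python
# Simplified sketch of anti-replay checking on one SA: each sequence
# number is accepted at most once, and the sequence space never wraps.
MAX_SEQ = 2**32 - 1

class ReplayChecker:
    def __init__(self):
        self.highest = 0          # highest sequence number seen so far

    def accept(self, seq):
        if seq > MAX_SEQ:
            raise ValueError("sequence space exhausted; set up a new SA")
        if seq <= self.highest:   # duplicate or replayed packet
            return False
        self.highest = seq
        return True
```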
Finally, we come to Authentication data, which is a variable-length field that
contains the payload’s digital signature. When the SA is established, the two
sides negotiate which signature algorithm they are going to use. Normally, public-key cryptography is not used here because packets must be processed extremely rapidly and all known public-key algorithms are too slow. Since IPsec is based
on symmetric-key cryptography and the sender and receiver negotiate a shared
key before setting up an SA, the shared key is used in the signature computation.
One simple way is to compute the hash over the packet plus the shared key. The
shared key is not transmitted, of course. A scheme like this is called an HMAC
(Hashed Message Authentication Code). It is much faster to compute than first
running SHA-1 and then running RSA on the result.
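The idea can be sketched with Python's standard hmac module. SHA-256 and the key shown here are purely illustrative; in IPsec the algorithm and key are whatever the two sides negotiated when the SA was established:

```python
import hmac
import hashlib

shared_key = b"negotiated-sa-key"   # hypothetical key from SA setup

def sign(packet):
    """Compute an HMAC tag over the packet using the shared key."""
    return hmac.new(shared_key, packet, hashlib.sha256).digest()

def verify(packet, tag):
    """Check the tag in constant time; True iff the packet is intact."""
    return hmac.compare_digest(sign(packet), tag)
```

The key itself never goes on the wire; only the tag does, and a receiver without the key cannot forge a matching tag.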
The AH header does not allow encryption of the data, so it is mostly useful
when integrity checking is needed but secrecy is not needed. One noteworthy feature of AH is that the integrity check covers some of the fields in the IP header,
namely, those that do not change as the packet moves from router to router. The
Time to live field changes on each hop, for example, so it cannot be included in
the integrity check. However, the IP source address is included in the check,
making it impossible for an intruder to falsify the origin of a packet.
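One way to picture this selective coverage: before the integrity check is computed, the mutable fields are treated as zero, while immutable fields such as the source address are covered as-is. The field names below are illustrative:

```python
# Sketch of why AH's integrity check skips mutable IP header fields:
# fields that routers legitimately change in transit are zeroed before
# the check is computed, so they cannot cause false failures, while
# the immutable fields remain protected against tampering.
MUTABLE_FIELDS = {"ttl", "checksum"}

def icv_input(ip_header):
    """Return the header fields as fed to the integrity check."""
    return {f: (0 if f in MUTABLE_FIELDS else v)
            for f, v in ip_header.items()}
```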
The alternative IPsec header is ESP (Encapsulating Security Payload). Its
use for both transport mode and tunnel mode is shown in Fig. 8-28.
[Figure: (a) transport mode packet: IP header, ESP header, payload plus padding, authentication (HMAC) trailer. (b) tunnel mode packet: new IP header, ESP header, old IP header, payload plus padding, authentication (HMAC) trailer.]
Figure 8-28. (a) ESP in transport mode. (b) ESP in tunnel mode.
The ESP header consists of two 32-bit words. They are the Security parameters index and Sequence number fields that we saw in AH. A third word that generally follows them (but is technically not part of the header) is the Initialization
vector used for the data encryption, unless null encryption is used, in which case it
is omitted.
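The front of an ESP packet can be sketched as follows, with the 8-byte IV length chosen purely for illustration:

```python
import struct
import os

def build_esp_front(spi, seq, iv_len=8):
    """Sketch of the start of an ESP packet: the two-word header
    (SPI and sequence number), followed by the IV for the cipher
    in use. The IV is omitted when null encryption is used."""
    header = struct.pack("!LL", spi, seq)   # the two 32-bit words
    iv = os.urandom(iv_len)                 # fresh IV per packet
    return header + iv
```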
ESP also provides for HMAC integrity checks, as does AH, but rather than
being included in the header, they come after the payload, as shown in Fig. 8-28.
Putting the HMAC at the end has an advantage in a hardware implementation: the
HMAC can be calculated as the bits are going out over the network interface and
appended to the end. This is why Ethernet and other LANs have their CRCs in a
trailer, rather than in a header. With AH, the packet has to be buffered and the
signature computed before the packet can be sent, potentially reducing the number
of packets/sec that can be sent.
Given that ESP can do everything AH can do and more and is more efficient
to boot, the question arises: why bother having AH at all? The answer is mostly
historical. Originally, AH handled only integrity and ESP handled only secrecy.
Later, integrity was added to ESP, but the people who designed AH did not want
to let it die after all that work. Their only real argument is that AH checks part of
the IP header, which ESP does not, but other than that it is really a weak argument. Another weak argument is that a product supporting AH but not ESP might
have less trouble getting an export license because it cannot do encryption. AH is
likely to be phased out in the future.
8.6.2 Firewalls
The ability to connect any computer, anywhere, to any other computer, anywhere, is a mixed blessing. For individuals at home, wandering around the Internet is lots of fun. For corporate security managers, it is a nightmare. Most companies have large amounts of confidential information online—trade secrets, product development plans, marketing strategies, financial analyses, etc. Disclosure of
this information to a competitor could have dire consequences.
In addition to the danger of information leaking out, there is also a danger of
information leaking in. In particular, viruses, worms, and other digital pests can
breach security, destroy valuable data, and waste large amounts of administrators’
time trying to clean up the mess they leave. Often they are imported by careless
employees who want to play some nifty new game.
Consequently, mechanisms are needed to keep ‘‘good’’ bits in and ‘‘bad’’ bits
out. One method is to use IPsec. This approach protects data in transit between
secure sites. However, IPsec does nothing to keep digital pests and intruders from
getting onto the company LAN. To see how to accomplish this goal, we need to
look at firewalls.
Firewalls are just a modern adaptation of that old medieval security standby:
digging a deep moat around your castle. This design forced everyone entering or
leaving the castle to pass over a single drawbridge, where they could be inspected
by the I/O police. With networks, the same trick is possible: a company can have
many LANs connected in arbitrary ways, but all traffic to or from the company is
forced through an electronic drawbridge (firewall), as shown in Fig. 8-29. No
other route exists.
[Figure: a firewall standing between the Internet and the internal network, with a DeMilitarized zone holding the Web server and email server.]
Figure 8-29. A firewall protecting an internal network.
The firewall acts as a packet filter. It inspects each and every incoming and
outgoing packet. Packets meeting some criterion described in rules formulated by
the network administrator are forwarded normally. Those that fail the test are
unceremoniously dropped.
The filtering criterion is typically given as rules or tables that list sources and
destinations that are acceptable, sources and destinations that are blocked, and default rules about what to do with packets coming from or going to other machines.
In the common case of a TCP/IP setting, a source or destination might consist of
an IP address and a port. Ports indicate which service is desired. For example,
TCP port 25 is for mail, and TCP port 80 is for HTTP. Some ports can simply be
blocked. For example, a company could block incoming packets for all IP addresses combined with TCP port 79. That port was once popular with the Finger service for looking up people's email addresses, but the service is little used today.
Other ports are not so easily blocked. The difficulty is that network administrators want security but cannot cut off communication with the outside world.
That arrangement would be much simpler and better for security, but there would
be no end to user complaints about it. This is where arrangements such as the
DMZ (DeMilitarized Zone) shown in Fig. 8-29 come in handy. The DMZ is the
part of the company network that lies outside of the security perimeter. Anything
goes here. By placing a machine such as a Web server in the DMZ, computers on
the Internet can contact it to browse the company Web site. Now the firewall can
be configured to block incoming TCP traffic to port 80 so that computers on the
Internet cannot use this port to attack computers on the internal network. To allow
the Web server to be managed, the firewall can have a rule to permit connections
between internal machines and the Web server.
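A toy rule table in the spirit of this example might look as follows. The zone names, host name, and the management port are invented for illustration; real firewalls match on IP prefixes and much more:

```python
# Hypothetical rule table: the world may reach the DMZ Web server on
# port 80, internal machines may manage it on port 22, and everything
# else falls through to the default rule and is dropped.
RULES = [
    # (source zone, destination host, destination port, action)
    ("*",        "dmz-web", 80, "allow"),   # public Web browsing
    ("internal", "dmz-web", 22, "allow"),   # management from inside
]

def filter_packet(src_zone, dst, dport):
    for rule_src, rule_dst, rule_port, action in RULES:
        if rule_src in ("*", src_zone) and dst == rule_dst and dport == rule_port:
            return action
    return "drop"   # default rule: unceremoniously dropped
```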
Firewalls have become much more sophisticated over time in an arms race
with attackers. Originally, firewalls applied a rule set independently for each
packet, but it proved difficult to write rules that allowed useful functionality but
blocked all unwanted traffic. Stateful firewalls map packets to connections and
use TCP/IP header fields to keep track of connections. This allows for rules that,
for example, allow an external Web server to send packets to an internal host, but
only if the internal host first establishes a connection with the external Web server. Such a rule is not possible with stateless designs that must either pass or drop
all packets from the external Web server.
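The essence of connection tracking can be sketched in a few lines. Real stateful firewalls track full TCP state and time entries out; this toy version only records which 4-tuples were opened from the inside:

```python
# Sketch of stateful filtering: an inbound packet is accepted only if
# an internal host opened the matching connection first.
established = set()

def outbound(src, sport, dst, dport):
    """Internal host opens a connection; remember the reverse tuple."""
    established.add((dst, dport, src, sport))  # expected reply direction
    return "allow"

def inbound(src, sport, dst, dport):
    """External packet: allowed only if it matches a tracked connection."""
    if (src, sport, dst, dport) in established:
        return "allow"
    return "drop"   # a stateless design could not make this distinction
```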
Another level of sophistication up from stateful processing is for the firewall
to implement application-level gateways. This processing involves the firewall
looking inside packets, beyond even the TCP header, to see what the application
is doing. With this capability, it is possible to distinguish HTTP traffic used for
Web browsing from HTTP traffic used for peer-to-peer file sharing. Administrators can write rules to spare the company from peer-to-peer file sharing but allow
Web browsing that is vital for business. For all of these methods, outgoing traffic
can be inspected as well as incoming traffic, for example, to prevent sensitive
documents from being emailed outside of the company.
As the above discussion should make clear, firewalls violate the standard layering of protocols. They are network layer devices, but they peek at the transport
and application layers to do their filtering. This makes them fragile. For
instance, firewalls tend to rely on standard port numbering conventions to determine what kind of traffic is carried in a packet. Standard ports are often used, but
not by all computers, and not by all applications either. Some peer-to-peer applications select ports dynamically to avoid being easily spotted (and blocked). Encryption with IPsec or other schemes hides higher-layer information from the
firewall. Finally, a firewall cannot readily talk to the computers that communicate
through it to tell them what policies are being applied and why their connection is
being dropped. It must simply pretend to be a broken wire. For all these reasons,
networking purists consider firewalls to be a blemish on the architecture of the Internet. However, the Internet can be a dangerous place if you are a computer.
Firewalls help with that problem, so they are likely to stay.
Even if the firewall is perfectly configured, plenty of security problems still
exist. For example, if a firewall is configured to allow in packets from only specific networks (e.g., the company’s other plants), an intruder outside the firewall
can put in false source addresses to bypass this check. If an insider wants to ship
out secret documents, he can encrypt them or even photograph them and ship the
photos as JPEG files, which bypasses any email filters. And we have not even
discussed the fact that, although three-quarters of all attacks come from outside
the firewall, the attacks that come from inside the firewall, for example, from disgruntled employees, are typically the most damaging (Verizon, 2009).
A different problem with firewalls is that they provide a single perimeter of
defense. If that defense is breached, all bets are off. For this reason, firewalls are
often used in a layered defense. For example, a firewall may guard the entrance to
the internal network and each computer may also run its own firewall. Readers
who think that one security checkpoint is enough clearly have not made an international flight on a scheduled airline recently.
In addition, there is a whole other class of attacks that firewalls cannot deal
with. The basic idea of a firewall is to prevent intruders from getting in and secret
data from getting out. Unfortunately, there are people who have nothing better to
do than try to bring certain sites down. They do this by sending legitimate packets
at the target in great numbers until it collapses under the load. For example, to
cripple a Web site, an intruder can send a TCP SYN packet to establish a connection. The site will then allocate a table slot for the connection and send a SYN
+ ACK packet in reply. If the intruder does not respond, the table slot will be tied
up for a few seconds until it times out. If the intruder sends thousands of connection requests, all the table slots will fill up and no legitimate connections will
be able to get through. Attacks in which the intruder’s goal is to shut down the
target rather than steal data are called DoS (Denial of Service) attacks. Usually,
the request packets have false source addresses so the intruder cannot be traced
easily. DoS attacks against major Web sites are common on the Internet.
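A toy model makes the resource exhaustion concrete. The table size here is absurdly small just to keep the example short; real servers have more slots, but still finitely many:

```python
# Toy model of the SYN flood described above: each half-open
# connection ties up one table slot until it times out, so enough
# spoofed SYNs exhaust the table and lock legitimate clients out.
TABLE_SLOTS = 4   # illustrative; real tables are bigger but finite

half_open = []

def handle_syn(src):
    if len(half_open) >= TABLE_SLOTS:
        return "refused"          # no slot left for this connection
    half_open.append(src)         # slot held until ACK or timeout
    return "syn+ack"
```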
An even worse variant is one in which the intruder has already broken into
hundreds of computers elsewhere in the world, and then commands all of them to
attack the same target at the same time. Not only does this approach increase the
intruder’s firepower, but it also reduces his chances of detection since the packets
are coming from a large number of machines belonging to unsuspecting users.
Such an attack is called a DDoS (Distributed Denial of Service) attack. This attack is difficult to defend against. Even if the attacked machine can quickly
recognize a bogus request, it does take some time to process and discard the request, and if enough requests per second arrive, the CPU will spend all its time
dealing with them.
8.6.3 Virtual Private Networks
Many companies have offices and plants scattered over many cities, sometimes over multiple countries. In the olden days, before public data networks, it
was common for such companies to lease lines from the telephone company between some or all pairs of locations. Some companies still do this. A network
built up from company computers and leased telephone lines is called a private network.
Private networks work fine and are very secure. If the only lines available are
the leased lines, no traffic can leak out of company locations and intruders have to
physically wiretap the lines to break in, which is not easy to do. The problem
with private networks is that leasing a dedicated T1 line between two points costs
thousands of dollars a month, and T3 lines are many times more expensive. When
public data networks and later the Internet appeared, many companies wanted to
move their data (and possibly voice) traffic to the public network, but without giving up the security of the private network.
This demand soon led to the invention of VPNs (Virtual Private Networks),
which are overlay networks on top of public networks but with most of the properties of private networks. They are called ‘‘virtual’’ because they are merely an
illusion, just as virtual circuits are not real circuits and virtual memory is not real memory.
One popular approach is to build VPNs directly over the Internet. A common
design is to equip each office with a firewall and create tunnels through the Internet between all pairs of offices, as illustrated in Fig. 8-30(a). A further advantage
of using the Internet for connectivity is that the tunnels can be set up on demand
to include, for example, the computer of an employee who is at home or traveling
as long as the person has an Internet connection. This flexibility is much greater
than is provided with leased lines, yet from the perspective of the computers on
the VPN, the topology looks just like the private network case, as shown in
Fig. 8-30(b). When the system is brought up, each pair of firewalls has to negotiate the parameters of its SA, including the services, modes, algorithms, and keys.
[Figure: two offices, each with a firewall, connected by a tunnel through the Internet; seen from the inside, the topology looks like a leased-line private network.]
Figure 8-30. (a) A virtual private network. (b) Topology as seen from the inside.

If IPsec is used for the tunneling, it is possible to aggregate all traffic between any two pairs of offices onto a single authenticated, encrypted SA, thus providing integrity control, secrecy, and even considerable immunity to traffic analysis. Many
firewalls have VPN capabilities built in. Some ordinary routers can do this as
well, but since firewalls are primarily in the security business, it is natural to have
the tunnels begin and end at the firewalls, providing a clear separation between
the company and the Internet. Thus, firewalls, VPNs, and IPsec with ESP in tunnel mode are a natural combination and widely used in practice.
Once the SAs have been established, traffic can begin flowing. To a router
within the Internet, a packet traveling along a VPN tunnel is just an ordinary
packet. The only thing unusual about it is the presence of the IPsec header after
the IP header, but since these extra headers have no effect on the forwarding process, the routers do not care about this extra header.
Another approach that is gaining popularity is to have the ISP set up the VPN.
Using MPLS (as discussed in Chap. 5), paths for the VPN traffic can be set up across the ISP network between the company offices. These paths keep the VPN
traffic separate from other Internet traffic and can be guaranteed a certain amount
of bandwidth or other quality of service.
A key advantage of a VPN is that it is completely transparent to all user software. The firewalls set up and manage the SAs. The only person who is even
aware of this setup is the system administrator who has to configure and manage
the security gateways, or the ISP administrator who has to configure the MPLS
paths. To everyone else, it is like having a leased-line private network again. For
more about VPNs, see Lewis (2006).
8.6.4 Wireless Security
It is surprisingly easy to design a system using VPNs and firewalls that is logically completely secure but that, in practice, leaks like a sieve. This situation can
occur if some of the machines are wireless and use radio communication, which
passes right over the firewall in both directions. The range of 802.11 networks is
often a few hundred meters, so anyone who wants to spy on a company can simply drive into the employee parking lot in the morning, leave an 802.11-enabled
notebook computer in the car to record everything it hears, and take off for the
day. By late afternoon, the hard disk will be full of valuable goodies. Theoretically, this leakage is not supposed to happen. Theoretically, people are not supposed to rob banks, either.
Much of the security problem can be traced to the manufacturers of wireless
base stations (access points) trying to make their products user friendly. Usually,
if the user takes the device out of the box and plugs it into the electrical power
socket, it begins operating immediately—nearly always with no security at all,
blurting secrets to everyone within radio range. If it is then plugged into an Ethernet, all the Ethernet traffic suddenly appears in the parking lot as well. Wireless
is a snooper’s dream come true: free data without having to do any work. It therefore goes without saying that security is even more important for wireless systems
than for wired ones. In this section, we will look at some ways wireless networks
handle security. Some additional information is given by Nichols and Lekkas (2002).
802.11 Security
Part of the 802.11 standard, originally called 802.11i, prescribes a data link-level security protocol for preventing a wireless node from reading or interfering
with messages sent between another pair of wireless nodes. It also goes by the
trade name WPA2 (WiFi Protected Access 2). Plain WPA is an interim scheme
that implements a subset of 802.11i. It should be avoided in favor of WPA2.
We will describe 802.11i shortly, but will first note that it is a replacement for
WEP (Wired Equivalent Privacy), the first generation of 802.11 security protocols. WEP was designed by a networking standards committee, which is a completely different process than, for example, the way NIST selected the design of
AES. The results were devastating. What was wrong with it? Pretty much everything from a security perspective as it turns out. For example, WEP encrypted
data for confidentiality by XORing it with the output of a stream cipher. Unfortunately, weak keying arrangements meant that the output was often reused. This
led to trivial ways to defeat it. As another example, the integrity check was based
on a 32-bit CRC. That is an efficient code for detecting transmission errors, but it
is not a cryptographically strong mechanism for defeating attackers.
These and other design flaws made WEP very easy to compromise. The first
practical demonstration that WEP was broken came when Adam Stubblefield was
an intern at AT&T (Stubblefield et al., 2002). He was able to code up and test an
attack outlined by Fluhrer et al. (2001) in one week, of which most of the time
was spent convincing management to buy him a WiFi card to use in his experiments. Software to crack WEP passwords within a minute is now freely available
and the use of WEP is very strongly discouraged. While it does prevent casual
access, it does not provide any real form of security. The 802.11i group was put
together in a hurry when it was clear that WEP was seriously broken. It produced
a formal standard by June 2004.
Now we will describe 802.11i, which does provide real security if it is set up
and used properly. There are two common scenarios in which WPA2 is used. The
first is a corporate setting, in which a company has a separate authentication server that has a username and password database that can be used to determine if a
wireless client is allowed to access the network. In this setting, clients use standard protocols to authenticate themselves to the network. The main standards are
802.1X, with which the access point lets the client carry on a dialogue with the
authentication server and observes the result, and EAP (Extensible Authentication Protocol) (RFC 3748), which tells how the client and the authentication server interact. Actually, EAP is a framework and other standards define the protocol messages. However, we will not delve into the many details of this exchange
because they do not much matter for an overview.
The second scenario is in a home setting in which there is no authentication
server. Instead, there is a single shared password that is used by clients to access
the wireless network. This setup is less complex than having an authentication
server, which is why it is used at home and in small businesses, but it is less
secure as well. The main difference is that with an authentication server each client gets a key for encrypting traffic that is not known by the other clients. With a
single shared password, different keys are derived for each client, but all clients
have the same password and can derive each other's keys if they want to.
The keys that are used to encrypt traffic are computed as part of an
authentication handshake. The handshake happens right after the client associates
with a wireless network and authenticates with an authentication server, if there is
one. At the start of the handshake, the client has either the shared network password or its password for the authentication server. This password is used to derive
a master key. However, the master key is not used directly to encrypt packets. It
is standard cryptographic practice to derive a session key for each period of usage,
to change the key for different sessions, and to expose the master key to observation as little as possible. It is this session key that is computed in the handshake.
The session key is computed with the four-packet handshake shown in Fig. 8-31. First, the AP (access point) sends a random number for identification. Random numbers used just once in security protocols like this one are called nonces,
which is more-or-less a contraction of ‘‘number used once.’’ The client also picks
its own nonce. It uses the nonces, its MAC address and that of the AP, and the
master key to compute a session key, KS . The session key is split into portions,
each of which is used for different purposes, but we have omitted this detail. Now
the client has session keys, but the AP does not. So the client sends its nonce to
the AP, and the AP performs the same computation to derive the same session
keys. The nonces can be sent in the clear because the keys cannot be derived from
them without extra, secret information. The message from the client is protected
with an integrity check called a MIC (Message Integrity Check) based on the
session key. The AP can check that the MIC is correct, and so the message indeed
must have come from the client, after it computes the session keys. A MIC is just
another name for a message authentication code, as in an HMAC. The term MIC
is often used instead for networking protocols because of the potential for confusion with MAC (Medium Access Control) addresses.
[Figure: the four-packet exchange between client and AP. The AP sends its nonce; the client computes the session keys KS from the MAC addresses, nonces, and master key and replies with NonceC protected by MICS; the AP computes the same session keys, distributes the group key KG, and the client acknowledges. Both sides now have KS.]
Figure 8-31. The 802.11i key setup handshake.
In the last two messages, the AP distributes a group key, KG , to the client, and
the client acknowledges the message. Receipt of these messages lets the client
verify that the AP has the correct session keys, and vice versa. The group key is
used for broadcast and multicast traffic on the 802.11 LAN. Because the result of
the handshake is that every client has its own encryption keys, none of these keys
can be used by the AP to broadcast packets to all of the wireless clients; a separate copy would need to be sent to each client using its key. Instead, a shared key
is distributed so that broadcast traffic can be sent only once and received by all
the clients. It must be updated as clients leave and join the network.
Finally, we get to the part where the keys are actually used to provide security. Two protocols can be used in 802.11i to provide message confidentiality, integrity, and authentication. Like WPA, one of the protocols, called TKIP (Temporal Key Integrity Protocol), was an interim solution. It was designed to improve security on old and slow 802.11 cards, so that at least some security that is
better than WEP can be rolled out as a firmware upgrade. However, it, too, has
now been broken so you are better off with the other, recommended protocol,
CCMP. What does CCMP stand for? It is short for the somewhat spectacular
name Counter mode with Cipher block chaining Message authentication code Protocol. We will just call it CCMP. You can call it anything you want.
CCMP works in a fairly straightforward way. It uses AES encryption with a
128-bit key and block size. The key comes from the session key. To provide confidentiality, messages are encrypted with AES in counter mode. Recall that we
discussed cipher modes in Sec. 8.2.3. These modes are what prevent the same
message from being encrypted to the same set of bits each time. Counter mode
mixes a counter into the encryption. To provide integrity, the message, including
header fields, is encrypted with cipher block chaining mode and the last 128-bit
block is kept as the MIC. Then both the message (encrypted with counter mode)
and the MIC are sent. The client and the AP can each perform this encryption, or
verify this encryption when a wireless packet is received. For broadcast or multicast messages, the same procedure is used with the group key.
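The two halves of CCMP can be sketched as follows. Since the Python standard library has no AES, a hash-based stand-in plays the role of the block cipher here; real CCMP runs AES with the 128-bit session key, and the exact counter format and MIC truncation differ from this sketch.

```python
import hashlib

BLOCK = 16  # AES block size in bytes

def prf_block(key: bytes, block: bytes) -> bytes:
    # Stand-in for AES encryption of one 16-byte block.
    # Both counter mode and CBC-MAC only ever run the cipher forward.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def ctr_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # Counter mode: encrypt nonce||counter to get a keystream block,
    # then XOR it with the data. Decryption is the same operation.
    out = bytearray()
    for i in range(0, len(plaintext), BLOCK):
        counter = nonce + i.to_bytes(BLOCK - len(nonce), "big")
        out += xor(plaintext[i:i + BLOCK], prf_block(key, counter))
    return bytes(out)

def cbc_mac(key: bytes, message: bytes) -> bytes:
    # CBC chaining over the whole message; the last block is kept as the MIC.
    message += b"\x00" * (-len(message) % BLOCK)  # zero-pad to a block boundary
    state = b"\x00" * BLOCK
    for i in range(0, len(message), BLOCK):
        state = prf_block(key, xor(state, message[i:i + BLOCK]))
    return state

key = hashlib.sha256(b"session key material").digest()[:BLOCK]
header, payload = b"hdr:client->AP", b"confidential wireless payload"
nonce = b"\x01" * 8

mic = cbc_mac(key, header + payload)           # integrity covers header fields too
ciphertext = ctr_encrypt(key, nonce, payload)  # confidentiality covers the data
assert ctr_encrypt(key, nonce, ciphertext) == payload  # CTR is its own inverse
```

Both the ciphertext and the MIC go out on the air; the receiver recomputes the CBC-MAC and compares.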
Bluetooth Security
Bluetooth has a considerably shorter range than 802.11, so it cannot easily be
attacked from the parking lot, but security is still an issue here. For example, imagine that Alice’s computer is equipped with a wireless Bluetooth keyboard. In
the absence of security, if Trudy happened to be in the adjacent office, she could
read everything Alice typed in, including all her outgoing email. She could also
capture everything Alice’s computer sent to the Bluetooth printer sitting next to it
(e.g., incoming email and confidential reports). Fortunately, Bluetooth has an elaborate security scheme to try to foil the world’s Trudies. We will now summarize
the main features of it.
Bluetooth version 2.1 and later has four security modes, ranging from nothing
at all to full data encryption and integrity control. As with 802.11, if security is
disabled (the default for older devices), there is no security. Most users have security turned off until a serious breach has occurred; then they turn it on. In the
agricultural world, this approach is known as locking the barn door after the horse
has escaped.
Bluetooth provides security in multiple layers. In the physical layer, frequency hopping provides a tiny little bit of security, but since any Bluetooth device
that moves into a piconet has to be told the frequency hopping sequence, this sequence is obviously not a secret. The real security starts when the newly arrived
slave asks for a channel with the master. Before Bluetooth 2.1, two devices were
assumed to share a secret key set up in advance. In some cases, both are
hardwired by the manufacturer (e.g., for a headset and mobile phone sold as a
unit). In other cases, one device (e.g., the headset) has a hardwired key and the
user has to enter that key into the other device (e.g., the mobile phone) as a
decimal number. These shared keys are called passkeys. Unfortunately, the
passkeys are often hardcoded to ‘‘1234’’ or another predictable value, and in any
case are four decimal digits, allowing only 10^4 choices. With secure simple pairing in Bluetooth 2.1, devices pick a code from a six-digit range, which makes the
passkey much less predictable but still far from secure.
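How weak a four-digit passkey is can be seen with a short brute-force sketch. The pairing_check function here is hypothetical, standing in for whatever response a device computes from the shared passkey; the point is only the size of the search space.

```python
import hashlib
import hmac

def pairing_check(passkey: str, challenge: bytes) -> bytes:
    # Hypothetical challenge response keyed by the passkey
    # (not the actual Bluetooth pairing computation).
    return hmac.new(passkey.encode(), challenge, hashlib.sha256).digest()

challenge = b"observed challenge"
observed = pairing_check("1234", challenge)  # victim uses a predictable passkey

# An eavesdropper who captured (challenge, response) simply tries
# all 10**4 four-digit passkeys.
for guess in (f"{n:04d}" for n in range(10 ** 4)):
    if pairing_check(guess, challenge) == observed:
        break
print(guess)  # recovers "1234" after at most 10,000 tries
```

A six-digit code multiplies the work by only 100, which is why the text calls it less predictable but still far from secure.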
To establish a channel, the slave and master each check to see if the other one
knows the passkey. If so, they negotiate whether that channel will be encrypted,
integrity controlled, or both. Then they select a random 128-bit session key, some
of whose bits may be public. The point of allowing this key weakening is to comply with government restrictions in various countries designed to prevent the
export or use of keys longer than the government can break.
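The effect of this key weakening is easy to quantify: every key bit made public halves the attacker's work, a back-of-the-envelope calculation along these lines.

```python
def effective_keyspace(total_bits: int, public_bits: int) -> int:
    # Publishing p bits of a key leaves 2**(total - p) candidates to search.
    return 2 ** (total_bits - public_bits)

print(effective_keyspace(128, 0))   # full 128-bit strength: 2**128 candidates
print(effective_keyspace(128, 88))  # only 2**40 left, within brute-force reach
```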
Encryption uses a stream cipher called E0; integrity control uses SAFER+, a traditional symmetric-key block cipher. SAFER+ was submitted to the
AES bake-off but was eliminated in the first round because it was slower than the
other candidates. Bluetooth was finalized before the AES cipher was chosen;
otherwise, it would most likely have used Rijndael.
The actual encryption using the stream cipher is shown in Fig. 8-14, with the
plaintext XORed with the keystream to generate the ciphertext. Unfortunately,
E0 itself (like RC4) may have fatal weaknesses (Jakobsson and Wetzel, 2001).
While it was not broken at the time of this writing, its similarities to the A5/1
cipher, whose spectacular failure compromises all GSM telephone traffic, are
cause for concern (Biryukov et al., 2000). It sometimes amazes people (including the authors of this book) that in the perennial cat-and-mouse game between the cryptographers and the cryptanalysts, the cryptanalysts are so often on the winning side.
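The XOR construction of Fig. 8-14 can be sketched in a few lines. The keystream generator below is a hash-based stand-in, not the real E0; only the XOR structure is the point.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Hash-based keystream generator standing in for E0.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def stream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation.
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

key, nonce = b"128-bit session!", b"clk+addr"
plaintext = b"text typed on the Bluetooth keyboard"
ciphertext = stream_xor(key, nonce, plaintext)
assert stream_xor(key, nonce, ciphertext) == plaintext
```

The fragility of such designs lies entirely in the keystream generator: if its output is predictable or ever reused, the XOR structure gives the plaintext away, which is exactly the kind of weakness found in E0 and A5/1.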
Another security issue is that Bluetooth authenticates only devices, not users,
so theft of a Bluetooth device may give the thief access to the user’s financial and
other accounts. However, Bluetooth also implements security in the upper layers,
so even in the event of a breach of link-level security, some security may remain,
especially for applications that require a PIN code to be entered manually from
some kind of keyboard to complete the transaction.
8.7 Authentication Protocols
Authentication is the technique by which a process verifies that its communication partner is who it is supposed to be and not an imposter. Verifying the identity of a remote process in the face of a malicious, active intruder is surprisingly
difficult and requires complex protocols based on cryptography. In this section,
we will study some of the many authentication protocols that are used on insecure
computer networks.
As an aside, some people confuse authorization with authentication.
Authentication deals with the question of whether you are actually communicating
with a specific process. Authorization is concerned with what that process is permitted to do. For example, say a client process contacts a file server and says: ‘‘I
am Scott’s process and I want to delete the file cookbook.old.’’ From the file server’s point of view, two questions must be answered: