IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 7, NO. 7, JULY 2008
Mitigating Performance Degradation in
Congested Sensor Networks
Raju Kumar, Student Member, IEEE, Riccardo Crepaldi, Student Member, IEEE,
Hosam Rowaihy, Student Member, IEEE, Albert F. Harris III, Member, IEEE,
Guohong Cao, Senior Member, IEEE, Michele Zorzi, Fellow, IEEE, and
Thomas F. La Porta, Fellow, IEEE
Abstract—Data generated in wireless sensor networks may not all be alike: some data may be more important than others and hence
may have different delivery requirements. In this paper, we address differentiated data delivery in the presence of congestion in
wireless sensor networks. We propose a class of algorithms that enforce differentiated routing based on the congested areas of a
network and data priority. The basic protocol, called Congestion-Aware Routing (CAR), discovers the congested zone of the network
that exists between high-priority data sources and the data sink and, using simple forwarding rules, dedicates this portion of the
network to forwarding primarily high-priority traffic. Since CAR requires some overhead for establishing the high-priority routing zone,
it is unsuitable for highly mobile data sources. To accommodate these, we define MAC-Enhanced CAR (MCAR), which includes
MAC-layer enhancements and a protocol for forming high-priority paths on the fly for each burst of data. MCAR effectively handles the
mobility of high-priority data sources, at the expense of degrading the performance of low-priority traffic. We present extensive
simulation results for CAR and MCAR, and an implementation of MCAR on a 48-node testbed.
Index Terms—Wireless sensor networks, routing, congestion, differentiated service.
1 INTRODUCTION
Sensor network deployments may include hundreds or
thousands of nodes. Since deploying such large-scale
networks has a high cost, it is increasingly likely that
sensors will be shared by multiple applications and gather
various types of data: temperature, the presence of lethal
chemical gases, audio and/or video feeds, etc. Therefore,
data generated in a sensor network may not all be equally
important.
With large deployment sizes, congestion becomes an
important problem. Congestion may lead to indiscriminate
dropping of data (i.e., high-priority (HP) packets may be
dropped while low-priority (LP) packets are delivered). It
also results in an increase in energy consumption to route
packets that will be dropped downstream as links become
saturated. As nodes along optimal routes are depleted
of energy, only nonoptimal routes remain, further
compounding the problem. To ensure that data with
higher priority is received in the presence of congestion
due to LP packets, differentiated service must be provided.
. R. Kumar, H. Rowaihy, G. Cao, and T.F. La Porta are with the Department
of Computer Science and Engineering, The Pennsylvania State University,
State College, PA 16802.
E-mail: {rajukuma, rowaihy, gcao, tlp}@cse.psu.edu.
. R. Crepaldi is with the Department of Computer Science, University of
Illinois, Urbana-Champaign, IL 61801-2302. E-mail: [email protected]
. A.F. Harris III is with the Center for Remote Sensing of Ice Sheets
(CReSIS), University of Kansas, Lawrence, KS 66045-7612.
E-mail: [email protected]
. M. Zorzi is with Department of Information Engineering, University of
Padova, 35131 Padova, Italy. E-mail: [email protected]
Manuscript received 6 Aug. 2007; revised 16 Nov. 2007; accepted 10 Jan.
2008; published online 28 Jan. 2008.
For information on obtaining reprints of this article, please send e-mail to:
[email protected], and reference IEEECS Log Number TMC-2007-08-0232.
Digital Object Identifier no. 10.1109/TMC.2008.20.
1536-1233/08/$25.00 © 2008 IEEE
In this work, we are interested in congestion that results
from excessive competition for the wireless medium.
Existing schemes detect congestion while considering all
data to be equally important. We characterize congestion
as the degradation of service to HP data due to competing
LP traffic. In this case, congestion detection is reduced to
identifying competition for medium access between HP
and LP traffic.
Congestion becomes worse when a particular area is
generating data at a high rate. This may occur in deployments
in which sensors in one area of interest are requested to gather
and transmit data at a higher rate than others (similar to
bursty convergecast [25]). In this case, routing dynamics can
lead to congestion on specific paths. These paths are usually
close to each other, which leads to an entire zone in the
network facing congestion. We refer to this zone, essentially
an extended hotspot, as the congestion zone (conzone).
In this paper, we examine data delivery issues in
the presence of congestion. We propose the use of data
prioritization and a differentiated routing protocol and/or a
prioritized medium access scheme to mitigate its effects on
HP traffic. We strive for a solution that accommodates
both LP and HP traffic when the network is static or near
static and enables fast recovery of LP traffic in networks
with mobile HP data sources. Our solution uses a
differentiated routing approach to effectively separate
HP traffic from LP traffic in the sensor network. HP traffic
has exclusive use of nodes along its shortest path to the
sink, whereas LP traffic is routed over uncongested nodes
in the network but may traverse longer paths.
Our contributions in this work are listed as follows:
. Design of Congestion-Aware Routing (CAR). CAR
is a network-layer solution to provide differentiated
service in congested sensor networks. CAR also
prevents severe degradation of service to LP data by
utilizing uncongested parts of the network.
. Design of MAC-Enhanced CAR (MCAR). MCAR
is primarily a MAC-layer mechanism used in
conjunction with routing to provide mobile and
lightweight conzones to address sensor networks
with mobile HP data sources and/or bursty
HP traffic. Compared to CAR, MCAR has a
smaller overhead but degrades the performance of
LP data more aggressively.
We compare CAR and MCAR to an AODV scheme
enhanced with priority queues (AODV+PQ). Both CAR and
MCAR lead to a significant increase in the successful packet
delivery ratio of HP data and a clear decrease in the average
delivery delay compared to AODV+PQ. CAR and MCAR
also provide low jitter. Moreover, they use energy more
uniformly in the deployment and reduce the energy
consumed in the nodes that lie on the conzone, which
leads to an increase in connectivity lifetime. In the presence
of sufficient congestion, CAR also allows an appreciable
amount of LP data to be delivered. We further show that, in
the presence of mobile HP data sources, MCAR provides
mobile conzones, which follow the HP traffic.
We also present the implementation of MCAR on our
sensor network testbed. The implementation shows the
feasibility of MAC-layer enhancements and differentiated
routing on current hardware. We demonstrate that using an
actual implementation, HP delivery rates similar to those
seen in simulation can be achieved in a practical system.
The rest of this paper is organized as follows: Section 2
presents related work. Details of CAR and MCAR are
presented in Section 3. Simulation details and results are
presented in Section 4. Section 5 discusses our testbed
implementation and results. Finally, Section 6 presents
conclusions and future directions.
2 RELATED WORK
An obvious solution to enhance service to HP data is to use
priority queues to provide differentiated services (see [4],
[15], and [25]). However, in such schemes, though
HP packets get precedence over LP packets within a node,
at the MAC layer, they still compete for a shared channel
with LP traffic sent by surrounding nodes. As a result,
without a routing scheme to address the impact of
congestion and hotspots in the network, local solutions like
priority queuing are not sufficient to provide adequate
priority service to important data.
QoS in sensor networks has been the focus of current
research (e.g., [4], [8], and [26]). SPEED [8] provides soft
real-time guarantees for end-to-end traffic using feedback
control and location awareness. It also concludes that local
adaptation at the MAC layer alone is insufficient to address
the problem of hotspots and that routing is essential to the
solution. Akkaya and Younis [4] propose an energy-aware
QoS routing protocol to support the delivery of real-time
data in the presence of interfering non-real-time data by
using multiple queues in each node in a cluster-based
network; they do not consider the impact of congestion in
the network and the interference that non-real-time traffic
can cause to real-time data. Zhang et al. [26] propose a
generic model for achieving multiple QoS objectives.
Degrading service to one type of data to provide better
service to another has been used in schemes like RAP [15]
and SWAN [3]. Similar to these works, we segregate data;
however, instead of real-time delivery demands, we use
data priority as the basis for our segregation.
Approaches like 802.11e [1] and other differentiated
MAC schemes that assign higher priority to important data
(e.g., VoIP for 802.11e) via MAC-layer mechanisms succeed
at providing better service to HP data by assigning them
preferential medium access. Funneling-MAC [2], proposed
by Ahn et al., addresses the issue of increased traffic
intensity in the proximity of a sink by using a schedule-based
and contention-based MAC hybrid. As with data
aggregation schemes like [16] and [21], it serves to delay the
occurrence of congestion. Back pressure and rate limiting
(also used in SPEED [8] and Fusion [10]) are essential to
avoid situations where the network capacity is less than the
amount of traffic being injected into the medium.
Rangwala et al. [20] propose Interference-Aware Fair Rate
Control (IFRC), which employs schemes to achieve fair and
efficient rate limiting. It uses a tree rooted at each sink to
route all data. When congestion occurs, the rates of the flows
on the interfering trees are throttled. However, these schemes do
not adopt differentiated routing. Also, in a large network
that is congested only in a constrained area, our approach
leverages the large uncongested parts of the network, which
are often underutilized, to deliver LP traffic.
RAP [15], SPEED [8], and MMSPEED [7] use velocity-monotonic
scheduling. Applications assign an expected speed
to each data packet, which is then ensured by these
schemes. The speed that the application should assign to
a packet if the network is congested is unclear. These
schemes spread traffic around hotspots, but they do not
give preference to HP data. In fact, if LP data has led to a
hotspot in an area, routes for HP data that later enter the
network will circumvent this hotspot. This will increase the
number of hops over which this data has to be routed and
increase the energy consumed in the network. In the worst
case, no path for HP data may be found, and these packets
will be dropped. Additionally, MMSPEED [7] achieves
reliability by duplicating packets and routing them over
different paths to the destination. Duplication of packets in
congested networks may further precipitate congestion.
Also, these schemes do not explicitly separate LP and
HP traffic generated in the same area.
Our schemes differ from these, because
we use differentiated routing to provide the best possible
service to HP data while trying to decrease the energy
consumption in the conzone.
Congestion in sensor networks has been addressed
in works like CODA [22], Fusion [10], and by Ee and
Bajcsy [6]. Though these schemes take important steps to
mitigate congestion in sensor networks, they treat all data
equally. These schemes are complementary to the capability
provided by CAR and MCAR. Similarly, our solutions do
not preclude the use of priority queues, which can be added
as a simple extension.
Existing work on congestion in sensor networks has
two aspects: detection and mitigation. As mentioned earlier,
we do not concern ourselves with congestion detection
schemes in this work.

Fig. 1. A critical area of a sensor network may generate HP data at a
high rate. This causes congestion in a part of the network, exacerbated
by the presence of LP data being routed in that area.

Most mitigation schemes differ in how
they invoke back pressure and rate limiting. Fusion’s [10]
mitigation scheme (other than back pressure and rate limiting) is to assign preferential medium access to parents in
the routing tree. This assumes that all data in a network is
destined to a single sink, which might not always be the case.
In contrast, in our scenario, LP data can be sent from any
node to any other node. As a result, Fusion’s preferential
MAC scheme is not applicable. Also, congestion in Fusion
occurs due to the accumulation of packets close to the sink.
In contrast, we address the degradation of performance of
HP data delivery due to an extended hotspot in the network
resulting from competition for medium access between LP
and HP data. Also, Fusion does not do data differentiation
based on priorities or provide differentiated routing.
3 CAR AND MCAR
In Section 3.1, we introduce the network scenario and
present an overview of our schemes, which are then
detailed in Sections 3.2 and 3.3.
3.1 Overview
An example of the problem scenario that we consider is
shown in Fig. 1. An important event occurs in one portion
of the sensor field, which we call the critical area. This
critical area will typically consist of multiple nodes. In such
a scenario, there is a data processing center for collecting
sensitive information from the critical area. Such data is
assigned a higher priority than other data. There might also
be several nodes collecting different types of LP information
from other parts of the network. In the presence of this
background LP traffic, without differentiating between the
two priority classes, congestion will degrade the service
provided to HP data. This may result in HP data being
dropped or delayed so long that it is of no use to the data
processing center. We refer to the area that contains the
shortest paths from the critical area to the sink as the
conzone. HP data would ideally traverse the conzone but
will face competition for medium access due to LP traffic.
Our basic solution, called CAR, operates solely in the
network layer. Packets are classified as HP or LP by the
data sources, and nodes within a conzone only forward
HP traffic. LP traffic is routed out of and/or around the
conzone. In effect, we segment the network into two parts
by using forwarding rules. One limitation with this system
is that it requires some overhead to discover the conzone.
While this overhead is reasonable, it may still be too
heavyweight if the data source is moving often and the
conzone is changing frequently or if the HP traffic is short
lived. Hence, CAR is designed for static or nearly static
networks with long-lived HP flows.
To address a mobile conzone (i.e., the conzone formed
when sources of HP traffic are mobile) and/or bursty
HP traffic, we define a MAC-layer-based protocol combined
with routing to form conzones on the fly with each burst of
data. This protocol handles mobility effectively but at the cost
of drastically degrading the delivery of LP traffic, because
there is no opportunity to establish alternate routes for such
data. We call this second protocol MCAR. The combination of
CAR and MCAR allows us to accommodate HP and LP traffic
as best as possible, given the type of HP data source and the
duration of HP traffic. For static sources, LP traffic finds
alternate routes and suffers minor degradation using CAR.
For networks with mobile nodes or bursty HP traffic,
LP traffic is effectively interrupted and dropped when in
contention with an HP source using MCAR.
3.2 Congestion-Aware Routing
CAR comprises three steps: HP network formation, conzone
discovery, and differentiated routing. The combination of
these functions segments the network into on-conzone and
off-conzone nodes. Only HP traffic is routed by on-conzone
nodes. Note that the protocol specifically accommodates
LP traffic, albeit with less efficient routes than HP traffic.
For the purposes of this discussion, we assume that
there is one HP sink and a contiguous part of the network
(critical area) that generates HP data in the presence of
networkwide background LP traffic. We also assume that
nodes are location aware (as in [8] and [14]) and densely
deployed with uniform distribution.
Since nodes in the scenario in Fig. 1 send all HP data to a
single sink, tree-based routing, with the HP sink being the
root, is most appropriate. However, Hull et al. [10] show
that tree-based routing schemes suffer from congestion,
especially if the number of messages generated at the leaves
is high. This problem becomes even worse when we have a
mixture of LP and HP traffic traveling through the network.
Therefore, even when the rate of HP data is relatively low,
the background noise created by LP traffic will create a
conzone that spans the network from the critical area to
the HP sink. Due to this congestion, service provided to
HP data may degrade, and nodes within this area may die
sooner than others, leading to only suboptimal paths being
available for HP data, or a network partition may result,
isolating the sink from the critical area.
If a standard ad hoc routing scheme (e.g., AODV [18] or
DSR [13]) is used to route the burst of HP data instead of the
tree-based routing scheme, congestion occurs. Fig. 2 shows
the conzone that is formed when AODV is used for routing
all data in a deployment of 120 nodes. There is one HP sink
and two LP sinks, as shown in the figure. Only critical area
nodes send HP data, while all other nodes in the network
send LP data to either of the LP sinks. We do not
show a similar figure for Directed Diffusion [12] (using
the One-Phase Pull Filter), because the control overhead of
the initial flooding required with such a large number of
LP and HP data sources was prohibitive and led to
no HP data being delivered.

Fig. 2. Presence of congestion with AODV routing in a network
subjected to an HP data rate of 30 packets/sec (pps) and background
LP traffic rate of 0.5 pps. Thin lines represent LP traffic, while thick lines
represent HP traffic.
We now present the algorithms used by CAR to build
HP routing networks, to perform dynamic conzone discovery, and to provide differentiated routing. This is followed by
the description of two enhancements of the basic CAR.
3.2.1 High-Priority Routing Network Formation
After the deployment of sensor nodes, the HP data
collection center (the sink) initiates the process of building
the HP routing network (HiNet). This network covers all
nodes, because at the time of deployment, the sink will
usually have no information on the whereabouts of the
critical area nodes. Also, based on the locations of events
that can occur during the lifetime of the network, different
nodes may constitute the critical area.
Since all HP data is destined to a single sink, the HiNet is
based on a minimum distance spanning tree rooted at the
sink. As with TAG [16], this structure ensures that all nodes
have shortest path routes to the sink. However, instead of
every node having a single parent, as in other tree-based
schemes, we allow nodes to have multiple parents. A node
that has multiple neighbors with depths (the number of
hops to the sink) less than its own considers them all as
parents (see Fig. 3). We leverage this property to support
multipath forwarding, thus providing load balancing and
making the routing network more resilient to failures.
We now consider the HiNet formation process. Once the
sink discovers its neighbors, it broadcasts a “Build HiNet”
message (containing the ID and depth of the node) asking
all nodes in the network to organize as a graph. Once a
neighboring node hears this message, it checks if it has
already joined the HiNet (i.e., if it knows its depth); if not,
it sets its depth to one plus the depth in the message
received and sets the source of the message as a parent. This
node then rebroadcasts the Build HiNet message, with its
own ID and depth. If a node is already a member of the graph,
it checks the depth in the message, and if that depth is one less
than its own, then the source of the message is added as a
parent. In this case, the message is not rebroadcast.
If a node receives a Build HiNet message with a depth
value less than that of its parent’s depth, it updates its own
value to the received value, plus one. It then removes all
current parents and adds the source of the message as a
new parent. Finally, the Build HiNet message is rebroadcast
with the new depth value. In this fashion, the Build HiNet
message is sent down the network until all nodes become
part of the graph. Similar to TAG [16], the Build HiNet
message can be periodically broadcast to maintain the
topology and adapt to changes caused by the failure or
addition of nodes.

Fig. 3. In a dense deployment, multiple nodes can be parents of a node.
Each parent lies on a different shortest path route to the sink. This
structure is used for shortest multipath routing.
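The parent-update rules above can be sketched as a small event-driven simulation. The Node class, message fields, and graph encoding below are illustrative assumptions, not the paper's implementation; only the three Build HiNet rules come from the text:

```python
from collections import deque

class Node:
    """Per-node HiNet state: depth (hops to the sink) and the parent set."""
    def __init__(self, nid):
        self.nid = nid
        self.depth = None      # unknown until the node joins the HiNet
        self.parents = set()

    def on_build_hinet(self, src_id, src_depth):
        """Handle an overheard 'Build HiNet' message; return True if this
        node should rebroadcast the message with its own ID and depth."""
        if self.depth is None:            # not yet in the HiNet: join it
            self.depth = src_depth + 1
            self.parents = {src_id}
            return True
        if src_depth == self.depth - 1:   # another shortest-path parent
            self.parents.add(src_id)
            return False                  # already a member: no rebroadcast
        if src_depth + 1 < self.depth:    # found a strictly shorter route
            self.depth = src_depth + 1
            self.parents = {src_id}       # drop old parents, keep new one
            return True
        return False

def build_hinet(neighbors, sink):
    """Flood 'Build HiNet' from the sink over an undirected neighbor graph."""
    nodes = {nid: Node(nid) for nid in neighbors}
    nodes[sink].depth = 0
    queue = deque([sink])
    while queue:
        src = queue.popleft()
        for nbr in neighbors[src]:
            if nodes[nbr].on_build_hinet(src, nodes[src].depth):
                queue.append(nbr)
    return nodes
```

On a diamond topology (sink S, two relays A and B, leaf C), C ends up at depth 2 with both A and B as parents, illustrating the multiparent structure of Fig. 3.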
3.2.2 Dynamic Conzone Discovery
Nodes discover if they are on the conzone by using the
conzone discovery mechanism. After building the HiNet,
the next task is to dynamically discover the conzone. The
conzone is formed when one area is generating HP data. We
refer to this area as the critical area. This conzone discovery
is done dynamically, because the critical area can change
during the lifetime of the deployment and is triggered when
an area starts generating HP data.
The conzone can be discovered and destroyed either
from the critical area nodes to the sink or vice versa. The
conzone discovery algorithms allow nodes, in a distributed
fashion, to determine if they are on a potentially congested
path between the critical area and the sink. If they are, they
mark themselves as “on conzone.” The conzone discovery
schemes are summarized in Fig. 4.
For brevity, we only present conzone discovery from the
critical area to the sink in detail. In this case, critical area
nodes detect an event that triggers discovery. A conzone
must be then discovered from that neighborhood to the sink
for the delivery of HP data. To do this, critical area nodes
broadcast “discover conzone to sink” (ToSink) messages.
This message includes the ID of the source and its depth
and is overheard by all neighbors. The depth is included
here to ensure that nodes do not respond to the ToSink
messages heard from their parents. When a node hears
more than θ_x distinct ToSink messages coming from its
children, it marks itself as on conzone and propagates a
single ToSink message. This message is overheard by
neighbors, who mark this neighbor as being on the conzone
in their neighborhood table. In our scheme, this threshold θ_x
is a linear function of the neighborhood size (i.e., the
number of nodes within the communication range) and of
the depth of the node in the HiNet, as shown in (1). For
node x with depth d_x and neighborhood size n_x, we have

θ_x = α_{d_x} · d_x · n_x.            (1)
Fig. 4. Conzone discovery algorithms in CAR for node x.
Since the depth and neighborhood size can vary for
different nodes, α_{d_x} is set accordingly. Setting α_{d_x} correctly for
different depths ensures that the conzone is of an appropriate
width. As α_{d_x} becomes smaller, the conzone becomes
wider. Depth must also be taken into account, because if α_{d_x}
is the same for different depths, the conzone will become
very narrow as it approaches the sink. Note that due to the
assumption of uniform deployments, the neighborhood size
is related to the number of children by a constant factor.
Hence, (1) can be adapted to use the number of children,
but we use the neighborhood size instead.
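The ToSink counting rule can be sketched as follows. The dictionary-based node state and the α value (0.05) are illustrative assumptions; the rules that only distinct children are counted, that a node joins the conzone when the count exceeds θ_x, and that a single ToSink is then propagated come from the text:

```python
def make_node(depth, nbr_count, alpha=0.05):
    """Minimal per-node state for conzone discovery (sketch)."""
    return {"depth": depth, "n": nbr_count, "alpha": alpha,
            "children_heard": set(), "on_conzone": False}

def theta(node):
    # Eq. (1): threshold linear in both depth and neighborhood size.
    return node["alpha"] * node["depth"] * node["n"]

def handle_tosink(node, src_id, src_depth):
    """Process an overheard ToSink(src_id, src_depth) message.
    Returns True exactly once: when this node marks itself on conzone
    and should propagate a single ToSink of its own."""
    if src_depth <= node["depth"]:
        return False                       # from a parent or sibling: ignore
    node["children_heard"].add(src_id)     # count *distinct* children only
    if not node["on_conzone"] and len(node["children_heard"]) > theta(node):
        node["on_conzone"] = True
        return True
    return False
```

For a node at depth 4 with 10 neighbors and α = 0.05, θ_x = 2, so the third distinct child's ToSink triggers the transition.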
An important goal of the conzone discovery algorithm is
to split the parents and siblings (nodes with the same
depth) in the HiNet into on-conzone and off-conzone
neighbors. Initially, all parents and siblings are marked as
off conzone. Since a node will forward a ToSink message
only if it becomes on conzone, when a node hears such a
broadcast from its parent(s) or sibling(s), it marks that
neighbor as on conzone.
Since the presence of a conzone leads to suboptimal
routing for LP data due to on-conzone nodes being
dedicated to serving HP data, after the HP stream comes
to an end, the conzone is destroyed by flooding a “destroy
conzone” message in the conzone.
3.2.3 Differentiated Routing
Once the conzone is discovered, HP data is routed in the
conzone, and LP data is routed off the conzone. Since the
critical area is part of the conzone, all HP data will be
generated inside the conzone. Hence, the routing of HP data
is simple: a node always forwards the data to one of its
on-conzone parents. This parent is chosen randomly from the
on-conzone parent list to balance the load among them. If,
for some reason, the links to all parents are broken, for
example, because of node failures, the node will forward
the data to a sibling that is on the conzone. If that is
impossible, it will forward the data to any of its neighbors,
hoping that it can return to an on-conzone node.
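The HP next-hop rule is a three-level fallback and can be stated compactly; the function name and argument types are illustrative:

```python
import random

def next_hop_hp(on_conzone_parents, on_conzone_siblings, neighbors, rng=random):
    """CAR next-hop rule for HP data inside the conzone: prefer a random
    on-conzone parent (load balancing across the shortest multipaths),
    fall back to an on-conzone sibling if all parent links are broken,
    and as a last resort pick any neighbor, hoping the packet re-enters
    the conzone downstream."""
    if on_conzone_parents:
        return rng.choice(list(on_conzone_parents))
    if on_conzone_siblings:
        return rng.choice(list(on_conzone_siblings))
    return rng.choice(list(neighbors))
```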
LP data generated inside the conzone is routed out using
the following approach. When an on-conzone node gets an
LP message, it forwards it to an off-conzone parent, if there
are any. Otherwise, the LP data is forwarded to an off-conzone
sibling. If there are no parents or siblings that are
off conzone, we resort to the following method. After
discovering the conzone, the sink sends a message through
the conzone, which contains the coordinates of a line that
cuts the conzone in half. This line connects the sink to the
center of the critical area. Using this information and its
own coordinates, a node can determine on which half of the
conzone it lies and hence routes LP data to the parent that is
closest to the conzone boundary, i.e., farthest from the line.
With the assumption of uniform deployment density, this
ensures that all LP data generated inside the conzone is
routed out efficiently and along the shortest path.
The routing scheme described above is highly efficient
for LP traffic flowing in the same direction as the HP traffic.
Though it is not optimal for LP traffic flowing in different
directions, it will still correctly deliver the data while
keeping the routing out cost low.
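The routing-out decision above reduces to a side-of-line test plus a farthest-from-line selection. The geometry helpers below use the standard signed cross product; the function signature and the tie-breaking toward parents on the node's own half are illustrative assumptions:

```python
import math

def side_of_line(p, a, b):
    """Signed cross product: which side of the line a->b point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def dist_to_line(p, a, b):
    """Perpendicular distance of p from the infinite line through a and b."""
    return abs(side_of_line(p, a, b)) / math.hypot(b[0] - a[0], b[1] - a[1])

def next_hop_lp(node_xy, off_parents, off_siblings, on_parents, sink_xy, crit_xy):
    """LP routing inside the conzone: an off-conzone parent if any, else an
    off-conzone sibling; failing both, the on-conzone parent on this node's
    half of the conzone that is farthest from the sink-to-critical-area line,
    i.e., closest to the conzone boundary. on_parents maps id -> (x, y)."""
    if off_parents:
        return off_parents[0]
    if off_siblings:
        return off_siblings[0]
    my_side = side_of_line(node_xy, sink_xy, crit_xy)
    same_side = [pid for pid, xy in on_parents.items()
                 if side_of_line(xy, sink_xy, crit_xy) * my_side >= 0]
    candidates = same_side or list(on_parents)
    return max(candidates,
               key=lambda pid: dist_to_line(on_parents[pid], sink_xy, crit_xy))
```

With the sink at the origin and the critical area along the positive x-axis, a node above the axis routes LP data to its topmost on-conzone parent, pushing the packet toward the nearer boundary.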
It is important to note here that to keep the routing
overhead low, LP routing decisions inside the conzone are
static. So, once a node decides to which neighbor it is
going to forward LP data, it uses the same neighbor for all
LP packets. If that neighbor fails, an alternative must be
found using the same scheme. In-conzone routing for both
LP and HP data is summarized in Fig. 5.
LP data generated outside the conzone or routed out of
the conzone has to be routed to the appropriate LP sink
without using the conzone nodes. Hence, routing LP data
outside the conzone can use any of the known routing
schemes such as AODV, with modifications to prevent
LP data from being routed from an off-conzone node into
the conzone. We used AODV in the off-conzone nodes to
route LP data, with the modification that the on-conzone
nodes do not propagate route request or reply messages for
LP data. Using this modified routing scheme, LP data
generated outside or routed out of the conzone is routed to
its destination via off-conzone nodes only.
3.2.4 Enhancements
In CAR, LP data generated inside the conzone requires the
conzone nodes to dedicate some of their resources to route
such data out of the conzone. As an enhancement to better
serve HP data, on-conzone nodes stop generating or
forwarding any LP data. We call this enhancement CAR+.
Due to the shared nature of the wireless channel,
HP messages can be dropped by the critical area nodes
themselves due to collisions with other LP data from
neighboring nodes. This is especially true if the amount
of LP traffic surrounding the critical area is large. As the
second improvement, we disable the generation and forwarding
of LP data in all nodes that are within the communication
range of any critical area node. Since nodes know their
neighbors and their status, once a node discovers that one of
its neighbors is in the critical area, it disables the generation
and forwarding of any LP data. We call this enhancement CAR++.

Fig. 5. Routing algorithm for CAR for LP and HP data inside the conzone.
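The CAR++ suppression rule is a one-line predicate over the neighbor table; the boolean-map encoding of the table is an illustrative assumption:

```python
def suppress_lp(in_critical_area, neighbor_in_critical):
    """CAR++ rule: disable generating and forwarding LP data if this node
    is itself in the critical area, or if any known neighbor is flagged as
    a critical-area node (i.e., the node is within one hop of it).
    neighbor_in_critical maps neighbor id -> critical-area flag."""
    return in_critical_area or any(neighbor_in_critical.values())
```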
3.3 MAC-Enhanced Congestion-Aware Routing
In this section, we present MCAR, a combined MAC and
routing scheme designed to support situations in which
critical events may move or the sensors generating HP data
may move. Though conzone discovery is dynamic in CAR,
the overhead required to maintain the HiNet in a dynamic
environment may be prohibitive. As a result, we use a
lightweight dynamic differentiated routing mechanism to
accommodate mobile data sources. MCAR is based on
MAC-layer enhancements that enable the formation of a
conzone on the fly with each burst of data. The trade-off is
that it effectively preempts the flow of LP data, thereby
seriously degrading its service.
Unlike CAR, MCAR does not form an HP network.
Instead, HP paths are dynamically created, since the sources
(or the sinks) are expected to be mobile. Thus, MCAR
discovers the conzone while discovering the paths from
HP sources to the sink.
The enhanced MAC-layer of MCAR uses an RTS/CTS
protocol that is augmented to carry information about the
priority level of the data being transferred. Each RTS and
CTS packet is tagged with a priority level. During channel
contention, if a node has HP data to send and overhears an
LP RTS, it jams the channel with an HP CTS, causing nodes
forwarding LP data to back off. Furthermore, if a node with
LP data overhears an HP RTS or CTS, it will back off the
channel, as described in the following section.
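The prioritized contention rules can be summarized as a small decision function. The string encoding of pending traffic and overheard frames is an illustrative assumption; the two priority rules (HP CTS jamming of LP RTS, and LP backoff on any overheard HP RTS/CTS) come from the text:

```python
def mac_reaction(my_pending, overheard_type, overheard_prio):
    """MCAR's prioritized RTS/CTS contention rules (sketch):
    - a node holding HP data that overhears an LP RTS answers with an
      HP CTS, jamming the LP handshake so the LP sender backs off;
    - a node holding LP data that overhears any HP RTS or CTS backs off;
    - otherwise, contention proceeds as in plain RTS/CTS.
    Returns the action this node takes."""
    if my_pending == "HP" and overheard_type == "RTS" and overheard_prio == "LP":
        return "send HP CTS"       # silence the competing LP transfer
    if my_pending == "LP" and overheard_prio == "HP":
        return "back off"          # defer to the HP flow
    return "normal contention"
```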
Though 802.11e [1] is similar to MCAR in that both
prioritize access to the medium, its prioritized RTS/CTS
messages may be dropped in highly congested networks.
802.11e’s policy of guarding every transmission with an
RTS/CTS exchange leads to a prohibitive overhead. Woo and
Culler [23] state that RTS/CTS exchange imposes an overhead of up to 40 percent. The extent of overhead experienced
depends on the relative size of the RTS/CTS packets and
the data packets. In sensor networks, data packet sizes are
not large enough to justify the cost of RTS/CTS exchange to
guard every packet. Hence, 802.11e is unsuitable for
sensor networks. MCAR uses a silencing mechanism that
does not require preempting all LP data transmissions in
the neighborhood for each HP data to be sent. Rather,
MCAR silences the conzone and its neighborhood during
route discovery and/or maintenance.
Though the cost of an RTS/CTS exchange for each
data packet may be considerable for a sensor network, even
S-MAC [24], a widely used MAC scheme for sensor networks,
uses one RTS/CTS exchange for a collection of message
fragments. Similarly, the cost of RTS/CTS imposed by
MCAR is not prohibitive, since it uses these RTS/CTS
packets only during the route discovery/maintenance
phase. Hence, the scalability of the RTS/CTS overhead
for MCAR is not an issue.
In MCAR, nodes discover if they are on the conzone by
using the conzone discovery mechanism explained below. As in
CAR, this conzone discovery is triggered when an area starts
generating HP data. For the conzone to be discovered
dynamically, MCAR uses two timers to regulate when a
node decides it is no longer part of the HP path. One timer,
called the overhearing timer, monitors how long it has been
since the last HP packet was heard. This timer is used to
control nodes in the communication range of the conzone
but that are not necessarily involved in forwarding the
packets. The overhearing timer is reset any time an
HP packet is overheard or any time an HP packet is received
(since nodes involved in forwarding packets are clearly
within the communication range of nodes transmitting
those packets). The second timer, called the received timer,
controls nodes either generating or forwarding HP data.
In MCAR, each node in the network can be in one of
three states, indicating whether it is part of the conzone,
outside it, or within the communication range of the conzone
without being part of it (see Fig. 6). This last state creates a
shadow area that separates HP traffic from LP traffic.
3.3.1 State Machine
The node state machine used by MCAR to support
differentiated routing based on MAC-layer enhancements
is shown in Fig. 6.
LP mode. In this mode, nodes forward LP data. All
nodes in the network are initially in the LP mode. Upon
receiving or overhearing an LP packet, nodes remain in the
LP mode and, if appropriate, forward any data. If a node in
the LP mode overhears an HP packet, it transitions to the
shadow mode. Finally, upon receiving an HP event that
needs to be forwarded (either because it sensed an HP event
or because it was chosen as the next hop toward the sink),
a node transitions to the HP mode.
HP mode. Nodes in the path of HP data are in the
HP mode. Upon transitioning to this state, the node sets
two timers: a received timer and an overhearing timer.
The values for these timers should be on the order of twice
the expected interarrival delay of HP data.
If a node in this mode receives an HP transmission, it
begins channel contention by using our modified RTS/CTS
protocol and forwards the data. It resets its received and
overhearing timers and remains in the HP mode. Upon
overhearing HP data, the node resets its overhearing timer
only and stays in the HP mode.
If a node in the HP mode overhears or receives an
LP RTS, it sends a jamming HP CTS to clear the channel of
LP data and to announce the existence of an HP path, and
it stays in the HP mode.
Fig. 6. MCAR state machine.
If the received timer expires, the node transitions to the
shadow mode, maintaining the value of its overhearing
timer. While this is the normal exit out of the HP mode,
if both the received timer and overhearing timer expire at
the same time, the node transitions back to the LP mode.
Shadow mode. Nodes in this state are within the
communication range of HP traffic but not on a forwarding
path. Nodes in this state suppress LP traffic, thus preventing it from interfering with HP traffic in the network. Upon
overhearing an HP packet, the node resets its overhearing
timer and stays in this state. A node transitions to the
HP mode upon receiving an HP packet itself.
If a node in the shadow mode overhears an LP packet, it
stays in the shadow mode and takes no action. If the node is
the intended recipient of the LP data, it silently discards the
packet and stays in the shadow mode. It should be pointed
out that this is an aggressive action to maximize the service
given to HP data. Finally, if the overhearing timer expires,
the node transitions to the LP mode.
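The three-state behavior described above can be sketched as a small event-driven state machine. This is an illustrative reconstruction, not the authors' implementation: the virtual clock passed as `now` and the `timer_len` parameter (standing in for "about twice the expected HP interarrival delay") are assumptions made to keep the sketch self-contained, and the jamming HP CTS is noted but not modeled.

```python
from enum import Enum, auto

class Mode(Enum):
    LP = auto()      # forwards low-priority data
    SHADOW = auto()  # in range of HP traffic; suppresses LP data
    HP = auto()      # on a high-priority forwarding path

class McarNode:
    """Sketch of the MCAR per-node state machine (Fig. 6)."""

    def __init__(self, timer_len: float):
        self.mode = Mode.LP
        self.timer_len = timer_len
        self.received_expiry = None     # controls HP sources/forwarders
        self.overhearing_expiry = None  # controls nodes near the conzone

    def receive_hp(self, now: float):
        # Receiving HP data (sensed or to be forwarded) puts the node
        # on the HP path and (re)arms both timers.
        self.mode = Mode.HP
        self.received_expiry = now + self.timer_len
        self.overhearing_expiry = now + self.timer_len

    def overhear_hp(self, now: float):
        # Overhearing HP data refreshes only the overhearing timer;
        # an LP-mode node moves into the shadow area.
        self.overhearing_expiry = now + self.timer_len
        if self.mode is Mode.LP:
            self.mode = Mode.SHADOW

    def receive_lp(self, now: float) -> bool:
        """Return True if the LP packet may be forwarded."""
        if self.mode is Mode.HP:
            # An HP node answers an LP RTS with a jamming HP CTS
            # (not modeled here) and never forwards LP data.
            return False
        if self.mode is Mode.SHADOW:
            return False  # silently discarded
        return True       # LP mode: forward as usual

    def tick(self, now: float):
        # Timer-driven exits: HP -> SHADOW when the received timer
        # fires (HP -> LP if both fire together), SHADOW -> LP when
        # the overhearing timer fires.
        if self.mode is Mode.HP and now >= self.received_expiry:
            self.received_expiry = None
            if now >= self.overhearing_expiry:
                self.mode = Mode.LP
                self.overhearing_expiry = None
            else:
                self.mode = Mode.SHADOW
        if (self.mode is Mode.SHADOW
                and self.overhearing_expiry is not None
                and now >= self.overhearing_expiry):
            self.mode = Mode.LP
            self.overhearing_expiry = None
```

For example, a node that overhears HP traffic enters the shadow mode and drops LP data; once its received timer lapses after forwarding HP data, it falls back to the shadow mode and eventually to the LP mode.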
3.3.2 Routing
Route discovery is performed dynamically at the time of HP
event detection. Essentially, MCAR performs on-demand
route discovery similar to schemes like AODV. The route
discovery and reply packets are marked according to the
priority of impending data, causing nodes along the route
for HP data to transition to the HP mode. Once the route
is built, HP data flows along this path. In the event of a
route break due to node failure or mobility, route recovery
is performed, again using HP control packets. Nodes
on segments of the old route will transition back to the
LP mode as their timers expire, and LP flows that were not
forwarded can now be transmitted.
Only nodes in the LP mode forward LP data, including
any LP route requests. The routing of this data can be
performed using any routing mechanism and is orthogonal
to the routing mechanisms used by MCAR. Nodes in the
HP or the shadow mode drop LP data. Hence, there is no
need to route LP data out of the HP zone in MCAR. As a
result, MCAR is more aggressive in dropping LP data and
eliminates all competition for the shared channel among
the LP and HP packets. This is one of the trade-offs between
the two schemes.
Although both schemes support HP data delivery, CAR
is able to route LP traffic out of the conzone, while MCAR
cannot. CAR requires the formation of the HiNet, which
incurs higher overhead than the dynamic path establishment
of MCAR. CAR is more permissive of LP traffic than
MCAR: it allows nodes that would be in the shadow mode
in MCAR to forward LP data. MCAR, on the other hand,
performs more similarly to CAR++ in this respect, limiting
the use of nodes in the conzone to only HP data. Section 4
quantifies these trade-offs through simulation studies.
3.4 Congestion Aware Routing and Mobility
In many applications, sensor networks are static or are
characterized by low mobility. While MCAR is designed for
networks with high mobility, CAR is suitable for very low
mobility applications. Though MCAR can be applied to
scenarios with low mobility, it aggressively degrades service
to LP traffic. CAR can be adapted to avoid such degradation
of service to LP data. Each component of CAR—HP routing
network formation, conzone discovery, and differentiated
routing—can be modified to handle low mobility. Every time
a node moves, it needs to discover its new parents and
children in the HP network. This can be achieved by a
periodic beacon message flood that organizes the network as
a tree. As long as a conzone node stays in the conzone or an
off-conzone node stays off the conzone, no change is required.
If a conzone node moves to an off-conzone area, we can use a
time-out mechanism similar to the one in MCAR to switch it
to the off-conzone mode. If an off-conzone node moves to the
conzone, it can switch itself to the conzone mode if a sufficient number
of its new children transmit HP packets. Though we do not
verify the efficacy of CAR under such low mobility cases, our
results for the static scenarios give us confidence in these
expectations, as described in the following.
In this section, we describe our simulation setups used to
test CAR and MCAR and discuss the results in detail.
Table 1 provides a brief summary of our proposed schemes.
Since our implementation testbed consists of only
48 nodes, we use larger setups in simulations to gather
insights about CAR-based schemes and MCAR. Hence, we
present extensive simulation results in this section, which
are complemented by implementation results in Section 5
to complete the picture.
4.1 Simulation Setup
The simulations were conducted in NS-2 [17], with a
deployment area of 560 m × 280 m. In this area, 120 nodes
are placed in a 15 × 8 grid, as shown in Fig. 7, with the
separation between neighboring nodes along both axes
being 40 m. Note that we use grids as deployments in this
paper to emulate uniformly dense deployments and such
grids are not a requirement of our algorithms. As long as
the neighborhood relationships are similar, the results will
not differ significantly from those presented in this paper.
TABLE 1. Summary of Schemes
Two LP sinks receive all LP data, while a single sink
receives all HP data. Three nodes form the critical area
and send HP data. The rest of the nodes, other than the
three sinks and the three critical area nodes, send LP data to
either LP sink (see Fig. 7). This LP data serves as the
background traffic in our simulations. Note that the
HP sources in our simulations were placed at the edge of
the deployment to get a sufficient number of hops from
them to the HP sink. In a large deployment of hundreds of
nodes, these HP sources need not be at the edge of the
deployment. Results were recorded when the system
reached a steady state. CAR uses AODV to route LP data
outside the conzone, with a modification to ensure that
off-conzone nodes do not route such data into the conzone.
IEEE 802.11, operating at 11 Mbps, is used as the MAC
layer. 802.11 is a CSMA/CA MAC layer that uses RTS/
CTS to avoid the hidden terminal problem. Sensor network
MAC schemes like S-MAC [24] and B-MAC [19] employ
CSMA/CA and cut down the overhead of RTS/CTS.
However, in dense networks that are under high congestion, these schemes will need to use RTS/CTS for each
data packet to avoid the hidden terminal problem. Hence,
802.11 is a reasonable approximation of S-MAC [24]
under congested conditions. Actually, the results in [24]
show that a node that forwards sufficient traffic uses less
energy with 802.11 than with S-MAC. Since nodes in the
congested part of the network will often forward a
significant amount of traffic, 802.11 is more energy efficient
than S-MAC for congested networks.
Fig. 7. Simulation scenario.
We compare CAR (and its improvements) and MCAR to
AODV and to an enhanced version of AODV that we
implemented, called AODV+PQ. AODV+PQ maintains
two queues at each node. The first is an HP queue;
messages in this queue are transmitted whenever present.
The second is an LP queue; messages from this queue are
transmitted only when the HP queue is empty. This policy
gives HP data absolute privilege within a node.
AODV+PQ is a simple generalization of priority-queue-based
schemes such as the ones used in [4], [15], and [25].
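A minimal sketch of the AODV+PQ queueing policy, assuming FIFO service within each priority class (the packet values are placeholders):

```python
from collections import deque

class PriorityQueues:
    """Sketch of AODV+PQ's per-node policy: the LP queue is served
    only while the HP queue is empty, giving HP data absolute
    privilege within the node."""

    def __init__(self):
        self.hp = deque()  # high-priority packets
        self.lp = deque()  # low-priority packets

    def enqueue(self, packet, high_priority: bool):
        (self.hp if high_priority else self.lp).append(packet)

    def next_packet(self):
        """Packet to transmit next, or None if both queues are empty."""
        if self.hp:
            return self.hp.popleft()
        if self.lp:
            return self.lp.popleft()
        return None
```

Note that this decision is purely local: as discussed below, a node whose HP queue is empty still injects LP traffic that can interfere with HP transmissions at neighboring nodes.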
We also generated results for DSR [13] and Directed
Diffusion [12] but do not present them here. In our
environment of large multihop networks, DSR fails to route
any HP data successfully. DSR is intended to work over
networks with a small number of hops, as reported in [5].
Similarly, Directed Diffusion was unable to route any
HP data successfully due to the large control overhead
involved in the initial flooding that is required to set up the
data paths. One-Phase Pull Filter was used in the simulations, and though it is expected to route LP packets
successfully, our simulations showed that as the number
of senders in the deployment was increased beyond 10,
Directed Diffusion failed to route any data. As with DSR,
Directed Diffusion is not intended for such applications. It
was mainly designed to work in cases where the number of
sinks and senders is small.
We do not compare our work to solutions that propose
rate limiting, back pressure, or throttling of senders as
mitigation schemes (Fusion [10] and CODA [22]), because
they treat all data equally and do not utilize the
uncongested parts of the network to deliver LP data. Also,
Fusion’s prioritized medium access scheme is designed for
situations in which all data in the network is routed to the
same sink. This might not always be true, e.g., three sinks
are used in an experiment in CODA [22]. We also do not
compare our schemes with SPEED [8], because its Stateless
Nondeterministic Geographic Forwarding scheme is agnostic of
data priority. If LP data leads to an extended hotspot in the
network, HP flows that later enter the network will be
forced to circumvent the hotspot. In contrast, we provide
shortest paths for HP flows and force LP data to
circumnavigate the conzone. In the worst case, SPEED will
drop HP packets due to the unavailability of a suitable
downstream node if all neighbors are sufficiently congested
with LP traffic. These are the scenarios in which we try to
prevent degradation of service to HP data.
A routing scheme has different aspects: route formation
overhead, the quality of routes, route maintenance overhead, etc. For this work, only the quality of routes is vital.
The rest of the aspects of a routing scheme are not central to
the problem of performance in congested networks. Since
AODV uses shortest paths, we adopt the same choice. Also,
AODV is a widely used routing scheme, and several other
routing schemes are based on it. Hence, like SPEED, we
compare the performance of our schemes to AODV.
In our simulations, CAR builds the conzone from the
critical area to the sink. Nodes are added to the conzone if
they receive at least θ = 2 ToSink messages. For example,
with a transmission range of 130 m, the neighborhood size of
a node away from the edge of the deployment is equal to 36.
For a node at depth 3, α3 is set to 0.018; for nodes with
depth 2, α2 is set to 0.027; and so on. MCAR uses the
algorithms described in Section 3.3 to build its conzones.
Fig. 8. Routing views for all data and packet drop analysis for HP packets for AODV and CAR for range = 130 m, LP data rate = 0.5 pps, and HP data rate = 30 pps. (a) Routing view for AODV. (b) Routing view for CAR. (c) HP packet drop analysis.
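The neighborhood size quoted above can be checked directly: on the 40 m grid, counting the grid points within 130 m of an interior node gives 36 neighbors. The helper below is an illustrative sketch, not part of the simulation code:

```python
# Count grid neighbors of an interior node: all other grid points
# (spacing apart along both axes) within the transmission range.

def interior_neighborhood(spacing: float, tx_range: float) -> int:
    reach = int(tx_range // spacing)  # farthest row/column offset to test
    count = 0
    for dx in range(-reach, reach + 1):
        for dy in range(-reach, reach + 1):
            if (dx, dy) == (0, 0):
                continue  # a node is not its own neighbor
            if (dx * spacing) ** 2 + (dy * spacing) ** 2 <= tx_range ** 2:
                count += 1
    return count

print(interior_neighborhood(40.0, 130.0))  # prints 36, as stated above
```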
We now provide a high-level comparison of CAR
with AODV. Figs. 8a and 8b depict the routing of LP and
HP packets by AODV and CAR. CAR+, CAR++, and
MCAR look similar to CAR, as they all route only HP traffic
in the conzone. Lightly shaded edges denote LP data, while
heavily shaded edges denote HP data. The thickness of
edges is directly proportional to the number of packets
routed over them. A circle around a node denotes that the
node has dropped HP packets. The radius of the circle is
directly proportional to the number of HP packets dropped.
Also, the larger nodes in Fig. 8b denote the nodes that
belong to the discovered conzone.
In Fig. 8a, we observe that AODV routes both LP and
HP data along the same paths and these paths may not be
the shortest possible. As a result, many HP packets are
dropped at the critical area nodes themselves. In CAR, by
contrast, a conzone is formed (this can be observed in
Fig. 8b). The set of shortest paths from the critical area to the
sink routes all HP data, while the rest of the network routes
LP data. LP data generated inside the conzone for CAR is
effectively routed out using the minimum number of hops
inside the conzone. It can be seen in Fig. 8b that CAR
effectively performs differentiated and multipath routing,
successfully steering both routed-out and off-conzone-generated
LP data around the conzone.
In the examples in Figs. 8a and 8b, AODV routes only
5.7 percent of HP data successfully, while CAR delivers
96.2 percent of such data. The most prominent reason for
AODV dropping packets, based on our analysis shown in
Fig. 8c, is that the MAC layer fails to transmit a packet after
several retransmission attempts (MAC Callback). This is due
to congestion, which makes it difficult for a node to capture
the channel to transmit data. Also, while AODV delivers
78 percent of LP data, CAR delivers 89 percent of such data.
Note that in all simulations, the queue size at each node
was set to 1,000 packets, while the size of the packets was
50 bytes. This requires around 50 Kbytes of memory,
which is easily available in different motes (e.g., EyesIFXv2
[11] motes have a 512-Kbyte serial EPROM, and MicaZ
motes have a 512-Kbyte measurement flash). Note that
RAP [15] uses a queue size of 300 packets. Since the queues
in nodes are almost always nonempty during congestion,
the rate of “MAC Callback” errors will remain the same.
AODV benefits more than CAR-based schemes and MCAR
with a large queue size, because it drops more HP packets
due to buffer overflow (see Fig. 8c). With a small queue,
AODV will drop more data due to buffer overflow, while
CAR-based schemes and MCAR will drop much less data.
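The memory requirement follows from simple arithmetic (the constants are the queue and packet sizes given above):

```python
# Back-of-the-envelope check of the per-node queue memory budget.
QUEUE_PACKETS = 1_000  # queue size used in all simulations
PACKET_BYTES = 50      # packet size used in all simulations

buffer_bytes = QUEUE_PACKETS * PACKET_BYTES
print(buffer_bytes)           # prints 50000
print(buffer_bytes // 1024)   # prints 48 (Kbytes), well within the
                              # 512-Kbyte external memories of the
                              # EyesIFXv2 and MicaZ motes
```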
4.2 Simulation Results
We analyze two aspects of the CAR-based schemes:
feasibility and performance. For feasibility, we have analyzed the delays to form the routing network (HiNet for
CAR, CAR+, and CAR++ and dynamic route creation in
MCAR), discover the conzone, and destroy it. The delays for
HiNet formation, conzone discovery, and destruction tend to
decrease as the number of hops from the critical area to
the sink decreases. HiNet formation delay stays under
11 seconds at the maximum and decreases as the transmission range increases. The conzone discovery and destruction
delays were found to be less than 1 second. Since CAR is
meant for static networks with comparatively long-lived
HP streams, these delays are small compared to the duration
of the HP data flood. In low-mobility scenarios, the delay
required for some nodes to refresh their parents and children
will be less than the HiNet formation delay, and the delay for
nodes that move from off-conzone areas to the conzone to
switch to the conzone mode will be small as well.
In our simulations for CAR, this flood duration is set to
50 seconds. CAR is hence a feasible solution that can
quickly adjust to different events that require the rediscovery
of the conzone. MCAR incurs no HiNet formation delay
but incurs a delay in finding a route at the start of
HP transmissions. These delays depend on the number of
hops but, in our tests, were consistently less than 1 second.
In the following, we present results comparing the
performance of our schemes for varying ranges, varying
LP data rates, and varying HP data rates.
4.2.1 Varying Transmission Range
In this group of simulations, the transmission ranges were
varied between 90, 130, 170, and 210 m. As the transmission
range increases, the number of hops from the edge of the
network to the sink decreases from 6 to 3. The LP data rate
of each node, other than the critical area nodes and the
sinks, was set to 0.5 pps, while the HP data rate of critical
area nodes was set to 30 pps. These simulations show the
gains of CAR schemes and MCAR as the node density of a
deployment increases.
We first make some general observations on the behavioral differences between CAR and AODV+PQ. Priority queues
provide better service to HP data compared to AODV.
However, because each node makes the best decision locally,
such a scheme may not be able to provide better service
globally. Consider the case in which a node has an empty
Fig. 9. Varying transmission range, LP data rate = 0.5 pps, and HP data rate = 30 pps. (a) HP data delivery fraction. (b) LP data delivery fraction.
(c) HP data delivery delay.
HP queue but a nonempty LP queue. This node will start
injecting LP traffic into the network, which, due to the shared
medium, may degrade the service provided to HP packets in
nearby nodes. CAR and its enhancements, on the other hand,
separate the traffic into two regions and hence eliminate
most of the interference that can be caused by having both
LP and HP traffic routed on the same paths.
Fig. 9a plots the fraction of HP data delivered to the sink.
As the transmission range increases, the network becomes
more congested, and more collisions occur. As a result, the
performance of AODV degrades severely, and it routes less
than 10 percent of HP data successfully. On the other hand,
AODV+PQ and CAR-based schemes route a higher fraction
of the data, although CAR-based schemes route more
HP data than AODV+PQ for all ranges. At ranges larger
than or equal to 130 m, CAR-based schemes route more
than 90 percent of the data. We note that CAR++ routes
more data than CAR+, which, in turn, routes more data
than CAR. Finally, MCAR routes nearly all of the HP data,
as it uses MAC-layer mechanisms to silence the conzone
and its neighborhood in terms of LP traffic.
Fig. 9b shows the fraction of LP data routed successfully.
Although our focus is to provide better service to HP data
in the presence of congestion, CAR also effectively utilizes
the uncongested off-conzone nodes to prevent severe
degradation of LP data. Hence, in addition to improving
HP delivery, CAR also enhances delivery of LP traffic as the
range increases. The AODV delivery ratio decreases sharply
as the range increases, while AODVþPQ routes the highest
percentage of LP data. Note that since AODVþPQ routes
less HP data (see Fig. 9a) and more LP data than CAR-based
schemes, it is clear that priority-queue-based schemes alone
are not sufficient to provide better service to critical data.
CAR routes more LP data than AODV as the range
increases, since it prevents LP data from entering the
conzone and getting dropped. AODVþPQ routes more
LP data than CAR, because it does not as aggressively
degrade service to LP data as CAR. At large ranges (i.e., in
networks with few hops from the sink to the critical area)
AODVþPQ routes more LP data and approximately the
same amount of HP data as CAR. This is because in CAR,
congestion may occur in off-conzone areas, as LP data from
the conzone is routed out into such areas. Note that CAR+
and CAR++ deliver less LP data compared to CAR, because
they, by design, drop more such data.
MCAR drops virtually all LP data. This is due to the
close proximity of the LP sinks to the HP source.
MCAR silences all nodes in the communication range of
the HP stream; therefore, the LP sink is also silenced. Later
in this section, we present the performance of MCAR in a
wider network, where only LP traffic within the communication
range of the HP flows is dropped.
Fig. 9c shows that as the range increases, the average
HP data delivery delay for AODV increases, while the
delay for AODV+PQ and CAR-based schemes decreases.
This is due to the increasing congestion that AODV faces.
Furthermore, the jitter introduced by data forwarding is
always less for CAR-based schemes as compared to AODV
and AODV+PQ (not shown).
In our simulations, we observed that despite shorter
paths being available, AODV and AODV+PQ do not
necessarily route data along such paths. MCAR also suffers
from this problem. The CAR, CAR+, and CAR++ schemes
always find a shortest path. Routing along shortest paths has
several implications, including less overall network energy
usage. Since MCAR uses AODV’s route discovery algorithm and does not use the HiNet, it also sometimes finds
suboptimal paths to the sink.
Fig. 10 shows the maximum energy used by any node in
the deployment. This includes the energy used to route all
possible traffic, both LP and HP. The energy used by AODV
and AODV+PQ is more than that for CAR. CAR+ uses less
energy, and CAR++ uses the minimum energy among all
the schemes. MCAR falls in between: it saves energy over
AODV by reducing the amount of LP contention during
HP traffic times; however, MCAR routes may contain extra
hops as compared to CAR, because it does not benefit from
the HiNet creation phase.
Fig. 10. Maximum node energy used: varying transmission range,
LP data rate = 0.5 pps, and HP data rate = 30 pps.
Fig. 11. Varying LP data rate, transmission range = 130 m, and HP data rate = 30 pps. (a) HP data delivery fraction. (b) LP data delivery fraction.
(c) HP data delivery delay.
4.2.2 Varying Low Priority Data Rate
In this set of simulations, the range is set to 130 m, and the
HP data rate of each critical area node is set to 30 pps, while
the LP data rate is varied. These simulations compare the
performance of CAR schemes and MCAR with AODV and
AODV+PQ as the network has to contend with increasingly
intense background LP traffic.
As the LP data rate increases, the fraction of HP packets
routed by AODV sharply falls to zero (see Fig. 11a).
Although AODV+PQ performs better than AODV, it still
faces the same fate. In contrast, though the fraction of data
routed successfully to the sink by the CAR-based schemes
decreases, these schemes still route more than 60 percent of
the data, even when AODV and AODV+PQ do not route
any data at all. MCAR is less sensitive to the LP data rate,
since it silences all nodes within the communication range
of the HP flows. Note that the curve for CAR++ overlaps
the curve for MCAR in Figs. 11a and 11c.
Also, the fraction of LP data successfully routed by
AODV drops (see Fig. 11b). For CAR, it decreases from
around 90 percent to 85 percent. For CAR+ and CAR++,
this fraction stays almost constant. MCAR still drops nearly
all of the LP data due to the close proximity of the LP sinks to
the HP sink.
The delays for AODV and CAR increase, while they stay
almost constant for CAR+, CAR++, and MCAR (see
Fig. 11c). The standard deviations of the delivery delays
(not shown) exhibit higher variations for AODV and
AODV+PQ. Such high variations correspond to larger jitter
values, which are a problem for real-time data delivery.
Since AODV routes a very small fraction of HP packets
and a smaller fraction of LP packets, the maximum energy
used by AODV stays the same as the rate of LP data varies
(not shown). The energy consumed in the CAR-based
schemes increases as the LP data rate becomes larger. In all
cases, it remains lower than that of AODV.
4.2.3 Varying High Priority Data Rate
In the final set of simulations, the HP data rate was varied,
and the LP data rate was fixed at 0.5 pps. The communication
range was fixed at 130 m. These simulations compare
the performance of CAR schemes and MCAR with AODV
and AODV+PQ for HP data rates ranging from 0.5 pps (the
same as the LP data rate) to 30 pps.
As shown in Fig. 12a, though the fraction of HP data
successfully delivered is 100 percent for all schemes when
the HP data rate is low, as it increases, the fraction of
HP data routed decreases faster for AODV and AODV+PQ
than for CAR-based schemes. Again, MCAR is insensitive
to the increase, as long as the total HP traffic level
remains below the network capacity. Note that the curve
for CAR++ overlaps the curves for CAR+ and MCAR in
Figs. 12a and 12c. Fig. 12b depicts the LP data delivery
fraction as the HP data rate was varied. These results are
consistent with those already presented.
Though the average delivery delay increases for all
schemes, as shown in Fig. 12c, MCAR has the smallest
delay. As the HP data rate increases, AODV and
AODV+PQ show higher variation in delivery delay as
compared to CAR-based schemes (not shown). The CAR
variants also provide more even energy consumption
compared with AODV and AODV+PQ (not shown).
Fig. 12. Varying HP data rate, transmission range = 130 m, and LP data rate = 0.5 pps. (a) HP data delivery fraction. (b) LP data delivery fraction.
(c) HP data delivery delay.
4.3 MAC-Enhanced Congestion Aware Routing in Wider Networks
Since MCAR requires nodes within the communication
range of the HP flows to drop all LP data, in narrow
networks such as the one tested in the previous simulations,
a large portion of LP data is suppressed. This is in contrast
with CAR, which delivers much more LP data than MCAR.
MCAR makes this trade-off to ensure that the largest
possible percentage of HP data is routed successfully
through the network. The performance of CAR in a wider
network with the same amount of HP traffic will be the
same in terms of the HP data delivery and better in terms of
LP data delivery due to the absence of congestion in a larger
part of the network. Since CAR delivers some LP data, even
in such narrow networks, we now explore the performance
of MCAR for LP data delivery in wider networks. Note that
CAR does not require the critical area to be on a horizontal
line from the sink. The simulation setup used in this paper
uses such critical areas for simplicity of presentation.
CAR’s component algorithms can handle critical areas
located anywhere in the network and not necessarily on a
horizontal line with the HP sink.
To demonstrate that MCAR does not silence the entire
network, tests were run on a static network 12 nodes deep
and 21 nodes wide (see Fig. 13). HP sources are placed in
the middle of the long edge and route data across to a sink
on the opposite wide edge. LP sinks are in two corners of
the network, with all other nodes generating LP traffic
destined for one of the two LP sinks at a rate of 0.5 pps. The
HP nodes generate traffic at a rate of 30 pps.
As expected, the center of the network is reserved for
HP flows, but nodes outside the communication range of
those flows are free to transfer LP data. As the communication
range is increased, a lower percentage of LP data
is received due to an increase in the number of nodes in
the shadow mode. However, at all communication ranges,
MCAR delivers more than 90 percent of the HP data and
between 45 percent and 65 percent of the LP traffic for all
but the largest transmission ranges (see Fig. 14a).
Fig. 13. Wide network deployment for static and mobile experiments.
4.4 Dynamic High Priority Sources
Here, we present simulations to demonstrate the ability of
MCAR to deliver HP packets when the sources of the
HP data are dynamic. Additionally, these experiments
show that LP flows recover rapidly after HP streams end.
To demonstrate the flow setup and teardown speed, we
use the 250-node network configuration in Section 4.3.
Fig. 14b shows the number of HP and LP packets received
per second of simulation. The three HP sources generate data
at 30 pps for 10 seconds and then go silent for 10 seconds.
Fig. 14c shows the same information, with HP sources
generating packets at 30 pps for 1 second and then going
silent for 9 seconds. LP data reception rapidly decreases
during times when the HP sources generate data and then
increases rapidly directly after those sources go silent.
Longer burst times for HP data, of course, suppress a greater
number of LP packets. This periodic behavior is exactly the
goal of the design of MCAR. HP packets get use of the paths
that they need and suppress LP traffic only along those paths
and only during the times when HP data is present in the
network. The overall delivery percentages for HP and
LP data are similar to those of the previous sections.
Finally, to show that MCAR supports mobility effectively,
simulations are run with the network configuration depicted
in Fig. 13 with mobile HP sources. At the start of the
simulation, all nodes, except the sinks and the HP sources,
Fig. 14. Fraction of HP and LP packets received. (a) Fraction of HP and LP data received, with varying transmission range. (b) HP and LP packets
received, with 0.5 duty cycle. (c) HP and LP packets received, with 0.1 duty cycle.
Fig. 15. Mobility: fraction of HP and LP data received.
are sending LP data at a rate of 0.5 pps. All three HP sources
begin generating data and then begin moving at 8 m/s
toward the sink for 60 seconds. Then, the middle HP node
stops moving, and the other two nodes turn toward the edges
of the network and move away from the HP sink.
With a short reception range, MCAR delivers over
60 percent of the HP packets. Most of the HP loss is due
to route changes because of mobility (see Fig. 15). Again, as
the reception range increases, the percentage of HP packets
received at the sink increases rapidly. Finally, LP nodes
outside the communication range of the HP traffic still
successfully deliver LP packets.
To demonstrate the practicality of our algorithms and
to verify the conclusions drawn from our simulations,
we implemented MCAR on our sensor network testbed.
Our testbed is deployed in a 10 m × 11 m laboratory in a
grid suspended 0.6 m from the ceiling and 2.4 m above the
floor. The network is made up of 48 EyesIFXv2 nodes [11],
separated by 1.6 m in one direction and 1.2 m in the other.
The EyesIFXv2 nodes were developed during a 3-year
European research project on self-organizing energy-efficient
sensor networks [11]. The nodes use an ultra-low-power
MSP430 processor with a 10-Kbyte on-chip RAM,
a 48-Kbyte flash/ROM, and an additional 512-Kbyte
serial EPROM. The radio chip is a low-power FSK/ASK
transceiver providing half-duplex low-data-rate communication
in the 868-MHz ISM band. It operates using FSK
modulation, with a sensitivity of −109 dBm, enabling
half-duplex wireless connectivity of up to 64 kilobits per
second (Kbps). The radio chip provides a digital potentiometer
to allow the transmission power and, hence, the
transmission range of a node to be adjusted. This allows the
testbed to range from one to five hops from corner to
corner. A USB backplane is used to monitor the network
without interfering with the flow of data.
The EyesIFX platform is equipped with a temperature
and a light intensity sensor. Also, the TDA5250 chip
provides an RSS indicator via either an internal register that
can be queried or an analog pin whose voltage varies from
0 mV (silence) up to 1,350 mV, linked to the ADC of the
MSP430. A minimum RSSI threshold can be set, above
which the received power is considered to carry valid data.
5.1 Environmental Monitoring with MAC-Enhanced Congestion-Aware Routing
We chose MCAR to demonstrate that the differentiated
routing algorithms and MAC-layer enhancements in MCAR
used for mobility support can be implemented on the
current hardware. MCAR is also suited for our particular
application, since the causes of changes in environmental
conditions (e.g., fires) would be expected to spread and
therefore be a potentially mobile source of data.
To put MCAR to use, we augmented an environmental
monitoring application that tracks temperature readings
throughout the sensor network. Our application
monitors the environment, periodically sending temperature
readings (LP data) from every node to a sink at a low rate
(one every few seconds). In the event that the temperature
increases beyond a threshold, events are generated at a
higher rate (HP data), and alarms are triggered at the sink.
Additionally, nodes send information about the ambient
light levels periodically (LP data). To ensure that messages
are delivered to the sink in the face of node failure(s),
the application uses a very simple flooding mechanism.
One major challenge in the system is that nodes rapidly
become congested with traffic, and often, when an alarm
should be triggered, the relevant data never reach the sink.
The testbed periodically simulates a temperature alarm
by telling a node that its temperature is above the alarm
threshold, and this alarm is transmitted to the HP sink.
Alarm messages are transmitted at a frequency five times
higher than that of the LP polling messages. The HP sink is
in a corner of the network, and the data is delivered to it
through a corridor. In this corridor, flooding is being used
to propagate LP data. In case of a fire alarm, it is important
to deliver the messages as soon as possible, and a static
route would be inappropriate, because nodes in the
congested region of the network drop far too many packets;
however, routing around the region is also impossible.
We also test MCAR’s ability to track moving sources by
having the testbed simulate temperature alarms that traverse
the edge of the network. Static HP routes such as those built by
CAR will not work for varying locations of the temperature
alarm, allowing the demonstration of MCAR’s flexibility.
We implemented the applications described above, along
with MCAR's combined MAC and routing algorithms and its
modified RTS/CTS scheme described in Section 3.3, in TinyOS 1 [9].
The environmental monitoring application has a timer that
runs periodically. When the timer fires, the battery, light,
and temperature sensors are polled, and the measurements
are converted to digital values. If the temperature reading is
above a user-defined threshold, which can be adjusted
interactively by users via commands sent over the radio or
USB backbone, the node transitions into the HP mode.
Finally, the send procedure is called with the appropriate
priority setting.
The size of the whole application is 22 Kbytes, which can
completely reside in the ROM of the sensor node. Most of
the size of the application is related to the RF components
and hardware management, and the MCAR modifications
amount to less than 100 bytes.
The primary challenge in implementing MCAR involved
the strictly modular design of TinyOS. Because MCAR
relies on priority information from the application layer and
alters both the routing and MAC layers, it was necessary to
find clean ways to pass information between the layers.

Fig. 16. Testbed: fraction of HP and LP data received versus HP traffic duty cycle.
Additionally, the purely event-driven nature of TinyOS
required a state-machine-style algorithm design. Fortunately, because
MCAR’s mechanisms work in a top-down manner (i.e., the
adaptations are driven by the application priority settings),
only these priorities need to be exposed to all layers. For
example, any route setup packets for an HP flow must be
assigned high priority, or they risk being dropped. However,
route setup in many standard protocols is not tagged with
flow information. Therefore, application priorities must be
used at the routing layer, and all routing mechanisms used
to service an HP flow must themselves be HP. While such
changes in protocols are small, in terms of code size, they
are critical for protocol correctness.
5.2 Testbed Results
In this section, we present results from running our
application enhanced with MCAR. The network is configured with an HP sink on one edge of the network and
LP sinks at two corners of the network. All nodes, except
the sinks, run the monitoring application. A source is
chosen as the trigger of an alarm.
The first test performed involves varying the HP duty
cycle in the range of [0.05, 0.95], keeping the HP data rate at
a constant 5 pps and keeping the LP data rate at a constant
0.33 pps from each node. Fig. 16 shows that MCAR
maintains an HP delivery rate above 90 percent while
the LP data delivery rate varies between 47 percent and
60 percent. The fraction of LP data received increases with
the HP duty cycle because many LP nodes are shadowed,
reducing the LP contention generated by the flooding
algorithm; congestion at the LP sinks therefore decreases.
We also test the setup and recovery time of HP flows by
choosing a node at the edge of the network and moving
its temperature threshold down to trigger HP events at
duty cycles of 0.05, 0.33, and 0.66, representing increasingly
long HP flows in the network. Figs. 17a, 17b, and 17c
depict the number of HP and LP packets received in each
3-second interval. As can be seen, the decrease in
LP packets received corresponds directly to the HP traffic,
and the fraction of LP packets received increases soon after
the HP traffic stops.
Finally, we test the ability of MCAR to continue
delivering HP packets in the face of mobility. For these
tests, the HP duty cycle was kept in the range of [0.05, 0.95],
the HP data rate was set at a constant 5 pps, and the LP data
rate was set at a constant 0.33 pps from each node. A
heating event was moved through the network, following
the mobility pattern in the simulation (see Fig. 13). Fig. 18
shows that MCAR maintains an HP delivery rate above
78 percent for all but the lowest transmit power setting.
Without MCAR active, the HP traffic suffers heavy losses
due to the congestion in the network.
6 Conclusions and Future Work
In this paper, we addressed data delivery issues in the
presence of congestion in wireless sensor networks. We
proposed CAR, a differentiated routing protocol that
uses data prioritization. We also developed MCAR,
which deals with mobility and dynamics in the sources of
HP data.
Our extensive simulations show that, as compared to
AODV and AODV+PQ, CAR and its variants increase the
fraction of HP data delivered and decrease the delay and
jitter of such delivery while using energy more uniformly
across the deployment. CAR also routes an appreciable amount
of LP data in the presence of congestion. We additionally
show that MCAR maintains HP data delivery rates in the
presence of mobility, and that the route setup and
tear-down times associated with the HP flows are small.
Both CAR and MCAR support effective HP data delivery
in the presence of congestion. CAR is better suited for
static networks with long-duration HP flows. For bursty
HP traffic and/or mobile HP sources, MCAR is a better fit.
Fig. 17. Testbed: HP and LP packets received. (a) Duty cycle of 0.05. (b) Duty cycle of 0.33. (c) Duty cycle of 0.66.
Fig. 18. Testbed with mobility: fraction of HP and LP data received
versus HP traffic duty cycle.
We also presented the implementation of an environmental monitoring system that uses MCAR as its MAC and
routing layer. Our experiments on the testbed verify the
conclusions drawn from the simulation study and show
that MCAR is suitable for implementation on currently
available hardware.
Because of the low jitter and manageable delay,
CAR and its variants appear suitable for real-time data
delivery. To ensure QoS for video streams, reactive
dropping methods could be combined into the routing
protocol. Our future work looks at the effectiveness of such
techniques in sensor network environments. Also, while
MCAR merges multiple conzones naturally, we are now
exploring the interactions of differentiated routing and
multiple conzones, which may be overlapping or disjoint in
CAR and its two enhancements. Finally, we will also
explore the impact of different sizes and shapes of conzones
on data delivery in the future.
Acknowledgments
The authors would like to thank Farooq Anjum of
Telcordia Technologies for insightful discussions. This
research was sponsored in part by the US Army
Research Laboratory and the UK Ministry of Defence and
was accomplished under Agreement W911NF-06-3-0001
and by the US National Science Foundation under Grant
CNS-0519460. This work was conducted when
Riccardo Crepaldi and Albert F. Harris III were with the
Department of Information Engineering, University
of Padova.
References
[1] Draft Supplement to Part 11: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications: Medium Access Control (MAC) Enhancements for Quality of Service (QoS), IEEE 802.11e/D4.0, Nov. 2002.
[2] G.-S. Ahn, S.G. Hong, E. Miluzzo, A.T. Campbell, and F. Cuomo, "Funneling-MAC: A Localized, Sink-Oriented MAC for Boosting Fidelity in Sensor Networks," Proc. Fourth ACM Conf. Embedded Networked Sensor Systems (SenSys '06), 2006.
[3] G.-S. Ahn, L.-H. Sun, A. Veres, and A.T. Campbell, "SWAN: Service Differentiation in Stateless Wireless Ad Hoc Networks," Proc. IEEE INFOCOM '02, 2002.
[4] K. Akkaya and M.F. Younis, "An Energy-Aware QoS Routing Protocol for Wireless Sensor Networks," Proc. 23rd IEEE Int'l Conf. Distributed Computing Systems (ICDCS '03), pp. 710-715, 2003.
[5] S.R. Das, C.E. Perkins, and E.M. Belding-Royer, "Performance Comparison of Two On-Demand Routing Protocols for Ad Hoc Networks," Proc. IEEE INFOCOM '00, pp. 3-12, 2000.
[6] C.T. Ee and R. Bajcsy, "Congestion Control and Fairness for Many-to-One Routing in Sensor Networks," Proc. Second ACM Conf. Embedded Networked Sensor Systems (SenSys '04), pp. 148-161, 2004.
[7] E. Felemban, C.-G. Lee, and E. Ekici, "MMSPEED: Multipath Multi-SPEED Protocol for QoS Guarantee of Reliability and Timeliness in Wireless Sensor Networks," IEEE Trans. Mobile Computing, vol. 5, no. 6, pp. 738-754, 2006.
[8] T. He, J.A. Stankovic, C. Lu, and T. Abdelzaher, "SPEED: A Stateless Protocol for Real-Time Communication in Sensor Networks," Proc. 23rd IEEE Int'l Conf. Distributed Computing Systems (ICDCS '03), 2003.
[9] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister, "System Architecture Directions for Networked Sensors," Proc. Ninth Int'l Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS '00), Nov. 2000.
[10] B. Hull, K. Jamieson, and H. Balakrishnan, "Mitigating Congestion in Wireless Sensor Networks," Proc. Second ACM Conf. Embedded Networked Sensor Systems (SenSys '04), 2004.
[11] EyesIFXv2 Version 2.0, Infineon, http://www.infineon.com, 2008.
[12] C. Intanagonwiwat, R. Govindan, and D. Estrin, "Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks," Proc. ACM MobiCom '00, Aug. 2000.
[13] D.B. Johnson and D.A. Maltz, "Dynamic Source Routing in Ad Hoc Wireless Networks," Mobile Computing, Kluwer Academic Publishers, pp. 153-181, Feb. 1996.
[14] B. Karp and H. Kung, "GPSR: Greedy Perimeter Stateless Routing for Wireless Networks," Proc. ACM MobiCom '00, 2000.
[15] C. Lu, B. Blum, T. Abdelzaher, J. Stankovic, and T. He, "RAP: A Real-Time Communication Architecture for Large-Scale Wireless Sensor Networks," Proc. Eighth IEEE Real-Time and Embedded Technology and Applications Symp. (RTAS '02), pp. 55-66, 2002.
[16] S. Madden, M. Franklin, J. Hellerstein, and W. Hong, "TAG: A Tiny Aggregation Service for Ad-Hoc Sensor Networks," Proc. Fifth Symp. Operating System Design and Implementation (OSDI '02), 2002.
[17] ns-2: Network Simulator, http://www.isi.edu/nsnam/ns/, 2008.
[18] C.E. Perkins and E.M. Royer, "Ad Hoc On-Demand Distance Vector Routing," Proc. Second IEEE Workshop Mobile Computing Systems and Applications (WMCSA '99), Feb. 1999.
[19] J. Polastre, J. Hill, and D. Culler, "Versatile Low Power Media Access for Wireless Sensor Networks," Proc. Second ACM Conf. Embedded Networked Sensor Systems (SenSys '04), 2004.
[20] S. Rangwala, R. Gummadi, R. Govindan, and K. Psounis, "Interference-Aware Fair Rate Control in Wireless Sensor Networks," Proc. ACM SIGCOMM '06, 2006.
[21] N. Shrivastava, C. Buragohain, D. Agrawal, and S. Suri, "Medians and Beyond: New Aggregation Techniques for Sensor Networks," Proc. Second ACM Conf. Embedded Networked Sensor Systems (SenSys '04), 2004.
[22] C.-Y. Wan, S.B. Eisenman, and A.T. Campbell, "CODA: Congestion Detection and Avoidance in Sensor Networks," Proc. First ACM Conf. Embedded Networked Sensor Systems (SenSys '03), pp. 266-279, 2003.
[23] A. Woo and D.E. Culler, "A Transmission Control Scheme for Media Access in Sensor Networks," Proc. ACM MobiCom '01, 2001.
[24] W. Ye, J. Heidemann, and D. Estrin, "An Energy-Efficient MAC Protocol for Wireless Sensor Networks," Proc. IEEE INFOCOM '02, 2002.
[25] H. Zhang, A. Arora, Y. Choi, and M. Gouda, "Reliable Bursty Convergecast in Wireless Sensor Networks," Proc. ACM MobiHoc '05, 2005.
[26] Y. Zhang, M.P.J. Fromherz, and L.D. Kuhn, "Smart Routing with Learning-Based QoS-Aware Meta-Strategies," Proc. First Workshop Quality of Service Routing (WQoSR '04), pp. 298-307, 2004.
Raju Kumar received the BTech degree in
computer science and engineering from the
Indian Institute of Technology, Kanpur, in 2003.
He is currently working toward the PhD degree
in the Department of Computer Science and
Engineering, The Pennsylvania State University,
where he is also a member of the Networking
and Security Research Center. His research
interests include ad hoc, sensor, and mesh
networks. He is a student member of the IEEE.
Riccardo Crepaldi received the MS (Laurea)
degree in telecommunications engineering from
the University of Padova, Padova, Italy, in 2006.
After his graduation, he was a research scientist
with the Signet Group, University of Padova,
until 2007. His research focused on wireless
sensor networks (WSNs), and he designed
and deployed a WSN testbed and developed
management tools for it. He also worked on
the design and performance analysis of
routing and localization algorithms for WSNs. He is currently working
toward the PhD degree in the Department of Computer Science,
University of Illinois, Urbana-Champaign, under the supervision of
Prof. Robin Kravets. His research interests include systems design and
management and service discovery for wireless sensor networks and
delay tolerant networks. He is a student member of the IEEE.
Hosam Rowaihy received the BS degree
in computer engineering from King Fahd University of Petroleum and Minerals, Dhahran,
Saudi Arabia, and the MS degree in electrical
engineering from the University of Maryland,
College Park. He is currently working toward the
PhD degree in the Department of Computer
Science and Engineering, The Pennsylvania
State University, where he is also a member of
the Networking and Security Research Center.
His research interests include resource management in sensor networks,
peer-to-peer networks, and RFID systems. He is a student member of
the IEEE.
Albert F. Harris III received the PhD degree
in computer science from the University of
Illinois, Urbana-Champaign, specializing in
wireless network protocol and mobile systems
design. He is currently a research assistant
professor in the Center for Remote Sensing
of Ice Sheets (CReSIS), University of Kansas.
He has led computer science research on the
US-Army-funded projects for both Global
Information Systems Technology, Inc. and
CPResearch, Inc. Prior to graduate school, he operated a computer
network consulting business in Kansas City, Missouri, for more than
five years. He is a member of the IEEE.
Guohong Cao received the BS degree from
Xi'an Jiaotong University, Xi'an, China, and the
MS and PhD degrees in computer science from
the Ohio State University in 1997 and 1999,
respectively. Since then, he has been with
the Department of Computer Science and
Engineering, The Pennsylvania State University,
where he is currently an associate professor. His
research interests include wireless networks and
mobile computing. He has published more than
100 papers in sensor networks, wireless network security, data
dissemination, resource management, and distributed fault-tolerant
computing. He has served on the editorial board of the IEEE Transactions
on Mobile Computing and the IEEE Transactions on Wireless Communications and on the program committee of many conferences. He was a
recipient of a US National Science Foundation Faculty Early Career
Development (CAREER) Award in 2001. He is a senior member of the
IEEE and the IEEE Computer Society.
Michele Zorzi was born in Venice, Italy, in 1966.
He received the Laurea and PhD degrees in
electrical engineering from the University of
Padova, Padova, Italy, in 1990 and 1994,
respectively. During the academic year 1992-1993, he was on leave at the University of
California, San Diego (UCSD), attending graduate courses and doing research on multiple
access in mobile radio networks. In 1993, he
joined the faculty of the Dipartimento di
Elettronica e Informazione, Politecnico di Milano, Milano, Italy. After
spending three years with the Center for Wireless Communications,
UCSD, in 1998, he joined the School of Engineering, University of
Ferrara, Ferrara, Italy. In 2003, he joined the Department of Information
Engineering, University of Padova, where he is currently a professor. His
research interests include performance evaluation in mobile communications systems, random access in mobile radio networks, ad hoc and
sensor networks, and energy-constrained communications protocols.
From 2003 to 2005, he was the editor in chief of the IEEE Wireless
Communications Magazine. He currently serves on the steering
committee of the IEEE Transactions on Mobile Computing and is on
the editorial boards of the IEEE Transactions on Communications, the
IEEE Transactions on Wireless Communications, the Wiley Journal of
Wireless Communications and Mobile Computing, and the ACM/URSI/
Kluwer Journal of Wireless Networks. He was also guest editor of the
IEEE Personal Communications Magazine (special issue on energy
management in personal communications systems) and the
IEEE Journal on Selected Areas in Communications (special issue on
multimedia network radios). He is a fellow of the IEEE.
Thomas F. La Porta received the BSEE and
MSEE degrees from the Cooper Union,
New York, and the PhD degree in electrical
engineering from Columbia University,
New York. He is a distinguished professor in
the Department of Computer Science and
Engineering, The Pennsylvania State University
(Penn State), where he is also the director of the
Networking and Security Research Center. Prior
to joining Penn State, he was the director of the
Mobile Networking Research Department, Bell Laboratories (Bell Labs),
where he led various projects in wireless and mobile networking. He is
the founding editor in chief of the IEEE Transactions on Mobile
Computing. His research interests include mobility management,
signaling and control for wireless networks, mobile data systems, and
protocol design. He has published more than 100 technical papers and is
the holder of 30 patents. He is a fellow of the IEEE and a
Bell Labs fellow. He received a Thomas Alva Edison Patent Award.