Radio Revolution
The Coming Age of Unlicensed Wireless
By Kevin Werbach
New America Foundation
1630 Connecticut Ave., NW
7th Floor
Washington, DC 20009
Phone: 202-986-2700 • Fax: 202-986-3696
www.newamerica.net

Public Knowledge
1875 Connecticut Ave., NW
Suite 650
Washington, DC 20009
Phone: 202-518-0020 • Fax: 202-986-2539
publicknowledge.org
Author
Kevin Werbach is the founder of the Supernova Group, a technology analysis and consulting firm.
He advises organizations on the strategic and legal implications of emerging trends in communications,
digital media and software. As Counsel for New Technology Policy at the Federal Communications
Commission from 1994 to 1998, he helped develop the United States Government's e-commerce
policy, shaped the FCC's approach to Internet issues, and authored Digital Tornado, a seminal analysis
of the Internet's impact on telecommunications policy. He has also served as editor of the influential
publication Release 1.0. His writing has appeared in Harvard Business Review, Fortune, Wired, Harvard
Law Review, Slate, Red Herring, and Business 2.0, among other publications, and he appears frequently
as a commentator in print and broadcast media.
Radio Revolution is the second publication Kevin Werbach has authored for the New America Foundation.
His Working Paper, "Open Spectrum: The New Wireless Paradigm," was published in October of 2002
and is available at www.spectrumpolicy.org.
The New America Foundation is a non-partisan, non-profit, public policy institute in Washington, D.C. Relying on a venture capital approach, the Foundation invests in outstanding individuals and ideas that transcend the conventional political debate. Through its Fellowship Program and Strategic Initiatives, New America sponsors a wide range of research, published writing, conferences, and events. New America's Spectrum Policy Program advocates a more fair, efficient, and democratic allocation of the public airwaves. Many additional publications on this topic, including New America's Citizen's Guide to the Airwaves, can be found at www.spectrumpolicy.org.
Public Knowledge is a public-interest advocacy organization dedicated
to fortifying and defending a vibrant information commons. This
Washington, D.C.-based group works with a wide spectrum of stakeholders – libraries, educators,
scientists, artists, musicians, journalists, consumers, software programmers, civic groups, and enlightened businesses – to promote the core conviction that some fundamental democratic principles and
cultural values – openness, access, and the capacity to create and compete – must be given new
embodiment in the digital age.
Contributors
Nigel Holmes, who is principal of Explanation Graphics, www.nigelholmes.com, created four original
illustrations for this report; two of his illustrations from the Citizen’s Guide to the Airwaves are reprinted
here as well. In addition, Donald Norwood Design created the layout and design of the report.
Matt Barranca, a Program Associate at the New America Foundation, wrote the WISP profile sidebars. The Acoustic Analogy sidebar was adapted from New America’s forthcoming “The Cartoon
Guide to Government Spectrum Policy: What if the Government Regulated the Acoustic Spectrum
the Way it Regulates the Electromagnetic Spectrum?” by J.H. Snider. Hannah Fischer led the copyediting and production efforts and was assisted by New America’s Michael Calabrese, J.H. Snider,
Matt Barranca, and Max Vilimpoc. Spectrum experts Dewayne Hendricks, Anthony Townsend,
Patrick Leary, and Mark McHenry provided valuable feedback for some of the technical content
of this report.
Acknowledgments
We thank the Ford Foundation, the Open Society Institute, the Arca Foundation, and the Joyce
Foundation for their support for New America Foundation’s Spectrum Policy Program. Without
their support, this report would not have been possible.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike
License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/1.0/ or
send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
Contents
Part I: Introduction ........................................................................................1
Wireless Fundamentals ...................................................................................2
Believe in Magic ..............................................................................................2
Part II: Wireless Fundamentals .............................................................5
Basic Concepts .................................................................................................5
The Role of Government ................................................................................11
Part III: Paradigm Shift: From Static to Dynamic .................13
The Traditional Approach .............................................................................13
When the Devices Get Smart ........................................................................14
Survey of Dynamic Wireless Techniques......................................................16
Implications of Dynamic Approaches ..........................................................19
WiFi as a Case Study.....................................................................................22
Part IV: The Unlicensed World ............................................................25
The Spectrum of Spectrum-Use Regimes .....................................................25
Current Unlicensed Products ........................................................................27
Success Stories ..............................................................................................30
Part V: Future Scenarios ..........................................................................37
Expanding the Space of Possibilities...........................................................37
The Last Wireless Mile ..................................................................................39
Interoperable Public Safety Communications.............................................40
Adaptive Mobile Phones ...............................................................................40
Personal Broadcast Networks .......................................................................41
Part VI: Policy Recommendations.....................................................43
Part VII: Conclusion ....................................................................................47
Bibliography .............................................................................................................49
Endnotes.....................................................................................................................51
Introduction
We stand at the threshold of a
wireless paradigm shift. New
technologies promise to replace
scarcity with abundance, dumb terminals with smart radios able to adapt to
their surroundings, and government-defined licenses with flexible sharing of
the airwaves. Early examples suggest
that such novel approaches can provide
affordable broadband connections to a
wide range of users.
These are not just incremental
advances. The fundamental assumptions
governing radio communication since
its inception no longer hold. The static
wireless paradigm is giving way to
dynamic approaches based on cooperating systems of intelligent devices. It is
time for policy-makers to consider how
regulation should change in response.
The radio revolution is the single
greatest communications policy issue of
the coming decade, and perhaps the
coming century. The economics of
entire industries could be transformed.
Every significant public policy challenge
could be implicated: competition; innovation; investment; diversity of programming; job creation; equality of access;
coverage for rural and underserved areas;
and promotion of education, health care,
local communities, public safety, and
national security. Yet the benefits of the
paradigm shift are not guaranteed.
Exploiting the radio revolution will
require creativity and risk-taking by both
the private and public sectors. At every
step, there will be choices between preserving the status quo and unleashing
the forces of change. The right answers
will seem obvious only in hindsight.
The only way to appreciate the
opportunity before us is to comprehend
the fundamentals of radio communication, and the profound ways they are
changing. For all its significance to daily
life and economic activity, wireless technology is poorly understood. This paper
seeks to explain the established “static”
wireless paradigm, the emerging
“dynamic” alternative, and the implications of the coming revolution.
Wireless Fundamentals
Wireless. The very word belies its significance.
Wireless communication is defined by what it is
not, like the horseless carriage or the fat-free
muffin.1 Yet the real value of a satellite television
broadcast, a WiFi connection to a laptop, or a
mobile phone call from your car to your mother
isn’t the absence of dangling wires. Mobility,
portability, ubiquity, and affordability are all
enhanced when signals pass through the air rather than through strands of copper or optical fiber. Talking on a mobile phone is different, and in many ways better, than using a landline connection. If it weren't, more than one billion people wouldn't have signed up for mobile phone service, despite the alternative of a century-old wired phone industry.

"The existing legal and policy framework for spectrum management has not kept pace with the dramatic changes in technology and spectrum use."
— WHITE HOUSE MEMO TO FEDERAL AGENCIES, JUNE 2003

Wireless communication is the foundation of industries generating hundreds of billions of dollars in revenue and selling hundreds of millions of devices every year. It is crucial to how we communicate, work, learn, entertain ourselves, access health care, and protect our nation. It is also
heavily regulated everywhere in the world.
Governments today face critical decisions concerning the future of wireless communication. Is
there a “spectrum shortage,” and if so, how can it
be alleviated? Should more spectrum be set aside
for “unlicensed” uses? Should spectrum licensees
be given property rights to resell or otherwise
control their spectrum more thoroughly? Do we
need different rules to deal with interference?
Should new “open spectrum” technologies be
allowed to “underlay” or “interweave” with existing licensed services? Can government, military,
and public safety spectrum be managed more
effectively? These questions will shape the communications environment of the 21st century.
Unfortunately, wireless communication remains
deeply misunderstood and under-appreciated. Basic
concepts like spectrum and interference suffer
from widespread misconceptions. Technological
developments of recent decades have not penetrated the public consciousness, even as the fruits
of these developments become part of daily life for
hundreds of millions of people. The great paradigm shift from static to dynamic wireless communication has barely registered in business and
policy circles. Just as economists know that information technology must have a role in productivity
growth but have trouble finding it in their statistics, the wireless industry is experiencing a transformation that even many of its own experts do not
fully appreciate.
Believe in Magic
In the words of legendary science fiction author
Arthur C. Clarke, “any sufficiently advanced technology is indistinguishable from magic.”2 Wireless
communication is a form of magic. Words and pictures fly over invisible pathways with near instantaneous speed. We control devices at a distance, with
no apparent means of connection. Scores of signals, carrying many different types of messages,
traverse the air simultaneously. A time traveler
from the Middle Ages would surely see divine
intervention – or witchcraft – all around.
Yet for us, wireless communication is a familiar
form of magic. It drives the radios we have had in
our homes since our grandparents’ day, the mobile
phones that many of us use to communicate, the
televisions we watch an average of seven hours
each day, the remote controls that start those TVs,
and even the throwaway boxes that open our
garage doors. This familiarity breeds contentment.
We think we understand how wireless communication works. We don’t.
Our intuitions about wireless, by and large, are
mistaken. They are based on outdated technologies
and inaccurate analogies. If we hope to move
forward in exploitation of the airwaves, we must
take a step back. We must understand wireless communication for what it really is. And then we must
re-evaluate our assumptions about what it could be.
This paper presents a set of analogies to help
explain the basic physics of radio, and the radical
shift that emerging technologies represent. The
strangeness of wireless communication vanishes
when we see that it is no different than acoustic
communication, otherwise known as speech.
Paradigm shifts are both difficult and essential
for progress.3 Copernicus and Galileo showed that
the Earth revolves around the sun, contrary to the
perceived wisdom of the day. Eventually their view
prevailed, launching an age of extraordinary discovery. In the last century, quantum mechanics
overthrew the long-established Newtonian worldview. A hundred years of subsequent physics experiments confirm that our universe contains no such
thing as solid matter or definite cause and effect.
These ideas are so weird that most of us simply
refuse to accept them. We live in the familiar
classical environment of our commonsense awareness. At the same time, we blithely accept technologies such as the integrated circuit and the laser,
which could not exist without the scientific fruits
of the alien quantum world.
The new dynamic paradigm reveals that wireless
communication is more magical than we assume.
More than one service can occupy the “same”
spectrum, in the same place, at the same time. The
frequencies that now carry one signal could someday carry thousands...or billions. There could be as
many video broadcasters as today there are mobile
phone subscribers. Government could cease the
frustrating and inefficient task of parceling out
spectrum, and instead allow users to share the
airwaves without licensing. Broadband Internet
connections could be far more ubiquitous and
affordable. Innovation could proceed by leaps and
bounds rather than a hesitant, drawn-out shuffle.
Appreciating the potential of wireless technology
has always been difficult. When Guglielmo Marconi invented the radio, he envisioned it being used
for person-to-person communication, not one-to-many broadcasting. Alexander Graham Bell
invented the telephone while developing tools to
help deaf people, and thought it would be used to
broadcast music concerts. If these scientific giants
could be so wrong about their own creations, might
we not be wrong in our assumptions about wireless?
This is not mere idle speculation. Decisions
made in the 1920s to zone spectrum by service
and to assign exclusive licenses to users have defined the contours of wireless communication ever since. A huge market sits atop the existing regulatory framework, which in turn sits atop conceptual and technical assumptions. Alter those assumptions, and we can alter the framework. Alter the framework, and the market could become something far greater than it is today. Maintain the status quo or worse, and the opposite might result.

"Unlicensed" wireless communications systems are the manifestation of the dramatic change from the static to the dynamic paradigm. The word unlicensed, like the word
wireless, emphasizes what is missing rather than
the true significance of the concept. What is so
extraordinary about unlicensed devices is what
they can do, and the incentives they create
for innovation and growth. Already, wireless
Internet service providers (WISPs) and non-profit
community networks are using unlicensed systems
to deliver broadband connectivity where it was
otherwise unavailable. Several are profiled in
sidebars throughout this paper. In the future,
unlicensed systems may support more significant
new communications scenarios, which are detailed
in the last section.
Wireless Fundamentals
Basic Concepts
The Secret Life of Radio Waves
A wireless communications system
involves one or more transmitters and
one or more receivers. There is nothing
in the middle. Transmitters radiate, and
receivers receive, within a certain range
of frequencies known as the radio frequency spectrum. However, these are
properties of the equipment, not some
distinct medium the signals pass
through. By the same token, what governments regulate are the capabilities of
transmitters, and to a lesser extent
receivers, rather than the spectrum itself.
Radio waves are a form of electromagnetic radiation, like lasers or lightning bolts. “Radio frequency” signals
oscillate at frequencies between about 3
kilohertz (kHz) and 100 gigahertz
(GHz). Their propagation characteristics are well-understood by physicists. In
free space, radio waves can propagate
indefinitely, with declining power over
distance, unless dissipated by obstacles
such as walls or the Earth’s atmosphere.
Their susceptibility to such obstacles
depends on the frequency, bandwidth,
and power of the transmission.
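To make the "declining power over distance" point concrete, the standard free-space relation from basic radio engineering (general physics, not a formula taken from this report) is:

    P_r = P_t G_t G_r (\lambda / 4\pi d)^2

where P_t is the transmitted power, G_t and G_r are the antenna gains, \lambda is the wavelength, and d is the distance between transmitter and receiver. Doubling the distance cuts the received power to one quarter; obstacles and atmospheric absorption reduce it further.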
The point of this physics lesson is
that most of the topics spectrum policy
concentrates on, such as “interference”
and “spectrum,” are value judgments
based on our uses of wireless communication. Radio waves do not bounce off
one another, or cancel each other out.
When two or more signals share the
same space at the same time, it can be
difficult for receivers to distinguish
them, just as the human ear has
difficulty focusing on two simultaneous
sound sources. In practical terms, the
TV picture gets fuzzy or the mobile
phone drops a call (see Figure 1).
We call this interference. The effect is
identical to what happens when you try
to listen to a radio broadcast and a CD
at the same time. The sounds still reach
your ear, but you may have trouble sorting them out.
The Acoustic Analogy*
The sound waves used in ordinary speech are analogous to the radio waves used in wireless communication. Both are radiation employed to send messages between transmitters and receivers. The acoustic spectrum involves lower frequencies than the radio frequency spectrum, but this has no effect on the physics involved. Our ears are tuned to pick up acoustic waves, just as radio receivers are tuned to receive radio waves. And our vocal chords produce acoustic waves, just as radio transmitters produce radio waves.

The major difference between acoustic and radio communication is that humans and other animals have evolved exquisitely sophisticated tools for encoding and interpreting speech. Our vocal apparatus and ears are magnificently precise yet highly adaptive. Standing behind them is the human brain, the most powerful computing device ever created. Our brains can pick out sound waves from the surrounding background noise and quickly interpret them with phenomenal accuracy. We take all this for granted, because speech is so basic to our very existence.

Radios have historically been far less intelligent than the human systems for voice communication. In particular, radio receivers are simple devices that tune to a specific frequency. Because radio devices haven't been smart enough to distinguish among overlapping signals, government regulates the electromagnetic spectrum to minimize interference in ways that would be inconceivable for acoustic communications.

Dynamic wireless systems are closing the gap between acoustic and radio communication. Newer devices employ sophisticated computer processing to encode and decode wireless signals. They also employ cooperative techniques and adaptive mechanisms that bring to mind the social behavior of human beings. The more wireless systems can discriminate the way the human ear does, the less regulation is needed to avoid confusion.

Imagine a crowd of people at a football stadium.4 Though thousands of them are talking at the same time, many of them screaming at the top of their lungs, there is no need for regulation to ensure effective communication. There isn't even a need for rules to ensure that the public address system can be heard over the thousands of independent voices. A private regime of property rights to speak in the stadium is just as superfluous as a government licensing system. Our mouths and ears are sufficiently adaptive to separate signals from noise when both parties are trying to communicate. This despite the fact that the acoustic spectrum is far narrower than the radio spectrum, and our biological senses are much less precise than today's digital communications devices. When transmitters and receivers are as smart as humans, the best rules to prevent interference are no rules at all.

[Cartoon: "No talking in this stadium! Violators will be thrown in jail!"]

* Adapted from J. H. Snider, "The Cartoon Guide to Federal Spectrum Policy: What if the Government Regulated the Acoustic Spectrum the Way it Regulates the Electromagnetic Spectrum," New America Foundation, Forthcoming.
FIGURE 1 – NOTIONS OF INTERFERENCE: When two or more signals
share the same space at the same time, some receivers have difficulty
distinguishing them and interference can result.
Interference among wireless systems sounds similar
to what happens when a landline telephone call
generates an “all circuits are busy” message. In
reality, the two situations are quite different. In the
landline case, the connection literally stops at an
overloaded phone company switch. The call
reaches that point and goes no farther. In wireless,
the signal keeps on going. Only the useful information is lost, not the actual radio waves.
This seemingly arcane distinction is critical. For
the blocked phone switch, nothing the caller or the
called party can do makes any difference. The electrical or optical signal terminates in the middle of
the network. In the wireless case, the signal gets to
its destination but cannot be understood. If the
transmitter or receiver were smarter, the same
signal might be intelligible. Better technology at
the endpoints can reconstruct useless noise back
into useful information.
In other words, change the communications
devices or the regulatory environment, and you
change the capacity of the system. Therefore, any
statement about interference or spectrum scarcity
assumes a particular set of technical and regulatory
conditions.
Capacity
As the previous section demonstrates, interference
matters because of its implications for capacity.
Capacity is the essential metric for wireless com-
munications.5 Only so many radio stations,
TV channels, phone conversations, or
Internet connections can successfully
operate at once. However, this number is
not fixed. Marconi, the inventor of radio,
originally thought that only one signal
could be transmitted in a given geographic
area, because other radios would interfere
with it. He later developed a technique for
adding capacity based on the principle that
tuning forks can be made to vibrate at the
same frequency across distances. Using this
model, one radio signal can be associated
with a carrier wave of a particular frequency, and additional radios on different
frequencies can operate in the same area.
In effect, Marconi figured out how to use
frequency to multiplex radio signals. Each station
got its own unique frequency: hence the familiar
radio call numbers like 102.7, 88.5, or 97.1. Because
frequency division was the only viable means of
operating multiple simultaneous transmitters when
radio developed as a commercial service, it became
the basis for government radio policy. Regulating
radio meant regulating frequencies, by parceling
out the usable spectrum to licensees and service
categories. And so it remains today. We don’t use
the same numbers to identify TV channels or
mobile phone networks, but these systems are
assigned frequencies in a similar way.
Frequency division, however, is not the only
means of multiplexing radio signals.6 Another
possibility is time. The government could have
allowed each broadcaster to transmit only during
a certain hour of the day, for example. Frequency
division was obviously a better solution, both on
capacity and practical grounds. In other cases,
though, time division makes sense. Some mobile
phone systems, for example, chop up their licensed
frequencies into split-second time slots, and
interweave digital communications signals among
them.7 In addition to time and frequency, spatial
multiplexing can be done based on the three-dimensional relative location of the transmitter
and the angle at which a signal hits an antenna.
But again, spectrum regulation talks primarily
about frequencies.
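The two sharing strategies can be illustrated with a toy sketch (hypothetical Python, not drawn from this report): frequency division gives each station its own carrier, while time division rotates a single shared carrier among the stations in fixed slots.

    stations = ["A", "B", "C"]

    # Frequency division: each station gets its own carrier (AM-style values, in kHz).
    fdm_plan = {station: 560 + 10 * i for i, station in enumerate(stations)}

    # Time division: one shared carrier, used in fixed round-robin time slots.
    def tdm_owner(t_ms, slot_ms=10):
        """Return which station may transmit at time t_ms."""
        return stations[int(t_ms // slot_ms) % len(stations)]

    print(fdm_plan)                             # {'A': 560, 'B': 570, 'C': 580}
    print([tdm_owner(t) for t in (0, 12, 25)])  # ['A', 'B', 'C']

Both schemes require agreement in advance about who owns which carrier or which slot, which is one reason they map so naturally onto exclusive licensing.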
These multiplexing techniques, along with
improvements in tuners and signal processors, are
the reason the radio spectrum can now accommodate services such as television, mobile telephony,
satellite radio, and wireless Internet connectivity
where once there was only radio. Instead of the
original two mobile phone competitors in each
market, for example, we now have six national
carriers in the U.S. The “usable” spectrum of
frequencies is five thousand times wider than it
was in 1927, when the Radio Act was passed.
Have we now exhausted the capacity of the spectrum? Yes and no. If we only focus on frequencies,
virtually every useful band has been allocated to
one use or another. Any new service, such as digital
television or third-generation (3G) mobile phone
networks, requires that incumbents be “cleared”
out of existing bands to make room. On the other
hand, if we actually sample the spectrum to test for
signals, we get a very different picture. Most of the
spectrum is empty in most places most of the time.
There may be a rectangle filled in on a frequency
chart, but there is no detectable signal actually
using that frequency8 (see Figure 2).
Many advances in wireless technology make use
of excess capacity that shows up in the real world
but not on the official chart. Because the frequency
chart represents official boundaries, however, these
technologies can only be used in certain frequency bands. Between the view of spectrum as filled up and the reality of its emptiness lies an almost incalculable opportunity.

FIGURE 2 – TWO VIEWS OF CAPACITY: The top image, from The Citizen's Guide to the Airwaves, depicts allocations on the low-frequency, broadcast bands up to 1 GHz, covering services such as AM and FM radio, shortwave, CB radio, broadcast TV, cordless phones, pagers, wireless medical telemetry, garage door openers, Family Radio Service walkie-talkies, highway toll tags, and mobile phones. The bottom image represents the actual usage of the most active channels of the broadcast bands, as recorded in New America Foundation's spectrum usage measurements taken during peak hours in the highly populated Dupont Circle area of Washington, DC.
Architecture
Even the gaggle of capacity-enhancing techniques
listed earlier is incomplete. Capacity depends not
just on the way a device distinguishes one signal
from another, but on the design or “architecture” of
the overall communications system it is part of.
Two systems with the same “amount” of spectrum
may have very different capacity profiles. What
matters is the use that can be made of the spectrum,
not just the abstract width of the frequency band.
What does architecture mean in this context? A
simple example would be to compare a radio broadcast with a mobile telephone call.9 Radio is a broadcast service, meaning that a tower sends out a signal
at the maximum allowed power in all directions. It
blankets an area, so that every receiver within range
(typically a metropolitan area) can tune in the signal.
Mobile phone networks, by contrast, use a cellular
architecture. Each tower sends relatively low-power
signals to any handset within a few square miles. For handsets out of that range, there are other towers. Because of the low power, users talking to one tower don't notice the signals that other users are exchanging at the same time with a different tower. The same spectrum is being "reused" (see Figure 3).

Notice the distinctions. The broadcast model lets the transmitters and receivers be simple (and therefore cheap), because there is only one transmitter sending data in one direction. The cellular model requires many more towers and more expensive devices, but in return it lets many more conversations occur simultaneously, in both directions. The broadcast system services more users with one signal, but the cellular system lets more users receive separate and distinct transmissions. There are other significant differences, and other network architectures with their own characteristics. Which architecture is chosen depends on technological capabilities and the service being offered. However, the way the legal regime divides and regulates spectrum defines the range of possible architectures to choose from.

FIGURE 3 – COMPARING BROADCAST VS. CELLULAR ARCHITECTURE: The broadcast architecture (left) services more users over a larger area with one signal, while the cellular architecture (right) lets more users receive and send distinct transmissions.

Shannon's Law

Bell Labs researcher Claude Shannon, in his seminal work in the late 1940s, created the mathematical concept of information theory. Shannon developed equations to measure the ability of a communications channel to carry useful information. As usually explained, "Shannon's Law" defines a maximum capacity, which is proportional to the frequency or bandwidth of the channel. As a result, bandwidth has become almost synonymous with capacity. Because there is only so much bandwidth, and any service can only have a small portion of it, this model implies strict limits on possible capacity.

This paraphrase of Shannon's capacity theorem leaves out critical facts. The version in question describes the simplest possible case – one transmitter and one receiver. Add more of either, and the solution is no longer so easy. More devices may cause "interference," or may create opportunities to interweave signals or otherwise add intelligence to the network. Similarly, a wider band adds capacity, but it's not the only way to do so.

The scientific field of multi-user information theory takes Shannon's work and extends it for these more complex (and more realistic) situations. It is a fruitful area of research and experiment. In the half-century since Shannon's initial work, we have learned many things about what is possible with wireless communications. But there are many things we still do not know. For example, we don't even know the maximum potential capacity of a system with an arbitrary number of devices. This core uncertainty should make us hesitate before uttering any statements about what is or is not possible in the wireless realm.
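For reference, the single-transmitter, single-receiver formula that the sidebar paraphrases is conventionally written as (standard information theory, not language from this report):

    C = B \log_2 (1 + S/N)

where C is the channel capacity in bits per second, B is the bandwidth in hertz, and S/N is the ratio of signal power to noise power. Capacity grows linearly with bandwidth but only logarithmically with power, which is one reason smarter receivers can matter more than louder transmitters.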
Like capacity multiplexing techniques, wireless architectures have evolved over time. Cellular
phone service was first conceived in the 1940s, but
couldn’t have been deployed economically at that
time even if there were no regulatory hurdles.
Today, thanks to massive increases in computing
power and miniaturization of digital devices, many
architectures are possible that once were not even
imaginable. Asking about capacity or interference
without considering architecture and the tradeoffs
it enforces is like asking how many people can live
in a city. It depends.
Layers
The third fundamental concept for understanding
wireless systems has traditionally not been part of
the wireless world at all. Wireless networks have
historically used integrated, special-purpose
devices. A short-range data connection in a television remote control does one thing; a microwave
relay for sending long-distance telephone signals
across the country does something else. This self-contained approach mirrors that of pre-digital
wired networks. Telephone systems had little in
common with cable TV systems, for example.
In contrast, data networks tend to operate using
a layered model.10 Rather than defining the entire
system as an integrated whole, engineers split it up
into a communications “stack.” This allows for
separation of functions that can be optimized individually. For example, technology developed for
telephone networks to encode voice signals can be
applied to different “physical” layers, including
cable TV networks and wireless environments. In
today’s digital world, every signal is just a stream of
theoretically interchangeable digital bits. Cable and
phone companies can compete to offer voice,
video, and broadband services, even though their
networks use different types of wires.
Layering has several benefits. It makes systems
more flexible, allows innovation in one area to
benefit systems elsewhere, opens up new possibilities for competition, and gives users “best of breed”
solutions rather than bundles which may not meet
their needs.
Network engineers often use a seven-layer
model published by the International Standards
Organization.11 A simplified version for policy
purposes includes four layers, from bottom to top:
physical, logical, applications, and content. The wireless technologies described in this paper operate at the bottom two layers. The physical layer involves how information is actually sent across the communications channel. The logical layer concerns addressing and management functions that ensure the right information gets to the right place efficiently (see Figure 4).

FIGURE 4 – LAYERS: In the digital era, wireless signals consist of a stream of interchangeable digital bits. Engineers use a layered framework to provide order and optimize and manage transmissions. The simplified stack, from top to bottom: Content; Applications (voice, video distribution, Web, etc.); Logical (addressing, meshed routing, handoffs, etc.); Physical (WiFi, OFDM, UWB, etc.).
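A minimal sketch of the layering idea (illustrative Python with invented layer functions, not any real protocol stack): each layer simply wraps the output of the layer above it, so the physical layer can be swapped without touching the application or the content.

    def content_layer():
        return "Hello, world"                     # content: the message itself

    def application_layer(msg):
        return "VOICE|" + msg                     # application: voice, video, Web, etc.

    def logical_layer(payload, dest):
        return "TO:" + dest + "|" + payload       # logical: addressing and routing

    def physical_layer(frame, radio="WiFi"):
        return "[" + radio + "]" + frame          # physical: how the bits cross the air

    packet = physical_layer(logical_layer(application_layer(content_layer()), "node-7"))
    print(packet)   # [WiFi]TO:node-7|VOICE|Hello, world
    # Calling physical_layer(..., radio="UWB") would change nothing in the layers above.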
The Role of Government
What is “possible” in communications always
has two meanings: technical and legal. Technical
possibility is a function of scientific discovery and
commercialization. Legal possibility is defined by
the regulatory system. Many things are possible
in the technical realm but not in the legal.
(Occasionally, the reverse is true!) No discussion
of the business and social fundamentals of wireless
technology would be complete without taking
government rules into account.
From its earliest days, the communications industry has been subject to pervasive government regula-
tion, in the U.S. and elsewhere. In wireless, that regulation takes the form of spectrum policy. The
Federal Communications Commission (FCC) tells
some entities that they can use particular frequencies, usually for specific purposes and with detailed
technical and economic requirements. It tells everyone else that they cannot communicate on those
frequencies. And it enforces those rules, punishing
violators. The National Telecommunications and
Information Administration (NTIA) manages the
federal government’s use of spectrum, and serves as
the principal advisor to the President for spectrum
policy formulation.
Extensive government regulation of spectrum
is taken for granted. But let us ask for a moment,
why should that be so?12 Communication over the
airwaves is speech, just like communication through
the wired phone network or through a microphone
at a political rally. Government regulation of speech
is strictly limited under the Constitution. Yet we
tolerate a government agency, the FCC, bestowing
the ability to speak upon individual companies,
telling them exactly how they can speak, and punishing others who attempt to speak (see the NTIA Spectrum Allocation Chart, Figure 5).

FIGURE 5 – NTIA SPECTRUM ALLOCATION CHART: The National Telecommunications and Information Administration (NTIA) uses a complex chart with more than 500 colored patches to illustrate spectrum-use allocations.
There are several rationales for government
regulation of spectrum. The airwaves are
considered a public asset, not to be left to the
vagaries of the private market. Regulation promotes a diversity of access by different voices,
and maximizes efficiency in use of the airwaves.
Government involvement prevents the chaos of
ruinous interference that might occur in a vacuum.
And there are important public safety and national
security uses of wireless communication that
government promotes and manages.
Behind all these rationales stands a single
assumption: scarcity. If spectrum were not scarce,
and simultaneous uses of the same spectrum were
not mutually exclusive, there would be no reason
to treat it differently from other forms of speech.
Whether spectrum is in fact as scarce as we assume
is a major theme of this paper.
Spectrum regulation developed early in the 20th
century in response to two developments: a burgeoning commercial radio broadcast industry, and
fears of chaos if government did not step in. The
failure of nearby ships to heed the distress signal
of the Titanic was seen as evidence for more
extensive government intervention. Back at home,
nascent radio broadcasters were squabbling about
interference, arguing over who had the right to
transmit on particular channels. Secretary of
Commerce Herbert Hoover pushed for federal
oversight of spectrum allocation. He was rebuffed
by the courts, who found he lacked statutory
authority. So the Radio Act of 1927 and the
Communications Act of 1934 were passed,
establishing the Federal Communications
Commission as the prime arbiter of the airwaves.13
For sixty years, spectrum policy meant deciding
which uses – and which users – were entitled to
frequencies.14 Federal spectrum allocation operates
as a kind of industrial policy, choosing some
services for favored treatment and often protecting
providers from competition. The biggest change in
recent years has been the shift to auctions and flexible licenses as the preferred initial assignment
mechanism. Responding to the critique first
articulated by Nobel Prize-winning economist
Ronald Coase in 1959,15 the FCC and many other
governments now use auctions as the primary
assignment tool, rather than comparative hearings
(“beauty contests”) or lotteries.
An important element of the FCC’s licensing
regime is what the licensees receive. In virtually all
cases, licensees do not receive the right to control
the spectrum absolutely. The Communications
Act does not view spectrum as a tangible
commodity. Licensees receive a right to use
the frequency to provide a particular service,
and only that service.16
A small amount of spectrum is assigned not to
any specific users, but for “unlicensed” operation.
The government sets technical requirements, such
as power limits, for users of the band, and provides
certification mechanisms for devices that operate
within it. Users of unlicensed devices have no
formal protection against interference from
other users in the band, but they need no special
permission to operate there. The FCC also allows
very low power devices (less than one watt) to
operate in significant sections of the spectrum
under its Part 15 rules, on the grounds that they
are too quiet to interfere with any other service.
Paradigm Shift: From Static to Dynamic
Wireless systems today are not just
better and faster than those of the past.
They can be fundamentally different.
One could explain a car as simply a
speedier and more durable horse, or a
computer as nothing but a very fast
calculator. Those descriptions sound
laughable. We know that new technologies have countless benefits and impacts
that their predecessors don’t. Horses are
horses, and cars are cars, even though in
some circumstances one can substitute
for the other.
So too with wireless communications.
Systems built using modern techniques
such as spread spectrum, software-defined radio, and mesh networking can
serve the same purposes as systems built
using older approaches. At this relatively early stage in the radio revolution, the two types of systems look very
similar. If the newer dynamic systems
can develop and be deployed, however,
they will eventually seem as different
from their predecessors as a car does
from a horse.
The Traditional Approach
Traditional wireless systems are static.
They assume dumb receivers and dumb
transmitters, whose function is to blast
out a signal at the maximum allowable
power level. The model is quite
straightforward: Imagine throwing a
large rock into the middle of a round
pond. Ripples will radiate out from the
point of impact, eventually reaching the
shore. No skill is required for a person
standing on the shore, or sitting in a
boat on the water, to come into contact
with the ripples.
The good aspect of this model is that
it doesn’t require much sophistication in
the endpoints. Transmitters and
receivers that don’t think much for
themselves are relatively cheap to build.
When the costs of radio hardware and
computing power are high, such savings
make a difference in what’s economically
possible. The downside of the static
approach is that the devices aren’t smart
enough to get out of each other’s way if
there are multiple signals in the same
space and frequency band. Throw two rocks into
the pond, and it will be impossible to determine
which originated the resulting ripples.
For wireless communication, the solution to this
“interference” problem was to create exclusivity
within frequency bands. In the early days, such as
when radio and TV broadcasting were established,
affordable receivers weren’t even smart enough to
tell what was in their band. The FCC established
wide “guard bands” around licensed frequencies
where no one could transmit. That’s why the three
original U.S. broadcast TV networks are typically
on either channels 2, 4, and 7 or channels 3, 6,
and 10.17 The “white space” in between is dark to
ensure receivers in each channel don’t become
confused.18
Static wireless systems are static because of the
cost and computational capability of hardware,
more so even than scarcity of spectrum. As is well
known, computers have become more capable over
time. In a famous formulation, Intel co-founder
Gordon Moore noticed that transistor density on
microprocessors doubled every 18 months thanks
to advances in manufacturing technology. His
observation became a prediction, and then a law,
which has held true for 35 years.
The implication of Moore’s Law is that whatever
a computer can do today, it can do twice as well in
18 months, or twice as cheaply. If you bought a PC
for $1000 three years ago, today you’d be able to
buy a machine four times as fast for that same
$1000. On the other hand, if you bought a color
TV three years ago for $300, you might be able to
get a slightly bigger screen or sharper picture
today, but you’d see the same programming.
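The arithmetic behind that comparison, assuming the 18-month doubling described above:

    2^{36/18} = 2^2 = 4

Three years of doubling every 18 months works out to roughly a fourfold improvement in computing power for the same price.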
A radio itself is not a computer. It is a device for
transmitting and/or receiving signals. However,
computers can be used to control radios, or to
process those signals. Just as devices from automobiles to air conditioning systems benefit from
having computer “brains,” computers can improve
wireless communications devices. With today’s
computing power, in fact, they can totally transform them.
When the Devices Get Smart
Intelligent transmitters and/or receivers can engage
in a different form of wireless communication than
the traditional static systems. Rather than merely
waiting for an incoming signal, the receivers can
contribute to the communications process. Rather
than radiating constantly toward static targets, the
transmitters can craft what they send for maximum
efficiency. Call this dynamic wireless communication.
Changes in the nature of wireless devices also
affect the way devices interact with each other, and
with their surroundings. In other words, they
change the interference environment. As discussed
earlier, interference is a consequence of system
design, rather than an inherent property of the radio
spectrum. Interference is also inherently a legal construct. No radio signal on planet Earth is perfectly
pure. There is always some external radiation that
impinges on transmissions. Regulations or other
legal mechanisms distinguish between permissible
incidental noise and impermissible interference.19
The static mechanism to overcome ambient
“interference” is to raise power output. The louder
the signal, the easier it is to find among other
noise. Of course, raising power increases the likelihood of impinging on other signals, especially
those adjacent either geographically or frequency-wise. Regulators must therefore define power output and license geography limits carefully.
Historically, large amounts of spectrum were kept
as guard bands where no one could transmit, to
allow a wide “buffer” between licensed signals.
Static wireless systems have traditionally dealt
with interference from other transmissions through
legal means. No one else may transmit in licensed
bands. The FCC’s rules provide penalties for
“harmful interference,” which is defined in terms
of the effects of the second signal on the licensed
service.20 When the FCC proposed to authorize a
large number of lower-power FM radio stations for
use by community groups, licensed radio and TV
broadcasters expressed alarm that their transmissions would be threatened. These opponents convinced the FCC to significantly scale back the
freedom it granted to low-power FM broadcasters.
Dynamic wireless systems look at interference differently. It’s the equivalent of listening closely rather
than asking the person you’re conversing
with to talk more loudly. Because of their
flexibility, dynamic systems often have the
ability to “maneuver” around potential
interference, whether by splitting up
signals into packets spread across a wide
range of frequencies, hopping from place
to place in the spectrum, or sending communications through a physically distinct
route across a mesh network.
Think about a group at a cocktail party.
Many people can hold conversations with
one another simultaneously. They can do
so not because they each shout over the
others, or because they agree on a set of
rules to define who can talk when and
how. It's obvious to us that the reason so many people can talk at once is that the speakers modulate their volume and the listeners use their brains to distinguish their partner from the ambient noise (see Figure 6). If you've ever tried listening to a piece of music and concentrating on different instruments to pull them out of the mix, you'll understand this process immediately. Now transfer the setting from smart human listeners to smart digital radios.

FIGURE 6 – COCKTAIL PARTY: A good metaphor for the receiver capabilities of smart devices is how people communicate at a cocktail party. Multiple conversations can occur simultaneously despite ambient noise.

Another analogy is the Internet. The Internet doesn't have master directories or switches controlling the flow of information. Every router can decide independently where to send traffic. This works effectively thanks to cheap capacity and cheap computers that power the routers and other devices such as caches and Web servers at the edges of the network.

As these analogies show, the switch from static to dynamic approaches has two major consequences. First, the capacity of the system to transmit useful information increases. The same spectrum can hold more communications. The intelligence of devices is substituting for brute-force capacity between them. Imagine what highways would be like if cars couldn't be steered quickly to avoid collisions and slowdowns. There would have to be huge buffers between each vehicle to prevent accidents… precisely what exists in the spectrum today.

Dynamic wireless techniques effectively multiply the usable spectrum. Robert Matheson of the National Telecommunications and Information Administration (NTIA) at the U.S. Department of Commerce has proposed an "electrospace" model to describe the ways that today's wireless systems can coexist. He proposes the following variables: physical location (latitude, longitude, and height); frequency; time; and direction of arrival (azimuth and elevation).21 These seven degrees of freedom for sharing spectrum compare with the single dimension – frequency – under the static approach. And the electrospace framework is still too limited. It does not include techniques such as low-power underlay, cooperative mesh networking, and software-defined radio, which are discussed later.

Second, the architecture of wireless systems can change. Instead of cheap, dumb terminals at the endpoints, there are agile, intelligent devices. Networks as a whole become more decentralized and more flexible. Two-way communication replaces one-way blanket broadcasting as the dominant mode of connectivity. What were once single-purpose, hardwired systems dominated by proprietary radio components increasingly become general-purpose, adaptive platforms dominated by commodity computing components. These subtle changes have dramatic consequences. Just as commodity PCs vanquished powerful mainframes
by bringing computing to the masses, dynamic
wireless systems will replace central transmission
towers with millions of interactive end-user devices.
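One way to picture Matheson's electrospace variables (a hypothetical sketch; the field names and thresholds below are invented for illustration, not taken from his model) is as a set of coordinates, where two transmissions can conflict only if they coincide on every axis at once:

    from dataclasses import dataclass

    @dataclass
    class Emission:
        lat: float           # physical location: latitude, longitude, height
        lon: float
        height_m: float
        freq_mhz: float      # frequency
        start_s: float       # time window
        end_s: float
        azimuth_deg: float   # direction of arrival: azimuth and elevation
        elevation_deg: float

    def may_conflict(a, b):
        """Rough test: same band, overlapping time, and nearby in space."""
        same_band = abs(a.freq_mhz - b.freq_mhz) < 1.0
        same_time = a.start_s < b.end_s and b.start_s < a.end_s
        nearby = abs(a.lat - b.lat) < 0.01 and abs(a.lon - b.lon) < 0.01
        return same_band and same_time and nearby

    tv = Emission(38.91, -77.04, 300.0, 615.0, 0, 3600, 180, 0)
    lan = Emission(38.91, -77.04, 10.0, 2412.0, 0, 3600, 90, 0)
    print(may_conflict(tv, lan))   # False: different bands, so place and time can be shared

The more of these dimensions a device can sense and exploit, the more transmissions can coexist without anyone needing an exclusive license.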
Survey of Dynamic Wireless Techniques
Dynamic wireless communication is a deliberately
broad classification that includes many technical
mechanisms, with new ones being developed all
the time. Once the transmitters and receivers are
seen as computers that can contribute to the efficacy of the communications system, all sorts of
possibilities emerge. These possibilities do not
require any particular spectrum, or any particular
spectrum policy regime. However, as will be discussed later, spectrum policy significantly influences the kinds of techniques that are used, by
establishing the economic conditions and incentives for spectrum users.
Spread Spectrum
As discussed earlier, wireless systems designers
have long understood how to “multiplex” multiple
signals by sending them along different frequen-
cies, or by splitting up spectrum into tiny slices of time. These multiplexing techniques are entirely consistent with the industry structure that developed under the dominance of the static wireless approach. Frequency-division multiplexing requires the most limited possible intelligence in devices, with spectrum bands exclusively allocated to specific users. Time-division multiplexing is more complex, but traditionally requires all the devices in a system to synchronize their "clocks" in order to know which signal is in which time slice. Such synchronization effectively requires exclusive control of a frequency.

There are newer multiplexing techniques with different results. The first developed was "spread spectrum." Hollywood actress Hedy Lamarr and musician George Antheil filed for a patent on a spread spectrum communications system in 1942, though real-world deployment occurred later. A spread spectrum system inverts the static model of transmitting with high power on a narrow channel. Using low power and spreading the signal across a range of frequencies, it's possible to carry more transmissions simultaneously. The basic notion is that if the transmission is broken into pieces, each of which is tagged with a code, a receiver that knows the code can reconstruct the message. The wider the spreading, the more space there is in between the coded packets to send other signals at the same time (see Figure 7).

FIGURE 7 – SPREAD SPECTRUM: In spread-spectrum communications, low-power radio transmitters divide their signals into coded packets across a range of frequencies and receivers reconstruct the message.
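The coding idea behind Figure 7 can be sketched in a few lines (hypothetical Python, not from this report): each data bit is multiplied by a faster pseudo-random "chip" sequence, and a receiver that knows the sequence recovers the bit by correlation, while to everyone else the transmission looks like low-level noise.

    import random

    CHIPS_PER_BIT = 16
    random.seed(42)
    code = [random.choice([-1, 1]) for _ in range(CHIPS_PER_BIT)]   # shared spreading code

    def spread(bits):
        """Replace each data bit (+1 or -1) with CHIPS_PER_BIT coded chips."""
        return [b * c for b in bits for c in code]

    def despread(chips):
        """Correlate received chips against the code to recover each bit."""
        bits = []
        for i in range(0, len(chips), CHIPS_PER_BIT):
            corr = sum(ch * c for ch, c in zip(chips[i:i + CHIPS_PER_BIT], code))
            bits.append(1 if corr > 0 else -1)
        return bits

    data = [1, -1, -1, 1]
    received = [chip + random.gauss(0, 2.0) for chip in spread(data)]   # band noise
    print(despread(received))   # usually recovers [1, -1, -1, 1] despite the noise

The wider the spreading, the lower the power density at any one frequency and the stronger the correlation gain at the intended receiver.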
Taking spread spectrum to its logical conclusion, if the signal is spread wide enough, the power density can be so low that the signal becomes effectively invisible to other systems in the same bands. Radio frequencies are never totally empty of noise. Radiation-emitting devices such as hair dryers and microwave ovens, as well as cosmic background radiation, create a "noise floor" that all systems must contend with. Static systems do so by using high-enough power that it's easy to distinguish the high-power signal from the low-level noise.

With enough smarts, though, a dynamic system can transmit and receive very faint signals without ever raising above the noise-floor threshold. Only when very large numbers of such devices operate in the same location is interference even a realistic possibility. This approach is known as ultra-wideband (UWB). Many UWB systems employ very short "carrierless" pulses of electricity that give them other unique and beneficial properties. Because of fears about interference, commercial use of ultra-wideband for communication was illegal until early 2002, when the FCC authorized it for the first time.23

An Intelligent Device Bill of Rights

A key question in a world of dynamic wireless systems is how to set the proper boundaries on how devices can operate. More sophisticated equipment can reliably transmit signals in situations that would otherwise be subject to interference. "Cognitive" devices can sense the local spectral environment, adjust their transmissions to take advantage of temporarily empty spaces, and move their signals elsewhere as soon as another transmission is detected. Such devices are incompatible with traditional frequency-based licensing, which assumes a frequency is exclusively dedicated to a licensee.

One idea under consideration by the FCC's Technological Advisory Committee (TAC) is an "Intelligent Wireless Device Bill of Rights."22 It would establish a set of principles to allow effective sharing of the spectrum. A draft of the Bill of Rights proposed in September of 2002 includes three articles:

❚ Any intelligent wireless device may, on a non-interference basis, use any frequency, frequencies, or bandwidth, at any time, to perform its function.
❚ All users of the spectrum shall have the right to operate without harmful electromagnetic interference from other users.
❚ All licensing, auctioning, selling, or otherwise disposition of the rights to frequencies and spectrum usage shall be subordinate to, and controlled by, Articles 1 and 2, above.

The Bill of Rights is at the early stages of discussion within the TAC, which itself has no formal authority. It suggests, though, how significantly the wireless paradigm shift now underway could change the basic framework of spectrum policy.

Space-Time Coding

Many other multiplexing schemes are possible beyond spread spectrum and UWB. For example, companies such as Northpoint Technology have developed systems that multiplex satellite and terrestrial transmissions in the same frequency band.24 Satellite signals arrive from above, while terrestrial signals are sent horizontally. A smart enough system can distinguish these two signals based on their angle of arrival, and can even do so without requiring modifications to the existing satellite system.

Northpoint's technology is an example of a broader class of techniques that take into account the physical location of transmitters and receivers. Static broadcasting uses a saturation approach. The receivers can be anywhere within the propagation footprint of the signal. The transmitter has no idea where they are beyond that, and the receivers know nothing other than that the transmitter is in the same footprint. As the Northpoint system shows, however, the location of transmitters and receivers is a useful piece of information. A signal arriving from thousands of miles overhead is different from a signal arriving laterally from a few yards away, even if both are within the same frequency band.

Space-time coding techniques use the physical topology of the network, or the surrounding environment, to add efficiency to wireless communications systems. For example, the BLAST system developed at AT&T Bell Labs employs antenna diversity and "multiple input/multiple output" (MIMO) technology to increase capacity. Instead of a single antenna at the receiver, BLAST employs arrays of multiple antennas at both the transmitter and receiver. Comparing the signal received at the different antennas makes it easier to distinguish transmissions from noise, increasing effective capacity. Start-ups such as Airgo Wireless are now using MIMO techniques to enhance capacity and range of wireless LAN chipsets.
Even factors that seemingly reduce capacity can be employed to increase it. The bane of many wireless systems is "multipath." When radio waves encounter obstacles such as walls, some fraction bounce off the obstacle and the remainder pass through. The ones that bounce may still reach the receiver. But they do so through a more circuitous, and therefore slightly slower, path. The receiver sees the same signal twice (or more), a split second apart. This multipath effect can confuse the receiver, degrading the signal quality.

With a properly defined system, however, multipath becomes just another information-adding physical element. Knowing how a signal is bouncing around tells the receiver something about the location and nature of the transmitter. If the two temporally spaced signals are identified as the same transmission, they can be combined in a buffer to enhance the output signal. Similar techniques can be used for other factors that traditionally cause "interference," such as mobility.
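A rough way to see how an echo can add information rather than just "interference" is to treat the direct copy and the bounced copy as two independent looks at the same symbol and combine them before deciding. The sketch below does that with invented path gains and noise levels; real receivers rely on far more sophisticated equalizers and rake-style combiners, but the effect is the same: the combined decisions contain fewer errors than decisions based on the direct path alone.

    import random

    # Each transmitted symbol is seen twice: once on the direct path and once
    # on a weaker bounced path, each with its own noise. Combining the two
    # looks (weighted by path strength) beats using the direct look alone.
    # All gains and noise levels are invented for the example.

    random.seed(2)
    N = 5000
    ECHO_GAIN = 0.6     # hypothetical strength of the bounced path
    SIGMA = 1.0         # hypothetical noise level on each look

    symbols = [random.choice([-1.0, 1.0]) for _ in range(N)]
    direct = [s + random.gauss(0, SIGMA) for s in symbols]
    echo = [ECHO_GAIN * s + random.gauss(0, SIGMA) for s in symbols]

    def error_rate(observations):
        wrong = sum(1 for o, s in zip(observations, symbols) if (o > 0) != (s > 0))
        return wrong / N

    combined = [d + ECHO_GAIN * e for d, e in zip(direct, echo)]

    print("direct path only:", error_rate(direct))
    print("direct + echo   :", error_rate(combined))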
Mesh Networking
Mesh networking is somewhat different than the previous techniques. It is a family of cooperative network architectures that can be applied through software to any radio technology. The basic definition of a mesh is that receivers talk to each other as well as to the transmitter.25 A good example of a mesh network is the Internet. Every router has a table that allows it to send packets to many other routers, rather than through a central clearinghouse. Thanks to this architecture, the Internet avoids congestion choke points and single points of failure. When one link is down or overloaded, traffic automatically shifts to other links.

Wireless systems have traditionally used either a pure broadcast architecture (one central transmitter), or a hub-and-spoke approach as with cellular telephone networks, where users connect through local towers (see Figure 3). Many unlicensed technologies, including Bluetooth and WiFi, offer some mesh networking capabilities. Vendors such as LocustWorld sell WiFi access points with sophisticated meshing software to discover and connect to other nodes automatically.

The benefit of a mesh approach is that there are likely to be other end-users of the network closer to you than a tower or central broadcast facility. Shorter distances mean better signals, lower power requirements, and the ability to avoid obstacles such as trees.

Consider the task of providing "last mile" high-speed Internet connectivity to a neighborhood (see Figure 8). The benefit of mesh networking is that every new house brought online adds something to the network, improving performance and reliability. The difficulty is that a mesh network doesn't work with one or two nodes. The system requires a critical mass of devices to operate effectively. That number depends on the service and deployment environment. Several companies have tried to sell last-mile mesh networking gear, including SkyPilot, Omnilux, and RoamAD. None has yet achieved a large-scale commercial deployment, though field trials are ongoing.

FIGURE 8 – MESH NETWORKING: In a mesh network, every device added to the system augments the network. In the case of a "last-mile," neighborhood network, end-users connect to the Internet by sharing connections with their neighbors allowing for shorter distance, lower power transmissions.
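The "every new node helps" property of a mesh can be sketched with a few lines of graph code: treat any two radios within range as a link, and search for a multi-hop route to the gateway. The node names, positions, and radio range below are invented for illustration.

    from collections import deque

    RANGE_METERS = 120.0    # hypothetical radio range

    def links(nodes):
        """Any two nodes within radio range of each other can relay traffic."""
        graph = {name: [] for name in nodes}
        for a, (ax, ay) in nodes.items():
            for b, (bx, by) in nodes.items():
                if a != b and (ax - bx) ** 2 + (ay - by) ** 2 <= RANGE_METERS ** 2:
                    graph[a].append(b)
        return graph

    def route(nodes, src, dst):
        """Breadth-first search for a multi-hop path from src to dst."""
        graph = links(nodes)
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in graph[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    # A gateway with wired backhaul, plus houses spaced along a street.
    homes = {"gateway": (0, 0), "house1": (100, 0), "house3": (300, 0)}
    print(route(homes, "house3", "gateway"))    # None: house3 is out of reach

    homes["house2"] = (200, 0)                  # a new subscriber comes online
    print(route(homes, "house3", "gateway"))    # house3 -> house2 -> house1 -> gateway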
Other providers are selling mesh networking
gear for public safety applications. One vendor,
Tropos Networks, recently built a 17-cell mesh
network for the police force of San Mateo,
California so that officers could access crime databases from laptops in squad cars.26 MeshNetworks
is in trials with the Orange County Fire and
Rescue service in Florida, offering vehicle and
personnel tracking as well as mobile Internet
connectivity through a distributed mesh network.
Software-Defined Radio
At the heart of any wireless communications system
is the radio. A radio transmits or receives wireless
signals encoded into waves that oscillate at frequencies somewhere within the radio spectrum.
Traditionally, those radios have been fixed in hardware. A radio talks to a fixed swath of spectrum, and
understands a fixed modulation scheme for coding
signals. It’s like a dedicated telephone line between
two businesses. You simply couldn’t use it to call your
grandmother, or another business across the street.
Software-defined radio (SDR) uses software to
control how the radio works.27 It’s like replacing
the dedicated phone line with a connection that
goes through an electronic switch. Suddenly many
of the characteristics that were immutable become
flexible. Capabilities not envisioned when the
device was built can be added later.
SDR has several benefits. One device can support multiple services transmitting on different
frequencies with different encoding schemes. A
mobile phone handset, for example, could receive
signals from more than one service provider, or
from service providers in different countries,
regardless of what technical standard they employ.
This is particularly important for markets, such as
public safety, where incompatibilities between
systems, such as those used by police and fire
departments responding to the same emergency,
are critical problems. The U.S. military has funded
significant research and development related to
SDR for similar reasons. The Joint Tactical Radio
System (JTRS) is now under development through
prime contractor Boeing and subcontractors
including Vanu, a start-up in Cambridge, MA that
is a leading SDR technology developer.
The flexibility of SDR reduces costs by eliminating duplicate equipment, both for users and for
service providers. Imagine a cellular network, for
example, where each service provider only had to
put up towers where others had not, rather than
each of them having to build a redundant national
footprint.
The potential of SDR goes significantly beyond
cost reduction. Because a software radio is software, it can run on general-purpose computers
such as Windows and Linux devices, using mass-produced digital signal processors (DSPs) and
other hardware. Such devices benefit from Moore’s
Law and the competitive dynamics that steadily
push costs down and capabilities up. As DSP chips
become more powerful for the same price, an SDR
system can decode a larger swath of spectrum or
perform other new functions. SDR systems can
also take the decoded radio signals and feed them
into other applications.
Agile or cognitive radios are a sub-category of
SDR currently in the development stage. Agile
radios can “jump” from one frequency to another
in a matter of milliseconds. Combined with processing capabilities that allow such devices to
sample the spectrum around them, agile radios
can in effect manufacture new spectrum. Even a
channel supposedly occupied by a licensed system
is empty much of the time in much of the defined
physical area. An agile radio could hop among
local, short-duration empty spaces in the spectrum, moving whenever it sensed another transmission in the same band. Such devices could
effectively become their own virtual networks,
creating connections with other nodes wherever
they are.
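The sense-then-hop behavior of an agile radio can be caricatured in a few lines of code. In the sketch below the channel list, power readings, and "busy" threshold are all invented, and a real cognitive radio would sense far faster, detect far weaker signals, and obey whatever rules regulators attach to the band; the point is simply that the radio's operating frequency is just another piece of data that software can change on the fly.

    import random

    # Caricature of an agile radio: measure the energy on each candidate
    # channel, pick the quietest one below a "busy" threshold, and hop again
    # whenever the current channel fills up. All values are invented.

    CHANNELS = [1, 6, 11, 36, 40]
    BUSY_DBM = -85.0

    def sense(channel, environment):
        """Measured power on a channel in dBm: any transmitter present plus noise."""
        return environment.get(channel, -100.0) + random.gauss(0, 1.0)

    def pick_channel(environment):
        """Return the quietest channel below the busy threshold, or None to back off."""
        readings = {ch: sense(ch, environment) for ch in CHANNELS}
        free = {ch: p for ch, p in readings.items() if p < BUSY_DBM}
        return min(free, key=free.get) if free else None

    random.seed(3)
    environment = {1: -60.0, 6: -72.0, 36: -55.0}    # channels with active transmitters

    current = pick_channel(environment)
    print("transmitting on channel", current)        # one of the quiet channels

    environment[current] = -50.0                     # another user appears on that channel
    print("hopping to channel", pick_channel(environment))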
Implications of Dynamic Approaches
The switch from static to dynamic wireless
communication has huge consequences. Technical
approaches created to solve technical problems
turn out to have major policy, business, and even
social consequences. These consequences, such as
the possibility of replacing spectrum licensing
with “commons,” have generated a great deal of
attention. They would not be possible without the
technical advances we have described so far. In
many cases, the technical advances have grown up
within the dominant wireless paradigm, as with
Qualcomm’s CDMA spread spectrum technology
for licensed cellular networks. As wireless technology moves forward, however, the possibilities
for radical change will become more difficult
to ignore.
Author George Gilder identified the disruptive
potential of dynamic wireless technologies ten
years ago, in a prescient article in Forbes ASAP.28
At a time when policy-makers were infatuated
with spectrum auctions, Gilder pointed out that
auctions were counterproductive if intelligent
devices could share spectrum and avoid interference on their own. The technological paradigm
shift of dynamic wireless calls for paradigm shifts
in regulation and business as well.
Commons
The first and biggest consequence of the radio
revolution is that licensing of spectrum frequencies is no longer required. Recall that the original
basis for spectrum licensing by the government
was the fear of ruinous and pervasive interference.
If devices can operate with sufficiently low power
and high intelligence to avoid one another, exclusive rights are no longer necessary to prevent
interference. With a properly defined environment, users effectively can’t prevent each other
from communicating. That opens the door for a
new regime that allows anyone to transmit within
general technical guidelines.
This notion has become known as a “spectrum
commons.” The analogy here is to common lands
in the Middle Ages in England, where anyone
could graze their sheep. Not all such environments
are subject to the infamous “tragedy of the
commons.” If there is enough open space, and
good enough legal or customary rules governing
individual action, a commons can thrive without
rapid exhaustion. Until recently, these concepts
were rarely applied to spectrum, because capacity
constraints were thought to be so severe. As noted
earlier, though, the revolution in wireless technology lifts the constraints that made spectrum scarce.
Beginning with the early 1990s challenge to
FCC spectrum auctions from Gilder, network
engineer Paul Baran, and communications scholar Eli Noam, and continuing in the late 1990s and beyond with the work of Yochai Benkler, David Reed, Lawrence Lessig, Tim Shepard, and myself, the argument for spectrum commons has gradually taken shape.29 As will be discussed later, there are several varieties of spectrum commons, including unlicensed bands and underlays. Evolving dynamic spectrum management techniques are rapidly creating new possibilities for sharing rather than licensing spectrum.

FIGURE 9 – UNLICENSED VS. LICENSED BUSINESS MODELS: Technology is transforming the capacity of the spectrum from scarcity to abundance, akin to the vastness of the oceans. But current airwave regulation is similar to giving companies exclusive shipping lanes, and requiring non-licensed users to pay a toll to access them.
If there is enough capacity to support a commons, exclusive rights are unnecessary. By analogy,
the ocean is not infinite in size, but it is large
enough that ships can be trusted to navigate
around one another (see Figure 9). The ships, like
dynamic wireless devices, can intelligently alter
their routes to avoid collisions. There is no need to
give companies exclusive shipping lanes, and prohibit other ships from using those routes unless
they pay a toll. Such exclusivity would significantly
reduce the level of shipping traffic, with no corresponding benefits. Technology is making the wireless world look more and more like the ocean.
Well-functioning commons produce several
normative benefits. Because access is no longer
controlled by a designated gatekeeper, everyone
can participate. Virtually anyone could distribute
TV programs, promote an idea, or engage in a
group conversation using the same mechanism
that today supports a limited set of broadcasters
and operators. This is freedom of speech on a
spectacular scale.
Market Structure
The second implication of dynamic wireless systems is that the business structure of markets
changes. Static systems necessitate exclusivity. The
downside of exclusivity is that no one else can contribute to a network. The licensee must bear the
total cost of building network infrastructure. The
licensee typically recovers that cost by charging
fees to users for both devices and communications
services. Any vendor of user or network equipment
must sell to or through the licensed operators.
They are the only customers who can take
technologies and legally put them into the market.
There is nothing inherently wrong with this
“infrastructure” market model. It is traditionally
the market structure used for services that have
high capital costs and benefit from economies of
scale. Having every person responsible for building
the roads passing their home wouldn’t make much
sense, nor would having every person responsible
for bringing their own connection to a central telephone exchange. There are, however, serious
downsides. Deployment is slow because it is costly
and requires proven models for recouping that
cost. Innovation is constrained because only a few
licensees control access to the market. Uniformity
and interoperability are enforced by the licensee,
but services and equipment are costly because they
are provided in limited volumes and based on
proprietary standards.
As wireless devices become more intelligent and
commons arrangements become viable, new market structures become possible. If the spectrum is
no longer part of the service equation, the primary
element of the “service” offered to end-users
becomes the devices those users purchase. Because
those devices run on standards defined by industry
bodies rather than mandated by spectrum licensees,
there can be open, competitive markets to build
better and more cost-effective equipment.30 We
move, therefore, from a market for centralized
infrastructure and proprietary services, to a market
for consumer devices, software, and ancillary services. Users pay a significant fraction of the total
network build-out costs directly, by purchasing
hardware, greatly reducing the expenses service
providers must undertake. For many services, there
are still core network costs—for access points,
backhaul to wired Internet backbones, authentication, roaming, and security—but these are limited compared to the all-encompassing network build-out that licensed operators must undertake. Furthermore, dynamic wireless devices allow for markets with greater diversity at several points. Many equipment vendors, several providers, and application or content providers can compete, because there is no mandatory control point and each provider can leverage the infrastructure built by others.

"Change the technology, and the economics and the law of spectrum use must change, too."
– ELI NOAM, PROFESSOR OF ECONOMICS AND FINANCE, COLUMBIA UNIVERSITY
Incentives for Robustness
Intelligent or dynamic spectrum management techniques may be used in any regulatory environment.
However, the nature of spectrum regulation heavily influences incentives for deployment of intelligent devices. The traditional, and still dominant,
environment is exclusive licenses for frequency
bands. Spectrum licensees have incentives to
squeeze as much capacity as possible out of their
spectrum. On the other hand, they have incentives
to make the devices users must purchase as
inexpensive as possible. In a static system, the
money is all in the service; the devices are dumb.
There is no need to make them robust against
interference, because interference from other
systems is prohibited and policed by the FCC.
Similarly, there is no great incentive to make the
devices flexible, because the service provider is
focused only on supporting its own service. Indeed,
to keep the cost of switching to another service
provider high, licensed providers have an incentive
to discourage software-defined radios, which could
switch frequencies (and hence service providers) at
the click of a mouse.
All that changes in a commons environment.
Because wireless devices in a commons have no
legally guaranteed protection against interference, they must guard against it using technical means. Fortunately, that is what dynamic wireless devices are good at.

Static systems create incentives to make the receivers as dumb as possible. Dumber means cheaper, after all. Because the central transmitter does the heavy lifting, there are no significant benefits from intelligence at the edge devices. When end-user devices become dynamic, however, they contribute to the integrity and performance of the overall system. Making them smarter and more robust improves performance. And without license restrictions keeping other devices from transmitting on the same frequencies, robustness based on intelligence is the only path open.

"We need to think of ways to bring [WiFi] applications to the developing world so as to make use of unlicensed radio spectrum to deliver cheap and fast Internet access."
— UN SECRETARY GENERAL KOFI ANNAN
WiFi as a Case Study
WiFi Defined
WiFi is the most prominent unlicensed wireless
technology available today. It is a family of spread
spectrum wireless local area networking standards
designed to allow users to send and receive data at
11-to-54 Mbps within a few hundred feet of
another WiFi device or access point. WiFi is a
great case study for the impact of dynamic wireless
technologies.
Wireless data networking services have been
commercially available since the 1980s, beginning
with Ardis, a joint venture of IBM and Motorola,
and RAM Mobile Data. Ardis was purchased by
American Mobile Satellite and renamed Motient;
after reorganizing through bankruptcy in early
2002 it is still trying to right itself financially.
RAM Mobile data was renamed Mobitex and is
now part of Cingular Wireless. It is the primary
network used by wireless email devices such as the
RIM Blackberry and the Palm VII. Motient and
Mobitex are wide-area systems that target enterprise and messaging markets. Another wide-area
wireless network, the Metricom Ricochet system,
offered services directly to end-users, providing
wireless Internet access in several U.S. cities for
laptop users. Metricom filed for bankruptcy in
2001; its assets were purchased by Aerie Networks,
which is attempting to re-launch the service.
These early wireless data networks generally
used licensed spectrum, though Ricochet employed
900 MHz unlicensed frequencies in some areas.
They offered low-speed connections (19.2 kbps or
less) with wide-area coverage in cities or nationwide. Today, third-generation cellular networks are
beginning to offer packet data networking services
as well, typically at higher speeds. None of these
offerings has yet become a mass-market success.
WiFi has been exactly the opposite story. The
Institute of Electrical and Electronics Engineers
(IEEE) ratified the 802.11b standard for wireless
local area networking (WLAN) in 1999. Vendors
such as RadioLAN and Proxim had been offering
proprietary WLAN systems, for both office environments and home networking. 802.11b, related
to the 802.3 Ethernet standard, was envisioned primarily as a wireless replacement for wired Ethernet
connections in corporate environments. In 1999,
though, Apple Computer introduced a consumer
802.11b device, the Airport, using chipsets from
Lucent. The market exploded.
The WiFi market grew to $1 billion annually
(primarily in hardware sales) by 2002. Most of
that period has been a time of contraction in the
technology and telecom sector, making the
achievement even more impressive. More than
half of U.S. companies now support WiFi networks, and another 22 percent plan to do so
within a year.31 And WiFi sales are projected to
keep growing. Cahners Instat sees the market
reaching $4.6 billion by 2005, and other research
firms have issued similar projections.32 By 2008,
says Allied Business Intelligence, 64 million WiFi
nodes will be shipped annually.
Secrets of WiFi’s Success
What made WiFi such a success, especially compared to previous wireless data systems? After all,
WiFi provides only short-range connections; on its
own, one access point can’t provide ubiquitous coverage in a neighborhood or city.
WiFi has thrived because it has benefited from
an ecosystem that could only exist with the type of
technology it uses. Because WiFi is a low-power,
spread-spectrum technology, WiFi devices can
coexist without the requirement of spectrum
licensing to prevent interference. That means there
is no need for service providers, cell towers, controlled hardware markets, or expensive spectrum
licenses. Anyone can buy a WiFi device and establish a network.
Because WiFi is an open standard and an equipment-centered rather than service-centered market
(again, both of which flow from the nature of the
technology), costs are subject to computer industry
downward pressure. A WiFi access point that cost
hundreds of dollars when introduced is available
for less than $100 today. Chipsets are down in the
$10 range, allowing laptop, personal digital assistant (PDA), and mobile phone vendors to incorporate them with little or no price increase for the
overall system. According to Intel CTO Patrick
Gelsinger, a WiFi network costs half as much per
user per month to operate as a DSL connection,
and as little as one-tenth as much as a third-generation cellular network.33
The WiFi market wouldn’t have taken off without standards – both the technical ones defined by
the IEEE and the interoperability testing and certification done by the Wi-Fi Alliance (formerly
WECA), an industry trade group. There is a need
for such industry standards because there is an
entire industry of vendors, rather than one service
provider and its chosen suppliers, operating in the
WiFi universe. Now that the standards are in
place, the market can take advantage of contributions from many competing vendors.
WiFi is now experiencing the next phase of
development that can occur with dynamic wireless
systems. It is evolving and
diversifying. The IEEE
“[T]he unlicensed bands
has already extended the
employ a commons
original 802.11b with several variants which will be
model and have enjoyed
discussed later. Meanwhile,
start-ups are offering new
tremendous success as
kinds of devices that add
functionality to the origihotbeds of innovation.”
nal short-range WiFi
access points. Vivato, for
— FCC CHAIRMAN MICHAEL POWELL
example, has developed a
smart, phased-area antenna
technology that can extend the range and capacity
of WiFi signals, while remaining completely backward compatible with existing equipment.
LocustWorld in the UK is shipping 802.11 mesh
networking boxes that automatically create ad hoc
mesh networks with each other. Vendors such as
Engim and BroadBeam are offering WiFi switches
that increase capacity of WiFi infrastructure.
Sputnik is shipping low-cost yet powerful self-configuring access points. Companies such as Telesym
and Vocera are running voice communications over
WiFi, opening up whole new market opportunities.
In a traditional static wireless system, changing
the network technology means upgrading the
whole network. Not just end-user devices but all
the core transmission elements must be upgraded.
The costs and time frames involved parallel those
of deploying the system the first time. The transition from analog to digital television (DTV) is a
perfect example. DTV technology has been commercially available for more than a decade. In the
U.S., the formal transition to DTV began in 1996,
and still only a tiny handful of customers can
receive DTV broadcasts. In Japan, high-definition
TV (HDTV) service has been commercially available since the 1980s, but Japan chose an analog
standard. Improving digital technology is now
considered the best way to deliver HDTV, and
Japan had to effectively start the entire process
over again.
Compare that to the transition from orphaned
WLAN standards. Two standards that competed
with 802.11, Europe’s Hiperlan for high-speed
WLANs and the HomeRF standard for home networking, have lost out in the marketplace. Though
some equipment has been orphaned, most vendors
have quickly switched to offering 802.11 products.
The Unlicensed World
Unlicensed wireless is far more than
WiFi. Dynamic techniques for efficient
sharing of the spectrum, combined with
the open field for unlicensed innovation,
are creating an explosion of new systems, techniques, and business models.
The Spectrum of
Spectrum-Use Regimes
“Licensed” and “unlicensed” are generally presented as the two models for
spectrum usage. There are actually
several variations, not entirely mutually
exclusive. Put another way, the fact that
WiFi uses spectrum bands dedicated to
unlicensed usage doesn’t mean that is
the only mechanism for regulators to
create more space for unlicensed devices
and systems. Matheson’s electrospace
model, for example, provides seven
dimensions for sharing spectrum, with
frequency only one of the variables.34
Looking at the possibility of a spectrum commons purely as a matter of
what regime to mandate for specific frequencies misses the point. Dynamic
wireless techniques, and the systems
they make possible, ultimately erode the
very rationale for thinking of “the spectrum” as a physical asset that can and
should be divided into frequency
bands.35
For present policy purposes, however,
it can still be useful to list prominent
mechanisms for sharing spectrum. The
New America Foundation and other
public interest organizations have urged
the FCC to consider four basic spectrum use regimes: exclusive licensed,
pure unlicensed, shared unlicensed, and
opportunistic unlicensed.36
Exclusive Licensed
Most spectrum is exclusively licensed
today. Licensees may be carriers, broadcasters, specialized mobile radio, corporations, the military, or public safety
agencies. The licensee may have
obtained its license for free, or through
a competitive mechanism such as an
auction. Regardless, it has exclusive control of the frequency band for a period
of years subject to the limitations of its license.
Other systems are prohibited from producing
harmful interference with the licensee. The broadcast television bands are good examples of exclusively licensed spectrum.
Some spectrum is licensed, but not for a single
user. Examples include the bands for the amateur
radio service, radio astronomy, and private
microwave systems. These bands resemble pure
unlicensed spectrum, in that one entity doesn’t have
total control. However, there are still restrictions on
who can transmit, and what kinds of services they
can provide. A technology that could increase
capacity or deliver a new kind of service can’t be
used in these bands if it doesn’t meet those criteria.
Dedicated Unlicensed
The opposite regime is to have no exclusive licensees in the band. For example, the 2.4 GHz Industrial, Scientific, and Medical (ISM) band is open solely for unlicensed use. No devices that operate in that band—which range from WiFi access points to microwave ovens and cordless phones—can claim protection against interference from other approved devices in the band.

"Unlicensed" is actually something of a misnomer. It implies that the government has not made the spectrum available to licensees, when in fact the spectrum has been allocated and assigned like any other spectrum block. Instead of a service provider gaining the rights to control use of the spectrum, subject to limits set in the terms of the license, manufacturers gain the rights to sell devices that conform to FCC-designated standards. The devices themselves must be licensed, generally on a generic and often self-licensing basis known as "type acceptance."

An important point here is that unlicensed does not mean unregulated. In the 2.4 GHz band, for example, the FCC mandates power limits and other technical requirements. New kinds of equipment that use different techniques than those already licensed must receive direct FCC approval. For example, Vivato, a start-up that sells a novel "WiFi switch" based on phased-array antennas, received FCC approval in late 2002 for its technology.

FIGURE 10 – UNDERLAY SHARING: In underlay sharing, low-power, unlicensed devices share frequencies and avoid interference by operating beneath the noise threshold of high-power devices in the band.
Shared Unlicensed
Under certain circumstances, unlicensed devices
can share frequencies with licensed devices. This
arrangement is sometimes referred to as an easement, by analogy to real property law. It is also
known as “underlay,” because unlicensed devices
operate below the noise threshold of the high-power licensed devices in the band (see Figure 10).
There are two major examples of shared
unlicensed use in current FCC rules. The first,
Part 15, actually dates back to 1938. This section
of the FCC’s rules allows devices below a strict
power limit to operate in significant portions of the
spectrum. The severe power limits allow for only
very short-range devices, which do not produce
significant interference with licensed systems.
A more recent example of shared unlicensed use
is ultra-wideband (UWB). The FCC first authorized this technology in February 2002. It uses
transmissions spread across huge frequency ranges
at extremely low power, below the detectable noise
floor for other licensed devices in the same band.
Using spread spectrum techniques, UWB systems
are able to reconstruct messages and support high-speed transmissions even under such restrictive
conditions. By authorizing ultra-wideband for
much of the spectrum above 3 GHz, the FCC
effectively created a huge new unlicensed easement shared with licensed services.
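The arithmetic behind an underlay is straightforward: the same transmit power spread across a much wider band leaves far less power in any one slice of spectrum, which is what a narrowband licensed receiver sharing those frequencies actually notices. The powers and bandwidths in the sketch below are illustrative round numbers, not the FCC's actual emission limits.

    import math

    # Power spectral density (PSD) falls as the same power is spread over a
    # wider bandwidth. The figures below are illustrative only.

    def psd_dbm_per_mhz(power_mw, bandwidth_mhz):
        """Average power spectral density in dBm per MHz of occupied bandwidth."""
        return 10 * math.log10(power_mw / bandwidth_mhz)

    narrowband = psd_dbm_per_mhz(100.0, 1.0)      # 100 mW squeezed into 1 MHz
    spread = psd_dbm_per_mhz(100.0, 80.0)         # the same power across 80 MHz
    ultrawide = psd_dbm_per_mhz(0.1, 7500.0)      # a tenth of a milliwatt across 7.5 GHz

    print(f"narrowband     : {narrowband:6.1f} dBm/MHz")
    print(f"spread spectrum: {spread:6.1f} dBm/MHz ({spread - narrowband:+.0f} dB)")
    print(f"ultra-wideband : {ultrawide:6.1f} dBm/MHz ({ultrawide - narrowband:+.0f} dB)")

Spread across several gigahertz, even a readily detectable total power leaves so little energy in any single licensed channel that a conventional receiver registers it as, at most, a slight rise in background noise.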
Opportunistic Unlicensed
Opportunistic sharing means taking advantage of
unused spectrum in licensed bands. As noted previously, the official frequency chart paints a misleading picture of how spectrum is actually used. Some
frequencies are allocated but not assigned to any
user. For example, they may be set aside as “guard
bands” between licensed frequencies for older services such as broadcast television. The guard bands
were necessary because the licensed equipment
wasn’t sophisticated enough to distinguish signals
otherwise. However, today’s smart unlicensed
devices could transmit in the guard bands without
impinging on the licensed services.37
In other cases, spectrum may be assigned but
not used in a particular area or for a particular
period of time. For example, a cellular phone
transmission tower is only active when communicating with a handset nearby. When no user is in
range, the spectrum is temporarily available.
Other frequencies, licensed nationally, may be
used in New York City but not at all in Montana.
“Cognitive radios” could detect such holes in the
spectrum, switch communications there, and then
move away as soon as the licensee began transmitting (see Figure 11). Furthermore, as the electrospace model shows, there are many ways to slice
the spectrum pie. An “angle of arrival” system, for
example, can opportunistically use “terrestrial”
spectrum in bands licensed for communication
with orbiting satellites overhead.
There is no reason to believe that all the possible mechanisms for opportunistically sharing
spectrum have been discovered or implemented.
As wireless systems become more dynamic and
more intelligent, they will be capable of coexisting
in new ways.
FIGURE 11 – OPPORTUNISTIC SHARING: In opportunistic sharing, unlicensed devices detect and access licensed spectrum that is not currently in use, and then move away as soon as licensees begin transmitting.
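Figure 11 can be caricatured in code as a grid of time slots and channels: in each slot the unlicensed device transmits in whatever channel the licensees have left idle, and defers when there is no hole. The occupancy pattern below is invented for illustration; a real device would be sensing continuously rather than working from a known schedule.

    # Opportunistic sharing as a time-by-frequency grid. True means a licensed
    # transmission occupies that channel during that slot. The pattern is invented.

    CHANNELS = ["channel A", "channel B", "channel C"]
    OCCUPIED = [
        {"channel A": True,  "channel B": False, "channel C": True},   # slot 0
        {"channel A": True,  "channel B": True,  "channel C": False},  # slot 1
        {"channel A": False, "channel B": True,  "channel C": True},   # slot 2
        {"channel A": True,  "channel B": True,  "channel C": True},   # slot 3
    ]

    for slot, usage in enumerate(OCCUPIED):
        holes = [ch for ch in CHANNELS if not usage[ch]]
        if holes:
            print(f"slot {slot}: transmit on {holes[0]}")
        else:
            print(f"slot {slot}: no idle channel, stay quiet")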
Current Unlicensed Products
Though the WiFi name is getting a tremendous amount of attention and, as a result, has been expanded to include standards other than the original 802.11b, it is important to keep in mind that WiFi is not a synonym for unlicensed wireless. WiFi is a local-area networking protocol. It delivers data, such as Internet connectivity and email, across links of no more than a few hundred feet.

Local Area Networks (802.11)
802.11 refers to the Institute of Electrical and Electronics Engineers (IEEE) working group for wireless Ethernet networking. The IEEE defines technical standards, but does not certify compliance with those standards. In parallel, industry associations such as the Wi-Fi Alliance create brand names which vendors are permitted to use if they meet compatibility requirements. The term WiFi, a play on HiFi stereo systems, is such a brand name.38 Originally referring only to 802.11b, WiFi now encompasses 802.11a, b, and g.

There are three widely deployed 802.11 technologies:

❚ 802.11b – The original WiFi, providing 11 Mbps connections using direct-sequenced spread spectrum modulation in the 2.4 GHz frequency band.

❚ 802.11a – A higher-speed standard delivering 54 Mbps connections, but using different spectrum (5 GHz) and modulation (orthogonal frequency division multiplexing, OFDM) than 802.11b. As a result, 802.11a systems are not backward compatible with 802.11b, and require separate radios.

❚ 802.11g – A backward-compatible high-speed standard, delivering 54 Mbps through OFDM like 802.11a, but using the 2.4 GHz spectrum.
TABLE 1 — MAJOR UNLICENSED WIRELESS STANDARDS

802.11b (WiFi)
  Range: 300 feet | Capacity: 11 Mbps | Spectrum: 2.4 GHz
  Representative companies: Chipsets: Intersil, Agere, Cisco, Intel; Equipment: Cisco, Proxim, Netgear, Vivato, Apple; Services: Boingo, Cometa
  Comments: Primary wireless LAN market today

802.11a (WiFi)
  Range: 150+ feet | Capacity: 54 Mbps | Spectrum: 5 GHz
  Representative companies: Major 802.11b vendors plus Atheros, Bermai
  Comments: Useful for corporate networks, backhaul, and media applications

802.11g (WiFi)
  Range: 300 feet | Capacity: 54 Mbps | Spectrum: 2.4 GHz
  Representative companies: Major 802.11b vendors plus Broadcom
  Comments: Backward-compatible with 802.11b devices

802.15.1 (Bluetooth)
  Range: 300 feet | Capacity: 1 Mbps | Spectrum: 2.4 GHz
  Representative companies: Ericsson, Nokia, Intel, Toshiba, Microsoft, 3Com, Motorola
  Comments: Originally designed for cable replacement; market niche unclear

802.15.3a (WiMedia)
  Range: 30 feet at 110 Mbps or 12 feet at 200 Mbps | Capacity: 110 and 200 Mbps | Spectrum: Wideband (3.1–10 GHz)
  Representative companies: XtremeSpectrum, Motorola, TI, TimeDomain, Philips
  Comments: High-bitrate personal area networking for media devices

802.15.4 (Zigbee)
  Range: 200 feet | Capacity: 250 kbps | Spectrum: 900 MHz, 2.4 GHz, or wideband
  Representative companies: Philips, Honeywell, Mitsubishi, Motorola
  Comments: Low-bitrate personal area networking for sensors

802.16 (WiMax)
  Range: 30 miles | Capacity: 70 Mbps | Spectrum: 10-66 GHz for 802.16; 2-10 GHz for 802.16a
  Representative companies: Motorola, Alvarion, Proxim, Fujitsu, Aperto
  Comments: Broadband metropolitan-area network connections

802.20 (MobileFi)
  Range: 15 km | Capacity: 1 Mbps | Spectrum: 3.5 GHz
  Representative companies: Cisco, Flarion, HP, Nextel
  Comments: Mobile wireless Ethernet, currently envisioned for licensed spectrum, but may evolve
The remaining alphabet soup of IEEE 802.11 standards consists mostly of variants of these protocols.
For example, 802.11e, based largely on technology
developed by Sharewave (now part of chip vendor
Cirrus Logic), adds quality of service mechanisms
to better support video and voice traffic. 802.11i
adds a new security protocol to the relatively
ineffective WEP encryption in 802.11b. 802.11j is
a WLAN protocol for the 4.9 GHz to 5 GHz
unlicensed spectrum in Japan.
Metropolitan-Area Networks and Last Mile (802.16)
Metropolitan-area networks (MANs) operate over
longer distances than LANs, typically a mile or
more. They are designed to provide relatively high
bandwidth to a moderate number of fixed sites,
such as homes and businesses, compared to LANs,
which connect individual devices. One MAN application is the so-called “broadband last mile,” which
can substitute for wired solutions such as digital
subscriber lines and cable modems.39 However,
MAN systems are also used to provide “backhaul”
connections from such last-mile networks to central aggregation points, point-to-point connections
between facilities, or coverage throughout a
campus or other geographically defined facility.
Increasingly, WISPs and non-profit community
access networks are creating MANs using line-of-sight relays on unlicensed spectrum (at 5 GHz) and
WiFi (at 2.4 GHz, for the last few hundred feet) to
offer affordable last-mile connections in rural and
low-income areas.40
The IEEE has established a standards group,
802.16, for wireless MANs. The original 802.16
specification was designed for very high frequencies, over 10 GHz. A more recent subgroup,
802.16a, is crafting MAN standards for 2-10 GHz
frequencies that, unlike the original standards,
don’t require line-of-site visibility. 802.16 envisions
systems delivering 70 Mbps of data over a 30 mile
range. An industry alliance called WiMax, including Intel, Proxim, Fujitsu, Alvarion, Aperto, and
Nokia, has been formed to promote and ensure
interoperability of wireless MANs.
Though designed as LAN technologies, 802.11a
and b are being used by some companies such as
Etherlinx as the foundation for MAN systems.
Generally these systems use the commodity 802.11
physical layer and devices, adding their own media
access control (MAC) layer to boost range.
Another option, which several service providers are
reportedly considering, is to use standard 802.11
devices with enhanced access points from companies such as Vivato that boost the effective range.
Several companies including Motorola (with its
Canopy system), Magis Networks, Proxim, IP
Wireless, Navini, BeamReach, Aperto, Soma Networks, and Alvarion offer proprietary products that
are similar to 802.11b and 802.11a. Typically these
systems offer better performance, reliability, or
features such as security that aren’t well implemented in the 802.11 standards. Today, most of
them operate in the 5 GHz band and focus on
markets such as last-mile residential and small-business connectivity, especially in rural or otherwise underserved areas. Many of these companies
are part of the 802.16 effort. It can be expected
that, as with WiFi, many proprietary systems will
eventually become standards-compliant. A new
industry association, WiMax, hopes to do for unlicensed wireless MANs what WiFi did for LANs.
Other companies are taking a different route to
deliver wireless MAN connectivity. Instead of
long-range MANs sending data directly to customers, they envision wireless mesh networks using
short-range links to cover neighborhoods. One
start-up, Skypilot, plans to use standard 802.11b
radios in the home connected to rooftop units
based on 802.11a with added mesh networking
software. Omnilux plans to use free-space optics
technology, combined with mesh networking.
Free-space optics uses lasers that operate in the
visible light range of the spectrum, above the radio
frequencies. It is therefore technically outside the
scope of FCC licensing, which applies only to
communication “by wire or radio.”
Personal-Area Networks (802.15)
Moving in the opposite direction from MANs,
personal-area networks (PANs) are designed for
very short-range connections, no more than a few
dozen feet. WiFi can cover these distances, but
because WiFi devices are designed to serve larger
areas and provide relatively high-bandwidth connections, they require more power and have higher
equipment costs than would be necessary for close-in, low-speed tasks such as communicating
between a mobile phone and a headset.
PAN applications are essentially “cable replacement.” They are tasks that people perform today
by stringing wires between devices, but that could
be done with more freedom if the wires weren’t
necessary. These include scenarios such as printing
from a laptop computer to a nearby printer, sending voice signals between a cordless phone and a
base station, and pulling a contact from a PDA to a
mobile phone. There are also high-capacity cable
replacement tasks involving rich media, such as
sending music between an Internet-connected
home server and a home theater or stereo system.
The standards body for PANs is IEEE 802.15.
Bluetooth was the first prominent PAN standard,
incorporated into IEEE 802.15.1. Based on
technology originally developed by Ericsson, it is
supported by a private industry standards body that
now has more than 2000 members. Bluetooth provides approximately 1 Mbps connections over 30
feet by automatically creating network clusters of
nearby devices.
Bluetooth received a great deal of media
attention when the consortium was first
announced, because of its heavyweight backers and
excitement about the potential of all things wireless
at the time. However, interoperability issues and
questions about where Bluetooth really fits in the
market have limited adoption. Bluetooth mobile
phones, PDAs, and laptops are now available, and
costs are coming down as volumes increase.
Even shorter range and lower speed than Bluetooth is Zigbee (802.15.4). The protocol, originally
developed by Philips, is optimized for applications
such as distributed sensor networks, which must
only send a few kilobits of data at a time. Zigbee,
like Bluetooth, is currently supported by an alliance
of companies including Honeywell, Mitsubishi, and
Motorola. It works on various unlicensed frequencies, providing 20-250 kbps connections over 30
to 200 feet. The big advantage of Zigbee is its low power consumption and low cost, which are
essential for remote monitoring and sensing.
Ultra-wideband (UWB) is more than a PAN
technology. However, its initial applications in the
communications market are for short-range, PAN-type uses. UWB is a form of spread-spectrum
transmission that uses such a wide band, and such
low power, that it can “underlay” with licensed
users in the same band. The UWB signal appears
as background noise to other transmitters. The
nature of the technology gives it several other
advantages, including very low power consumption, security, and penetration of walls. Until the
FCC’s decision in early 2002 to legalize UWB for
communications, its primary application was for
ground-penetrating radar and military uses.
Today, companies such as XtremeSpectrum and
Time Domain are building UWB chipsets targeting short-range, high-bandwidth data applications.
UWB systems can deliver 100 Mbps or more over
short distances, which makes them ideal for uses
such as streaming audio or video between media
devices within the home. Several companies
including HP, Kodak, Philips, Motorola, Samsung,
Sharp, Time Domain, and XtremeSpectrum have
created the WiMedia Alliance to promote this
application. Though UWB was late to the PAN
party because of its recent approval, it has recently
been gaining adherents within the 802.15 group.
Most of the proposals for the forthcoming
802.15.3a standard, a higher-speed PAN protocol,
involve some form of UWB.
Success Stories
Today’s WiFi Markets
There are four major WiFi markets today: home
networking, corporate/campus networking, commercial hotspots, and public access.
HOME NETWORKING means sharing an Internet connection, or a peripheral such as a printer, among more than one PC. Vendors such as Proxim, Netgear, Linksys (now part of Cisco), D-Link, and 2Wire have sold millions of access points and cards to end-users for this purpose. Broadband service providers are now getting into the game, recognizing that there is significant demand for home networking as an add-on to high-speed Internet access.

CORPORATE OR CAMPUS ENVIRONMENTS (in both the business and university sense) are slightly different than homes. Except for very small offices, multiple access points are required to cover the facility. These customers generally want security, authentication, and management capabilities to operate the WiFi network in conjunction with their existing wired networking infrastructure. All the major networking vendors, such as Cisco, Lucent, and Nortel, now have substantial corporate WiFi customer bases.
HOTSPOTS are access points available to anyone within
a location, such as an airport, a café, or a hotel
lobby. Sometimes the hotspots require a fee for
access. Vendors such as Wayport and Mobilestar
(now T-Mobile) deploy hotspots in locations that
receive significant foot traffic, as both a moneymaker and an incentive for more traffic. The best-known hotspot deployment is at Starbucks coffeehouses, now operated by T-Mobile. Aggregators
such as Boingo and iPass allow users to pay one fee
and access multiple networks of hotspots.
NYCwireless: Evolution of a
Wireless User Group
To followers of the wireless broadband revolution,
New York has been a hotspot of activity among U.S.
cities, and the NYCwireless user group has been leading the movement. The group began as an informal
network of early WiFi adopters who placed access
points on their apartment windows to share their
broadband connections with the public parks below
their buildings. As the trend gained acceptance, the
users organized to form NYCwireless, a non-profit,
volunteer organization, to encourage others to share
their broadband and foster an ethic of free public
Internet access across the city.
“New Yorkers live in cramped quarters, and our
goal has been to get people out of their apartments
and into the public parks,” says NYCwireless volunteer Dustin Goodwin. The group considers ubiquitous
broadband access to be a public amenity equivalent
to streetlights or water fountains. However, it’s difficult, if not impossible, to provide public parks with
wired broadband access because of construction
impediments on historic or public land. Cheap, and
easily installed WiFi technology allowed apartment
dwellers with good line-of-sight to their parks to
install the 802.11b transmitters and address the
problem for themselves.
Volunteers from NYCwireless have built networks
in Bryant Park, Bowling Green Park, and Tompkins
Square Park, among others. This past year, founding
members of the group formed a consulting firm,
Emenity, to deploy six more public hot spots in lower
Manhattan for the NYC Downtown Alliance. The new
company was started to provide service to commercial clients, but their mission of building public access
networks remains intact.
Emenity has recently built a public network in
Union Square Park. This project is unique in that it
relies on a wireless backhaul to connect to the Internet provided by the commercial wireless broadband
provider TowerStream. Most public access points
in the city ultimately access the Internet via a DSL
Internet connection.
The efforts of NYCwireless have not gone unnoticed by broadband service providers. Some providers
have slapped “acceptable use” clauses on their subscriber contracts in an effort to discourage wireless
bandwidth sharing. One large cable operator has
been accused of sending out a WiFi “sniffer” to scour
the city in search of access points leading back to
their customers’ connections to close down the
transmitters.
However, as WiFi use has reached a critical mass,
more broadband providers are trying to enter the
public space arena. Speakeasy, Inc., a national DSL
reseller, now offers “WiFi Netshare,” a service that
allows users to resell their broadband connections to
neighbors, with Speakeasy handling the billing. And
Verizon DSL has built a number of hotspots in New
York that are free to their DSL home subscribers.
NYCwireless volunteer Dustin Goodwin sees the
commercial efforts to deploy WiFi in the city as a
direct response to NYCwireless’ success. While some
see the entrance of commercial players into the
public space as a threat to free access, others see
the development as an important step to recognizing
WiFi as a free public amenity that companies and
organizations should provide as a value-added service
to their constituents.
Now that wireless broadband has gained a foothold
in New York City parks, Goodwin says that NYCwireless is expanding its mission to resemble a volunteer
“Geek Corps” for communities without affordable
broadband Internet. NYCwireless volunteers have
trained residents of a community housing organization
to build and maintain their own wireless network,
which will provide more than 50 residents with private, high-speed connections. This effort is part of a
growing trend among wireless community groups
across the country to bring affordable broadband to
underserved communities. – Matt Barranca
The spread of hotspots has been remarkable.
Boingo now has more than 1,200 nodes on its network. T-Mobile operates 2,300 hotspots, including
Starbucks coffeehouses, Borders bookstores, American Airlines Admirals Clubs, and terminals at fifteen airports in North America. It has announced
plans to put hotspots in the more than 1,000
Kinkos copy shops throughout the U.S.
This is only a fraction of the total. One Website
lists more than 5,000 hotspots worldwide, including both commercial and community nodes.41 And
that’s just the beginning. According to Pyramid
Research, 1,000 hotels offer WiFi access today,
mostly in lobbies and meeting rooms, and 25,000
will have it by 2007.42 Cometa, a joint venture
funded by AT&T Wireless, IBM, Intel Capital,
and venture capital firms 3i and Apax Partners,
plans to build 20,000 hotspots using a wholesale
model, with its first customer being McDonald’s.
In addition to the access points intended for
public consumption, most private WiFi nodes do
not use any security mechanism, making them
available to anyone within range. A map of Manhattan prepared last year by the Public Internet
Project, whose volunteers systematically drove
through the streets with WiFi-sniffing equipment,
shows public and home access points already covering most of the island (see Figure 12). Preliminary results from the group’s 2003 survey suggest
WiFi density has become significantly greater since
the original map was published.
PUBLIC ACCESS means providing free connectivity to
users within a particular area. Sometimes this is
done by private groups sharing their own networks
or promoting the concept of ubiquitous wireless
connectivity. In other cases the access is funded by
public organizations, non-profits, or corporations
as a type of civic amenity.
In all, the number of regular wireless LAN users
is expected to grow sevenfold in the next four
years, from 4.2 million to 31 million, according to
the Gartner Group.43
Independent Community Access Points
There are dozens of community WiFi organizations in cities throughout the United States, and around the world. Typically, these groups are early users of WiFi technologies. They come together in physical meetings and through online discussions to share experiences, ask questions, and experiment with new technologies. In many cases, they install their own WiFi infrastructure, with access open to all. They deploy these hotspots where they can or want to, rather than follow some master plan.

FIGURE 12 – WIFI ACCESS POINTS IN MANHATTAN

The most active community WiFi groups include SF Wireless in San Francisco; NYCwireless in New York; SeattleWireless; and the Personal Telco Project in Portland, OR. The infrastructure is typically contributed by members,
though increasingly these community groups have
formed partnerships with local businesses and community development organizations.
Hotspots as Civic Amenities
In several cities, hotspot deployments are being
funded by civic organizations or corporate sponsors as civic amenities, like parks or playgrounds.
In Manhattan, Intel and the Bryant Park Restoration Corporation supported a project by NYCwireless to establish a WiFi network in Bryant Park, a
popular outdoor gathering place in midtown. The University of Georgia has funded a network of WiFi hotspots covering all of downtown Athens, GA. In Long Beach, CA, the Long Beach Economic Development Bureau partnered with several local businesses to establish a WiFi network covering several downtown blocks, with plans to expand it throughout the city's business district.

For the civic groups involved, the costs of these WiFi networks are relatively minor, especially when businesses become involved and provide free Internet bandwidth and other services. Wireless connectivity is becoming a benefit that draws people into downtown areas.

A Community Access Model for the Last Mile

While the success of commercial WISPs has generated much attention, grassroots community access networks or CANs are the originators of the unlicensed movement. Most CANs are groups of like-minded individuals sharing a similar philosophy—that citizens should have open, inexpensive, and ubiquitous access to the Internet. Using affordable and easily installed WiFi technology, community members in Seattle, New York, Austin, San Francisco, Portland, Oregon, and Athens, Georgia have built expanding networks of independently maintained wireless access points.

Most CANs provide access to public spaces; however, some groups have made forays into residential space by connecting neighborhoods with centrally placed access points. One such organization is the Bay Area Wireless Users Group (BAWUG), an informal group of wireless early adopters who began mounting WiFi transmitters on the roofs of their homes to give neighbors free or shared-cost Internet connections via their DSL and cable lines. While the cable and phone companies didn't approve of the practice, consumers did and access points began popping up all over the city. There are now more than 25 BAWUG access points in the area.

But BAWUG has not stopped there. Under the leadership of Tim Pozar, a telecommunications engineer and one of BAWUG's founders, the group has launched the Bay Area Research Wireless Network (BARWN). BARWN is an active wireless network with a mission to discover the best technical solutions to bring wireless broadband to remote and economically disadvantaged communities.

BARWN has set up two centrally located access points atop San Bruno Mountain and Potrero Hill in south San Francisco, allowing anyone within an 8-mile radius to point a 2.4 GHz antenna at the BARWN towers to share the 11 Mbps of bandwidth they provide. Pozar says that a third public access point is soon to be installed on Yerba Buena Island in the San Francisco Bay, which will link to the East Bay and light up an underserved area called Treasure Island. All of these access points are constructed with non-proprietary equipment and open protocols to keep costs down and to learn what technologies can be most easily adopted by lower income communities.

As evidence of the network's stability and flexibility, BARWN is working with the City of San Francisco to use this network for public safety communications—such as earthquake or disaster response. Pozar says one application for the unlicensed service would be to provide streaming video of a disaster site to command centers to evaluate response tactics. – Matt Barranca
Internet Connectivity for Rural Communities
High-speed Internet connections are available to
most of the U.S. population today through digital
subscriber line (DSL) or cable modem service.
However, there are still tens of millions of Americans
who live in rural or otherwise underserved areas,
where such broadband offerings are not yet available. In some cases, they are unlikely to be available any time soon. Technically minded citizens in
some of these communities have seized upon unlicensed wireless as an alternative route to provide connectivity.

In Laramie, WY, a group of technologists led by Brett Glass established LARIAT, a non-profit community wireless network. It has been in operation since the mid-1990s, originally using pre-WiFi unlicensed equipment in the 900 MHz band. A similar effort is MagnoliaRoad.net, a cooperative in a rural part of Colorado that is offering WiFi connectivity to local residents who have no other good broadband option.

Revolution in the Rural Last Mile: Unlicensed Spectrum Closing the Technology Divide in Northern Virginia

Despite their proximity to Northern Virginia's Internet backbone, many towns in Loudoun County have no broadband access. The mountainous western regions of the county are far from the technology infrastructure of Northern Virginia where companies like AOL and VeriSign reside. However, because of license-exempt wireless activity, the technology divide across the county is starting to close.

After the technology bubble of the late 90s burst, Northern Virginia lost as many as 30,000 jobs. Many laid-off professionals accustomed to broadband connections at their work started their own businesses or began working from their homes, creating a large demand for high-speed home services. One start-up, Roadstar Internet, is trying to meet that demand with an expanding rural wireless network.

Started in the autumn of 2002, Roadstar Internet connects more than 150 rural households and small businesses relying only on unlicensed spectrum. Most wireless subscribers do not know exactly how their service operates. What matters most to users is not the technology behind the service, but that their connections are fast and reliable. The Roadstar network is similar to many other WISP efforts, using a combination of point-to-point connections for the long-distance transmissions, and point-to-multipoint transmissions to connect neighborhood access points to subscribers.

The first leg of the network travels 18 miles from a mountaintop transceiver using 5 GHz bands and OFDM (Orthogonal Frequency Division Multiplexing) point-to-point technology. OFDM transmissions make efficient, and secure, use of spread spectrum by dividing data into packets and encoding it over multiple frequencies, without requiring perfect line-of-sight.

Long distance point-to-point transmissions are the standard for rural WISPs seeking to reach larger population pockets. Under Part 15 rules for unlicensed usage, the FCC allows operators to make point-to-point connections without reducing Transmitter Power Output (TPO) for the 5.725 GHz and 5.825 GHz band. Because of this regulatory latitude for narrow beam transmissions, providers are able to reach long line-of-sight distances with relatively low power.

The Roadstar network makes final, last-mile connections within neighborhoods by using modified WiFi wireless access points mounted on customer silos, barns, and rooftops. WISPs are able to transmit distances greater than the 300-foot standard for WiFi technology by creating sectorized cells with high-gain, directional antennas. These last-mile connections on the 2.4 GHz band are the result of good planning and engineering, and typically reach two to three miles. – Matt Barranca
Meanwhile, Dewayne Hendricks of the Dandin
Group is spearheading efforts to provide wireless
Internet connectivity, using WiFi and other
technologies, on several Indian reservations in the
U.S. and Canada. More than 1,000 commercial
WISPs are providing similar wireless broadband
services, mostly in underserved rural areas across
the nation (see sidebar, page 34).44
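To make the sidebar’s point about narrow-beam links concrete, here is a minimal link-budget sketch in Python. The transmit power, antenna gains, and receiver sensitivity are assumed values for a hypothetical 5.8 GHz point-to-point link, not Roadstar’s actual equipment, and the calculation uses the standard free-space path loss formula while ignoring real-world factors such as terrain, foliage, and fade margin.

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def received_power_dbm(tx_power_dbm, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz):
    """Received signal level for a clear line-of-sight link."""
    return (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
            - free_space_path_loss_db(distance_km, freq_mhz))

# Hypothetical 18-mile (about 29 km) link at 5.8 GHz with narrow-beam dishes.
rx = received_power_dbm(tx_power_dbm=24,   # ~250 mW transmitter output
                        tx_gain_dbi=29,    # high-gain directional antennas
                        rx_gain_dbi=29,
                        distance_km=29.0,
                        freq_mhz=5800)
print(f"Received power: {rx:.1f} dBm")     # roughly -55 dBm
# A typical radio might need only about -80 dBm to hold a usable data rate,
# so a narrow-beam link of this length can close with margin at modest power.
```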
Internet Connectivity for Low-Income Areas
High-speed connectivity has important benefits for
low-income and underserved communities. Broadband Internet access opens the door to educational,
informational, job-related, benefits, health, and other materials. However, the costs of wiring low-income facilities such as public housing complexes have traditionally been prohibitive, given that most residents cannot afford to pay typical monthly broadband prices. WiFi is one answer.
In Boston, an MIT graduate student named
Richard O’Bryant led an effort to put free WiFi
hotspots in Camfield Estates, a 102-unit public
housing development in the Roxbury area, with
funding from HP and Microsoft. In Philadelphia, the
United Way is building two WiFi hotspots in the
poor section of West Philadelphia, which will offer
broadband Internet access for $5 to $10 per month.
It plans to give away computers and wireless cards to
people in the community who cannot afford them.
In Portland, OR, a non-profit called One Economy
is putting WiFi connections into three public housing developments, serving more than 500 residents.
Future Scenarios
Expanding the Space of Possibilities
We have only scratched the surface of
what dynamic wireless systems can do.
The growth of WiFi has been so
striking, its possibilities so exciting, that
WiFi has become virtually synonymous
with unlicensed wireless and open
spectrum. This is a mistake. WiFi is not
the culmination of the wireless story; it
is merely the end of the beginning.
WIFI HAS TWO GREAT LIMITATIONS: ITS
PROTOCOL, AND ITS SPECTRUM ENVIRONMENT.
The engineers who created the 802.11
family of protocols had no idea that WiFi
would take off the way it has, and would
be used in so many different deployment
scenarios. They were creating a wireless
Ethernet standard, parallel to the wired
Ethernet standard that is the basis for
most office computer networks today.
Thus, WiFi has limited range, and a
routing layer that isn’t particularly good
at mesh networking, quality of service,
interference management, security, or
many other functions that are important
for many of its potential markets.
At the same time, the government
regulators who established the 2.4 GHz
and 5 GHz unlicensed spectrum bands
where WiFi operates had even less idea
of what was coming. Especially for 2.4
GHz, they were looking at “industrial,
scientific, and medical” equipment, and
devices such as cordless phones. The
rules they created for managing those
bands have been highly successful, but
they were hardly designed to maximize
the potential. Based on our current
experience, we can design rules expressly
to promote efficiency and innovation
through unlicensed wireless technologies.
First and foremost, this means making
available more spectrum for unlicensed
uses, whether it be through dedicated
unlicensed bands, shared underlay
access, or opportunistic sharing. Second,
it means putting in place minimal rules
for that spectrum, which may be as little
as power limits, to foster an environment
of efficient cooperative development.
The wireless future, under any scenario, is likely to be marked by increasingly pervasive but non-uniform connectivity. No wireless technology, let alone one service provider, can address all the markets and deployment scenarios, from short-range low-bandwidth to long-distance broadband. Even if unlicensed systems succeed beyond anyone’s wildest dreams, there will be a need for licensed services for many years. Even if there is soon WiFi in every coffeehouse, getting it in every dry cleaner will take longer, as will getting it in every train and airplane.
The downside of such a heterogeneous environment is that everyone is not connected all the time, and any one system or technology will provide only a small percentage of what connectivity does exist. The natural impulse in communications is to try to build all-encompassing networks, but sometimes that isn’t the best approach. WiFi hotspots are spreading because they are cheap, funded by users or facility owners, and go where there is demand today. They don’t yet go where there isn’t demand, but the good news is that users need not pay for the extra cost of putting access points there. As software-defined radios mature, they will make it possible to stitch together some systems at the end-user device. In general, though, the real question is not how to provide ubiquitous wireless connectivity in the abstract, but how to address concrete needs and market opportunities.
The scenarios below represent examples of opportunities that unlicensed wireless technologies could address. They are relatively straightforward extensions of existing technology. Most, however, will require either spectrum reforms to expand the space available for unlicensed devices, or at the very least no regulatory actions that would hamstring unlicensed devices in the existing areas.

An Unlicensed Education: A Wireless Model to Connect Rural School Communities
While U.S. school districts have been issued the command to “leave no child behind,” many rural schools are without the resources to bring broadband Internet access into their classrooms. This last-mile problem presents hardships not only for schools, but also for local households and businesses unable to fully participate in the information economy. A public/private partnership has been formed in western Pennsylvania to use unlicensed spectrum and the social capital of local school districts to address the last mile on their own. The efforts of the Broadband Rural Access Information Network (BRAIN) have yielded great results connecting rural areas, and their example could provide a model for rural school communities across the country.
The BRAIN effort began with the vision of a school district superintendent, Andy Demidont, and the help of a large regional WISP, Sting Communications. Demidont wanted to provide high-speed access to Rockwood High School and Kingwood Elementary School in mountainous Somerset County. The schools’ existing dial-up accounts were expensive and delivered connection speeds barely surpassing 14 kbps.
Relying on technical guidance from Sting Communications, and using grant money awarded from the Individuals with Disabilities Act and E-Rate discounts, the school district installed wireless access points on the roofs of both schools, turning them into state-of-the-art wireless hotspots.
In total, Sting Communications installed three towers, creating a pie-shaped hot zone using the 5.8 GHz and 2.4 GHz license-exempt bands. The Rockwood High School gymnasium hosts a 100-foot tower that transmits to a 150-foot tower located at Kingwood Elementary School. The two towers share a narrow-beam, point-to-point connection with a third tower owned by the local Seven Springs Ski Resort.
Simply bringing the technology to the area wasn’t the end goal – using the network to connect the school with the community is the ultimate design of the project. Both schools have put many classroom operations online. Teachers use Palm Pilots and laptops to track student progress and record grades, which are available to parents online.
The project also gives community residents a chance to purchase access from the school’s network, with the school district serving as a WISP for the area. Sting has installed access points in many neighborhoods, and the company is offering subscription rates between $11 and $20 per month, depending on the number of subscribers the school can attract. Currently, thirty-five families have been connected, with an additional 65 families expected to be online in the coming months.
From the project’s onset, Sting hoped its school-based approach could be replicated in other rural communities. Building on what it has learned in Somerset County, Sting has built a much larger network in Cambria and Clearfield Counties to connect four more regional school districts. Sting vice president Bob Roland says that this new network spans an 1,100 square-mile area, and uses both 5 GHz frequency-hopping spread spectrum and 802.11 connections for the last mile.
For the next phase, BRAIN has applied for an additional $7.4 million grant from the USDA’s Rural Utilities Service to “light up” a wide corridor between central Pennsylvania and Maryland. If successful, this effort could provide a model for building a wide-area regional network, one school at a time. – Matt Barranca
The Last Wireless Mile
Broadband connectivity to homes is a topic of great
consternation in the communications industry today.
Telephone companies are deploying DSL and cable
TV operators are deploying cable modem systems.
However, many millions of Americans still have neither available to them, and a greater number have
only one option. Prices of these broadband services,
approximately $50/month, are high compared to the
rest of the world. These services are generally asymmetric, providing far more bandwidth down to the
user than up from the user to the Internet. Combined with terms-of-service restrictions, this architecture prevents users from running home servers and taking other actions. For business users, who typically need
higher bandwidth than homes, the only viable option
is often a traditional T-1 line at $1,000 per month or
more. There is thus great interest in alternatives.
Basic WiFi or its variants, 802.11a and 802.11g, cannot simply be put into service for last-mile deployments. WiFi is a short-range technology designed primarily for connections to a nearby hotspot. Even if every home in a neighborhood had a WiFi access point, few of those nodes would see one another and there would be no mechanism to link them together. Even if signals could reach a neighborhood access point, backhaul costs would be significant, because every access point would need a wired connection to a T-1 or larger circuit.
FIGURE 13 – WIRELESS LAST MILE: In one wireless last-mile scenario, a transmitter tower connected to the Internet transmits in a point-to-point connection to a community access point, which makes point-to-multipoint or mesh network connections to households.
We can, however, envision scenarios for
unlicensed wireless last-mile connectivity based on
technology that is in the market or likely to be soon.
For homes, a mesh network configuration could be
used to shorten link lengths, increase robustness
through alternate traffic paths, and address impediments such as trees. A range-extending technology
such as Vivato’s phased-array antennas could also be
employed to receive signals from a cluster of nodes
in a local area. For businesses or residences wanting
more bandwidth, an 802.16 wireless MAN technology could be used to deliver tens of megabits per
second over many miles. This same technology, or a
variant, could be used to reduce the costs of backhaul, replacing costly wired connections.
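To see how a mesh configuration shortens individual links and routes around impediments, the sketch below picks a multi-hop path between household nodes using Dijkstra’s algorithm over estimated link costs. The node names and costs are hypothetical, and real mesh products use their own routing metrics; the point is only that traffic can reach the gateway over several short, clean hops rather than one long, obstructed one.

```python
import heapq

def best_path(links, source, target):
    """Dijkstra's shortest path over {node: {neighbor: link_cost}}.
    Lower cost = better link (shorter hop, cleaner signal)."""
    queue = [(0.0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in links.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical neighborhood: the direct path from house A to the gateway is
# obstructed by trees (high cost), but two short hops through neighbors work.
links = {
    "house_a": {"gateway": 10.0, "house_b": 2.0},
    "house_b": {"house_a": 2.0, "house_c": 2.0},
    "house_c": {"house_b": 2.0, "gateway": 1.5},
    "gateway": {},
}
cost, path = best_path(links, "house_a", "gateway")
print(path, cost)   # ['house_a', 'house_b', 'house_c', 'gateway'] 5.5
```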
An end-user would buy a device, whether a
dedicated piece of wireless hardware, a broadband
“residential gateway,” or a piece of general-purpose
hardware such as a laptop. The device could be
designed to operate with a particular unlicensed
wireless network, it might be deployed by a service
provider, or it might have “discovery” capability to
automatically locate and connect with nearby
access points or wireless end-users.
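That discovery capability can be as simple as scanning for nearby access points and associating with the strongest usable one. The toy sketch below illustrates only that selection logic; the network names and signal readings are invented, and a real device would rely on its radio driver or operating system to perform the scan.

```python
# Toy model of access-point discovery: pick the strongest usable open network.
scan_results = [
    {"ssid": "community-net",  "signal_dbm": -62, "open": True},
    {"ssid": "coffeehouse",    "signal_dbm": -48, "open": True},
    {"ssid": "private-office", "signal_dbm": -40, "open": False},
]

def choose_access_point(results, min_signal_dbm=-75):
    """Return the open network with the best signal above a usability floor."""
    usable = [r for r in results if r["open"] and r["signal_dbm"] >= min_signal_dbm]
    return max(usable, key=lambda r: r["signal_dbm"]) if usable else None

print(choose_access_point(scan_results))  # picks "coffeehouse" at -48 dBm
```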
It is typically assumed today that last-mile broadband networks are designed to provide access to the
Internet (see Figure 13). Certainly, any broadband
customer will want to access Internet-based services
available through the World Wide Web, as well as
global email, instant messaging, and other applications. However, these are not the only things that
an unlicensed last-mile wireless network could
deliver. There is often value in online communications within a community, especially when that
community has not previously had a high-speed,
always-on network. These range from intra-community email to school bulletin boards to decisions
of the city planning board. They could be called community intranet applications. Because the traffic is local, there is no need to connect to a backbone provider. In fact, there may not be a need for any provider at all. If users provide their own wireless nodes, and no node is overwhelmed with traffic, the community-wide network would function peer-to-peer, much as local area networks in businesses do today.

“I am hopeful that unlicensed operations will…eventually provide a last-mile application to connect people’s homes to the Internet, offering a real alternative to telephone wires, cable, and satellite connections.”
— FCC COMMISSIONER KEVIN MARTIN

When the network needs to connect to the Internet, an economic problem emerges. Internet backbones charge for use based on bandwidth consumed. Even if Internet-based services and content
are a minority of traffic on a wireless community
last-mile network, there still must be backhaul connections to the Internet, and these must be paid for.
There is a “free-rider” problem if the node connected to the backbone must bear all the costs,
independent of whether there is any congestion
across the unlicensed wireless links between the
community nodes. It may be necessary to develop
some sort of pricing mechanism for end-users in
such a situation. However, pricing could be implemented in several ways. It does not require central
providers or per-packet settlement charges. For
example, a cooperative could collect dues from all
community members and apply those to the backhaul charge, with some limits on each user’s bandwidth to prevent free riding.
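One way to see how such a cooperative arrangement could work is the small sketch below, which splits a shared backhaul bill evenly among members and adds a surcharge for anyone exceeding an agreed transfer cap, the “limits on each user’s bandwidth” mentioned above. The bill, cap, and surcharge figures are invented for illustration.

```python
def monthly_dues(backhaul_cost, usage_gb, cap_gb=20, overage_per_gb=1.50):
    """Split the shared backhaul bill evenly, then add a surcharge for
    members whose transfer exceeded the agreed cap (to deter free riding)."""
    base = backhaul_cost / len(usage_gb)
    bills = {}
    for member, used in usage_gb.items():
        overage = max(0.0, used - cap_gb)
        bills[member] = round(base + overage * overage_per_gb, 2)
    return bills

# Hypothetical month: a $300 backhaul contribution shared by four households.
print(monthly_dues(300.0, {"alice": 12, "bob": 45, "carol": 8, "dave": 19}))
# {'alice': 75.0, 'bob': 112.5, 'carol': 75.0, 'dave': 75.0}
```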
Interoperable Public Safety Communications
Today, public safety agencies use many different
wireless communications systems. Many of them use
outdated technology. Few if any can talk to one
another. In an emergency, if the fire department can’t
communicate with the police, the consequences
could be disastrous. On September 11, 2001, firefighters were trapped in the World Trade Center
because they were unable to learn from other public
safety officers outside that the buildings were about
to collapse. Stories abound, from 9/11 and other
times, of firemen using commercial mobile phones
because they had better performance and a wider
audience than their expensive private radios. And
when these networks go down, everything goes
down with them. Unfortunately, public safety organizations are saddled with many legacy communications systems that are costly and difficult to upgrade.
As software-defined radios (SDR) mature, they
could replace the cacophony of devices with a single
set of devices. One phone handset, PDA, or laptop
could tap into any of the existing systems. A firefighter arriving on the scene could instantly check
police communications as well as data transmissions
providing essential information directly from dispatchers, such as building maps. Such a system would
require robust security and authentication mechanisms, but these could also be built into the devices.
The result would be both lower costs and more
effective systems for critical public safety services.
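To illustrate how a single software-defined device could stand in for that cacophony of radios, the sketch below treats each agency’s legacy system as a software “waveform profile” the handset loads on demand. The profile names, frequencies, and modulations are hypothetical, and a real public safety SDR would add the security and authentication mechanisms the text describes.

```python
from dataclasses import dataclass

@dataclass
class WaveformProfile:
    """Software description of a legacy radio system an SDR can emulate."""
    name: str
    freq_mhz: float
    modulation: str

# Hypothetical catalog of local public-safety systems.
PROFILES = {
    "city_police": WaveformProfile("city_police", 460.125, "analog FM"),
    "county_fire": WaveformProfile("county_fire", 154.280, "analog FM"),
    "ems_data":    WaveformProfile("ems_data",    770.000, "P25 digital"),
}

class SoftwareRadio:
    def __init__(self):
        self.active = None

    def tune_to(self, system: str):
        """Reconfigure the radio in software instead of swapping hardware."""
        self.active = PROFILES[system]
        print(f"Loaded {self.active.name}: {self.active.freq_mhz} MHz, "
              f"{self.active.modulation}")

radio = SoftwareRadio()
radio.tune_to("county_fire")   # firefighter arrives on the scene
radio.tune_to("city_police")   # same handset monitors police traffic
```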
Adaptive Mobile Phones
Mobile phone networks today are self-contained
entities. In the U.S., for example, there are six competing national networks. Each has its own network
of transmission towers. If you are within range of
five of those towers, but not the one for your
service provider, you won’t get service. Things are a
little better in Europe, where universal adoption of
the GSM standard allows for more roaming agreements between carriers, but each carrier still must
maintain its own complete network.
As mobile phones evolve into SDR devices, the
structure of the business may change. Carriers
will be able to share infrastructure much more
widely, because their subscribers will be able to
transparently access transmissions from whatever
tower is closest to them, regardless of what frequency band or encoding mechanism it uses.
Further, there are an increasing number of local connectivity points, such as WiFi hotspots, that are separate from the wide-area wireless networks. When a user is within range of a hotspot, adaptive mobile phones could transparently shift communications from voice transmissions over the mobile networks to voice-over-IP or packet data connections through the WiFi infrastructure. Tapping these local nodes would reduce costs, avoiding the need to send data long distances over wireless networks, and would give users the maximum possible capacity, since WiFi hotspots tend to offer substantially greater bandwidth than wide-area data networks.
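A minimal sketch of that handoff decision appears below. The signal thresholds and labels are assumptions chosen only to illustrate the logic of preferring a usable hotspot and falling back to the wide-area network; a real adaptive handset would also weigh call continuity, authentication, and quality of service.

```python
def choose_bearer(wifi_signal_dbm, cellular_signal_dbm,
                  wifi_usable_dbm=-70, cellular_usable_dbm=-95):
    """Pick the radio bearer for a voice or data session.
    Prefer a usable WiFi hotspot (cheaper, higher capacity);
    fall back to the wide-area cellular network otherwise."""
    if wifi_signal_dbm is not None and wifi_signal_dbm >= wifi_usable_dbm:
        return "wifi_voip"
    if cellular_signal_dbm is not None and cellular_signal_dbm >= cellular_usable_dbm:
        return "cellular_voice"
    return "no_service"

print(choose_bearer(wifi_signal_dbm=-58, cellular_signal_dbm=-85))   # wifi_voip
print(choose_bearer(wifi_signal_dbm=None, cellular_signal_dbm=-90))  # cellular_voice
```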
Think Globally, Act Locally: Two WISPs Come of Age
At the onset of the unlicensed movement, most Wireless ISPs (WISPs) were scaled-up WiFi hotspots built from adapted, off-the-shelf 802.11b equipment. But unlicensed devices have advanced beyond WiFi to include frequency-hopping, non-line-of-sight transmitters and other technologies, such as high-gain directional antennas that reach distances well over 20 miles. The most successful WISPs have outgrown the WiFi model and evolved into sophisticated wide-area networks with a high standard for service, security, and stability.
AMA*TechTel Communications of Amarillo, Texas is one example. With more than 4,000 users on their license-exempt network, AMA is one of the country’s largest regional carriers of wireless broadband. AMA’s 63-tower deployment is a 20,000 square-mile, contiguous network providing secure service to numerous towns, three college campuses, multiple school systems, hospitals, and banks.
AMA relies on equipment manufactured by Alvarion to access the 900 MHz, 2.4 GHz, and 5 GHz unlicensed bands for backhaul and point-to-multipoint connections. The network’s expansiveness is due to AMA’s partnership with Attebury Grain, a large grain storage company that initially contracted AMA to wirelessly connect their grain elevators to the commodities market. After the project, the two companies partnered, using Attebury’s numerous grain elevators as transmitter towers to provide service to local towns and businesses.
While AMA’s wide-area deployment may be a daunting example for WISP entrepreneurs, this ambitious network isn’t the only successful model. Hundreds of start-up WISPs have relied on skillful engineering to build smaller networks for remote communities. Prairie iNet, of Des Moines, Iowa, has used this localized approach to build a network of unlicensed wireless oases in more than 120 rural communities in Illinois and Iowa.
Neil Mulholland, Prairie iNet CEO, estimates that to reach their standard for “carrier class” service, the company invests $100,000 in each population center for base station, customer premise equipment, and a wireless link to their DS-3 Internet connection. Ideally, the company covers their investment in each local network after the first 75 customers. The company has more than 4,000 subscribing customers in total.
Both models – AMA’s wide-area design and Prairie iNet’s localized networks – present varying benefits. Regional utility companies can learn from AMA and leverage their tower infrastructure to provide virtual private networks for campus clients and basic service for their residential customers. The East Bay Municipal Utility District in Oakland, California has installed the Motorola Canopy system; Owensboro Municipal Utilities in Kentucky and Wheatland Electric Cooperative in Kansas have both installed Alvarion networks; and Midwest Wireless, a rural cellular provider, has built a broadband business using unlicensed spectrum.
Prairie iNet’s localized model has been replicated on smaller scales in rural areas across the country. In-Stat/MDR, a technology research company, estimates that there are up to 1,500 start-up WISPs in operation. Low start-up costs, numerous equipment options, and high consumer demand account for this growth. – Matt Barranca

WiFi Calling: Campuses Turn to WiFi for Voice Applications
Since the beginning of the wireless movement, college and corporate campuses have been fertile ground for extensive WiFi networks. The latest trend for campus organizations is to bypass local telephone carriers and use their unlicensed wireless networks to support voice applications.
In one example, the University of Arkansas invested $4 million in Cisco’s call-processing software, CallManager, to route local calls over the University’s existing WiFi network. The University has reduced its telephone service fees from $530,000 to $6,000 a month. At this savings rate, the University should recoup its call-processing investment in six months.
Dartmouth College has a similar program, offering software to incoming freshmen that turns wireless laptops and PDAs into “softphones.” Dartmouth implemented its voice over wireless local area network (VoWLAN) when it realized it was spending more money billing students for long-distance calls than it was taking in. The unlicensed VoWLAN provides a similar quality of service to traditional telephone service, but without the billing and administrative costs.
Hospitals have also been early adopters of VoWLAN technology. One company, Vocera, has created an 802.11b communications device that doctors and nurses wear on their uniform collars to communicate with each other remotely. Users speak the name of the person they’d like to contact into the device, and they are instantly connected through the VoWLAN. The device has the potential to eliminate intercoms and speakers that broadcast announcements or pages meant for only one person to an entire hospital floor. – Matt Barranca

Personal Broadcast Networks
Today, broadcasting is the domain of the few. Only companies with licenses have access to the airwaves to deliver programming. Using a combination of the techniques described in this paper, it is possible to imagine a world in which anyone can be a broadcaster.
As each user sends out video streams, they would be relayed by other users wherever infrastructure was unavailable. Cognitive radios would seek out free space in the spectrum to carry the signals. Content creators could contract with operators of virtual broadcast networks who aggregated together reliable high-speed connectivity to reach an audience, creating a bottom-up division between different classes of traffic.
Who would want to have their own broadcast network? Some people would want to deliver the kinds of creative programming available on television today. These personal wireless networks would become a much more powerful version of the alternative outlets available today, such as public access channels on cable TV systems, public broadcasting stations, low-power FM radio stations, and the Web. If consolidation in the media distribution business threatened the diversity of voices available to viewers and listeners, personal broadcast networks would provide a powerful antidote. But the existing market for heavily produced,
mass-market content would only be a small part of
the total. Working parents would use personal
broadcast networks to tap into video images of
their children at home, streamed from Webcams to
their mobile phones. Distance learning courses
could be delivered on demand, or specific instructional modules could be delivered dynamically
when and where they were needed. Need to
change a flat tire and don’t know what to do?
Need on-the-spot medical advice? Use a personal
broadcast network to watch an instructional video
or establish a videoconference with an expert.
Predicting the future is dangerous. Any scenarios
we envision today will likely miss the specific kinds of
applications and content that will be popular tomorrow. But that doesn’t matter. The infrastructure of
emerging wireless technologies can be adapted to
whatever turn out to be the killer apps. Wireless networks built using intelligent, dynamic techniques
from the computer and networking industries will
feature radical flexibility. Network owners need not
predict uses in order to shape their infrastructure
build-out, because there will be no owners, infrastructure, or build-out in the current senses of the words.
Policy Recommendations
Governments should recognize and
embrace the tremendous potential of
the radio revolution. In the U.S., the
FCC’s November 2002 Spectrum Policy
Task Force report45 acknowledged that
technological changes allow for new
forms of spectrum access and interference management. It therefore proposed
expanded use of the commons model. In
June 2003, the Bush Administration
formed a task force to examine government spectrum use, with a similar
mandate to propose reforms.
The FCC has begun or announced
plans for a raft of spectrum reform proceedings. These include: unlicensed
sharing of the broadcast spectrum;
changes to allow unlicensed access to
underused education spectrum in the 2.5
GHz band currently allocated for
Instructional Television Fixed Service
(ITFS); incentives for unlicensed deployment in rural areas; investigating the
impact of cognitive radio and the
unlicensed allocation of extremely high
frequency spectrum (above 50 GHz);
and, a novel method for noise floor baselines called “interference temperature.”
Because radio signals do not respect
political boundaries, spectrum policy is
inherently international. Equipment
vendors can better justify the investment
needed to develop new products when
they can foresee a global market.
Hence, international harmonization of
spectrum bands is valuable, especially
for unlicensed bands. Participants at the
2003 World Radio Conference agreed
to expand the globally unlicensed spectrum in the 5 GHz range, adding 255
MHz to the existing allocation.
Despite these encouraging developments, more needs to be done. Governments cannot merely sit back and wait
for technological development to eliminate spectrum scarcity. Spectrum policies
around the world remain centered on
what the FCC calls a “command-and-control” model, more appropriate for the
industrial era than for the coming age of
digital networks. Though government
regulation of the radio spectrum was
designed to manage and mitigate physical scarcity of
the airwaves, that regulation now creates scarcity.
As a general matter, the federal government
should recognize that the objective of spectrum
policy is not to minimize interference, but to maximize usable capacity. It should move away from
command-and-control toward more flexible
approaches, recognizing that there is value in a
diversity of legal regimes. In experimenting with
exclusive property rights and secondary markets,
it should balance the potential gains against the
opportunity cost of making that spectrum available for unlicensed sharing in the future. Wireless devices will continue to become increasingly sophisticated and inexpensive. Policy decisions should take into account the possibility that what is impractical now may be commonplace in the future...if there is room for innovation.

“Increasing demand for spectrum-based services and devices are straining longstanding, and outmoded, spectrum policies.”
— FCC SPECTRUM POLICY TASK FORCE
Specifically, policy initiatives should proceed in four areas:
❚ MORE DEDICATED UNLICENSED SPECTRUM
The FCC and other government agencies
should continue their efforts to identify additional spectrum for unlicensed use. Some spectrum may become available through return of
analog broadcast television licenses as part of the
digital TV transition,46 or from guard bands
established to protect obsolete devices. Other
spectrum could be shifted from government or
military use, or through restructuring of underutilized bands such as the UHF television and
MMDS/ITFS frequencies.47
Unlicensed spectrum at low frequencies,
below 2 GHz, is particularly important.
Transmissions at those frequencies can more easily penetrate trees, walls, and other obstacles, a
key consideration for last-mile broadband applications. They also use less power, and thus make
more efficient use of batteries, a key consideration for portable consumer electronic devices
such as PDAs. Contrary to the recommendation
of the FCC’s Spectrum Policy Task Force, new
unlicensed allocations should not be limited to
spectrum above 50 GHz. However, as much
spectrum as possible at such extremely high
frequencies should be made available for
unlicensed use with minimal restrictions.
❚ SHARED UNLICENSED UNDERLAY
The FCC should advance its proceeding on unlicensed sharing of broadcast and other licensed
spectrum bands. It should also make good on its
commitment to review its rules governing ultra-wideband and low-power FM radio, both of which
are subject to significant restrictions. Interference
with licensed services is a legitimate concern, but
hypothetical fears unsupported by the evidence
should not trump innovation and competition.
The FCC should work with companies and
standards bodies in the private sector to identify
other technical means for unlicensed sharing.
The interference temperature concept outlined
in the FCC Spectrum Policy Task Force report
is promising but undeveloped. Identifying more
clearly the boundaries between high-power
licensed and low-power unlicensed services in
the same bands would benefit both types of systems. However, the boundaries should not be so
low as to preclude viable unlicensed devices.
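For readers who want a feel for the arithmetic behind the concept, interference temperature converts the interference power measured in a given bandwidth at a receiver into an equivalent temperature, T = P / (kB), where k is Boltzmann’s constant; a rule could then cap the allowed temperature in a band. The sketch below simply evaluates that conversion for an invented measurement, and the cap shown is hypothetical, since, as noted above, the concept remains undeveloped.

```python
BOLTZMANN = 1.380649e-23  # J/K

def interference_temperature_kelvin(power_watts: float, bandwidth_hz: float) -> float:
    """Equivalent temperature of interference power P seen in bandwidth B."""
    return power_watts / (BOLTZMANN * bandwidth_hz)

# Hypothetical measurement: -100 dBm (1e-13 W) of interference in a 1 MHz channel.
measured_w = 1e-13
temp = interference_temperature_kelvin(measured_w, 1e6)
print(f"Interference temperature: {temp:.2e} K")   # about 7.24e3 K

# A (hypothetical) rule might let unlicensed underlay devices operate so long
# as the aggregate temperature at protected receivers stays below some cap:
CAP_K = 1e4
print("Underlay transmission allowed:", temp < CAP_K)
```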
❚ OPPORTUNISTIC SHARING
The FCC should allow for spectrum sharing
along the other dimensions of the electrospace
model. Depending on the nature of the licensed
use, the government can make under-utilized
frequencies available for non-interfering shared
access by time, geographic area, direction of signals, or other variables. By establishing guidelines for cognitive radios that can sense and
respond to the local spectral environment, the
government could in effect create “virtual whitespace” within spectrum that is already licensed.
The Defense Advanced Research Projects Agency’s
(DARPA) XG sharing technology provides one
model for doing this. The XG moniker stands
for “neXt Generation.”
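The core behavior underlying such sharing, sensing which frequencies are idle at a given place and time and transmitting only on those, can be illustrated with a toy sketch like the one below. The channel list, power readings, and idle threshold are invented, and this is not the XG protocol itself, merely the listen-before-transmitting logic in miniature.

```python
# Toy cognitive-radio logic: sense the local environment, then pick an
# idle channel for opportunistic, non-interfering transmission.
NOISE_FLOOR_DBM = -100          # assumed receiver noise floor
IDLE_MARGIN_DB = 10             # treat anything near the noise floor as idle

# Hypothetical sensing results: measured power per candidate channel (dBm).
sensed = {
    "ch_21": -62,   # strong licensed signal present
    "ch_22": -97,   # essentially quiet
    "ch_23": -88,   # weak but detectable occupant
    "ch_24": -99,   # essentially quiet
}

def idle_channels(measurements, floor=NOISE_FLOOR_DBM, margin=IDLE_MARGIN_DB):
    """Channels whose measured power is within `margin` dB of the noise floor."""
    return [ch for ch, power in measurements.items() if power <= floor + margin]

candidates = idle_channels(sensed)
print("Idle channels:", candidates)                       # ['ch_22', 'ch_24']
print("Transmit on:", min(candidates) if candidates else "hold off")
```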
The FCC should pursue its proposal in the
Spectrum Policy Task Force report to examine
standards for receivers. Existing rules focus
entirely on transmitters, but as discussed in this
paper, interference is a phenomenon that manifests itself on the receiving end. Poorly defined
spectrum licenses create incentives to build the
cheapest, least robust receivers allowed. A poor-quality receiver may not affect the service for
which it is licensed, but it will create scarcity by
limiting the space for opportunistic and shared
unlicensed devices around it.
❚ EXPERIMENTATION AND RESEARCH
Because spectrum has long been highly regulated and controlled by a small group of licensees, opportunities for experimentation have been limited. WiFi happened almost by accident, because the 2.4 GHz “junk band” spectrum was so congested with devices such as microwave ovens that it was considered unusable for licensed systems. Because the band was unlicensed and open to public access, engineers and companies interested in wireless local area networking could use it for their commercially and technically unproven technologies.
The government should ensure that its rules for unlicensed bands allow for the broadest possible experimentation. It should also ensure that it provides ample opportunities for experimental authorizations that go beyond the existing rules, so that innovators can develop novel techniques. A large block of high frequency spectrum (above 50 GHz) that is currently unassigned could be designated as an open space for completely unregulated unlicensed activity, as Tim Shepard and Yochai Benkler have proposed. This would allow for real-world exploration of ideas that may later migrate to lower frequencies where propagation characteristics are better.
The government should also continue to support and publicize research on groundbreaking wireless technology. Among the key questions for exploration are: whether technical “etiquettes” are useful to facilitate efficient spectrum sharing among unlicensed devices, and if so, what those etiquettes should be; what are the fundamental information theoretic limits of radio communications; and how do large numbers of unlicensed devices using different transmission mechanisms perform in the real world.

A Universal Communications Privilege for the “Supercommons”
Any set of prophylactic rules for wireless communication, no matter how liberal, will prevent some potential transmissions. The current block allocation system of government-licensed frequencies leaves huge quantities of spectrum fallow, even though today’s devices could exploit the unused capacity with no harm to existing systems. Even with unlicensed allocations, underlays, and opportunistic sharing, some non-interfering transmissions will be precluded inefficiently. The optimal arrangement of wireless communications devices depends on location, time, pre-existing systems, the desired service, and technological capabilities. No set of rules determined ahead of time can possibly take all these factors into account.
The very idea of spectrum as a scarce resource divided into frequencies is becoming less and less useful. As devices become smarter, the effective communications space outside designated frequency bands increases. This “supercommons” cannot be exploited fully so long as only certain wireless systems are legally permitted to operate.
An alternate approach would be to establish a baseline universal privilege to communicate.48 Anyone could engage in any form of wireless transmission they chose, so long as they did not impinge on other systems. Instead of prophylactic interference rules enforced by the government, a liability regime could be used to avoid and resolve disputes. Those suffering harm from other transmissions could sue for damages in court, using principles drawn from the common law doctrine of torts. Various safe-harbor and backstop mechanisms could be used to ensure the system operated efficiently. Equipment vendors and wireless device owners would have incentives to use intelligent mechanisms to increase capacity.
The supercommons regime could operate alongside existing licensed and unlicensed frequencies. It would expand communications capacity without threatening current allocations. Access to the airwaves would become a distributed, real-time decision rather than the result of a government pre-approval process. Innovation and investment would increase dramatically.
Conclusion
A powerful lesson from the history of
communications and computing is that a
few simple trends have extraordinary
effects over time. The shift from analog
to digital networks, for example, is revolutionizing all forms of communication
and media. That transition has been
going on for many years, and hasn’t yet
finished. Similarly, growing intelligence
of computing devices at the edges of
networks exerts a powerful force that is
magnified over time.
The paradoxical fact is that wireless
communications are both more mundane and more remarkable than we currently believe. The mundane aspect is
that spectrum isn’t somehow special and
removed from the forces affecting other
industries touched by the relentless
improvements in computing and data
networking. Spectrum isn’t a domain
where normal market and technological
forces go out the window, replaced by
iron scarcities. In fact, it’s not a place or
a thing at all. It’s a mental construct we
use to aid understanding. We are rapidly
reaching the point where that mental
construct does more harm than good.
The real question now is not whether
but when. There is absolutely no question that wireless devices will continue
to become more powerful, and that
enterprising technologists will find new
ways to multiplex and interweave radio
signals. Whatever obstacles regulators
or incumbents throw up will ultimately
be routed around. The only immutable
barriers are physical, and though they
undoubtedly exist, we are nowhere near
reaching them.
Through errors of both omission and
commission, however, governments can
delay and weaken the revolutionary
changes that could bring into being the
scenarios described in the previous section. If no more unlicensed spectrum is
made available through dedicated open-access bands, low-power underlay, and
opportunistic sharing, it will take that
much longer to overcome the scarcities
that define the market today. Such delays
would have economic and social costs.
The last time a new networking and communications paradigm took hold, it was called the Internet. There too, it was possible to discern signs and
possibilities years before the full commercial realization of the network. The U.S. government,
through both affirmative steps (such as protecting
nascent data services from domination by telephone companies) and conscious rejection of
unnecessary regulation, laid the groundwork for
the Internet to emerge as a powerful business and
social force whose impacts are still being felt
throughout the economy.
The next Internet is before us. It is an Internet of
the air, in some ways even more powerful than the
Internet of wires. If we put aside our preconceptions and outmoded notions, we can make it real.
Bibliography
Baran, Paul, “Visions of the [sic] 21st Century
Communications: Is the Shortage of Radio Spectrum for
Broadband Networks of the Future a Self Made Problem?”
Keynote Talk, 8th Annual Conference on Next Generation
Networks (November 9, 1994). Available at http://www.eff.
org/GII_NII/Wireless_cellular_radio/false_scarcity_baran_
cngn94.transcript (challenging the traditional conception of
spectrum scarcity)
Black, Jane, “The Magic of Wi-Fi,” BusinessWeek Online, March
18, 2003.
Benkler, Yochai, “Overcoming Agoraphobia: Building the
Commons of the Digitally Networked Environment,”
11 Harvard Journal of Law and Technology 287 (1998).
(describing the benefits of a spectrum commons)
Benkler, Yochai, “Some Economics of Wireless Communications,”
16 Harvard Journal of Law and Technology 25 (2002). (analyzing the economics of wireless systems under property rights
and commons)
Carter, Kenneth R., Ahmed Lahjouji, and Neil McNeal,
“Unlicensed & Unshackled: A Joint OSP-OET White
Paper on Unlicensed Devices and their Regulatory Issues,”
Federal Communications Commission, Office of Strategic
Policy, Working Paper Series No. 39 (May 2003). Available
at http://hraunfoss.fcc.gov/edocs_public/attachmatch/
DOC-234741A1.pdf (surveying unlicensed wireless systems
and the FCC’s policy towards them)
Crawford, Sabrina, “Wireless Wheels: Police Now Using
Internet to Solve Local Crime Cases,” The San Francisco
Examiner, September 18, 2003.
Clarke, Arthur C., Profiles of the Future: An Inquiry into the
Limits of the Possible, (London: Victor Gollancz, 1962).
The Economist, “Watch this Airspace,” The Economist,
Technology Quarterly, June 22, 2002. (describing four disruptive technologies that “could shake up the wireless
world”)
Faulhaber, Gerald R., and David Farber, “Spectrum Management:
Property Rights, Markets, and the Commons,” AEI-Brookings Joint Center, Working Paper 02-12 (December
2002). Available at http://rider.wharton.upenn.edu/
~faulhabe/SPECTRUM_MANAGEMENTv51.pdf (arguing that commons and property rights can coexist through
“non-interfering easements”)
Federal Communications Commission, Spectrum Policy Task
Force Report, ET Docket No. 02-135 (November 15, 2002).
Available at http://hraunfoss.fcc.gov/edocs_public/
attachmatch/DOC-228542A1.pdf (proposing an overhaul
of spectrum regulation)
Federal Communications Commission, “First Report and
Order: Revision of Part 15 of the Commission’s Rules
Regarding Ultra-Wideband Transmission Systems,” ET
Docket 98-153 (April 22, 2002). (providing rules for ultrawideband operation)
Fleishman, Glenn, “Take the Mesh-Networking Route: Mesh
Networks Offer an Agile, Cost-Effective Alternative,”
InfoWorld, March 7, 2003. Available at http://www.infoworld.
com/article/03/03/07/10mesh_1.html (describing mesh networking)
Gartner Group, Wireless LAN Equipment: Worldwide, 2001-2007,
[report], Gartner Group, (January 2003).
Gelsinger, Patrick, “Wireless Fidelity (Wi-Fi): Leaping Across
the Digital Divide,” Keynote Address to the United
Nations Information and Communication Technologies
Task Force (June 26, 2003). Available at http://www.w2i.org/
pages/wificonf0603/speaker_presentations/W2i_Gelsinger_
Presentation.pdf
Gilder, George, “The New Rules of Wireless,” Forbes ASAP,
March 29, 1993. (explaining the possibility for spectrum
sharing without exclusive rights)
Gilder, George, “Auctioning the Airwaves,” Forbes ASAP,
April 11, 1994.
Hazlett, Thomas W., “The Wireless Craze, the Unlimited
Bandwidth Myth, the Spectrum Auction Faux Pas, and the
Punchline to Ronald Coase’s ‘Big Joke’: An Essay on
Airwave Allocation Policy,” 14 Harvard Journal of Law and
Technology 335 (2001). (attacking the commons position and
advocating exclusive property rights in spectrum)
In-Stat/MDR, It’s Cheap and It Works: Wi-Fi Brings Wireless
Networking to the Masses, [report], In-Stat/MDR,
(December 2002).
Johnston, James H., and J.H. Snider, “Breaking the Chains:
Unlicensed Spectrum as a Last-Mile Broadband Solution,”
New America Foundation, Spectrum Series Working Paper
#7 (June 2003). Available at http://www.newamerica.net/
index.cfm?pg=article&pubID=1250
Keizer, Gregg, “Wireless LANs Set For Growth Spurt,”
Information Week, March 27, 2003.
Kontson, Kalle, “In Pursuit of a Wireless Device Bill of Rights,”
Presentation to the September 18, 2002 Meeting of the
FCC’s Technical Advisory Council (TAC). Available at
http://www.fcc.gov/oet/tac/Kalle_Kontson_9.18.02_bill_of_
Rights_Final.ppt
Lessig, Lawrence, The Future of Ideas: The Fate of the Commons in
a Connected World, (New York: Random House, 2001).
(describing the benefits of an open wireless commons for
innovation)
Margie, Paul, “Efficiency, Predictability, and the Need for
an Improved Interference Standard at the FCC,” Paper
Presented at the 31st Annual Telecommunications
Policy Research Conference (TPRC), Arlington, VA
(September 2003).
Matheson, Robert, “The Electrospace Model as a Tool for
Spectrum Management,” Paper Presented at the 6th
Annual International Symposium on Advanced Radio
Technologies (ISART), Boulder, CO (March 2003).
New America Foundation, et al., “Additional Spectrum for
Unlicensed Devices Below 900 MHz and in the 3 GHz
Band,” Comments to the Federal Communications
Commission Notice of Inquiry, ET Docket No. 02-380
(April 17, 2003). Available at http://www.newamerica.net/
Download_Docs/pdfs/Pub_File_1212_1.pdf
New America Foundation, et al., “Amendment of the
Commission’s Rules to Facilitate the Provision of Fixed
and Mobile Broadband Access, Educational and Other
Advanced Services in the 2150-2162 and 2500-2690
MHz Bands,” Comments to the Federal Communications
Commission Notice of Proposed Rulemaking,
WT Docket No. 03-66 (September 10, 2003). Available at
http://www.newamerica.net/Download_Docs/pdfs/
Pub_File_1350_1.pdf
New America Foundation, and Shared Spectrum Company,
“Dupont Circle Spectrum Utilization During Peak Hours,”
[report], New America Foundation (June 2003). Available at
http://www.newamerica.net/Download_Docs/pdfs/
Doc_File_183_1.pdf
Noam, Eli, “Spectrum Auctions: Yesterday’s Heresy, Today’s
Orthodoxy, Tomorrow’s Anachronism: Taking the Next
Step to Open Spectrum Access,” 41 Journal of Law and
Economics 765 (1998). (proposing an “open access” model
for wireless systems)
Reed, David, “Comments for FCC Spectrum Policy Task
Force on Spectrum Policy,” Comments to the Federal
Communications Commission Spectrum Policy Task Force
Public Notice, ET Docket No. 02-135 (July 8, 2002).
(describing the technical changes making a spectrum commons viable)
Shepard, Timothy J., “Spectrum Policy Task Force Seeks
Public Comment on Issues Related to the Commission’s
Spectrum Policies,” Comments to the Federal
Communications Commission Spectrum Policy Task Force
Public Notice, ET Docket No. 02-135 (July 8, 2002).
Singer, Michael, “Analysts: Wi-Fi a ‘Positive Disruption’,”
802.11 Planet, April 3, 2003.
Snider, J.H., and Max Vilimpoc, “Reclaiming the ‘Vast
Wasteland’: Unlicensed Sharing of Broadcast Spectrum,”
New America Foundation, Spectrum Series Issue Brief #12
(July 2003). Available at http://www.newamerica.net/
index.cfm?pg=article&pubID=1286 (proposing shared use
of broadcast spectrum)
Stone, Amey, “Wi-Fi: It’s Fast, It’s Here — and It Works,”
Business Week Online, April 3, 2002. (overview of the WiFi
market and its implications)
Weinberger, David, “The Myth of Interference,” Salon, March
12, 2003. Available at http://www.salon.com/tech/feature/
2003/03/12/spectrum/print.html (explaining David
Reed’s ideas)
Werbach, Kevin, “A Layered Model for Internet Policy,” 1
Colorado Journal on Telecommunications and High Technology
Law 37 (2002).
Werbach, Kevin, “Open Spectrum: The Paradise of the
Commons,” Release 1.0, November 2001. (summarizing the
commons position and its implications for the communications industry)
Werbach, Kevin, “Open Spectrum: The New Wireless
Paradigm,” New America Foundation, Spectrum Series
Working Paper # 6 (October 2002). Available at
http://www.newamerica.net/index.cfm?pg=article&pubID=
1001 (explaining the benefits of the spectrum commons
approach)
Werbach, Kevin, “Supercommons: Toward a Unified Theory
of Wireless Communication,” 82 Texas Law Review
(forthcoming March 2004). Draft available at
http://werbach.com/research/supercommons.pdf
Woolley, Scott, “Dead Air,” Forbes, November 25, 2002.
(describing efforts to reform spectrum policy based on
technologies that allow spectrum sharing)
Community WiFi Directories
http://www.seattlewireless.net/index.cgi/SimilarProjectLinks
http://www.personaltelco.net/index.cgi/WirelessCommunities
http://www.afcn.org/
http://freenetworks.org/moin/index.cgi/
http://www.communitywireless.org/
Endnotes
1. Albert Einstein reportedly offered an entertaining description of radio with a similar “negative” character: “You see, wire telegraph is a kind of a very, very long cat. You pull his tail in New York and his head is meowing in Los Angeles. Do you understand this? And radio operates exactly the same way: you send signals here, they receive them there. The only difference is that there is no cat.”
2. Arthur C. Clarke, Profiles of the Future: An Inquiry into the Limits of the Possible (London: Victor Gollancz, 1962).
3. The word paradigm, since its popularization in Thomas Kuhn’s The Structure of Scientific Revolutions, has become dull from over-use. A paradigm shift isn’t merely a new technology or a new business opportunity. It’s a change in cognitive framework. Our paradigms are the unconscious scaffolding we use to associate concepts and information. Most of the time they can safely operate in the background, but occasionally they break down. True paradigm shifts are rare, but this rarity increases their significance.
4. The FCC has oversight over private spectrum, and NTIA is the lead agency for government spectrum. Several other agencies, including the Defense and State Departments, play significant roles in spectrum policy.
5. The process of deciding which frequencies shall be available for which uses is known as allocation. Determining who is entitled to use those frequencies is called assignment. In many cases, spectrum bands are assigned to more than one user, or to one “primary” user and one or more “secondary” users.
6. For a list of some other multiplexing mechanisms, see Robert Matheson, “The Electrospace Model as a Tool for Spectrum Management,” Paper presented at ISART 2003 conference, Boulder, CO, March 2003.
7. This approach is known as Time Division Multiple Access (TDMA), and is the basis of the popular GSM standard for second-generation wireless phone systems.
8. See New America Foundation and Shared Spectrum Company, “Dupont Circle Spectrum Utilization During Peak Hours,” (July 2003) (finding a significant amount of spectrum is unused, even in dense urban areas during peak hours).
9. Transitioning from analog to digital is another means of expanding capacity by changing the system design. Digital “2G” mobile phone networks support more simultaneous calls than analog “1G” systems, and they can support additional features such as data transmission and messaging. Yet the 2G systems actually use less spectrum.
10. The layered model also has important implications for regulation. See Kevin Werbach, “A Layered Model for Internet Policy,” 1 Colorado Journal on Telecommunications and High Technology Law 37 (2002).
11. The seven layers, from bottom to top, are: physical, data link, network, transport, session, presentation, application. The OSI model is broadly used as a conceptual tool, but not always as a formal part of network design. In the 1980s, efforts to formally require adherence to the OSI standard failed.
12. Separate from the technological critique developed here, there is a long and distinguished history of economic arguments against the current structure of spectrum regulation, generally advocating private property rights as an alternative to government licensing. However, the spectrum-sharing techniques the new dynamic wireless paradigm makes possible remove the need for exclusive rights to minimize interference. See Yochai Benkler, “Overcoming Agoraphobia: Building the Commons of the Digitally Networked Environment,” 11 Harvard Journal of Law and Technology 287 (1998).
13. This analogy was first developed by Tim Shepard in his comments to FCC’s Spectrum Policy Task Force. See Timothy J. Shepard, “Spectrum Policy Task Force Seeks Public Comment on Issues Related to the Commission’s Spectrum Policies,” Comments to the FCC’s Public Notice, ET Docket No. 02-135 (July 8, 2002).
14. Capacity may not always be expressed in raw throughput such as bits per second. Some services have particular requirements. For example, live voice communication needs low latency (delay), while high-quality video broadcasting requires limited packet loss. An environment that allows these uses may be more valuable than one that does not, even if total carrying capacity is lower.
15. See Ronald H. Coase, “The Federal Communications Commission,” 2 Journal of Law and Economics 1 (1959).
16. This became an issue in the FCC proceeding authorizing ultra-wideband devices to underlay under existing licensed services. Sprint PCS argued that, because it had spent billions of dollars purchasing spectrum licenses and building out its mobile phone network, it had an expectation of exclusive control over the frequencies. The FCC rejected this claim, holding that even for flexible PCS spectrum, the license was not absolute. A licensee may have a reasonable expectation to operate free from harmful interference for the limited term of the license, but the government may give non-interfering users access to the same frequencies. See “FCC First Report and Order: Revision of Part 15 of the Commission’s Rules Regarding Ultra-Wideband Transmission Systems,” ET Docket No. 98-153 (April 22, 2002).
17. There are two different patterns to prevent interference in nearby cities. There is no channel 3 in New York, but no channel 2 in Philadelphia, to ensure the signals are received properly.
18. In addition, because only a handful of big city markets have more than 13 channels on the air, TV channels 30 to 69 (234 MHz of prime spectrum) are in use in only about 10 percent of the nation’s TV market, on average. See J.H. Snider and Max Vilimpoc, “Reclaiming the ‘Vast Wasteland’: Unlicensed Sharing of Broadcast Spectrum,” New America Foundation, Spectrum Series Issue Brief #12 (July 2003).
19. Harmful interference is defined as “Interference which endangers the functioning of a radio navigation service or of other safety services or seriously degrades, obstructs or repeatedly interrupts a radio communication service operating in accordance with the Radio Regulations.” 47 CFR §97.3(a)(23).
20. For an excellent discussion of the FCC’s evolving regulation of interference, see Paul Margie, “Efficiency, Predictability, and the Need for an Improved Interference Standard at the FCC,” Paper presented at TPRC 2003, Arlington, VA, September 2003.
21. See Matheson, supra note 6.
22. See Kalle Kontson, “In Pursuit of a Wireless Device Bill of Rights,” Presentation to the September 18, 2002 Meeting of the FCC’s Technical Advisory Council (TAC).
23. See J.H. Snider and Max Vilimpoc, “Reclaiming the ‘Vast Wasteland’: Unlicensed Sharing of Broadcast Spectrum,” New America Foundation, Spectrum Series Issue Brief #12 (July 2003).
24. See Gregg Keizer, “Wireless LANs Set For Growth Spurt,” Information Week (March 27, 2003).
25. Any digital wireless system involves some measure of software. SDR involves using software to control most of the functions that traditionally were handled through RF hardware.
26. See George Gilder, “The New Rules of Wireless,” Forbes ASAP (March 29, 1993). See also George Gilder, “Auctioning the Airwaves,” Forbes ASAP (April 11, 1994).
27. See Kevin Werbach, “Open Spectrum: The New Wireless Paradigm,” New America Foundation, Spectrum Series Working Paper #6 (October 2002). See also Kevin Werbach, “Open Spectrum: The Paradise of the Commons,” Release 1.0 (November 2001).
28. Not to be confused with defunct digital subscriber line provider Northpoint Communications. Because its approach is so novel, Northpoint has had to endure a multi-year struggle to get its technology approved by the FCC.
29. In a pure mesh, every receiver is also a transmitter and a repeater. However, this is not a requirement. The Internet, for example, has many different kinds of routers and networks, some of which are connected hierarchically with “edge” and “core” devices.
30. See Yochai Benkler, “Some Economics of Wireless Communications,” 16 Harvard Journal of Law and Technology 25 (2002).
31. See Michael Singer, “Analysts: Wi-Fi a ‘Positive Disruption,’” 802.11 Planet (April 3, 2003).
32. See It’s Cheap and It Works: Wi-Fi Brings Wireless Networking to the Masses, [report] In-Stat/MDR (December 2002). See also Wireless LAN Equipment: Worldwide, 2001-2007, [report] Gartner Group (January 2003).
33. See Patrick Gelsinger, “Wireless Fidelity (Wi-Fi): Leaping Across the Digital Divide,” Keynote Address to the United Nations Information and Communication Technologies Task Force (June 26, 2003).
34. See supra note 6.
35. See Kevin Werbach, “Supercommons: Toward a Unified Theory of Wireless Communication,” 82 Texas Law Review ___ (forthcoming March 2004), draft available at http://werbach.com/research/supercommons.pdf
36. See New America Foundation et al., “Additional Spectrum for Unlicensed Devices Below 900 MHz and in the 3 GHz Band,” Comments to the FCC Notice of Inquiry, ET Docket No. 02-380 (April 17, 2003).
37. See “FCC First Report and Order: Revision of Part 15 of the Commission’s Rules Regarding Ultra-Wideband Transmission Systems,” ET Docket No. 98-153 (April 22, 2002). UWB was previously used for non-communications applications such as ground-penetrating radar, especially by the military.
38. WiFi stands for “wireless fidelity” only in this marketing sense. HiFi systems actually have higher sound fidelity; there is nothing special about the fidelity of WiFi systems.
39. See James H. Johnston and J.H. Snider, “Breaking the Chains: Unlicensed Spectrum as a Last-Mile Broadband Solution,” New America Foundation, Spectrum Series Working Paper #7 (June 2003).
40. Id.
41. See http://www.nodedb.org
42. See Jane Black, “The Magic of Wi-Fi,” BusinessWeek Online (March 18, 2003).
43. See Sabrina Crawford, “Wireless Wheels: Police Now Using Internet to Solve Local Crime Cases,” The San Francisco Examiner (September 18, 2003).
44. See James H. Johnston and J.H. Snider, “Breaking the Chains: Unlicensed Spectrum as a Last-Mile Broadband Solution,” New America Foundation, Spectrum Series Working Paper #7 (June 2003).
45. See Federal Communications Commission, Spectrum Policy Task Force Report, ET Docket No. 02-135 (November 15, 2002).
46. See New America Foundation et al., “Additional Spectrum for Unlicensed Devices Below 900 MHz and in the 3 GHz Band,” Comments to the FCC Notice of Inquiry, ET Docket No. 02-380 (April 17, 2003).
47. See New America Foundation et al., “Amendment of the Commission’s Rules to Facilitate the Provision of Fixed and Mobile Broadband Access, Educational and Other Advanced Services in the 2150-2162 and 2500-2690 MHz Bands,” Comments to the FCC Notice of Proposed Rulemaking, WT Docket No. 03-66 (September 10, 2003).
48. See Werbach, supra note 35.