The Digital Consumer Technology Handbook
A Comprehensive Guide to Devices, Standards,
Future Directions, and Programmable Logic Solutions
by Amit Dhir
Xilinx, Inc.
Newnes is an imprint of Elsevier
200 Wheeler Road, Burlington, MA 01803, USA
Linacre House, Jordan Hill, Oxford OX2 8DP, UK
Copyright © 2004, Xilinx, Inc. All rights reserved. All Xilinx trademarks, registered trademarks,
patents, and further disclaimers are as listed on the Xilinx website. All other trademarks are the
property of their respective owners.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means, electronic, mechanical, photocopying, recording, or otherwise,
without the prior written permission of the publisher.
Permissions may be sought directly from Elsevier’s Science & Technology Rights Department
in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail:
[email protected]. You may also complete your request on-line via the Elsevier
homepage, by selecting “Customer Support” and then “Obtaining Permissions.”
Recognizing the importance of preserving what has been written, Elsevier prints its books
on acid-free paper whenever possible.
Library of Congress Cataloging-in-Publication Data
Dhir, Amit.
Consumer electronics handbook : a comprehensive guide to digital technology,
applications and future directions / by Amit Dhir.
p. cm.
ISBN 0-7506-7815-1
1. Digital electronics--Handbooks, manuals, etc. 2. Household electronics--Handbooks,
manuals, etc. I. Title.
TK7868.D5D478 2004
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
For information on all Newnes publications
visit our website.
03 04 05 06 07 08 10 9 8 7 6 5 4 3 2 1
Printed in the United States of America
Dedications and Acknowledgments
This book is dedicated to my loving wife, Rita, for her undying patience, support, and understanding.
For the countless hours she sat beside me giving me the strength to complete this effort and to always
believe in myself, I owe this book to her. My special thanks to my parents, family, and friends who
have supported me through the completion of this book.
I would especially like to acknowledge the efforts of Tom Pyles and Robert Bielby in getting this
book published. My special thanks to Tom who went beyond the call of his job at Xilinx to help me
edit and format the book in a brief two months. I thank Robert for providing me the motivation to
complete this book and for his efforts in reviewing and overseeing its development. He was also
instrumental in guiding my professional development over the last four years.
About the Author
Amit Dhir is a Senior Manager in the Strategic Solutions Marketing group at Xilinx, Inc. He has
published over 90 articles in business and technical publications dealing with the role of programmable
logic in wired and wireless communications and in embedded and consumer applications. Amit is the
author of the popular The Home Networking Revolution: A Designer’s Guide, a book describing
convergence, broadband access, residential gateways, home networking technologies, information
appliances, and middleware. As a Xilinx representative, he has presented at numerous networking,
home networking, wireless LAN, and consumer conferences. Until recently he served as marketing
chairman of the HiperLAN2 committee. He holds a BSEE from Purdue University (1997) and an
MSEE from San Jose State University (1999). Amit may be contacted at [email protected].
Foreword
Studying the history of consumer electronics is nothing short of fascinating. The landscape is filled
with countless stories of product successes and failures: fickle consumer adoption, clever marketing
campaigns that outsmart the best technologies, better packaging winning over better technology,
and products that are simply ahead of their time.
This book was not written to trace the history of consumer electronics. Rather, it discusses the
current state of the art of digital consumer devices. However, it is almost certain that what is considered leading edge today will eventually become another obsolete product in a landfill—a casualty of
continued technological advances and progress. But make no mistake, although technological
advances may render today’s products obsolete, they are the lifeblood of the digital consumer market.
Pioneers and visionaries such as Boole, Nyquist, and Shannon well understood the benefits of
digital technologies decades before digital circuits could be manufactured in a practical and
cost-effective manner. Only through advances in semiconductor technology can the benefits of digital
technology be realized. The role of semiconductor technology in driving digital technologies into
consumers’ hands is shown by looking at computer history.
The first computers were built of vacuum tubes and filled an entire room. They cost millions of
dollars, required thousands of watts to power, and had mean time between failures measured in
minutes. Today, that same level of computing power, using semiconductor technology, fits in the
palm of your hand. It takes the form of a hand-held calculator that can operate from available light
and, in many cases, is given away for free. Semiconductor advances enable products to be built that
are significantly more powerful than their predecessors and sell at a fraction of the price.
But semiconductor technology is only the basic fabric upon which the digital consumer product
is built. As you read this book, it’s important to realize that there are many elements that factor into
the success of a product. Building and delivering a successful consumer product requires the alignment of dynamics such as infrastructure, media, content, “killer” applications, technologies,
government regulation/deregulation, quality, cost, consumer value, and great marketing.
Just as the availability of “killer” software applications was key to the growth of the personal
computer, the Internet is a main driver behind the success of many of today’s—and tomorrow’s—
digital consumer devices. The ability to exchange digital media (e.g., music, data, pictures, or video)
between information appliances and personal computers has become a key component of today’s
consumer revolution. The success of today’s digital consumer revolution is based on the infrastructure that was created by yesterday’s consumer successes. This dynamic is sometimes referred to as
the “virtuous cycle”—the logical antithesis of the vicious cycle.
In the case of the Internet, the virtuous cycle means that more bandwidth drives greater capability,
which drives new applications, which drive an increased demand for more bandwidth.
Mathematically, the numerator of the fraction (capability) continues to grow while the denominator
(cost) continues to shrink as semiconductor technology advancements drive costs down. The
capability-per-cost ratio therefore tends toward infinity. This is why consumer digital electronics
continues to grow at such an explosive rate.
However, there are key elements such as consumer acceptance and governmental regulation/
deregulation that can negatively affect this explosive growth. A good example of government
regulation is illustrated by personal digital audio recording.
Over 20 years ago Digital Audio Tape (DAT) was developed and introduced into the consumer
market. DAT allowed consumers to make high-quality digital recordings in the home using 4mm
DAT tape cartridges. The Recording Industry Association of America (RIAA) was opposed to this
development. They claimed that this new format would enable rampant bootleg recordings that
would cause irreversible financial damage to the music industry.
Perhaps the argument was somewhat melodramatic since no consumer recording technology up
until that time had any real measurable impact on music sales. However, the impact of that argument
stalled the widespread introduction of DAT for almost 10 years. The result is that DAT never realized
anything near its market potential. Today, over 20 years later, CD-R/RW has become the premier
consumer digital audio recording technology. And history continues to prove that this format has had
no measurable effect on media sales.
Another phenomenon that has impacted product success is the delivery of consumer technology
ahead of available content. This was the case with the introduction of color television. It was
available years before there was any widespread availability of color television programming. The
added cost of color television without color programming almost killed the product.
The most recent case of this phenomenon has been seen in the slow introduction of HDTV—
high definition television. Here, the absence of high definition programming and the high cost of
HDTV sets have significantly impacted the adoption rate. HDTV is just now seeing some
growth resulting from widespread acceptance of the DVD and the gradual transition to the HDTV
programming format.
The term “digital consumer devices” covers a wide range of topics and products that make up
today’s consumer technology suite. Through Mr. Dhir’s insights and expertise in this market, you are
provided with a comprehensive review of an exciting and dynamic topic that most of us rely upon
daily. This book will help you navigate the vast landscape of new consumer technologies and gain a
better understanding of their market drivers.
Robert Bielby
Senior Director, Strategic Solutions Marketing, Xilinx, Inc.
Preface ......................................................................................................................... xvii
Chapter 1: Generation D—The Digital Decade ....................................................... 1
The Urge to Be Connected ................................................................................................. 1
Our Daily Lives ................................................................................................................... 1
The Digitization of Consumer Products ............................................................................... 2
Why Digitization? ............................................................................................................... 2
Converging Media .............................................................................................................. 3
Faster and Cheaper Components ....................................................................................... 3
Broadband Access—The Fat Internet Pipe ........................................................................... 4
Home Networking .............................................................................................................. 5
The Day of the Digital Consumer Device ............................................................................. 5
Bringing it All Together ....................................................................................................... 5
Chapter 2: Digital Consumer Devices ...................................................................... 6
Introduction ....................................................................................................................... 6
The Era of Digital Consumer Devices .................................................................................. 6
Market Forecast ................................................................................................................. 7
Market Drivers ................................................................................................................... 9
Phases of Market Acceptance ........................................................................................... 11
Success Factors and Challenges ........................................................................................ 11
Functional Requirements .................................................................................................. 12
What About the Personal Computer? ............................................................................... 13
Digital Home .................................................................................................................... 13
King of All—The Single All-Encompassing Consumer Device ............................................. 14
Summary ......................................................................................................................... 15
Chapter 3: Digital Television and Video ................................................................ 16
Introduction ..................................................................................................................... 16
History of Television .......................................................................................................... 17
Components of a Digital TV System ................................................................................. 18
Digital TV Standards ......................................................................................................... 21
SDTV and HDTV Technologies .......................................................................................... 22
Digital Set-top Boxes ........................................................................................................ 22
Market Outlook ............................................................................................................... 26
Integrated Digital Televisions ............................................................................................. 26
Digital Home Theater Systems .......................................................................................... 27
Digital Video Recorders .................................................................................................... 27
Summary ......................................................................................................................... 29
Chapter 4: Audio Players ........................................................................................ 31
Introduction ..................................................................................................................... 31
The Need for Digital Audio—Strengths of the Digital Domain ........................................... 31
Principles of Digital Audio ................................................................................................. 32
Digital Physical Media Formats .......................................................................................... 33
Pulse Code Modulation (PCM) .......................................................................................... 37
Internet Audio Formats .................................................................................................... 41
Components of MP3 Portable Players ............................................................................... 47
Flash Memory .................................................................................................................. 48
Internet Audio Players – Market Data and Trends .............................................................. 48
Other Portable Audio Products ......................................................................................... 50
Convergence of MP3 Functionality in Other Digital Consumer Devices .............................. 50
Internet Radio .................................................................................................................. 51
Digital Audio Radio .......................................................................................................... 51
Online Music Distribution ................................................................................................. 51
Summary ......................................................................................................................... 52
Chapter 5: Cellular/Mobile Phones ........................................................................ 54
Introduction ..................................................................................................................... 54
Definition ......................................................................................................................... 54
Landscape—Migration to Digital and 3G .......................................................................... 54
Standards and Consortia .................................................................................................. 62
Market Data .................................................................................................................... 64
Market Trends .................................................................................................................. 64
Summary ......................................................................................................................... 76
Chapter 6: Gaming Consoles .................................................................................. 77
Definition ......................................................................................................................... 77
Market Data and Trends ................................................................................................... 77
Key Players ....................................................................................................................... 79
Game of War ................................................................................................................... 80
Components of a Gaming Console ................................................................................... 86
Broadband Access and Online Gaming ............................................................................. 87
Gaming Consoles—More Than Just Gaming Machines ..................................................... 88
PC Gaming ...................................................................................................................... 89
Growing Convergence of DVD Players and Gaming Consoles ........................................... 89
New Gaming Opportunities .............................................................................................. 92
Summary ......................................................................................................................... 93
Chapter 7: Digital Video/Versatile Disc (DVD) ....................................................... 94
Introduction ..................................................................................................................... 94
The Birth of the DVD ........................................................................................................ 94
DVD Format Types ............................................................................................................ 94
Regional Codes ................................................................................................................ 95
How Does the DVD Work? ............................................................................................... 97
DVD Applications ........................................................................................................... 101
DVD Market Numbers, Drivers and Challenges ............................................................... 102
Convergence of Multiple Services ................................................................................... 106
Summary ....................................................................................................................... 108
Chapter 8: Desktop and Notebook Personal Computers (PCs) .......................... 109
Introduction ................................................................................................................... 109
Definition of the Personal Computer .............................................................................. 109
Competing to be the Head of the Household ................................................................. 110
The PC Fights Back ......................................................................................................... 113
Portable/Mobile Computing............................................................................................ 155
Requirements Overdose .................................................................................................. 158
New PC Demand Drivers ................................................................................................ 158
Summary ....................................................................................................................... 164
Chapter 9: PC Peripherals ..................................................................................... 165
Introduction ................................................................................................................... 165
Printers .......................................................................................................................... 165
Scanners ........................................................................................................................ 178
Smart Card Readers ....................................................................................................... 180
Keyboards ...................................................................................................................... 188
Mice .............................................................................................................................. 188
Summary ....................................................................................................................... 189
Chapter 10: Digital Displays ................................................................................. 190
Introduction ................................................................................................................... 190
CRTs—Cathode Ray Tubes .............................................................................................. 191
LCD—Liquid Crystal Displays .......................................................................................... 204
PDP—Plasma Display Panels ........................................................................................... 224
PALCD—Plasma Addressed Liquid Crystal Display ........................................................... 231
FEDs—Field Emission Displays ......................................................................................... 231
DLP—Digital Light Processor ........................................................................................... 232
Organic LEDs .................................................................................................................. 235
LED Video for Outdoors ................................................................................................. 236
LCoS—Liquid Crystal on Silicon ...................................................................................... 236
Comparison of Different Display Technologies ................................................................. 237
Three-dimensional (3-D) Displays .................................................................................... 238
The Touch-screen ........................................................................................................... 239
Digital Display Interface Standards .................................................................................. 242
Summary ....................................................................................................................... 245
Chapter 11: Digital Imaging—Cameras and Camcorders ................................... 247
Introduction ................................................................................................................... 247
Digital Still Cameras ....................................................................................................... 247
Digital Camcorders ......................................................................................................... 262
Summary ....................................................................................................................... 270
Chapter 12: Web Terminals and Web Pads ......................................................... 272
Introduction ................................................................................................................... 272
Web Pads/Tablets ........................................................................................................... 274
Role of the Service Provider ............................................................................................ 280
Summary ....................................................................................................................... 281
Chapter 13: Internet Smart Handheld Devices ................................................... 283
Introduction ................................................................................................................... 283
Vertical Application Devices ............................................................................................ 283
Smart Handheld Phones (or Smart Phones) ..................................................................... 283
Handheld Companions ................................................................................................... 284
History of the PDA ......................................................................................................... 286
PDA Applications ............................................................................................................ 289
PDA Form Factors .......................................................................................................... 301
Components of a PDA ................................................................................................... 302
Summary ....................................................................................................................... 308
Chapter 14: Screen and Video Phones ................................................................. 310
Introduction ................................................................................................................... 310
Definition ....................................................................................................................... 310
History ........................................................................................................................... 311
Screenphone Applications .............................................................................................. 312
The Screenphone Market ............................................................................................... 313
Categories and Types ..................................................................................................... 318
The Public Switched Telephone Network (PSTN) vs. the Internet Protocol (IP) Network ............ 319
Screenphone Components ............................................................................................. 325
Video Phone (Webcam) Using the PC ............................................................................. 330
e-Learning ...................................................................................................................... 331
Summary ....................................................................................................................... 331
Chapter 15: Automotive Entertainment Devices ................................................ 333
Introduction ................................................................................................................... 333
The Evolving Automobile ................................................................................................ 336
What is Telematics? ........................................................................................................ 337
The Automotive Electronics Market ................................................................................ 339
The Controversy ............................................................................................................. 341
The Human Touch .......................................................................................................... 344
Pushbutton Controls Versus Voice Recognition ................................................................ 348
Standardization of Electronics in Automobiles ................................................................. 351
Standards for In-vehicle Power Train and Body Electronics ............................................... 353
Standards for In-vehicle Multimedia Electronics ............................................................... 365
Components of a Telematics System ............................................................................... 379
Manufacturers and Products ........................................................................................... 385
Satellite Radio ................................................................................................................ 388
The Vision for Telematics ................................................................................................ 390
Summary ....................................................................................................................... 393
Chapter 16: eBooks ............................................................................................... 394
Introduction ................................................................................................................... 394
What is an eBook? ......................................................................................................... 394
Benefits of Using eBooks ................................................................................................ 395
Reading eBooks .............................................................................................................. 396
Market ........................................................................................................................... 396
Copy Protection ............................................................................................................. 397
Technology Basics ........................................................................................................... 398
Manufacturers and Products ........................................................................................... 399
Challenges ..................................................................................................................... 401
Summary ....................................................................................................................... 402
Chapter 17: Other Emerging and Traditional Consumer Electronic Devices .... 403
Introduction ................................................................................................................... 403
NetTV ............................................................................................................................ 403
E-mail Terminals ............................................................................................................. 404
Wireless E-mail Devices .................................................................................................. 405
Pagers ............................................................................................................................ 405
Internet-Enabled Digital Picture Frames ........................................................................... 406
Pen Computing or Digital Notepad ................................................................................. 408
Robot Animals—Robot Dogs .......................................................................................... 410
White Goods .................................................................................................................. 411
Lighting Control ............................................................................................................. 413
Home Control ................................................................................................................ 413
Home Security ................................................................................................................ 414
Energy Management Systems ......................................................................................... 416
Home Theater and Entertainment Systems ..................................................................... 416
Magnetic Recording ....................................................................................................... 416
VCRs—Video Cassette Recorders .................................................................................... 417
Custom-Installed Audio Systems ..................................................................................... 420
Receivers/Amplifiers ....................................................................................................... 420
Home Speakers .............................................................................................................. 420
Vehicle Security .............................................................................................................. 422
Vehicle Radar Detectors .................................................................................................. 422
Summary ....................................................................................................................... 422
Chapter 18: The Digital “Dream” Home .............................................................. 423
Emergence of Digital Homes .......................................................................................... 423
The Digital Home Framework ......................................................................................... 424
Productivity Increases in the Connected World ................................................................ 425
Broadband Access .......................................................................................................... 426
Home Networking .......................................................................................................... 442
The Different Needs of Different Consumer Devices ........................................................ 488
Residential Gateways ..................................................................................................... 489
Middleware .................................................................................................................... 497
Digital Home Working Group ......................................................................................... 513
Summary—The Home Sweet Digital Home ..................................................................... 514
Chapter 19: Programmable Logic Solutions
Enabling Digital Consumer Technology ........................................................ 516
What Is Programmable Logic? ........................................................................................ 516
Silicon—Programmable Logic Devices (PLDs) ................................................................... 523
IP Cores, Software and Services ...................................................................................... 541
Xilinx Solutions for Digital Consumer Systems ................................................................. 553
Addressing the Challenges in System Design .................................................................. 568
Using FPGAs in Digital Consumer Applications .................................................. 588
Memories and Memory Controllers/Interfaces ................................................................. 594
Discrete Logic to CPLD: The Evolution Continues ............................................................ 614
Signaling ........................................................................................................................ 625
Xilinx Solutions for EMI Reduction in Consumer Devices ................................................. 629
Summary ....................................................................................................................... 635
Chapter 20: Conclusions and Summary ............................................................... 638
Landscape ...................................................................................................................... 639
The Ultimate Digital Consumer Device ............................................................................ 640
Room for Many .............................................................................................................. 642
Components of a Typical Digital Consumer Device .......................................................... 643
The Coming of the Digital Home .................................................................................... 644
Key Technologies to Watch ............................................................................................. 645
In Closing ....................................................................................................................... 647
Index ....................................................................................................................... 649
The consumer electronics market is flooded with new products. In fact, the number of new consumer
devices that has been introduced has necessitated entirely new business models for the production
and distribution of goods. It has also introduced major changes in the way we access information and
interact with each other.
The digitization of technology—both products and media—has led to leaps in product development. It has enabled easier exchange of media, cheaper and more reliable products, and convenient
services. It has not only improved existing products, it has also created whole new classes of products.
For example, improvements in devices such as DVRs (digital video recorders) have revolutionized the way we watch TV. Digital modems brought faster Internet access to the home. MP3 players
completely changed portable digital music. Digital cameras enabled instant access to photographs
that can be emailed within seconds of taking them. And eBooks allowed consumers to download
entire books and magazines on their eBook tablets.
At the same time, digital technology is generating completely new breeds of products. For
example, devices such as wireless phones, personal digital assistants (PDAs) and other pocket-size
communicators are on the brink of a new era where mobile computing and telephony converge.
All this has been made possible through the availability of cheaper and more powerful semiconductor components. With advancements in semiconductor technology, components such as
microprocessors, memories, and logic devices are cheaper, faster, and more powerful with every
process migration. Similar advances in programmable logic devices have enabled manufacturers of
new consumer electronics devices to use FPGAs and CPLDs to add continuous flexibility and
maintain market leadership.
The Internet has brought about similar advantages to the consumer. Communication tasks such
as the exchange of emails, Internet data, MP3 files, digital photographs, and streaming video are
readily available today to everyone.
This book is a survey of the common thinking about digital consumer devices and their related
applications. It provides a broad synthesis of information so that you can make informed decisions
on the future of digital appliances.
Organization and Topics
With so many exciting new digital devices available, it is difficult to narrow the selections to a short
list of cutting edge technologies. In this book I have included over twenty emerging product categories that typify current digital electronic trends. The chapters offer exclusive insight into digital
consumer products that have enormous potential to enhance our lifestyles and workstyles. The text in
each chapter follows a logical progression from a general overview of a device, to its market dynamics, to the core technologies that make up the product.
Chapter 1 is an introduction to the world of digital technologies. It outlines some of the key
market drivers behind the success of digital consumer products.
Chapter 2 reviews the fundamentals of digitization of electronic devices. It presents an in-depth
understanding of the market dynamics affecting the digital consumer electronics industry.
Digital TV and video are growing at very fast pace. Chapter 3 deals extensively with these
products and includes in-depth coverage on set-top boxes, digital video recorders and home-entertainment systems.
Chapter 4 discusses the emerging digital audio technologies such as MP3, DVD-Audio and
Super Audio CD (SACD).
Chapter 5 explains next-generation cellular technologies and how cell phones will evolve in the future.
The popularity of gaming consoles is growing because of the availability of higher processing
power and a large number of games in recent years. Chapter 6 overviews the main competitive
products—Xbox, PlayStation 2 and GameCube.
The digital videodisc (DVD) player continues to be the fastest growing new product in consumer
electronics history. Chapter 7 describes how a DVD player works and provides an insight into the
three application formats of DVD.
While consumer devices perform a single function very well, the PC remains the backbone for
information and communication exchange. Chapters 8 and 9 provide extensive coverage of PC and
PC peripheral technologies.
Chapter 10 includes an overview of the various alternative display technologies including digital CRT, flat panel displays, LCDs, PDPs, and DLP technologies.
Chapter 11 presents details on digital imaging—one of the fastest growing digital applications.
It describes the market and the technology beneath digital cameras and camcorders.
Chapter 12 describes market, application, technology, and component trends in the web terminal
and web pad products areas.
PDAs and other handheld products constitute one of the highest growth segments in the consumer market. Chapter 13 describes the market dynamics and technology details of these devices.
Chapter 14 focuses on the developments in screenphone and videophone products.
Telematics technology has made a broad change to the facilities available to us while we travel.
Chapter 15 discusses the market dynamics and challenges of the evolving telematics industry.
Chapter 16 provides details on the emerging category of eBook products which promise to
change the way we access, store and exchange documents.
Changes occurring in consumer devices such as VCRs, pagers, wireless email devices, email
terminals, and NetTVs are discussed in Chapter 17.
Chapter 18 covers key technologies that continue to change the consumer landscape, such as set-top boxes that distribute voice, video, and data from the high-speed Internet to other consumer devices.
Chapter 19 provides a synopsis of the book and highlights some of its key trends and takeaways.
Who Should Read this Book
This book is directed to anyone seeking a broad-based familiarity with the issues of digital consumer
devices. It assumes you have some degree of knowledge of systems, general networking and Internet
concepts. It provides details of the consumer market and will assist you in making design decisions
that encompass the broad spectrum of home electronics. It will also help you design effective
applications for the home networking and digital consumer devices industry.
Generation D—The Digital Decade
The Urge to Be Connected
Humans have an innate urge to stay connected. We like to share information and news to enhance
our understanding of the world. Because of this, the Internet has enjoyed the fastest adoption rate of
any communication medium. In a brief span of 5 years, more than 50 million people connected to
the web. Today we have over 400 million people connected and sharing information. In fact, in 2003
we exchanged 6.9 trillion emails! Thanks to the Internet, we’ve become a “connected society.”
The Internet has created a revolution in the way we communicate, share information, and
perform business. It is an excellent medium for exchanging content such as online shopping information, MP3 files, digitized photographs, and video-on-demand (VoD). And its effect on business has
been substantial. For example, a banking transaction in person costs a bank $1.07 to process, while
the same transaction costs just a cent over the Internet. This translates into reduced time, lower cost,
lower investment in labor, and increased conveniences to consumers.
The dynamics of the networking industry are also playing into homes. We are demanding faster
Internet access, networked appliances, and improved entertainment. Hence, there is an invasion of
smart appliances such as set-top boxes, MP3 (MPEG-1 Audio Layer III) players, and digital TVs (televisions)
into today’s homes.
The digitization of data, voice, video, and communications is driving a consumer product
convergence, and this convergence has given birth to the need for networked appliances. The ability
to distribute data, voice, and video within the home has brought new excitement to the consumer.
The Internet, combined with in-home networking and “smart,” connected appliances, has
resulted in Generation D—the Digital Decade. Our personal and business lives have come to depend
on this digital revolution.
Our Daily Lives
Ten years ago cell phones were a luxury and personal digital assistants (PDAs) and web pads were
unheard of. Now our lives are filled with digital consumer devices.
Remember the post-it notes we left everywhere to remind us of important tasks? They’ve been
replaced by the PDA which tracks all our activities. The digital camera helps us save pictures in
digital format on our PCs and on CDs (compact disks). And we use the Internet to share these
pictures with friends and family around the world. Our music CDs are being replaced with MP3
audio files that we can store, exchange with friends, and play in a variety of places.
Thanks to our cell phones, the Internet is constantly available to us. Renting cars and finding
directions are easy with the use of auto PCs now embedded in most cars. No more looking at maps
and asking directions to make it from the airport to the hotel. And we no longer need to stand in line
at the bank. Through Internet banking from our PCs, we can complete our financial transactions with
the click of a few buttons.
While email was one step in revolutionizing our lives, applications such as Yahoo! Messenger,
MSN Messenger, and AOL Instant Messenger provide the ability to communicate real-time around
the world. Applications such as Microsoft NetMeeting allow us to see, hear, and write to each other
in real time, thus saving us substantial travel money. Digital consumer devices such as notebook PCs,
DVDs (digital versatile disk), digital TVs, gaming consoles, digital cameras, PDAs, telematics
devices, digital modems, and MP3 players have become a critical part of our everyday lives.
So what has enabled us to have this lifestyle? It is a mix of:
Digital consumer devices
Broadband networking technologies
Access technologies
Software applications
Communication services.
These items provide us with the communication, convenience, and entertainment that we’ve
come to expect.
The Digitization of Consumer Products
In the first 50 years of the 20th century, the mechanical calculator was developed to process digital
information faster. It was supercharged when the first electronic computers were born. In the scant 50
years that followed we discovered the transistor, developed integrated circuits, and then learned how
to build them faster and cheaper.
With the advent of commodity electronics in the 1980s, digital utility began to proliferate beyond
the world of science and industry. Mass digital emergence was at hand and the consumer landscape began
a transition from an industrial-based analog economy to an information-based digital economy. Thus
was born the PC, the video game machine, the audio CD, graphical computing, the Internet, digital
TV, and the first generation of dozens of new digital consumer electronic products.
Today digital data permeates virtually every aspect of our lives. Our music is encoded in MP3
format and stored on our hard drives. TV signals are beamed from satellites or wired through cable
into our homes in MPEG (Moving Pictures Experts Group) streams. The change from an analog
CRT (cathode-ray tube) TV to digital TV provides us with better, clearer content through high-definition TV. The move from photographs to digital pictures is making the sharing and storage of
image content much easier. The move from analog to digital cellular phones provides clearer, more
reliable calls. And it enables features such as call waiting, call forwarding, global roaming, gaming,
video communications, and Internet access from our cellular phones. We monitor news and investments over the Internet. We navigate the planet with GPS (Global Positioning System)-guided
graphical moving maps. We record pictures and movies on CompactFlash cards. We buy goods, coordinate our schedules, and send and receive email from the palms of our hands. All these applications
have been enabled by the move from analog to digital formats.
Why Digitization?
What’s driving this digital firestorm? There are three answers to this question: utility, quality, and economics.
By its nature digital data can be replicated exactly and distributed easily, which gives it much
more utility. (With analog formatted data, every subsequent generation deteriorates.) For example,
through digital technology you can quickly and easily send a copy of a picture you’ve just taken to
all your friends.
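The generational argument above can be sketched in a few lines of Python. This is purely illustrative: the sample values and the noise level are invented for the demo, not drawn from any real format.

```python
# Sketch: digital copies stay bit-for-bit identical across generations,
# while analog copies accumulate a little noise with every duplication.
import random

def analog_copy(signal, noise=0.01):
    """Each analog duplication perturbs every sample slightly."""
    return [s + random.uniform(-noise, noise) for s in signal]

def digital_copy(samples):
    """A digital duplication is an exact replica."""
    return list(samples)

original = [0.0, 0.5, 1.0, 0.5, 0.0]

analog, digital = original, original
for _ in range(100):  # 100 generations of copying
    analog = analog_copy(analog)
    digital = digital_copy(digital)

print(digital == original)   # True: still a perfect replica
print(max(abs(a - o) for a, o in zip(analog, original)))  # has drifted
```

The hundredth digital copy is indistinguishable from the first; the hundredth analog copy is not, which is exactly the utility advantage the text describes.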
Digital content has several quality advantages as well. It exhibits less noise as illustrated by the
lack of background hiss in audio content. It also enables superior image manipulation as seen in
shadow enhancement, sharpness control, and color manipulations in medical imaging.
Economically, digital data manipulation is much less expensive since it is able to leverage
semiconductor economies of scale. It becomes better and more affordable every year. For example,
an uncompressed, photographic quality digital picture requires less than 20 cents of hard drive space
to store.
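As a back-of-the-envelope check on that storage claim, consider the following sketch. The image dimensions and the dollars-per-gigabyte price are illustrative assumptions chosen to be roughly era-appropriate, not figures taken from the text.

```python
# Back-of-the-envelope storage cost for one uncompressed image.
# All figures below are illustrative assumptions, not measurements.
width, height = 3000, 2000     # ~6-megapixel image
bytes_per_pixel = 3            # 24-bit RGB, uncompressed
image_bytes = width * height * bytes_per_pixel

dollars_per_gb = 1.00          # assumed early-2000s hard drive price
cost = image_bytes / (1024 ** 3) * dollars_per_gb
print(f"{image_bytes / 1e6:.0f} MB -> ${cost:.3f} to store")  # well under 20 cents
```

Even an uncompressed photographic image lands at a few cents of disk space under these assumptions, consistent with the "less than 20 cents" figure above.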
The digital revolution is growing rapidly. The digitization of consumer products has led not only
to the improvement of existing products, but to the creation of whole new classes of products such as
PVRs (personal video recorders), digital modems, and MP3 players.
Digital products are synonymous with quality, accuracy, reliability, speed, power, and low cost.
Simply stated, anything that is digital is superior.
Converging Media
Communication is not about data or voice alone. It includes voice, data, and video. Since digital
media offers superior storage, transport and replication qualities, the benefits of analog to digital
migration are driving a demand for digital convergence. The different media types are converging
and benefiting from the advantages of digital media. We find a demand for its capabilities in an ever-expanding range of devices and applications:
Digital data (the world wide web)
Digital audio (CDs, MP3)
Digital video (DVD, satellite TV)
HDTV (high definition TV)
Digital communications (the Internet, Ethernet, cellular networks, wireless LANs, etc.)
For example, the circuit-switched network was the primary provider of voice communication.
However, the far cheaper data network (Internet backbone) accommodates voice, data and video
communication over a single network. Digital convergence has allowed availability of all these
media types through a host of consumer devices.
Faster and Cheaper Components
The availability of faster and cheaper components is fueling the digital revolution. Everyday consumer appliances are now embedded with incredible computing power. This is due to Moore’s law,
which predicts that the number of transistors on a microprocessor will approximately double every
12 to 18 months. Hence, every generation of new products yields higher computing power at lower cost.
Moore’s law, coupled with process migration, enables certain component types to provide more
gates, more features, and higher performance at lower costs:
FPGAs (field programmable gate arrays)
ASICs (application specific integrated circuits)
ASSPs (application specific standard products)
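The doubling behavior that Moore's law describes can be sketched numerically. The 18-month doubling period and the starting transistor count below are illustrative assumptions; as the text notes, the period is quoted anywhere from 12 to 18 months.

```python
# Moore's-law style projection: transistor count doubling every
# `doubling_months` months (18 assumed here for illustration).
def transistors(start_count, years, doubling_months=18):
    """Project transistor count after `years` of exponential doubling."""
    return start_count * 2 ** (years * 12 / doubling_months)

# Starting from an assumed 42 million transistors (roughly a
# 2000-era desktop CPU), three years gives two doublings:
print(transistors(42e6, years=3))  # 168,000,000 -- a 4x increase
```

The same curve is what lets each process migration deliver more gates and higher performance at lower cost.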
The microprocessor, for example, has seen no boundaries and continues to expand beyond speeds
of 3 GHz. While it remains to be seen what computing power is truly needed by the average consumer, this development has enabled the embedding of smart processors into every electronic
device—from a toaster to an oven to a web tablet. It has allowed software developers to provide the
highest quality in communication, entertainment, and information. And it enables programmable
logic devices such as FPGAs and CPLDs, memory, and hard disk drives to offer more power at lower cost.
Broadband Access—The Fat Internet Pipe
What started as a healthy communication vehicle among students, researchers, and university
professors is today the most pervasive communications technology available. The Internet has
positively affected millions of people around the world in areas such as communication, finance,
engineering, operations, business, and daily living. Perhaps the Internet’s best-known feature is the
World Wide Web (the Web), which provides unlimited resources for text, graphics, video, audio, and
animation applications.
Unfortunately, many people who access the Web experience the frustration of endless waiting for
information to download to their computers. The need for fast access to the Internet is pushing the
demand for broadband access. We are moving from phone line dial-up to new and improved connectivity platforms. Demand for high-speed Internet access solutions has fueled the proliferation of a
number of rival technologies that are competing with each other to deliver high-speed connections to
the home. The bulk of these connections are delivered by technologies based on delivery platforms
such as:
Wireless local loop
Satellite or Powerline
Telecommunication, satellite, and cable companies are looking for ways to enhance their
revenue sources by providing high-speed Internet access. While the average phone call lasts three
minutes, an Internet transaction using an analog modem averages over three hours and keeps the
circuits busy. This increased traffic requires more circuits, but comes with no incremental revenue
gain for the service provider.
Phone companies had to look for new techniques for Internet access. One such technology is
DSL, which offers several advantages. While phone lines are limited to speeds up to 56 Kb/s (kilobits per second), a rate many find inadequate, broadband communications offer much higher transfer
rates. And broadband connections are always on, which means users don’t have to go through a slow
log-on process each time they access the Internet.
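The difference those transfer rates make is easy to quantify. In the sketch below, the 4 MB file size (a typical MP3 track) and the 1.5 Mb/s DSL downstream rate are illustrative assumptions.

```python
# Rough download-time comparison: 56 kb/s dial-up vs. an assumed
# 1.5 Mb/s DSL link, ignoring protocol overhead and line conditions.
def download_seconds(size_mb, kbps):
    """Seconds to move `size_mb` megabytes over a `kbps` kilobit/s link."""
    bits = size_mb * 1e6 * 8
    return bits / (kbps * 1e3)

song_mb = 4                              # a typical MP3 file
print(download_seconds(song_mb, 56))     # dial-up: ~571 s (~9.5 minutes)
print(download_seconds(song_mb, 1500))   # DSL:     ~21 s
```

A roughly 27x speedup on the same file, plus the always-on connection, is what makes broadband compelling for the applications described in this chapter.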
Home Networking
Many of us don’t realize that there are already several computers in our homes—we just don’t call
them computers. There are microchips in refrigerators, microwaves, TVs, VCRs (videocassette
recorders), and stereo systems. With all of these processors, the concept of home networking has
emerged. Home networking is the distribution of Internet, audio, video, and data between different
appliances in the home. It enables communications, control, entertainment, and information exchange between consumer devices.
The digitization of smart appliances is pushing the need for the Internet which, in turn, is fueling the demand for networked appliances. Several existing networking technologies, such as phone lines, Ethernet, and wireless, are currently used to network home appliances.
The Day of the Digital Consumer Device
While many believe that the day of the PC is over, this is not the case. The PC will survive due to its
legacy presence and to improvements in PC technology. The PC of the future will have increased
hard disk space to store audio, image, data and video files. It will provide access to the Internet and
control the home network. It will be a single device that can provide multiple services as opposed to
many recently introduced appliances that provide one unique benefit. In general, hardware and
software platforms will become more reliable.
The digitization of the consumer industry is leading to convergence and allowing consumers to
enjoy the benefits of the Internet. The digital decade introduces three types of product usage that
shape the market for new electronics devices—wireless, home networking and entertainment.
Wireless products will be smart products that enable wireless, on-the-go communications at all times
for voice, video and data. Home networking products such as gateways will extend PC computing
power to all smart devices in the home. In the entertainment area, digital technology is enabling
tremendous strides in compression and storage techniques. This will enable us to download, store,
and play back content of virtually any size.
Digital technologies will continue to enable wireless convenience through consumer products
and provide better communication, information, and entertainment.
Bringing it All Together
This book examines the different consumer devices that are emerging to populate our homes. It
previews the digital home of the future and describes the market and technology dynamics of
appliances such as digital TVs, audio players, cellular phones, DVD players, PCs, digital cameras,
web terminals, screen phones, and eBooks.
This book also provides information, including block diagrams, on designing digital consumer
devices. And it explains how solutions based on programmable logic can provide key advantages to
the consumer in flexibility, performance, power, and cost.
Digital Consumer Devices
The average home is being invaded—by faster and smarter appliances. Traditional appliances such as
toasters, dishwashers, TVs, and automobiles are gaining intelligence through embedded processors.
Our cars have had them for years. Now, they appear in our children’s toys, our kitchen appliances,
and our living room entertainment systems. And their computing power rivals that of previous
generation desktop computers.
In addition, drastic price reductions are bringing multiple PCs into our homes. This has created
the need for high-speed, always-on Internet access in the home. The PC/Internet combination not
only serves as a simple communication tool, it also enables vital applications like travel, education,
medicine, and finance. This combination also demands resource sharing. Data, PC peripherals (e.g.,
printers, scanners) and the Internet access medium require device networking to interact effectively.
These requirements have helped spawn a new industry called home networking.
This chapter provides a peek into the consumer home and shows how the landscape of digital
consumer devices is evolving. It describes the types of digital consumer devices that are emerging
and how they interact with each other.
The Era of Digital Consumer Devices
The home product industry is buzzing with new digital consumer devices. They are also called
information appliances, Internet appliances, intelligent appliances, or iAppliances. These smart
products can be located in the home or carried on the person. They are made intelligent by embedded
semiconductors and their connection to the Internet. They are lightweight and reliable and provide
special-purpose access to the Internet. Their advanced computational capabilities add more value and
convenience when they are networked.
Although the PC has long dominated the home, it has been error-prone, which has hindered its
penetration into less educated homes. This compares with the television or the telephone, which have
seen much wider adoption not only among the educated, but also the uneducated and lower-income
households. This has led to the introduction of digital consumer devices. They are instant-on devices
that are more stable and cheaper than PCs when developed for a specific application. They can also
be more physically appropriate than a large, bulky PC for certain applications.
Since they are focused on functionality, digital consumer devices generally perform better than a
multi-purpose PC for single, specific functions. Web surfing is the most popular application, with
other applications such as streaming media and online gaming growing in popularity. These devices
are easier to use than PCs, simple to operate, and require little or no local storage.
Digital consumer devices encompass hardware, software, and services. They are designed
specifically to enable the management, exchange, and manipulation of information. They are enabled
by technologies such as:
Internet convergence and integration
Wired and wireless communications
Software applications
Semiconductor technology
Personal computing
Consumer electronics
Digital consumer devices also enable a broader range of infotainment (information and entertainment) by providing services such as accessing email, checking directions when on the road,
managing appointments, and playing video games.
Examples of key digital consumer devices
The following list includes some of the more common digital consumer devices.
Audio players – MP3 (Internet audio), CD, SACD, DVD-Audio player
Digital cameras and camcorders
Digital displays (PDP, LCD, TFT)
Digital TVs (SDTV, HDTV)
Digital VCRs, also called DVRs (digital video recorders) or personal video recorders (PVRs)
DVD players
Email terminals
Energy management units, automated meter reading (AMR) and RF metering
Gaming consoles
Internet-enabled picture frames
Mobile phones
NetTVs or iTV-enabled devices
PDAs and smart handheld devices
Robot animals
Screen phones or video phones
Security units
Set-top boxes
Telematic or auto PCs
Web pads, fridge pads, Web tablets
Web terminals
White goods (dishwasher, dryer, washing machine)
While PCs and notebook PCs are not traditionally considered digital consumer devices, they
have one of the highest penetration rates of any consumer product. Hence, this book will cover
digital consumer devices as well as PCs and PC peripherals.
Market Forecast
The worldwide acceptance and shipment of digital consumer devices is growing rapidly. While
market forecasts vary in numbers, market data from several research firms show exploding growth in
this market.
Market researcher IDC (International Data Corporation) reports that in 2003, 18.5 million digital
consumer device units shipped compared with 15.7 million units of home PCs. Dataquest/Gartner Group reports that the worldwide production of digital consumer devices has exploded from
1.8 million units in 1999 to 391 million units in 2003. Complementing the unit shipment, the worldwide revenue forecast for digital consumer devices was predicted to grow from $497 million in 1999
to $91 billion in 2003.
The digital consumer device segment is still emerging but represents exponential growth potential as more devices include embedded intelligence and are connected to each other wirelessly. The
digital consumer device space includes:
Internet-enabling set-top boxes
Screen phones
Internet gaming consoles
Consumer Internet clients
Digital cameras
Smart handheld devices (personal and PC companions, PDAs, and
vertical industry devices)
High-end and low-end smart phones
Alphanumeric pagers
Bear Stearns believes that the market for these products will grow to 475 million units by 2004 from roughly 88 million in 2000. The market for digital consumer devices will grow from 100 million-plus users and $460 billion in the mid-1990s to over 1 billion users and $3 trillion by 2005.
The high-tech market research firm Cahners In-Stat finds that the digital consumer devices
market will heat up over the next several years with sales growing over 40% per year between 2000
and 2004. Predicted worldwide unit shipments for some key digital consumer devices are:
DVD players growing to over 92 million units by 2005.
Digital cameras growing to 41 million units and digital camcorders growing to 20 million
units by 2005.
Digital TV growing to 10 million units by 2005.
Smart handheld devices growing to 43 million units by 2004.
Gaming consoles reaching 38 million units by 2005.
DVRs growing to 18 million units by 2004.
Internet audio players growing to 17 million units by 2004.
NetTVs growing to 16 million units by 2004.
Email terminals, web terminals and screen phones reaching 12 million units by 2004.
PDAs growing to 7.8 million units by 2004.
Digital set-top boxes growing to over 61 million units by 2005.
Mobile handsets reaching 450 million units by 2005.
The Consumer Electronics Association (CEA) reports that sales of consumer electronics goods from manufacturers to dealers surpassed $100 billion in 2002 and were expected to top $105 billion in 2003, setting new annual sales records and marking the eleventh consecutive year of growth for the industry. The spectacular growth in sales of consumer electronics is due in large part to the wide variety of products made possible by digital technology. Among the categories most affected by the digital revolution are in-home appliances, home information products, and mobile electronics. CEA predicts strong growth in the video appliance category in 2004,
with products such as digital televisions, camcorders, set-top boxes, personal video recorders, and
DVD players leading the charge. By 2005, worldwide home-network installations are expected to
exceed 40 million households and the total revenues for such installations are expected to exceed
$4.75 billion.
Certainly the digital consumer device is seeing significant market penetration. But a critical issue plaguing the market is the widespread perception that digital consumer devices are defined by failed products such as the Netpliance i-opener and the 3Com Audrey. This perception overlooks the success of handheld devices (PDAs) and TV-based devices (set-top boxes, digital TVs, etc.). Internet handheld devices, TV-based devices, and Internet gaming consoles are the most popular consumer devices based on both unit shipments and revenues generated.
Market Drivers
The most influential demand driver for digital consumer devices is infotainment acceleration. This is
the ability to access, manage, and manipulate actionable information at any time and any place.
Most people feel that the PC takes too long to turn on and establish an Internet connection. Users
want instant-on devices with always-on Internet access. Moreover, people want the ability to take
their information with them. For example, the Walkman allows us to take our music anywhere we go.
The pager makes us reachable at all times. And the mobile phone enables two-way communication from anywhere. Interconnected, intelligent devices yield a massive acceleration in communication and data sharing. They also drastically increase our productivity and personal convenience.
Digital consumer devices are only now coming into their own because the cost of information must be reasonable compared to the value of that information, and technology is finally tipping the balance. Cost can be measured in terms of ease-of-use, price, and reliability. Value translates into timeliness of information delivery, usefulness, and flexibility. The convergence of several technologies enables vendors to introduce appliances that are simple-on-the-outside and complex-on-the-inside. Buyers can now obtain value at an affordable price.
The Integrated Internet
The development of the Internet has been key to the increases in information acceleration. This is
best illustrated by the exponential growth in Internet traffic. The ongoing development of Internet
technologies is enabling the connection of every intelligent device, thus speeding the flow of information for the individual. This linkage is also making possible the introduction of Internet-based services. For example, consumers are realizing the benefit of integrated online travel planning
and management services. They can make real-time changes based on unforeseen circumstances and
send that information automatically to service providers over the Internet. If a flight is early, this
information is passed from the airplane’s onboard tracking system to the airline’s scheduling system.
From there, it is sent to the passenger’s car service which automatically diverts a driver to the airport
20 minutes early to make the pickup.
Today, many find it difficult to imagine life without the Internet. Soon, it will be hard to imagine life without seamless access to information and services, access that does not require us to deliberately go “on-line.”
Semiconductor Technology
Advances in semiconductor technology have produced low-power silicon devices that are small,
powerful, flexible, and cheap. One such advance is system-on-a-chip (SoC) technology, in which multiple functions such as microprocessors, logic, signal processing, and memory are integrated on a single chip. In the past, each of these functions was handled by a discrete chip.
Smaller chips that execute all required functions have several advantages. They allow smaller
product form factors, use less power, reduce costs for manufacturers, and offer multiple features. In
addition, many can be used for multiple applications and products depending on the software placed
on top. There are several companies specifically designing chips for digital consumer devices
including Xilinx, Motorola, ARM, MIPS, Hitachi, Transmeta, and Intel. This trend in semiconductor
technology allows new generations of consumer products to be brought to market much faster while
lowering costs. With process technologies at 90-nm (0.09 micron) and the move to 12-inch (300-mm)
wafers, the cost of the devices continues to see drastic reduction. In addition, integration of specific
features allows processors to consume less power and enhance battery life. This provides longer
operation time in applications such as laptop PCs, web tablets, PDAs, and mobile phones.
Software and Middleware
New software architectures like Java and applications like the wireless application protocol (WAP)
micro-browser are proving key to enabling digital consumer devices. For example, the Java platform
and its applications allow disparate devices using different operating systems to connect over wired and wireless networks. They enable the development of compact applications, or applets, for use on
large (e.g., enterprise server) or small (e.g., smart phone) devices. These applets can also be used
within Web pages that support personal information management (PIM) functions such as e-mail and
calendars. Other middleware technologies such as Jini, UPnP, HAVi, and OSGi also contribute to
increased production of digital consumer devices. And the move toward software standards in
communications (3G wireless, Bluetooth, wireless LANs) and copyrights makes it easier to produce
and distribute digital devices.
Storage
The demand for low-cost storage is another key market driver for digital consumer devices. The declining cost per megabyte (MB) lowers the cost of delivering a given product and is yielding new storage products in both hard drives and solid-state memory such as DRAM, SRAM, and flash memory. Falling hard drive costs, in particular, are fueling new products such as set-top boxes and MP3 players.
Communications
Wired and wireless communications are gradually becoming standardized and less expensive, driven by market deregulation, acceptance of industry standards, and the development of new technologies. They provide rapid, convenient communication. For example, the advent of 2.5G and 3G wireless will enable greater transfer rates, and new wireless technologies such as IEEE 802.11b/a/g and HiperLAN2 can be used to connect public areas (hotels and airports), enterprises, and homes. These LAN technologies evolved from enterprise data networks, so their focus has been on increasing data rates, since over two-thirds of network traffic is data. To that end, newer wireless LAN technologies are also designed to provide higher quality of service.
Phases of Market Acceptance
While huge growth is predicted for digital consumer devices, the acceptance of these products will
be in phases.
In the pre-market acceptance phase there is:
Hype and debate
Few actual products
Limited business models
Limited Web pervasiveness
Less technology ammunition
The acceptance of these products will lead to:
Rise of the digital home
Broader range of products
Diversification of business models
Heightened industry support
Increased bandwidth for home networking and wireless
Maturation of the Internet base
Success Factors and Challenges
Some of the factors contributing to the success of digital consumer devices are:
Services are key to the business model. The success of some products such as web tablets
requires the development of thorough business models to provide real value. The key is to
offer low-cost solutions backed by partnerships and sustainable services.
Product design must achieve elegance. Consumers often buy products that are functionally
appealing and possess design elegance. For example, thinner PDAs will achieve higher
success than bigger ones.
Branding and channels are important because customers hesitate to buy products that do not
have an established name and supply channel.
Industry standards must be adopted in each area to enable adjacent industries to develop
products. For example, wired and wireless data communications are becoming standardized
and less expensive. They are driven by market deregulation, acceptance of industry standards, and development of new broadband and networking technologies. Critical
technologies such as broadband (high-speed Internet) access, wireless, and home networking must hit their strides.
Industry and venture capital investment must be heightened.
New product concepts must gain significant consumer awareness. (Consumers who do not understand the value and function of a tablet PC, for example, will not buy one.)
The Internet must be developed as a medium for information access, dissemination, and commerce.
Advances in semiconductor technology that enabled development of inexpensive SoC
technologies must continue.
Development of software platforms such as Java must continue.
The functionality of multiple devices must be bundled. Multiple-function products can be
very compelling. (Caveat: Bundling functionality is no guarantee of market acceptance,
even if the purchase price is low. Because there is no formula for what users want in terms
of function, form factor, and price, an opportunity exists for new product types in the market.)
Some of the major outstanding issues affecting the growth of the market are:
Single-use versus Multi-use – The three design philosophies for digital consumer devices are:
• Single-purpose device, like a watch that just tells time.
• Moderately integrated device such as pagers with email, phones with Internet access,
and DVD players that play multiple audio formats.
• All-purpose device such as the PC or the home entertainment gateway.
Note that there is no clear way to assess each category before the fact. One must see the
product, the needs served, and the customer reaction.
Content Integration – How closely tied is the product solution to the desired data? Can
users get the data they want, when they want it, and in the form they want it? Can they act
on it in a timely fashion? Digital consumer devices allow this, but not without tight integration with the information sources.
Replacement or Incremental – Are the products incremental or do they replace existing
products? Most products represent incremental opportunities similar to the impact of the
fractional horsepower engine. Some legacy product categories provide benefits over new
categories like the speed of a landline connection and clarity of full-screen documents. Also,
how many devices will users carry? Will users carry a pager, a cell phone, a wireless e-mail device, a PDA, a tape Walkman, and a portable DVD player?
How Much Complexity – There is no rule on how much complexity to put into a device.
But there is a tradeoff, given power and processing speeds, between ease-of-use and power.
While most devices today have niche audiences, ease-of-use is key to mass-audience acceptance.
Access Speeds, Reliability, and Cost – Issues facing Internet access via mobile digital
consumer devices include:
• Establishing standards
• Developing successful pricing models
• Improving reliability and user interfaces
• Increasing geographical coverage.
Business Model – What is the right business model for a digital consumer device company? A number of business models, such as service subscription, on-demand subscription, technology licensing, and up-front product purchase, can be successful. Investors should focus on whether a company is enabling a product solution and whether the company is unlocking that value.
Industry Standards – Industry standards have not yet been determined in many areas such
as software platform, communication protocol, and media format. This increases the risk for
vendors investing in new products based on emerging technologies. And it raises the risk for
users that the product they purchase may lose support as in the case of the DIVX DVD
standard. It also raises incompatibility issues across geographies. In most areas, however,
vendors are finding ways to manage compatibility problems and standards bodies are gaining ground.
Functional Requirements
Some of the key requirements of digital consumer devices to gain market acceptance include:
Ubiquity – The prevalence of network access points.
Reliability – Operational consistency in the face of environmental fluctuation such as noise
interference and multi-path.
Cost – Affordable for the mass market.
Speed – Support of high-speed distribution of media-rich content (>10 Mb/s).
Mobility/portability – Support of untethered devices. The ability to access and manipulate
actionable information at any time and in any place is a must.
QoS (Quality of Service) – Scalable QoS levels matched to the requirements of individual applications.
Security – User authentication, encryption, and remote access protection.
Remote management – Enabled for external network management (queries,
configuration, upgrades).
Ease-of-use – Operational complexity must be similar to existing technologies such as TVs
and telephones.
What About the Personal Computer?
Technology, not economics, is the Achilles’ heel of the PC. Until the arrival of the digital consumer
device, access to the Web and e-mail was the exclusive domain of the PC. Digital consumer devices
are clearly being marketed now as an alternative to the PC to provide network services. Previously,
volume economics gave the PC an edge. But it has weaknesses such as:
Long boot-up time
Long Internet connection time
Inability to be instant-on
Not always available
Not easy to use
Not truly portable
Complicated software installations
This has created an opportunity for a new wave of consumer products. Users want instant-on appliances that leverage their connection to the Internet and to other devices.
Digital consumer devices provide Internet access and they are low cost, consumer focused, and
easy to use. The acceptance of these appliances is causing PC manufacturers to reduce prices.
However, the PC will not go away because of the infrastructure surrounding it and its productivity.
We have entered the era of PC-plus where the PC’s underlying digital technology extends into new
areas. For example, the notebook PC continues to thrive thanks to its valuable enhancements and its plummeting prices. And the PC is an ideal home gateway given capabilities such as ample storage space and processor speed.
Digital Home
With the invasion of so many digital consumer devices and multiple PCs into the home, there is a big
push to network all the devices. Data and peripherals must be shared. The growing ubiquity of the
Internet is raising consumer demand for home Internet access. While many digital home devices
provide broadband access and home networking, consumers are not willing to tolerate multiple
broadband access technologies because of cost and inconvenience. Rather, consumers want a single
broadband technology that networks their appliances for file sharing and Internet access.
Networking itself is not new to the home. Over the past few years, appliances have been networked into “islands of technology.” For example, the PC island includes multiple PCs, a printer, a scanner,
and PDA. It networks through USB 1.1, parallel, and serial interfaces. The multimedia island
consists of multiple TVs, VCR players, DVD players, receivers/amplifiers, and speakers. This island
has been connected using IEEE 1394 cables. Home networking technologies connect the different
devices and the networked islands.
The digital home consists of:
Broadband Access – Cable, xDSL, satellite, fixed wireless (IEEE 802.16), powerline,
Fiber-to-the-home (FTTH), ISDN, Long Reach Ethernet (LRE), T1
Residential Gateways – Set-top boxes, gaming consoles, digital modems, PCs, entertainment gateways
Home Networking Technologies –
• No new wires – phonelines, powerlines
• New wires – IEEE 1394, Ethernet, USB 2.0, SCSI, optic fiber, IEEE 1355
• Wireless – HomeRF, Bluetooth, DECT, IEEE 802.11b/a/g, HiperLAN2, IrDA
Middleware Technologies – OSGi, Jini, UPnP, VESA, HAVi
Digital Consumer Devices – Digital TV, mobile phones, PDAs, web pads, set-top box,
digital VCRs, gaming consoles, screen phones, auto PCs/telematics, home automation,
home security, NetTV, PCs, PC peripherals, digital cameras, digital camcorders, audio
players (MP3, other Internet, CD, DVD-Audio, SACD), email terminals, etc.
King of All—The Single All-Encompassing Consumer Device
Will there ever be a single device that provides everything we need? Or will there continue to be
disparate devices providing different functionality? As ideal as a single-device solution would be, the
feasibility is remote because of these issues:
Picking the Best in Class – Users often want to have more features added to a particular
device. But there is a tendency to prefer the best-in-class of each category to an all-in-one
device because of the tradeoffs involved. For example, a phone that does everything gives up
something, either in battery life or size.
Social Issues – How do consumers use devices? Even though there is a clock function in a
PDA, a wireless e-mail device, and a phone, we still use the wristwatch to find the time.
This is due to information acceleration and the fact that we do not want to deal with the
tradeoffs of a single device. But we do want all devices to communicate with each other and
have Internet protocol (IP) addresses so that we see the same time on all of our clocks.
Size versus Location – There is a strong correlation between the size of a device and the
information it provides. For example, we don’t want to watch a movie on our watch, or
learn the time from our big-screen TV. Also, we would prefer a wired connection for speed,
but we will access stock quotes on a PDA when there is no alternative.
Moore’s Law Implications – We never really get the device we want because once we get it, a better, faster, and sleeker one comes out. For example, when we finally got the modem of all modems—at 14.4K—the 28.8K version was introduced. The key issue is not the device, but the continuation of Moore’s and Metcalfe’s Laws.
However, having the king of all devices—also known as the gateway—does provide a benefit. It
introduces broadband access and distributes it to multiple devices via home networking. It also provides a unique platform with massive storage for access by these devices. And it provides
features such as security at the source of broadband access.
Summary
The era of digital convergence is upon us. From pictures to e-mail, from music to news, the world
has gone digital. This digital explosion and media convergence has given birth to several digital
consumer devices which provide communication, connectivity and conveniences. The landscape of
tomorrow’s home will look very different as consumer devices grow into smarter appliances that are
networked to increase productivity and provide further conveniences. Market researchers predict that
digital consumer devices will out-ship consumer PCs by 2005 in the U.S. While there are several
new-product issues that need to be addressed, the high-volume digital consumer devices will be
PDAs, set-top boxes, digital TVs, gaming consoles, DVD players, and digital cameras.
Digital Television and Video
Digital television (TV) offers a way for every home, school, and business to join the Information
Society. It is the most significant advancement in television technology since the medium was
created almost 120 years ago. Digital TV offers more choices and makes the viewing experience
more interactive.
The analog system of broadcasting television has been in place for well over 60 years. During
this period viewers saw the transition from black and white to color TV technology. This migration
required viewers to purchase new TV sets, and TV stations had to acquire new broadcast equipment.
Today the industry is again going through a profound transition as it migrates from conventional TV
to digital technology. TV operators are upgrading their existing networks and deploying advanced
digital platforms.
While the old analog-based system has served the global community very well, it has reached its
limits. Picture quality is as good as it can get. The conventional system cannot accommodate new
data services. And, an analog signal is subject to degradation and interference from things such as
low-flying airplanes and household electrical appliances.
When a digital television signal is broadcast, images and sound are transmitted using the same
code found in computers—ones and zeros. This provides several benefits:
Cinema-quality pictures
CD-quality sound
More available channels
The ability to switch camera angles
Improved access to new entertainment services
Many of the flaws present in analog systems are absent from the digital environment. For
example, both analog and digital signals get weaker with distance. While a conventional TV picture
slowly degrades for viewers living a long distance from the broadcaster, a digital TV picture stays
perfect until the signal becomes too weak to receive.
Service providers can deliver more information on a digital system than on an analog system.
For example, a digital TV movie occupies just 2% of the bandwidth normally required by an analog
system. The remaining bandwidth can be filled with programming or data services such as:
Video on demand (VoD)
Email and Internet services
Interactive education
Interactive TV commerce
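The capacity gain described above can be illustrated with rough, representative figures of my own (not numbers from the text): a 6 MHz cable channel that carries one analog program can deliver roughly 27 Mb/s of digital payload with 64-QAM modulation, while a standard-definition MPEG-2 program needs around 4 Mb/s.

```python
# Illustrative arithmetic only; the rates below are typical assumed values,
# not figures quoted in this chapter.
QAM64_PAYLOAD_MBPS = 27.0      # approx. usable digital payload of one 6 MHz cable channel
MPEG2_SD_PROGRAM_MBPS = 4.0    # approx. rate of one MPEG-2 standard-definition program

digital_programs = int(QAM64_PAYLOAD_MBPS // MPEG2_SD_PROGRAM_MBPS)
print(f"One 6 MHz channel: 1 analog program vs. about {digital_programs} digital programs")
```

On these assumptions, one analog channel slot carries about six digital programs, leaving the kind of spare bandwidth the text describes for video on demand and other data services.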
Eventually, all analog systems will be replaced by digital TV. But the move will be gradual to
allow service providers to upgrade their transmission networks and manufacturers to mass-produce
sufficient digital products.
History of Television
Television started in 1884 when Paul Gottlieb Nipkow patented the first mechanical television system. It worked by illuminating an image via a lens and a rotating disc (the Nipkow disc). Square apertures were cut out of the disc, which traced out lines of the image until the full image had been scanned. The more apertures there were, the more lines were traced, producing greater detail.
In 1923 Vladimir Kosma Zworykin replaced the Nipkow disc with an electronic component. This
allowed the image to be split into more lines, which produced greater detail without increasing the
number of scans per second. Images could also be stored between electronic scans. This system was
named the Iconoscope and patented in 1925.
J.L. Baird demonstrated the first mechanical color television in 1928. It used a Nipkow disc with
three spirals, one for each primary color—red, green and blue. However, few people had television
sets at that time so the viewing experience was less than impressive. The small audience of viewers
watched a blurry picture on a 2- or 3-inch screen.
In 1935 the first electronic television system was demonstrated by the Electric Musical Industries
(EMI) company. By late 1939, sixteen companies were making or planning to make electronic
television sets in the U.S.
In 1941, the National Television System Committee (NTSC) developed a set of guidelines for the
transmission of electronic television. The Federal Communications Commission (FCC) adopted the
new guidelines and TV broadcasts began in the United States. In subsequent years, television
benefited from World War II in that much of the work done on radar was transferred directly to
television set design. Advances in cathode ray tube technology are a good example of this.
The 1950s heralded the golden age of television. The era of color television commenced in the mid-1950s, and prices of TV sets gradually dropped. Towards the end of the decade, U.S. manufacturers were experimenting with a wide range of features and designs.
The sixties began with the Japanese adoption of the NTSC standards. Towards the end of the
sixties Europe introduced two new television transmission standards:
Systeme Electronique Couleur Avec Memoire (SECAM) is the television broadcast standard in France, the Middle East, and most of Eastern Europe. SECAM delivers 625 lines at 50 half-frames per second.
Phase Alternating Line (PAL) is the dominant television standard in Europe. PAL delivers
625 lines at 50 half-frames per second.
The first color televisions with integrated digital signal processing technologies were marketed
in 1983. In 1993 the Moving Picture Experts Group (MPEG) completed definition of MPEG-2 Video,
MPEG-2 Audio, and MPEG-2 Systems. That same year, the European Digital Video Broadcasting (DVB)
project was born. In 1996 the FCC established digital television transmission standards in the United
States by adopting the Advanced Television Systems Committee (ATSC) digital standard. By 1999
many communication mediums had transitioned to digital technology.
Chapter 3
Components of a Digital TV System
Behind a simple digital TV is a series of powerful components that provide digital TV technology.
These components include video processing, security and transmission networks.
A TV operator normally receives content from sources such as local video, cable, and satellite
channels. The content is prepared for transmission by passing the signal through a digital broadcasting system. The following diagram illustrates this process.
Note that the components shown are logical units and do not necessarily correspond to the
number of physical devices deployed in the total solution. The role of each component shown in
Figure 3.1 is outlined in the following category descriptions.
Figure 3.1: Basic Building Blocks of a Digital Broadcasting System
Compression and Encoding
The compression system delivers high-quality video and audio using a small amount of network bandwidth by minimizing the amount of data needed to represent the content. This is particularly useful for service providers who want to ‘squeeze’ as many digital channels as possible into a digital stream.
A video compression system consists of encoders and multiplexers. Encoders digitize, compress
and scramble a range of audio, video and data channels. They allow TV operators to broadcast
several digital video programs over the same bandwidth that was formerly used to broadcast a single
analog video program. Once the signal is encoded and compressed, an MPEG-2 data stream is
transmitted to the multiplexer. The multiplexer combines the outputs from the various encoders with
security and program information and data into a single digital stream.
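The multiplexer’s combining step can be sketched abstractly. In the toy model below, each encoder emits fixed-size packets tagged with a program ID, and the multiplexer interleaves them round-robin into one stream; the 188-byte size echoes the MPEG-2 transport packet, but everything else is a simplification of my own, not the real MPEG-2 systems layer.

```python
PACKET_SIZE = 188  # MPEG-2 transport stream packets are 188 bytes

def encoder_output(pid, n_packets):
    """Simulate one encoder emitting compressed packets for a single program."""
    return [{"pid": pid, "payload": bytes(PACKET_SIZE)} for _ in range(n_packets)]

def multiplex(streams):
    """Round-robin interleave packets from several encoders into one stream."""
    out, iters = [], [iter(s) for s in streams]
    while iters:
        for it in iters[:]:
            try:
                out.append(next(it))
            except StopIteration:
                iters.remove(it)
    return out

# Three "programs" sharing one digital stream
ts = multiplex([encoder_output(pid, 3) for pid in (32, 33, 34)])
print([p["pid"] for p in ts])  # packets alternate: 32, 33, 34, 32, ...
```

A real multiplexer also inserts program information and security data into the stream, as the text notes; the program IDs let the receiver pick its program’s packets back out.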
Modulation
Once the digital signal has been processed by the multiplexer, the video, audio, and data are modulated onto (combined with) a carrier signal. The unmodulated digital signal from the multiplexer has only two possible states—zero or one. Passing the signal through a modulation process maps it onto a larger number of carrier states, increasing the data transfer rate. The modulation technique used by TV operators depends on the geography of the franchise area and the overall network architecture.
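The link between the number of carrier states and the data rate can be made concrete: a symbol drawn from 2^n states carries n bits, so moving from QPSK (4 states) to 64-QAM (64 states) triples the bits carried per transmitted symbol. A small sketch:

```python
import math

# bits per symbol = log2(number of carrier states)
for name, states in [("BPSK", 2), ("QPSK", 4), ("16-QAM", 16), ("64-QAM", 64)]:
    bits_per_symbol = int(math.log2(states))
    print(f"{name:>6}: {states:3d} states -> {bits_per_symbol} bits per symbol")
```

Denser constellations raise throughput but are more sensitive to noise, which is one reason the choice of modulation depends on the network architecture.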
Conditional Access System
Broadcasters and TV operators interact with viewers on many levels, offering a greater program
choice than ever before. The deployment of a security system, called conditional access, provides
viewers with unprecedented control over what they watch and when. A conditional access system is
best described as a gateway that allows viewers to access a virtual palette of digital services.
A conditional access system controls subscribers’ access to digital TV pay services and secures
the operator’s revenue streams. Consequently, only customers who have a valid contract with the
network operator can access a particular service. Using today’s conditional access systems, network
operators can directly target programming, advertisements, and promotions to subscribers. They can
do this by geographical area, market segment, or viewers’ personal preferences. Hence, the conditional access system is a vital aspect of the digital TV business.
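Conceptually, the gatekeeping role described above is an entitlement lookup performed before a service is descrambled. The sketch below is a purely hypothetical model of that idea; the subscriber IDs, service names, and logic are my own illustration, not any real conditional access system.

```python
# Hypothetical entitlement table: the operator grants each subscriber a set of services.
entitlements = {
    "subscriber-001": {"basic", "movies"},
    "subscriber-002": {"basic"},
}

def can_descramble(subscriber_id: str, service: str) -> bool:
    """Allow descrambling only if the subscriber holds a valid entitlement."""
    return service in entitlements.get(subscriber_id, set())

print(can_descramble("subscriber-001", "movies"))  # True: valid contract
print(can_descramble("subscriber-002", "movies"))  # False: no entitlement
```

Real systems enforce this cryptographically with scrambled streams and entitlement messages, but the business rule is the same: only customers with a valid contract can access a given service.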
Network Transmission Technologies
Digital transmissions are broadcast through one of three systems:
Cable network
TV aerial
Satellite dish
Different service providers operate each system. If cable television is available in a particular
area, viewers access digital TV from a network based on Hybrid fiber/coax (HFC) technology. This
refers to any network configuration of fiber-optic and coaxial cable that redistributes digital TV
services. Most cable television companies are already using it. HFC networks have many advantages
for handling next-generation communication services—such as the ability to simultaneously
transmit analog and digital services. This is extremely important for network operators who are
introducing digital TV to their subscribers on a phased basis.
Additionally, HFC meets the expandable capacity and reliability requirements of a new digital
TV system. The expandable capacity feature of HFC-based systems allows network operators to add
services incrementally without major changes to the overall network infrastructure. HFC is essentially a ‘pay as you go’ architecture. It matches infrastructure investment with new revenue streams,
operational savings, and reliability enhancements.
An end-to-end HFC network is illustrated in Figure 3.2.
HFC network architecture is comprised of fiber transmitters, optical nodes, fiber and coaxial
cables, taps, amplifiers and distribution hubs. The digital TV signal is transmitted from the head-end
in a star-like fashion to the fiber nodes using fiber-optic feeders. The fiber node, in turn, distributes
the signals over coaxial cable, amplifiers and taps throughout the customer service area.
Customers who have aerials or antennas on their roofs are able to receive digital TV through one
of the following wireless technologies:
Multi-channel multi-point distribution system (MMDS) – MMDS is a relatively new
service used to broadcast TV signals at microwave frequencies from a central point, or head-end, to small rooftop antennas. An MMDS digital TV system consists of a head-end that receives signals from satellites, off-the-air TV stations, and local programming. At the head-end the signals are mixed with commercials and other inserts, encrypted, and broadcast.
Figure 3.2: End-to-end HFC Network
The signals are then re-broadcast from low-powered base stations up to 35 miles
from the subscriber's home. The receiving rooftop antennas are 18 to 36 inches wide and
have a clear line of sight to the transmitting station. A down converter, usually a part of the
antenna, converts the microwave signals into standard cable channel frequencies. From the
antenna the signal travels to a special set-top box where it is decrypted and passed into the
television. Today there are MMDS-based digital TV systems in use all around the U.S. and
in many other countries and regions, including Australia, South Africa, South America, and Europe.
Local multi-point distribution system (LMDS) – LMDS uses microwave frequencies in
the 28 GHz frequency range to send and receive broadband signals which are suitable for
transmitting video, voice and multimedia data. It is capable of delivering a plethora of
Internet- and telephony-based services. The reception and processing of programming and
other head-end functions are the same as with the MMDS system. The signals are then re-broadcast from low-powered base stations within a 4 to 6 mile radius of the subscribers' homes.
Signals are received using a six-square-inch antenna mounted inside or outside of the home.
As with MMDS, the signal travels to the set-top box where it is decrypted and formatted for
display on the television.
Digital Television and Video
Terrestrial – Terrestrial communications, or DTT, can be used to broadcast a range of
digital services. This system uses a rooftop aerial in the same way that most people receive
television programs. A modern aerial should not need to be replaced to receive the DTT
service. However, if the aerial is out of date, updating may be necessary. Additionally, DTT
requires the purchase of a new digital set-top box to receive and decode the digital signal.
DTT uses the Coded Orthogonal Frequency Division Multiplexing (COFDM) modulation
scheme. COFDM makes the terrestrial signal immune to multi-path reflections. That is, the
signal must be robust enough to traverse geographical areas that include mountains, trees,
and large buildings.
Satellite – Digital television is available through direct broadcast satellite (DBS) which can
provide higher bandwidth than terrestrial, MMDS or LMDS transmission. This technology
requires a new set-top box and a satellite dish because existing analog satellite dishes are
unable to receive digital TV transmissions. The fact that satellite signals can be received
anywhere makes service deployment as easy as installing a receiver dish and pointing it in
the right direction. Satellite programmers broadcast, or uplink, signals to a satellite. The
signals are often encrypted to prevent unauthorized reception. They are then sent to a set-top
box which converts them to a TV-compatible format.
Digital TV Standards
The standard for broadcasting analog television in most of North America is NTSC. The standards
for video in other parts of the world are PAL and SECAM. Note that NTSC, PAL and SECAM will
all be replaced over the next ten years with a new suite of standards associated with digital
television.
International organizations that contribute to standardizing digital television include:
Advanced Television Systems Committee (ATSC)
Digital Video Broadcasting (DVB)
The Advanced Television Systems Committee was formed to establish a set of technical standards for broadcasting television signals in the United States. ATSC digital TV standards include
high-definition television, standard definition television, and satellite direct-to-home broadcasting.
ATSC has been formally adopted in the United States where an aggressive implementation of digital
TV has already begun. Additionally, Canada, South Korea, Taiwan, and Argentina have agreed to use
the formats and transmission methods recommended by the group.
DVB is a consortium of about 300 companies in the fields of broadcasting, manufacturing,
network operation and regulatory matters. They have established common international standards for
the move from analog to digital broadcasting. DVB has produced a comprehensive list of standards
and specifications that describe solutions for implementing digital television in areas such as transmission, interfacing, security, and interactivity for audio, video, and data.
Because DVB standards are open, all compliant manufacturers can guarantee that their digital
TV equipment will work with other manufacturers’ equipment. There are numerous broadcast
services around the world using DVB standards and hundreds of manufacturers offering DVB-compliant equipment. While the DVB has had its greatest success in Europe, the standard also has
implementations in North and South America, China, Africa, Asia, and Australia.
Chapter 3
SDTV and HDTV Technologies
DTV is an umbrella term that refers to both high-definition television (HDTV) and standard definition television (SDTV). DTV can produce HDTV pictures with more image information than
conventional analog signals. The effect rivals that of a movie theater for clarity and color purity.
HDTV signals also contain multi-channel surround sound to complete the home theatre experience.
Eventually, some broadcasters will transmit multiple channels of standard definition video in the
same amount of spectrum now used for analog broadcasting. SDTV is also a digital TV broadcast
system that provides better pictures and richer colors than analog TV systems. Although both
systems are based on digital technologies, HDTV generally offers nearly six times the sharpness of
SDTV-based broadcasts.
Consumers who want to receive SDTV on their existing television sets must acquire a set-top
box. These devices transform digital signals into a format suitable for viewing on an analog-based
television. However, consumers who want to watch a program in HDTV format must purchase a new
digital television set.
Commonly called wide-screen, HDTVs are nearly twice as wide as they are high. They present
an expanded image without sacrificing quality. Standard analog televisions use squarer screens that are
3 units high and 4 units wide. For example, if the width of the screen is 20 inches, its height is about
15 inches. HDTV screens, however, are about one third wider than standard televisions, with screens
that are 9 units high by 16 wide. Current analog TV screens contain approximately 300,000 pixels
while HDTV televisions have up to 2 million pixels.
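The screen geometry above is easy to verify with a little arithmetic. This is an illustrative sketch: the 640×480 and 1920×1080 pixel grids are assumed here as representative analog-equivalent and HDTV rasters matching the "approximately 300,000" and "up to 2 million" pixel figures.

```python
# Illustrative figures: a 4:3 analog screen vs. a 16:9 HDTV screen.
def screen_height(width, aspect_w, aspect_h):
    """Height of a screen given its width and aspect ratio."""
    return width * aspect_h / aspect_w

# A 20-inch-wide 4:3 analog screen is about 15 inches high.
analog_height = screen_height(20, 4, 3)      # 15.0

# Approximate pixel counts quoted in the text (assumed rasters):
analog_pixels = 640 * 480                    # ~300,000 pixels
hdtv_pixels = 1920 * 1080                    # ~2 million pixels

print(analog_height)                         # 15.0
print(analog_pixels, hdtv_pixels)            # 307200 2073600
```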
In addition to classifying digital TV signals into HDTV and SDTV, each has subtype classifications for lines of resolution and progressive or interlaced scanning.
A television picture is made up of horizontal lines called lines of resolution, where resolution is
the amount of detail contained in an image displayed on the screen. In general, more lines mean
clearer and sharper pictures. Clearer pictures are available on new digital televisions because they
can display 1080 lines of resolution versus 525 lines on an ordinary television.
Progressive scanning is when a television screen refreshes itself line-by-line. It is popular in
HDTVs and computer monitors. Historically, early TV tubes could not draw a complete video display on the
screen before the top of the display began to fade. Interlaced scanning overcomes this problem by
refreshing the odd-numbered lines in one pass and the even-numbered lines in the next, so the whole
picture is updated in two passes. It is popular in analog televisions and low-resolution computer monitors.
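The difference between the two scanning schemes can be sketched as follows. This is a schematic illustration of scan order only, not display code:

```python
# Sketch of the two refresh schemes. Lines are numbered from 1 at the top.
def progressive_order(lines):
    """Progressive scan redraws every line, top to bottom, in each frame."""
    return list(range(1, lines + 1))

def interlaced_fields(lines):
    """Interlaced scan draws the odd lines in one field, the even in the next."""
    odd_field = list(range(1, lines + 1, 2))
    even_field = list(range(2, lines + 1, 2))
    return odd_field, even_field

print(progressive_order(6))      # [1, 2, 3, 4, 5, 6]
print(interlaced_fields(6))      # ([1, 3, 5], [2, 4, 6])
```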
Digital Set-top Boxes
Most consumers want to take advantage of the crystal-clear sound and picture quality of DTV, but
many cannot afford a new digital television set. To solve the problem, these viewers can use a set-top
box that translates digital signals into a format displayable on their analog TVs. In many countries,
service providers are retrofitting subscribers’ analog set-top boxes with new digital set-top boxes.
Additionally, some countries are pushing second-generation set-top boxes that support a range of
new services.
A set-top box is a type of computer that translates signals into a format that can be viewed on a
television screen. Similar to a VCR, it takes input from an antenna or cable service and outputs it to a
television set. The local cable, terrestrial, or satellite service provider normally installs set-top boxes.
The set-top box market can be broadly classified into these categories:
Analog set-top boxes – Analog set-top boxes receive, tune and de-scramble incoming
television signals.
Dial-up set-top boxes – These set-top boxes allow subscribers to access the Internet through
the television. An excellent example is the NetGem Netbox.
Entry-level digital set-top boxes – These digital boxes can provide traditional broadcast
television complemented by a Pay Per View system and a very basic navigation tool. They
are low cost and have limited memory, interface ports, and processing power. They are
reserved for markets that demand exceptionally low prices and lack interactivity via
telephone networks.
Mid-range digital set-top boxes – These are the most popular set-top boxes offered by TV
operators. They normally include a return path, or back channel, which provides communication with a server located at the head-end. They have double the processing power and
storage capabilities of entry-level types. For example, while a basic set-top box needs
approximately 1 to 2MB of flash memory for coding and data storage, mid-range set-top
boxes normally include between 4MB and 8MB. The mid-range set-top box is ideal for
consumers who want to simultaneously access a varied range of multimedia and Internet
applications from the home.
Advanced digital set-top boxes – Advanced digital set-top boxes closely resemble a
multimedia desktop computer. They can contain more than ten times the processing power
of a low-level broadcast TV set-top box. They offer enhanced storage capabilities of
between 16-32 MB of flash memory in conjunction with a high-speed return path that can
run a variety of advanced services such as:
• Video teleconferencing
• Home networking
• IP telephony
• Video-on-demand (VoD)
• High-speed Internet TV
Advanced set-top box subscribers can use enhanced graphical capabilities to receive high-definition TV signals. Additionally, they can add a hard drive if required. This type of
receiver also comes with a range of high-speed interface ports which allows it to serve as a
home gateway.
There is uncertainty about the development of the digital set-top box market in the coming years.
Most analysts predict that set-top boxes will evolve into a residential gateway that will be the
primary access point for subscribers connecting to the Internet.
Components of a Digital Set-top Box
The set-top box is built around traditional PC hardware technologies. It contains silicon chips that
process digital video and audio services. The chips are connected to the system board and communicate with other chips via buses. Advanced digital set-top boxes are comprised of four separate
hardware subsystems: TV, conditional access (CA), storage, and home networking interfaces (see
Figure 3.3).
The TV subsystem is comprised of components such as:
Channel modulator
MPEG-2 audio and video decoders
Graphics hardware
The tuner in the box receives a digital signal from a cable, satellite, or terrestrial network and
converts it to base-band. Tuners can be divided into three broad categories:
1. Broadcast In-band (IB) – Once a signal arrives from the physical transmission media, the
IB tuner isolates a physical channel from a multiplex and converts it to baseband.
2. Out Of Band (OOB) – This tuner facilitates the transfer of data between the head-end
systems and the set-top box. OOB tuners are widely used in cable set-top boxes to provide a
medley of interactive services. OOB tuners tend to operate within the 100 to 350 MHz
frequency band.
3. Return path – This tuner enables a subscriber to activate the return path and send data back
to an interactive service provider. It operates within the 5 to 60 MHz frequency band.
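As a toy illustration, the three tuner categories can be told apart by the frequency bands quoted above. The helper below is hypothetical; real cable plants vary, and treating everything outside the two quoted bands as broadcast in-band is a simplification.

```python
# Hypothetical helper mapping a carrier frequency (in MHz) to the tuner
# category that would handle it, using the bands quoted in the text.
def tuner_for(freq_mhz):
    if 5 <= freq_mhz <= 60:
        return "return path"
    if 100 <= freq_mhz <= 350:
        return "out of band (OOB)"
    # Simplification: anything else is treated as a broadcast channel.
    return "broadcast in-band (IB)"

print(tuner_for(30))    # return path
print(tuner_for(200))   # out of band (OOB)
print(tuner_for(550))   # broadcast in-band (IB)
```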
The baseband signal is then sampled to create a digital representation of the signal. From this
digital representation, the demodulator performs error correction and recovers a digital transport
layer bitstream.
A transport stream consists of a sequence of fixed-sized transport packets of 188 bytes each.
Each packet contains 184 bytes of payload and a 4-byte header. The header contains a critical 13-bit
packet identifier (PID) that identifies the elementary stream to which the packet belongs. The
transport demultiplexer selects and de-packetizes the audio and video packets of the desired program.
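The demultiplexer's first step can be sketched in a few lines. This is a minimal illustration rather than production demultiplexer code; the 188-byte packet size, the 0x47 sync byte, and the 13-bit PID layout are as defined by the MPEG-2 transport stream format.

```python
# Minimal sketch of extracting the 13-bit PID from a 188-byte MPEG-2
# transport packet, as a transport demultiplexer would.
def parse_pid(packet: bytes) -> int:
    assert len(packet) == 188, "transport packets are always 188 bytes"
    assert packet[0] == 0x47, "first byte must be the 0x47 sync byte"
    # PID: the low 5 bits of header byte 1, plus all 8 bits of byte 2.
    return ((packet[1] & 0x1F) << 8) | packet[2]

# Build a dummy packet carrying PID 0x1010.
header = bytes([0x47, 0x50, 0x10, 0x10])
packet = header + bytes(184)          # 4-byte header + 184-byte payload
print(hex(parse_pid(packet)))         # 0x1010
```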
Figure 3.3: Hardware Architecture of a Digital Set-top Box
Digital Television and Video
Once the demultiplexer finishes its work, the signal is forwarded to three different types of decoders:
1. Video – Changes the video signal into a sequence of pictures which are displayed on the
television screen. Video decoders support still pictures and are capable of formatting
pictures for TV screens with different resolutions.
2. Audio – Decompresses the audio signal and sends it to the speakers.
3. Data – Decodes interactive service and channel data stored in the signal.
The graphics engine processes a range of Internet file formats and proprietary interactive TV file
formats. Once rendered by the graphics engine, the graphics file is often used to overlay the TV’s
standard video display. The power of these processors will continue to increase as TV providers
attempt to differentiate themselves by offering new applications such as 3D games to their subscribers.
The following modem options are available to set-top box manufacturers:
Standard telephone modem for terrestrial, satellite and MMDS environments.
Cable modem for a standard cable network.
The system board’s CPU typically provides these functions:
Initializing set-top box hardware components.
Processing a range of Internet and interactive TV applications.
Monitoring hardware components.
Providing processing capabilities for graphic interfaces, electronic program guides,
and user interfaces.
Processing chipsets contain electronic circuits that manage data transfer within the set-top box
and perform necessary computations. They are available in different shapes, pin structures, architectures, and speeds. The more electronic circuits a processor contains, the faster it can process data.
The MHz value of the set-top box processor gives a rough estimate of how fast a processor processes
digital TV signals. 300-MHz chips are now becoming standard as demand from service providers increases.
A set-top box requires memory to store and manipulate subscriber instructions. In addition, set-top box elements like the graphics engine, video decoder, and demultiplexer require memory to
execute their functions. Set-top boxes use both RAM and ROM. Most functions performed by a set-top box require RAM for temporary storage of data flowing between the CPU and the various
hardware components. Video, dynamic, and non-volatile RAM are also used for data storage.
The primary difference between RAM and ROM is volatility. Standard RAM loses its contents
when the set-top box is powered off; ROM does not. Once data has been written onto a ROM chip,
the network operator or the digital subscriber cannot remove it. Most set-top boxes also contain
EPROMs and Flash ROM.
A digital set-top box also includes a CA system. This subsystem provides Multiple Service
Operators (MSOs) with control over what their subscribers watch and when. The CA system is
comprised of a decryption chip, a secure processor, and hardware drivers. The decryption chip
holds the algorithm section of the CA system. The secure processor contains the keys necessary to decrypt
digital services. The conditional access system drivers are vendor-dependent.
The ability to locally store and retrieve information is expected to become very important for
digital TV customers. While storage in first-generation set-top boxes was limited to ROM and RAM,
high capacity hard drives are beginning to appear. Their presence is being driven by the popularity of
DVR technology, among other things. They allow subscribers to download and store personal
documents, digital movies, favorite Internet sites, and e-mails. They also provide for:
1. Storing the set-top box’s software code.
2. Storing system and user data (such as user profiles, configuration, the system registry and
updateable system files).
3. Storing video streams (PVR functionality).
Some set-top boxes support home networking technology. While Ethernet is the most popular
technology, we can expect to see Bluetooth, DECT, IEEE 1394, USB 2.0, and HomePNA in future products.
Economically, set-top box manufacturers must design to a network operator’s unique requirements, so costs are high. To overcome this, several manufacturers are beginning to use highly
integrated chips that incorporate most required functionality in a single chip.
Market Outlook
Digital set-top boxes continue to be one of the fastest-growing products in the consumer electronics
marketplace. A report from Gartner Dataquest estimates that the digital set-top box market will grow
to 61 million units by 2005. They represent a huge revenue-earning potential for content owners,
service providers, equipment manufacturers, and consumer-oriented semiconductor vendors.
Many industry experts are predicting that set-top boxes will evolve into home multimedia
centers. They could become the hub of the home network system and the primary access to the
Internet. The set-top market is very dynamic and will change dramatically over the next few years.
We are already seeing a convergence of technology and equipment in TV systems such as the
addition of a hard drive to store and retrieve programs. Such changes will continue to occur as the
market expands.
Integrated Digital Televisions
An integrated digital television contains a receiver, decoder and display device. It combines the
functions of a set-top box with a digital-compatible screen. While the digital set-top box market is
increasing steadily, the integrated TV set market remains small. The main reason is that the business
model adopted by digital services providers is one of subsidizing the cost of set-top boxes in return
for the customer’s one- or two-year TV programming subscription. This is consistent with the model
adopted by mobile phone companies. The subsidized model favors customers in that their up-front
cost is minimal.
Additionally, integrated TVs lag behind digital set-top boxes in production and penetration of
worldwide TV households due to the following facts:
Worldwide production of analog TV sets is still growing.
Each analog TV set has an estimated life span of 10 to 15 years. Consumers are unlikely to
throw away a useable TV set in order to move to digital TV.
Low-cost digital set-top boxes offer an easier route to the world of digital TV than high-cost
digital TV sets.
Set-top boxes will remain the primary reception devices for satellite, terrestrial, and cable
TV services.
Digital Home Theater Systems
A typical home theater includes:
Large-screen TV (25 inches or larger)
Hi-fi VCR or DVD player
Audio/video receiver or integrated amplifier
Four or five speakers
According to CEA market research, nearly 25 million households—about 23% of all U.S.
households—own home theater systems. Perhaps the most important aspect of home theater is the
surround sound. This feature places listeners in the middle of the concert, film, TV show, or sporting
event. The growing interest in home theater is driven in large part by an interest in high-quality
audio. High-resolution audio formats such as multi-channel DVD-Audio and Super Audio CD
(SACD) have emerged. Both offer higher resolution than CD, multi-channel playback, and extras
like liner notes, lyrics and photos.
Digital Video Recorders
While the potential for personalized television has been discussed for years, it has failed to capture
viewers’ imaginations. This is beginning to change because some of the larger broadcasting companies have begun to develop business models and technology platforms for personal TV.
The concept of personalizing the TV viewing experience has become more attractive with
offerings from companies like TiVo that sell digital video recorders (DVRs). Similar to a VCR, a
DVR can record, play, and pause shows and movies. However, a DVR uses a hard disk to record
television streams. DVRs also use sophisticated video compression and decompression hardware.
Hard disk recording offers a number of benefits including:
Storing data in digital format – Because the hard drive stores content digitally, it maintains a higher quality copy of content than a VCR-based system. While an analog tape
suffers some loss with each recording and viewing, the quality of a digital recording is
determined by the DVR’s file format, compression, and decompression mechanisms.
‘Always-On recording’ – The storage mechanism is always available to record; you don’t
have to hunt for a blank tape. Like a PC, a DVR can instantly determine and inform you of
disk space available for recording. In addition, a DVR can prompt you to delete older
material on the hard disk to free up space.
Content management – A DVR file management system lets you categorize and search for
particular programs and video files. Rather than trying to maintain labels on tapes, you can
instantly browse DVR contents. (This is especially helpful for people who do not label tapes
and have to play the first few minutes of every tape to recall its contents!)
DVRs are also optimized for simultaneous recording and playback. This lets the viewer pause
live television, do an instant replay, and quick-jump to any location on the recording media. To do
this on a VCR, rewind and fast forward must be used excessively. Coupled with a convenient electronic program guide and a high-speed online service, DVRs can pause live television shows,
recommend shows based on your preferences, and automatically record your favorite shows. Taking
this a step further, some DVRs can preemptively record shows that might be of interest to you. They
do so by monitoring the shows you frequently watch and identifying their characteristics. They can then
record other shows with similar characteristics.
Core Technologies
A DVR is like a stripped-down PC with a large hard disk. A remote is used to select a program to
record, then the DVR captures and stores these programs on its hard disk for future viewing. A DVR
box is comprised of both hardware and software components that provide video processing and
storage functions. The software system performs these functions:
Manage multiple video streams.
Filter video and data content.
Learn viewers’ program preferences.
Maintain programming, scheduling, and dial-up information.
Current DVR software providers include TiVo, SonicBlue, Microsoft, and Metabyte.
The DVR’s tuner receives a digital signal and isolates a particular channel. The signal is then:
Encoded to MPEG-2 for storage on a local hard drive.
Streamed off the hard drive.
Transformed from MPEG-2 into a format viewable on your television.
The process of storing and streaming video streams from the hard disk allows viewers to pause
live television. The length of the pause is determined by the amount of hard drive storage. While
most DVRs use standard hard disks, some manufacturers are optimizing their products for storing
and streaming MPEG-2 video streams. For instance, Seagate Technology is using Metabyte’s MbTV
software suite to control the DVR’s disc drive recording features. This enables optimization of video
stream delivery and management of acoustic and power levels.
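The relationship between disk capacity and pause or recording time is simple arithmetic. The 4 Mbit/s figure below is an assumed standard-definition MPEG-2 bitrate for illustration, not a number from the text.

```python
# Back-of-the-envelope: how many hours of MPEG-2 video fit on a DVR disk.
def recording_hours(disk_gb, bitrate_mbps):
    disk_bits = disk_gb * 8e9              # decimal gigabytes -> bits
    seconds = disk_bits / (bitrate_mbps * 1e6)
    return seconds / 3600

# Assumed: a 40 GB drive recording 4 Mbit/s standard-definition video.
print(round(recording_hours(40, 4), 1))    # about 22.2 hours
```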
The DVR’s CPU controls the encoding and streaming of the MPEG-2 signal. Companies like IBM
are positioning products such as the PowerPC 400 series processors for the DVR market. Eventually,
CPUs will include software applications that encode and decode the MPEG-2 video stream.
Figure 3.4 shows a detailed block diagram of a digital video recorder.
Figure 3.4: Digital Video Recorder Block Diagram
Market Outlook
Currently, there are only a few million homes worldwide that own a DVR. With all the benefits of
DVR technology, why is the number so low? The answer lies in the technology and pricing structure
for the components required to manufacture DVRs. The DVR is expensive when compared to a $99
DVD player or $69 VCR. At present, prices start at about $300 and go up to $2000 for a model that
can store hundreds of hours of TV shows. And many of them incur a $10 monthly service fee on top
of the purchase price—the last straw for many consumers.
For starters, a DVR requires a large hard disk to store real-time video. Large hard disks have
been available for some time but they have been far too expensive for use in digital consumer
products. However, this situation is changing as hard disk capacity continues to increase while costs
decrease, making high-capacity disks inexpensive enough for DVRs.
Another key DVR component is the video processing subsystem. This system is responsible for
simultaneously processing an incoming digital stream from a cable or satellite network, and streaming video streams off a hard disk. Due to the complexity of this system, many manufacturers find it
difficult to design cost-effective video processing chipsets. This too has begun to change, as manufacturers are incorporating all required functionality into a single chip. Multi-purpose chips foretell a
steep price decline for DVR technology in the future.
Two other factors are also acting as a catalyst to promote these types of digital devices. The first
is rapid consumer adoption of DVD and home theater systems, indicating high consumer interest in
quality digital video. Analog VCRs are unable to offer these capabilities. Second, digital set-top box
manufacturers have started to integrate DVR capabilities into their products. These boxes are
providing an ideal platform for DVR features. Sheer market size is creating a whole new wave of
growth for DVR-enabled systems.
Informa Media forecasts that by 2010, nearly 500 million homes globally will have DVR
functionality; 62% will be digital cable customers, 18% digital terrestrial, 13% digital satellite and
7% ADSL.
Geographically, the United States is the initial focus for companies bringing DVRs to market.
This is changing, however, with the announcement by TiVo and British Sky Broadcasting Group to
bring the firm’s recording technology to the United Kingdom for the first time.
While the benefits of the DVR are very compelling, it is a new category of consumer device and
there are issues that could limit its success. For instance, if the DVR disk drive fails, the subscriber
can’t access services. This would lead to loss of revenue and customer loyalty for the operator.
To get around the high prices and recurring fees, some consumers are building their own DVRs.
Basically, creation of a DVR from a PC requires the addition of a circuit board and some software to
receive and process TV signals. The hard drive from a PC can be used for program storage. And
personalized TV listings are available for free from several Internet sites.
Summary
In its simplest form, television is the conversion of moving images and sounds into an electrical
signal. This signal is then transmitted to a receiver and reconverted into visible and audible images
for viewing. The majority of current television systems are based on analog signals. As viewers
demand higher quality levels and new services, the analog format is proving inefficient. Digital TV
offers fundamental improvements over analog services. For instance, where an analog signal degrades with distance, the digital signal remains constant and perfect as long as it can be received.
The advent of digital TV offers the general public crystal clear pictures, CD quality sound, and
access to new entertainment services.
A set-top box can be used to access digital TV. Once a relatively passive device, it is now
capable of handling traditional computing and multimedia applications. A typical set-top box is built
around traditional PC hardware technologies.
The biggest advance in video in recent years has been the DVR. Similar to a VCR, a DVR stores
programs on a hard drive rather than on recording tape. It can pause and instant-replay live TV. It can
also let the consumer search through the electronic program guides for specific shows to watch and
record. After considerable hype in the early DVR market, expectations have cooled. But analysts still
believe that DVR will be the main attraction in interactive TV.
Audio Players
Today’s music formats are very different from those of a few years ago. In the past, the vinyl record
format lasted a relatively long time. In comparison, the life of the cassette format was shorter. Today
the popular CD (compact disc) format is facing extinction in even less time. The late 1990s saw the
introduction of the MP3 (MPEG Layer III) format, which has gained prominence because of its ease-of-use and portability. MP3 music files can be stored on the hard drive of a PC or in the flash memory of a
portable player. However, the number of competing recording standards has increased as the number
of music vendors vying for market prominence has risen. Some of the emerging audio technologies
are MP3, DVD-A (DVD-Audio), and SACD (Super Audio CD).
In general, music producers favor two types of media format—physical and Internet-based.
Physical media formats include CD, MiniDisc, SACD, and DVD-Audio. Internet media formats
include MP3, Secure MP3, MPEG-2 AAC, and SDMI. These will be explained in later sections of this chapter.
The Need for Digital Audio—Strengths of the Digital Domain
The first recording techniques by Thomas Edison and others were based on a common process—the
reproduction of an audio signal from a mechanical or electrical contact with the recording media.
After nearly 100 years, analog audio has reached a mature state and nearly all of its shortcomings
have been addressed. It can offer no further improvements without being cost-prohibitive.
The very nature of the analog signal leads to its own shortcomings. In the analog domain the
playback mechanism has no means to differentiate noise and distortion from the original signal.
Further, every copy made introduces incrementally more noise because the playback/recording
mechanism must physically contact the media, further damaging it on every pass. Furthermore, total
system noise is the summation of all distortion and noise from each component in the signal path.
Finally, analog equipment has limited performance, exhibiting an uneven frequency response (which
requires extensive equalization), a limited 60-dB dynamic range, and a 30-dB channel separation that
affects stereo imaging and staging.
Digital technology has stepped in to fill the performance gap. With digital technology, noise and
distortion can be separated from the audio signal. And a digital audio signal’s quality is not a
function of the reading mechanism or the media. Performance parameters such as frequency response, linearity, and noise are only functions of the digital-to-analog converter (DAC). Digital audio
parameters include a full audio band frequency response of 5 Hz to 22 kHz and a dynamic range of
over 90 dB with flat response across the entire band.
Chapter 4
The circuitry upon which digital audio is built offers several advantages. Due to circuit integration, digital circuits do not degrade with time as analog circuits do. Further, a digital signal suffers no
degradation until distortion and noise become so great that the signal falls outside its voltage thresholds.
These thresholds are made intentionally large for this reason. The high level of circuit
integration means that digital circuitry costs far less than its analog counterpart for the same task.
Storage, distribution, and playback are also superior in digital media technology. The only theoretical limitation of digital signal accuracy is the quantity of numbers in the signal representation and
the accuracy of those numbers, which are known, controllable design parameters.
Principles of Digital Audio
Digital audio originates from analog audio signals. Some key principles of digital audio are sampling, aliasing, quantization, dither, and jitter as described in the following sections.
Sampling converts an analog signal into the digital domain. This process is dictated by the Nyquist
sampling theorem, which dictates how quickly samples of the signal must be taken to ensure an
accurate representation of an analog signal. Furthermore, it states that the sampling frequency must
be at least twice the highest frequency in the original analog signal. The sampling theorem
requires two constraints to be observed. First, the original signal must be band-limited to half the
sampling frequency by being passed through an ideal low-pass filter. Second, the output signal must
again be passed through an ideal low-pass filter to reproduce the analog signal. These constraints are
crucial and will lead to aliasing if not observed.
Aliasing is audible distortion. For a frequency that is exactly half the sampling frequency, only two
samples are generated, since this is the minimum required to represent a waveform. The modulation
creates image frequencies centered on integer multiples of Fs. These newly generated frequencies are
then imaged, or aliased, back into the audible band. For a sampling rate of 44.1 kHz, a signal of 23 kHz
will be aliased to 21.1 kHz. More precisely, the frequency will be folded back across half the
sampling frequency by the amount it exceeds half the sampling frequency—in this case, 950 Hz.
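The folding arithmetic in the example above can be checked with a short calculation. The following sketch is illustrative only; the function name is invented here, but the numbers are those from the text:

```python
def aliased_frequency(f_signal_hz, f_sample_hz):
    """Fold a frequency above the Nyquist limit back into the audible band."""
    nyquist = f_sample_hz / 2
    if f_signal_hz <= nyquist:
        return f_signal_hz  # frequencies below the Nyquist limit are unaffected
    # The signal folds back across the Nyquist frequency by the
    # amount it exceeds that frequency.
    return nyquist - (f_signal_hz - nyquist)

# The example from the text: a 23 kHz tone sampled at 44.1 kHz
print(aliased_frequency(23_000, 44_100))  # → 21100.0
```

The 950 Hz excess over the 22.05 kHz Nyquist frequency is subtracted from 22.05 kHz, giving the 21.1 kHz alias.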
Once sampling is complete, the infinitely varying voltage amplitude of the analog signal must be
assigned a discrete value. This process is called quantization. It is important to note that quantization
and sampling are complementary processes. If we sample the time axis, then we must quantize the
amplitude axis and vice versa. Unfortunately, sampling and quantization are commonly referred to as
just quantization, which is not correct. The combined process is more correctly referred to as digitization.
In a 16-bit audio format, a sinusoidally varying voltage audio signal can be represented by 2^16, or
65,536 discrete levels. Quantization limits performance by the number of bits allowed by the
quantizing system. The system designer must determine how many bits create a sufficient model of
the original signal. Because of this, quantizing is imperfect in its signal representation whereas
sampling is theoretically perfect. Hence, there is an error inherent in the quantization process
regardless of how well the rest of the system performs.
Audio Players
Dither is the process of adding low-level analog noise to a signal to randomize, or confuse, the
quantizer’s small-signal behavior. Dither specifically addresses two problems in quantization. The
first is that a reverberating, decaying signal can fall below the lower limit of the system’s resolution.
That is, an attempt to encode a signal below the LSB (Least Significant Bit) results in encoding
failure and information is lost. The second problem is that system distortion increases as a percent of
a decreasing input signal. Dither converts this quantization error into benign broadband noise, relying
on a special ability of the human ear: the ear can detect a signal masked by broadband noise.
In some cases, the ear can detect a midrange signal buried as much as 10 to 12 dB below the broadband noise level.
Jitter is defined as time instability. It occurs in both analog-to-digital and digital-to-analog conversion. For example, jitter can occur in a compact disc player when samples are being read off the disc.
Here, readings are controlled by crystal oscillator pulses. Reading time may vary and can induce
noise and distortion if:
The system clock pulses inaccurately (unlikely)
There is a glitch in the digital hardware
There is noise on a signal control line
To combat the fear of jitter, many questionable products have been introduced. These include
disc stabilizer rings to reduce rotational variations and highly damped rubber feet for players.
However, a system can be designed to read samples off the disc into a RAM buffer. As the buffer
becomes full, a local crystal oscillator can then clock-out the samples in a reliable manner, independent of the transport and reading mechanisms. This process—referred to as time-base correction—is
implemented in quality products and eliminates the jitter problem.
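The time-base correction idea can be sketched as a small simulation. The time units and jitter magnitude below are arbitrary, invented for illustration; the point is only that output timing comes from the local clock, not from the jittered transport:

```python
import random
from collections import deque

random.seed(1)

# Samples arrive from the disc transport with jittered timing
# (arbitrary time units; the jitter magnitude is illustrative).
buffer = deque()
for i in range(8):
    arrival_time = i + random.uniform(-0.2, 0.2)
    buffer.append((arrival_time, i))

# A local crystal oscillator clocks samples out of the buffer at a
# fixed interval, so output timing no longer depends on the transport.
output = []
clock = 0.0
while buffer:
    _, sample = buffer.popleft()
    output.append((clock, sample))
    clock += 1.0

intervals = [output[k + 1][0] - output[k][0] for k in range(len(output) - 1)]
print(intervals)  # every output interval is exactly 1.0
```

The buffer absorbs the irregular arrival times; as long as it never runs empty or overflows, the reclocked output is jitter-free.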
Digital Physical Media Formats
Physical media formats include vinyl records, cassettes, and CDs. Today CD technology dominates
the industry. In 2001 alone over 1 billion prerecorded CDs were sold in the U.S., accounting for 81%
of prerecorded music sales. It took 10 years for the CD format to overtake the cassette in terms of
unit shipments. Popular digital physical music media formats include CD, MD, DVD-A, and SACD
as discussed in the following sections.
Compact Disc
The compact disc player has become hugely popular, with tens of millions sold to date. CD players
have replaced cassette players in the home and car and they have also replaced floppy drives in PCs.
Whether they are used to hold music, data, photos, or computer software, they have become the
standard medium for reliable distribution of information.
Understanding the CD
A CD can store up to 74 minutes of music or 783,216,000 bytes of data. It is a piece of injection-molded, clear polycarbonate plastic about 4/100 of an inch (1.2 mm) thick. During manufacturing,
the plastic is impressed with microscopic bumps arranged as a single, continuous spiral track. Once
the piece of polycarbonate is formed, a thin, reflective aluminum layer is sputtered onto the disc,
covering the bumps. Then a thin acrylic layer is sprayed over the aluminum to protect it.
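The 783,216,000-byte figure follows directly from the CD audio parameters (74 minutes of 16-bit stereo sampled at 44.1 kHz). A quick check:

```python
# CD audio capacity: 74 minutes of 16-bit (2-byte) stereo at 44.1 kHz
seconds = 74 * 60
samples_per_channel = seconds * 44_100
total_bytes = samples_per_channel * 2 * 2  # 2 channels, 2 bytes per sample
print(total_bytes)  # → 783216000
```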
Cross-section of a CD
A CD’s spiral data track circles from the inside of the disc to the outside. Since the track starts at the
center, the CD can be smaller than 4.8 inches (12 cm). In fact, there are now plastic baseball cards
and business cards that can be read in a CD player. CD business cards can hold about 2 MB of data
before the size and shape of the card cuts off the spiral.
The data track is approximately 0.5 microns wide with a 1.6 micron track separation. The
elongated bumps that make up the track are 0.5 microns wide, a minimum of 0.83 microns long, and
125 nm high. They appear as pits on the aluminum side and as bumps on the side read by the laser.
Laid end-to-end, the bump spiral would be 0.5 microns wide and almost 3.5 miles long.
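The length of the spiral can be estimated by dividing the area of the program band by the track pitch. The 25 mm and 58 mm radii below are approximate figures for the program area of a standard disc, assumed here for illustration:

```python
import math

# Approximate the spiral as concentric rings: length ≈ track area / pitch
r_inner = 0.025   # inner radius of the program area, meters (assumed)
r_outer = 0.058   # outer radius of the program area, meters (assumed)
pitch = 1.6e-6    # track pitch, meters (1.6 microns)

length_m = math.pi * (r_outer**2 - r_inner**2) / pitch
print(length_m / 1609.344)  # length in miles, roughly 3.3
```

The estimate of roughly 3.3 miles agrees with the "almost 3.5 miles" figure quoted above.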
CD Player
A CD drive consists of three fundamental components:
1. A drive motor to spin the disc. This motor is precisely controlled to rotate between
200 and 500 rpm depending on the track being read.
2. A laser and lens system to read the bumps.
3. A tracking mechanism to move the laser assembly at micron resolutions so the beam
follows the spiral track.
The CD player formats data into blocks and sends them to the DAC (for an audio CD) or to the
computer (for a CD-ROM drive). As the CD player focuses the laser on the track of bumps, the laser
beam passes through the polycarbonate layer, reflects off the aluminum layer, and hits an optoelectronic sensor that detects changes in light. The bumps reflect light differently than the “lands” (the
rest of the aluminum layer) and the sensor detects that change in reflectivity. To read the bits that
make up the bytes, the drive interprets the changes in reflectivity.
The CD’s tracking system keeps the laser beam centered on the data track. It continually moves
the laser outward as the CD spins. As the laser moves outward from the center of the disc, the bumps
move faster past the laser. This is because the linear, or tangential, speed of the bumps is equal to the
radius times the speed at which the disc is revolving. As the laser moves outward, the spindle motor
slows the CD so the bumps travel past the laser at a constant speed and the data is read at a constant rate.
Because the laser tracks the data spiral using the bumps, there cannot be extended gaps where
there are no bumps in the track. To solve this problem, data is encoded using EFM (eight-fourteen
modulation). In EFM, 8-bit bytes are converted to 14-bit words, and the modulation guarantees that some
of those bits will be 1s.
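More precisely, EFM code words obey a run-length rule: every run of 0s between two 1s is at least 2 and at most 10 bits long, which keeps pit lengths within the readable range. The checker below is an illustrative sketch (the sample code words are made up, and the merging bits that govern runs at word boundaries are ignored):

```python
def efm_runs_ok(word):
    """Check the EFM run-length rule inside a 14-bit code word: every run
    of 0s between two 1s must be at least 2 and at most 10 bits long.
    (Boundary runs are handled by separate merging bits, ignored here.)"""
    ones = [i for i, bit in enumerate(word) if bit == "1"]
    gaps = [b - a - 1 for a, b in zip(ones, ones[1:])]
    return all(2 <= g <= 10 for g in gaps)

print(efm_runs_ok("00100100000000"))  # → True: two 0s separate the 1s
print(efm_runs_ok("00110000000000"))  # → False: adjacent 1s are not allowed
```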
Because the laser moves between songs, data must be encoded into the music to tell the drive its
location on the disc. This is done with subcode data. Along with the song title, subcode data encodes
the absolute and relative position of the laser in the track.
The laser may misread a bump, so there must be error-correcting codes to handle single-bit
errors. Extra data bits are added to allow the drive to detect single-bit errors and correct them. If a
scratch on the CD causes an entire packet of bytes to be misread (known as burst error), the drive
must be able to recover. By interleaving the data on the disc so it is stored non-sequentially around
one of the disc’s circuits, the problem is solved. The drive reads data one revolution at a time and uninterleaves the data to play it. If a few bytes are misread, a little fuzz occurs during playback.
When data instead of music is stored on a CD, any data error can be catastrophic. Here, additional error correction codes are used to ensure data integrity.
CD Specifications
Sony, Philips, and Polygram jointly developed the specifications for the CD and CD players. Table 4.1
contains a summary of those specifications.
Table 4.1: Specifications for the Compact Disc System
Playing time:               74 minutes, 33 seconds maximum
Rotation:                   Counter-clockwise when viewed from readout surface
Rotational speed:           1.2 – 1.4 m/sec (constant linear velocity)
Track pitch:                1.6 µm
Diameter:                   120 mm
Thickness:                  1.2 mm
Center hole diameter:       15 mm
Recording area:             46 mm – 117 mm
Signal area:                50 mm – 116 mm
Minimum pit length:         0.833 µm (1.2 m/sec) to 0.972 µm (1.4 m/sec)
Maximum pit length:         3.05 µm (1.2 m/sec) to 3.56 µm (1.4 m/sec)
Pit depth:                  ~0.11 µm
Pit width:                  ~0.5 µm
Material:                   Any acceptable medium with a refraction index of 1.55
Standard wavelength:        780 nm (7,800 Å)
Focal depth:                ±2 µm
Number of channels:         2 channels (4-channel recording possible)
Quantization:               16-bit linear
Quantizing timing:          Concurrent for all channels
Sampling frequency:         44.1 kHz
Channel bit rate:           4.3218 Mb/s
Data bit rate:              2.0338 Mb/s
Data-to-channel bit ratio:  8:17
Error correction code:      Cross Interleave Reed-Solomon Code (with 25% redundancy)
Modulation system:          Eight-to-fourteen Modulation (EFM)
MiniDisc™ (MD) technology was pioneered and launched by Sony in 1992. It is a small-format
optical storage medium with read/write capabilities. It is positioned as a new consumer recording
format with a smaller size than the CD and better fidelity than the analog cassette. MiniDisc was never meant to compete with
the CD; it was designed as a replacement for the cassette tape. Its coding system is called ATRAC
(Adaptive Transform Acoustic Coding for MiniDisc).
Based on psychoacoustic principles, the coder in a MiniDisc divides the input signal into three
sub-bands. It then makes transformations into the frequency domain using a variable block length.
The transform coefficients are grouped into non-uniform bands according to the human auditory
system. They are then quantized on the basis of dynamics and masking characteristics. While
maintaining the original 16-bit, 44.1-kHz signal, the final coded signal is compressed by an approximate ratio of 1:5.
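The effect of the roughly 5:1 compression can be seen by working from the CD bit rate. This is a quick arithmetic check, not a statement of the exact ATRAC rate:

```python
# Uncompressed CD bit rate for 16-bit stereo at 44.1 kHz,
# then the approximate ATRAC-coded rate at the ~5:1 ratio above
cd_rate = 44_100 * 16 * 2      # bits per second
atrac_rate = cd_rate / 5

print(cd_rate)                    # → 1411200
print(round(atrac_rate / 1000))   # → 282  (kilobits per second, approximate)
```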
In 1996, interest in the MiniDisc from the consumer market increased. While MD technology
has been much slower to take off in the U.S., it enjoys a strong presence in Japan because:
Most mini-component systems sold in Japan include an MD drive
Portable MD players are offered by several manufacturers in a variety of designs
Sony has advertised and promoted this technology through heavy advertisements and low
prices for players and media.
In 2003, MiniDisc player shipments for the home, portable, and automobile markets were
expected to exceed 3.7 million units in the U.S.
The first popular optical storage medium was the audio CD. Since then the fields of digital audio and
digital data have been intertwined in a symbiotic relationship. One industry makes use of the other’s
technology to their mutual benefit. It took several years for the computer industry to realize that the
CD was the perfect medium for storing and distributing digital data. In fact, it was not until well into
the 1990s that CD-ROMs became standard pieces of equipment on PCs. With the latest developments
in optical storage, the record industry is now looking to use this technology to re-issue album
collections, for example. The quest for higher fidelity CDs has spawned a number of standards that
are battling with the CD to become the next accepted standard. Among these are DVD-A, SACD, and
DAD (digital audio disc).
DVD-A is a new DVD (digital versatile disc) format providing multi-channel audio in a lossless
format. It offers high-quality two-channel or 5.1 channel surround sound. The winning technology
could produce discs with 24-bit resolution at a 96-kHz sampling rate, as opposed to the current 16-bit/44.1-kHz format.
When DVD was released in 1996, it did not include a DVD-Audio format. Following efforts by
the DVD Forum to collaborate with key music industry players, a draft standard was released in early
1998. The DVD-Audio 1.0 specification (minus copy protection) was subsequently released in 1999.
DVD-A is positioned as a replacement for the CD and brings new capabilities to audio by providing
the opportunity for additional content such as video and lyrics. A DVD-Audio disc looks similar to a
normal CD but it can deliver much better sound quality. It allows sampling rates of 44.1, 88.2, 176.4,
48, 96, and 192 kHz with a resolution of 16, 20, or 24 bits. While the two best sampling rates can
only be applied to a stereo signal, the others can be used for 5.1 surround sound channels with
quadrupled capacity. Even though a DVD-Audio disc has a storage capacity of up to 5 GB, the
original signal takes even more space. To account for this, DVD-Audio uses a type of lossless
packing called Meridian Lossless Packing (MLP) applied to the PCM (pulse code modulation) bit stream.
DVD-Audio includes the option of PCM digital audio with sampling
sizes and rates higher than those of audio CD. Alternatively, audio for most movies is stored as
discrete, multi-channel surround sound. It uses Dolby Digital or Digital Theatre Systems (DTS)
Digital Surround audio compression similar to the sound formats used in theatres. DTS is an audio
encoding format similar to Dolby Digital that requires a decoder in the player or in an external
receiver. It accommodates channels for 5.1 speakers—a subwoofer plus five other speakers (front
left, front center, front right, rear left, and rear right). Some say that DTS sounds better than Dolby
because of its lower compression level. As with video, audio quality depends on how well the
processing and encoding was done. In spite of compression, Dolby Digital and DTS can be close to
or better than CD quality.
Like DVD-Video titles, DVD-Audio discs can be expected to carry video. They will also be able to
carry high-quality audio files and include limited interactivity (Internet, PCs). A dual-layer
DVD-Audio disc will hold at least two hours of full surround-sound audio and four hours of
stereo audio. Single-layer capacity will be about half these times.
As part of the general DVD spec, DVD-Audio is the audio application format of new DVD-Audio/Video players. It includes:
Pulse code modulation
High-resolution stereo (2-channel) audio
Multi-channel audio
Lossless data compression
Extra materials, which include still images, lyrics, etc.
Pulse Code Modulation (PCM)
Digitizing is a process that represents an analog audio signal as a stream of digital 1’s and 0’s. PCM
is the most common method of digitizing an analog signal. It samples an analog signal at regular
intervals and encodes the amplitude value of the signal in a digital word. The analog signal is then
represented by a stream of digital words.
According to digital sampling theory, samples must be taken at a rate at least twice as fast as the
frequency of the analog signal to be reproduced. We can hear sounds with frequencies from 20 Hz to
20,000 Hz (20 kHz), so sampling must be performed at least at 40,000 Hz (or 40,000 times per
second) to reproduce frequencies up to 20 kHz. CD format has a sampling frequency of 44.1 kHz,
which is slightly better than twice the highest frequency that we can hear. While this method is a
minimum requirement, it can be argued mathematically that twice is not fast enough. Hence, higher
sampling frequencies in PCM offer better accuracy in reproducing high-frequency audio information.
By this argument, the CD format's sampling frequency is barely adequate for reproducing the
higher frequencies in the range of human hearing.
Another critical parameter in PCM is word length. Each sample of the analog signal is characterized by its magnitude. The magnitude is represented by the voltage value in an analog signal and by
a data word in a digital signal. A data word is a series of bits, each having a value of 1 or 0. The
longer the data word, the wider the range (and the finer the gradations of range values) of analog
voltages that can be represented. Hence, longer word length enables a wider dynamic range of
sounds and finer sound nuances to be recorded. The CD format has a word length of 16 bits, which is
enough to reproduce sounds with about 96 dB (decibels) in dynamic range.
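The dynamic-range figures quoted for digital audio follow from a standard formula: an n-bit word gives a theoretical range of about 20·log10(2^n) decibels. A quick check against the figures in this chapter:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an n-bit PCM word: 20*log10(2^n)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16)))  # → 96   (the CD format)
print(round(dynamic_range_db(24)))  # → 144  (DVD-Audio's longest word length)
```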
While most people think that the 44.1-kHz sampling rate and the 16-bit word length of audio
CDs are adequate, audiophiles disagree. High fidelity music fans say that audio CD sounds “cold”
and exhibits occasional ringing at higher frequencies. That is why consumer electronics manufacturers have designed the DVD-Audio format. Even the average listener using DVD-Audio format on a
properly calibrated, high-quality system can hear the differences and the improvement over CD format.
High-Resolution Stereo (2-Channel) Audio
DVD-Audio offers much higher resolution PCM. DVD-Audio supports sampling rates up to 192 kHz
(or more than four times the sampling rate of audio CD). It also supports up to a 24-bit word length.
The higher sampling rate means better reproduction of the higher frequencies. The 192-kHz sampling rate is over nine times the highest frequency of the human hearing range. One can hear the
quality improvement on a high-quality, well-balanced music system. Though DVD-Audio supports
up to 192-kHz sampling, not all audio program material has to be recorded using the highest rate.
Other sampling rates supported are 44.1, 48, 88.2, and 96 kHz.
With a word length up to 24 bits, a theoretical dynamic range of 144 dB can be achieved.
However, it is not possible to achieve such high dynamic ranges yet, even with the best components.
The limiting factor is the noise level inherent in the electronics. The best signal-to-noise ratio that
can be achieved in today’s state-of-the-art components is about 120 dB. Hence, a 24-bit word length
should be more than enough for the foreseeable future. Though DVD-A can support word lengths of
16 and 20 bits, high-resolution stereo usually uses a word length of 24 bits.
Multi-Channel Audio
Another characteristic of DVD-Audio format is its capability for multichannel discrete audio
reproduction. Up to six full-range, independent audio channels can be recorded. This allows us not
just to hear the music, but to experience the performance as if it were live in our living room. No
other stereo music programs can approach this feeling.
Usually, the sixth channel serves as the low frequency effects (LFE) channel to drive the
subwoofer. But it is also a full frequency channel. It can serve as a center surround channel (placed
behind the listener) or as an overhead channel (placed above the listener) for added dimensionality to
the soundstage. The application and placement of the six audio channels are limited only by the
imagination of the artist and the recording engineer.
Note that multichannel DVD-Audio does not always mean six channels or 5.1 channels. Sometimes it uses only four channels (left front, right front, left surround, and right surround), or three
channels (left front, center, right front). Multichannel DVD-Audio can use up to 192 kHz and up to a
24-bit word length to reproduce music. Practically speaking, it usually uses 96 kHz sampling due to
the capacity limitation of a DVD-Audio disc. (Six-channel audio uses three times the data capacity
of a two-channel stereo when both use the same sampling rate and word length.) DVD-Audio uses
data compression to accommodate the high-resolution stereo and/or multi-channel digital information.
Lossless Data Compression
There are two types of compression:
Lossy – data is lost at the compression stage, such as MPEG-2, Dolby Digital, and DTS.
Lossless – data is preserved bit-for-bit through the compression and decoding processes.
To efficiently store massive quantities of digital audio information, the DVD Forum has approved the use of Meridian’s Lossless Packing (MLP) algorithm as part of the DVD-Audio format.
Hence, no digital information is lost in the encoding and decoding process, and the original digital
bitstream is re-created bit-for-bit from the decoder.
Extra Materials
The DVD-Audio format supports tables of contents, lyrics, liner notes, and still pictures. Additionally, many DVD-Audio titles are actually a combination of DVD-Audio and DVD-Video—called
DVD-Audio/Video. Record labels can use the DVD-Video portion to include artist interviews, music
videos, and other bonus video programming. Similarly, DVD-Audio discs may actually include
DVD-ROM content that can be used interactively on a PC with a DVD-ROM drive.
Backward Compatibility with DVD-Video Players
A DVD-Audio player is required to hear a high-resolution stereo (2-channel) or a multichannel PCM
program on the DVD-Audio disc. The Video portion on an Audio/Video disc contains a multichannel
soundtrack using Dolby Digital and/or optionally DTS surround sound which can be played back by
existing DVD-Video players. The DVD-Video player looks for the DVD-Video portion of the disc
and plays the Dolby Digital or DTS soundtracks. Dolby Digital and DTS soundtracks use lossy
compression and do not feature high-resolution stereo and multichannel information. However,
consumers are pleased with how the music sounds in DVD-Video audio formats, particularly with
DVD-Audio Market
The DVD-Audio market will grow quickly as manufacturers scramble to add features to keep their
DVD offerings at mid-range prices. By 2004, over 27 million DVD-Audio/Video players will ship.
Most manufacturers of mid-range players will offer DVD-Audio/Video players, not DVD Video-only
players. Very few DVD-Audio-only players will ship during this period since these products will be
popular only among audiophiles and in the automotive sound arena. To date, not many music titles
have been released to DVD-Audio since the format’s launch in the summer of 2000. Record labels
have just recently entered the DVD-Audio market with some releases.
DVD-Audio player shipments in 2003 for the home, portable, and automobile markets
are expected to be 5.7 million units in the U.S. With competition from the SACD camp, consumers
will hesitate to buy into the DVD-Audio format now. We might see universal players that
can play both DVD-Video and DVD-Audio discs, but these should not be expected for some
time. Some affordable universal players promise the ability to play SACD, DVD-Audio, DVD-Video,
CD, CD-R, and CD-RW. These players will offer high-resolution stereo and multichannel music with
multiple formats.
Super Audio Compact Disc (SACD)
Super Audio CD is a new format that promises high-resolution audio in either two-channel stereo or
multi-channel audio. It represents an interesting and compelling alternative to the DVD-Audio
format. But unlike DVD-Audio, it does not use PCM. Instead, SACD technology is based on Direct
Stream Digital (DSD), which its proponents claim is far superior to PCM technology.
SACD is said to offer unsurpassed frequency response, sonic transparency, and more analog-like
sound reproduction. As much promise as SACD holds, the technology is still new and has not yet
gained mainstream status. Very few stereo and multichannel titles are available in SACD. It is fully
compatible with the CD format and today’s CDs can be played in next-generation SACD players.
While DVD-Audio is positioned as a mass-market option, SACD is considered an audiophile format.
SACD technology was jointly developed by Sony and Philips to compete with DVD-Audio. The
format is the same size as CD and DVD media, but offers potential for higher sampling rates.
Using DSD, the bit stream of the SACD system is recorded directly to the disc, without converting to PCM. Unlike PCM, DSD technology uses a 1-bit sample at very high sampling rates (up to
2,822,400 samples per second), which is 64 times faster than the audio CD standard. Using noise
shaping, the final signal has a bandwidth of more than 100 kHz and a dynamic range of 120 dB.
Since this technique is much more efficient than PCM, it will allow for up to 6 independent, full
bandwidth channels with lossless packing. The implication is that DSD will be able to better reproduce the original analog audio signal.
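The DSD sampling rate quoted above is exactly 64 times the CD rate of 44.1 kHz, which is easily verified:

```python
# The DSD sampling rate is 64 times the audio CD rate of 44.1 kHz
cd_rate_hz = 44_100
dsd_rate_hz = 64 * cd_rate_hz
print(dsd_rate_hz)  # → 2822400 samples per second
```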
The Disc
The SACD disc looks like an audio CD disc and resembles a DVD in physical characteristics and
data capacity. A single-layer SACD disc has a single, high-density layer for high-resolution stereo
and multi-channel DSD recording. A dual-layer SACD disc has two high-density layers for longer
play times for stereo or multi-channel DSD recordings. There is also a hybrid SACD disc that
features a CD layer and a high-density layer for stereo and multi-channel DSD recording. The
hybrid SACD looks like an ordinary audio CD to existing CD players and can be played in any CD
player. However, the CD player would only reproduce CD-quality stereo sound, not the high-resolution, multichannel DSD tracks of the high-density layer.
Backward Compatibility
When present, the hybrid CD layer makes the SACD disc CD player-compatible. If you buy a hybrid
SACD title, you could play it in any CD player or in your PC’s CD-ROM drive. However, this cannot
be done for DVD-Audio albums. Conversely, SACD discs cannot carry video content as DVD-Audio/Video discs can. Nevertheless, the SACD format is a strong contender for the new high-resolution
audio format.
Not all SACD titles or discs will be pressed with the hybrid CD layer option since they are
significantly more expensive to make. Those that feature the hybrid CD layer construction are clearly
marked as “Hybrid SACD” or “compatible with all CD players.” All SACD players can play regular
audio CDs so current CD collections will be SACD-compatible.
SACD Players
The first stereo-only Super Audio CD players made their debut in 1999. It was not until late 2000
that multichannel SACD discs and players appeared. Initially, SACD technology was marketed to
audiophiles since the prices of the first SACD players were quite high. Now, entry-level, multichannel SACD players can be purchased for as low as $250.
A stereo-capable audio system is all that’s needed for stereo DSD playback. For multichannel
DSD playback, a multichannel-capable SACD player is required. In addition, a 5.1-channel receiver
or preamplifier with 5.1-channel analog audio inputs and an ‘analog direct’ mode is necessary. To
prevent signal degradation, the analog direct mode allows the analog audio signals to pass without
additional analog-to-digital (A/D) and digital-to-analog (D/A) conversions in the receiver or preamplifier.
In 2003, Super Audio CD player shipments for the home, portable and automobile markets are
expected to exceed 1 million units in the U.S.
Internet Audio Formats
An Internet audio player is a device or program that plays music compressed using an audio compression algorithm. The best-known compression algorithm is MP3. Internet audio formats are
gaining popularity because they remove the middlemen and lead to higher revenues and market
share. Hence, major music labels are looking to leverage this format. Internet audio players are
available as Macintosh and PC applications and as dedicated hardware players including in-dash
automotive players.
The most popular player is the portable MP3 player, which resembles a Walkman™ or a pager.
Portable MP3 players store music files in flash memory. They typically come with 32 or 64 MB of
flash memory and can be expanded to 64 MB and beyond via flash cards. Music is transferred
from a PC to the player using a parallel or USB cable.
Analysts predict tremendous growth in the market for portable Internet audio players. They
expect online music sales to rise to $1.65 billion, or nearly 9% of the total market and over 10
million users by 2005. This compares to a mere $88 million in annual domestic online retail music
sales in 1999, or less than 1% of the total music market.
The Internet audio market is seeing increased support from consumer electronics vendors
because of the growing popularity of digital downloads. In spite of the growth forecasts, the following issues will impact product definition and limit market growth rate:
Music file format – While MP3 is the dominant downloading format, other formats are emerging.
Copyright protection – New formats were designed to provide copyright protection for the
music industry.
Feature set supported by players –
Internet audio formats include MP3, Secure MP3, MPEG-2 AAC, Liquid Audio, Windows Media
Player, a2b, EPAC, TwinVQ, MPEG-4, Qdesign Music Codec, and SDMI. The following sections
provide details on these formats.
The Moving Picture Experts Group (MPEG) was set up under the International Organization for
Standardization (ISO) to provide sound and video compression standards and the linkage between the two.
The audio part of MPEG-1 describes three layers of compression:
1. Layer I – 1:4
2. Layer II – 1:6 to 1:8
3. Layer III – 1:10 to 1:12
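The practical meaning of these ratios can be sketched by applying them to the uncompressed CD bit rate. The midpoint values chosen for the Layer II and Layer III ranges are assumptions for illustration:

```python
# Approximate coded bit rates for 16-bit stereo at 44.1 kHz
# (1,411,200 b/s uncompressed) at each layer's compression ratio.
source_rate = 44_100 * 16 * 2

# Layer I ratio is 1:4; for Layers II and III we take midpoints of the
# quoted ranges (1:6–1:8 and 1:10–1:12) purely for illustration.
ratios = {"Layer I": 4, "Layer II": 7, "Layer III": 11}

for layer, ratio in ratios.items():
    print(layer, round(source_rate / ratio / 1000), "kb/s")
```

Layer III at roughly 1:11 lands near 128 kb/s, which matches the bit rate most commonly used for MP3 files.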
These layers are hierarchically compatible. Layer III decoders can play all three layers, while
Layer II decoders can play Layer I and Layer II bitstreams. A Layer I decoder plays only Layer I
bitstreams. MPEG specifies the bitstream format and the decoder for each layer, but not the encoder.
This was done to give more freedom to the implementers and prevent participating companies from
having to reveal their business strategies. Nevertheless, the MPEG group has published some C-language source code for explanatory purposes.
All three layers are built on the same perceptual noise-shaping model and use the same
analysis filter bank. To ensure compatibility, all compressed packets have the same structure: a
header describing the compression, followed by the sound signal. Hence, every audio frame can be
decoded separately, since each carries all the information necessary for decoding. Unfortunately, this increases the file size. The standard also describes the ability to insert program-related information into the coded
packets. With this feature, items such as multimedia applications can be linked in.
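Because every frame carries this self-describing header, a decoder can recover the stream parameters from any frame boundary. As an illustration, the following sketch (a simplified example, not a complete parser) decodes the sync word, layer, and sampling rate from the 32-bit MPEG-1 audio frame header:

```python
def parse_mpeg1_header(header_bytes):
    """Decode the main fields of a 32-bit MPEG-1 audio frame header.

    Illustrative sketch only: a real decoder also validates the bitrate
    index against the layer-specific tables and handles MPEG-2 variants.
    """
    h = int.from_bytes(header_bytes, "big")
    if (h >> 21) & 0x7FF != 0x7FF:            # 11-bit sync word, all ones
        raise ValueError("no sync word at this offset")
    version = (h >> 19) & 0b11                # 0b11 = MPEG-1
    layer_bits = (h >> 17) & 0b11             # 0b01 = Layer III ("MP3")
    layer = {0b01: 3, 0b10: 2, 0b11: 1}[layer_bits]
    sampling = {0b00: 44100, 0b01: 48000, 0b10: 32000}[(h >> 10) & 0b11]
    return {"mpeg1": version == 0b11, "layer": layer, "sampling_rate": sampling}
```

For example, a frame beginning with the common byte pair 0xFF 0xFA marks an MPEG-1 Layer III stream sampled at 44.1 kHz.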
MP3 (MPEG-1 Layer III Audio Coding)
The three layers of MPEG-1 have different applications depending on the bit rate and desired
compression ratio. The most popular has been Layer III, better known as MP3. The name arose from the file extension used on the Windows platform: since a typical extension consists
of three letters, “MPEG-1 Layer III” became “MP3”. Unfortunately, the name has caused
confusion, since people tend to mix up the different MPEG standards with the corresponding layers.
And, ironically, an MPEG-3 specification does not exist. Table 4.2 shows typical Layer III
performance at different bit rates.
Table 4.2: Typical Performance Data of MPEG-1 Layer III

Sound Quality           Bandwidth    Bit Rate
Telephone               2.5 kHz      8 Kb/s
Better than shortwave   4.5 kHz      16 Kb/s
AM radio                7.5 kHz      32 Kb/s
FM radio                11 kHz       56-64 Kb/s
Near-CD                 15 kHz       96 Kb/s
CD                      >15 kHz      112-128 Kb/s
Layer III enhancements over Layers I and II include:
Nonuniform quantization
Usage of a bit reservoir
Huffman entropy coding
Noise allocation instead of bit allocation
These enhancements are powerful but require more complex encoders than the other layers do.
Today, however, even the cheapest computer easily manages to process such files. Each layer supports decoding audio sampled at 48, 44.1, or 32 kHz. MPEG-2 extends this family of codecs by adding support
for 24, 22.05, and 16 kHz sampling rates, as well as additional channels for surround sound and
multilingual applications.
MP3 supports audio sampling rates ranging from 16 to 48 kHz and up to five main audio
channels. It is a variable-bit-rate codec: users can choose the bit rate at which audio is encoded.
Higher bit rates mean better fidelity to the original audio, but result in less compression. The
better the quality, the larger the resulting files, and vice versa. For most listeners, MP3 files encoded
at reasonable rates (96 to 128 Kb/s) are indistinguishable from CDs.
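The quality/size trade-off is simple arithmetic. The helper below (an illustrative sketch, not part of any codec library) computes the size of a constant-bit-rate MP3 file and its compression ratio relative to uncompressed CD audio (44,100 Hz x 16 bits x 2 channels, about 1,411 Kb/s):

```python
def mp3_stats(bitrate_kbps, minutes):
    """Return (file size in MB, compression ratio vs. uncompressed CD audio)."""
    cd_rate_kbps = 44100 * 16 * 2 / 1000          # ~1411.2 Kb/s for CD audio
    size_mb = bitrate_kbps * 1000 / 8 * minutes * 60 / 1_000_000
    return size_mb, cd_rate_kbps / bitrate_kbps
```

A four-minute song encoded at 128 Kb/s thus occupies about 3.84 MB, roughly an 11:1 reduction, consistent with the 1:10 to 1:12 Layer III ratios quoted earlier.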
Audio Players
Secure MP3
Note that MP3 has no built-in security and no safeguards or security policies to govern its use.
However, secure MP3 efforts aim to add security to the format. Some of the security approaches are:
Packing the MP3 files in a secure container that must be opened with a key. The key can be
associated with a particular system or sent to the user separately. However, issues such as
key tracking and lack of consumer flexibility remain problematic.
Using encryption technology combined with the encoded file. The key could be the drive
ID. Information carried with the encrypted file can enforce copyright protection, support
new business models, and restrict the specific uses of songs.
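The container approach can be sketched as follows. This is a toy illustration only: the SMP3 tag, the pack/unpack helpers, and the SHA-256-derived keystream are invented stand-ins for a real cipher and container format, and provide no actual security.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from the key (a stand-in for a real cipher).
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def pack(mp3_data: bytes, key: bytes) -> bytes:
    # "Container": a magic tag followed by the keystream-XORed payload.
    body = bytes(a ^ b for a, b in zip(mp3_data, _keystream(key, len(mp3_data))))
    return b"SMP3" + body

def unpack(container: bytes, key: bytes) -> bytes:
    # Opening the container with the wrong key yields unusable data.
    assert container[:4] == b"SMP3", "not a secure container"
    body = container[4:]
    return bytes(a ^ b for a, b in zip(body, _keystream(key, len(body))))
```

The key here could be tied to a particular system (a drive ID, for instance), which is exactly the key-tracking and flexibility problem noted above: the same container cannot then be opened on a second device.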
Music label companies are not MP3 fans due to the free distribution of MP3 music files by
ventures such as Napster and Gnutella. Hence, secure MP3 may offer a solution for distributing
music over the Internet securely with revenue protection.
MPEG-2 BC
MPEG-2 BC became an official standard in 1995. Carrying the tag BC (Backward-Compatible), it
was never intended to replace the schemes presented in MPEG-1. Rather, it was intended to supply
new features. It supports sampling frequencies of 16, 22.05, and 24 kHz, at bit rates
from 32 to 256 Kb/s for Layer I and from 8 to 160 Kb/s for Layers II and III. For the coding process, it adds more tables to the MPEG-1 audio encoder.
MPEG-2 is also called MPEG-2 multichannel because it adds multichannel
sound. MPEG-1 supports only mono and stereo signals, and it was necessary to design support for 5.1
surround sound for coding movies. This comprises five full-bandwidth channels and one low-frequency
enhancement (LFE) channel operating from 8 Hz to 100 Hz. For backward compatibility, all six channels are mixed down to a stereo signal, so an MPEG-1 decoder can still reproduce a full
stereo image.
MPEG-2 AAC (Advanced Audio Coding)
Dolby Laboratories argued that MPEG-2 surround sound was not adequate as a new consumer
format and that it was limited by backward-compatibility issues. So MPEG started designing a
new audio compression standard, originally expected to become MPEG-3. Since the video part of the new
standard could easily be implemented in MPEG-2, the audio part was named MPEG-2 AAC. Issued
in 1997, the standard features Advanced Audio Coding (AAC), which represents sound differently
than PCM does. Note that AAC was originally called NBC (Non-Backward-Compatible) because it is not
compatible with MPEG-1 audio formats.
AAC is a state-of-the-art audio compression technology. It is more efficient than MP3. Formal
listening tests have demonstrated its ability to provide slightly better audio quality. The essential
benefits of AAC are:
A wider range of sampling rates (from 8 kHz to 96 kHz)
Enhanced multichannel support (up to 48 channels)
Better quality
Three different complexity profiles
Instead of the filter bank used by previous standards, AAC uses the Modified Discrete Cosine Transform
(MDCT). Leveraging Temporal Noise Shaping (TNS), this method shapes the distribution of quantization
noise in time by prediction in the frequency domain. Together with a window length of 2048 samples per
transform, it yields compression that is approximately 30% more efficient than that of MPEG-2 BC.
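A minimal pure-Python sketch of the MDCT and its inverse shows the transform at work. Production AAC codecs use 2048-sample windows and fast FFT-based implementations; the sine window used here is one common choice that satisfies the Princen-Bradley condition required for time-domain alias cancellation.

```python
import math

def mdct(x):
    """MDCT: a block of 2N windowed samples -> N frequency coefficients."""
    N = len(x) // 2
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N))
            for k in range(N)]

def imdct(X):
    """Inverse MDCT: N coefficients -> 2N time-aliased samples."""
    N = len(X)
    return [2.0 / N * sum(X[k] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                          for k in range(N))
            for n in range(2 * N)]

def sine_window(length):
    """Sine window; satisfies w[n]^2 + w[n+N]^2 == 1 (Princen-Bradley)."""
    return [math.sin(math.pi / length * (n + 0.5)) for n in range(length)]
```

Overlapping adjacent blocks by 50% and applying the window both before the MDCT and after the IMDCT cancels the aliasing exactly; this 50% overlap is also why a 2048-sample transform produces only 1024 coefficients per block.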
Since MPEG-2 AAC was never designed to be backward-compatible, it avoids the MPEG-2 BC
limitations when processing surround sound. In addition, MPEG changed the highly criticized
transport syntax, leaving it to the encoder to decide whether to send a header with every
audio frame. Hence, AAC provides a much better compression ratio than former standards. It
is appropriate in situations where backward compatibility is not required or can be accomplished
with simulcast. Formal listening tests have shown that MPEG-2 AAC provides slightly better audio
quality at 320 Kb/s than MPEG-2 BC provides at 640 Kb/s. MPEG-2 AAC is therefore expected to
be the sound compression system of choice in the future.
With time, AAC will probably be the successor of MP3. AAC can deliver quality equivalent to
MP3 at 70% of the bit rate; a stream encoded at 128 Kb/s in MP3, for example, needs only about
90 Kb/s in AAC. And it can deliver significantly better audio at the same bit rate. Like the MPEG-1
audio encoding standards, AAC supports three levels of encoding complexity. Perhaps the best
indication of its significance is its use as the core audio encoding technology for AT&T’s a2b,
Microsoft’s WMA, and Liquid Audio.
MPEG-4
MPEG-4 was developed by the same group that supported MPEG-1 and MPEG-2. It has better
compression capabilities than previous standards and it adds interactive support. With this standard
MPEG wants to provide a universal framework that integrates tools, profiles, and levels. It not only
integrates bitstream syntax and compression algorithms, but it also offers a framework for synthesis,
rendering, transport, and integration of audio and video.
The audio portion is based on MPEG-2 AAC standards. It includes Perceptual Noise Substitution
(PNS) which saves transmission bandwidth for noise-like signals. Instead of coding these signals, it
transmits the total noise-power together with a noise-flag. The noise is re-synthesized in the decoder
during the decoding process. It also provides scalability, which gives the encoder the ability to adjust
the bit rate according to the signal’s complexity.
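The PNS idea (send only the measured power of a noise-like band and regenerate matching noise in the decoder) can be sketched as below. The function names are illustrative, not taken from any MPEG-4 reference implementation.

```python
import math
import random

def encode_noise_band(samples):
    """Encoder side: represent a noise-like band by its mean power alone."""
    return sum(s * s for s in samples) / len(samples)

def synthesize_noise_band(power, n, seed=0):
    """Decoder side: re-synthesize n noise samples with the transmitted power."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, math.sqrt(power)) for _ in range(n)]
```

The re-synthesized band matches the original statistically, not sample for sample, which is exactly why the trick works only for signals the ear perceives as noise.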
Many developers are interested in synthesizing sound from structured descriptions. MPEG-4
does not standardize a synthesis method; it standardizes only the description of the synthesis. Hence,
any known or yet-unknown sound synthesis method can be described. Since a great deal of sound and
music is already produced through synthesis, MPEG-4 allows the final audio rendering to be left to
the end user’s computer.
Text-To-Speech Interfaces (TTSI) have existed since the advent of personal computers. MPEG-4
will standardize a decoder capable of producing intelligible synthetic speech at bit rates from 200 b/s
to 1.2 Kb/s. It will be possible to apply information such as pitch contour, phoneme duration,
language, dialect, age, gender, and speech rate. MPEG-4 can also provide sound synchronization in
animations. For example, the animation of a person’s lips could easily be synchronized to his or her
speech, regardless of the person’s language or rate of speech.
An MPEG-4 frame can be built up from separate elements. Hence, each visual element in a
picture and each individual instrument in an orchestral sound can be controlled individually. Imagine
listening to a quintet playing Beethoven, turning off one of the instruments, and playing that
part yourself. Or choosing which language each actor speaks in your favorite movie. This concept
of hypertextuality offers unlimited possibilities.
MPEG-7
By 1996 the MPEG consortium had found that people had difficulty locating audiovisual
digital content on worldwide networks because the web lacked a logical description of media files.
The consortium addressed this problem with the Multimedia Content Description Interface, or
MPEG-7 for short. MPEG-7 describes media content. For example, if you hum a few lines of a melody into
a microphone connected to your computer, MPEG-7 will search the web and produce a list of
matching sound files. Or you can input musical instrument sounds and MPEG-7 will search for and
display sound files with similar characteristics. MPEG-7 can also be used with Automatic Speech
Recognition (ASR) for similar purposes. In this capacity, MPEG-7 provides the tools for accessing
all content defined within an MPEG-4 frame.
RealAudio G2
RealAudio 1.0 was introduced in 1995 to provide fast downloads with conventional modems. The
latest version is called RealAudio G2 and features up to 80% faster downloads than its predecessors.
It offers improved handling of data loss while streaming. The available bandwidth on the web may
vary and this can result in empty spaces in the sound being played. With RealAudio G2, data packets
are built up from parts of neighboring frames so that they overlap each other. One packet may contain
parts of several seconds of music. If some packets are lost, the resulting gap is filled in by an
interpolation scheme, similar in spirit to interlaced GIF pictures. So even when packets are lost, the
engine still produces a satisfactory result.
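The interleaving idea can be sketched as follows: each frame is striped across several packets, so one lost packet removes only a thin slice of every frame rather than a contiguous gap, and the decoder can conceal the missing slices by interpolation. This is a schematic illustration, not RealNetworks’ actual packet format.

```python
def interleave(frames, packets_per_group):
    """Stripe consecutive frames across packets so that losing one packet
    removes only a small slice of each frame rather than a whole gap."""
    packets = [[] for _ in range(packets_per_group)]
    for f_idx, frame in enumerate(frames):
        step = len(frame) // packets_per_group
        for p in range(packets_per_group):
            # Fragment p of every frame travels in packet p.
            packets[p].append((f_idx, frame[p * step:(p + 1) * step]))
    return packets

def deinterleave(packets, n_frames, frame_len, fill=0):
    """Rebuild frames; fragments from lost packets get a placeholder value
    that a real decoder would interpolate over."""
    frames = [[fill] * frame_len for _ in range(n_frames)]
    step = frame_len // len(packets)
    for p, packet in enumerate(packets):
        if packet is None:              # packet lost in transit
            continue
        for f_idx, frag in packet:
            frames[f_idx][p * step:(p + 1) * step] = frag
    return frames
```

With four packets per group, dropping any single packet leaves every frame three-quarters intact instead of wiping out whole frames.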
RealAudio G2 is optimized for Internet speeds of 16 Kb/s to 32 Kb/s, with support for rates from
6 Kb/s to 96 Kb/s. This allows a wide range of bit rates as well as the ability to constantly change bit
rate while streaming. Due to its success, RealNetworks has expanded the scope of its offerings for sound
and video transfer as well as for multimedia platforms such as VRML and Flash. The company is also
working on a light version of MPEG-7 to describe the content of the media being played. However,
RealNetworks products suffer from a lack of public source code and limitations in the free coding
tools. Consumers might reject them as too expensive, and even big companies would rather use tools
such as AAC or Microsoft Audio that are free and easily available.
Microsoft—Audio v4.0 and Windows Media Player
Microsoft has been intimately involved with audio and has incorporated multimedia in its operating
systems since Windows 95, providing basic support for CDs and WAV files and for the recently
introduced MS Audio format. Windows Media Player, for example, is a proprietary multimedia
platform shipped with Windows that serves as a default front end for playing audio and video files.
Currently, Microsoft is aggressively developing new technologies for the secure download of music.
Microsoft recently developed the WMA (Windows Media Audio) codec, which is twice as effective as MP3.
It is a variable-rate technology whose new compression scheme provides high quality in very
small files. Microsoft has licensed the codec to other software providers and has incorporated WMA
server-side technology into the Windows NT Server OS royalty-free. Microsoft is also working on
support in other Microsoft products, such as the PocketPC OS for hand-held devices and set-top boxes.
Liquid Audio
The Liquid Audio Company was formed in 1996. It developed an end-to-end solution to provide
high-quality, secure music. It enables music to be encoded in a secure format, compressed to a
reasonable file size, purchased online, downloaded, and played. It also included back-end solutions
to manage and track payments—a sort of clearinghouse for downloaded music.
The company worked with Dolby Labs to create a compression format that retains high fidelity
to the original. The compression format encodes music better than MP3 does, resulting in files that
average about 0.75 MB per minute while retaining very high quality. As part of the security technology, content owners can encode business rules and other information into the songs. Options include
artwork and promotional information that appears on the player while the music plays. It can also
include lyrics or other song information, a time limit for playback, and even a URL for making a
purchase. One possible application is the presentation of a Web site to promote CDs. The site could
distribute a full song for download with a time limit for playback. While the song plays the Web site
name can be featured prominently to invite the listener to place an order for the full CD.
The Liquid Music Network includes over 200 partners and affiliates including Sony, Hewlett-Packard, Iomega, S3/Diamond Multimedia, Sanyo, Texas Instruments, and Toshiba. The network has
been hindered by its PC-only playback capabilities, but has recently gained support from silicon
vendors and consumer electronics companies.
a2b
The a2b format is a compression and security technology backed by AT&T. Its web site promotes the
a2b codec with free players and available content. But a limited installed base of players and little
momentum in the market have prevented a2b from becoming a widely supported format.
EPAC (Enhanced Perceptual Audio Codec)
EPAC is a codec developed by Lucent Technologies (Bell Labs). It compresses audio at a ratio of
1:11. It is supported by RealNetworks in its popular G2 player, but few content owners and distributors support it.
TwinVQ (Transform-domain Weighted Interleave Vector Quantization)
This compression technology, targeted at download applications, was originally developed by
Yamaha. It has been incorporated, along with AAC, into the MPEG-4 specification. The underlying
algorithms are significantly different from the algorithm used in MP3. It has attracted high industry
interest due to its quality and compression capabilities. The codec can compress audio at ratios of
1:18 to 1:96, which implies a near-CD-quality file size of about 0.55 MB per minute.
QDesign Music Codec
Based in British Columbia, QDesign developed a high-quality streaming audio codec. It is distributed by Apple Computer as part of its QuickTime media architecture and gives excellent quality at
dialup data rates.
SDMI
The Secure Digital Music Initiative (SDMI) is a forum of more than 180 companies and organizations representing information technology, music, consumer electronics, security technology, the
worldwide recording industry, and Internet service providers. SDMI’s charter is to develop open
technology specifications that protect the playing, storing, and distributing of digital music.
In June of 1999 SDMI announced that they had reached a consensus on specifications for a new
portable music player. This spec would limit digital music consumers to two options:
Users could transfer tracks from CDs they purchased onto their own players.
Users could pay for and download music from the Internet from its legitimate publisher.
The proposed version 1.0 specification provides for a two-phase system. Phase I commences
with the adoption of the SDMI specification. Phase II begins when a screening technology becomes
available that can identify pirated songs among new music releases and refuse playback.
During Phase I SDMI-compliant portable devices may accept music in all current formats,
whether protected or unprotected. In early 2000 record companies started imprinting CD content
with a digital watermark that secured music against illegal copying. In Phase II, consumers wishing
to download new music releases that include new SDMI technology would be prompted to upgrade
their Phase I device to Phase II in order to play or copy that music. As an incentive to upgrade—with
music now secured against casual pirating—music companies may finally be ready to put their music
libraries on line.
The new proposal’s impact on MP3 is far less than it might have been. The Big 5—Sony Music
Entertainment, EMI Recorded Music, Universal Music Group, BMG Entertainment and Warner
Music Group—had initially advocated making SDMI-compliant players incompatible with MP3
files. However, they may agree to a security scheme that is backward-compatible with the MP3
format. If so, consumers will be able to copy songs from their CDs and download unprotected music,
just as they do now.
The open technology specifications released by SDMI will ultimately:
Provide consumers with convenient access to music available online and in new, emerging
digital distribution systems.
Enable copyright protection for artists’ works.
Promote the development of new music-related business and technologies.
Components of MP3 Portable Players
Portable MP3 players are small and at present cost between $100 and $300. They leverage a PC for
content storage, encoding, and downloading. MP3 home systems must support multiple output
formats and have robust storage capabilities. Automotive versions require broad operating environment support, specific industrial design specifications, and multiple format support including radio
and CD. Product differentiation trends include adding Bluetooth capability, user interface, video,
games, and day-timer features. Figure 4.1 shows a block diagram of a typical MP3 player.
The most expensive components in an MP3 player are the microprocessor/digital signal processor (DSP) and the flash memory card. The MP3 songs are downloaded into flash memory via the PC
parallel port, USB port, or Bluetooth. The user interface controls are interpreted by the main control
logic. The song data is manipulated to play, rewind, skip, or stop. The main control logic interfaces
directly to the MP3 decoder to transfer MP3 data from the flash memory to the MP3 decoder.
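That control flow can be sketched as a simple state machine. This is a hypothetical illustration of the main control logic; real player firmware also handles buffering, timing, and power management.

```python
class PlayerControl:
    """Toy sketch of the main control logic: it reacts to user-interface
    events and feeds MP3 frames from flash storage to the decoder."""

    def __init__(self, flash_frames, decode):
        self.flash = flash_frames   # list of MP3 frames held in flash memory
        self.decode = decode        # decoder callback (hypothetical interface)
        self.pos = 0
        self.playing = False

    def handle(self, button):
        # Interpret the user-interface controls.
        if button == "play":
            self.playing = True
        elif button == "stop":
            self.playing = False
        elif button == "skip":
            self.pos = min(self.pos + 1, len(self.flash))
        elif button == "rewind":
            self.pos = 0

    def tick(self):
        # Called periodically: stream the next frame to the decoder.
        if self.playing and self.pos < len(self.flash):
            self.decode(self.flash[self.pos])
            self.pos += 1
```

In real hardware the `tick` role is played by an interrupt or DMA request from the decoder asking for its next frame.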
Figure 4.1: MP3 Block Diagram
Flash Memory
Flash memory has enabled the digital revolution by providing for the transfer and storage of data. Flash
memory cards have become the dominant media storage technology for mobile devices requiring
medium storage capacities. The largest application markets for flash cards are digital audio (MP3)
players, digital still cameras, digital camcorders, PDAs, and mobile cellular handsets. Flash memory
remains the de facto choice for MP3 storage.
The five most popular flash standards are:
CompactFlash
SmartMedia
Memory Stick
MultiMediaCard (MMC)
Secure Digital Card
Recent shortages of flash cards and rising prices may have stunted the growth of the MP3
player market. Manufacturers are beginning to turn to hard disk drives and to Iomega’s Clik drives to
increase storage capacity.
Even with the increased availability of these alternatives, flash cards will continue to dominate
the market. The density of flash cards in MP3 players will rise steadily over the next few years. By
2004 most players shipped will contain 256 MB or 512 MB cards.
Internet Audio Players – Market Data and Trends
Sales of portable digital music players will drastically increase over the next few years. MP3 player
sales soared to $1.25 billion by the end of 2002. By 2004 the worldwide market for portable digital
audio players (MP3 and other formats) will grow to more than 10 million units and to just under $2
billion in sales. This compares to a mere $125 million in MP3 player sales in 1999. Sales of digital
music players (portable, home, and automotive) will top 25 million units by 2005.
With the arrival of portable MP3 players, the market for portable CD players has for the first
time started to see a decrease in unit sales in the U.S. The MP3 player market is concentrated in
North America with over 90% of worldwide shipments last year occurring in the United States and
Canada. Over the next few years MP3 player sales will climb with increasing worldwide Internet
access penetration levels.
Some market trends in the Internet audio player industry are:
There is tremendous consolidation at every level, thus increasing the market power for the
remaining large music labels. This consolidation occurs because corporations are seeking
economies of scale and profitability in a mature industry. This, in turn, prompts artists and
independent labels to search for new ways to promote and distribute music. Efficient
systems provide lower costs and higher profits for the labels. The new medium provides the
opportunity for promotion and exposure to artists.
Communication, entertainment, and information are transitioning from analog to digital.
The shift has been occurring in the music industry for decades via CD formats as consumers
expect greater fidelity, depth, and range in music. Also, digitization makes it easier to copy
without losing quality. PCs aid this transition by making digital files easy to store and transfer.
The Internet has become a superb distribution channel for digital data. It is serving as a new
medium for music promotion and distribution.
Digital compression benefits the Internet audio player industry. Reducing music files to a
fraction of their size makes them easier to manage and distribute without sacrificing quality.
Nontraditional and experimental business models are being implemented. Knowledge of
consumer information reduces the distance between the fans and the artists. Focused
marketing allows closer relationships while increasing profits. Electronic distribution allows
per-play fees, limited-play samples, and subscription models.
Broadband (high-speed) access to the home will promote electronic music distribution over
the Internet.
The Internet audio market will probably not converge on a single standard any time soon. In
fact, it will probably get more fragmented as it grows. Each standard will be optimized for
different applications, bit rates, and business agendas of the media providers.
Users will want players that support multiple standards. An essential component of a player
is the range of music that it makes available. This trend has begun with recent announcements of products that support multiple standards and have the ability to download support
for new standards. It will be important for audio players to support physical formats such as
CD, DVD-Audio, and SACD since consumers cannot afford three players for three formats.
Another emerging trend is support for metadata in music standards. Metadata is non-music
data included in the music files. It includes data such as track information and cover art. A
potential use for metadata is advertisements.
Copyright protection is the biggest issue hindering the growth of Internet music distribution.
MP3 files can be easily distributed across the Internet using web page downloads or email.
Since MP3 has no inherent copy protection, once it is made available on the web everyone
has access to it at no charge. The recording industry represented by the Recording Industry
Association of America (RIAA) sees the combination of MP3 and the Internet as a
Pandora’s box that will result in widespread piracy of copyrighted material. The RIAA
believes the threat is significant enough that it took legal action to block the sale of the
Diamond Rio in late 1998. The fear of piracy has limited the availability of legitimate MP3
material and tracks from emerging and mainstream artists.
Several trends will spur growth of the portable digital music player market:
More available hardware – Over the past 12 months, more than 50 manufacturers have announced new portable music players.
Greater storage capacity – The density of flash cards, which are the most common storage
media for MP3 players, is steadily increasing. Most players now offer 64 MB of storage—
about one hour’s worth of music.
More storage options – Cheaper alternatives to flash cards (hard disk drives, Iomega’s Clik
drives) are becoming available.
More digital music files – Legal or not, Napster, Gnutella, and other file sharing sites have
increased awareness and availability of digital music.
Other Portable Audio Products
Sales of traditional cassette and CD headphone stereos, Internet audio headphone portables, and
boomboxes are expected to exceed $3.2 billion in 2003. Sales continue to grow because, at prices as
low as $29, headphone CD players have become viable gifts for teens and pre-teens. Sales also grew
because anti-shock technologies have reduced mis-tracking, making players more practical when
walking, hiking, or working out.
Another growth booster is the launch of the first headphone CD players capable of playing back
MP3-encoded CDs. In the boombox category, factory-level sales of CD-equipped models will
continue to grow at 4.6%. Cassette boomboxes will continue their downward spiral, dropping 34%.
Within the CD-boombox segment, sales of higher priced three-piece units (featuring detachable
speakers) will fall 25% as consumers opt increasingly for more powerful tabletop shelf systems
whose prices have grown increasingly affordable.
Convergence of MP3 Functionality in Other Digital Consumer Devices
MP3 technology will proliferate as manufacturers integrate it into other electronic devices. Several
manufacturers are combining MP3 functionality into consumer devices such as cellular phones,
digital jukeboxes, PDAs, wristwatches, automotive entertainment devices, digital cameras, and even
toasters. Convergence between CD and DVD players has already been popular for several years. This
convergence will move further to players that incorporate DVD-Video, CD, DVD-Audio, and SACD.
And next generation portable players will support multiple Internet audio formats.
PC-Based Audio
Though PC-based audio has existed for over 20 years, only during the past two years have personal
computers become widely used to play and record popular music at the mainstream level. Sound card
and computer speaker sales are strong. Many fans are abandoning their home stereos in favor of PCs
with high storage capacity because they are ideal for storing music for portable MP3 players.
The home audio convergence and the shift to the PC will continue as new home audio networking products hit the market. These devices will call up any digital audio file stored on a home PC and
play it in any room of a house—through an existing home theater system or through a separate pair
of powered speakers. If the PC has a cable modem or DSL connection, the home networking products can even access Internet streamed audio.
It is easy to store CDs on a computer hard drive. Software for CD “ripping” has been available
for several years. The process is as simple as placing a disc in a CD-ROM drive and pressing start.
Some titles even let users tap into online databases containing song information and cover art. After
the CDs are stored, users can use a remote control to access custom play-lists or single tracks. Add
the ability to retrieve Internet streamed audio (a radio broadcast or an overseas sporting event, for
example) and the benefits of convergence emerge.
Digital entertainment is migrating from the computer room to the living room. PCs of the future
will provide for the distribution of broadband signals to homes through residential gateways (cable
or DSL connections). The technology will quickly move beyond audio to provide whole-house
digital video access to movies and shows that are stored on a PC or on the Internet.
“Thin” operating systems such as PocketPC are also appearing in home theater components.
While some fear domination by a single OS manufacturer, standardization will permit interaction
among different component brands. For example, with the touch of a remote button, devices such as the
television, stereo, and lighting systems will synchronize, and this will not even require custom
programming.
In the coming years, the PC as we know it will disappear. Instead of a multi-function box, the
home computer will become a powerful information server for everyday devices. Consumers must
prepare to experience a new dimension in home audio access, one that will dramatically change our
media consumption habits.
Internet Radio
The Internet has also extended its reach to influence the design of home radios, turning AM/FM radios
into AM/FM/Internet radios. Internet radios are available that enable consumers to stream music
from thousands of websites without having to sit near an Internet-enabled PC. Today the selection of
Internet radios has expanded to include devices from mainstream audio suppliers and computer-industry suppliers. Internet radios are showing up in tabletop music systems and in audio components
intended for a home’s audio-video system. Some connect to fast broadband modems to achieve
live streaming with the highest possible sound quality.
Digital Audio Radio
The digital revolution has also come to radio. In 2001 Sirius Satellite Radio and XM Satellite Radio
delivered up to 100 channels of coast-to-coast broadcasting via satellite. The first terrestrial AM and
FM radio stations could soon begin commercial digital broadcasting if the Federal Communications
Commission authorizes digital conversion.
Online Music Distribution
Record companies have always been paranoid about piracy. This was rarely justified until the digital
age made high-fidelity copying a ubiquitous reality. First came the MP3 compression format. Then
came the Internet which made electronic content distribution worldwide viable. Next, portable
players appeared. Finally, sites such as Napster, AudioGalaxy, Gnutella, and LimeWire appeared as
venues for easy file sharing.
These technologies and services disrupted the long-established business model of the $50-billion-a-year recording industry. The system was threatened, and shareholders worried that stock
values would collapse. Companies replied by suing. Shortly after Diamond Multimedia Systems
launched the Rio 300 MP3 player in the fall of 1998, the RIAA sued to have the device taken off the
market, alleging that it was an illegal digital recording device. The Rio 300 wasn’t the first MP3
player, but it was the first one marketed and sold heavily in the United States. The suit triggered a
media blitz that portrayed the RIAA as the bad guy. Ironically, the media displayed pictures of the
matchbox-size Rio everywhere, bringing it to the attention of many who didn’t know about MP3.
The RIAA dropped its suit after a federal circuit court agreed with Diamond that Rio did not fit
the law’s definition of an illegal digital recording device. That news triggered another sales surge
with major consumer electronics companies such as Creative Technology, Thomson Multimedia, and
Samsung Electronics jumping on the bandwagon. MP3 file-sharing Web sites, including Napster,
became favorite stops for music lovers.
In late 1999 the RIAA sued Napster for copyright infringement, bringing more media attention
and spurring sales of even more portable players. A federal court ruling stopped Napster, but not
before three billion files were swapped illegally. Although MP3 player sales slowed a bit, other
websites picked up where Napster left off. During all the years that people were downloading free
MP3 files, only 2% of U.S. households ever paid for downloaded music. The record companies are
fearful of losing control over content. They are looking at SDMI and other secure schemes as a
possible solution for protecting the playing, storing, and distributing of digital music.
The copyright infringement lawsuits have been settled, with Napster and being
purchased by separate labels. These types of deals and partnerships between consumer electronics
vendors, music labels, Internet portals, retailers, and online music players will enable legitimate
online music distribution channels. The key ingredients in a successful online music service will be
an easily negotiable interface with a broad array of content accessible from any device at a reasonable price. It would be a paid service that approximates the convenience and ubiquity
of a Napster, but with better file quality and a broader range of services. Forecasts predict that the
number of music service provider users in the U.S. will exceed 10 million by 2005 and revenues will
exceed $1.6 billion.
By 1981 the vinyl LP was more than 30 years old and the phonograph was more than 100 years old.
It was obvious that the public was ready for a new, advanced format. Sony, Philips, and Polygram
collaborated on a new technology that offered unparalleled sound reproduction—the CD. Instead of
mechanical analog recording, the discs were digital. Music was encoded in binary onto a five-inch
disc covered with a protective clear plastic coating and read by a laser.
Unlike fragile vinyl records, the CD would not deteriorate with continued play, was less vulnerable to scratching and handling damage, held twice as much music, and did not need to be flipped
over. It was an immediate sensation when it was introduced in 1982. By 1988 sales of CDs surpassed
vinyl as the home playback medium of choice. It passed the pre-recorded cassette in sales in 1996.
But the CD could only play back digital music. Now accustomed to the perfection of digital music
playback, consumers demanded the ability to record digitally as well. In 1986 Sony introduced
Digital Audio Tape (DAT), followed by MiniDisc in 1992. Philips countered that same year with
DCC (Digital Compact Cassette). None of these formats caught on simply because none of them
were CD. The recordable CD was unveiled in 1990, but the first consumer CD-R devices would not
be introduced until the mid-1990s.
Audio Players
With the success of the DVD player and the definition of a DVD format audio specification,
DVD-Audio is one of the most popular emerging formats. A similar audio format—Super Audio
CD—is also being introduced. The arrival of MP3 and similar formats has introduced several
exciting trends, one of which is the distribution of music over the Internet. Changes in the
multibillion dollar audio music market will spell success for some technologies and failure for
others. At the very least, we are guaranteed to see improved sound quality and more secure and easy
distribution of music in the coming years.
Cellular/Mobile Phones
The wireless industry includes electronics systems such as pagers, GPS, cordless telephones, notebook PCs with wireless LAN functionality, and cellular phones. Cellular handset phones represent
the largest and most dynamic portion of the wireless communication market. No product has become
as dominant in unit shipments as the cellular phone. It has evolved from being a luxury for urgent
communications to a platform that provides voice, video, and data services. It’s now a necessity for
both business and personal environments.
The term cellular comes from a description of the network in which cellular or mobile phones
operate. The cellular communications system utilizes numerous low-power transmitters, all interconnected to form a grid of “cells.” Each ground-based transmitter (base station) manages and controls
communications to and from cellular phones in its geographic area, or cell. If a cellular phone user is
traveling over a significant distance, the currently transmitting cell transfers the communications
signal to the next adjacent cell. Hence, a cellular phone user can roam freely and still enjoy uninterrupted communication. Since cellular communications rely on radio waves instead of on a fixed-point wired connection, a cellular phone can be described as a radiophone. The dependence on radio
waves is the common denominator for wireless communications systems.
Wireless and cellular roots go back to the 1940s when commercial mobile telephony began. The
cellular communications network was first envisioned at Bell Laboratories in 1947. In the United
States cellular planning began in the mid-1940s, but trial service did not begin until 1978. It was not
until 1981 in Sweden that the first cellular services were offered. Full deployment in America did not
occur until 1984. Why the delay? The reasons are limited technology, Bell System ambivalence,
cautiousness, and governmental regulation. The vacuum tube and the transistor made possible the
early telephone network. But the wireless revolution began only after low-cost microprocessors,
miniature circuit boards, and digital switching became available. Thus, the cellular phone industry
can still be considered a “young” electronic system segment—much younger than the PC industry,
which began in the mid-1970s.
Landscape—Migration to Digital and 3G
History—The Wireless Beginning
Although CB (Citizens Band) radio and pagers provided some mobile communications solutions,
demand existed for a completely mobile telephone. Experiments with radio telephones began as far
back as the turn of the century, but most of these attempts required the transport of bulky radio
transmitters or they required tapping into local overhead telephone wires with long poles. The first
practical mobile radio telephone service, MTS (Mobile Telephone Service), began in 1946 in St.
Louis. This system was more like a radio walkie-talkie: operators handled the calls and only one
person at a time could talk.
The idea of permanent “cells” was introduced in 1947, the same year radio telephone service
was initiated between Boston and New York by AT&T. Automatic dialing—making a call without the
use of an operator—began in 1948 in Indiana. But it would be another 16 years before the innovation
was adopted by the Bell system.
The Bell System (part of AT&T then) moved slowly and with seeming lack of interest at
times toward wireless. AT&T products had to work reliably with the rest of their network and they
had to make economic sense. This was not possible for them with the few customers permitted by the
limited frequencies available at the time. The U.S. FCC was the biggest contributor to the delay,
stalling for decades on granting more frequency space. This delayed wireless technology in America
by perhaps 10 years. It also limited the number of mobile customers and prevented any new service
from developing. But in Europe, Scandinavia, Britain, and Japan where state-run telephone companies operated without competition and regulatory interference, cellular came more quickly. Japanese
manufacturers equipped some of the first car-mounted mobile telephone services, their technology
being equal to what America was producing.
During the next 15 years the introduction of the transistor and an increase in available frequencies improved radio telephone service. By 1964 AT&T developed a second-generation cell phone
system. It improved Mobile Telephone Service to have more of the hallmarks of a standard telephone, though it still allowed only a limited number of subscribers. In most metropolitan service
areas there was a long waiting list. The idea of a mobile phone was popularized by Secret Agent 86,
Maxwell Smart, who used a shoe phone on the TV spy spoof, “Get Smart.” In 1973 Motorola filed a
patent for a radio telephone system and built the first modern cell phone. But the technology would
not reach consumers until 1978.
First There Was Analog
The first cellular communications services (first generation, or 1G) were analog systems. Analog
systems are based on frequency modulation (FM) using bandwidths of 25 kHz to 30 kHz. They use a
constant phase variable frequency modulation technique to transmit analog signals.
Among the most popular of analog wireless technologies is AMPS (Advanced Mobile Phone
System), developed by Bell Labs in the 1970s. It operates in the 800-MHz band and uses a range of
frequencies between 824 MHz and 894 MHz for analog cell phones. To encourage competition and
keep prices low, the U. S. government required the presence of two carriers, known as carrier A and
B, in every market. One of the carriers was normally the local phone company. Carriers A and B are
each assigned 832 frequencies—790 for voice and 42 for data. A pair of frequencies, one for transmit
and one for receive, makes up one channel. Each channel is typically 30 kHz wide; 30 kHz was chosen because it provides voice quality comparable to a wired telephone.
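The channel arithmetic above can be illustrated with a short sketch. Only the 30-kHz spacing, the 45-MHz transmit/receive separation, and the 832-channel count come from the text; the 824-MHz band edge and the simple sequential channel numbering are illustrative assumptions.

```python
# Toy sketch of the AMPS channel arithmetic described in the text.
# Assumption: a hypothetical band starting at 824 MHz with channels
# numbered sequentially; real AMPS channel numbering differs.

CHANNEL_SPACING_MHZ = 0.030   # each channel is 30 kHz wide
DUPLEX_SEPARATION_MHZ = 45.0  # transmit/receive offset per channel
CHANNELS_PER_CARRIER = 832    # 790 voice + 42 data

def channel_pair(n, reverse_base=824.0):
    """Return (mobile-transmit, base-transmit) frequencies in MHz
    for channel n, assuming a band edge at reverse_base MHz."""
    reverse = reverse_base + n * CHANNEL_SPACING_MHZ
    return reverse, reverse + DUPLEX_SEPARATION_MHZ

# Total one-way spectrum consumed by one carrier's 832 channels:
one_way = CHANNELS_PER_CARRIER * CHANNEL_SPACING_MHZ
print(f"one-way spectrum per carrier: {one_way:.2f} MHz")  # 24.96 MHz

rev, fwd = channel_pair(1)
print(f"channel 1: {rev:.2f} / {fwd:.2f} MHz")
```

The 45-MHz duplex offset keeps each channel's transmit and receive halves from interfering, exactly as described for the real system.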
The transmit and receive frequencies of each voice channel are separated by 45 MHz to keep
them from cross-interference. Each carrier has 395 voice channels as well as 21 data channels for
housekeeping activities such as registration and paging. A version of AMPS known as Narrowband
AMPS (NAMPS) incorporates some digital technology, which allows the system to carry about three
times as many calls as the original version. Though it uses digital technology, it is still considered
analog. AMPS and NAMPS operate in the 800-MHz band only and do not offer features such as e-mail and Web browsing.
When AMPS service was first fully initiated in 1983, there were a half million subscribers across
the U.S. By the end of the decade there were two million cell phone subscribers. But demand far outstripped the supply of frequency bands and cell phone numbers.
First-generation analog networks are gradually being phased out. They are limited in that there
can only be one user at a time per channel and they can’t provide expanded digital functionality
(e.g., data services). Also, these systems could not contain all the potential wireless customers.
Companies began researching digital cellular systems in the 1970s to address these limitations. It
was not until 1991, however, that the first digital cellular phone networks (second generation, or 2G)
were established.
Along Comes Digital
Digital cell phones use analog radio technology, but they use it in a different way. Analog systems do
not fully utilize the signal between the phone and the cellular network because analog signals cannot
be compressed and manipulated as easily as digital. This is why many companies are switching to
digital—they can fit more channels within a given bandwidth.
Digital phones convert a voice into binary information (1s and 0s) and compress it. With this
compression, between three and 10 digital cell phone calls occupy the space of a single analog call.
Digital cellular telephony uses constant frequency variable phase modulation techniques to transmit
its digital signals. With this technology digital cellular can handle up to six users at a time per
channel compared to one user at a time with analog. This is especially critical in densely populated
urban areas. Thus, digital technology helped cellular service providers put more cellular subscribers
on a given piece of transmission bandwidth. The cost to the service provider of supplying digital
cellular services to users can be as little as one-third the cost of providing analog services. Besides
reduced costs to the service provider, digital cellular technology also offers significant benefits to the
user. These benefits include encryption capability (for enhanced security), better voice quality, longer
battery life, and functionality such as paging, caller ID, e-mail, and FAX.
In the 1970s digital cellular research concentrated on narrow-band frequency division multiple
access (FDMA) technology. In the early 1980s the focus switched to time division multiple access
(TDMA) techniques. In 1987 narrow-band TDMA with 200 kHz channel spacing was adopted as the
technology of choice for the pan-European GSM digital cellular standard. In 1989 the U.S. and Japan
also adopted the narrow-band TDMA. More recently cellular networks are migrating to CDMA (code
division multiple access) technology.
The three common technologies used to transmit information are described in the following list:
Frequency division multiple access (FDMA) – Puts each call on a separate frequency.
FDMA separates the spectrum into distinct voice channels by splitting it into uniform
chunks of bandwidth. This is similar to radio broadcasting where each station sends its
signal at a different frequency within the available band. FDMA is used mainly for analog
transmission. While it is capable of carrying digital information, it is not considered to be an
efficient method for digital transmission.
Time division multiple access (TDMA) – Assigns each call a certain portion of time on a
designated frequency. TDMA is used by the Electronics Industry Alliance and the Telecommunications Industry Association for Interim Standard 54 (IS-54) and Interim Standard 136
(IS-136). Using TDMA, a narrow band that is 30 kHz wide and 6.7 milliseconds long is split
into three time slots. Narrow band means “channels” in the traditional sense. Each conversation gets the radio for one-third of the time. This is possible because voice data that has
been converted to digital information is compressed to take up significantly less transmission space. Hence, TDMA has three times the capacity of an analog system using the same
number of channels. TDMA systems operate in either the 800 MHz (IS-54) or the 1900
MHz (IS-136) frequency band. TDMA is also used as the access technology for Global
System for Mobile communications (GSM).
Code division multiple access (CDMA) – Assigns a unique code to each call and spreads it
over the available frequencies. Multiple calls are overlaid on each other on the channel.
Data is sent in small pieces over a number of discrete frequencies available for use at any
time in the specified range. All users transmit in the same wide-band chunk of spectrum but
each phone’s data has a unique code. The code is used to recover the signal at the receiver.
Because CDMA systems must put an accurate time-stamp on each piece of a signal, they
reference the GPS system for this information. Between eight and ten separate calls can be
carried in the same channel space as one analog AMPS call. CDMA technology is the basis
for Interim Standard 95 (IS-95) and operates in both the 800 MHz and 1900 MHz frequency
bands. Ideally, TDMA and CDMA are transparent to each other. In practice, high-power
CDMA signals raise the noise floor for TDMA receivers and high-power TDMA signals can
cause overloading and jamming of CDMA receivers.
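The CDMA idea described above can be captured in a minimal sketch: each call spreads its bits with a unique code, the transmissions overlap on the same spectrum, and the receiver recovers one call by correlating against that call's code. The 4-chip orthogonal codes and call names are illustrative assumptions, and real systems add timing and power control that are ignored here.

```python
# Minimal CDMA spreading/despreading sketch. Two calls share one
# channel; each uses a unique 4-chip orthogonal code (assumed).

CODES = {
    "call_a": [1, 1, 1, 1],
    "call_b": [1, -1, 1, -1],
}

def spread(bits, code):
    """Spread each data bit (+1/-1) across the chips of the code."""
    return [b * c for b in bits for c in code]

def despread(signal, code):
    """Correlate the combined signal with one code to recover its bits."""
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(1 if corr > 0 else -1)
    return bits

a_bits = [1, -1, 1]
b_bits = [-1, -1, 1]
# Both calls transmit at once; the channel simply adds the chips.
channel = [x + y for x, y in zip(spread(a_bits, CODES["call_a"]),
                                 spread(b_bits, CODES["call_b"]))]
print(despread(channel, CODES["call_a"]))  # [1, -1, 1]
print(despread(channel, CODES["call_b"]))  # [-1, -1, 1]
```

Because the codes are orthogonal, each receiver's correlation cancels the other call's contribution, which is why many calls can be overlaid on the same wide-band chunk of spectrum.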
With 1G (analog) cellular networks nearing their end and 2G (digital) ramping up, there is much
discussion about third-generation (3G) cellular services. 3G is not expected to have a significant
impact on the cellular handset market before 2002-03. While 3G deployments are currently underway, 2.5G is being considered as the steppingstone for things to come. The evolution of the digital
network from 2G, to 2.5G, to 3G is described next.
2G
Second-generation digital systems can provide voice/data/fax transfer as well as other value-added
services. They are still evolving with ever-increasing data rates via new technologies such as
HSCSD and GPRS. 2G systems include GSM, US-TDMA (IS-136), cdmaOne (IS-95), and PDC.
US-TDMA/PDC have been structured atop existing 1G analog technology and are premised on
compatibility and parallel operation with analog networks. GSM/IS-95, however, are based on an
entirely new concept and are being increasingly adopted worldwide.
2.5G
2.5G technologies are those offering data rates higher than 14.4 Kb/s and less than 384 Kb/s. Though
these cellular data services may seem like a steppingstone to things to come, they may be with us
much longer than some cellular carriers want to believe.
2.5G technologies accommodate cellular nicely. First, they are packet-based as opposed to 2G
data services, which were generally connection based. This allows for always-on services. And since
no real connection needs to be established, latency in sending data is greatly reduced. Second, since
2.5G services don’t require new spectrum or 3G licenses, the carrier’s cost to deploy these services is
modest. This might make them cheaper for consumers. Third, enabling a handset for 2.5G services is fairly inexpensive, typically adding less than $10 to the handset cost.
3G
3G (third generation) mobile communications systems include technologies such as cdma2000,
UMTS, GPRS, WCDMA, and EDGE. The vision of a 3G cellular system was originally articulated
by the International Telecommunications Union (ITU) in 1985. 3G is a family of air-interface
standards for wireless access to the global telecommunications infrastructure. The standards are
capable of supporting a wide range of voice, data, and multimedia services over a variety of mobile
and fixed networks. 3G combines high-speed mobile access with enhanced Internet Protocol (IP)-based services such as voice, text, and data. 3G will enable new ways to communicate, access
information, conduct business, and learn.
3G systems will integrate different service coverage zones including macrocell, microcell, and
picocell terrestrial cellular systems. In addition, they support cordless telephone systems, wireless
access systems, and satellite systems. These systems are intended to be a global platform and provide
the infrastructure necessary for the distribution of converged services. Examples are:
Mobile or fixed communications
Voice or data
To enhance the project, the ITU has allocated global frequency ranges to facilitate global
roaming and has identified key air-interface standards for the 3G networks. It has also identified
principal objectives and attributes for 3G networks including:
increasing network efficiency and capacity.
anytime, anywhere connectivity.
high data transmission rates: 144 Kb/s while driving, 384 Kb/s for pedestrians, and 2 Mb/s for
stationary wireless connections.
interoperability with fixed line networks and integration of satellite and fixed-wireless
access services into the cellular network.
worldwide seamless roaming across dissimilar networks.
bandwidth on demand and the ability to support high-quality multimedia services.
increased flexibility to accommodate multiple air-interface standards and frequency bands
and backward compatibility to 2G networks.
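The tiered rate targets in the objectives above can be expressed as a trivial lookup. This is an illustrative sketch; the mobility labels are assumptions, while the rates (144 Kb/s vehicular, 384 Kb/s pedestrian, 2 Mb/s stationary) are the figures quoted in this chapter.

```python
# ITU 3G target data rates by user mobility, per the figures in the text.
RATES_KBPS = {
    "vehicular": 144,     # while driving
    "pedestrian": 384,    # walking
    "stationary": 2000,   # fixed wireless, 2 Mb/s
}

def target_rate(mobility):
    """Return the 3G target rate in Kb/s for a given mobility class."""
    return RATES_KBPS[mobility]

print(target_rate("pedestrian"))  # 384
```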
The ITU World Radio Conference (WRC) in 1992 identified 230 MHz on a global basis for IMT-2000, including both satellite and terrestrial components. However, following unexpectedly rapid
growth in both the number of mobile subscribers and mobile services in the 1990s, the ITU is
considering the need for additional spectrum since WRC 2000. The currently identified frequency
bands are:
806 – 960 MHz
1,710 – 1,885 MHz
1,885 – 2,025 MHz
2,110 – 2,200 MHz
2,500 – 2,690 MHz
Of the five 3G air-interface standards approved by the ITU, only two—cdma2000 and Wideband
CDMA—have gained serious market acceptance.
Digital technologies are organized into these categories:
2G – GSM, TDMA IS-136, Digital AMPS, CDMA, cdmaOne, PDC, PHS, iDEN, and PCS
2.5G – GSM, HSCSD (High Speed Circuit Switched Data), GPRS (General Packet Radio
Service), EDGE (Enhanced Data Rate for GSM Evolution), and cdma2000 1XRTT
3G – cdma2000 1XEV, W-CDMA, TDMA-EDGE, and cdma2000 3XRTT
Some of the key digital wireless technologies are:
CDMA – Code Division Multiple Access, known in the U.S. as IS-95. The
Telecommunications Industry Association (TIA) adopted the CDMA standard in 1993.
It was developed by Qualcomm and characterized by high capacity and small cell
radius. It uses the same frequency bands as AMPS, employing spread-spectrum
technology and a special coding scheme.
cdmaOne – This is considered a 2G mobile wireless technology. It is used by member companies of the CDMA Development Group to describe a complete wireless system complying with the IS-95 CDMA air interface and the ANSI-41 network standard for switch interconnection. The IS-95A protocol
employs a 1.25-MHz carrier, operates in radio-frequency bands at either 800 MHz or 1.9
GHz, and supports data speeds up to 14.4 Kb/s. IS-95B can support data speeds up to 115
Kb/s by bundling up to eight channels.
cdma2000 – 3G wireless standard that is an evolutionary outgrowth of cdmaOne. It provides
a migration path for current cellular and PCS operators and has been submitted to the ITU
for inclusion in IMT-2000. Cdma2000 offers operators who have deployed a 2G cdmaOne
system a seamless migration path to 3G features and services within existing spectrum
allocations for both cellular and PCS operators. Cdma2000 supports the 2G network aspect
of all existing operators regardless of technology (cdmaOne, IS-136 TDMA, or GSM). This
standard is also known by its ITU name IMT-CDMA Multi-Carrier (1X/3X). Cdma2000 has
been divided into two phases. First-phase capabilities are defined in the 1X standard, which
introduces 144 Kb/s packet data in a mobile environment and faster speeds in a fixed
environment. Cdma2000 phase two, known as 3X, incorporates the capabilities of 1X and:
supports all channel sizes (5 MHz, 10 MHz, etc.).
provides circuit and packet data rates up to 2 Mb/s.
incorporates advanced multimedia capabilities.
includes a framework for advanced 3G voice services and vocoders (voice compression
algorithm codecs), including voice over packet and circuit data.
CDPD – Cellular Digital Packet Data. It refers to a technology that allows data packets to be sent along idle channels of cellular voice networks at very high speeds during
pauses in conversations. CDPD is similar to packet radio technology in that it moves data in
small packets that can be checked for errors and retransmitted.
CT-2 – A 2G digital cordless telephone standard that specifies 40 voice channels (40
carriers times one duplex bearer per carrier).
CT-3 – A 3G digital cordless telephone standard that is a precursor to DECT.
D-AMPS – Digital AMPS, or D-AMPS, is an upgrade to the analog AMPS. Designed to
address the problem of using existing channels more efficiently, D-AMPS (IS-54) employs
the same 30 kHz channel spacing and frequency bands (824 – 849 MHz and 869 – 894
MHz) as AMPS. By using TDMA, IS-54 increases the number of users from one to three per
channel (up to 10 with enhanced TDMA). An AMPS/D-AMPS infrastructure can support
analog AMPS or digital D-AMPS phones. Both operate in the 800 MHz band.
DECT – Initially, this was Ericsson’s CT-3, but it grew into ETSI’s Digital European Cordless Telecommunications standard. It is intended to be a far more flexible standard than CT-2 in that it has
120 duplex voice channels (10 RF carriers times 12 duplex bearers per carrier). It also has a
better multimedia performance since 32 Kb/s bearers can be concatenated. Ericsson is
developing a dual GSM/DECT handset that will be piloted by Deutsche Telekom.
EDGE – Enhanced Data for GSM Evolution. EDGE is an evolution of the US-TDMA
systems and represents the final evolution of data communications within the GSM standard. EDGE uses a new enhanced modulation scheme to enable network capacity and data
throughput speeds of up to 384 Kb/s using existing GSM infrastructure.
FSK – Frequency Shift Keying. Many digital cellular systems rely on FSK to send data back
and forth over AMPS. FSK uses two frequencies, one for 1 and the other for 0. It alternates
rapidly between the two to send digital information between the cell tower and the phone.
Clever modulation and encoding schemes are required to convert the analog information to
digital, compress it, and convert it back again while maintaining an acceptable level of
voice quality.
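The two-tone FSK scheme just described can be sketched in a few lines: one tone stands for 1, another for 0, and the receiver decides each bit by correlating the received samples against both tones. The sample rate, tone frequencies, and bit duration below are illustrative assumptions, not figures from any cellular standard.

```python
# Toy FSK modulator/demodulator. Parameters are assumptions chosen so
# each bit window holds an integer number of cycles of both tones.
import math

RATE = 8000           # samples per second (assumed)
F0, F1 = 1200, 2200   # tone frequencies for bits 0 and 1 (assumed)
SAMPLES_PER_BIT = 80  # 10 ms per bit (assumed)

def modulate(bits):
    """Emit one tone burst per bit: F1 for 1, F0 for 0."""
    out = []
    for bit in bits:
        f = F1 if bit else F0
        out.extend(math.sin(2 * math.pi * f * n / RATE)
                   for n in range(SAMPLES_PER_BIT))
    return out

def correlate(chunk, f):
    """Energy of the chunk's correlation with tone f (sin and cos)."""
    s = sum(x * math.sin(2 * math.pi * f * n / RATE) for n, x in enumerate(chunk))
    c = sum(x * math.cos(2 * math.pi * f * n / RATE) for n, x in enumerate(chunk))
    return s * s + c * c

def demodulate(samples):
    """Pick, per bit window, whichever tone correlates more strongly."""
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        chunk = samples[i:i + SAMPLES_PER_BIT]
        bits.append(1 if correlate(chunk, F1) > correlate(chunk, F0) else 0)
    return bits

print(demodulate(modulate([1, 0, 1, 1, 0])))  # [1, 0, 1, 1, 0]
```

Real systems add the compression and encoding steps the text mentions; this sketch shows only the frequency-shift principle itself.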
GPRS – General Packet Radio Service. It is the first implementation of packet-switched
data primarily for GSM-based 2G networks. Rather than sending a continuous stream of
data over a permanent connection, packet switching utilizes the network only when there is
data to be sent. GPRS can send and receive data at speeds up to 115 Kb/s. The GPRS network consists of two main elements: the SGSN (Serving GPRS Support Node) and the GGSN (Gateway GPRS Support Node).
GSM – Global System for Mobile Communications. The first digital standard developed to
establish cellular compatibility throughout Europe. Uses narrow-band TDMA to support
eight simultaneous calls on the same radio frequency. It operates at 900, 1800, and 1900
MHz. GSM is a technology based on TDMA which is the predominant system in Europe,
but is also used worldwide. It operates in the 900 MHz and 1.8 GHz bands in Europe and
the 1.9 GHz PCS band in the U.S. It defines the entire cellular system, not just the air
interface (TDMA, CDMA, etc.). GSM provides a short messaging service (SMS) that
enables text messages up to 160 characters to be sent to and from a GSM phone. It also
supports data transfer at 9.6 Kb/s to packet networks, ISDN, and POTS users. GSM is a
circuit-switched system that divides each 200 kHz channel into eight time slots. It
has been the backbone of the success in mobile telecoms over the last decade and it continues to evolve to meet new demands. One of GSM’s great strengths is its international
roaming capability, giving a potential 500 million consumers a seamless service in about
160 countries. The imminent arrival of 3G services is challenging operators to provide
consumer access to high-speed, multimedia data services and seamless integration with the
Internet. For operators now offering 2G services, GSM provides a clear way to make the
most of this transition to 3G.
HSCSD – High Speed Circuit Switched Data. Introduced in 1999, HSCSD is the final
evolution of circuit-switched data within the GSM environment. It enables data transmission
over a GSM link at rates up to 57.6 Kb/s. This is achieved by concatenating consecutive
GSM timeslots, each of which is capable of supporting 14.4 Kb/s. Up to four GSM timeslots
are needed for the transmission of HSCSD.
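The HSCSD figures above are simple arithmetic: each GSM timeslot carries 14.4 Kb/s, and HSCSD concatenates up to four of them. A back-of-envelope check:

```python
# Rate per concatenated GSM timeslot group, per the figures in the text.
SLOT_RATE_KBPS = 14.4  # one GSM timeslot
MAX_SLOTS = 4          # HSCSD concatenates up to four slots

for slots in range(1, MAX_SLOTS + 1):
    print(f"{slots} slot(s): {slots * SLOT_RATE_KBPS:.1f} Kb/s")
# The four-slot case gives 57.6 Kb/s, the quoted HSCSD maximum.
```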
iDEN – Integrated Digital Enhanced Network.
IMT-2000 – International Mobile Telecommunication 2000. This is an ITU (International
Telecommunications Union) standards initiative for 3G wireless telecommunications
services. It is designed to provide wireless access to global telecommunication infrastructure
through satellite and terrestrial systems, serving fixed and mobile phone users via public and
private telephone networks. IMT-2000 offers speeds of 144 Kb/s to 2 Mb/s. It allows
operators to access methods and core networks to openly implement their technologies,
depending on regulatory, market and business requisites. IMT-2000 provides high-quality,
worldwide roaming capability on a small terminal. It also offers a facility for multimedia
applications (Internet browsing, e-commerce, e-mail, video conferencing, etc.) and access to
information stored on PC desktops.
IS–54 – TDMA-based technology used by the D-AMPS system at 800 MHz.
IS–95 – CDMA-based technology used at 800 MHz.
IS–136 – The wireless operating standard for TDMA over AMPS. It was previously known
as D-AMPS (Digital AMPS). It was also known as US TDMA/IS-136 which was the first
digital 2G system adopted in the U.S.
PCS – Personal Communications Service. The PCS is a digital mobile phone network very
similar to cellular phone service but with an emphasis on personal service and extended
mobility. This interconnect protocol network was implemented to allow cellular handset
access to the public switched telephone network (PSTN). While cellular was originally
created for use in cars, PCS was designed for greater user mobility. It has smaller cells and
therefore requires a larger number of antennas to cover a geographic area. The term “PCS”
is often used in place of “digital cellular,” but true PCS means that other services like
messaging (paging, fax, email), database access, call forwarding, caller ID, and call waiting
are bundled into the service.
In 1994 the FCC auctioned large blocks of spectrum in the 1900-MHz band. This frequency
band is dedicated to offering PCS service access and is intended to be technology-neutral.
Technically, cellular systems in the U.S. operate in the 824-MHz to 894-MHz frequency
bands. PCS operates in the 1850-MHz to 1990-MHz bands. While it is based on TDMA,
PCS has 200-kHz channel spacing and eight time slots instead of the typical 30-kHz channel
spacing and three time slots found in digital cellular. PCS networks are already operating
throughout the U.S.
GSM 1900 is one of the technologies used in building PCS networks—also referred to
as PCS 1900, or DCS 1900. Such networks employ a range of technologies including GSM,
TDMA, and cdmaOne. Like digital cellular, there are several incompatible standards using
PCS technology. CDPD, GSM, CDMA, and TDMA cellular handsets can all access the PCS
network provided they are capable of operating in the 1900-MHz band. Single-band GSM
900 phones cannot be used on PCS networks.
PDC – Personal Digital Cellular. This is a TDMA-based Japanese digital cellular standard
operating in the 800-MHz and 1500-MHz bands. To avoid the lack of compatibility between
differing analog mobile phone types in Japan (i.e., the NTT type and the U.S.-developed
TACS type), digital mobile phones have been standardized under PDC. In the PDC standard,
primarily six-channel TDMA (Time Division Multiple Access) technology is applied. PDC,
however, is a standard unique to Japan and renders such phone units incompatible with
devices that adopt the more prevalent GSM standard. Nevertheless, digitalization under the
standard makes possible ever-smaller and lighter mobile phones which, in turn, has spurred
market expansion. As a result, over 93% of all mobile phones in Japan are now digital.
PHS – Personal Handy Phone System. Soon after PDC was developed, Japan developed
PHS. It is considered to be a low-tier TDMA technology.
TDMA – Time Division Multiple Access was the first U.S. digital standard to be developed.
It was adopted by the TIA in 1992 and the first TDMA commercial system began operating
in 1993. It operates at 800 MHz and 1900 MHz and is used in current PDC mobile phones.
It breaks voice signals into sequential, defined lengths and places each length into an
information conduit at specific intervals. It reconstructs the lengths at the end of the conduit.
GSM and US-TDMA standards apply this technique. Compared to the FDMA (Frequency
Division Multiple Access) applied in earlier analog mobile phones, TDMA accommodates a
much larger number of users. It does so by more finely dividing a radio frequency into time
slots and allocating those slots to multiple calls. However, a shortage in available channels
is anticipated in the near future so a more efficient system adopting CDMA is currently
being developed under IMT-2000.
UMTS – Universal Mobile Telephone Standard. It is the next generation of global cellular, which should be in place by 2004. UMTS proposes data rates of up to 2 Mb/s using a combination of TDMA and W-CDMA. It operates at about 2 GHz. UMTS is the European member
of the IMT-2000 family of 3G cellular mobile standards. The goal of UMTS is to enable
networks that offer true global roaming and can support a wide range of voice, data, and
multimedia services. Data rates offered by UMTS are 144 Kb/s for vehicles, 384 Kb/s for
pedestrians, and 2 Mb/s for stationary users.
W-CDMA – Wideband Code Division Multiple Access standard. Also known as UMTS, it
was submitted to the ITU for IMT-2000. W-CDMA includes an air interface that uses
CDMA but isn’t compatible for air and network interfaces with cdmaOne, cdma2000, or IS-136. W-CDMA identifies the IMT-2000 CDMA Direct Spread (DS) standard. W-CDMA is a
3G mobile services platform based on modern, layered network-protocol structure, similar
to GSM protocol. It was designed for high-speed data services and for internet-based
packet-data offering up to 2 Mb/s in stationary or office environments. It also supports up to
384 Kb/s in wide area or mobile environments. It has been developed with no requirements
on backward compatibility with 2G technology. At the radio base station infrastructure level,
W-CDMA makes efficient use of radio spectrum to provide considerably more capacity and
coverage than current air interfaces. As a radio interface for UMTS, it is characterized by
use of a wider band than narrowband CDMA systems such as cdmaOne. It also offers high transfer rates and increased system
capacity and communication quality by statistical multiplexing. W-CDMA efficiently uses
the radio spectrum to provide a maximum data rate of 2 Mb/s.
WLL – Wireless Local Loop. It is usually found in remote areas where fixed-line usage is
impossible. Modern WLL systems use CDMA technology.
WP-CDMA – Wideband Packet CDMA is a technical proposal from Golden Bridge
Technology that combines WCDMA and cdma2000 into one standard.
UWC-136 – UWC-136 represents an evolutionary path for both the old analog Advanced
Mobile Phone System (AMPS) and the 2G TIA/EIA-136 technologies. Both were designed
specifically for compatibility with AMPS. UWC-136 radio transmission technology proposes a low cost, incremental, evolutionary deployment path for both AMPS and TIA/EIA
operators. This technology can operate across frequency bands from 500 MHz to 2.5 GHz.
Standards and Consortia
Success of a standard is based on industry-wide support of that standard. Open standards are also key
for a vital communications industry. Some of the key standards bodies promoting next-generation
cellular technologies are discussed next.
Cellular/Mobile Phones
The Third Generation Partnership Project (3GPP) is a global collaboration agreement that brings
together several telecommunications standards bodies known as “organizational partners.” The
current organizational partners are ARIB, CWTS, ETSI, T1, TTA, and TTC.
Originally, 3GPP was to produce technical specs and reports for a globally applicable 3G mobile
system. This system would be based on evolved GSM core networks and radio access technologies
supported by the partners. In this case, Universal Terrestrial Radio Access (UTRA) for both Frequency Division Duplex (FDD) and Time Division Duplex (TDD) modes. The scope was
subsequently amended to include the maintenance and development of the GSM technical specifications and reports including evolved radio access technologies such as GPRS and EDGE.
The Third Generation Partnership Project 2 (3GPP2) is a 3G telecommunications standards-setting
project comprised of North American and Asian interests. It is developing global specifications for
3GPP2 was born out of the International Telecommunication Union’s IMT-2000 initiative which
covered high speed, broadband, and Internet Protocol (IP)-based mobile systems. These systems
featured network-to-network interconnection, feature/service transparency, global roaming, and
seamless services independent of location. IMT-2000 is intended to bring high-quality mobile
multimedia telecommunications to a worldwide market by:
increasing the speed and ease of wireless communications.
responding to problems caused by increased demand to pass data via telecommunications.
providing “anytime, anywhere” services.
The UMTS Forum
The UMTS Forum is the only body uniquely committed to the successful introduction and development of UMTS/IMT-2000 3G mobile communications systems. It is a cross-industry organization
that has more than 260 member organizations drawn from mobile operator, supplier, regulatory,
consultant, IT, and media/content communities worldwide.
Some UMTS objectives are:
Promote global success for UMTS/3G services delivered on all 3G system technologies
recognized by the ITU.
Forge dialogue between operators and other market players to ensure commercial success
for all.
Present market knowledge to aid the rapid development of new services.
Geography-Dependent Standards Bodies
Standards bodies govern the technical and marketing specifications of cellular and telecommunication interests. The European Telecommunications Standards Institute (ETSI) is defining a standard
for 3G called the UMTS. The Japan Association of Radio Industries and Business (ARIB) primarily
focuses on WCDMA for IMT-2000. In Canada, the primary standards development organization is
the Telecommunications Standards Advisory Council of Canada (TSACC).
The American National Standards Institute (ANSI) is a U.S. repository for standards considered to
be semi-permanent. The United States Telecommunications Industry Association (TIA) and T1 have
presented several technology proposals on WCDMA, TDMA UWC-136 (based upon D-AMPS IS-136),
and cdma2000 (based upon IS-95). ANSI accredits both TIA and T1. The primary standards working
groups are TR45 (Mobile and Personal Communications 900 and 1800 Standards) and TR46 (Mobile
and Personal Communications 1800 only standards).
Standards development organizations in Asia include the Korean Telecommunications Technology Association (TTA) and the China Wireless Telecommunications Standards Group (CWTS).
Market Data
The Yankee Group predicted that global handset unit sales would exceed 498.5 million in 2003,
542.8 million in 2004, and 596 million in 2005. New network technologies will drive the introduction of new services and applications that will create demand among consumers for the latest cell
phone models. The introduction of new network features will drive sales for devices and personalized
applications, and new data services will drive increased demand.
Other analysts predict 1.4 billion subscribers by the end of 2004, and 1.8 billion by the end of
2006. This compares to the estimate of 60 million cellular subscribers in 1996.
Some of the developing countries are seeing faster growth in the wireless subscriber base than
developed countries. This is because the digital cellular access provides better quality and cheaper
service compared to landline phones.
Today the market is dominated by 2G technologies such as GSM, TDMA IS-136, cdmaOne, and
PDC. This is changing to a market that will be dominated by 2.5G and 3G technologies. It is believed that by 2006, 56% of subscribers—nearly 1 billion—will be using 2.5G or 3G technologies.
GPRS terminals are forecast to be the largest 2.5G market—nearly 625 million terminals will be
shipped by 2004. For the same year, cellular terminals based on 3G will ship nearly 38 million units.
Market Trends
Cell phones provide an incredible array of functions and new ones are being added at a breakneck
pace. You can store contact information, make task lists, schedule appointments, exchange e-mail,
access the Internet, and play games. You can also integrate other devices such as PDAs, MP3 players,
and GPS receivers. Cell phone technology will evolve to include some of the trends discussed next.
Smart Phones—Converging Cellular Phones, Pagers and PDAs
A smart phone combines the functions of a cellular phone, pager, and PDA into a single, compact,
lightweight device—a wireless phone with text and Internet capabilities. It can handle wireless
phone calls, hold addresses, and take voice mail. It can also access the Internet and send and receive
e-mail, page, and fax transmissions. Using a small screen located on the device, users can access
information services, task lists, phone numbers, or schedules.
The smart phone market is growing at a phenomenal rate. As the mobile workforce evolves, a
premium is being placed on mobility and “anytime, anywhere” data access. In 2002, analysts
predicted nearly 400 million cell phone subscribers worldwide. Since that time, smart phone production rose 88% to meet increased demand.
Although smart phones have been around since 1996, they are just coming of age. As wireless
technology takes off, manufacturers have begun vying for a slice of the smart phone pie. Telecom
majors Nokia, Qualcomm, and Motorola were among the earliest contenders offering Web-enabled
phones. Now, new companies are dedicating themselves wholly to the production of smart phones.
Smart phone technology has also led to a spate of joint ventures. The latest is an alliance between
software giant Microsoft and telecom company LM Ericsson. The alliance calls for joint development of mobile phones and terminals equipped with Microsoft’s Mobile Explorer micro-browser and
mobile e-mail for network operators. Microsoft will also develop a more sophisticated Mobile
Explorer for smart phones based on an optimized version of Windows PocketPC.
So far, the primary obstacle to wide adoption of smart phones has been a lack of content designed for their tiny screens. There is also a lack of support for color or graphics. However, over the
past year several content providers (AOL-Time Warner, Reuters, etc.) and Internet portals have
announced efforts to close the gap. Early last year, CNN and Reuters both announced content
delivery plans aimed at smart phones. Soon web sites with location- and topic-specific information
will be available from any WAP (Wireless Application Protocol)-enabled smart phone. The sites will
deliver proximity-based listings of real-world stores. This will provide smart phone users with
relevant, local merchant information such as store hours, specials, and parking information.
Conflicting wireless standards have so far limited the growth of the smart phone segment in the
U.S. Many vendors still employ analog cellular technology which does not work with smart phones.
Although vendors are trying to move to digital technology, they are unable to reach a consensus on
which specific technology to use. In Europe and Asia, GSM is the digital cellular technology of
choice. However, in the U.S. GSM is just one of several technologies vying for acceptance. Until
standardization is achieved, smart phones may be slow to catch on.
Video or Media Phones
Following e-mail, fax-mail, and voice-mail, video-mail is seen as the evolutionary step for cellular
terminals. Sending and receiving short text messages is currently one of the fastest growing uses for
mobile phones. Within the next few years it will also be used to send video messages as 3G mobile
phones become available.
First generation videophones are just coming to the market. These PDA-cell phone convergence
devices incorporate a small camera in the terminal that can record short visual images to be sent to
other compliant wireless devices. Handsets in the future will include voice, video, and data communications. They will have full-PDA capabilities including Internet browsing, e-mail, and handwriting recognition.
Imaging Phones
The imaging phone can carry strong emotional attachment because it allows the sharing of experiences through communication. It facilitates person-to-person or person-to-group multimedia
communication based on self-created content. It includes excellent messaging and imaging, an easy-to-use interface, and a compact and attractive design. It is fun and can be personalized for individual users.
Bluetooth-Enabled Cell Phones
Bluetooth is a personal area networking standard that is inexpensive and power-friendly. It offers
short-range radio transmission for voice and data. It uses frequency-hopping technology on the free,
2.4 GHz industrial, scientific, and medical (ISM) band. It was developed to allow a user to walk into
a room and have a seamless, automatic connection with all other devices in that room. These devices
could include cellular phones, printers, cameras, audio-visual equipment, and mobile PCs. Bluetooth
enables mobile users to easily connect a wide range of computing and telecommunications devices
without the need for cables. Also, it connects without concern for vendor interoperability. Other
Bluetooth applications include synchronization, email, Internet and Intranet access, wireless headsets, and automobile kits for hands-free communication while driving. Bluetooth radios, which
theoretically are placed inside every connected device, are about the size of a quarter and can
transfer data at 728 Kb/s through walls and around corners. The Bluetooth standard was established
by the Bluetooth special interest group (SIG) in July 1999 by founding companies Intel, IBM,
Toshiba, Ericsson, and Nokia. Today it is supported by over 2000 companies.
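The frequency-hopping technique mentioned above can be sketched as follows. Bluetooth actually derives its hop sequence from the master device's clock and address; here a seeded pseudo-random generator stands in for that (an assumption, for illustration only):

```python
# Hedged sketch of frequency hopping on the 2.4 GHz ISM band.
# Bluetooth divides the band into 79 channels of 1 MHz each and hops
# among them; a seeded PRNG is used here in place of the real
# clock/address-derived sequence.
import random

BASE_MHZ = 2402  # first of the 79 1-MHz channels
CHANNELS = 79

def hop_sequence(seed, count):
    """Return `count` channel frequencies (MHz) for a given seed."""
    rng = random.Random(seed)  # both ends seeded alike stay in step
    return [BASE_MHZ + rng.randrange(CHANNELS) for _ in range(count)]

print(hop_sequence(seed=0xA5, count=5))  # five frequencies in 2402-2480 MHz
```

Because a given seed always yields the same sequence, two devices sharing it hop to the same channel at the same time, which is the essence of how paired radios stay connected while dodging interference.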
Cellular terminals represent the biggest single opportunity for Bluetooth in its development.
Initial development of Bluetooth technology allows for a transmission range of only 10 m. But its
next stage of development will allow transmission over a distance up to 100 m. The penetration of
Bluetooth into cellular terminals is forecast to rise from 1% in 2000 to 85% by 2005. This corresponds to an increase in shipments of Bluetooth-enabled cellular terminals from 4 million in 2000 to
over 900 million in 2005.
Mobile phones represent the single largest application for Bluetooth technology. They allow
Bluetooth-enabled headsets and handsets to communicate without the need for wires. This will
enable drivers to safely use their phones while driving, for example. And from a health perspective,
the phones will be located away from the brain, a subject which has generated much discussion over
the last few years.
Mobile workers are a key target market for all Bluetooth products. Bluetooth will enable mobile
workers to link their PDAs and portable computers to the Internet and to access their mobile terminals without the need for a cable link. However, mobile workers today represent only a small portion
of the total cellular subscriber base. Although mobile workers may drive early sales of Bluetooth
cellular phones, this application alone will not drive massive Bluetooth cellular terminal application
volumes. For high volumes to be achieved, users must switch to communication with headsets or by
a number of the other applications.
Cordless terminals could be replaced by data-function terminals that can link to cellular networks. They could interface with an access point in the home to route calls via the PSTN network.
Bluetooth could also link smart phones into the PSTN via conveniently located public access points
for accessing email and other Internet services. Use of Bluetooth local radio standard and infrared
links will allow mobile phones to communicate with shop terminals. This could lead to a revolution
in shopping, going far beyond the replacement of credit cards.
Wireless LANs—Friend or Foe
Wireless LANs (WLANs) provide high-bandwidth, wireless connectivity for a corporate network.
They seamlessly provide a connection to the Internet and the ability to access data anywhere,
anytime for the business user. Wireless LANs combine data connectivity with user mobility to
provide a good connectivity alternative to business customers. WLANs enjoy strong popularity in
vertical markets such as healthcare, retail, manufacturing, and warehousing. These businesses enjoy
productivity gains by using handheld terminals and notebook PCs to transmit information to centralized hosts for processing. With their growing popularity, cellular phone manufacturers and operators
wonder if WLANs will eat into the profit potential of flashier 3G mobile carriers.
The growing ubiquity of WLANs will likely cause wireless carriers to lose nearly a third of 3G
revenue as more corporate users begin using WLANs. The growth of public WLAN “hotspots” in
airports, hotels, libraries, and coffee shops will increase WLAN revenue to over $6 billion by 2005
with over 15 million users. This compares to over $1 million earned by WLANs in 2000. Over 20
million notebook PCs and PDAs are shipped every year, showing the potential for WLAN networks.
They pose a real threat to cellular carriers. Fueling the increased interest in WLANs is the growing
availability of fast Internet access and the increasingly common use of wireless networks in homes
and offices. In comparison to the rise of WLANs, the handheld market is in a free-fall and mobile
carriers are stumbling. In addition, the slow introduction of intermediate 2.5G mobile technology is
aiding a 10% decline in cell phone sales.
WLANs are an attractive alternative to mobile 3G because of their convenient installation and
ease of use. And they enable consumers to use the larger screens on their notebook PCs for better
viewing. In contrast to the reported $650 billion spent worldwide by carriers to get ready for 3G,
setting up a WLAN hotspot requires very little. An inexpensive base station, a broadband connection,
and an interface card using the WLAN networking standard are all that’s needed. And WLAN cards
are readily available for laptops, PDAs, and smart phones. However, the two technologies use
different radio frequencies and are targeting different markets. Where 3G is mostly phone-based and
handles both voice and data, WLAN is purely data-driven.
Although carriers have invested heavily in 3G, they remain more financially sound. WLAN operators do not have
the capital to compete with Cingular, Sprint, NTT DoCoMo, British Telecom, or AT&T. Examples
such as the failure of WLAN provider MobileStar (VoiceStream recently purchased MobileStar) do
not bode well for the technology. It is unlikely that WLANs will expand outdoors beyond a limited
area, such as a corporate campus. WLANs are a potentially disruptive technology. While network
equipment providers will see a huge demand of WLAN products, WLAN network operators must
determine ways to charge subscribers.
WLANs still face technical and regulatory issues. For instance, there are already potential
interference issues between IEEE 802.11b and Bluetooth. These issues will only worsen as the
technologies become more widely deployed. Also, WLAN security needs improvement. The Wired
Equivalent Privacy (WEP) standard is the encryption scheme behind 802.11 technology. However,
WEP encryption must be installed manually and has been broken. Hence, next-generation WLAN
specifications are implementing higher security standards. While carriers will survive WLAN
competition, the cost of recent auctions of 3G airwaves may lead to WLAN encroachment on
potential profit from 3G services.
Rather than being viewed as a threat, WLAN could help introduce consumers to the wireless
world. Today’s WLANs are about five times faster and more accurate than 3G will be. It is unlikely
that a 3G network could ever be cost-effectively upgraded to speeds on par with WLANs. To serve
demanding applications, 3G wireless network operators need public wireless LANs. 3G operators
and public WLAN operators must start working together to ensure their mutual success. This
cooperation will enable public WLANs to obtain sufficient coverage to be truly useful. Wireless
operators need the WLANs to offload heavy indoor traffic. Like wireless operators, WLAN operators
need subscription control, roaming agreements, and centralized network management. There are
significant e-commerce opportunities for WLAN operators, most notably location-dependent,
targeted promotions. Public WLANs could serve as a distribution point for multimedia. In the future,
the market will see an introduction of products that switch between 3G and WLAN technologies to
provide seamless coverage.
From a cost benefit viewpoint the use of the combined technologies could gain traction. But
from the technology and market perspective the implementation of such convergence seems unlikely.
To begin with, 3G offers wide area mobility and coverage. Each base station typically covers a 5-10
mile radius and a single wide area cell is typically equivalent to 10,000 WLAN cells. In contrast,
WLAN offers mobility and coverage across shorter distances—typically less than 150 feet. The two
technologies address different market needs and are more likely to share the nest than to compete for
it. While WLANs offer many benefits at a local level, they will never match the benefits of wireless
access to the Internet since they were never intended to. There is also the question of air interfaces—
while Bluetooth and some WLAN technologies share the 2.4 GHz and 5 GHz frequency bands, 3G
does not. The question of access arises. Unless wireless manufacturers develop devices that unite 3G
and WLAN and combine their air interfaces, convergence will never happen.
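As a rough sanity check on the coverage figures above (assuming idealized circular cells and ignoring terrain and capacity effects), the area ratio can be computed directly. At the 5-mile end of the quoted range the result is in the tens of thousands, the same order of magnitude as the 10,000 WLAN cells cited:

```python
# Order-of-magnitude check: how many 150-ft WLAN cells cover the same
# area as one 5-mile 3G cell? Idealized circles; pi cancels in the ratio.
cell_radius_ft = 5 * 5280   # 5-mile 3G cell radius, in feet
wlan_radius_ft = 150        # typical WLAN reach from the text

area_ratio = (cell_radius_ft / wlan_radius_ft) ** 2
print(f"one 3G cell covers roughly {area_ratio:,.0f} WLAN cells")
```

This simple geometry is why blanket outdoor WLAN coverage would require an impractical number of access points, even before backhaul costs are considered.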
Whatever the result, WLANs have a bright future. They offer a range of productivity, convenience, and cost advantages over traditional wired networks. Key players in areas such as finance,
education, healthcare, and retail are already benefiting from these advantages. With the corporate
success of IEEE 802.11b or WiFi, the WLAN market is slowly advancing. WiFi operates at 2.4 GHz
and offers as much as 11 Mb/s data speed. With the arrival of next-generation 5 GHz technologies
such as HiperLAN2 and IEEE 802.11a, faster (up to 54 Mb/s) and better service will be offered.
Consumers, corporate travelers, and vertical market employees will see a better alternative in
services at home and in public areas. WLANs have the capability to offer a true mobile broadband
experience for Internet access, entertainment, and voice. Mobile operators may not be able to offer
this for a while. 3G mobile phone operators will need public WLANs to offload heavy indoor traffic
from their lower speed, wide area networks. Though WLANs may pose a threat to 3G revenue, 3G is
still some years away and WLANs are already available. The longer the delay in 3G introduction, the
greater will be the usage of WLANs.
A comparison of wireless technologies such as Bluetooth, WLANs, and cellular reveals room for
each standard. Bluetooth provides a low-cost, low-power, wireless cable replacement technology.
WLANs provide a good solution for bandwidth-intensive distribution of data and video. However,
WLANs will require too many base stations or access points to provide similar coverage as cellular.
Cellular technologies will focus on broad geographical coverage with moderate transfer rates for
voice and data services. Personal, local, and wide area networks are well covered by the three technologies.
Touch Screen and Voice Recognition as Dominant User Interfaces
The alphanumeric keypad has been the most dominant user interface. With the emergence of the
wireless Internet and the issue of cellular handset size, manufacturers must provide a way to incorporate maximum screen display with minimal terminal size.
It is predicted that voice recognition will become the dominant user interface in the future. In the
interim a touch screen is projected to co-exist with voice to provide functions that are not currently
possible with voice recognition alone. Touch screen has been widely available for a few years, but
has only recently been used on mobile phones. With the right software interface it allows a novice to
easily interact with a system.
A number of products are coming to market that have a touch screen display and a voice recognition address book. This will allow cellular handset manufacturers to eliminate alphanumeric
keypads. Inputs to the handset will be via touch screen, voice recognition, or a combination of both.
This will maximize the screen area for data applications and optimize the handset size.
Once reserved for specialized applications, voice input technology is now being developed for
the mobile and commercial markets. Voice activation is projected to become standard in the next
generation of terminals. As phones shrink in size their keyboarding capabilities are becoming too
limited for useful web surfing. Voice recognition solves this problem.
Dual Band vs. Dual Mode
Travelers will probably want to look for phones that offer dual band, dual mode, or both:
Dual band – A dual band phone can switch frequencies. It can operate in both the 800 MHz
and 1900 MHz bands. For example, a dual band TDMA phone could use TDMA services in
either an 800-MHz or a 1900-MHz system.
Dual mode – “Mode” refers to the type of transmission technology used. A phone that
supported AMPS and TDMA could switch back and forth as needed. One of the modes must
be AMPS to give you analog service where digital support is unavailable.
Dual band/Dual mode – This allows you to switch between frequency bands and transmission modes as needed.
Changing bands or modes is done automatically by phones that support these options. Usually
the phone will have a default option set, such as 1900 MHz TDMA. It will try to connect at that
frequency with that technology first. If it supports dual bands, it will switch to 800 MHz if it cannot
connect at 1900 MHz. If the phone supports more than one mode, it will try the digital mode(s) first,
and then switch to analog.
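The fallback order described above can be sketched as a simple preference list. The mode/band combinations and their ordering here are illustrative assumptions, not any vendor's actual firmware logic:

```python
# Toy sketch of dual band/dual mode fallback: digital modes are tried
# first at the default band, then the other band, then analog last.
PREFERENCE = [("TDMA", 1900), ("TDMA", 800), ("AMPS", 800)]  # assumed order

def connect(available_networks):
    """Return the first supported (mode, band) present in the area."""
    for mode, band in PREFERENCE:
        if (mode, band) in available_networks:
            return mode, band
    return None  # no service at all

print(connect({("AMPS", 800)}))                 # analog-only coverage area
print(connect({("TDMA", 800), ("AMPS", 800)}))  # digital at 800 MHz wins
```

The key point the sketch captures is that analog sits at the bottom of the list: it is the safety net, used only when no digital combination is available.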
Some tri-mode phones are available; however, the term can be deceptive. It may mean that the
phone supports two digital technologies such as CDMA and TDMA, as well as analog. It can also
mean that it supports one digital technology in two bands and also offers analog support. A popular
version of the tri-mode phone has GSM service in the 900-MHz band for Europe and Asia, and the
1900-MHz band for the United States. It also has analog service.
A multi-mode terminal is a cellular phone that operates with different air interfaces. It offers wider
roaming capabilities and it can use different networks under different conditions for different applications.
There is no single cellular standard available in every country worldwide, and certain standards
are only used in a small number of countries. When travelling to foreign countries, cellular subscribers often find that there are no cellular networks on which their terminals work. For example, a
cdmaOne user from the U.S. could not use a cdmaOne terminal on any networks within the European
Union. In the EU there are no cdmaOne networks. Similarly, a Japanese user could not use a PDC
terminal except in Japan because there are no PDC networks outside Japan. In many countries such
as China, the U.S., and Russia, individual digital cellular networks do not provide nationwide
coverage. However, combinations of networks using different standards do give nationwide cellular
coverage. Terminals that can operate under a number of different standards have to be used to
address this problem.
The next generation of networks will reveal the availability of services and features in specific
areas. Multi-mode terminals will allow adoption of higher data rates where they can be accessed.
When operators begin to upgrade their networks for faster data rates and increased services, this may
not be done across the operators’ whole service area. The initial investment to do so would be too
high. Operators will probably offer the new network technologies in areas of high cellular subscriber
penetration first. This will enable them to offer the advanced services to as wide a customer base as
possible with the smallest investment. This is expected to occur with the change from 2G to 2.5G and
from 2.5G to 3G services.
The ability to offer connection to different networks under different conditions for different
applications is the driving force for multi-mode terminals. It will allow users from one network (e.g.,
2G) to have higher data rates, extra features, and more services in another network (e.g., 3G).
Multi-mode combinations already exist in the mass market. CdmaOne/AMPS and TDMA/AMPS
combinations are present in the U.S. and PDC/GSM combinations are present in Japan. The European
market has not seen many multi-mode terminals because of the omnipresence of GSM between
European nations. With the launch of 2.5G and 3G networks, future dual mode phone markets are
projected to consist of 2G/2.5G, 2G/3G, and 2.5G/3G technologies.
Positioning Technologies
Mobile Internet location-based services are key to unlocking the potential of non-voice services.
Without location, web services are more suited to the PC than a mobile phone. Location awareness
and accurate positioning are expected to open up a vast new market in mobile data services. It has
been demonstrated with GPS that future localization of information with up to four meters of
accuracy will be possible with handsets. Positioning systems will enable m-commerce and mobile
Internet. It will also enable the location of 911 calls from a mobile phone. At the same time, it is
unclear how to define a coherent strategy for adding location to consumer and business services.
Positioning technologies are not new, but they have not been quickly accepted by the cellular
industry. This has been due mainly to limitations with infrastructure technology or the cost of
deployment. Also, there has not been high public demand for such technologies considering the high
price for terminals that adopt such technology.
Wireless Data
Wireless data capability is slowly being added to mobile appliances. But the wireless networks to
which the devices are connecting and the speeds at which those connections are made will determine
what services may be offered.
Today’s wireless networks can be divided into analog and digital systems. Digital systems
provide cheaper and more advanced voice and data transmission. Currently the world’s wireless
networks are divided into a number of incompatible standards such as GSM, CDMA, and TDMA. As
a result there is no unified global network. Network carriers will not be able to implement a network
standard compatible across geographies before the introduction of 3G wireless (1-2 Mb/s throughput). This technology is expected to roll out in the next 2-3 years in Europe and Asia and in the next
3-4 years in the United States.
Digital systems can carry data using circuit-switched technology or packet-switched technology.
Circuit-switched technology establishes a dedicated link between sender and receiver to provide
better data transmission, but cannot be shared by others. Packet-switched technology is always online
and sends and receives data in small packets over a shared link. Because of competitive issues, U.S.
wireless carriers have rolled out incompatible wireless networks. These make it difficult for users to
roam and for device makers to offer globally compatible devices.
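The packet-switched model described above can be illustrated with a toy example: two senders' data are split into small packets that interleave on one shared link, rather than each holding a dedicated circuit open. Packet sizes and the framing tuple here are invented for illustration:

```python
# Minimal sketch of packet switching: data is split into small packets
# tagged (sender, sequence, chunk) so many users can share one link.

MTU = 8  # tiny payload size, chosen for readability

def packetize(sender, data, mtu=MTU):
    """Split data into (sender, seq, chunk) packets for a shared link."""
    return [(sender, seq, data[i:i + mtu])
            for seq, i in enumerate(range(0, len(data), mtu))]

# Two users' packets interleave on the same link -- no dedicated circuit.
link = packetize("A", b"hello wireless") + packetize("B", b"short msg")
link.sort(key=lambda p: p[1])  # naive interleave by sequence number

# The receiver reassembles one sender's stream from its tagged packets.
reassembled = b"".join(chunk for s, _, chunk in link if s == "A")
print(reassembled)  # b'hello wireless'
```

Circuit switching, by contrast, would reserve the whole link for sender A until the call ended, which is simpler but wastes capacity during idle moments.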
The following sections show how networks have approached wireless data services.
Carriers using CDMA and TDMA networks who want to leverage existing wireless network
assets offer wireless data with speeds between 9.6 Kb/s and 14.4 Kb/s.
Carriers with GSM networks have developed their own standard, GPRS, for wireless packet-switched data transmission with speeds of up to 115 Kb/s.
Terrestrial data networks such as American Mobile’s DataTAC system, BellSouth Wireless
Data’s Mobitex system, and Metricom’s Ricochet system use packet data for wireless data
transmission between 19.2 Kb/s and 128 Kb/s.
Packet data service in the U.S. currently offers the best wireless data coverage and speed. Packet
data carriers include American Mobile Satellite and BellSouth Mobile. On the horizon is the availability of evolved 2G and 3G wireless data service that offers broadband speeds over wireless
networks. 3G would require the rollout of a completely new network. This might be possible in small
regions in Europe and Asia, but it would be an enormous undertaking in the U.S. A number of 2.5G
services may be rolled out over the next three years in the U.S. that allow 128 Kb/s transfer speeds.
This would enable wireless streaming audio, video, and multimedia content.
The GPRS, mentioned earlier, is of special interest. It is a 2.5G technology that effectively
upgrades mobile data transmission speeds of GSM to as fast as 115 Kb/s from current speeds of 9.6
Kb/s to 38.4 Kb/s. The higher speed is expected to be especially useful for emerging PDA applications. The full commercial launch of GPRS services occurred in mid-2001, which was concurrent
with the launch of a new range of GPRS-enabled phones and PDAs.
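To put the quoted rates in perspective, a quick worked comparison helps. The rates come from the text; the 100 KB payload size is an assumed example:

```python
# Time to transfer a 100 KB payload at the data rates quoted above.
rates_kbps = {
    "GSM circuit data (9.6 Kb/s)": 9.6,
    "GSM upper rate (38.4 Kb/s)": 38.4,
    "GPRS (115 Kb/s)": 115.0,
}
payload_kilobits = 100 * 8  # 100 KB expressed in kilobits

for name, rate in rates_kbps.items():
    print(f"{name}: {payload_kilobits / rate:.1f} s")
```

A transfer that takes well over a minute at 9.6 Kb/s drops to a few seconds under GPRS, which is why the upgrade mattered so much for PDA-style applications.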
Connecting Wireless Networks and the Internet
Wireless data transmission connects the mobile user to the carrier’s network. To connect that network
to data located on the Internet or with an enterprise, data networking protocols must be integrated
over the wireless networks, or a gateway connecting the wireless and external networks must be
built. Most carriers have chosen to use Internet protocols (TCP/IP) as the basis for connecting
wireless devices with the Internet and with other external networks.
There are currently three ways to connect mobile devices to data residing on the Internet or to
external networks:
One-Way Wireless Access – Data such as brief messages and alerts can be sent out in one
direction from the network to the device.
Serial Port or USB – Data are downloaded via serial port or USB to a personal consumer
device during synchronization with a connected device. The user can then access, manipulate, and repost to the network or Internet upon the next synchronization.
Two-Way Wireless Messaging – Data can be sent wirelessly between a user’s mobile
consumer device and another device via the Internet or a network. This allows users to
seamlessly access and manipulate data residing on the network via their mobile devices.
While the first two are more commonly used, two-way wireless will be the most compelling
method over the next few years and will drive the creation of many applications and services that
leverage this mobile connectivity.
The Wireless Application Protocol (WAP) is an open, global specification that empowers wireless
device users to instantly access information and services. It will enable easy, fast delivery of information and services to mobile users on high-end and low-end devices such as mobile phones, pagers,
two-way radios, and smart phones. WAP works with most wireless network technologies.
WAP is both a communications protocol and an application environment. It can be built on any
operating system including PalmOS, EPOC, Windows PocketPC, FLEXOS, OS/9, and JavaOS. It can
also provide service interoperability between different device families.
WAP includes a micro browser and a network server. With minimal risk and investment, operators can use WAP to decrease churn, cut costs, and increase revenues. Technology adoption has been
slowed by the industry’s lack of standards that would make handheld products Internet compatible.
WAP provides a de facto global open standard for wireless data services.
WAP promises to do for wireless devices what HTML did when it transformed the Internet into
the World Wide Web. The protocols are similar and will allow wireless devices to be a simple
extension of the Internet. WAP was defined
primarily by wireless equipment manufacturers and has support from all major, worldwide standards
bodies. It defines both the application environment and the wireless protocol stack.
The WAP standard allows Internet and database information to be sent direct to mobile phone
screens. It is an open standard so information can be accessed from any WAP-compliant phone. It
uses the Wireless Mark-up Language (WML) to prepare information for presentation on the screen.
The graphics are simplified and the phones can call up existing Internet content. And like HTML,
WAP includes links to further information.
m-Commerce
Cellular operators are increasingly looking to m-Commerce (Mobile Electronic Commerce) to attract
new business and maintain existing business. M-Commerce is forecast to progress from the e-commerce services available today (via fixed data line services such as the Internet) to mobile data
services such as WAP.
Alliances among operators, content providers, financial institutions, and service administrators
will build this market. Examples of initial services being offered are links to cinema Internet sites for
purchasing movie tickets and a platform allowing purchases from any site on the Internet. Mobile
devices will also be used to handle short-distance transactions to point-of-sale machines.
i-mode
Japan is the first country to offer mobile Internet through its i-mode service. I-mode is a packet-based mobile Internet service that allows users to connect to the Internet without having to dial up
an Internet service provider. Considering its over 25 million subscribers, it is surprising that this has
been achieved at the slow network speed of 9.6 Kb/s. It is text-based with limited capabilities to send
low-quality still images. The service is fairly inexpensive considering that the network is packet
based, meaning charges are not based on airtime but on volume of data transmitted or received.
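This volume-based charging model can be sketched in a few lines. The 0.3-yen rate and 128-byte packet size used here are illustrative assumptions, not official tariff data.

```python
# Volume-based billing sketch: charge per packet of data, not per minute of
# airtime. Rate and packet size are illustrative assumptions only.
def packet_charge_yen(bytes_transferred: int,
                      yen_per_packet: float = 0.3,
                      packet_bytes: int = 128) -> float:
    """Charge for a transfer, billed per packet (partial packets count whole)."""
    packets = -(-bytes_transferred // packet_bytes)  # ceiling division
    return packets * yen_per_packet

# A 2 KB news page costs the same whether it arrives in 2 seconds or 2 minutes.
print(packet_charge_yen(2048))
```

The point of the model is that a slow network does not inflate the bill, which helped make i-mode affordable despite its 9.6 Kb/s speed.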
In addition to voice and e-mail, there is a plethora of information and content services available
such as retrieving summarized newspaper texts, making travel arrangements, finding an apartment or
a job, playing games, or downloading ringing tones for mobile phones. Mobile e-mail is a hit with
Japanese teenagers who can use truncated text to converse with each other while in earshot of parents
or teachers!
The service is based on compact HTML, which has successfully promoted and differentiated i-mode from WAP-based services. Since it is a subset of HTML, users with a homepage can reformat
their web pages for i-mode access.
There are a number of factors that contribute to i-mode’s success in comparison to other services
in Japan. It is easier to use and has better quality content. NTT DoCoMo, i-mode’s service provider,
advertises it aggressively. Market factors also contributed, such as the demand for Internet access
services, timing, and the proliferation of short message service (SMS) and e-mail.
I-mode is also very entertainment oriented—entertainment content accounts for 55% of its
total access. This includes limited-functionality games, musical tunes for the phone’s incoming ring,
daily horoscopes, and cartoon characters for the phone’s display.
Comparatively low value-added content accounts for over 50% of total access with the majority
of users in their teens and 20s. This indicates that i-mode demand is still in a boom and has yet to peak.
As other functionality such as file attachments, faster network speeds, and e-commerce transactions are added, new services and applications will be developed. If prices stay stable and usage
remains simple, i-mode will be used for more value-added applications by a broader group of users.
Technologically, i-mode is not particularly revolutionary but its marketing strategy is. By using a
subtext of HTML, NTT DoCoMo has enabled content providers to readily reformat their web sites
for i-mode access. This has undoubtedly contributed to rapid i-mode content development. Also,
NTT DoCoMo offers billing and collection services to content providers. And, Internet access is easy
and the packet-based fee structure is reasonable. Currently, NTT DoCoMo is in discussion with U.S.
operators with a view toward launching i-mode mobile Internet services in America.
E-mail and SMS (Short Messaging Service)
SMS allows short text messages to be exchanged between cellular phones. E-mail service basically
comes standard with a web-enabled cellular terminal. To access e-mail the terminal and network
must be able to connect and deliver e-mail from a standard POP account. Although SMS has been
available for a few years, the service available with this technique is very limited. It has a 160-character text limit and transfers data only between cellular terminals. E-mail via a cellular phone,
however, enables communication with other cellular phone users (as with SMS) and with anyone
who has Internet access via cellular phone, personal computer, television, PDA, etc.
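The 160-character limit works roughly as sketched below. A single SMS carries up to 160 characters of the 7-bit GSM alphabet; longer texts are sent as concatenated segments of 153 characters each, because a small header in every part links the pieces back together. The helper name is hypothetical.

```python
# Sketch of SMS segmentation with the standard single-message and
# concatenated-segment limits for the 7-bit GSM alphabet.
def segment_sms(text: str, single_limit: int = 160, concat_limit: int = 153) -> list:
    """Split text into the SMS segments needed to carry it."""
    if len(text) <= single_limit:
        return [text]
    return [text[i:i + concat_limit] for i in range(0, len(text), concat_limit)]

print(len(segment_sms("x" * 160)))  # fits in a single message
print(len(segment_sms("x" * 161)))  # spills into two concatenated segments
```

E-mail, by contrast, has no such hard segment limit, which is one reason it became the richer of the two services.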
Initial deployment of the service in Japan has proved extremely popular. If this trend continues
in Europe, the demand for e-mail through cellular phones will be huge. And as e-mail becomes more
sophisticated with the ability to send pictures and video clips via handsets, cellular phone email
service will continue to gain in popularity.
Other Cellular Phone Trends
Color LCDs have started to appear in the market.
With handset prices continuing to fall, the disposable cellular handset market is slowly
becoming a reality.
Handsets have reached the ideal size.
Battery life has continued to improve with higher standby times.
Integration of an MP3 player
FM and digital radio capability
Embedded digital camera capability
Smart card readers for online purchases and verification
TV reception
Enhanced multimedia messaging
Entertainment (ring tones, logos, etc.)
Vertical markets (logistics and sales force automation)
Satellite handset phone systems could impact the cellular handset market. Satellite phone
systems can transmit at very high rates, which will offer satellite phone users capabilities
similar to 3G cellular—video, conferencing, etc. The failure of Iridium LLC, which filed
for bankruptcy in 1999 with only 50,000 subscribers, illustrates the limited success and bad
press satellite phones have seen. However, other satellite phone service operators are looking to
reach about 10 million users by 2006.
Components of a Mobile/Cellular Handset
Cell phones are extremely intricate devices. Modern digital cell phones can process millions of
calculations per second to compress and decompress the voice stream.
The typical handset contains the subsystems described in the next sections.
DSP/Baseband Processor and Analog codec subsection – The digital baseband and analog
section includes:
• Channel and speech encoding/decoding
• Modulation/demodulation
• Clock
• D/A and A/D conversion
• Display
• RF interfacing.
This section performs high-speed encoding and decoding of the digital signal and manages
the keypad and other system-level functions. For analog voice and radio signals to become
digital signals that the DSP can process, they must be converted in the analog baseband
section. The voice and RF codecs perform analog-to-digital and digital-to-analog conversion
and filtering. The handset processor is responsible for functions such as keyboard entry,
display updates, phonebook processing, setup functions, and time functions. These functions
are typically handled by an 8-bit or 16-bit micro-controller, but increasingly the processor is
being integrated into the baseband chip.
Memory – This includes SRAM, EEPROM, and flash. Flash is a non-volatile memory that
provides random access to large amounts of program information or code. There are two
types of flash memories—NAND and NOR. NAND is better suited for serial access to large
amounts of data, while NOR offers fast random access and is typically used to store and
execute code. SRAM is much faster than flash and is used as low-power/low-density
RAM for cache. ROM and flash memory provide storage for the phone’s operating system
and customizable features such as the phone directory.
RF section – The RF section includes the RF/IF transmitter and receiver ICs, RF power
amplifiers, PLLs, synthesizers, and VCOs. These components receive and transmit signals
through the antenna. RF functions include modulation, synthesis, up and down conversion,
filtering, and power amplification.
Battery and/or power management – This section supplies and controls power distribution
in the system since different components often operate at different voltages. The power
management unit also controls power-down and stand-by modes within the handset and
regulates the transmitter power of the handset’s power amp. Phones spend the majority of
their time waiting for a call, when much of the phone’s circuitry is not required and can be
placed in a low-power standby state. When a call comes through, the power management
system powers-up the sleeping components to enable the call. During the call, the power
management section continuously adjusts the phone’s transmitted power to the level
required to maintain the connection to the base station. Should the handset get closer to the
base station, the handset’s output power is reduced to save battery power. The power
management function also regulates the battery charging process and feeds battery charge
state information to the phone’s processor for display. The charging and monitoring components for Lithium-ion batteries are typically the most complex so these types of batteries
usually contain power management chips embedded in the battery housing. Conversely,
Nickel-Metal-Hydride and Nickel-Cadmium batteries typically have control chips embedded within the phone itself.
Display – Cell phone displays can be color or monochrome. Some are backlit, some
reflective, while others are organic electroluminescent displays.
Operating system – Several vendors have announced operating systems for cell phones.
Microsoft has introduced the PocketPC 2002 Phone Edition, which is a PDA operating
system based on Windows CE 3.0.
Microphone and speaker
A mobile handset block diagram is shown in Figure 5.1.
Figure 5.1: Mobile Handset Block Diagram
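The payoff of the standby strategy described above can be estimated with simple arithmetic. All capacity and current-draw figures below are illustrative assumptions, not vendor data.

```python
# Why standby power management matters: a rough battery-life estimate.
# All figures are illustrative assumptions.
def battery_hours(capacity_mah: float, draw_ma: float) -> float:
    """Hours of operation for a given average current draw."""
    return capacity_mah / draw_ma

CAPACITY_MAH = 900  # assumed Li-ion pack capacity
TALK_MA = 250       # assumed draw with the RF power amplifier active
STANDBY_MA = 3      # assumed draw with most circuitry powered down

print(f"talk time:    {battery_hours(CAPACITY_MAH, TALK_MA):6.1f} h")
print(f"standby time: {battery_hours(CAPACITY_MAH, STANDBY_MA):6.1f} h")
```

Cutting the average current by two orders of magnitude in standby is what turns a few hours of talk time into weeks of standby time.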
Conclusion
People with talking devices can now be seen everywhere. Cellular phones are the most convenient
consumer device available. And mobile phones are really just computers—network terminals—
linked together by a gigantic cellular network.
Cellular technology started as an analog (1G) network but is migrating to digital. Just as the 2G
digital cellular technology is ramping up, there is much discussion about third-generation (3G)
cellular services. While 3G deployments are currently underway, 2.5G is being seen as the
steppingstone for things to come. 2G, 2.5G, and 3G networks are all made up of combinations of
technologies that prevail in different geographies. The vision of 3G includes global and high-speed
mobile access with enhanced IP-based services such as voice, video, and text.
The cell phone was initially a basic voice access system with a monochrome display, but is
evolving to include voice, video, text, image access with color display, Internet access, and integrated digital camera. Other evolving wireless technologies such as Bluetooth and wireless LANs are
looking to complement next-generation cellular technologies. The future for wireless and mobile
communications remains very bright and undoubtedly a host of new applications will be generated in
the years to come.
Gaming Consoles
Gaming consoles are among the most popular consumer devices to come on the market in the last
few years. While they have existed for a while, their popularity is growing because of higher processing power and a larger number of available games. While several companies have introduced gaming
consoles, the current market has three main hardware providers—Microsoft, Nintendo, and Sony.
Gaming consoles deliver electronic game-based entertainment. They feature proprietary hardware
designs and software operating environments. They include a significant base of removable, first and/
or third party software libraries. Gaming consoles are marketed primarily for stationary household
use. They rely on AC power for their primary energy source and they must typically be plugged into
an external video display such as a television. Examples of popular gaming consoles include the
Nintendo GameCube, Sony PlayStation2, and Microsoft Xbox. They can also provide Internet and
email access. Gaming consoles are platform-independent of PCs and NetTVs. They are also
media-independent of the Internet, cable television, and DBS.
Note that certain consoles are excluded from this definition:
Machines with embedded software only (i.e., containing no removable software)
Those without a significant library of external first and/or third party title support (Examples: Milton Bradley’s Pocket Yahtzee and embedded LCD pocket toys from Tiger Electronics.)
Systems geared for development, hobbyist, or other non-mass-entertainment use (Example:
Sony’s Yaroze)
Those not targeted primarily as a gaming device (e.g., interactive DVD players)
Handheld gaming devices are designed for mobile household use. They rely on DC (battery)
power and they include an embedded video display such as a LCD. Examples include the Nintendo
GameBoy Color and Nintendo GameBoy Advance.
Market Data and Trends
Currently there are about 29 million console players, 11 million PC game players, and 7 million who
play both. There are clear differences in content so there is limited customer overlap. And the
consumer pool is large enough to allow for multiple companies to do well.
IDC predicts that 2003 shipments of and revenue from game consoles in the U.S. will exceed
21.2 million units and U.S. $2.3 billion, respectively. This is down from a peak of $3.2 billion in
2001. The worldwide shipment of game consoles is forecast to exceed 40 million units for the same period.
According to the Informa Media Group, the worldwide games industry is expected to grow 71%
from $49.9 billion in 2001 to $85.7 billion in 2006. This includes everything from arcades to game
consoles to PC games, set-top box games, and cell phone games. Analysis by the investment bank
Gerard Klauer Mattison predicts that worldwide console sales will double in the next five years to a
total of 200 million units. There should also be new opportunities for game startups within this fast-growing market, especially for wireless games and devices.
Market Trends
Some of the market trends for gaming consoles are:
Generation Y is entering the teenage years with a keen interest in games and interactive entertainment.
The growing pervasiveness of the Internet will help online gaming.
Online gaming with high-speed access and in-home gaming between consoles and PCs will
increase demand for home networking and broadband access solutions.
Market Accelerators
Market momentum – The momentum driving today’s video game market should not be
underestimated. Growth beyond the hardcore gamer has given the market a much-needed
boost in exposure and revenue. While the hardcore gamer is still important, continual
market growth will be driven by the casual gamer. This has led to tremendous momentum,
priming the market for the launch of next-generation hardware and games as well as related
products including videos, toys, and gaming paraphernalia.
Games – The availability of high-quality games along with the launch of a new platform
can significantly accelerate video game platform sales. Conversely, it can also inhibit
growth. The importance of games as a purchase factor should not be underestimated. In many
cases the games, not the platform, sell the new technology. For instance, the long-term
popularity of older-generation platforms such as PlayStation and Nintendo 64 has been
sustained in part by the continued introduction of quality and original game titles. Efforts to
expand into other demographic segments will result in new and innovative games that will,
in turn, spur market growth.
New functionality – Better games will be a primary driver for console sales. Over the long
term these new platforms will have greater appeal because of:
• Additional features such as DVD-video and online capabilities.
• The potential for the console to exist as a set-top box and residential gateway.
Market Inhibitors
Limited success of online, console-based gaming – Quality of service (QoS) and a high-bandwidth connection to the Internet are required to enable real-time, high-speed online
gaming with video exchange. The slow growth of high-speed broadband limits online gaming.
Low-cost PCs – With the prices of PCs plummeting, consumers are encouraged to own
multiple PCs. Moore’s Law steadily delivers new features, better performance, and
lower prices—all of which make the PC an attractive gaming platform.
Higher prices – With prices of consoles and games remaining high, consumers must
consider the number of years they will be keeping their gaming console. In comparison, a
PC provides more benefits and functionality than a gaming device.
Lack of games – Improved hardware technology has been and will continue to be an
attractive selling point for next-generation consoles. But the availability of new and original
titles will be just as important, if not more important. Gaming consoles will come under
fire for a lack of games that take advantage of improved hardware. A lack of games could
inhibit growth—a problem that has historically accompanied the release of new platforms.
Poor reputation – Although much of the negativity surrounding the video game industry
after the school shootings in 1999 has subsided, the threat of negative publicity remains
problematic. The Interactive Digital Software Association (IDSA) has taken many steps to
mitigate negative publicity through special studies and lobbying efforts. Despite evidence
that video games are not linked to violent behavior, the link remains and does not promise
to go away anytime soon. If and when acts of violence do surface, fingers are sure to point
to the video game industry.
Market confusion – The availability of three mainstream video game console platforms and
the continued availability of older platforms give consumers much choice. However, this
choice could cause confusion and lead to buyer hesitation. This may be particularly problematic because the software industry hesitates to introduce games for specific platforms
while awaiting public response. And consumers could delay purchasing existing products
while waiting for the introduction of new, powerful competitive systems and features such
as broadband.
Key Players
Microsoft Corporation
While relatively new to the market, Microsoft has a number of products in the home consumer space
including its Xbox gaming platform. Xbox hardware is built with an impressive Intel processor, a
custom graphics processor co-designed by Microsoft and Nvidia, and a hard disk drive. It
supports HDTV and includes a DVD drive, four game-controller ports, and an Ethernet port.
Nintendo
Based in Japan, Nintendo was the original developer of the Nintendo 64 and the more recent
Nintendo GameCube systems. The company has stated that it is not pursuing a home gateway
strategy and that it will stick to games only.
Sega
Based in Japan, Sega brought to market the Dreamcast, a second-generation 128-bit gaming console that featured:
CD player
Built-in 56K modem for Internet play
Internet access and e-mail
Expansion slots for home networking.
Although Sega has exited the market to concentrate on gaming software, it was first to market
with its second-generation game console. The company also provides accessories such as a keyboard,
extra memory, and various controllers.
After Sega ceased production of the Dreamcast gaming console, the company refashioned itself
as a platform-agnostic game developer. By the end of the year Sega will bring its games to handheld
devices (such as PDAs and cell phones), set-top boxes, and even rival consoles. In addition Sega
agreed to deliver Dreamcast chip technology to set-top box manufacturers to enable cable subscribers
to play Dreamcast games. Sega has been the number three video game console supplier and has been
hemorrhaging money since the introduction of Dreamcast in 1999. The release of Microsoft’s Xbox
most likely influenced Sega’s move to exit the game console market.
Sony
Based in Japan, Sony Computer Entertainment Inc. is positioning itself as the consumer’s one-stop
shop for home consumer devices ranging from entertainment to personal computing. Sony has
overcome a number of challenges to create a unified network of consumer devices and digital
services. It brought together diverse parts of the company to work together on a single, cohesive
Internet strategy. Sony is positioning its 128-bit PlayStation2 as a gateway into the home from which
it can offer Internet access, value-added services, and home networking. Its home networking would
use USB or iLink (also called IEEE 1394 or FireWire) ports to connect devices. PlayStation2
contains a DVD player that could enable digital video playback and could, in time, include an
integrated digital video recorder. With its added functionality the Playstation2 is leaving the “game
console” designation behind.
Game of War
Strategic differences between the superpowers of the video game console industry are surfacing as
they launch the next versions of their products. In the battle for control of the living room, the
contenders are:
Nintendo, who has a lock on the market for kids who think games are just another toy.
Sony, who is pushing the convergence of different forms of entertainment such as DVDs.
Microsoft, who is making its pitch to game artists and aficionado players.
The Xbox management team has used the “game is art” statement for entering the console
market, even though Sony sold 20 million PlayStation2 units before Microsoft sold its first Xbox.
Sony drew its line in the sand with the PlayStation2 launch in March 2000 while Microsoft launched
the Xbox in November 2001. Meanwhile, Nintendo fired a double-barreled blast, launching both its
GameBoy Advance and its GameCube console in the U.S. in 2001.
The gaming titan most likely to prevail will have to do more than pontificate about its philosophy. The three companies have likely spent a combined $1.5 billion worldwide on ferocious
marketing campaigns. Not only will they fight for customers, they also must woo developers and
publishers who make the console games.
To date, console makers have been embroiled in a battle of perception with lots of saber rattling
before the ground war begins in retail stores worldwide. Microsoft has signed up 200 companies to
make games for its Xbox console. But most industry observers declared Nintendo the winner. The
seasoned Japanese company is believed to have the best games and the strongest game brands of any
contender. And most believe they have the lowest-price hardware as well.
The Incumbents
The philosophy behind the design of the Sony game console was to deliver a step up in graphics
quality. It offers a new graphics synthesizer with unprecedented ability to create characters, behavior,
and complex physical simulations in real time via massive floating-point processing. Sony calls this
concept “Emotion Synthesis.” It goes well beyond simulating how images look by depicting how
characters and objects think and act.
The key to achieving this capability is the Emotion Engine, a 300-MHz, 128-bit microprocessor
based on MIPS RISC CPU architecture. Developed by MIPS licensee Toshiba Corp. and fabricated
on a 0.18 micron CMOS process, the CPU combines two 64-bit integer units with a 128-bit SIMD
multimedia command unit. It includes two independent floating-point, vector-calculation units, an
MPEG-2 decoder, and high-performance DMA controllers, all on a single chip.
The processor combines these functions across a 128-bit data bus. As a result, it can perform
complicated physical calculations, curved surface generation, and 3-dimensional (3D) geometric
transformations that are typically difficult to perform in real time with high-speed PCs. The Emotion
Engine delivers floating-point calculation performance of 6.2 Gflops. When applied to the processing of geometric and perspective transformations normally used in 3D graphics calculations, it can
reach 66 million polygons per second.
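A quick sanity check on these two figures gives the per-polygon operation budget. A single 4×4 matrix-by-vector transform alone costs 28 floating-point operations (16 multiplies and 12 adds), so roughly 94 operations per polygon is a plausible budget for transform, perspective, and lighting combined.

```python
# Back-of-envelope check: peak floating-point throughput divided by the
# quoted geometry rate gives the per-polygon operation budget.
PEAK_FLOPS = 6.2e9          # 6.2 Gflops peak
POLYGONS_PER_SECOND = 66e6  # 66 million transformed polygons per second

flops_per_polygon = PEAK_FLOPS / POLYGONS_PER_SECOND
print(round(flops_per_polygon))  # roughly 94 operations per polygon
```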
Although the Emotion Engine is the most striking innovation in the PlayStation2, equally
important are the platform’s extensive I/O capabilities. The new platform includes the entire original
PlayStation CPU and features a highly integrated I/O chip developed by LSI Logic that combines
IEEE 1394 and USB support.
It is projected to have a long life because it includes:
Extensive I/O capability via IEEE 1394 ports with data rates between 100 and 400 Mb/s.
USB 1.1 which can handle data rates of 1.5 to 12 Mb/s.
Broadband hard-drive communications expansion of the system.
The IEEE 1394 interface allows connectivity to a wide range of systems including VCRs, set-top
boxes, digital cameras, printers, joysticks, and other input devices.
As creator of the original PlayStation CPU, LSI Logic Corp. was the natural choice to develop
this high-performance I/O chip. The company boasts a strong track record in ASIC and communications IC
design. And carrying the original CPU forward is the surest way to guarantee backward
compatibility: each older game runs on the exact same processor for which it was originally written.
While the PlayStation2 represents the high mark in video-game performance, rival Nintendo has
introduced its next-generation console called GameCube. Nintendo officials argue that their system
will offer game developers a better platform than PlayStation2 because it focuses exclusively on
those applications. Unlike PlayStation2 which offers extensive multimedia and Internet capabilities,
Nintendo designers did not try to add such home-information, terminal-type features. For example,
GameCube does not add DVD movie playback features. Instead, it uses proprietary 3-inch optical
disks as a storage medium. Nintendo is planning to add DVD capability to a higher-end GameCube.
The new Nintendo platform gets high marks for its performance. It features a new graphics
processor designed by ArtX Inc. The chip, code-named Flipper, runs at 202 MHz and provides
an external bus that supports 1.6 GB/s of bandwidth.
One key distinction between the GameCube and its predecessor lies in its main system processor.
While earlier Nintendo machines used processors based on the MIPS architecture, Nintendo has
turned to IBM Microelectronics and the PowerPC. Code-named Gekko, the new custom version of
the PowerPC will run at a 405 MHz clock rate and sport 256 Kbytes of on-board L2 cache. Fabricated with IBM’s 0.18-micron copper interconnect technology, the chip will feature 32-bit integer and 64-bit
floating-point performance. It benchmarks at 925 DMIPS using the Dhrystone 2.1 benchmark.
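The DMIPS figure is a normalized Dhrystone score: the raw benchmark result is divided by the 1757 Dhrystones per second of the VAX 11/780 reference machine, which is defined as 1 MIPS. A short sketch of the conversion:

```python
# DMIPS normalizes a raw Dhrystone score against the VAX 11/780 reference
# machine, which ran 1757 Dhrystones/s and is defined as a 1-MIPS machine.
VAX_DHRYSTONES_PER_SECOND = 1757

def to_dmips(dhrystones_per_second: float) -> float:
    return dhrystones_per_second / VAX_DHRYSTONES_PER_SECOND

# Gekko's quoted 925 DMIPS therefore implies this many benchmark iterations/s:
gekko_dhrystones = 925 * VAX_DHRYSTONES_PER_SECOND
print(gekko_dhrystones)  # 1625225
```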
The RISC-based computing punch of the PowerPC will help accelerate key functions like
geometry and lighting calculations typically performed by the system processor in a game console.
The 925 DMIPS figure is important because it enables what game developers call “artificial intelligence.” This is the game-scripting interaction of multiple bodies and other items that may not be
very glamorous, but are incredibly important. It allows the creativity of the game developer to come
into play.
New Kid on the Block
Microsoft Xbox was launched in November 2001. It was designed to compete with consoles from
Nintendo and Sony and with books, television, and the Internet. While the PlayStation2 renders
beautiful images and the Nintendo Game Cube offers superb 3D animation, what is under the hood
of the Xbox makes the true difference. The Xbox sports a 733-MHz Intel Pentium III processor,
about the same level of power as today’s low-end computers. It also includes a 233-MHz graphics
chip designed jointly by Microsoft and Nvidia Corp., a top player in the PC video adapter market.
This combination offers more power than both its competitors. Like the PlayStation2, the Xbox will
play DVD movies with the addition of a $30 remote control. The Xbox also plays audio CDs and
supports Dolby Digital 5.1 Surround Sound. Nintendo’s Game Cube uses proprietary 1.5-GB minidiscs that hold only a fifth as much information as regular discs, so they will not play DVDs.
The front panel of the Xbox provides access to four controller ports (for multi-player gaming),
power switch, and eject tray for the slide-out disk drive. The Xbox has 64 MB of RAM and an 8-GB
hard drive, making it much more like a PC than the other consoles. The design speeds up play
because the Xbox can store games temporarily on its fast hard disk instead of reading from a DVD
which accesses data far slower. It also allows users to store saved games internally instead of buying
memory cards.
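The hard-disk advantage can be illustrated with a simple throughput comparison. Both read rates below are rough, illustrative assumptions rather than measured Xbox figures.

```python
# Why caching games on the hard disk speeds up play: sequential-read
# comparison. Throughput figures are rough, illustrative assumptions.
DVD_MB_PER_S = 5.0   # assumed sustained DVD read rate
HDD_MB_PER_S = 20.0  # assumed sustained hard-disk read rate

def load_seconds(asset_mb: float, rate_mb_per_s: float) -> float:
    """Seconds to stream an asset of asset_mb megabytes at the given rate."""
    return asset_mb / rate_mb_per_s

print(f"64 MB level from DVD: {load_seconds(64, DVD_MB_PER_S):.1f} s")
print(f"64 MB level from HDD: {load_seconds(64, HDD_MB_PER_S):.1f} s")
```

Even before seek times are considered, a severalfold gap in sustained throughput translates directly into shorter load screens.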
It offers another unique feature. If you do not like a game’s soundtrack, you can “rip” music
from your favorite audio CD to the hard drive. Then you can substitute those tunes for the original—
if the game publisher activates the feature. Of course, all this does not mean much if you cannot see
the results, and Microsoft can be proud of what’s on the screen. Thanks to crisp, clean images and
super-smooth animation, watching the backgrounds in some games can be awe-inspiring.
On the downside, Microsoft’s bulky controller, with two analog joysticks, triggers, a D-pad, and
other buttons, is less than ideal. For average and small-sized hands, the sleeker PlayStation2 and
smaller GameCube controllers feel better. However, third-party substitutes are available for all
gaming stations.
Microsoft hopes the Xbox’s expandability will make a serious dent in PlayStation2 sales. The
Xbox’s Ethernet connection, which is designed for cable and DSL modem hookups, should allow for
Internet play and the ability to download new characters and missions for games. It will also provide
the ability to network different appliances in the home. Microsoft has also announced that a new
version of the Xbox should be expected in 2006.
Meanwhile, Sony has not made any date commitment on the launch of the successor to the
PlayStation2 (most likely to be named PlayStation3). However, in May 2003 it introduced the PSX, a
mid-life follow-up to the PlayStation2. Sony touted it as a device that creates a new home
entertainment category. In addition to the basic features of a game console, PSX will offer a DVD
recorder, a 120 GB hard drive, a TV tuner, an Ethernet port, a USB 2.0 port and a Memory Stick slot.
The PSX shares a number of components with the PlayStation2, including the Emotion Engine processor
and the operating system. Few details have been released about the PlayStation3, which is expected to
be launched in late 2005 or 2006. The new “Cell” processor chip is touted to be a thousand times
more powerful than the processor in the current PlayStation2. This is a multi-core architecture in
which a single chip may contain several stacked processor cores. It will be based on industry-leading
circuitry widths of 65 nanometers. This reduction in circuitry widths allows more transistors (read:
more processing power) to be squeezed into the core. It will be manufactured on 300 mm wafers for
further cost reduction.
The new multimedia processor, touted as a “supercomputer on a chip,” is being created by a
collaboration of IBM, Sony, and Toshiba. It is termed a “cell” due to the number of personalities it
can take and, hence, the number of applications it can address. While the processor’s design is still
under wraps, the companies say Cell will deliver more than one trillion floating-point
calculations per second (a teraflop), roughly 100 times more than a single Pentium 4 chip running
at 2.5 GHz. Cell will likely use between four and 16 general-purpose processor cores per chip. A
game console might use a chip with 16 cores while a less complicated device like a set-top box
would have a processor with fewer cores.
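The core-count and throughput figures above can be sanity-checked with simple arithmetic; the per-core numbers below are derived from the quoted one-teraflop target, not from any published specification:

```python
# Sanity check of the Cell throughput figures quoted above.
# Assumption: the one-teraflop target is divided evenly across the cores.
TARGET_FLOPS = 1e12  # "more than 1 trillion calculations per second"

for cores in (4, 16):
    per_core_gflops = TARGET_FLOPS / cores / 1e9
    print(f"{cores:2d} cores -> {per_core_gflops:.1f} GFLOPS per core")

# The "roughly 100 times a 2.5-GHz Pentium 4" comparison implies the
# Pentium 4 is being credited with about 10 GFLOPS.
implied_p4_gflops = TARGET_FLOPS / 100 / 1e9
print(f"implied Pentium 4 throughput: {implied_p4_gflops:.0f} GFLOPS")
```

At 16 cores this works out to about 62.5 GFLOPS per core, which illustrates why a set-top box could make do with far fewer cores than a game console.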
While Cell’s hardware design is ambitious, the software may prove harder to establish in the
market. Creating an operating system and set of applications that can
take advantage of its multiprocessing and peer-to-peer computing capabilities will determine if Cell
will be successful. Toshiba hinted that it aims to use the chip in next-generation consumer devices.
These would include set-top boxes, digital broadcast decoders, high-definition TVs, hard-disk
recorders, and mobile phones. Elements of the cell processor design are expected in future server
chips from IBM.
According to analysts, about eight million units of the Xbox console had been sold globally by
June of 2003, with less than a million going to Asia. Despite deposing Nintendo as number two
in units shipped worldwide, Microsoft remains number three in Japan. Meanwhile, Sony’s
PlayStation console is number one with more than 50 million units sold worldwide.
As a new kid on the block the Xbox has one major disadvantage compared to the PlayStation2.
Sony’s console has over 175 games available, compared to 15 or 20 Xbox titles at launch time. This
has increased to a little over 50 games since. Because games are the key point in attracting players,
the Xbox has a long way to go. However, Microsoft is expanding in Asia with localization efforts for China and Korea. It is also rolling out its Xbox Live online gaming service.
The GameCube seems to be geared toward gamers 10 years old or younger with a growing
selection of more age-appropriate titles. It’s a toss-up for older children and adults. If you want an
excellent machine with plenty of games available right now, the PlayStation2 is your best bet. If you
want the hottest hardware with potential for the future, buy the Xbox.
Support System
Developer support is crucial to the success of any video console. Sega’s demise can be attributed to
its failure to attract Electronic Arts or Japan’s Square to create the branded games it needed to
compete with Sony. Today, however, many of the large game publishers spread their bets evenly by
creating games that can be played on each console. Smaller developers don’t have the resources to
make multiple versions of their titles or to create branded games for more than one platform. They
gravitate to the console that makes their life easiest.
Chapter 6
Naturally, the console makers are jockeying for both large publishers and small developers. And
in a few cases makers have even funded game developers. Microsoft, which owes much of its overall
success to its loyal software developers, started the Xbox in part because it foresaw PC developers
moving to the Sony PlayStation and its successor, PlayStation2. Sony has lured a total of 300
developers into its fold, mostly because it has the largest installed base and offers the best potential
return on investment for the developers.
In its previous consoles Nintendo’s cartridge format made it hard for developers and publishers
to generate large profits on anything but the biggest hit titles. This was because publishers had to
order cartridges far in advance to accommodate a manufacturing process that took up to ten weeks.
Nintendo’s latest console has a disk drive instead, and the company claims it is embracing third-party
developers. Nintendo launched its console with only eight titles, and it had 17 titles in North America by the end
of the year. By contrast, Microsoft had 15 to 20 titles at launch because it distributed its development
kits early and it actively courted developers with a system that was easy to program. (In comparison,
Sony’s system is considered difficult to program.)
The importance of external developer loyalty will wane over time. The major publishers will
continue to support each console platform, leaving it up to the console makers’ in-house game
developers to distinguish each platform with exclusive titles. From most game developers’ perspectives the three video consoles are created equal.
Featured Attractions
Sony’s machine uses parallel processing to generate images. This allows the console to run compute-intensive games—a boon for gamers who delight in vivid graphics, but long considered a curse by
some developers.
PC game developers believe that the Xbox has a clear edge over older machines like the
PlayStation2. Microsoft’s console uses PC-like tools so game development for it is relatively easy. It
also has several times the computing performance of the Sony box and it comes with a hard drive.
The question remains whether game developers will use these features to make their games stand
apart from Sony’s.
Meanwhile, the Nintendo console offers decent performance, but to save costs it omits a high-cost DVD player and a hard drive—features that rivals have.
Driving down production costs will be a determining factor in profitability over the next five
years. According to most estimates Sony’s PlayStation2 cost the company $450 per unit upon initial
production in early 2000. The company had at first sold the machine as a loss leader for $360 in
Japan and for $300 in the United States and Europe. The strategy paid off with the first PlayStation
because Sony was able to reduce the product’s cost from $480 in 1994 to about $80 now. (It was
initially priced at $299 and sells at about $99 today.) Meanwhile, the company sold about nine
games for every console. That model allowed Sony to make billions of dollars over the life of the
PlayStation, even if it lost money at first. Sony says its PlayStation2 is selling three times as fast as
the original PlayStation.
Part of the reason Sony’s game division has yet to report a profit is the disarray in its game
software unit. Sony combined two separate game divisions before the PlayStation2 launch. This
resulted in big talent defections and a poor showing of Sony-owned titles at the launch of the
PlayStation2 in the United States. With its slow launch of internally produced games, Sony may have
squandered its first-mover advantage. The good news for Sony is that its original PlayStation titles
are continuing to sell.
Microsoft’s Xbox is believed to have initially cost the company about $425 thanks to components like a hard disk drive, extra memory, and ports for four players. The company added these
extras to snare consumers who want more “gee-whiz” technology than the PlayStation2 can deliver.
The console’s launch price was approximately $299. Microsoft’s home and entertainment division
which operates the Xbox business continues to generate losses.
Perhaps the most significant difference between the Xbox and PlayStation2 is the hard drive. An 8-GB hard drive is built into the Xbox, while the non-PSX PlayStation2 has none.
version. The hard drive provides for a richer game experience by giving Xbox gamers more realism,
speed, expandability, and storage. Fans of sports games will no longer have to wait for their console
to catch up to the action and deliver in real time. Microsoft wants to take those technology advances
into the living room where 86% of U.S. families with teens have one or more game consoles.
Microsoft’s original manufacturing goal was to take advantage of the PC industry’s economies of
scale by using standardized PC components. (Sony, however, relied on costly internally built components.) But this may not work. At best, the Xbox cost per unit may fall to $200 over time, but not
much lower. Many of the key components in the Xbox already have low costs because they’re used in
computers. According to the market research firm Disk/Trend, hard drive costs will not fall much
lower than the estimated $50 to $60 that Microsoft is paying now. The Intel microprocessor inside
the Xbox already costs less than $25.
Heavy Betting
To break even, Microsoft must sell about eight or nine games for every hardware unit, and at least
three of those titles must be produced in-house. Typically, games sell for about $50 with profit
margins of about 50 to 70% for in-house titles. Third-party titles bring in royalties of about 10%, or
$5 to $10 for the console maker. Microsoft says it has more than 80 exclusive titles in the works. And
to shore up its business model Microsoft is betting heavily on online gaming services. It hopes that
gamers will pay a small fee, say $10 a month, to play other gamers on the Internet and to download
new games. But today that is uncharted territory. The more Xbox units Microsoft sells, the more
money it loses on hardware. Some sources estimate that the company will lose $800 million in the
Xbox’s first four years and a total of $2 billion over eight years. The only chance for an upside is if
the online plan, untested though it is, pulls off a miracle.
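The break-even claim above can be checked against the chapter’s own numbers; the per-unit hardware loss is inferred from the approximately $425 cost and $299 launch price cited earlier, and the royalty midpoint is an assumption within the quoted range:

```python
# Rough break-even model built from the figures quoted in this chapter.
# hardware_loss is inferred from the ~$425 unit cost and $299 launch price;
# the royalty midpoint is an assumption within the quoted $5-$10 range.
hardware_loss = 425 - 299                 # ~$126 lost on each console sold

game_price = 50                           # typical title price
inhouse_profit = game_price * 0.50        # low end of the 50-70% margin range
thirdparty_royalty = 7.50                 # midpoint of the $5-$10 royalty range

def software_profit(inhouse_titles: int, thirdparty_titles: int) -> float:
    """Per-console software contribution for a given attach-rate mix."""
    return inhouse_titles * inhouse_profit + thirdparty_titles * thirdparty_royalty

# The text's target mix: about nine games per console, at least three in-house.
contribution = software_profit(3, 6)      # 3 * $25 + 6 * $7.50 = $120
print(f"software contribution ${contribution:.0f} vs. hardware loss ${hardware_loss}")
```

With this mix the software contribution (about $120) roughly offsets the inferred hardware loss (about $126), consistent with the eight-to-nine-game break-even figure.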
Nintendo says the company should turn a profit on hardware alone. The GameCube
console sold at $199 at launch. Over time Nintendo should be able to reduce the cost of its system
from $275 to less than $100. Nintendo currently generates the highest profits on its consoles because
anywhere from 50 to 70% of its games are internally produced. In the past Nintendo lost out to Sony
in part because it had to sell its costlier cartridge games at $50 to $60 when Sony was selling games
at $40. This time Nintendo has a 3-inch disk that is smaller and cheaper than the DVDs that Sony is
using—and far cheaper than the cartridges.
Nintendo’s upper hand in profitability is strengthened because it completely dominates the
portable-games market. GameBoy Advance titles are dominating the best-seller lists. And the device
sells for about $90 per unit. Nintendo can use portable device and game revenues to make up for any
losses in the console hardware market. Nintendo is expecting to sell 4 million GameCube units and
10 million games in addition to 23.5 million Game Boy handhelds and 77.5 million portable games.
Self Promotion
Marketing is another success factor. Microsoft has the No. 2 brand in the world according to a survey
by Interbrand, Citicorp, and BusinessWeek. Sony ranked No. 20 while Nintendo was No. 29. But
Microsoft’s brand has a lot less power in the games market. That is one reason that Microsoft spent
$500 million on marketing worldwide over the first 18 months after the introduction of the Xbox.
Microsoft got a lot of attention when it announced that figure. However, this comes out to $111
million a year in the U.S. market which is much less than the $250 million Sony plans to spend
promoting both the PlayStation2 and PlayStation One in the North American market.
Targeting will also matter. Sony’s dominance was achieved by targeting older, 18- to 34-year-old
male game players. As role models for younger players, these players are key influencers. By
contrast, Nintendo targeted young players. While younger players are more likely to grow up and
adopt the tastes of older players, older players are not likely to adopt “their younger brother’s
machine.” Microsoft targets the same age group as Sony.
Console makers will also have to delight their audiences. Nintendo has shown that with a
franchise like Pokemon it can make just as much money as Sony, and with fewer titles. By contrast,
Microsoft will follow the Sony model of producing scores of titles for more variety. Sony has a raft
of titles slated for this fall but Microsoft’s games are untested and lack an obvious blockbuster.
The online-games market is a wild card. Since Microsoft has all the hardware for online games
built into the machine, it can more easily recruit developers. Sega plans to exploit the Xbox’s online
advantage and develop specific online games.
Game Over
Sony will come out ahead of the game. According to a forecast by the European Leisure Software
Publishers Association, Sony is expected to have 42% of the worldwide video console market by
2004. According to the same forecast Nintendo and Microsoft are expected to take about 29% of the
market each. Microsoft is expected to do its best in Europe and the United States but poorly in Japan.
Although Sony grabs the largest market share, it does not necessarily garner the biggest profits.
Nintendo will occupy second place in the worldwide console market. But it will garner the most
overall profits, thanks to its portable empire. The Xbox is winning over consumers with its superior
technology. Microsoft is trying to supplement its profits with online revenues. And like Sony,
Microsoft benefits most if the demographics of gaming expand to increase the market size. If
Microsoft is committed to a ten-year battle, it can stay in the game despite heavy initial losses. But
five years may be too short a horizon for it to catch up to market leader Sony.
Components of a Gaming Console
A typical gaming console is made up of several key blocks: a CPU, a graphics processor, memory,
storage media, and Internet connectivity. The graphics chip is capable of several billion operations
per second, and the 3D audio processor supports 32 or 64 audio channels. The console may support a
multi-gigabyte hard drive, memory card, DVD, and on-board SRAM, DRAM, or flash memory. An
MPEG processor supports a display controller. While today’s boxes support an analog modem, future
boxes will provide broadband connectivity for online game play. The graphics synthesizer connects to
NTSC, PAL, DTV, or VESA displays via an NTSC/PAL encoder. The I/O processing unit contains a
CPU with bus controller to provide an interface for HomePNA, Ethernet, USB, wireless, and
IEEE 1394. These connections make it possible to use existing PC peripherals and create a
networked home.
Figure 6.1 shows the block diagram and components of a typical gaming console.
Figure 6.1: Gaming Console Block Diagram
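The block diagram can be summarized in a small data model; all field names and example values below are illustrative, not drawn from any console’s actual documentation:

```python
from dataclasses import dataclass, field

@dataclass
class GamingConsole:
    """Key functional blocks of a typical console, per the description above."""
    cpu: str                              # central processor
    gpu: str                              # graphics chip (billions of ops/s)
    audio_channels: int                   # 3D audio processor: 32 or 64 channels
    memory: list = field(default_factory=lambda: ["SRAM", "DRAM", "flash"])
    storage: list = field(default_factory=lambda: ["hard drive", "memory card", "DVD"])
    video_out: list = field(default_factory=lambda: ["NTSC", "PAL", "DTV", "VESA"])
    io_interfaces: list = field(default_factory=lambda: [
        "HomePNA", "Ethernet", "USB", "wireless", "IEEE 1394"])

# Illustrative instance loosely modeled on the Xbox specs quoted earlier.
console = GamingConsole(cpu="733-MHz x86", gpu="233-MHz graphics chip",
                        audio_channels=64)
print(console.io_interfaces)
```

The I/O list is what turns the console into a candidate residential gateway: each interface is a path to another appliance in the home.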
Broadband Access and Online Gaming
To succeed in this market gaming companies must recognize that the video game business is all
about the games. From Atari to Nintendo to Sega to PlayStation it has historically been shown time
and again that gamers are loyal to the games—not to the hardware. Broadband access will enable
users to play with thousands of other gamers on the Internet and will provide the ability to download
the latest updates to games. For example, in a basketball video game a user will be able to download
the most current status of teams in the NBA so the game will accurately reflect that information.
Online gaming is interactive game play involving another human opponent or an offsite PC.
Online gaming capabilities are promoting gaming consoles into potential home gateways. Playing
between different players across the Internet has been gaining prominence with the coming of
broadband access and the ubiquity of the Internet. Most gaming consoles have a 56K (analog)
modem embedded to allow this. The addition of home-networking capabilities within the gaming
console will make the evolution into a residential gateway a realistic possibility in the near future.
Online capabilities in next-generation game consoles, whether through a built-in or add-on
narrowband or broadband modem, can substantially expand the console’s gaming capabilities and
open up new areas of entertainment. However, well-defined plans must be laid out for the developer
and the consumer in order to use these communication capabilities.
Several console developers have already announced the addition of online functionality. This
functionality comes in several guises ranging from looking up instructions to downloading new
characters and levels. The latter approach has already been used for PC games in a monthly subscription scheme with some level of success.
Some of the major video game hardware players have introduced a comprehensive online
program. When Sega introduced its Dreamcast, it provided the Internet service, used the
Dreamcast’s built-in narrowband modem, and rolled out a site optimized for narrowband
Dreamcast online gaming. Games are being introduced to take advantage of this online functionality.
In addition, the web sites of these companies will feature optimized online games.
Microsoft’s Xbox contains an internal Ethernet port. Sony announced that the PlayStation2 will
feature an expansion unit for network interface and broadband connectivity. Nintendo has announced
that both Dolphin and GameBoy Advance will have online connectivity.
Broadband access capabilities are very promising and will eventually become a reality for most
households. IDC believes that only 21.2 million U.S. households will have broadband access by
2004. This low number serves as a sobering reminder that those video game households without
broadband access could be left out of online console gaming unless plans are made to support
narrowband access.
The difficulties many households experience with the installation of broadband can significantly
affect the adoption of broadband via the gaming console. Logistically, technology needs to advance
the capability of more households to adopt broadband Internet access. In addition, ensuring that the
broadband capabilities interface properly with the PC is often a trying experience. Once broadband is
fully working, will a household tempt fate by trying to extend that connection to the console? The
solution appears to be the creation of a home network. This can be a daunting task for many, often
requiring financial outlays and a great deal of self-education. Households that choose to use the
broadband functionality of the console will confront these and many other issues.
IDC predicts that by 2004 the number of consoles being used for online gaming will be 27.2
million units, or 33% of the installed base.
Gaming Consoles—More Than Just Gaming Machines
Current 128-bit gaming consoles such as the Xbox, GameCube, and PlayStation2 promise an ultra-realistic gaming experience. They will include Internet access, interactive television capabilities, and
the ability to network with other devices in the home through expansion ports (PC card slots).
Traditional gaming console vendors are positioning their devices as the nerve center, or gateway, of
the home. This will make them true home consumer devices. Products in development will enable:
Broadband access
Traditional Internet access
Internet-enabled value-added services such as video on demand
Home networking.
Residential (home or media) gateways will network the different home appliances to distribute
audio, video, data, and Internet content among them. In this way consumer devices can share digital
content and a single broadband access point. This will also be a platform for utility companies to
provide value-added services such as video-on-demand and Internet connectivity.
PC Gaming
Does the introduction of the gaming console mean the end of PC-based gaming? The PC and the
gaming console are complementary devices. Each has very distinct audiences. PC games are more
cerebral while console games are more visceral. A comparison of the top 10 game lists for these
platforms shows that the games don’t really match up. The most popular PC games of 1999 included
Age of Empires II, Half-Life, and SimCity 3000. The most popular console games of 1999 included
Pokemon Snap, Gran Turismo Racing, and Final Fantasy VIII. The gaming console is expected to
offer the most advanced graphics, the most flexibility in Internet gaming, and the most realistic play
of any game console on the market.
Games have always been recognized as a popular form of home entertainment. As a category,
games represent 50% of consumer PC software sales. Consumers are becoming more comfortable
with PC technology—more than half of U.S. households now own a PC.
Growing Convergence of DVD Players and Gaming Consoles
Few doubt that DVD (digital versatile/video disk) is becoming the video format of choice for the
next couple of decades. Its significant capacity, digital quality, and low cost of media make it far
superior to magnetic tape. And growing vendor acceptance and market presence have already created
an installed base of over a million players in the U.S. alone. As consumer awareness improves and
content availability increases, the demand for DVD players will grow.
Manufacturers can drive prices down as volumes begin to expand. In fact, some vendors are
already bringing very low priced players to market, and second-tier consumer electronics companies are expected to chase volumes with even lower priced systems during the coming year.
These lower prices, dipping below $100, should further boost volumes for DVD players. However,
with low prices comes margin pressure. Traditionally, top vendors were able to differentiate their
products from second-tier vendors’ offerings. They did this through a strong brand and separation of
products within their line through feature sets. However, due to the digital nature of the DVD player,
further differentiation through the addition of separate functionality is now
possible. Already DVD players play audio CDs, and it is possible to add gaming, Internet
access, or even a full PC within the same system.
This opportunity creates a conundrum for consumer electronics companies. If consumers are
drawn to the combination products, it will allow vendors to maintain higher product prices, greater
product differentiation, and potentially create new lines of revenue. However, there is risk of turning
away potential sales if the additional functions are not appealing to buyers, or if the perceived
incremental cost of that functionality is too high. Further, maintaining a higher basic system price can
result in a lower total market due to the proven price sensitivity of demand for consumer electronics.
There are a number of potential add-on functions that could meaningfully enhance a DVD
player. These include Internet connectivity, video recording capabilities, satellite or cable set-tops,
and gaming. The integration of gaming is a very interesting concept for a number of reasons. The
shared components and potentially common media make the incremental functionality cheap for a
vendor. And the combined product would be conceptually appealing for consumers. The following
sections describe factors that would make it easy for a vendor to integrate gaming into a DVD player.
Shared Components
A game console and a DVD player actually have a number of common components such as video
and audio processors, memory, TV-out encoders, and a host of connectors and other silicon. There is
also potential for both functions to use the same drive mechanism which is one of the most expensive
parts of the system. Today many of the components are designed specifically for one or the other, but
with a little design effort it would be easy to integrate high-quality gaming into a DVD player at
little incremental cost.
Performance Overhead
As CPUs, graphics engines, and audio processors become more powerful, the performance overhead
inherent in many DVD components is also increasing. The excess performance capability makes it
even easier to integrate gaming functionality into a DVD player.
Shared Media
A DVD as the medium would be natural for a combination DVD player/game console. Game
consoles are already moving quickly towards higher capacity and lower-cost media, and DVD-based
software represents the best of both worlds. Although cartridges have benefits in the area of piracy
and platform control, they are expensive and have limited a number of manufacturers. Both Sony and
Microsoft gaming consoles have DVD support.
Many DVD video producers are incorporating additional interactive functions into their products.
For example, a movie about snowboarding may include additional features for a PC such as a game
or information on resorts and equipment. The incremental gaming capabilities of a combination
DVD/game console could easily take advantage of these features and enable consumers to enjoy
them without a PC.
Potential Enhancements
The combination of a DVD and a gaming console has the potential to enhance the use of both.
Besides allowing users to take advantage of the interactive elements discussed above, it would enable
game producers to add high-quality MPEG2 video to games. This represents an opportunity to
enhance the consumer experience and differentiate products.
This also creates new opportunities for cross-media development and marketing. Historically,
games have been based on movies and vice versa, but this would enable a DVD with both. It would
differentiate the DVD over a VHS movie or a cartridge game while lowering media and distribution
costs for both content owners.
One of the most significant drawbacks to such a product for DVD vendors would be the development and marketing of a gaming platform. To take gaming functionality beyond that of limited,
casual games that are built into the product, a substantial effort must be undertaken. However, several
companies are already working on adapting gaming technology to other platforms. Partnering with a
third party or adopting an outside initiative would avoid these issues. Since the DVD manufacturer is
relieved of creating the game technology, cultivating developer relationships, and marketing the
platform, the investment and risk to the DVD vendor is minimized. Examples of third party initiatives include VM Labs which is developing game technology for a variety of platforms. And Sony
has developed a small footprint version of its PlayStation platform that could be integrated into a
number of products. Most gaming hardware suppliers have invested research into gaming integration.
Although the ability to easily integrate functionality is a key part of the equation, consumer
interest in such a combination is the critical factor. And the consumer factor is also the most difficult
to understand or forecast. Although there is no definitive way to estimate consumer interest without
actual market experience, there are factors indicating that consumers may be interested in a gaming-equipped DVD player.
Both the DVD player and the gaming console are entering a significant growth phase where
vendor leadership is up for grabs and product attributes are uncertain. This situation could provide
success for a product that excels in both categories. During the next few years both product categories will shake out. In the meantime there is opportunity for change.
A significant indicator of likely consumer interest in a DVD player with gaming capabilities is the overlap between the two products’ audiences. If done well, the gaming functionality could enhance the DVD player
experience. A potential enhancement is the ability for consumers to take advantage of the interactive
elements and games that are added to DVD movies for PC-based DVD drives. Given the common
components for DVD players and game consoles, there is also potential for high-quality performance
of both functions and significant cost savings to the consumer who is interested in both products.
And since the game console and the DVD player will sit near each other and use a TV for display, an
opportunity for space savings is created. All things considered, the addition of game playing into a
DVD player could be successful.
Gaming Trends
In addition to those specifically interested in a game console, there may be others who would value
gaming capabilities in a DVD player. Currently, household penetration of game consoles in the
United States is almost 40% and PCs are in 46% of U.S. homes. Electronic entertainment is far more
pervasive than just consoles; games are appearing in PCs, handhelds, and even cellular phones.
While there are many positive indicators for a DVD player with gaming functionality, the
potential success of such a product is far from guaranteed. There are numerous challenges that such a
product would face in the market. And any one of them could derail consumer interest and potentially damage the success of a vendor during this important DVD/game console growth phase.
It’s All About the Games
As anyone in the gaming business knows, the hardware is the easy part. To drive demand for a
gaming platform, developers have to support it fully, creating blockbuster software titles that are
either better or unique to the platform. Without plenty of high-demand games, few game enthusiasts
will even consider a product. And casual gamers will see little use for the additional features on a
DVD player.
Building support from developers without a track record of success or a large installed base is
notoriously difficult. Even successful game companies with great technology are challenged in this
area. Any one vendor may have difficulty driving volumes of a combination DVD/gaming player to
make a significant enough platform to woo developers. This would especially be true if multiple
manufacturers brought competing technologies to market.
Marketing is Key
Marketing the platform and its games to consumers is critical. Hardware and software companies
spend millions to create demand for their products, using everything from TV ads to sports
endorsements. During this critical game-console transition phase, marketing noise will only increase as the
incumbents try to sustain software sales on older platforms while building momentum for new
technology. In this environment it will be very difficult to build demand for a new platform. Even
making a realistic attempt will require a significant investment.
New Gaming Opportunities
While the bulk of household gaming is focused on traditional gaming platforms such as the gaming
console and the PC, the gaming industry is looking for opportunities in emerging or nontraditional
gaming platforms. These include PDAs, cellular phones, and interactive TV. These emerging devices
will offer the industry several potential channels through which to target new gaming audiences,
including those who have not traditionally been gamers. These platforms will also enable vendors to take advantage
of new and emerging revenue streams.
However, the industry must resolve several issues to build a successful and profitable business
model. Vendors must form partnerships, deliver the appropriate services, and derive revenues from
sources such as advertisements, sponsorships, subscriptions, and one-time payment fees.
The following sections present brief descriptions of gaming opportunities for emerging devices.
Wireless Connectivity
In Japan multi-player online gaming has already become extremely popular. Sony plans to market its
original PlayStation console as a platform for mobile networks. The company has introduced a
version of its popular game platform in Japan that offers the same functionality as the original at
one-third the size.
Called PSone, the new unit will feature a controller port that attaches the PlayStation console to
mobile phone networks. Players will be able to carry the game console from room to room or use it
in a car. Slightly bigger than a portable CD player, PSone will feature a 4-inch TFT display.
As part of this new strategy Sony has announced a partnership with NTT DoCoMo Inc., Tokyo.
They will develop services that combine NTT DoCoMo’s 10-million i-mode cellular phones with the
20 million PlayStation consoles in Japan. The cellular provider uses a 9.6 Kb/s packet communications scheme to enable access to Internet sites written in Compact-HTML. The company will launch
a W-CDMA service that will support 384 Kb/s communications and will eventually expand coverage
to other carriers.
The future for mobile gaming beyond this service looks bright. Sony has talked extensively about
integrating its PSone with cell phones in Japan. Ericsson, Motorola, and Siemens are working to
develop an industry standard for a universal mobile game platform. And several publishers including
Sega and THQ have begun to develop content for the platform. For most users the concept is still
relatively new. But trials have shown that once offered a gaming service, mobile users tend to
become avid gamers. The vast majority of mobile phone gaming will be easy-to-play casual games
such as trivia and quick puzzle games.
Wireless connectivity, both to broadband networks and between game consoles and controllers,
promises to bring new capabilities to game users. It will enable a whole range of new products and
uses in the home. Intel has introduced the first PC game pad in the U.S. that offers wireless connectivity. Part of a family of wireless peripherals, the new game pads use a 900 MHz digital frequency-hopping spread-spectrum (FHSS) RF technology. It supports up to four players simultaneously at a
range of up to 10 feet. And it will offer the gamer and the Internet user much more flexibility. It will
Gaming Consoles
unleash the whole gaming experience by removing the cord clutter and eliminating the need for the
PC user to be a certain distance from the PC.
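The essence of frequency hopping can be sketched in a few lines: transmitter and receiver derive the same pseudorandom channel schedule from a shared seed and change channels in lockstep. This is only an illustrative sketch, not Intel's actual scheme; the channel count and the one-seed-per-pad pairing are assumptions.

```python
import random

def hop_sequence(channels, seed, length):
    """Derive a pseudorandom channel-hopping schedule from a shared seed.

    A game pad and its receiver seeded identically hop in lockstep;
    pads with different seeds rarely land on the same channel at once.
    """
    rng = random.Random(seed)
    return [rng.randrange(channels) for _ in range(length)]

CHANNELS = 50  # assumed number of hop channels in the 900 MHz band
for pad_id in range(4):  # up to four simultaneous players
    print(f"pad {pad_id}: {hop_sequence(CHANNELS, seed=pad_id, length=8)}")
```

Because the schedule is deterministic given the seed, no channel list ever needs to be transmitted between pad and receiver.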
Personal Digital Assistants (PDAs)
The portable and ubiquitous nature of PDAs such as Palm Pilot, Handspring Visor, and Compaq iPaq
makes them perfect as handheld gaming platforms. The open Palm operating system has encouraged
the development of many single-player games such as Tetris and minesweeper. Competing platforms
such as the Compaq iPaq (with Microsoft’s PocketPC operating system) have made specific game
cartridges available for single-player gaming. The integration of online connectivity into PDAs has
enabled the development of online gaming. Factors such as screen size, operating system, and
connectivity speeds will limit the scope and type of online games. However, online games that
evolve to these platforms will not be substandard or meaningless to the gamer. Rather, the online
game pallet has the potential to include a broad mixture of casual and low-bandwidth games (such as
those currently played online) that will appeal to a wide spectrum of individuals.
Interactive TV (iTV)
Digital TV (digital cable and satellite) service providers are deploying digital set-top boxes with two-way communication and new middleware that can support interactive TV. iTV has the potential to bring gaming applications to the masses. France-based Canal+ has already been extremely successful with iTV online games in Europe. Significant iTV opportunities exist in the U.S. as
well. The rapid deployment of digital set-top boxes will contribute to significant growth of new
online and interactive gaming applications.
Home Networking
Today, home networking is the connection of two or more PCs for sharing broadband Internet access,
files, and peripherals within a local area network (LAN). A number of households that have established home networks use the high-speed access to engage in multi-player gaming. The
establishment of feature-rich home networks capable of delivering new services and products into the
home will facilitate the delivery of online gaming applications. It is envisioned that these gaming
applications will be delivered to multiple platforms including PCs, game consoles, PDAs, Internet
appliances, and interactive TV.
The leading game console manufacturers (Microsoft, Nintendo, and Sony) are not playing games.
The stakes are much higher than a simple fight for the hearts and minds of little Johnny and Jill.
There is a growing belief that products of this type will eventually take over the low-end PC/home-entertainment market because there is more than enough processing power in these new boxes. For
the casual Web surfer or for someone who is watching TV and needs more information about a
product on a Web site, this kind of box is desirable. It can replace what we do with PCs and TVs
today and provide a much wider range of functionality. Although the game industry is forecast to be
in the tens of billions, the impact may be much greater than the revenue it generates.
Game systems are likely to become hybrid devices used for many forms of digital entertainment
including music, movies, Web access, and interactive television. The first generations of machines to
offer these functions hold the potential to drive greater broadband adoption into the consumer home.
Digital Video/Versatile Disc (DVD)
The digital versatile disc, or DVD, is a relatively new optical disc technology that is rapidly replacing VHS tapes, laser discs, video game cartridges, audio CDs, and CD-ROMs. The DVD-Video
format is doing for movies what the compact disc (CD) did for music. Originally the DVD was
known as the digital video disc. But as applications beyond movies emerged, spanning PCs, gaming consoles, and audio, DVD became known as the digital versatile disc. Today the DVD is
making its way into home entertainment and PCs with support from consumer electronics companies, computer hardware companies, and music and movie studios.
The Birth of the DVD
The DVD format came about in December 1995 when two rival camps ended their struggle over the
format of next-generation compact discs (CDs). Sony and Philips promoted their MultiMedia
Compact Disc format while Toshiba and Matsushita offered their Super Density format. Eventually
the two camps agreed on the DVD format. Together they mapped out how digitally stored information could work in both computer-based and consumer electronic products.
Although it will serve as an eventual replacement for CDs and VCRs, DVD is much more than
the next-generation compact disc or VHS tape player. DVD not only builds upon many of the
advances in CD technology, it sounds a wake-up call for the motion picture and music industries to
prepare their intellectual property for a new era of digital distribution and content.
DVD Format Types
The three application formats of DVD include DVD-Video, DVD-Audio, and DVD-ROM.
The DVD-Video format (commonly called “DVD”) is by far the most widely known. DVD-Video is principally a video and audio format used for movies, music concert videos, and other
video-based programming. It was developed with significant input from Hollywood studios and is
intended to be a long-term replacement for the VHS videocassette as a means for delivering films
into the home. DVD-Video discs are played in a machine that looks like a CD player connected to a
TV set. This format first emerged in the spring of 1997 and is now considered mainstream, having
passed the 10% milestone adoption rate in North America by late 2000.
The DVD-Audio format features high-resolution, two-channel stereo and multi-channel (up to
six discrete channels) audio. The format made its debut in the summer of 2000 after copy protection
issues were resolved. DVD-Audio titles are still very few in number and have not reached mainstream status, even though DVD-Audio and DVD-Video players are widely available. This is due
primarily to the existence of several competing audio formats in the market.
DVD-ROM is a data storage format developed with significant input from the computer industry.
It may be viewed as a fast, large-capacity CD-ROM. It is played back in a computer’s DVD-ROM
drive. It allows for data archival and mass storage as well as interactive and/or web-based content.
DVD-ROM is a superset of DVD-Video. If implemented according to the specifications, DVD-Video
discs will play with all the features in a DVD-ROM drive, but DVD-ROM discs will not play in a
DVD-Video player. (No harm will occur. The discs will either not play, or will only play the video
portions of the DVD-ROM disc.) The DVD-ROM specification includes recordable versions - either
one time (DVD-R), or many times (DVD-RAM).
At the introduction of DVD in early 1997 it was predicted that DVD-ROM would be more
successful than DVD-Video. However, by mid-1998 more DVD-Video players were being sold, and more DVD-Video titles were available, than their DVD-ROM counterparts. DVD-ROM as implemented so far has
been an unstable device, difficult to install as an add-on and not always able to play all DVD-Video
titles without glitches. It seems to be awaiting the legendary “killer application.” Few DVD-ROM
titles are available and most of those are simply CD-ROM titles that previously required multiple
discs (e.g., telephone books, encyclopedias, large games).
A DVD disc may contain any combination of DVD-Video, DVD-Audio, and/or DVD-ROM
applications. For example, some DVD movie titles contain a DVD-ROM content portion on the same
disc as the movie. This DVD-ROM content provides additional interactive and web-based content
that can be accessed when using a computer with a DVD-ROM drive. And some DVD-Audio titles
are actually DVD-Audio/Video discs that have additional DVD-Video content. This content can
provide video-based bonus programming such as artist interviews, music videos, or a Dolby Digital
and/or DTS surround soundtrack. The soundtrack can be played back by any DVD-Video player in
conjunction with a 5.1-channel surround sound home theater system.
The DVD specification also includes these recordable formats:
DVD-R – DVD-R can record data once, and only in sequential order. It is compatible with
all DVD drives and players. The capacity is 4.7 GB.
DVD-RW – The rewritable/erasable version of DVD-R. It is compatible with all DVD
drives and players.
DVD+R and DVD+RW – DVD+R records data once; DVD+RW is its rewritable/erasable version.
DVD-RAM – Rewritable/erasable by definition.
The last three erasable (or rewritable) DVD formats—DVD-RW, DVD-RAM, and DVD+RW—are
slightly different. Their differences have created mutual incompatibility issues and have led to
competition among the standards. That is, one recordable format cannot be used interchangeably
with the other two recordable formats. And one of these recordable formats is not even compatible
with most of the 17 million existing DVD-Video players. This three-way format war is similar to the
VHS vs. Betamax videocassette format war of the early 1980s. This incompatibility along with the
high cost of owning a DVD recordable drive has limited the success of the DVD recordable market.
Regional Codes
Motion picture studios want to control the home release of movies in different countries because
cinema releases are not simultaneous worldwide. Movie studios have divided the world into six
geographic regions. In this way, they can control the release of motion pictures and home videos into
different countries at different times. A movie may be released onto the screens in Europe later than
in the United States, thereby overlapping with the home video release in the U.S. Studios fear that
Chapter 7
copies of DVD discs from the U.S. would reach Europe and cut into theatrical sales. Also, studios
sell distribution rights to different foreign distributors and would like to guarantee an exclusive
market. Therefore, they have required that the DVD standard include codes that can be used to
prevent playback of certain discs in certain geographical regions. Hence, each DVD player is given a code
for the region in which it is sold. It will not play discs that are not allowed in that region. Discs
bought in one country may not play on players bought in another country.
A further subdivision of regional codes occurs because of differing worldwide video standards.
For example, Japan is region 2 but uses NTSC video compatible with that of North America (region
1). Europe is also region 2 but uses PAL, a video system not compatible with NTSC. Many European
home video devices including DVD players are multi-standard and can reproduce both PAL and
NTSC video signals.
Regional codes are entirely optional for the maker of a disc (the studio) or distributor. The code
division is based on nine regions, or “locales.” The discs are identified by the region number
superimposed on a world globe. If a disc plays in more than one region it will have more than one
number on the globe. Discs without codes will play on any player in any country in the world. Some
discs have been released with no codes, but so far there are none from major studios. It is not an
encryption system; it is just one byte of information on the disc that identifies nine different DVD
worldwide regions. The regions are:
Region 0 – World-wide; no specific region encoded
Region 1 – North America (Canada, U.S., U.S. Territories)
Region 2 – Japan, Western Europe, South Africa, Middle East (including Egypt)
Region 3 – Southeast Asia, East Asia (including Hong Kong)
Region 4 – Australia, New Zealand, Pacific Islands, Central America, South America
Region 5 – Former Soviet Union, Eastern Europe, Russia, Indian Subcontinent, Africa
(also North Korea, Mongolia)
Region 6 – China
Region 7 – Reserved
Region 8 – Special international venues (airplanes, cruise ships, etc.)
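That single byte can be pictured as a bitmask: one bit per region, with a player checking the bit for the region it was sold in before playing a disc. The sketch below is an illustrative encoding, not the exact on-disc representation (the actual specification is commonly described as storing the flags with inverted polarity, where a set bit marks a prohibited region).

```python
def region_mask(allowed_regions):
    """Pack a set of allowed region numbers (1-8) into one byte."""
    mask = 0
    for region in allowed_regions:
        if not 1 <= region <= 8:
            raise ValueError(f"invalid region: {region}")
        mask |= 1 << (region - 1)
    return mask

def can_play(disc_mask, player_region):
    """A player checks only the bit for the single region it was sold in."""
    return bool(disc_mask & (1 << (player_region - 1)))

us_only = region_mask({1})        # Region 1 disc
multi   = region_mask({1, 2, 4})  # several region numbers on the globe
print(can_play(us_only, 1))       # True
print(can_play(us_only, 2))       # False
print(can_play(0xFF, 6))          # True: a "region 0" disc plays anywhere
```

A disc carrying all eight bits set behaves as the "Region 0" world-wide case above, which is why region-free discs play in any player.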
In hindsight, the attempt at regional segregation was probably doomed to failure from the start.
Some of the region standards proved more complicated to finalize than was originally expected.
There were huge variations in censorship laws and in the number of different languages spoken
across a region. This was one of the reasons why DVD took so long to become established. For
example, it is impossible to include films coded for every country in Region-2 on a single disc. This
led the DVD forum to split the region into several sub-regions. And this, in turn, caused delays in the
availability of Region-2 discs. By the autumn of 1998 barely a dozen Region-2 discs had been
released compared to the hundreds of titles available in the U.S. This situation led to many companies selling DVD players that had been reconfigured to play discs from any region.
For several years now game console manufacturers (Nintendo, Sega, and Sony) have been trying
to stop owners from playing games imported from other countries. Generally, whenever such
regional standards were implemented it took someone only a few weeks to find a way around it,
either through a machine modification or use of a cartridge adapter. In real terms, regional DVD
coding has cost the DVD Forum a lot of money, delayed market up-take, and allowed third-party
companies to make a great deal of money bypassing it.
How Does the DVD Work?
DVD technology is faster and provides storage capacity about six to seven times greater than CD technology. Both disc types share the same physical dimensions: 4.75 inches (12 cm) or 3.1 inches (8 cm) in diameter, and 1.2 mm thick. DVD technology provides movies in multiple languages with multiple-language subtitles. Because only a beam of laser light touches the data portion of a DVD disc, no mechanical part ever contacts the disc during playback, eliminating wear.
Some key DVD features include:
MPEG-2 video compression
Digital Theatre Systems (DTS), Dolby Digital Sound, Dolby AC-3 Surround Sound
Up to eight audio tracks
133 minutes/side video running time (at minimum)
Disc changers
Backward compatibility with CD and/or CD-ROM
Still motion, slow motion, freeze frame, jump-to-scene finding
Interactive/programming-capable (story lines, subtitles)
A DVD system consists of a master (original) copy, a DVD disc, and a player.
The Master
A master disc must be manufactured before a DVD disc can be duplicated. The backbone of any
DVD is the master copy. Once a movie has been transferred from photographic film to videotape, it
must be properly formatted before it can be distributed on a DVD. The mastering process is quite
complicated and consists of these steps:
1. Scan the videotape to identify scene changes, enter scan codes, insert closed-caption
information, and tag objectionable sequences that would be likely targets for parental control.
2. Use variable-bit-rate encoding to compress the video into MPEG-2 format.
3. Compress audio tracks into Dolby AC-3 Surround Sound format.
4. In a process called multiplexing, combine the compressed video and audio into a single
data stream.
5. Simulate the playback of the disc (emulation).
6. Create a data tape with the image of the DVD.
7. Manufacture a glass master which duplicators use to “press discs.”
With the exception of disc copying and reproduction (step 7 above), all the mastering
process steps are completed on high-end workstations.
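The steps above can be sketched as a pipeline of stages. Every function below is an illustrative stub standing in for a real authoring stage, not any actual tool's API.

```python
# Each function is a placeholder for one mastering stage.
def scan_and_tag(tape):            # step 1: scenes, captions, parental tags
    return {"source": tape}

def encode_vbr_mpeg2(scenes):      # step 2: variable-bit-rate MPEG-2 video
    return b"video-es"

def encode_ac3(scenes):            # step 3: Dolby AC-3 audio tracks
    return b"audio-es"

def multiplex(video, audio):       # step 4: interleave into one data stream
    return video + audio

def emulate_playback(stream):      # step 5: simulate the disc before pressing
    assert stream, "empty stream would not play"

def build_disc_image(stream):      # step 6: DVD image written to data tape
    return {"image": stream}

def press_glass_master(image):     # step 7: glass master for the duplicators
    return f"glass master ({len(image['image'])} bytes of payload)"

def master_dvd(videotape):
    scenes = scan_and_tag(videotape)
    stream = multiplex(encode_vbr_mpeg2(scenes), encode_ac3(scenes))
    emulate_playback(stream)
    return press_glass_master(build_disc_image(stream))

print(master_dvd("feature-film tape"))
```

The point of the sketch is the ordering: encoding and multiplexing must finish, and emulation must pass, before any physical master is pressed.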
One of the key steps to making a master DVD disc from videotape is the encoding process (step
2 above). The encoding process uses compression to eliminate repetitive information. For instance,
much of a movie picture remains nearly the same from frame to frame. It may have a redundant
background (a cloudless sky, a wall in a room, etc.) and the same foreground. If left in its raw state,
capturing and coding these repetitive scenes on digital video would be prohibitive. Uncompressed, a feature-length movie would require hundreds of gigabytes of storage capacity, far more than a DVD's 4.7 GB per layer. That's
where encoding comes in.
Encoding is a complicated process that identifies and removes redundant segments in a movie frame.
Using video encoder ICs, chipsets, and sophisticated software, the process makes several passes of the
video to analyze, compare, and remove repetitive sequences. The process can eliminate more than 97%
of the data needed to accurately represent the video without affecting the quality of the picture.
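A toy version of this temporal-redundancy removal compares consecutive frames block by block and keeps only the blocks that changed. Real MPEG-2 encoders go much further (motion compensation, DCT-based spatial compression, quantization), so treat this purely as a conceptual sketch.

```python
def changed_blocks(prev, curr, block=8, threshold=0):
    """Return {(row, col): pixels} for the blocks that differ between frames."""
    h, w = len(curr), len(curr[0])
    diffs = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            a = [row[bx:bx + block] for row in prev[by:by + block]]
            b = [row[bx:bx + block] for row in curr[by:by + block]]
            delta = sum(abs(p - q) for ra, rb in zip(a, b)
                        for p, q in zip(ra, rb))
            if delta > threshold:
                diffs[(by, bx)] = b
    return diffs

# A static background with one moving 8x8 "object": only that one block
# of the second frame needs to be stored.
prev = [[0] * 32 for _ in range(32)]
curr = [row[:] for row in prev]
for y in range(8, 16):
    for x in range(16, 24):
        curr[y][x] = 255
print(len(changed_blocks(prev, curr)))  # 1 of the 16 blocks changed
```

In this 32×32 example, 15 of the 16 blocks are pure redundancy, which is the intuition behind the better-than-97% reduction quoted above.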
A DVD encoder uses more data to store complex scenes and less data for simple scenes. Depending on the complexity of the picture, the encoder constantly varies the amount of data needed to store
different sequences throughout the length of a movie. The average data rate for video on DVD is
about 3.7 Mb/s.
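A quick check of these figures shows why the average rate matters. Assuming (our assumption, for illustration) roughly 0.4 Mb/s for a Dolby Digital audio track on top of the 3.7 Mb/s average video rate:

```python
def disc_usage_gb(video_mbps, audio_mbps, minutes):
    """Decimal gigabytes consumed by an average-rate A/V stream."""
    total_bits = (video_mbps + audio_mbps) * 1e6 * minutes * 60
    return total_bits / 8 / 1e9  # disc capacities are quoted in decimal GB

# A 133-minute movie at the 3.7 Mb/s average video rate plus the
# assumed 0.4 Mb/s audio allowance:
used = disc_usage_gb(3.7, 0.4, 133)
print(f"{used:.2f} GB used of the 4.7 GB single-layer capacity")
```

At these rates the movie comes in around 4.1 GB, comfortably inside a single 4.7 GB layer; a higher sustained bit rate or extra audio tracks would push a long feature onto a dual-layer disc.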
The DVD Disc
DVD discs can be manufactured with information stored on both sides of the disc. Each side can
have one or two layers of data. A double-sided, double-layered disc effectively quadruples the
standard DVD storage density to 17 GB. A double-sided DVD disc holds 17 GB of data (compared with a CD's 680 MB) because it is actually two tightly packed, dual-layered discs glued back-to-back.
With 17 GB, the DVD provides 26 times the storage capacity of a typical audio CD that has 680
MB. This is the equivalent of four full-length feature movies, or 30 hours of CD quality audio, all on
one disc. The DVD format is a nearly perfect medium for storing and delivering movies, video,
audio, and extremely large databases.
In late 1999 Pioneer Electronics took optical disc density storage one step further when it
demonstrated a 27.4-GB disc that used a 405-nm violet laser. The high-density disc has a track pitch
about half that of a 17-GB disc. Several vendors are also developing blue-laser technology to provide superior-quality video. This technology will be capable of achieving higher data densities, with 12- to 30-GB capacities.
DVD and CD discs are developed and replicated in a similar manner due to their physical
similarities. A DVD disc is made of a reflective aluminum foil encased in clear plastic. Stamping the
foil with a glass master forms the tightly wound spiral of tiny data pits. In the case of a single-sided
disc, the stamped disc is backed by a dummy layer that may carry advertisements, information, or entertainment. For a double-sided disc, two halves, each stamped with information, are bonded back-to-back. The stamping technique is a well-understood process, derived in part from CD-ROM manufacturing and in part from the production of laser discs.
Replication and mastering are jobs most often done by large manufacturing plants that also
replicate CDs. These companies estimate that the cost of a single-sided, single-layer DVD will
eventually be about the same as that of a CD—approximately $0.50. This low cost compares favorably to the roughly $2.50 cost to make and distribute a VHS tape and the $8.00 cost to replicate a
laser disc.
There are three reasons for the greater data capacity of a DVD disc:
1. Smaller pit size – DVDs have a smaller pit size than CDs. Pits are the slight depressions, or
dimples, on the surface of the disc that allow the laser pickup to distinguish between the
digital 1s and 0s.
2. Tighter track spacing – DVDs feature tighter track spacing (i.e., track pitch) between the
spirals of pits. In order for a DVD player to read the smaller pit size and tighter track
spacing of the DVD format, a different type of laser with a smaller beam of light is required.
This is one of the major reasons why CD players cannot read DVDs, while DVD players can
read audio CDs.
3. Multiple layer capability – DVDs may have up to 4 layers of information, with two layers
on each side. To read information on the second layer (on the same side), the laser focuses
deeper into the DVD and reads the pits on the second layer. When the laser switches from
one layer to another, it is referred to as the “layer switch,” or the “RSDL (reverse spiral dual
layer) switch”. To read information from the other side of the DVD, almost all DVD players
require the disc to be flipped.
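The first two factors can be quantified from the commonly published geometry figures (track pitch 1.6 µm vs. 0.74 µm; minimum pit length 0.83 µm vs. 0.40 µm; treat these as nominal approximations):

```python
# Nominal pit/track geometry in microns (approximate published values).
CD_TRACK_PITCH, CD_MIN_PIT = 1.6, 0.83
DVD_TRACK_PITCH, DVD_MIN_PIT = 0.74, 0.40

# Gain along the track (shorter pits) times gain across it (tighter tracks):
geometry_gain = (CD_MIN_PIT / DVD_MIN_PIT) * (CD_TRACK_PITCH / DVD_TRACK_PITCH)
print(f"geometry alone: ~{geometry_gain:.1f}x denser")  # roughly 4.5x

# The remaining gap to the ~7x single-layer capacity ratio (4.7 GB vs
# 650 MB) comes from DVD's more efficient modulation and error correction.
capacity_ratio = 4.7 / 0.65
print(f"single-layer capacity ratio: ~{capacity_ratio:.1f}x")
```

Smaller geometry alone yields roughly a 4.5× density gain; more efficient coding overhead accounts for the rest of the roughly 7× per-layer capacity advantage over CD.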
Based on DVD’s dual-layer and double-sided options, there are four disc construction formats:
1. Single-Sided, Single-Layered – Also known as DVD-5, this simplest construction format
holds 4.7 GB of digital data. The “5” in “DVD-5” signifies the nearly 5 GB worth of data
capacity. Compared to 650 MB of data on CD, the basic DVD-5 has over seven times the
data capacity of a CD. That’s enough information for approximately two hours of digital
video and audio for DVD-Video or 74 minutes of high-resolution music for DVD-Audio.
2. Single-Sided, Dual-Layered – DVD-9 construction holds about 8.5 GB. DVD-9s do not
require manual flipping; the DVD player automatically switches to the second layer in a
fraction of a second by re-focusing the laser pickup on the deeper second layer. This
capability allows for uninterrupted playback of long movies up to four hours. DVD-9 is
frequently used to put a movie and its bonus materials or its optional DTS Surround Sound
track on the same DVD-Video disc.
3. Double-Sided, Single-Layered – Known as DVD-10, this construction features a capacity
of 9.4 GB. DVD-10s are commonly used to put a widescreen version of a movie on one side
and a full frame version of the same movie on the other side. Almost all DVD players
require you to manually flip the DVD which is why the DVD-10 is called the “flipper” disc.
(There are a few DVD players that can perform flipping automatically.)
4. Double-Sided, Dual-Layered – The DVD-18 construction can hold approximately 17 GB
(almost 26 times the data capacity of a CD), or about 8 hours of video and audio as a DVD-Video. Think of DVD-18 as a double-sided DVD-9 where up to four hours of uninterrupted
video and audio can be stored on one side. Content providers usually choose two DVD-9s
rather than a single DVD-18 because DVD-18s cost far more to produce.
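The four constructions can be summarized in a small lookup table; `needs_flipping` is just an illustrative helper name reflecting the rule of thumb above.

```python
# Nominal capacities of the four DVD disc constructions.
DVD_FORMATS = {
    "DVD-5":  {"sides": 1, "layers_per_side": 1, "gb": 4.7},
    "DVD-9":  {"sides": 1, "layers_per_side": 2, "gb": 8.5},
    "DVD-10": {"sides": 2, "layers_per_side": 1, "gb": 9.4},
    "DVD-18": {"sides": 2, "layers_per_side": 2, "gb": 17.0},
}

def needs_flipping(fmt):
    """Almost all players require manually flipping two-sided discs."""
    return DVD_FORMATS[fmt]["sides"] == 2

for name, spec in DVD_FORMATS.items():
    note = "manual flip" if needs_flipping(name) else "no flip"
    print(f"{name}: {spec['gb']} GB, {note}")
```

Note that dual layers (DVD-9) avoid the flip because the laser refocuses, while second sides (DVD-10, DVD-18) generally do not.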
A DVD disc can be used for data storage in a PC by using a DVD-ROM drive. Using a similar
format, each DVD can store up to 17 GB of data compared to a CD-ROM disc which stores 650 MB
of data. While a single layer of a DVD disc can hold 4.7 GB of data, greatly increased storage
capacity is accomplished by using both sides of the disc and storing two layers of data on each side.
The amount of video a disc holds depends on how much audio accompanies it and how heavily the
video and audio are compressed. The data is read using 635-nm and 650-nm wavelengths of red laser light.
The DVD Player
A DVD player makes the DVD system complete. The player reads data from a disc, decodes audio
and video portions, and sends out those signals to be viewed and heard.
DVD units are built as individual players that work with a television or as drives that are part of a
computer. DVD-Video players that work with television face the same standards problem (NTSC
versus PAL) as videotape and laser disc players. Although MPEG video on DVD is stored in digital
format, it is formatted for one of these two mutually incompatible television systems. Therefore,
some players can play only NTSC discs, some can only play PAL discs, but some will play both. All
DVD players sold in PAL countries play both. A very small number of NTSC players can play PAL discs.
Most DVD PC software and hardware can play both NTSC and PAL video. Some PCs can only
display the converted video on the computer monitor, but others can output it as a video signal for a TV.
Most players and drives support a standard set of features including:
Language choice
Special effects playback (freeze, step, slow, fast, scan)
Parental lock-out
Random play and repeat play
Digital audio output (Dolby Digital)
Compatibility with audio CDs
Established standards also require that all DVD players and drives read dual-layer DVD discs.
All players and drives will play double-sided discs, but the discs must be manually flipped over to
see and/or hear side two. No manufacturer has yet announced a model that will play both sides. The
added cost is probably not justifiable at this point in DVD’s history.
Components of a DVD Player
DVD players are built around the same fundamental architecture as CD-ROM drives/players. Major
components of a DVD player include:
Disc reader mechanism – Includes the motor that spins the disc and the laser that reads the
information from it.
DSP (digital signal processor) IC – Translates the laser pulses back into an electrical form that
other parts of the decoder can use.
Digital audio/video decoder IC – Decodes and formats the compressed data on the disc
and converts data into superior-quality audio and video for output to TVs and stereo
systems. This includes the MPEG video decoder and audio decoder. It also includes a DAC
for audio output and an NTSC/PAL encoder for video output.
8-, 16-, 32-bit Microcontroller – Controls operation of the player and translates user inputs
from remote control or front panel into commands for the audio/video decoder and disc reader.
Copy protection descrambler
Network Interface ICs – To connect to other electronic devices in the home, the DVD
player supports a number of network interfaces such as IEEE 1394, Ethernet, and HomePNA.
Memory – SRAM, DRAM, flash
Figure 7.1 shows the different components of a typical DVD player.
In operation, a visible laser diode (VLD) beams 635- or 650-nm wavelength red light at a DVD disc spun by a spindle motor at roughly eight times the speed of a typical
CD-ROM. An optical pickup head governed by a spindle-control chip feeds data to a DSP which
feeds it to an audio/video decoder.
The position of the reading point is controlled by the microcontroller. From the audio/video
decoder, MPEG-2 data is separated and synchronized (demultiplexed) into audio and video streams.
The stream of video data passes through an MPEG-2 decoder where it is formatted for display on a
monitor. The audio stream passes through either an MPEG-2 or Dolby Digital AC-3 decoder where it
is formatted for home audio systems. The decoder also formats on-screen displays for graphics and
performs a host of other features.
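The demultiplexing step can be pictured as routing packets by their stream IDs. The sketch below uses the conventional MPEG program-stream IDs (0xE0 for the first video stream, private stream 0xBD for AC-3 audio) but greatly simplifies the real 2048-byte pack structure.

```python
# Conventional MPEG program-stream IDs: 0xE0 = first video stream,
# 0xBD = private stream 1 (carries Dolby Digital AC-3 on DVD).
VIDEO_ID, AC3_ID = 0xE0, 0xBD

def demultiplex(packets):
    """Route (stream_id, payload) packets to the two decoder queues."""
    video, audio = [], []
    for stream_id, payload in packets:
        if stream_id == VIDEO_ID:
            video.append(payload)   # feeds the MPEG-2 video decoder
        elif stream_id == AC3_ID:
            audio.append(payload)   # feeds the Dolby Digital decoder
    return video, audio

stream = [(0xE0, "v0"), (0xBD, "a0"), (0xE0, "v1"), (0xE0, "v2"), (0xBD, "a1")]
video, audio = demultiplex(stream)
print(len(video), len(audio))  # 3 2
```

The interleaving on disc is what lets one read head feed two decoders at once; the demultiplexer simply restores the two elementary streams in order.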
Figure 7.1: DVD Player Block Diagram
First-generation DVD players contained many IC and discrete components to accomplish the
coding, decoding, and other data transmission functions. Chipsets in second- and third-generation
DVD players have greatly reduced the component count on board these systems. Function integration
in fewer integrated circuits is driving prices down in all DVD units. The first DVD players used up to
12 discrete chips to enable DVD playback, mostly to ensure quality by using time-tested chips
already on the market. So far, function integration has reduced the number of chips in some players
to six. Audio and video decoding, video processing, NTSC/PAL encoding, and content scrambling
system (CSS) functions are now typically handled by one chip. Some back-end solutions have also
been integrated into the CPU. In addition, front-end functions such as the channel controller, servo
control, and DSP are also being integrated into a single chip.
With the integration of front- and back-end functions, single-chip DVD playback solutions that require only an additional preamplifier, memory, and audio DAC are in the works. This component
integration is essential for saving space in mini-DVD systems and portable devices. After those chips
become widely available, costs will drop. In 2004 an average DVD player bill of materials will fall
below $75.
DVD Applications
DVD discs and DVD players/drives serve two major markets - consumer (movie sales and rentals)
and PCs (games, database information, etc.). In each respective market DVD has the potential to
replace older VCR and CD-ROM technologies. DVD improves video and audio and provides widescreen format, interactivity, and parental control. Because of these advantages the DVD is likely to
replace VHS tapes and laser discs. With only a slightly higher cost than CD-ROMs, and because
audio CDs and CD-ROMs can be played in DVD-ROM drives, the DVD-ROM will gain presence in
PCs. DVD-ROMs will provide high-memory storage in PCs.
DVDs deliver brilliance, clarity, range of colors, and detailed resolution—up to 600 horizontal
lines across a TV screen vs. only 300 lines provided by broadcast standards. They also remove
fuzziness from the edge of the screen. DVD systems are optimized for watching movies and playing
games on a wide-screen format, with aspect ratio of 16:9. They provide superior audio effects by
incorporating theatre-quality Dolby Digital AC-3 Surround Sound or MPEG-2 audio. And DVDs
provide the interactivity that videotapes couldn’t. Because of the storage capacity and the digital
signals, consumers can randomly access different segments of the disc. DVDs provide a multi-angle
function for movies, concerts, and sporting events with replay capabilities. They allow storage of
multiple soundtracks and closed-captioning in multiple languages. And DVDs enable parents to
control what their children watch. Parents can select appropriate versions of movies, depending on
the ratings.
DVD Market Numbers, Drivers and Challenges
DVD-Audio players and DVD recorders represent the fastest growing consumer electronics product
in history. Their recent introduction will propel sales of DVD players to new heights over the next
several years. According to the Cahners In-Stat Group, the DVD market grew from nothing in 1996
to more than 28 million units shipped worldwide in 2001. Shipments are expected to exceed 60
million units in 2004.
In 2004 IDC predicts:
DVD-ROM (PC) drive shipments will reach about 27.8 million units
DVD-R/RW will reach 3.2 million units
DVD-Audio will reach 4 million units
DVD game console will reach 9.5 million units
DVD players will reach 21 million units
Adoption of DVD players is growing quickly, and analysts state that unit shipments of DVD
players exceeded total VCR shipments in 2003. In the U.S. alone, 2003 shipments of DVD players
exceeded 20 million units, compared with 18 million VCRs. The installed base
for DVD players will grow to 63.5 million units by the end of 2004. This represents the rise of the
installed base from 5.5% of the estimated 100 million total U.S. households in 1999 to 63.5% in
2004. Worldwide shipments for DVD players exceeded 51 million units in 2003 and are predicted to
exceed 75 million units in 2004. The total market for DVD ROM drives in PCs reached 190 million
units in 2003 as consumers started replacing CD drives. Some 400 million DVDs were sold in 2003
compared with roughly 227 million in 2000.
The market is currently experiencing high volume, declining prices, broadening vendor support,
and more interactive features (gaming, Internet, etc.) which will help the DVD market.
Key market drivers for the success of DVD players include:
DVD-Video players are now mainstream because of low cost – The adoption rate for
DVD-Video players surpasses that of any consumer electronics device to date and has long
since passed the “early adopter” stage. With prices as low as $100 for a stand-alone DVD-Video player, DVD-Video is now a mainstream format.
High consumer awareness
Pure digital format – The video and audio information stored on a DVD-Video is pure
digital for a crystal clear picture and CD-quality sound. It is the ideal format for movie
viewing, collecting, and distribution.
Digital Video/Versatile Disc (DVD)
Improved picture quality and color – The DVD format provides 480 (up to 600) horizontal lines of resolution. This is a significant improvement over 260 horizontal lines of
resolution of standard VHS. The color is brilliant, rich, and saturated, accurately rendering
skin tones. With the right equipment and set-up you can enjoy a picture that approaches the
quality of film.
Aspect ratio – True to its promise of delivering the cinematic experience, DVD-Video can
reproduce the original widescreen theatrical formats of movies as they are shown in movie
theaters. DVD-Video can deliver the 1.85:1, 16:9, or the 2.35:1 aspect ratio. Of course,
DVD-Video can also provide the “full-frame” 1.33:1 or 4:3 aspect ratio that represents the
standard NTSC television screen (the standard TV format for the U.S. and Canada).
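The aspect ratios above determine how much of a 4:3 screen a letterboxed film actually fills. As a rough illustration (the display dimensions below are made-up example values, not from the text):

```python
# Illustrative letterbox arithmetic for the aspect ratios mentioned above.

def letterbox_height(display_w, display_h, film_aspect):
    """Visible image height when a film of the given aspect ratio is
    letterboxed onto a display, preserving the film's shape."""
    image_h = display_w / film_aspect  # full display width is used
    return min(display_h, image_h)

# A hypothetical 4:3 display, 640 wide by 480 lines.
w, h = 640, 480
for name, aspect in [("4:3 full frame", 4 / 3),
                     ("1.85:1 widescreen", 1.85),
                     ("2.35:1 scope", 2.35)]:
    used = letterbox_height(w, h, aspect)
    print(f"{name}: about {used:.0f} of {h} lines carry picture")
```

The wider the film format, the fewer scan lines carry picture on a 4:3 set, which is why anamorphic widescreen transfers matter on 16:9 displays.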
State-of-the-art surround sound – All DVD-Videos include Dolby Digital surround sound
with up to six or seven channels of surround sound (i.e., Dolby Digital 5.1-channel
and 6.1-channel surround sound). The DVD specification requires Dolby Digital 2.0, 2-channel
audio to be encoded on every DVD-Video disc for countries using the NTSC TV standard.
This 2-channel soundtrack allows Dolby Surround Pro-Logic to be encoded in the stereo
audio channels for backward compatibility with pre-existing Dolby Surround Pro-Logic
sound systems. Additionally, some DVDs contain an additional alternative surround sound
format called DTS Digital Surround. They can also support the newer extended surround
formats such as THX Surround EX, DTS-ES matrix, and DTS-ES discrete 6.1.
Multiple language dialogues and soundtracks - Many DVD movies are distributed with
multiple language options (up to a maximum of 8), each with its own dialogue. Closed
captioning and/or subtitles are also supported with up to 32 separate closed caption and/or
subtitle/karaoke tracks encoded into the DVD disc.
Multiple angles option – DVDs can support the director’s use of simultaneous multiple
camera angles (up to 9) to put a new spin on the plot.
Bonus materials – Many DVD-Video movie releases come with bonus materials that are
normally not included in the VHS version. These might include:
• Segments on the making of a movie
• Cast and crew interviews
• Theatrical trailers
• TV spots
• Director’s audio commentary
• Music videos
• Cast and crew biographies
• Filmographies
Some bonus features are actually DVD-ROM features, where the same disc carries DVD-ROM application content. Such DVD-ROM content (e.g., full screenplay text cross-referenced
with video playback, web access, and games) requires a computer with a DVD-ROM drive
for viewing. Overall, bonus features make movie collecting on the DVD-Video format all
the more rewarding. This material can contain multilingual identifying text for title name,
album name, song name, actors, etc. Also there are menus varying from a simple chapter
listing to multiple levels with interactive features.
Standard audio CDs may be played in DVD players.
Random access to scenes – Movies on DVDs are organized into chapters similar to how
songs are recorded on tracks of an audio CD. You can jump to your favorite scenes directly
by using the “skip chapter” button on the DVD player, by entering the chapter number, or by
using the DVD disc’s menu feature. And there is no more rewinding of videotapes. DVD-Video has a unique feature called “seamless branching” where different video segments can
be pre-programmed to combine in various combinations (reorder the sequence of scene
playback). This allows for the same DVD-Video disc to contain different versions of the
same film, like an original theatrical release version and a director’s cut version of the same
film. For example, if you choose the “original theatrical release” version from the main
menu, the DVD-Video disc will play the original version of the movie by playing the same
scenes as shown in the movie theaters. If you choose the “director’s cut” version from the
main menu, the DVD-Video disc will play back the director’s cut of the movie. This may
skip to scenes that were previously unreleased during certain segments, then automatically
branch back to the common scenes shared with the theatrical version. These scene transitions are nearly instantaneous and transparent to the viewer.
Parental control – The DVD format offers parents the ability to lock out viewing of certain
materials by using the branching function. And different versions of the same movie with
different MPAA ratings (e.g., G, PG, PG-13, R, and NC-17) can be stored on the same disc.
Durable disc format – The DVD disc format offers durability and longevity similar to that
of audio CDs. With proper handling and care the DVD disc should last a very long time.
There is no wear and tear to worry about since there is no contact between the laser pickup
and the DVD disc. Unlike VHS videotapes, there is virtually no deterioration or physical
wear with repeated use. With its durability and small size, DVD is a great format in which
to collect movies and other video titles. The discs are resistant to heat and are not affected
by magnetic fields.
Content availability – With over 10,000 titles available on DVD-Video, there is a wide
selection of DVD movies available at national, local and independent video rental stores.
There are even on-line merchants that rent DVDs. DVDs are here to
stay and have become the new medium of choice for home viewing and movie collecting.
High (and multiple) storage capacities – Over 130 minutes of high-quality digital video
are available on a single-layered, single-sided disc. Over four hours are available on a
single-layered, dual-sided disc or dual-layered, single-sided disc. And over eight hours are
theoretically possible with the dual-layered, dual-sided format.
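The runtimes quoted above can be cross-checked against the standard nominal DVD capacities (4.7 billion bytes for a single-sided, single-layer disc; roughly double and quadruple for the larger formats). A quick back-of-the-envelope calculation:

```python
# Average sustained bitrate implied by capacity and runtime, assuming
# the standard nominal DVD capacities (in billions of bytes).

def avg_mbps(capacity_gb, minutes):
    """Average bitrate (megabits per second) to fill the disc exactly."""
    bits = capacity_gb * 1e9 * 8
    return bits / (minutes * 60) / 1e6

print(avg_mbps(4.7, 133))   # single-sided, single-layer
print(avg_mbps(8.5, 240))   # single-sided, dual-layer
print(avg_mbps(17.0, 480))  # dual-sided, dual-layer
```

All three work out to roughly 4.7 Mbit/s of multiplexed video and audio, comfortably within what DVD-Video's MPEG-2 encoding can deliver at high quality.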
Some of the challenges limiting the success of DVD products are:
Content (copy) protection and encryption – Copy protection is a primary obstacle that the
DVD industry must overcome for rapid acceptance and growth. Movie studios do not want
to see highly valuable content copied illegally and distributed via PCs and the Internet—like
what happened with MP3 music. The industry has developed CSS as a deterrent to illegal
copying. A CSS license gives a company permission to build a device with decrypt or anti-scrambling code. Built-in copy protection can also interfere with some display devices, line
doublers, etc.
The failure of DIVX – DIVX was essentially a limited-use, pay-per-view DVD technology.
It was marketed as a more affordable DVD format. With the financial backing of Circuit
City retail stores, it allowed a user to purchase a DIVX disc at a minimal cost and view its
contents for an unlimited number of times within a 48-hour period. Once the 48 hours were
up, the user was charged for each additional use. The DIVX machines had a built-in modem
which automatically called the central billing server about twice a month to report player
usage. Users had the option to purchase the right to unlimited viewing for a sum equal to the
cost of a DVD-Video disc. Given that a DIVX player was basically a DVD-Video player
with additional features to enable a pay-per-view, it is not surprising that it was capable of
playing standard DVD-Video discs. Obviously a standard DVD player did not allow viewing
of a DIVX disc.
In addition to the built-in modem, the typical DIVX player contained decrypting
circuitry to read the DIVX discs which were encoded with a state-of-the-art algorithm
[Triple-DES]. Also, the player read the unique serial number off the disc, which is recorded
in an area known as the Burst Cutting Area (BCA) located in the innermost region of the
disc. The BCA can be used to record up to 188 bytes of data after the disc has been
manufactured. DIVX used this number to keep track of the viewing period.
Some consumers balked at the idea of having two different standards for digital discs.
Others objected to having to keep paying for something they had already purchased. Still,
DIVX appeared to be gaining acceptance among consumers with sales of the enhanced
players reportedly matching those of standard DVD units. Then its backers pulled the plug
on the format in mid-1999, blaming the format’s demise on inadequate support from studios
and other retailers. Its fate was effectively sealed when companies, including U.S. retail
chain Blockbuster, announced plans to rent DVDs to consumers instead of DIVX discs.
Though it didn’t last long, DIVX played a useful role in creating a viable rental market essential
for DVD-Video to become as popular as VHS. Furthermore, its BCA feature offered some interesting
possibilities for future distribution of software on DVD-ROM discs. For example, it could mean an
end to forcing consumers to manually enter a long string of characters representing a product’s serial
number during software installation. A unique vendor ID, product ID, and serial number can be
stored as BCA data and automatically read back during the installation process. Storing a product’s
serial number as BCA data could also deter pirating by making it almost impossible to install a
software product without possessing an authentic copy of the disc.
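The vendor ID, product ID, and serial number scheme described above could be realized as a small binary record in the BCA's 188-byte payload. The field layout below is entirely hypothetical, invented for illustration; the text specifies only the 188-byte capacity, not any record format:

```python
import struct

# Hypothetical BCA record layout (NOT a real standard): a 2-byte vendor
# ID, 2-byte product ID, and 8-byte serial number, big-endian, placed at
# the start of the up-to-188-byte Burst Cutting Area payload.
BCA_MAX_BYTES = 188

def pack_bca(vendor_id, product_id, serial):
    data = struct.pack(">HHQ", vendor_id, product_id, serial)
    assert len(data) <= BCA_MAX_BYTES
    return data

def read_bca(data):
    vendor_id, product_id, serial = struct.unpack_from(">HHQ", data, 0)
    return {"vendor": vendor_id, "product": product_id, "serial": serial}

# An installer could read these fields from the disc instead of asking
# the user to type a long serial-number string.
blob = pack_bca(0x1234, 0x0042, 900913)
print(read_bca(blob))
```

Because the record is written after mastering, each pressed disc can carry a unique serial even though the main data area is identical across the run.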
DVD does have its limitations:
There is consumer confusion due to DVD rewritable format incompatibility
DVD capabilities increasingly overlap with those of other devices such as gaming consoles.
There are region code restrictions. By encoding each DVD-Video disc and DVD players
with region codes, only similarly coded software can be played back on DVD hardware.
Hence, a DVD-Video coded “region 1” can be played back only by a DVD player that is
compatible with region 1. This allows movie studios to release a DVD-Video of a movie
while preparing the same movie for theatrical release of that movie overseas. Regional
lockout prevents playback of foreign titles by restricting playback of titles to the region
supported by the player hardware.
Poor compression of video and audio can result in compromised performance. When 6-channel discrete audio is automatically down-mixed for stereo/Dolby Surround, the result
may not be optimal.
DVD is not an HDTV standard.
Reverse play at normal speed is not yet possible.
The number of titles will be limited in the early years.
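The region-code restriction mentioned above is commonly described as an 8-bit mask, one bit per region, checked by the player against its own region. The sketch below uses the convention that a set bit grants permission; real discs are usually described with the inverse encoding (a cleared bit grants permission), but the matching logic is the same:

```python
# Simplified sketch of DVD regional lockout as a permission bitmask.
# Convention here: bit (region - 1) set means playback is permitted.

def disc_mask(*regions):
    """Build a permission mask from region numbers 1-8."""
    mask = 0
    for r in regions:
        mask |= 1 << (r - 1)
    return mask

def can_play(mask, player_region):
    """True if a player coded for player_region may play this disc."""
    return bool(mask & (1 << (player_region - 1)))

r1_disc = disc_mask(1)               # a "region 1" release
region_free = disc_mask(*range(1, 9))  # all regions permitted

print(can_play(r1_disc, 1))      # region 1 player: allowed
print(can_play(r1_disc, 2))      # region 2 player: refused
print(can_play(region_free, 2))  # region-free disc: allowed anywhere
```

This is why a "region 1" title plays only on region 1 hardware, while region-free discs play everywhere.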
Some emerging DVD player technology trends are:
DVD-Video players are combining recordability features.
DVD players are combining DVD-Audio capabilities, enabling DVD players to achieve
higher-quality audio. DVD-Audio provides a major advance in audio performance. It
enables the listener to have advanced resolution stereo (2 channels), multi-channel surround
sound (6 channels), or both.
The interactive access to the Internet feature is gaining prominence in DVD players and
gaming consoles. DVD players contain almost all of the circuitry required to support an
Internet browser, enabling many new opportunities for DVD. The connection to the Internet
would allow the customer to get current prices, order merchandise, and communicate via
email with a personal shopper.
PCs are adopting DVD read and write drives.
A networked DVD player allows the user to play DVD movies across the home network in
multiple rooms.
Gaming functionality is being added.
Convergence of Multiple Services
The functionality provided by DVD players and DVD-ROM devices will converge and be incorporated within existing consumer electronic appliances and PCs. Several companies are producing
DVD player/Internet appliances in set-top boxes, and recent video game consoles from Sony and
Microsoft allow users to play DVD movies on their units. TV/DVD combinations are becoming as
prevalent as car entertainment systems were. DVD is being combined with CD recorders, VCRs, or
hard disc drives to form new products. Several companies have introduced DVD home cinema
systems that integrate a DVD Video player with items such as a Dolby digital decoder, CD changer,
tuner, and cassette deck. Some consumer device manufacturers are integrating DVD functionality
within existing set-top boxes and gaming consoles. DVD-Video players are combining recordability
features. And a few consumer electronic manufacturers have introduced integrated DVD players and
receivers/amplifiers. This device provides a one-box gateway that will provide excellent quality
surround sound and digital video and audio playing.
Convergence of DVD-ROM Devices and PCs
Many mid- and high-priced new desktop and notebook computers are being delivered with DVD-ROM drives that will also play DVD-Videos—either on the computer monitor or on a separate TV
set. In order to play DVD-Videos a computer must be equipped to decode the compressed audio and
video bitstream from a DVD. While there have been attempts to do this in software with high-speed
processors, the best implementations currently use hardware, either a separate decoder card or a
decoder chip incorporated into a video card.
An unexpected byproduct of the ability of computers to play DVD videos has been the emergence of higher-quality digital video displays suitable for large-screen projection in home theaters.
Computers display in a progressive scan format where all the scan lines are displayed in one sweep,
usually in a minimum of 1/60th of a second. Consumer video uses an interlaced scan format
where only half the scan lines (alternating odd/even) are displayed on a screen every 1/60th of a
second (one “field”). The complete image frame on a TV takes 1/30th of a second to be displayed.
The progressive scan display is referred to as “line-doubled” in home theater parlance because
the lines of a single field become twice as many. It has been very expensive to perform this trick. The
best line doublers cost $10,000 and up. However, movie films are stored on DVD in a format that
makes it simple to generate progressive scan output, typically at three times the film rate, or 72 Hz.
Currently, only expensive data-grade CRT front projectors can display the higher scan rates required.
But less expensive rear projectors that can do this have been announced. Convergence of computers
and home entertainment appears poised to explode.
This progressive scan display from DVD-Video is technically referred to as 720x480p. That is,
720 pixels of horizontal resolution and 480 vertical scan lines displayed progressively rather than
interlaced. It is very close to the entry-level format of digital TV. The image quality and resolution
exceed anything else currently available in consumer TV.
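The timings in the scan-format discussion follow directly from the numbers in the text: interlaced fields arrive 60 times per second, a complete frame takes two fields, and film at 24 frames per second shown three times per frame yields the 72 Hz progressive rate. A quick check of the arithmetic:

```python
# Timing arithmetic behind the interlaced/progressive discussion above.

FIELDS_PER_SECOND = 60            # NTSC interlaced fields (approx.)
field_time = 1 / FIELDS_PER_SECOND  # one half-image: odd or even lines
frame_time = 2 * field_time         # complete interlaced frame: 1/30 s

FILM_FPS = 24                     # film frame rate
progressive_hz = 3 * FILM_FPS     # each film frame shown 3 times

print(f"field: {field_time:.4f} s, frame: {frame_time:.4f} s")
print(f"progressive display rate: {progressive_hz} Hz")
```

Because film frames are stored whole on the disc, the player can repeat each one three times for flicker-free 72 Hz output instead of splitting it into interlaced fields.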
Convergence of DVD Players and Gaming Devices
As described in Chapter 6, DVD players will converge with gaming consoles. The significant storage
capacity, digital quality, features, and low cost of media make it far superior to game cartridges. A
game console and a DVD player actually have a number of components in common such as video
and audio processors, memory, TV-out encoders, and a host of connectors and other silicon. There is
also potential for both functions to use the same drive mechanism which is one of the most expensive
parts of the system. Today, many of the components are designed specifically for one or the other,
but with a little design effort it would be easy to integrate high-quality gaming into a DVD player at
little incremental cost.
Game consoles are already moving quickly towards higher-capacity and lower-cost media, and
DVD-based software represents the best of both worlds. Although cartridges have benefits in the area
of piracy and platform control, they are expensive and have limited the number of manufacturers.
Gaming consoles announced by both Sony and Microsoft have DVD support.
Convergence of DVD Players and Set-top Boxes
Consumers will want plenty of versatility out of their DVD systems. Instead of receiving satellite,
terrestrial, and cable signals via a set-top box or reading a disc from their DVD player, they will
expect to download video and audio from a satellite stream and record it using their DVD-RAM drives.
Because current DVD players, satellite receivers, and cable set-top boxes use similar electronic
components, a converged player may not be far away. DVD is based on the MPEG-2 standard for
video distribution as are most satellite and wireless cable systems. The audio/video decoder chip at
the heart of a DVD player is an enhanced version of the video and audio decoder chips necessary to
build a home satellite receiver or a wireless cable set-top box. With many similarities, it is likely that
most of these appliances will eventually converge into a single unit— a convergence box that blends
the features of three separate systems into one unit.
Convergence of DVD-Video Players and DVD-Audio Players
The audio market is seeing several technologies compete to become the next-generation audio
technology that proliferates into consumer homes. Some of these include CD, MP3, SACD, and
DVD-Audio. The DVD specification includes room for a separate DVD-Audio format. This allows
the high-end audio industry to take advantage of the existing DVD specification. DVD-Audio is
releasing music in a PCM (pulse code modulation) format that uses 96 kHz sampling and 24 bits as opposed
to the CD PCM format of 44.1 kHz and 16 bits.
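The jump from CD to DVD-Audio PCM is easy to quantify: data rate is simply sample rate times bit depth times channel count. For two-channel stereo:

```python
# Uncompressed stereo PCM data rates for the formats quoted above.

def pcm_mbps(sample_rate_hz, bits_per_sample, channels=2):
    """Raw PCM data rate in megabits per second."""
    return sample_rate_hz * bits_per_sample * channels / 1e6

print(pcm_mbps(44_100, 16))  # audio CD:  ~1.41 Mbit/s
print(pcm_mbps(96_000, 24))  # DVD-Audio: ~4.61 Mbit/s
```

Advanced resolution stereo therefore carries more than three times the data of CD audio, which is only practical because of DVD's much larger disc capacity.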
All current DVD-Video players can play standard audio CDs (although not required by the
specification) as well as DVDs with 48 or 96 kHz/16-, 20- or 24-bit PCM format. Most second-generation DVD-Video players will actually use 96 kHz digital-to-analog converters, whereas some
of the first generation players initially down-converted the 96 kHz bitstream to 48 kHz before
decoding. Hence, several next-generation DVD players will automatically add DVD-Audio capability. However, they may support multiple other audio formats and standards as well.
Combining DVD-A into a DVD video player will help prevent severe price deflation by maintaining the value of the DVD video player. Because incorporating DVD-A will require only a small premium,
players capable of DVD-A will quickly win market share
due to the mainstream price point of this advanced player.
Since DVD players can play audio CDs, sales of standalone CD players have shrunk. Consumer
electronics manufacturers are looking to integrate audio functionality within DVD players for added
revenues. DVD video players combined with DVD-A functionality will eventually replace DVD
video players with CD player capabilities.
The “convergence box” concept with its features and functions is not for everyone. For example,
high-quality 3D-imaging for video games may be great for some, but not for all. Likewise, reception
of 180-plus television stations via satellite is not a necessity for everyone. Though convergence
seems inevitable, convergence boxes will probably be packaged next to stand-alone products (DVD,
satellite-dish, cable system, etc.) for consumers who will not want to pay extra for features they will
not use.
The DVD format is doing for movies what the CD format did for music. Because of its unique
advantages, the DVD has become the most successful consumer technology and product.
The three application formats of DVD include DVD-Video for video entertainment, DVD-Audio
for audio entertainment, and DVD-ROM for PCs. In the future DVD will provide higher storage
capacities, and it will change the way consumers use audio, gaming, and recording capabilities.
DVD technology will see rapid market growth as it adds features and integrates components to
obtain lower prices. In addition, sales will increase as new and existing digital consumer devices and
PCs incorporate DVD technology.
Desktop and Notebook
Personal Computers (PCs)
Over the last 10 years the PC has become omnipresent in the work environment. Today it is found in
almost every home, with a sizeable percentage of homes having multiple PCs. Dataquest estimates
that PC penetration exceeds 50% of U.S. households—out of 102 million U.S. households, 52 million
own a PC. They also find that households with multiple PCs have grown from 15 million in 1998 to
over 26 million in 2003.
A plethora of new consumer devices are available today with a lot more to come. Some consumer devices such as MP3 players, set-top boxes, PDAs, digital cameras, gaming stations and
digital VCRs are gaining increased popularity. Dataquest predicts that the worldwide unit production
of digital consumer devices will explode from 1.8 million in 1999 to 391 million in 2004. The next
consumer wave is predicted to transition from a PC-centric home to a consumer device-centric home.
Despite growth in the shipments of digital consumer devices, the PC will continue to penetrate
homes for several years. While most consumer devices perform a single function very well, the PC
will remain the backbone for information and communication. In the future it will grow to become
the gateway of the home network. A large number of consumer devices such as Internet-enabled cell
phones and portable MP3 players will not compete directly with PCs. Rather, a set-top box with a
hard-drive or a game console may provide similar functionality as the PC and may appeal to a certain
market segment. However, the PC in both desktop and notebook variations will continue to be used
for a variety of tasks.
Definition of the Personal Computer
Since the PC is so prominent, defining it seems unnecessary. However, it is interesting to look at how
the PC has evolved and how its definition is changing.
The PC is a computer designed for use by one person at a time. Prior to the PC, computers were
designed for (and only affordable by) companies who attached multiple terminals to a single large
computer whose resources were shared among all users. Beginning in the late 1970s, technology
advances made it feasible to build a small computer that an individual could own and use. The term
“PC” is also commonly used to describe an IBM-compatible personal computer in contrast to an
Apple Macintosh computer. The distinction is both technical and cultural. The IBM-compatible PC
contains Intel microprocessor architecture and an operating system such as Microsoft Windows (or
DOS previously) that is written to use the Intel microprocessor. The Apple Macintosh uses Motorola
microprocessor architecture and a proprietary operating system—although market economics are
now forcing Apple Computer to use Intel microprocessors as well. The IBM-compatible PC is
associated with business and home use. The “Mac,” known for its more intuitive user interface, is
associated with graphic design and desktop publishing. However, recent progress in operating
systems from Microsoft provides similar user experiences.
For this discussion, the PC is a personal computer—both desktop and laptop - that is relatively
inexpensive and is used at home for computation, Internet access, communications, etc. By definition, the PC has the following attributes:
Digital computer
Largely automatic
Programmable by the user
Accessible as a commercially manufactured product, a commercially available kit,
or in widely published kit plans
Transportable by an average person
Affordable by the average professional
Simple enough to use that it requires no special training beyond an instruction manual
Competing to be the Head of the Household
The PC is the most important and widely used device for computing, Internet access, on-line gaming,
data storage, etc. It is also viewed as a strong potential candidate for a residential gateway. In this
capacity, it would network the home, provide broadband access, control the functionality of other
appliances, and deploy other value-added services. The rapid growth in multi-PC households is
creating the need for sharing broadband access, files and data, and peripherals (e.g., printers and
scanners) among the multiple PCs in different rooms of a house. This is leading to the networking of
PCs, PC peripherals, and other appliances.
With worldwide PC shipments totaling 134.7 million units in 2000, and predicted to exceed 200
million in 2005, the PC (desktop and notebook) is seeing healthy demand. This is due to ongoing
price per performance improvements and the position of the PC as a productivity tool. However,
there are still several weaknesses with the PC. It is often viewed as being complex, buggy, and
confusing. But with the recent introduction of operating systems that have a faster boot-up time and
are less buggy, the PC has become friendlier. Standardization of components is also addressing some
of the weaknesses.
PC market opportunities include:
Growth in multiple PC households
Evolution of new business models
Introduction of new and innovative PC designs
The growing ubiquity of the Internet
PC Market threats include:
Digital consumer devices
Web-based services and applications
Saturation in key markets
Low margins
With this in mind, an emerging category of digital consumer electronics device is coming to
market. This category includes these features:
Special-purpose access to the Internet.
Devices which pose a threat to the existence of the PC include set-top boxes, digital TV, gaming
consoles, Internet screen phones, Web pads, PDAs, mobile phones, and Web terminals. Smarter chips
and increased intelligence are being embedded into everyday consumer devices. The digitization of
data, voice, and video is enabling the convergence of consumer devices. Applications such as email,
Web shopping, remote monitoring, MP3 files, and streaming video are pushing the need for Internet
access. These devices have advanced computational capabilities that provide more value and convenience when networked.
Over the last couple of years the Internet has grown ubiquitous to a large number of consumers.
Consumers are demanding high-speed Internet access to PCs and other home appliances such as Web
pads, Web terminals, digital TV, and set-top boxes. Using a single, broadband access point to the
Internet provides cost savings, but requires the networking of PCs and digital consumer devices.
Shipments for PCs exceeded 11.6 million units and revenue exceeded $21 billion in the U.S. in
1998. Market analysts project an increase in shipments to 25.2 million units in the year 2001 for the
U.S. The growth in PC revenues has stalled over the last few years. Non-PC vendors are seeking an
opportunity to capture the $21 billion market in U.S. alone. Predictions for 2001 in the U.S. include
sales of 22 million home digital consumer devices (excluding Internet-enabled mobile phones and
telematics systems) compared with 18 million home PCs. In 2004 digital consumer appliance
revenues will rise above falling PC revenues.
Digital consumer appliances include:
Internet-connected TVs
Consumer network computers
Web tablets
E-mail-only devices
Screen phones
Internet game consoles
Handheld PCs/personal digital assistants (PDAs)
Internet-enabled mobile phones
Automotive telematics systems
Analysts forecast that total revenues from all digital consumer appliances (including Internet-enabled mobile phones and telematics systems) will reach $33.7 billion by the end of 2005. Market
numbers for digital consumer devices are largely skewed between research firms because some of the
numbers include appliances such as cell-phones and telematics systems that do not directly compete
with the PC.
This slower growth in revenues is due to the increase in the number of digital consumer devices
and falling ASPs (average selling prices) of PCs. In 2001 shipments of digital consumer devices
exceeded the unit shipments for PCs in the U.S.
Factors driving digital consumer devices are:
Aggressive vendor pursuit
Meeting consumer demands (ease-of-use, providing specific functionality, requiring Internet access)
Advancing bandwidth capacity
Lower product costs
Chapter 8
Digital consumer devices are targeting three specific areas:
Replacing PCs and providing robust Web browsing, email, and interactivity (e.g., email
terminals, Web terminals, Web pads).
Supplementing PCs and coexisting with them (e.g., PDAs, printers, scanners).
Sidestepping PCs with appliances that provide functionality quite different from the PC and
are not a significant threat to its existence (e.g., set-top boxes, cellular phones.)
Some factors that are driving the success of digital consumer devices are:
Aggressive vendor pursuit
Consumer market demands
Advancing bandwidth capacity
Lower product costs
Consumer needs
Device distribution subsidized by service contracts rather than via retail or direct sales, as
with PCs. An example would be the digital video recorder (DVR).
Because digital consumer devices are based on the idea that the PC is too complex, comparisons
between digital consumer devices and PCs have always been made. It is this constant comparison
that has cast the Web terminal segment in such a difficult light. Many in the industry had wrongly
expected that shipments of the Web terminal segment (without the help of handheld- or TV-based
devices) would single-handedly exceed and replace the PC market. In light of these misinterpretations, comparing shipments of digital consumer devices with shipments of PCs is not completely accurate.
Also, there are multiple definitions for digital consumer devices and some include devices that
don’t directly compete with PCs. The more commonly perceived definition includes only Web
terminals, Web tablets, and email terminals. These products barely even register on the radar screen
in comparison with PCs. Once the definition of digital consumer devices is expanded to include
handheld, gaming, and iTV-enabled devices, the change is obvious. A very healthy spike in digital
consumer device shipments in 2001—largely attributable to next-generation gaming consoles
becoming Internet-enabled—pushed combined shipments of digital consumer devices past those
of PCs. Even with high volumes of handhelds and TV-based devices, it is important to remember
that not one single category is expected to out-ship PCs on its own.
Note that this does not mean PCs are going away. Despite certain fluctuations, the PC market is
still expected to remain solid, even though digital consumer devices offer an alternative route to the
Internet. The two often serve entirely different functions (e.g., web surfing on a PC versus online
gaming on a console).
Additionally, many vendors approaching the post-PC market see the PC continuing as a central
point (or launching pad) for the other devices that peacefully coexist in the networked home. Hence,
many in the industry further argue that the term “post-PC era” is better replaced with the term “PC-plus era.” Finally, keep in mind that this comparison with PCs is primarily applicable to the United
States, where PCs currently see over 55% household penetration. In international markets PC
penetration is much lower, and there the more appropriate comparison might be with mobile phones.
Desktop and Notebook Personal Computers (PCs)
The PC Fights Back
The PC is not going away soon because of its:
Compatibility – Interoperability of documents across business, education, and home use.
Flexibility in the PC platform – Video editing, music authoring, Web hosting, gaming,
etc., can all be done on one PC. Comparatively, digital consumer devices are dedicated to
one function.
Momentum – PCs have a huge installed base, high revenue, and annual shipments.
Awareness – More than 50% of consumer homes have PCs.
Investment protection and reluctance to discard
Pace of improvement – Faster processors, bigger hard drives, better communication.
Being an established industry – Corporate momentum will continue due to PC vendors,
software developers, stores, and support networks.
Some of the market trends for PCs include:
Lower price points – Rapidly moving below $500, which blurs the line between PCs
and digital consumer devices.
New downstream revenue opportunities – Internet services, financing/leasing, and
e-commerce options. Higher intelligence is making the PC an ideal platform for the
residential gateway of the future. PCs already ship with analog or digital modems,
allowing dial-up or broadband Internet access. This platform is ideal for providing home networking
capabilities to multiple consumer devices.
Super-efficient distribution – File data can be distributed effectively via the Internet.
Offsetting-profit products – Products such as servers, services (Internet, high-speed
access), and workstations enable aggressive prices to consumers.
Most consumer devices focus on a single value-add to the customer and provide it very well.
Hence, some vendors highlight the PC’s particular weaknesses and argue that the PC will never
survive. For example, MP3 players emphasize portability and ease-of-use. However, even today there
is no better device than the PC for downloading and storing a large music library.
There are applications that digital consumer devices perform well. For example, a Web pad lets
you perform tasks such as scheduling, ordering groceries, and sending email—all with portability.
While very convenient within the home, it is of limited use beyond it and requires an access point and
a gateway to provide a high-speed Internet connection. The PC can provide all the above functions
and more—such as video editing, gaming, and word processing. Hence, PCs will continue to maintain a
stronghold on homes. Some digital consumer devices have unique value propositions and are making
their way into consumer lives. These include gaming consoles, Web pads, set-top boxes, and
telematics. Both PCs and consumer devices will coexist in tomorrow’s homes.
Components of a PC
Figure 8.1 shows a generic PC that includes processor, memory, hard-disk drive, operating system,
and other components. These make up a unique platform that is ideal for providing multiple functions such as Internet and e-commerce services. Future PCs will provide digital modems and home
networking chipsets. Broadband access will provide high-speed Internet access to the PC and other
digital consumer devices. Home networking will network multiple PCs, PC peripherals, and other
consumer devices.
Figure 8.1: The PC
The following sections describe some key components that make up the PC.
The processor (short for microprocessor, also called the CPU, or central processing unit) is the
central component of the PC. This vital component is responsible for almost everything the PC does.
It determines which operating systems can be used, which software packages can be run, how much
energy the PC uses, and how stable the system will be. It is also a major determinant of overall
system cost. The newer, faster, and more powerful the processor, the more expensive the PC.
Today, more than half a century after John von Neumann first suggested storing a sequence of
instructions (a program) in the same memory as the data, nearly all processors have a “von Neumann”
architecture. It consists of a central arithmetic unit, a central control unit, memory, and
input/output devices.
The underlying principles of all computer processors are the same. They take signals in the form of
0s and 1s (binary signals), manipulate them according to a set of instructions, and produce output in
the form of 0s and 1s. The voltage on the line at the time a signal is sent determines whether the
signal is a 0 or a 1. On a 2.5V system, an application of 2.5 volts means that it is a 1, while an
application of 0V means it is a 0.
Processors execute algorithms by reacting in a specific way to an input of 0s and 1s, then return
an output based on the decision. The decision itself happens in a circuit called a logic gate. Each gate
requires at least one transistor, with the inputs and outputs arranged differently by different operations. The fact that today’s processors contain millions of transistors offers a clue as to how complex
the logic system is.
The processor’s logic gates work together to make decisions using Boolean logic. The main
Boolean operators are AND, OR, NOT, NAND, NOR, with many combinations of these as well. In
addition, the processor uses gate combinations to perform arithmetic functions and trigger storage of
data in memory. Modern day microprocessors contain tens of millions of microscopic transistors.
Used in combination with resistors, capacitors, and diodes, these make up logic gates. And logic
gates make up integrated circuits which make up electronic systems.
Intel’s first claim to fame was the Intel 4004, released in late 1971: the first integration of all
of a processor’s logic gates into a single chip. The 4004 was a 4-bit microprocessor intended for use in a calculator. It processed data in 4 bits, but its instructions were 8 bits long.
Program and data memory were separate: 4 KB of program memory and 640 bytes of data memory. There were also sixteen 4-bit
(or eight 8-bit) general-purpose registers. The 4004 had 46 instructions and used only 2,300 transistors.
It ran at a clock rate of 740 kHz, taking eight clock cycles per instruction cycle of 10.8 microseconds.
Two families of microprocessor have dominated the PC industry for some years—Intel’s Pentium
and Motorola’s PowerPC. These CPUs are also prime examples of the two competing CPU architectures of the last two decades—the former being a CISC chip and the latter a RISC chip.
CISC (complex instruction set computer) is a traditional computer architecture where the CPU
uses microcode to execute very comprehensive instruction sets. These may be variable in length and
use all addressing modes, requiring complex circuitry to decode them. For many years the tendency
among computer manufacturers was to build increasingly complex CPUs that had ever-larger sets of
instructions. In 1974 IBM decided to try an approach that dramatically reduced the number of
instructions a chip performed. By the mid-1980s this led to a trend reversal where computer manufacturers were building CPUs capable of executing only a very limited set of instructions.
RISC (reduced instruction set computer) CPUs:
Keep instruction size constant.
Ban the indirect addressing mode.
Retain only those instructions that can be overlapped and made to execute in one machine
cycle or less.
One advantage of RISC CPUs is that they execute instructions very fast because the instructions
are so simple. Another advantage is that RISC chips require fewer transistors, which makes them
cheaper to design and produce.
There is still considerable controversy among experts about the ultimate value of RISC architectures. Their proponents argue that RISC machines are both cheaper and faster, making them the
machines of the future. Conversely, skeptics note that by making the hardware simpler, RISC
architectures put a greater burden on the software. For example, RISC compilers have to generate
software routines to perform the complex instructions that are performed in hardware by CISC
computers. They argue that this is not worth the trouble because conventional microprocessors are
becoming increasingly fast and cheap anyway. To some extent the argument is becoming moot
because CISC and RISC implementations are becoming more and more alike. Even the CISC
champion, Intel, used RISC techniques in its 486 chip and has done so increasingly in its Pentium
processor family.
Basic Structure
A processor’s major functional components are:
Core – The heart of a modern processor is the execution unit. The Pentium has two parallel
integer pipelines enabling it to read, interpret, execute, and dispatch two instructions
simultaneously.
Branch Predictor – The branch prediction unit tries to guess which sequence will be
executed each time the program contains a conditional jump. In this way, the pre-fetch and
decode unit can prepare the instructions in advance.
Floating Point Unit – This is the third execution unit in a processor where non-integer
calculations are performed.
Primary Cache – The Pentium has two on-chip caches of 8 KB each, one for code and one
for data. Primary cache is far quicker than the larger external secondary cache.
Bus Interface – This brings a mixture of code and data into the CPU, separates them for
use, then recombines them and sends them back out.
All the elements of the processor are synchronized by a “clock” which dictates how fast the
processor operates. The very first microprocessor had a 740-kHz clock. The Pentium Pro used a 200-MHz clock, which is to say it “ticks” 200 million times per second. Programs are executed as the
clock ticks. The Program Counter is an internal memory location that contains the address of the
next instruction to be executed. When the time comes for it to be executed, the Control Unit transfers
the instruction from memory into its Instruction Register (IR).
At the same time, the Program Counter increments to point to the next instruction in sequence.
The processor then executes the instruction in the IR. The Control Unit itself handles some instructions. For example, if the instruction says “jump to location 213,” the value of 213 is written to the Program
Counter so that the processor executes that instruction next.
Many instructions involve the arithmetic and logic unit (ALU). The ALU works in conjunction
with the General Purpose Registers (GPRs), which provide temporary storage areas that can be loaded
from memory or written to memory. A typical ALU instruction might be to add the contents of a
memory location to a GPR. As each instruction is executed, the ALU also alters the bits in the Status
Register (SR) which holds information on the result of the previous instruction. Typically, SR bits
indicate a zero result, an overflow, a carry, and so forth. The control unit uses the information in the
SR to execute conditional instructions such as “jump to address 740 if the previous instruction
produced a zero result.”
Architectural Advances
According to Moore’s Law, the number of transistors on a chip (and with it CPU capacity and
capability) doubles every 18-24 months. In recent years Intel has managed to doggedly follow this
law and stay ahead of the competition by releasing more powerful PC chips than any other company.
In 1978 the 8086 ran at 5 MHz with roughly 29,000 transistors. By the end of 1995 the Pentium Pro
had a staggering 21 million on-chip transistors and ran at 200 MHz. Today’s processors run in
excess of 3 GHz.
The laws of physics limit designers from increasing the clock speed indefinitely. Although clock
rates go up every year, this alone wouldn’t give the performance gains we’re used to. This is the
reason why engineers are constantly looking for ways to get the processor to undertake more work in
each tick of the clock. One approach is to widen the data bus and registers. Even a 4-bit processor
can add two 32-bit numbers, but this takes lots of instructions. A 32-bit processor could do the task in
a single instruction. Most of today’s processors have a 32-bit architecture, and 64-bit variants are on
the way.
In the early days processors could only deal with integers (whole numbers). It was possible to
write a program using simple instructions to deal with fractional numbers, but it was slow. Virtually
all processors today have instructions to handle floating-point numbers directly. To say that “things
happen with each tick of the clock” underestimates how long it actually takes to execute an instruction. Traditionally, it took five ticks—one to load the instruction, one to decode it, one to get the
data, one to execute it, and one to write the result. In this case a 100-MHz processor could only
execute 20 million instructions per second.
Most processors now employ pipelining, a process resembling a factory production line. One
stage in the pipeline is dedicated to each of the stages needed to execute an instruction. Each stage
passes the instruction on to the next when it is finished. Hence, at any one time one instruction is
being loaded, another is being decoded, data is being fetched for a third, a fourth is being executed,
and the result is being written for a fifth. One instruction per clock cycle can be achieved with
current technology.
Furthermore, many processors now have a superscalar architecture. Here, the circuitry for each
stage of the pipeline is duplicated so that multiple instructions can pass through in parallel. For
example, the Pentium Pro can execute up to five instructions per clock cycle.
The 4004 used a 10-micron process—the smallest feature was 10 millionths of a meter across.
This is huge by today’s standards. Under these constraints a Pentium Pro would measure about 5.5
inches × 7.5 inches and would be very slow. (To be fast, a transistor must be small.) By 2003 most
processors used a 0.13-micron process, with 90 nanometers the goal for late 2003.
Software Compatibility
In the early days of computing many people wrote their own software so the exact set of instructions
a processor could execute was of little importance. However, today people want to use off-the-shelf
software so the instruction set is paramount. Although there’s nothing magical about the Intel 80x86
architecture from a technical standpoint, it has long since become the industry standard.
If a third party makes a processor which has different instructions, it will not run industry
standard software. So in the days of 386s and 486s, companies like AMD cloned Intel processors,
which meant they were always about a generation behind. The Cyrix 6x86 and the AMD K5 were
competitors to Intel’s Pentium, but they weren’t carbon copies. The K5 had its own instruction set
and translated 80x86 instructions into its native instructions as they were loaded. In this way, AMD
didn’t have to wait for the Pentium before designing the K5. Much of it was actually designed in
parallel—only the translation circuitry was held back. When the K5 eventually appeared, it leapfrogged the Pentium in performance (if clock speeds were equal).
The use of standard busses is another way processors with different architectures are given some
uniformity to the outside world. The PCI bus has been one of the most important standards in this
respect since its introduction in 1994. PCI defines signals that enable the processor to communicate
with other parts of a PC. It includes address and data buses as well as a number of control signals.
Processors have their own proprietary buses so a chipset is used to convert from a “private” bus to the
“public” PCI bus.
Historical Perspective
The 4004 was the first Intel microprocessor. To date, all PC processors have been based on the original
Intel designs. The first chip used in an IBM PC was Intel’s 8088. At the time it was chosen, it was
not the best CPU available. In fact, Intel’s own 8086 was more powerful and had been released
earlier. The 8088 was chosen for economic reasons—its 8-bit data bus required a less costly
motherboard than the 16-bit 8086. Also, most of the interface chips available were intended for use
in 8-bit designs. These early processors would have nowhere near the performance to run today’s
software. Today the microprocessor market for PCs primarily belongs to two companies—Intel and
AMD. Over the last 15-20 years, PC processor architectures and families progressed through the Intel
80286, 80386, and 80486, followed by the Pentium family series.
Third generation chips based on Intel’s 80386 processors were the first 32-bit processors to
appear in PCs. Fourth generation processors (the 80486 family) were also 32-bit and offered a
number of enhancements which made them more than twice as fast. They all had 8 KB of cache
memory on the chip itself, right beside the processor logic. This cache held data transferred
from main memory. Hence, on average the processor needed to wait for data from the
motherboard only 4% of the time, because it could usually get the required information from the
cache. The 486 model brought the mathematics co-processor on board as well. This was a separate
processor designed to take over floating-point calculations. Though it had little impact on everyday
applications, it transformed the performance of spreadsheet, statistical analysis, and CAD packages.
Clock doubling was another important innovation introduced on the 486. With clock doubling,
the circuits inside the chip ran at twice the speed of the external electronics. Data was transferred
between the processor, the internal cache, and the math co-processor at twice the speed, considerably
enhancing performance. Versions following the base 486 took these techniques even further, tripling
the clock speed to run internally at 75 or 100 MHz and doubling the primary cache to 16 KB.
The Intel Pentium is the defining processor of the fifth generation. It provides greatly increased
performance over a 486 due to architectural changes that include a doubling of the data bus width to
64 bits. Also, the processor doubled on-board primary cache to 16 KB and increased the instruction
set to optimize multimedia function handling.
The word Pentium™ does not mean anything. It contains the syllable pent—the Greek root for
five—symbolizing the 5th generation of processors. Originally, Intel was going to call the Pentium
the 80586 in keeping with the chip’s 80x86 predecessors. But the company did not like the idea that
AMD, Cyrix, and other clone makers could use the name 80x86 as well. So they decided to use
‘Pentium’—a trademarkable name. The introduction of the Pentium in 1993 revolutionized the PC
market by putting more power into the average PC than NASA had in its air-conditioned computer
rooms of the early 1960s. The Pentium’s CISC-based architecture represented a leap forward from
the 486. The 120-MHz and above versions had over 3.3 million transistors, fabricated on a 0.35-micron process. Internally, the processor uses a 32-bit bus, but externally the data bus is 64 bits wide.
The external bus required a different motherboard. To support this, Intel also released a special
chipset for linking the Pentium to a 64-bit external cache and to the PCI bus.
The Pentium has a dual-pipelined superscalar design which allows it to execute more instructions
per clock cycle. Like a 486, there are still five stages of integer instruction execution—pre-fetch,
instruction decode, address generate, execute and write back. But the Pentium has two parallel
integer pipelines that enable it to read, interpret, execute, and dispatch two operations simultaneously. These handle only integer calculations; a separate Floating-Point Unit handles real numbers.
The Pentium also uses two 8-KB two-way set-associative caches known as primary, or Level 1,
cache. One is used for instructions, the other for data. This is twice the amount of its predecessor, the
486. These caches increase performance by acting as temporary storage for data and instructions obtained
from the slower main memory. A Branch Target Buffer (BTB) provides dynamic branch prediction. It
enhances instruction execution by remembering which way an instruction branched, then applying
the same branch the next time the instruction is used. Performance is improved when the BTB makes
a correct prediction. An 80-bit Floating-Point Unit provides the arithmetic engine to handle real
numbers. A System Management Mode (SMM) for controlling the power use of the processor and
peripherals rounds out the design.
The Pentium Pro was introduced in 1995 as the successor to the Pentium. It was the first of the
sixth generation of processors and introduced several unique architectural features that had never
been seen in a PC processor before. The Pentium Pro was the first mainstream CPU to radically
change how it executes instructions. It translates them into RISC-like microinstructions and executes
them on a highly advanced internal core. It also featured a dramatically higher-performance secondary cache compared to all earlier processors. Instead of using motherboard-based cache running at
the speed of the memory bus, it used an integrated Level 2 cache with its own bus running at full
processor speed, typically three times the speed of a Pentium cache.
Intel’s Pentium Pro had a CPU core consisting of 5.5 million transistors, with 15.5 million
transistors in the Level 2 cache. It was initially aimed at the server and high-end workstation markets. It is a superscalar processor incorporating high-order processor features and is optimized for
32-bit operation. The Pentium Pro differs from the Pentium in that it has an on-chip Level 2 cache of
between 256 KB and 1 MB operating at the internal clock speed. The positioning of the secondary
cache on the chip, rather than on the motherboard, enables signals to pass between the two on a 64-bit
data path rather than on the 32-bit Pentium system bus path. Their physical proximity also adds to
the performance gain. The combination is so powerful that Intel claims the 256 KB of cache on the
chip is equivalent to over 2 MB of motherboard cache.
An even bigger factor in the Pentium Pro’s performance improvement is the combination of
technologies known as “dynamic execution.” This includes branch prediction, data flow analysis, and
speculative execution. These combine to allow the processor to use otherwise-wasted clock cycles.
Dynamic execution makes predictions about the program flow to execute instructions in advance.
The Pentium Pro was also the first processor in the x86 family to employ super-pipelining, its
pipeline comprising 14 stages, divided into three sections. The in-order, front-end section, which
handles the decoding and issuing of instructions, consists of eight stages. An out-of-order core which
executes the instructions has three stages, and the in-order retirement consists of a final three stages.
The other, more critical distinction of the Pentium Pro is its handling of instructions. It takes the
CISC x86 instructions and converts them into internal RISC micro-ops. The conversion helps avoid
some of the limitations inherent in the x86 instruction set such as irregular instruction encoding and
register-to-memory arithmetic operations. The micro-ops are then passed into an out-of-order
execution engine that determines whether instructions are ready for execution. If not, they are
shuffled around to prevent pipeline stalls.
There are drawbacks to using the RISC approach. The first is that converting instructions takes
time, even if calculated in nano or microseconds. As a result the Pentium Pro inevitably takes a
performance hit when processing instructions. A second drawback is that the out-of-order design can
be particularly affected by 16-bit code, resulting in stalls. These tend to be caused by partial register
updates that occur before full register reads, imposing severe performance penalties of up to seven
clock cycles.
The Pentium II proved to be an evolutionary step from the Pentium Pro. Architecturally, the
Pentium II is not very different from the Pentium Pro, with a similar x86 emulation core and most of
the same features. The Pentium II improved on the Pentium Pro by doubling the size of the Level 1
cache to 32 KB. Intel used special caches to improve the efficiency of 16-bit code processing. (The
Pentium Pro was optimized for 32-bit processing and did not deal quite as well with 16-bit code.)
Intel also increased the size of the write buffers. However, its packaging was the most talked-about
aspect of the new Pentium II. The integrated Pentium Pro secondary cache running at full processor
speed was replaced with a special circuit board containing the processor and 512 KB of secondary
cache running at half the processor’s speed.
The Pentium III, successor to the Pentium II, came to market in the spring of 1999. The Single
Instruction Multiple Data (SIMD) process came about with the introduction of MMX—multimedia
extensions. SIMD enables one instruction to perform the same function on several pieces of data
simultaneously. This improves the speed at which sets of data requiring the same operations can be
processed. The new processor introduces 70 new Streaming SIMD Extensions. Fifty of the new
instructions improve floating-point performance to assist data manipulation. There are 12 new
multimedia instructions to complement the existing 57 integer MMX instructions, providing further
support for multimedia data processing. The final eight, referred to by Intel as cache-ability
instructions, improve the efficiency of the CPU’s Level 1 cache and let sophisticated software
developers boost the performance of their applications or games. Other than this, the Pentium III
makes no architecture improvements. It still fits into Slot 1 motherboards and still has 32 KB of
Level 1 cache.
Close on the heels of the Pentium III came the Pentium Xeon with the new Streaming SIMD
Extensions (SSE) instruction set. Targeted at the server and workstation markets, the Pentium Xeon
shipped as a 700-MHz processor with 512 KB, 1 MB, or 2 MB of Level 2 cache. Since then Intel has
announced processors with 0.18-micron process technology that provides smaller die sizes and lower
operating voltages. This facilitates more compact and power-efficient system designs, and makes
possible clock speeds of 1 GHz and beyond.
Intel and Hewlett-Packard announced their joint research and development project aimed at
providing advanced technologies for workstation, server, and enterprise-computing products. The
product, called Itanium, uses a 64-bit computer architecture that is capable of addressing an enormous 16 exabytes of memory. That is 4 billion times more than 32-bit platforms can handle. In huge
databases, 64-bit platforms reduce the time it takes to access storage devices and load data into
virtual memory, and this has a significant impact on performance.
The Instruction Set Architecture (ISA) uses the Explicitly Parallel Instruction Computing (EPIC)
technology which represents Itanium’s biggest technological advance. EPIC—incorporating an
innovative combination of speculation, predication, and explicit parallelism—advances the state of
the art in processor technologies. Specifically, it addresses performance limitations found in RISC and CISC
technologies. Both architectures already use various internal techniques to process more than one
instruction at once where possible. But the degree of parallelism in the code is only determined at
run-time by parts of the processor that attempt to analyze and re-order instructions on the fly. This
approach takes time and wastes die space that could be devoted to executing instructions rather than
organizing them. EPIC breaks through the sequential nature of conventional processor architectures
by allowing software to communicate explicitly to the processor when operations can be performed
in parallel.
The result is that the processor can grab as large a chunk of instructions as possible and execute
them simultaneously with minimal pre-processing. Increased performance is realized by reducing the
number of branches and branch mis-predicts, and reducing the effects of memory-to-processor
latency. The IA-64 Instruction Set Architecture applies EPIC technology to deliver massive resources
with inherent scalability not possible with previous processor architectures. For example, systems
can be designed to slot in new execution units whenever an upgrade is required, similar to plugging
in more memory modules on existing systems. According to Intel, the IA-64 ISA represents the most
significant advancement in microprocessor architecture since the introduction of its 80386 chip in 1985.
IA-64 processors will have massive computing resources including 128 integer registers, 128
floating-point registers, 64 predicate registers, and a number of special-purpose registers. Instructions
will be bundled in groups for parallel execution by the various functional units. The instruction set
has been optimized to address the needs of cryptography, video encoding, and other functions
required by the next generation of servers and workstations. Support for Intel’s MMX technology and
Internet Streaming SIMD Extensions is maintained and extended in IA-64 processors.
The Pentium 4 is Intel’s first new IA-32 core aimed at the desktop market since the introduction
of the Pentium Pro in 1995. It represents the biggest change to Intel’s 32-bit architecture since the
Pentium Pro. Its increased performance is largely due to architectural changes that allow the device
to operate at higher clock speeds. And it incorporates logic changes that allow more instructions to
be processed per clock cycle.
Foremost among these innovations is the processor’s internal pipeline—referred to as the Hyper
Pipeline—which comprises 20 pipeline stages versus the ten of the P6 micro-architecture. A
typical pipeline has a fixed amount of work that is required to decode and execute an instruction.
This work is performed by individual logical operations called “gates.” Each logic gate consists of
multiple transistors. By increasing the stages in a pipeline, fewer gates are required per stage.
Because each gate requires some amount of time (delay) to provide a result, decreasing the number
of gates in each stage allows the clock rate to be increased. It allows more instructions to be “in
flight” (be at various stages of decode and execution) in the pipeline. Although these benefits are
offset somewhat by the overhead of additional gates required to manage the added stages, the overall
effect of increasing the number of pipeline stages is a reduction in the number of gates per stage.
And this allows a higher core frequency and enhances scalability.
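The gates-per-stage reasoning above can be put into a toy calculation. The gate counts and the 25-ps gate delay below are illustrative assumptions, not actual Pentium 4 figures:

```python
# Toy model of the pipeline-depth trade-off described above (illustrative
# numbers only, not real Pentium 4 gate counts).

def max_clock_ghz(total_gate_delays, stages, overhead_per_stage, gate_delay_ps=25.0):
    """Clock is limited by the slowest stage: the fixed decode/execute work
    split across stages, plus the latch/control overhead each stage adds."""
    delays_per_stage = total_gate_delays / stages + overhead_per_stage
    stage_time_ps = delays_per_stage * gate_delay_ps
    return 1000.0 / stage_time_ps  # 1000 ps per ns -> GHz

# Splitting the same work over 20 stages instead of 10 roughly halves the
# per-stage delay, so the achievable clock rate almost doubles; the
# overhead gates are why it is "almost" rather than exactly double.
f10 = max_clock_ghz(total_gate_delays=200, stages=10, overhead_per_stage=2)
f20 = max_clock_ghz(total_gate_delays=200, stages=20, overhead_per_stage=2)
print(round(f20 / f10, 2))  # speedup from deepening the pipeline
```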
Other new features introduced by the Pentium 4’s micro-architecture—dubbed NetBurst—include:
An innovative Level 1 cache implementation comprising an 8-KB data cache and an
Execution Trace Cache that stores up to 12K decoded x86 instructions (micro-ops). This
arrangement removes the latency associated with the instruction decoder from the main
execution loops.
Chapter 8
A Rapid Execution Engine that pushes the processor’s ALUs to twice the core frequency.
This results in higher execution throughput and reduced latency of execution. The chip
actually uses three separate clocks: the core frequency, the ALU frequency and the bus
frequency. The Advanced Dynamic Execution engine—a very deep, out-of-order speculative
execution engine—avoids stalls that can occur while instructions are waiting for dependencies
to resolve by providing a large window of instructions from which the execution units can choose.
A 256-KB Level 2 Advanced Transfer Cache that provides a 256-bit (32-byte) interface to
transfer data on each core clock.
A much higher data throughput channel – 44.8 GB/s for a 1.4-GHz Pentium 4 processor.
SIMD Extensions 2 (SSE2) – The latest iteration of Intel’s Single Instruction Multiple Data
technology. It integrates 76 new SIMD instructions and improves 68 integer instructions.
These changes allow the chip to handle 128 bits at a time in both floating-point and integer
form, accelerating CPU-intensive encoding and decoding operations such as streaming video,
speech, 3D rendering, and other multimedia procedures.
The industry’s first 400-MHz system bus, providing a three-fold increase in throughput
compared with Intel’s current 133-MHz bus.
The first Pentium 4 units with speeds of 1.4 GHz and 1.5 GHz showed performance improvements on 3D applications (games) and on graphics intensive applications (video encoding). The
performance gain appeared much less pronounced on everyday office applications such as word
processing, spreadsheets, Web browsing, and e-mail. More recent Pentium 4 shipments show speeds
in excess of 3 GHz.
Intel’s roadmap shows future processor developments that shrink the process geometry (using
copper interconnects), increase the number of gates, boost performance, and include very large on-chip caches.
Intel has enjoyed its position as the premier PC processor manufacturer in recent years. Dating
from the 486 line of processors in 1989, Cyrix and long-time Intel cloner Advanced Micro Devices
(AMD) have posed the most serious threat to Intel’s dominance. While advancements in microprocessors have come primarily from Intel, the last few years have seen a turn of events with Intel facing
some embarrassments. It had to recall all of its shipped 1.13-GHz processors after it was discovered
that the chip caused systems to hang when running certain applications. Many linked the problem to
increasing competition from rival chipmaker AMD who succeeded in beating Intel to the 1-GHz
barrier a few weeks earlier. Competitive pressure may have forced Intel into introducing faster chips
earlier than it had originally planned.
AMD’s involvement in personal computing spans the entire history of the industry. The company
supplied every generation of PC processor from the 8088 used in the first IBM PCs, to the new,
seventh-generation AMD Athlon processor. Some believe that the Athlon represents the first time in
the history of x86 CPU architecture that Intel surrendered the technological lead to a rival chip
manufacturer. However, this is not strictly true. A decade earlier AMD’s 386 CPU bettered Intel’s
486SX chip in speed, performance, and cost.
In the early 1990s, both AMD and Cyrix made their own versions of the 486. But their products
became better known as 486 clones—one matching the 486DX2-66 (introduced by Intel in
1992), the other raising the internal clock speed to 80 MHz.
Although Intel stopped improving the 486 with the DX4-100, AMD and Cyrix kept going. In
1995 AMD offered the clock-quadrupled 5x86, a 33-MHz 486DX that ran internally at 133 MHz.
Desktop and Notebook Personal Computers (PCs)
AMD marketed the chip as comparable in performance to Intel’s new Pentium/75. But it was a
486DX in all respects, including the addition of the 16-K Level 1 cache (built into the processor) that
Intel introduced with the DX4. Cyrix followed suit with its own 5x86 called the M1sc, but this chip
was much different from AMD’s. In fact, the M1sc offered Pentium-like features even though it was
designed for use on 486 motherboards. Running at 100 MHz and 120 MHz, the chip included a 64-bit internal bus, a six-stage pipeline (as opposed to the DX4’s five-stage pipeline), and
branch-prediction technology to improve the speed of instruction execution. However, it is important
to remember that the Cyrix 5x86 appeared after Intel had introduced the Pentium, so these features
were more useful in upgrading 486s than in pioneering new systems.
In the post-Pentium era, designs from both manufacturers have met with reasonable levels of
market acceptance, especially in the low-cost, basic PC market segment. With Intel now concentrating on its Slot 1 and Slot 2 designs, its competitors want to match the performance of Intel’s new
designs as they emerge, without having to adopt the new processor interface technologies.
However, Cyrix finally bowed out of the PC desktop business when National Semiconductor sold
the rights to its x86 CPUs to Taiwan-based chipset manufacturer VIA Technologies. The highly
integrated MediaGX product range remained with National Semiconductor. It became part of their
new Geode family of system-on-a-chip solutions being developed for the client devices market. VIA
Technologies has also purchased IDT’s Centaur Technology subsidiary which was responsible for the
design and production of its WinChip x86 range of processors. It is unclear if these moves signal
VIA’s intention to become a serious competitor in the CPU market, or whether its ultimate goal is to
compete with National Semiconductor in the system-on-a-chip market. Traditionally, the chipset
makers have lacked the x86 design technology needed to take the trend for low-cost chipsets incorporating increasing levels of functionality on a single chip to its logical conclusion. The other
significant development was AMD seizing the technological lead from Intel with the launch of its
new Athlon processor.
Components and Interfaces
The PC’s adaptability—its ability to evolve many different interfaces allowing the connection of
many different classes of add-on component and peripheral device—has been one of the key reasons
for its success. In essence, today’s PC system is little different than IBM’s original design—a
collection of internal and external components, interconnected by a series of electrical data highways
over which data travels as it completes the processing cycle. These “buses,” as they are called,
connect all the PC’s internal components and external devices and peripherals to its CPU and main
memory (RAM).
The fastest PC bus, located within the CPU chip, connects the processor to its primary cache. On
the next level down, the system bus links the processor with memory. This memory consists of the
small amount of static RAM (SRAM) secondary cache and the far larger main banks of dynamic
RAM (DRAM). The system bus is 64 bits wide and 66 MHz—raised to 100 MHz. The CPU does not
communicate directly with the memory. Rather, it communicates through the system controller chip
which acts as an intermediary, managing the host bus and bridging it to the PCI bus (in modern PCs).
Bus Terminology
A modern PC system includes two classes of bus—a system bus that connects the CPU to main
memory and Level 2 cache, and a number of I/O buses that connect
various peripheral devices to the CPU. These I/O buses connect to the system bus via a “bridge,”
implemented in the processor’s chipset.
In a Dual Independent Bus (DIB) architecture, a front-side bus replaces the single system bus. It
shuttles data between the CPU and main memory, and between the CPU and peripheral buses. DIB
architecture also includes a backside bus for accessing level 2 cache. The use of dual independent
buses boosts performance, enabling the CPU to access data from either of its buses simultaneously
and in parallel.
The evolution of PC bus systems has generated a profusion of terminology, much of it confusing,
redundant, or obsolete. The system bus is often referred to as the “main bus,” “processor bus” or
“local bus.” Alternative generic terminology for an I/O bus includes “expansion bus,” “external bus,”
“host bus,” as well as “local bus.”
A given system can use a number of different I/O bus systems. A typical arrangement is for the
following to be implemented concurrently:
ISA Bus – The oldest, slowest, and soon-to-be-obsolete I/O bus system.
PCI Bus – Present on Pentium-class systems since the mid-1990s.
USB Bus – The replacement for the PC’s serial port. Allows up to 127 devices to be
connected by using a hub device or by implementing a daisy chain.
IEEE 1394 Bus – A high-speed serial bus, described later in this chapter.
ISA (Industry Standard Architecture) Bus
When it first appeared on the PC, the 8-bit ISA bus ran at a modest 4.77 MHz—the same speed as the
processor. It was improved over the years, eventually becoming the 16-bit ISA bus in 1984 (with the advent
of the IBM PC/AT using the Intel 80286 processor and a 16-bit data bus). At this stage it kept up with
the speed of the system bus, first at 6 MHz and later at 8 MHz.
The ISA bus specifies a 16-bit connection driven by an 8-MHz clock which seems primitive
compared to the speed of today’s processors. It has a theoretical data transfer rate of up to 16 MB/s.
Functionally, this rate is halved to 8 MB/s, since one bus cycle is required for addressing and another for the 16 data bits. In the real world it is capable of more like
5 MB/s—still sufficient for many peripherals. A huge number of ISA expansion cards ensured its
continued presence into the late 1990s.
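The ISA throughput arithmetic in the paragraph above can be checked directly:

```python
# ISA bus throughput arithmetic from the text (16-bit bus, 8-MHz clock,
# treating MB as 10^6 bytes).

BUS_WIDTH_BYTES = 2      # 16-bit data path
CLOCK_HZ = 8_000_000     # 8 MHz

theoretical = BUS_WIDTH_BYTES * CLOCK_HZ   # one transfer per bus cycle
effective = theoretical // 2               # address cycle + data cycle per transfer

print(theoretical // 1_000_000, "MB/s theoretical")  # 16 MB/s
print(effective // 1_000_000, "MB/s effective")      # 8 MB/s
```

Real-world overheads (wait states, arbitration) reduce this further, to the roughly 5 MB/s the text mentions.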
As processors became faster and gained wider data paths, the basic ISA design was not able to
change to keep pace. As recently as the late 1990s most ISA cards remained 8-bit technology. The
few types with 16-bit data paths (hard disk controllers, graphics adapters, and some network adapters) are constrained by the low throughput levels of the ISA bus. But expansion cards in faster bus
slots can handle these processes better.
However, there are areas where a higher transfer rate is essential. High-resolution graphic
displays need massive amounts of data, particularly to display animation or full-motion video.
Modern hard disks and network interfaces are certainly capable of higher rates. The first attempt to
establish a new standard was the Micro Channel Architecture (MCA) introduced by IBM. This was
closely followed by Extended ISA (EISA), developed by a consortium of IBM’s major competitors.
Although both these systems operate at clock rates of 10 MHz and 8 MHz respectively, they are 32-bit
and capable of transfer rates well over 20 MB/s. As its name suggests, an EISA slot can also take a
conventional ISA card. However, MCA is not compatible with ISA. Neither system flourished
because they were too expensive to merit support on all but the most powerful file servers.
Local Bus
Intel 80286 motherboards could run expansion slots and the processor at different speeds over the
same bus. However, dating from the introduction of the 386 chip in 1987, motherboards provided
two bus systems. In addition to the “official” bus—whether ISA, EISA, or MCA—there was also a
32-bit “system bus” connecting the processor to the main memory. The rise in popularity of the
Graphical User Interface (GUI)—such as Microsoft Windows—and the consequent need for faster
graphics drove the concept of local bus peripherals. The bus connecting them was commonly referred
to as the “local bus.” It functions only over short distances due to its high speed and to the delicate
nature of the processor.
Initial efforts to boost speed were proprietary. Manufacturers integrated the graphics and hard
disk controller into the system bus. This achieved significant performance improvements but limited
the upgrade potential of the system. As a result, a group of graphics chipset and adapter manufacturers—the Video Electronics Standards Association (VESA)—established a non-proprietary, high-performance bus standard in the early 1990s. Essentially, this extended the electronics of the 486
system bus to include two or three expansion slots—the VESA Local Bus (VL-Bus). The VL-Bus
worked well and many cards became available, predominately graphics and IDE controllers.
The main problem with VL-Bus was its close coupling with the main processor. Connecting too
many devices risked interfering with the processor itself, particularly if the signals went through a
slot. VESA recommended that only two slots be used at clock frequencies up to 33 MHz, or three if
they are electrically buffered from the bus. At higher frequencies, no more than two devices should
be connected. And at 50 MHz or above they should both be built into the motherboard.
The fact that the VL-Bus ran at the same clock frequency as the host CPU became a problem as
processor speeds increased. The faster the peripherals are required to run, the more expensive they
are due to the difficulties associated with manufacturing high-speed components. Consequently, the
difficulties in implementing the VL-Bus on newer chips (such as the 40-MHz and 50-MHz 486s and
the new 60/66-MHz Pentium) created the perfect conditions for Intel’s PCI (Peripheral Component
Interconnect) bus.
PCI (Peripheral Component Interconnect) Bus
Intel’s original work on the PCI standard was published as revision 1.0 and handed over to the PCI
SIG (Special Interest Group). The SIG produced the PCI Local Bus Revision 2.0 specification in
May 1993. It took in engineering requests from members and gave a complete component and
expansion connector definition which could be used to produce production-ready systems based on
5 volt technology.
Beyond the need for performance, PCI sought to make expansion easier to implement by offering
plug and play (PnP) hardware. This type of system enables the PC to adjust automatically to new
cards as they are plugged in, obviating the need to check jumper settings and interrupt levels.
Windows 95 provided operating system software support for plug and play, and all current
motherboards incorporate BIOS that is designed to work with the PnP capabilities it provides.
By 1994 PCI was established as the dominant Local Bus standard. The VL-Bus was essentially
an extension of the bus the CPU uses to access main memory. PCI is a separate bus isolated from the
CPU, but having access to main memory. As such, PCI is more robust and higher performance than
VL-Bus. Unlike the latter which was designed to run at system bus speeds, the PCI bus links to the
system bus through special bridge circuitry and runs at a fixed speed, regardless of the processor
clock. PCI is limited to five connectors, although each can be replaced by two devices built into
the motherboard. It is also possible for a processor to support more than one bridge chip. PCI is more tightly
specified than VL-Bus and offers a number of additional features. In particular, it can support cards
running from both 5 volt and 3.3 volt supplies using different “key slots” to prevent the wrong card
being plugged into the wrong slot.
In its original implementation PCI ran at 33 MHz. This was raised to 66 MHz by the later PCI
2.1 specification. This effectively doubled the theoretical throughput to 266 MB/s which is 33 times
faster than the ISA bus. It can be configured as both a 32-bit and a 64-bit bus. Both 32-bit and 64-bit
cards can be used in either. 64-bit implementations running at 66 MHz—still rare by mid-1999—
increase bandwidth to a theoretical 524 MB/s. PCI is also much smarter than its ISA predecessor,
allowing interrupt requests (IRQs) to be shared. This is useful because well-featured, high-end
systems can quickly run out of IRQs. Also, PCI bus mastering reduces latency, resulting in improved
system speeds.
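The PCI figures above all follow from the same peak-bandwidth formula: bus width in bytes times clock rate. The sketch below uses whole-megahertz clocks and decimal megabytes; the text's 266 MB/s figure comes from the exact 66.66-MHz clock, which is why 264 appears here instead:

```python
# Peak bus bandwidth = (width in bytes) x (clock rate in MHz), giving MB/s
# with MB read as 10^6 bytes. Whole-MHz clocks are used; the exact PCI
# clocks are 33.33/66.66 MHz, which yields the text's 266 MB/s figure.

def peak_mb_per_s(width_bits, clock_mhz):
    return width_bits // 8 * clock_mhz

assert peak_mb_per_s(32, 33) == 132    # original PCI, the 132 MB/s quoted later
assert peak_mb_per_s(32, 66) == 264    # PCI 2.1 (quoted as ~266 MB/s)
assert peak_mb_per_s(64, 66) == 528    # 64-bit/66-MHz PCI (~524-533 MB/s quoted)
assert peak_mb_per_s(64, 133) == 1064  # PCI-X: roughly 1 GB/s
print("all bandwidth figures check out")
```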
Since mid-1995, the main performance-critical components of the PC have communicated with
each other across the PCI bus. The most common PCI devices are the disk and graphics controllers
which are mounted directly onto the motherboard or onto expansion cards in PCI slots.
PCI-X
PCI-X v1.0 is a high-performance addendum to the PCI Local Bus specification. It was co-developed
by IBM, Hewlett-Packard, and Compaq—normally competitors in the PC server market—and was
unanimously approved by the PCI SIG. Fully backward compatible with standard PCI, PCI-X is seen
as an immediate solution to the increased I/O requirements for high-bandwidth enterprise applications. These applications include Gigabit Ethernet, Fibre Channel, Ultra3 SCSI (Small Computer
System Interface), and high-performance graphics.
PCI-X increases both the speed of the PCI bus and the number of high-speed slots. With the
current design, PCI slots run at 33 MHz and one slot can run at 66 MHz. PCI-X doubles the current
performance of standard PCI, supporting one 64-bit slot at 133 MHz for an aggregate throughput of
1 GB/s. The new specification also features an enhanced protocol to increase the efficiency of data
transfer and to simplify electrical timing requirements, an important factor at higher clock frequencies.
For all its performance gains, PCI-X is being positioned as an interim technology while the same
three vendors develop a more long-term I/O bus architecture. While it has potential use throughout
the computer industry, its initial application is expected to be in server and workstation products,
embedded systems, and data communication environments.
The symbolism of a cartel of manufacturers making architectural changes to the PC server
without consulting Intel is seen as a significant development. At the heart of the dispute is who gets
control over future server I/O technology. The PCI-X faction, already wary of Intel’s growing
dominance in the hardware business, hopes to wrest some control. They want to develop and define
the next generation of I/O standards which they hope Intel will eventually support. Whether this will
succeed—or generate a standards war—is a moot point. The immediate effect is that it has provoked
Intel into leading another group of vendors in the development of rival I/O technology that they refer
to as “Next Generation I/O.”
Next Generation High-Speed Interfaces
Intel has announced its support for PCI Express (formerly known as 3GIO and Arapahoe) which will
potentially make its way into desktop PCs in the next few years. Meanwhile, Motorola has pledged
support for RapidIO and serial RapidIO, and AMD is pushing HyperTransport (formerly known as
LDT—Lightning Data Transport). Each consortium has a few supporters for the technology specification and market promotion, and they will eventually build products supporting each technology.
These, and several other future high-speed interface technologies, are based on serial IOs with clock
and data recovery (the clock is encoded with the data). These interfaces require far fewer pins than
parallel interfaces, resulting in much higher speeds, higher reliability, and better EMI management.
AGP (Accelerated Graphics Port)
As fast and wide as the PCI bus was, one task threatened to consume all its bandwidth—displaying
graphics. Early in the era of the ISA bus, monitors were driven by Monochrome Display Adapter
(MDA) and Color Graphics Adapter (CGA) cards. A CGA graphics display could show four colors (two
bits of data) with 320 × 200 pixels of screen resolution at 60 Hz. This required 128,000 bits of data
per screen, or just over 937 KB/s. An XGA image at a 16-bit color depth requires 1.5 MB of data for
every image. And at a vertical refresh rate of 75 Hz, this amount of data is required 75 times each
second. Thanks to modern graphics adapters, not all of this data has to be transferred across the
expansion bus. But 3D imaging technology created new problems.
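The display data-rate arithmetic above, worked through in a short sketch:

```python
# Screen data-rate arithmetic from the text.

def bytes_per_frame(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

def rate_bytes_per_s(width, height, bpp, refresh_hz):
    return bytes_per_frame(width, height, bpp) * refresh_hz

cga = rate_bytes_per_s(320, 200, 2, 60)      # 4 colors = 2 bits/pixel, 60 Hz
xga = rate_bytes_per_s(1024, 768, 16, 75)    # 16-bit color, 75-Hz refresh

print(cga / 1024)         # ~937.5 KB/s, matching the "just over 937 KB/s" figure
print(xga / 1_000_000)    # ~118 MB/s if every frame crossed the expansion bus
```

The XGA figure, close to PCI's 132 MB/s peak, illustrates why graphics alone threatened to saturate the bus.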
3D graphics have made it possible to model fantastic, realistic worlds on-screen in enormous
detail. Texture mapping and object hiding require huge amounts of data. The graphics adapter needs
to have fast access to this data to prevent the frame rate from dropping, resulting in jerky action. It
was beginning to look as though the PCI peak bandwidth of 132 MB/s was not up to the job.
Intel’s solution was to develop the Accelerated Graphics Port (AGP) as a separate connector that
operates off the processor bus. The AGP chipset acts as the intermediary between the processor and:
Level 2 cache contained in the Pentium II’s Single Edge Contact Cartridge
System memory
The graphics card
The PCI bus
This is called Quad Port acceleration.
AGP operates at the speed of the processor bus, now known as the frontside bus. At a clock rate
of 66 MHz—double the PCI clock speed—the peak base throughput is 264 MB/s.
For graphics cards specifically designed to support it, AGP allows data to be sent during both the
up and down clock cycle. This effectively doubles the clock rate to 133 MHz and the peak transfer
rate to 528 MB/s—a process known as X2. To improve the length of time that AGP can maintain this
peak transfer, the bus supports pipelining which is another improvement over PCI. A pipelining X2
graphics card can sustain throughput at 80% of the peak. AGP also supports queuing of up to 32
commands via Sideband Addressing (SBA) where the commands are being sent while data is being
received. According to Intel, this allows the bus to sustain peak performance 95% of the time.
AGP’s four-fold bandwidth improvement and graphics-only nature ensures that large transfers of
3D graphics data don’t slow up the action on screen. Nor will graphics data transfers be interrupted
by other PCI devices. AGP is primarily intended to boost 3D performance, so it also provides other
improvements that are specifically aimed at this function.
With its increased access speed to system memory over the PCI bus, AGP can use system
memory as if it is actually on the graphics card. This is called Direct Memory Execute (DIME). A
device called a Graphics Aperture Remapping Table (GART) handles the RAM addresses. GART
enables them to be distributed in small chunks throughout system memory rather than hijacking one
large section. And it presents them to a DIME-enabled graphics card as if they’re part of on-board
memory. DIME allows much larger textures to be used because the graphics card can have a much
larger memory space in which to load the bitmaps used.
AGP was initially only available in Pentium II systems based on Intel’s 440LX chipset. But
despite the absence of Intel support, and thanks to the efforts of other chipset manufacturers such as VIA, by
early 1998 it found its way onto motherboards designed for Pentium-class processors.
Intel’s release of version 2.0 of the AGP specification, combined with the AGP Pro extensions to
this specification, mark an attempt to have AGP taken seriously in the 3D graphics workstation
market. AGP 2.0 defines a new 4X-transfer mode that allows four data transfers per clock cycle on
the 66-MHz AGP interface. This delivers a maximum theoretical bandwidth of 1.0 GB/s between the
AGP device and system memory. The new 4X mode has a much higher potential throughput than
100-MHz SDRAM (800 MB/s). So the full benefit was not seen until the implementation of 133-MHz
SDRAM and Direct Rambus DRAM (DRDRAM) in the second half of 1999. AGP 2.0 was supported
by chipsets launched early in 1999 to provide support for Intel’s Pentium III processor.
AGP Pro is a physical specification aimed at satisfying the needs of high-end graphics card
manufacturers who are currently limited by the maximum electrical power that can be drawn by an
AGP card (about 25 watts). AGP Pro caters to cards that draw up to 100 watts. It uses a slightly
longer AGP slot that will also take current AGP cards.
Table 8.1 shows the burst rates, typical applications, and the outlook for the most popular PC
interfaces—ISA, EISA, PCI, and AGP.
Table 8.1: Interfaces Summary

Interface | Typical Applications                                       | Burst Data Rates    | Outlook
ISA       | Sound cards, modems                                        | 2 MB/s to 8.33 MB/s | Expected to be phased out
EISA      | Network, SCSI                                              | 33 MB/s             | Nearly phased out, superseded by PCI
PCI       | Graphics cards, SCSI adapters, new generation sound cards  | 266 MB/s            | Standard add-in peripheral bus with most market share
AGP       | Graphics cards                                             | 528 MB/s            | Standard in all Intel-based PCs from the Pentium II; coexists with PCI
USB (Universal Serial Bus)
Developed jointly by Compaq, Digital, IBM, Intel, Microsoft, NEC, and Nortel, USB offers a
standardized connector for attaching common I/O devices to a single port, thus simplifying today’s
multiplicity of ports and connectors. Significant impetus behind the USB standard was created in
September of 1995 with the announcement of a broad industry initiative to create an open host
controller interface (HCI) standard for USB. The initiative wanted to make it easier for companies
including PC manufacturers, component vendors, and peripheral suppliers to more quickly develop
USB-compliant products. Key to this was the definition of a nonproprietary host interface, left
undefined by the USB specification itself, that enabled connection to the USB bus.
Up to 127 devices can be connected by daisy chaining or by using a USB hub (which provides a
number of USB sockets and plugs into the PC or into another device). Seven peripherals can be attached to
each USB hub device. This can include a second hub to which up to another seven peripherals can be
connected, and so on. Along with the signal, USB carries a 5V power supply so that small devices
such as handheld scanners or speakers do not have to have their own power cable.
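The hub fan-out described above can be sketched as a small calculation. This is a simplification: it assumes seven downstream ports per hub with one port feeding the next hub in the chain, and it ignores the fact that hubs themselves count toward USB's 127-device limit:

```python
# How many hubs does it take to attach N peripherals if each hub has seven
# downstream ports and one of them may feed the next hub in the chain?
# (A sketch of the daisy-chain arrangement described above.)

def hubs_needed(devices, ports_per_hub=7):
    hubs = 1
    capacity = ports_per_hub           # first hub: all seven ports free
    while capacity < devices:
        hubs += 1
        capacity += ports_per_hub - 1  # one port per hub feeds the next hub
    return hubs

print(hubs_needed(20))  # a desktop full of peripherals needs a few chained hubs
```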
Devices are plugged directly into a four-pin socket on the PC or hub using a rectangular Type A
socket. All cables permanently attached to the device have a Type A plug. Devices that use a separate
cable have a square Type B socket, and the cable that connects them has a Type A and Type B plug.
USB overcomes the speed limitations of UART (Universal Asynchronous Receiver-Transmitter)-based serial ports. USB runs at a staggering 12 Mb/s, which is as fast as networking technologies
such as Ethernet and Token Ring. It provides enough bandwidth for all of today’s peripherals and
many foreseeable ones. For example, the USB bandwidth will support devices such as external CD-ROM drives and tape units as well as ISDN and PABX interfaces. It is also sufficient to carry digital
audio directly to loudspeakers equipped with digital-to-analog converters, eliminating the need for a
soundcard. However, USB is not intended to replace networks. To keep costs down, its range is
limited to 5 meters between devices. A lower communication rate of 1.5 Mb/s can be set up for
lower-bit-rate devices like keyboards and mice, saving bandwidth for the devices that really need it.
USB was designed to be user-friendly and is truly plug-and-play. It eliminates the need to install
expansion cards inside the PC and then reconfigure the system. Instead, the bus allows peripherals to
be attached, configured, used, and detached while the host and other peripherals are in operation.
There is no need to install drivers, figure out which serial or parallel port to choose, or worry about
IRQ settings, DMA channels, and I/O addresses. USB achieves this with a host controller mounted on the PC’s motherboard or on a PCI add-in card. The host controller
and subsidiary controllers in hubs manage the attached peripherals. This helps reduce the load on the PC’s
CPU and improves overall system performance. In turn, USB system software installed in the
operating system manages the host controller.
Data on the USB flows through a bidirectional pipe regulated by the host controller and by
subsidiary hub controllers. An improved version of bus mastering called isochronous data transfer
allows portions of the total bus bandwidth to be permanently reserved for specific peripherals. The
USB interface contains two main modules—the Serial Interface Engine (SIE), responsible for the bus
protocol, and the Root Hub, used to expand the number of USB ports.
The USB bus distributes 0.5 amps of power through each port. Thus, low-power devices that
might normally require a separate AC adapter can be powered through the cable. (USB lets the PC
automatically sense the power that is required and deliver it to the device.) Hubs may derive all
power from the USB bus (bus powered), or they may be powered from their own AC adapter.
Powered hubs with at least 0.5 amps per port provide the most flexibility for future downstream
devices. Port switching hubs isolate all ports from each other so that a shorted device will not bring
down the others.
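The per-port power budget implied by these figures is 5 V × 0.5 A = 2.5 W. A short sketch; the device current draws below are hypothetical examples, not figures from the text:

```python
# Per-port power budget implied by the figures above: 5-V supply, 0.5 A.

VOLTS, AMPS_PER_PORT = 5.0, 0.5
budget_w = VOLTS * AMPS_PER_PORT  # 2.5 W available per powered port

# Hypothetical device current draws in milliamps (illustrative only):
devices_ma = {"mouse": 100, "keyboard": 100, "handheld scanner": 450}
for name, ma in devices_ma.items():
    draw_w = ma / 1000 * VOLTS
    ok = draw_w <= budget_w
    print(f"{name}: {draw_w} W -> {'bus-powered OK' if ok else 'needs own supply'}")
```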
The promise of USB is a PC with a single USB port instead of the current four or five different
connectors. One large powered device, like a monitor or a printer, would be connected onto this and
would act as a hub. It would link all the other smaller devices such as mouse, keyboard, modem,
document scanner, digital camera, and so on. Since many USB device drivers did not become
available until the release of Windows 98, this promise was never going to be realized before then.
USB architecture is complex, and a consequence of having to support so many different types of
peripheral is an unwieldy protocol stack. However, the hub concept merely shifts expense and
complexity from the system unit to the keyboard or monitor. The biggest impediment to USB’s
success is probably the IEEE 1394 standard. Today most desktop and laptop PCs come standard with
a USB interface.
USB 2.0
Compaq, Hewlett-Packard, Intel, Lucent, Microsoft, NEC, and Philips are jointly leading the
development of a next-generation USB 2.0 specification. It will dramatically extend performance to
the levels necessary to provide support for future classes of high-performance peripherals. At the
time of the February 1999 IDF (Intel Developers Forum), the projected performance hike was on the
order of 10 to 20 times over existing capabilities. By the time of the next IDF in September 1999,
these estimates had been increased to 30 to 40 times the performance of USB 1.1, based on the
results of engineering studies and test silicon. At these levels of performance, the danger that the rival
IEEE 1394 bus would marginalize USB appears to have been banished forever. Indeed,
proponents of USB maintain that the two standards address differing requirements. The aim of USB
2.0 is to provide support for the full range of popular PC peripherals, while IEEE 1394 targets
connection to audiovisual consumer electronic devices such as digital camcorders, digital VCRs,
DVDs, and digital televisions.
USB 2.0 will extend the capabilities of the interface from 12 Mb/s, which is available on USB
1.1, to between 360 and 480 Mb/s on USB 2.0. It will provide a connection point for next-generation
peripherals which complement higher performance PCs. USB 2.0 is expected to be both forward and
backward compatible with USB 1.1. It is also expected to result in a seamless transition process for
the end user.
USB 1.1’s data rate of 12 Mb/s is sufficient for PC peripherals such as:
Digital cameras
Digital joysticks
Wireless base stations
Cartridge, tape, and floppy drives
Digital speakers
USB 2.0’s higher bandwidth will permit higher-functionality PC peripherals that include higher
resolution video conferencing cameras and next-generation scanners, printers, and fast storage units.
Desktop and Notebook Personal Computers (PCs)
Existing USB peripherals will operate with no change in a USB 2.0 system. Devices such as
mice, keyboards, and game pads will operate as USB 1.1 devices and not require the additional
performance that USB 2.0 offers. All USB devices are expected to co-exist in a USB 2.0 system. The
higher speed of USB 2.0 will greatly broaden the range of peripherals that may be attached to the
PC. This increased performance will also allow a greater number of USB devices to share the
available bus bandwidth, up to the architectural limits of USB. Given USB’s wide installed base,
USB 2.0's backward compatibility could prove a key benefit in the battle with IEEE 1394 to be the
consumer interface of the future.
IEEE 1394
Commonly referred to as FireWire, IEEE 1394 was approved by the Institute of Electrical and
Electronics Engineers (IEEE) in 1995. It was originally conceived by Apple, which currently receives
a $1 royalty per port. Several leading IT companies including Microsoft, Philips, National Semiconductor, and Texas Instruments have since joined the 1394 Trade Association (1394ta).
IEEE 1394 is similar to USB in many ways, but it is much faster. Both are hot-swappable serial
interfaces, but IEEE 1394 provides high-bandwidth, high-speed data transfers significantly in excess
of what USB offers. There are two levels of interface in IEEE 1394—one for the backplane bus
within the computer and another for the point-to-point interface between device and computer on the
serial cable. A simple bridge connects the two environments. The backplane bus supports data-transfer speeds of 12.5, 25, or 50 Mb/s. The cable interface supports speeds of 100, 200, and 400 Mb/s
—roughly four times as fast as a 100BaseT Ethernet connection and far faster than USB’s 1.5 or 12
Mb/s speeds. A 1394b specification aims to adopt a different coding and data-transfer scheme that
will scale to 800 Mb/s, 1.6 Gb/s, and beyond. Its high-speed capability makes IEEE 1394 viable for
connecting digital cameras, camcorders, printers, TVs, network cards, and mass storage devices to a PC.
IEEE 1394 cable connectors are constructed with the electrical contacts located inside the structure
of the connector, thus preventing any shock to the user or contamination of the contacts by the user’s
hands. These connectors are derived from the Nintendo Game Boy connector. Field-tested by children
of all ages, this small and flexible connector is very durable. These connectors are easy to use even
when the user must blindly insert them into the back of machines. Terminators are not required and
manual IDs need not be set.
IEEE 1394 uses a six-conductor cable up to 4.5 meters long. It contains two pairs of wires for
data transport and one pair for device power. The design resembles a standard 10BaseT Ethernet
cable. Each signal pair as well as the entire cable is shielded. As the standard evolves, new cable
designs are expected that will allow more bandwidth and longer distances without the need for repeaters.
An IEEE 1394 connection includes a physical-layer and a link-layer semiconductor chip, so
IEEE 1394 needs two chips per device. The physical interface (PHY) is a mixed-signal device that
connects to the other device’s PHY. It includes the logic needed to perform arbitration and bus
initialization functions. The Link interface connects the PHY and the device internals. It transmits
and receives 1394-formatted data packets and supports asynchronous or isochronous data transfers.
Both asynchronous and isochronous formats are included on the same interface. This allows both
non-real-time critical applications (printers, scanners) and real-time critical applications (video,
Chapter 8
audio) to operate on the same bus. All PHY chips use the same technology, whereas the Link is
device-specific. This approach allows IEEE 1394 to act as a peer-to-peer system as opposed to USB’s
client-server design. As a consequence, an IEEE 1394 system needs neither a serving host nor a PC.
Asynchronous transport is the traditional method of transmitting data between computers and
peripherals. Data is sent in one direction followed by an acknowledgement to the requester. Asynchronous data transfers place emphasis on delivery rather than timing. The data transmission is
guaranteed and retries are supported. Isochronous data transfer ensures that data flows at a pre-set
rate so that an application can handle it in a timed way. This is especially important for time-critical
multimedia data where just-in-time delivery eliminates the need for costly buffering. Isochronous
data transfers operate in a broadcast manner where one, or many, 1394 devices can “listen” to the
data being transmitted. Multiple channels (up to 63) of isochronous data can be transferred simultaneously on a 1394 bus. Since isochronous transfers can only take up a maximum of 80% of the 1394
bus bandwidth, there is enough bandwidth left over for additional asynchronous transfers.
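The bandwidth split above is simple arithmetic, sketched below for illustration only; the 80% cap and the three cable rates are the figures quoted in this section.

```python
# Illustrative arithmetic: the 1394 isochronous/asynchronous bandwidth split
# described above, computed for the three cable speeds.

ISOCHRONOUS_CAP = 0.80  # isochronous traffic may use at most 80% of the bus

def bandwidth_split(cable_speed_mbps: float) -> tuple[float, float]:
    """Return (max isochronous, remaining asynchronous) bandwidth in Mb/s."""
    iso = cable_speed_mbps * ISOCHRONOUS_CAP
    return iso, cable_speed_mbps - iso

for speed in (100, 200, 400):
    iso, asynch = bandwidth_split(speed)
    print(f"S{speed}: up to {iso:.0f} Mb/s isochronous, "
          f"{asynch:.0f} Mb/s left for asynchronous transfers")
```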
IEEE 1394’s scalable architecture and flexible peer-to-peer topology make it ideal for connecting high-speed devices such as computers, hard drives, and digital audio and video hardware.
Devices can be connected in a daisy chain or a tree topology scheme. Each IEEE 1394 bus segment
may have up to 63 devices attached to it. Currently, adjacent devices may be up to 4.5 meters apart.
Longer distances are possible, with or without repeater hardware. Improvements to current cabling
are being specified to allow longer distance cables. Bridges may connect over 1000 bus segments,
thus providing growth potential for a large network. An additional feature is the ability of transactions
at different speeds to occur on a single device medium. For example, some devices can communicate
at 100 Mb/s while others communicate at 200 Mb/s and 400 Mb/s. IEEE 1394 devices may be hot-plugged—added to or removed from the bus—even with the bus in full operation. Topology changes
are automatically recognized when the bus configuration is altered. This “plug and play” feature
eliminates the need for address switches or other user intervention to reconfigure the bus.
As a transaction-based packet technology, 1394 can be organized as if it were memory space
connected between devices, or as if devices resided in slots on the main backplane. Device addressing is 64 bits wide, partitioned as 10 bits for network IDs, six bits for node IDs, and 48 bits for
memory addresses. The result is the capability to address 1023 networks of 63 nodes, each with
281 TB of memory. Memory-based addressing rather than channel addressing views resources as
registers or memory that can be accessed with processor-to-memory transactions. This results in easy
networking. For example, a digital camera can easily send pictures directly to a digital printer
without a computer in the middle. With IEEE 1394, it is easy to see how the PC could lose its
position of dominance in the interconnectivity environment—it could be relegated to being just a
very intelligent peer on the network.
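The address arithmetic above can be sketched in a few lines of Python. This is purely illustrative: the field widths are those quoted in the text, and the packing helper is an invented name, not part of any 1394 API.

```python
# Sketch of the IEEE 1394 64-bit address partition described above:
# 10 bits of network (bus) ID, 6 bits of node ID, 48 bits of memory address.

NETWORK_BITS, NODE_BITS, MEMORY_BITS = 10, 6, 48

def pack_1394_address(network_id: int, node_id: int, offset: int) -> int:
    """Pack the three fields into a single 64-bit address (hypothetical helper)."""
    assert 0 <= network_id < 2**NETWORK_BITS
    assert 0 <= node_id < 2**NODE_BITS
    assert 0 <= offset < 2**MEMORY_BITS
    return (network_id << (NODE_BITS + MEMORY_BITS)) | (node_id << MEMORY_BITS) | offset

# The text's figures fall out of the field widths; the all-ones ID patterns
# are reserved, which is why the counts are 1023 and 63 rather than 1024 and 64.
print(2**NETWORK_BITS - 1)   # 1023 addressable networks
print(2**NODE_BITS - 1)      # 63 nodes per network
print(2**MEMORY_BITS)        # 281474976710656 bytes, about 281 TB per node
```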
The need for two pieces of silicon instead of one will make IEEE 1394 peripherals more expensive than SCSI, IDE, or USB devices. Hence, it is inappropriate for low-speed peripherals. But its
applicability to higher-end applications such as digital video editing is obvious, and it is clear that
the standard is destined to become a mainstream consumer electronics interface used for connecting
handy-cams, VCRs, set-top boxes, and televisions. However, its implementation to date has been
largely confined to digital camcorders, where it is known as i.LINK.
Chipsets
A chipset or “PCI set” is a group of microcircuits that orchestrate the flow of data to and from key
components of a PC. This includes the CPU itself, the main memory, the secondary cache, and any
devices on the ISA and PCI buses. The chipset also controls data flow to and from hard disks and
other devices connected to the IDE channels. While new microprocessor technologies and speed
improvements tend to receive all the attention, chipset innovations are equally important.
Although there have always been other chipset manufacturers such as SIS, VIA Technologies,
and Opti, Intel’s Triton chipsets were by far the most popular for many years. Indeed, the introduction of the Intel Triton chipset caused something of a revolution in the motherboard market, with just
about every manufacturer using it in preference to anything else. Much of this was due to the ability
of the Triton to get the best out of both the Pentium processor and the PCI bus, and it also offered
built-in master EIDE support, enhanced ISA bridge, and the ability to handle new memory technologies like EDO (Extended Data Out) and SDRAM. However, the potential performance
improvements in the new PCI chipsets will only be realized when used in conjunction with BIOS
capable of taking full advantage of the new technologies.
During the late 1990s things became far more competitive. Acer Laboratories (ALI), SIS, and
VIA all developed chipsets designed to operate with Intel, AMD, and Cyrix processors. 1998 was a
particularly important year in chipset development when an unacceptable bottleneck—the PC’s 66-MHz
system bus—was finally overcome. Interestingly, it was not Intel but rival chipmakers that made the
first move, pushing Socket 7 chipsets to 100 MHz. Intel responded with its 440BX, one of many
chipsets to use the ubiquitous North-bridge/South-bridge architecture. It was not long before Intel’s
hold on the chipset market loosened further still and the company had no one but itself to blame. In
1999 its single-minded commitment to Direct Rambus DRAM (DRDRAM) left it in the embarrassing position of not having a chipset that supported the 133-MHz system bus speed that its latest
range of processors was capable of. This was another situation its rivals were able to exploit to gain
market share.
The motherboard is the main circuit board inside the PC. It holds the processor, the memory, and the
expansion slots. It connects directly or indirectly to every part of the PC. It is made up of a chipset
(known as the “glue logic”), some code in ROM, and various interconnections, or buses. PC designs
today use many different buses to link their various components. Wide, high-speed buses are difficult
and expensive to produce. The signals travel at such a rate that even distances of just a few centimeters cause timing problems. And the metal tracks on the circuit board act as miniature radio antennae
transmitting electromagnetic noise that introduces interference with signals elsewhere in the system.
For these reasons PC design engineers try to keep the fastest buses confined to the smallest area of
the motherboard and use slower, more robust buses for other parts.
The original PC had few integrated devices, just ports for a keyboard and a cassette deck
(for storage). Everything else, including a display adapter and floppy or hard disk controllers, was an
add-in component connected via expansion slot. Over time more devices were integrated into the
motherboard. It is a slow trend, though. I/O ports and disk controllers were often mounted on
expansion cards as recently as 1995. Other components (graphics, networking, SCSI, sound) usually
remain separate. Many manufacturers have experimented with different levels of integration, building in some, or even all, of these components. But there are drawbacks. It is harder to upgrade the
specification if integrated components can’t be removed, and highly integrated motherboards often
require nonstandard cases. Furthermore, replacing a single faulty component may mean buying an
entire new motherboard. Consequently, those parts of the system whose specification changes fastest
—RAM, CPU, and graphics—tend to remain in sockets or slots for easy replacement. Similarly,
parts that not all users need such as networking or SCSI are usually left out of the base specification
to keep costs down.
Motherboard development consists largely of isolating performance-critical components from
slower ones. As higher-speed devices become available, they are linked by faster buses, and lower-speed buses are relegated to supporting roles. In the late 1990s there was also a trend towards putting
peripherals designed as integrated chips directly onto the motherboard. Initially this was confined to
audio and video chips, obviating the need for separate sound or graphics adapter cards. But in time
the peripherals integrated in this way became more diverse and included items such as SCSI, LAN,
and even RAID controllers. While there are cost benefits to this approach, the biggest downside is
the restriction of future upgrade options.
BIOS (Basic Input/Output System)
All motherboards include a small block of Read Only Memory (ROM) which is separate from the
main system memory used for loading and running software. The ROM contains the PC’s BIOS and
this offers two advantages. The code and data in the ROM BIOS need not be reloaded each time the
computer is started, and they cannot be corrupted by wayward applications that write into the wrong
part of memory. A flash-upgradeable BIOS may be updated via a floppy diskette to ensure future
compatibility with new chips, add-on cards etc.
The BIOS is comprised of several separate routines serving different functions. The first part
runs as soon as the machine is powered on. It inspects the computer to determine what hardware is
fitted. Then it conducts some simple tests to check that everything is functioning normally—a
process called the power-on self-test. If any of the peripherals are plug-and-play devices, the BIOS
assigns the resources. There’s also an option to enter the Setup program. This allows the user to tell
the PC what hardware is fitted, but thanks to automatic self-configuring BIOS, this is not used so
much now.
If all the tests are passed, the ROM tries to boot the machine from the hard disk. Failing that, it
will try the CD-ROM drive and then the floppy drive, finally displaying a message that it needs a
system disk. Once the machine has booted, the BIOS presents DOS with a standardized API for the
PC hardware. In the days before Windows, this was a vital function. But 32-bit “protected mode”
software doesn’t use the BIOS, so it is of less benefit today.
Most PCs ship with the BIOS set to check for the presence of an operating system in the floppy
disk drive first, then on the primary hard disk drive. Any modern BIOS will allow the floppy drive to
be moved down the list so as to reduce normal boot time by a few seconds. To accommodate PCs
that ship with a bootable CD-ROM, most BIOS allow the CD-ROM drive to be assigned as the boot
drive. BIOS may also allow booting from a hard disk drive other than the primary IDE drive. In this
case, it would be possible to have different operating systems or separate instances of the same OS
on different drives.
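The boot-order logic described above amounts to a search down a device list. The sketch below is illustrative Python only; the device names are invented for the example.

```python
# Minimal model of the BIOS boot-order scan described above: try each
# configured device in order and boot from the first one that is bootable.

def first_bootable(boot_order, bootable):
    """Return the first device in boot_order that is bootable, else None."""
    for device in boot_order:
        if device in bootable:
            return device
    return None  # corresponds to the "needs a system disk" message

# Default order checks the floppy first, adding a slow floppy seek:
print(first_bootable(["floppy", "hdd0", "cdrom"], {"floppy", "hdd0"}))  # floppy
# Moving the floppy down the list boots the hard disk directly:
print(first_bootable(["hdd0", "cdrom", "floppy"], {"floppy", "hdd0"}))  # hdd0
```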
Windows 98 (and later) provides multiple display support. Since most PCs have only a single
AGP slot, users wishing to take advantage of this will generally install a second graphics card in a
PCI slot. In such cases, most BIOS will treat the PCI card as the main graphics card by default.
However, some allow either the AGP card or the PCI card to be designated as the primary graphics
adapter.
While the PCI interface has helped by allowing IRQs to be shared more easily, the limited
number of IRQ settings available to a PC remains a problem for many users. For this reason, most
BIOS allow ports that are not in use to be disabled. It will often be possible to get by without
needing either a serial or a parallel port because of the increasing popularity of cable and ADSL
Internet connections and the ever-increasing availability of peripherals that use the USB interface.
Motherboards also include a separate block of memory made from very low-power-consumption
CMOS (complementary metal oxide semiconductor) RAM chips. This memory is kept “alive” by a battery
even when the PC’s power is off. This stores basic information about the PC’s configuration—
number and type of hard and floppy drives, how much memory and what kind, and so on. All this
used to be entered manually, but a modern auto-configuring BIOS does much of this work. These days
the more important settings are advanced ones such as DRAM timings. The other important data
kept in CMOS memory are the time and date which are updated by a Real Time Clock (RTC). The
clock, CMOS RAM, and battery are usually all integrated into a single chip. The PC reads the time
from the RTC when it boots up. After that, the CPU keeps time, which is why system clocks are
sometimes out of sync. Rebooting the PC causes the RTC to be reread, restoring accuracy.
System Memory
System memory is where the computer holds programs and data that are in use. Because of the
demands made by increasingly powerful software, system memory requirements have been accelerating
at an alarming pace over the last few years. The result is that modern computers have significantly
more memory than the first PCs of the early 1980s, and this has had an effect on development of the
PC’s architecture. It takes more time to store and retrieve data from a large block of memory than
from a small block. With a large amount of memory, the difference in time between a register access
and a memory access is very great, and this has resulted in extra layers of “cache” in the storage
hierarchy.
Processors are currently outstripping memory chips in access speed by an ever-increasing
margin. Increasingly, processors have to wait for data going in and out of main memory. One
solution is to use cache memory between the main memory and the processor. Another solution is to
use clever electronics to ensure that the data the processor needs next is already in cache.
Primary Cache or Level 1 Cache
Level 1 cache is located on the CPU. It provides temporary storage for instructions and data that are
organized into blocks of 32 bytes. Primary cache is the fastest form of storage. It is limited in size
because it is built into the chip with a zero wait-state (delay) interface to the processor’s execution unit.
Primary cache is implemented using static RAM (SRAM). It was traditionally 16 KB in size until
recently. SRAM is manufactured similarly to processors—highly integrated transistor patterns are
photo-etched into silicon. Each SRAM bit is comprised of between four and six transistors, which is
why SRAM takes up much more space than DRAM (which uses only one transistor plus a capacitor).
This, plus the fact that SRAM is also several times the cost of DRAM, explains why it is not used
more extensively in PC systems.
Launched at the start of 1997, Intel’s P55 MMX processor was noteworthy for the increase in
size of its Level 1 cache—32 KB. The AMD K6 and Cyrix M2 chips launched later that year upped
the ante further by providing Level 1 caches of 64 KB. The control logic of the primary cache keeps
the most frequently used data and code in the cache. It updates external memory only when the CPU
hands over control to other bus masters, or during direct memory access (DMA) by peripherals such
as floppy drives and sound cards.
Some chipsets support a “write-back” cache rather than a “write-through” cache. Write-through
happens when a processor writes data simultaneously into cache and into main memory (to assure
coherency). Write-back occurs when the processor writes to the cache and then proceeds to the next
instruction. The cache holds the write-back data and writes it into main memory when that data line
in cache is to be replaced. Write-back offers about 10% higher performance than write-through, but
this type of cache is more costly. A third type of write mode writes through with buffer, and gives
similar performance to write-back.
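The difference between the two write policies can be sketched as a minimal model. This is not any particular chipset's implementation; the class and method names are invented for illustration, under the assumption of a simple address-to-value backing store.

```python
# Minimal sketch of a write-back cache as described above: stores go to the
# cache and mark the line dirty; main memory is updated only on eviction.
# (A write-through cache would update self.memory on every write instead.)

class WriteBackCache:
    def __init__(self, memory: dict):
        self.memory = memory   # backing store: address -> value
        self.lines = {}        # cached lines: address -> (value, dirty flag)

    def write(self, addr, value):
        self.lines[addr] = (value, True)   # dirty: memory not yet updated

    def read(self, addr):
        if addr in self.lines:
            return self.lines[addr][0]     # cache hit
        return self.memory[addr]           # cache miss: go to memory

    def evict(self, addr):
        value, dirty = self.lines.pop(addr)
        if dirty:                          # write back only modified lines
            self.memory[addr] = value

mem = {0x100: 0}
cache = WriteBackCache(mem)
cache.write(0x100, 42)
print(mem[0x100])    # still 0: memory is stale until the line is replaced
cache.evict(0x100)
print(mem[0x100])    # 42: the dirty line was written back on eviction
```

The deferred memory write is exactly why write-back is faster, and why coherency logic is needed when another bus master reads memory directly.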
Secondary (external) Cache or Level 2 Cache
Most PCs are offered with a secondary cache to bridge the processor/memory performance gap.
Level 2 cache is implemented in SRAM and uses the same control logic as primary cache.
Secondary cache typically comes in two sizes—256 KB or 512 KB. It can be found on the
motherboard. The Pentium Pro deviated from this arrangement, placing the Level 2 cache in the
processor package itself. Secondary cache supplies stored information to the processor without any delay
(wait-state). For this purpose, the bus interface of the processor has a special transfer protocol called
burst mode. A burst cycle consists of four data transfers, where only the address of the first transfer is
output on the address bus. Synchronous pipelined burst is the most common secondary cache.
A synchronous cache requires chipset support. Because it is timed to a clock cycle, it can provide
a 3-5% increase in PC performance. This is achieved by specialized SRAM technology
which has been developed to allow zero wait-state access for consecutive burst read cycles. Pipelined
Burst Static RAM (PB SRAM) has an access time from 4.5 to 8 nanoseconds and allows a transfer
timing of 3-1-1-1 for bus speeds up to 133 MHz. These numbers refer to the number of clock cycles
for each access of a burst mode memory read. For example, 3-1-1-1 refers to three clock cycles for
the first word and one cycle for each subsequent word.
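The burst notation above lends itself to a small worked example, sketched in Python for illustration; the timings are the ones quoted in this section.

```python
# Illustrative arithmetic for the burst notation used above: "3-1-1-1" means
# 3 clocks for the leadoff transfer and 1 clock for each of the next three,
# so a four-word burst costs 6 clocks in total.

def burst_clocks(timing):
    """Total clock cycles for one burst read."""
    return sum(timing)

def burst_ns(timing, bus_mhz):
    """Total burst duration in nanoseconds: one clock lasts 1000/MHz ns."""
    return burst_clocks(timing) * 1000 / bus_mhz

pb_sram = (3, 1, 1, 1)      # pipelined burst SRAM, bus speeds up to 133 MHz
async_sram = (3, 2, 2, 2)   # asynchronous cache on a 50- to 66-MHz bus

print(burst_clocks(pb_sram))      # 6 clocks per burst
print(burst_clocks(async_sram))   # 9 clocks per burst
print(burst_ns(pb_sram, 100))     # 60.0 ns for one burst on a 100-MHz bus
```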
For bus speeds up to 66 MHz, Synchronous Burst Static RAM (Sync SRAM) offers even faster
performance—up to 2-1-1-1 burst cycles. However, with bus speeds above 66 MHz its performance
drops to 3-2-2-2, significantly slower than PB SRAM. There is also asynchronous cache, which is
cheaper and slower because it is not timed to a clock cycle. With asynchronous SRAM, available in
speeds between 12 and 20 nanoseconds, all burst read cycles have a timing of 3-2-2-2 on a 50- to
66-MHz CPU bus. This means there are two wait-states for the leadoff cycle and one wait-state for
the following three burst-cycle transfers.
Main Memory/RAM
A PC’s third and principal level of system memory is main memory, or Random Access Memory
(RAM). It is an impermanent store, losing its contents when power is removed.
It acts as a staging post between the hard disk and the processor. The more data available in the
RAM, the faster the PC runs.
Main memory is attached to the processor via its address and data buses. Each bus consists of a
number of electrical circuits, or bits. The width of the address bus dictates how many different
memory locations can be accessed. And the width of the data bus dictates how much information is
stored at each location. Every time a bit is added to the width of the address bus, the address range
doubles. In 1985 Intel’s 386 processor had a 32-bit address bus, enabling it to access up to 4 GB of
memory. The Pentium processor increased the data bus width to 64 bits, enabling it to access 8 bytes
of data at a time.
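The doubling rule above is simple to verify, sketched here as illustrative Python.

```python
# Each extra address line doubles the addressable range, so a 32-bit
# address bus reaches 2**32 bytes = 4 GB, as noted for the 386 above.

def addressable_bytes(address_bits: int) -> int:
    """Number of distinct byte locations an address bus can select."""
    return 2 ** address_bits

print(addressable_bytes(32))   # 4294967296 bytes = 4 GB
assert addressable_bytes(33) == 2 * addressable_bytes(32)  # one more line doubles it
```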
Each transaction between the CPU and memory is called a bus cycle. The number of data bits a
CPU can transfer during a single bus cycle affects a computer’s performance and dictates what type
of memory the computer requires. By the late 1990s most desktop computers were using 168-pin
DIMMs, which supported 64-bit data paths.
Main PC memory is built up using DRAM (dynamic RAM) chips. DRAM chips are large,
rectangular arrays of memory cells with support logic used for reading and writing data in the arrays.
It also includes refresh circuitry to maintain the integrity of stored data. Memory arrays are arranged
in rows and columns of memory cells called word-lines and bit-lines, respectively. Each memory cell
has a unique location, or address, defined by the intersection of a row and a column.
DRAM is manufactured similarly to processors—a silicon substrate is etched with the patterns
that make the transistors and capacitors (and support structures) that comprise each bit. It costs much
less than a processor because it is a series of simple, repeated structures. It lacks the complexity of a
single chip with several million individually located transistors. And because each DRAM cell uses
far fewer transistors than an SRAM cell, DRAM is cheaper. Over the years several different structures have been used to
create the memory cells on a chip. In today’s technologies, the support circuitry generally includes:
Sense amplifiers to amplify the signal or charge detected on a memory cell.
Address logic to select rows and columns.
Row Address Select (/RAS) and Column Address Select (/CAS) logic to latch and resolve
the row and column addresses, and to initiate and terminate read and write operations.
Read and write circuitry to store information in the memory’s cells or read that which is
stored there.
Internal counters or registers to keep track of the refresh sequence, or to initiate refresh
cycles as needed.
Output Enable logic to prevent data from appearing at the outputs unless specifically enabled.
A transistor is effectively an on/off switch that controls the flow of current. In DRAM each
transistor holds a single bit. If the transistor is “open” (1), current can flow; if it’s “closed” (0),
current cannot flow. A capacitor is used to hold the charge, but the charge soon leaks away, taking the
data with it. To overcome this problem, other circuitry refreshes the memory (reading the value before it
disappears) and writes back a pristine version. This refreshing action is why the memory is called dynamic.
The access speed is expressed in nanoseconds, and it is this figure that represents the speed of the RAM.
Most Pentium-based PCs use 60- or 70-ns RAM.
The process of refreshing actually interrupts/slows data access, but clever cache design minimizes this. However, as processor speeds passed the 200-MHz mark, no amount of caching could
compensate for the inherent slowness of DRAM. Other, faster memory technologies have largely
replaced it.
The most difficult aspect of working with DRAM devices is resolving the timing requirements.
DRAMs are generally asynchronous, responding to input signals whenever they occur. As long as the
signals are applied in the proper sequence, with signal duration and delays between signals that meet
the specified limits, DRAM will work properly.
Popular types of DRAM used in PCs are:
Fast Page Mode (FPM)
EDO (Extended Data Out)
Burst EDO
Synchronous (SDRAM)
Direct Rambus (DRDRAM)
Double Data Rate (DDR DRAM).
While much more cost effective than SRAM per megabit, traditional DRAM has always suffered
speed and latency penalties making it unsuitable for some applications. Consequently, product
manufacturers have often been forced to opt for the more expensive but faster SRAM technology.
However, by 2000 system designers had another option available to them. It offered the advantages of
both worlds: fast speed, low cost, high density, and lower power consumption. Though Monolithic System
Technology (MoSys) calls its 1T-SRAM design an SRAM, it is based on single-transistor DRAM cells.
Like any DRAM, its data must be periodically refreshed to prevent loss. What makes the 1T-SRAM
unique is that it offers a true SRAM-style interface that hides all refresh operations from the memory
controller. Traditionally, SRAMs have been built using a bulky four- or six-transistor (4T, 6T) cell. The
MoSys 1T-SRAM device is built on a single-transistor (1T) DRAM cell, allowing a reduction in die size
of between 50 and 80% compared to SRAMs of similar density.
Regardless of technology type, most desktop PCs for the home are configured with 256 MB of
RAM. But it is not atypical to find systems with 512-MB or even 1-GB configurations.
SIMMs (Single Inline Memory Module)
Memory chips are generally packaged into small plastic or ceramic dual inline packages (DIPs),
which are assembled into a memory module. The SIMM is a small circuit board designed to accommodate surface-mount memory chips. SIMMs use less board space and are more compact than
previous memory-mounting hardware.
By the early 1990s the original 30-pin SIMM was superseded by the 72-pin variety. These
supported 32-bit data paths and were originally used with 32-bit CPUs. A typical motherboard of the
time offered four SIMM sockets. These were capable of accepting either single-sided or double-sided
SIMMs with module sizes of 4, 8, 16, 32 or even 64 MB. With the introduction of the Pentium
processor in 1993, the width of the data bus was increased to 64 bits. When 32-bit SIMMs were used
with these processors, they had to be installed in pairs with each pair of modules making up a
memory bank. The CPU communicated with the bank of memory as one logical unit.
DIMMs (Dual Inline Memory Module)
By the end of the millennium memory subsystems standardized around an 8-byte data interface. The
DIMM had replaced the SIMM as the module standard for the PC industry. DIMMs have 168 pins in
two (or dual) rows of contacts, one on each side of the card. With the additional pins a computer can
retrieve information from DIMMs 64 bits at a time, instead of the 32- or 16-bit transfers that are usual
with SIMMs.
Some of the physical differences between 168-pin DIMMs and 72-pin SIMMs include the length
of module, the number of notches on the module, and the way the module installs in the socket.
Another difference is that many 72-pin SIMMs install at a slight angle, whereas 168-pin DIMMs
install straight into the memory socket and remain vertical to the system motherboard. Unlike
SIMMs, DIMMs can be used singly and it is typical for a modern PC to provide just one or two
DIMM slots.
Presence Detect
When a computer system boots up, it must detect the configuration of the memory modules in order
to run properly. For a number of years parallel-presence detect (PPD) was the traditional method of
relaying the required information by using a number of resistors. PPD used a separate pin for each bit
of information and was used by SIMMs and some DIMMs to identify themselves. However, PPD was
not flexible enough to support newer memory technologies. This led the JEDEC (Joint Electron
Device Engineering Council) to define a new standard—serial-presence detect (SPD). SPD has been
in use since the emergence of SDRAM technology.
The Serial Presence Detect function is implemented by using an 8-pin serial EEPROM chip. This
stores information about the memory module’s size, speed, voltage, drive strength, and number of
row and column addresses. These parameters are read by the BIOS during POST. The SPD also
contains manufacturing data such as date codes and part numbers.
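A BIOS reading SPD data is, in essence, decoding a small table of bytes. The sketch below is a heavily simplified stand-in, not the real JEDEC SPD byte map (the actual standard fixes many more fields at defined offsets), but it shows the flavor of the parameters listed above.

```python
# Hedged sketch of serial-presence-detect decoding. The byte layout here is
# a hypothetical simplification invented for illustration; real SPD EEPROMs
# follow the JEDEC-defined map, which encodes far more parameters.

SPD_BYTES = bytes([128, 8, 4, 12, 10])   # example contents, made up

def decode_spd(spd: bytes) -> dict:
    return {
        "bytes_used":    spd[0],        # how much of the EEPROM is programmed
        "eeprom_size":   2 ** spd[1],   # total EEPROM bytes (log2-encoded here)
        "memory_type":   spd[2],        # a code identifying the DRAM type
        "row_addr_bits": spd[3],        # number of row address lines
        "col_addr_bits": spd[4],        # number of column address lines
    }

info = decode_spd(SPD_BYTES)
print(info["eeprom_size"])   # 256
# The array geometry follows from the address widths: rows x columns cells.
print(2 ** (info["row_addr_bits"] + info["col_addr_bits"]))   # 4194304
```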
Parity Memory
Memory modules have traditionally been available in two basic flavors—non-parity and parity.
Parity checking uses a ninth memory chip to hold checksum data on the contents of the other eight
chips in that memory bank. If the predicted value of the checksum matches the actual value, all is
well. If it does not, memory contents are corrupted and unreliable. In this event a non-maskable
interrupt (NMI) is generated to instruct the system to shut down and avoid any potential data corruption.
Parity checking is quite limited—only odd numbers of bit errors are detected (two bit errors
in the same byte will cancel each other out). Also, there is no way of identifying the offending bits
or fixing them. In recent years the more sophisticated and more costly Error Check Code (ECC)
memory has gained in popularity. Unlike parity memory, which uses a single bit to provide protection to eight bits, ECC uses larger groupings. Five ECC bits are needed to protect each eight-bit
word, six for 16-bit words, seven for 32-bit words, and eight for 64-bit words.
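The parity limitation described above is easy to demonstrate; the fragment below is illustrative Python only.

```python
# One parity bit per byte detects any odd number of flipped bits but
# misses an even number, as described above.

def parity_bit(byte: int) -> int:
    """Even parity: chosen so the nine bits together contain an even
    number of ones."""
    return bin(byte).count("1") % 2

stored = 0b1011_0010
check = parity_bit(stored)

# Single-bit error: detected, because the recomputed parity no longer matches.
corrupted_1 = stored ^ 0b0000_0100
print(parity_bit(corrupted_1) != check)   # True - error caught

# Two-bit error in the same byte: the flips cancel out and go unnoticed.
corrupted_2 = stored ^ 0b0001_0100
print(parity_bit(corrupted_2) != check)   # False - error missed
```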
Additional code is needed for ECC protection and the firmware that generates and checks the
ECC can be in the motherboard itself or built into the motherboard chipset. Most Intel chipsets now
include ECC support. The downside is that ECC memory is relatively slow. It requires more overhead
than parity memory for storing data and causes around a 3% performance loss in the memory
subsystem. Generally, use of ECC memory is limited to mission-critical applications and is more
commonly found on servers than on desktop systems.
What the firmware does when it detects an error can differ considerably. Modern systems will
automatically correct single-bit errors, which account for most RAM errors, without halting the
system. Many can also fix multi-bit errors on the fly. Where that’s not possible, they automatically
reboot and map out the bad memory.
Chapter 8
Flash Memory
Flash memory is a solid-state, nonvolatile, rewritable memory that works like RAM combined with a
hard disk. It resembles conventional memory, coming in the form of discrete chips, modules, or
memory cards. Just like with DRAM and SRAM, bits of electronic data are stored in memory cells.
And like a hard disk drive, flash memory is nonvolatile, retaining its data even when the power is
turned off.
Although flash memory has advantages over RAM (its nonvolatility) and hard disks (it has no
moving parts), there are reasons why it is not a viable replacement for either. Because of its design,
flash memory must be erased in blocks of data rather than single bytes like RAM. It has significantly
higher cost, and flash memory cells have a limited life span of around 100,000 write cycles. And
while “flash drives” are smaller, faster, consume less energy, and are capable of withstanding shocks
up to 2000 Gs (equivalent to a 10-foot drop onto concrete) without losing data, their limited capacity
(around 100 MB) makes them an inappropriate alternative to a PC’s hard disk drive. Even if capacity
were not a problem, flash memory cannot compete with hard disks in price.
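The block-erase behavior described above can be illustrated with a toy model. The 0xFF erased state and the rule that programming can only clear bits (never set them) reflect common flash conventions; the block size here is arbitrary.

```python
class FlashBlock:
    """Toy model of flash semantics: a program operation can only clear
    bits (1 -> 0); restoring any bit to 1 requires erasing the block."""
    ERASED = 0xFF

    def __init__(self, size: int = 4096):
        self.cells = bytearray([self.ERASED] * size)
        self.erase_count = 0  # real cells endure ~100,000 erase cycles

    def program(self, offset: int, data: bytes) -> None:
        # AND-ing models the physics: a program pulse cannot set bits
        for i, b in enumerate(data):
            self.cells[offset + i] &= b

    def erase(self) -> None:
        # the whole block resets to 0xFF in one operation
        for i in range(len(self.cells)):
            self.cells[i] = self.ERASED
        self.erase_count += 1
```

This is why overwriting a single byte in flash is expensive: the driver must read the block, erase it, and program the modified contents back, consuming one of the block's limited erase cycles.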
Since its inception in the mid-1980s, flash memory has evolved into a versatile and practical
storage solution. Several different implementations exist. NOR flash is a random access device
appropriate for code storage applications. NAND flash—optimized for mass storage applications—is
the most common form. Its high speed, durability, and low voltage requirements have made it ideal
for use in applications such as digital cameras, cell phones, printers, handheld computers, pagers,
and audio recorders.
Some popular flash memory types include SmartMedia and CompactFlash. Flash memory
capacities used in consumer applications include 128 MB, 256 MB, 512 MB, and 1 GB.
Hard-Disk Drive
When the power to a PC is switched off, the contents of the main memory (RAM) are lost. It is the
PC’s hard disk that serves as a nonvolatile, bulk storage medium and as the repository for documents,
files, and applications. It is interesting to recall that back in 1954 when IBM first invented the hard
disk, capacity was a mere 5 MB stored across fifty 24-in platters. It was 25 years later that Seagate
Technology introduced the first hard disk drive for personal computers, boasting a capacity of up to
40 MB and data transfer rate of 625 Kb/s. Even as recently as the late 1980s 100 MB of hard disk
space was considered generous! Today this would be totally inadequate since it is hardly enough to
install the operating system, let alone a huge application.
The PC’s upgradability has led software companies to believe that it doesn’t matter how large
their applications are. As a result, the average size of the hard disk rose from 100 MB to 1.2 GB in
just a few years. By the start of the new millennium a typical desktop hard drive stored 18 GB across
three 3.5-in platters. Thankfully, as capacity has gone up, prices have come down. Improved areal
density levels are the dominant reason for the reduction in price per megabyte.
It is not just the size of hard disks that has increased. The performance of fixed disk media has
also evolved considerably. Users enjoy high-performance and high-capacity data storage without
paying a premium for a SCSI-based system.
When Sony and Philips invented the Compact Disc (CD) in the early 1980s, even they could not have
imagined what a versatile carrier of information it would become. Launched in 1982, the audio CD’s
Desktop and Notebook Personal Computers (PCs)
durability, random access features, and audio quality made it incredibly successful, capturing the
majority of the market within a few years. CD-ROM followed in 1984, but it took a few years longer
to gain the widespread acceptance enjoyed by the audio CD. This consumer reluctance was mainly
due to a lack of compelling content during the first few years the technology was available. However,
there are now countless games, software applications, encyclopedias, presentations, and other
multimedia programs available on CD-ROM. What was originally designed to carry 74 minutes of
high-quality digital audio can now hold up to 650 MB of computer data, 100 publishable photographic scans, or even 74 minutes of VHS-quality full-motion video and audio. Many discs offer a
combination of all three, along with other information.
Today’s mass-produced CD-ROM drives are faster and cheaper than they have ever been.
Consequently, not only is a vast range of software now routinely delivered on CD-ROM, but many
programs (databases, multimedia titles, games, movies) are also run directly from CD-ROM—often
over a network. The CD-ROM market now embraces internal, external and portable drives, caddy- and tray-loading mechanisms, single-disc and multi-changer units, SCSI and EIDE interfaces, and a
plethora of standards.
In order to understand what discs do and which machine reads which CD, it is necessary to
identify the different formats. The information describing a CD standard is written in books with
colored covers. A given standard is known by the color of its book cover:
Red Book – The most widespread CD standard; it describes the physical properties of the
compact disc and the digital audio encoding.
Yellow Book – Written in 1984 to describe the storage of computer data on CD, i.e.,
CD-ROM (Read Only Memory).
Green Book – Describes the CD-interactive (CD-i) disc, player, and operating system.
Orange Book – Defines CD-Recordable discs with multi-session capability. Part I defines
CD-MO (Magneto Optical) rewritable discs; Part II defines CD-WO (Write Once) discs;
Part III defines CD-RW (Rewritable) discs.
White Book – Finalized in 1993; defines the VideoCD specification.
Blue Book – Defines the Enhanced Music CD specification for multi-session pressed disc
(i.e., not recordable) containing audio and data sessions. These discs are intended to be
played on any CD audio player, on PCs, and on future custom designed players.
CD-I Bridge is a Philips/Sony specification for discs intended to play on CD-i players and
platforms such as the PC. Photo CD has been specified by Kodak and Philips based on the CD-i
Bridge specification.
CD-ROM Applications
Most multimedia titles are optimized for double or, at best, quad-speed drives. If video is recorded
to play back in real time at a 300 KB/s sustained transfer rate, anything faster than double-speed is
unnecessary. In some cases a faster drive may be able to read off the information quickly into a
buffer cache from where it is subsequently played, freeing the drive for further work. However, this
is rare.
Pulling off large images from a PhotoCD would be a perfect application for a faster CD-ROM
drive. But decompressing these images as they are read off the disc results in a performance ceiling
of quad-speed. In fact, just about the only application that truly needs fast data transfer rates is
copying sequential data onto a hard disc. In other words, installing software.
Fast CD-ROM drives are only fast for sustained data transfer, not random access. An ideal
application for high-sustained data transfer is high-quality digital video, recorded at a suitably high
rate. MPEG-2 video as implemented on DVDs requires a sustained data transfer of around 580 KB/s.
This compares to MPEG-1’s 170 KB/s found on existing White Book VideoCDs. However, a standard 650-MB CD-ROM disc would last less than 20 minutes at those high rates. Hence, high-quality
video will only be practical on DVD discs which have a much higher capacity.
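The 20-minute figure follows directly from capacity divided by sustained rate; a quick check, taking 1 MB as 1,024 KB:

```python
def playback_minutes(capacity_mb: float, rate_kb_per_s: float) -> float:
    """Minutes of video a disc holds at a given sustained transfer rate."""
    return capacity_mb * 1024 / rate_kb_per_s / 60

# a 650-MB CD at MPEG-2's ~580 KB/s comes out just over 19 minutes,
# consistent with the "less than 20 minutes" above
```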
Normal music CDs and CD-ROMs are made from pre-pressed discs, their data encoded as indentations
in the silver surface of the internal disc. To read the disc, the drive shines a laser onto the CD-ROM’s
surface, and by interpreting the way in which the laser light is reflected from the disc, it can tell
whether the area under the laser is indented.
Thanks to sophisticated laser focusing and error detection routines, this process is pretty close to
ideal. However, there is no way the laser can change the indentations of the silver disc. This means
there’s no way of writing new data to the disc once it has been created. Thus, the technological
developments to enable CD-ROMs to be written or rewritten have necessitated changes to the disc
media as well as to the read/write mechanisms in the associated CD-R and CD-RW drives.
At the start of 1997 it appeared likely that CD-R and CD-RW drives would be superseded by
DVD technology, almost before they had gotten off the ground. During that year DVD Forum
members turned on each other, triggering a DVD standards war and delaying product shipment.
Consequently, the writable and rewritable CD formats were given a new lease on life. For professional users, developers, small businesses, presenters, multimedia designers, and home recording
artists the recordable CD formats offer a range of powerful storage applications. CD media compatibility is their big advantage over alternative removable storage technologies such as MO, LIMDOW,
and PD. CD-R and CD-RW drives can read nearly all the existing flavors of CD-ROMs, and discs
made by CD-R and CD-RW devices can be read on both (MultiRead-capable) CD-ROM drives and
DVD-ROM drives. A further advantage, owing to their wide adoption, is the low cost of media: CD-RW media is cheap and CD-R media even cheaper. Their principal disadvantage is that there are
limitations to their rewriteability. CD-R is not rewritable at all. And until recently, CD-RW discs had
to be reformatted to recover the space taken by deleted files when a disc becomes full. This is unlike
competing technologies which all offer true drag-and-drop functionality with no such limitation.
Even now, CD-RW rewriteability is less than perfect, resulting in a reduction of a CD-RW disc’s
storage capacity.
After a life span of ten years, during which time the capacity of hard disks increased a hundredfold,
the CD-ROM finally got the facelift it needed to take it into the next century. In 1996 a standard for
DVD (initially called digital videodisc, but eventually known as digital versatile disc) was finally
agreed upon.
The movie companies immediately saw a big CD as a way of stimulating the video market,
producing better quality sound and pictures on a disc that costs considerably less to produce than a
VHS tape. Using MPEG-2 video compression—the same system that will be used for digital TV,
satellite, and cable transmissions—it is possible to fit a full-length movie onto one side of a DVD
disc. The picture quality is as good as live TV and the DVD-Video disc can carry multichannel
digital sound.
For computer users, however, DVD means more than just movies. While DVD-Video has been
grabbing the most headlines, DVD-ROM is going to be much bigger for a long time to come. Over
the next few years computer-based DVD drives are likely to outsell home DVD-Video machines by a
ratio of at least 5:1. With the enthusiastic backing of the computer industry in general, and CD-ROM
drive manufacturers in particular, DVD-ROM drives are being used more than CD-ROM drives.
Initially, movies will be the principal application to make use of DVD’s greater capacity.
However, the need for more capacity in the computer world is obvious to anyone who already has
multi-CD games and software packages. With modern-day programs fast outgrowing the CD, the
prospect of a return to multiple disc sets (which appeared to have disappeared when CD-ROM took
over from floppy disc) was looming ever closer. The unprecedented storage capacity provided by DVD
lets application vendors fit multiple CD titles (phone databases, map programs, encyclopaedias) on a
single disc, making them more convenient to use. Developers of edutainment and reference titles are
also free to use video and audio clips more liberally. And game developers can script interactive
games with full-motion video and surround-sound audio with less fear of running out of space.
When Philips and Sony got together to develop CD, there were just the two companies talking
primarily about a replacement for the LP. Engineers carried out decisions about how the system
would work and all went smoothly. However, the specification for the CD’s successor went entirely
the other way, with arguments, confusions, half-truths, and Machiavellian intrigue behind the scenes.
It all started badly with Matsushita Electric, Toshiba, and moviemaker Time Warner in one
corner with their Super Density (SD) disc technology. In the other corner were Sony and Philips, pushing
their Multimedia CD (MMCD) technology. The two disc formats were totally incompatible, creating
the possibility of a VHS/Betamax-type battle. Under pressure from the computer industry, the major
manufacturers formed a DVD Consortium to develop a single standard. The DVD-ROM standard that
resulted at the end of 1995 was a compromise between the two technologies, but relied heavily on
SD. The likes of Microsoft, Intel, Apple, and IBM gave both sides a simple ultimatum—produce a
single standard quickly, or don’t expect any support from the computer world. The major developers,
11 in all, created an uneasy alliance under what later became known as the DVD Forum. They
continued to bicker over each element of technology being incorporated in the final specification.
The reasons for the continued rearguard actions were simple. For every item of original technology put into DVD, a license fee has to be paid to the owners of the technology. These license fees
may only be a few cents per drive, but when the market amounts to millions of drives a year, it is
well worth arguing over. If this didn’t make matters bad enough, in waded the movie industry.
Paranoid about losing all its DVD-Video material to universal pirating, Hollywood first decided
it wanted an anti-copying system along the same lines as the SCMS system introduced for DAT
tapes. Just as that was being sorted out, Hollywood became aware of the possibility of a computer
being used for bit-for-bit file copying from a DVD disc to some other medium. The consequence was
an attempt to have the U.S. Congress pass legislation similar to the Audio Home Recording Act (the
draft was called “Digital Video Recording Act”) and to insist that the computer industry be covered
by the proposed new law.
While their efforts to force legislation failed, the movie studios did succeed in forcing a deeper
copy protection requirement into the DVD-Video standard. The resultant Content Scrambling System
(CSS) was finalized toward the end of 1996. Further copy-protection systems have been developed
subsequent to this.
There are five physical formats, or books, of DVD:
DVD-ROM is a high-capacity data storage medium.
DVD-Video is a digital storage medium for feature-length motion pictures.
DVD-Audio is an audio-only storage format similar to CD-Audio.
DVD-R offers a write-once, read-many storage format akin to CD-R.
DVD-RAM was the first rewritable (erasable) flavor of DVD to come to market and has
subsequently found competition in the rival DVD-RW and DVD+RW format.
DVD discs have the same overall size as a standard 120-mm diameter, 1.2-mm thick CD, and
provide up to 17 GB of storage with higher than CD-ROM transfer rates. They have access rates
similar to CD-ROM and come in four versions:
DVD-5 is a single-sided, single-layered disc boosting capacity seven-fold to 4.7 GB.
DVD-9 is a single-sided, double-layered disc offering 8.5 GB.
DVD-10 is a 9.4 GB dual-sided single-layered disc.
DVD-18 will increase capacity to a huge 17 GB on a dual-sided, dual-layered disc.
Like DVD discs, there is little to distinguish a DVD-ROM drive from an ordinary CD-ROM drive
since the only giveaway is the DVD logo on the front. Even inside the drive there are more similarities than differences. The interface is ATAPI (AT Attachment Packet Interface) or SCSI for the more
up-market drives and the transport is much like any other CD-ROM drive. CD-ROM data is recorded
near the top surface of a disc. A DVD’s data layer is right in the middle so that the disc can be double-sided. Hence, the laser assembly of a DVD-ROM drive needs to be more complex than its CD-ROM
counterpart so it can read from both CD and DVD media. An early solution entailed having a pair of
lenses on a swivel—one to focus the beam onto the DVD data layers and the other to read ordinary
CDs. Subsequently, more sophisticated designs have emerged that eliminate the need for lens
switching. For example, Sony’s “dual discrete optical pickup” design has separate lasers optimized
for CD (780-nm wavelength) and DVD (650-nm wavelength). Many Panasonic drives employ an
even more elegant solution that avoids the need to switch either lenses or laser beams. They use a
holographic optical element capable of focusing a laser beam at two discrete points.
DVD-ROM drives spin the disk a lot slower than their CD-ROM counterparts. But since the data
is packed much closer together on DVD discs, the throughput is substantially better than a CD-ROM
drive at equivalent spin speed. While a 1x CD-ROM drive has a maximum data rate of only 150 KB/s, a
1x DVD-ROM drive can transfer data at 1,250 KB/s—just over the speed of an 8x CD-ROM drive.
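Using the two 1x figures quoted here, converting a DVD drive's speed rating into its CD-ROM equivalent is a simple ratio:

```python
CD_1X_KB_S = 150    # 1x CD-ROM data rate, KB/s
DVD_1X_KB_S = 1250  # 1x DVD-ROM data rate, KB/s, as quoted in the text

def cd_equivalent(dvd_speed: float) -> float:
    """CD-ROM 'x' rating delivering the same throughput as a DVD drive."""
    return dvd_speed * DVD_1X_KB_S / CD_1X_KB_S

# a 1x DVD-ROM drive moves data like a CD-ROM drive of just over 8x
```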
DVD-ROM drives became generally available in early 1997. These early 1x devices were
capable of reading CD-ROM discs at 12x speed—sufficient for full-screen video playback. As with
CD-ROM, higher speed drives appeared as the technology matured. By the beginning of 1998 multi-speed DVD-ROM drives had already reached the market. They were capable of reading DVD media
at double-speed, producing a sustained transfer rate of 2,700 KB/s. They were also capable of
spinning CDs at 24-speed, and by the end of that year DVD read performance had been increased to
5-speed. A year later, DVD media reading had improved to six-speed (8,100 KB/s) and CD-ROM
reading to 32-speed.
There is no standard terminology to describe the various generations of DVD-ROM drives.
However, second generation (DVD II) is usually used to refer to 2x drives capable of reading CD-R/
CD-RW media. Third generation (DVD III) usually means 5x, or sometimes, 4.8x or 6x drives—
some of which can read DVD-RAM media.
Removable Storage
Back in the mid-1980s when a PC had a 20-MB hard disk, a 1.2-MB floppy was a capacious device
capable of backing up the entire drive on a mere 17 disks. By early 1999 the standard PC hard disk
had a capacity of between 3 GB and 4 GB—a 200-fold increase. In the same period the floppy’s
capacity increased by less than 20%. As a result, it is now at a disadvantage when used in conjunction with any modern large hard disk. For most users, the standard floppy disk just isn’t big enough.
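The figures in this paragraph are easy to verify with a little arithmetic:

```python
import math

def disks_needed(drive_mb: float, floppy_mb: float) -> int:
    # floppies required to back up a full drive, ignoring formatting overhead
    return math.ceil(drive_mb / floppy_mb)

# mid-1980s: a 20-MB hard disk fits on seventeen 1.2-MB floppies,
# while a late-1990s multi-gigabyte disk would need thousands
```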
In the past this problem only affected a tiny proportion of users, and solutions were available for
those that did require high-capacity removable disks. For example, SyQuest’s 5.25-in, 44- or 88-MB
devices have been the industry standard in publishing. They are used for transferring large DTP or
graphics files from the desktop to remote printers. They are quick and easy to use but are reasonably expensive.
Times have changed and today everybody needs high-capacity removable storage. These days,
applications do not come on single floppies, they come on CD-ROMs. Thanks to Windows and the
impact of multimedia, file sizes have gone through the ceiling. A Word document with a few embedded graphics results in a multi-megabyte data file, quite incapable of being shoehorned into a floppy
disk. There is no getting around the fact that a PC just has to have some sort of removable, writable
storage with a capacity in tune with current storage requirements. It must be removable for several
reasons—to transport files between PCs, to back up personal data, to act as an over-spill for the hard
disk, to provide (in theory) unlimited storage. It is much easier to swap removable disks than to
install another hard disk to obtain extra storage capacity.
Phase-change Technology
Panasonic’s PD system, boasting the company’s own patented phase-change technology, has been
around since late 1995. Considered innovative at the time, the PD drive combines an optical disk
drive capable of handling 650-MB capacity disks along with a quad-speed CD-ROM drive. It is the
only erasable optical solution that has direct overwrite capability. Phase-change uses a purely optical
technology that relies only on a laser. It is able to write new data with just a single pass of the read/
write head.
With Panasonic’s system the active layer is made of a material with reversible properties. A very
high-powered laser heats that portion of the active layer where data is to be recorded. This area cools
rapidly, forming an amorphous spot of low reflectivity. A low-powered laser beam detects the
difference between these spots and the more reflective, untouched, crystalline areas, thus identifying a
binary “0” or “1.”
By reheating a spot, re-crystallization occurs, resulting in a return to its original highly reflective
state. Laser temperature alone changes the active layer to crystalline or amorphous, according to the
data required, in a single pass. Panasonic’s single-pass phase-change system is quicker than the two-pass process employed by early MO (Magneto Optical) devices. However, modern day single-pass
MO devices surpass its level of performance.
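The single-pass direct overwrite described above can be caricatured as choosing the laser power spot by spot; this is a deliberately simplified model of the recording physics, not Panasonic's actual control logic.

```python
# States of a spot on the active layer
CRYSTALLINE = "crystalline"  # high reflectivity, as pressed
AMORPHOUS = "amorphous"      # low reflectivity, rapidly cooled

def write_spot(bit: int) -> str:
    # one pass per spot: high power melts and quenches to amorphous,
    # lower power reheats and re-crystallizes
    return AMORPHOUS if bit else CRYSTALLINE

def read_spot(state: str) -> int:
    # low-power read beam: low reflectivity reads as 1, high as 0
    return 1 if state == AMORPHOUS else 0

# direct overwrite: any bit pattern can be written in a single pass
track = [read_spot(write_spot(b)) for b in (1, 0, 1, 1, 0)]
```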
Floppy Replacements
Today’s hard disks are measured in gigabytes, and multimedia and graphics file sizes are often
measured in tens of megabytes. A capacity of 100 MB to 250 MB is just right for performing the
traditional functions of a floppy disk (moving files between systems, archiving, backing up files or
directories, sending files by mail, etc.). It is not surprising that drives in this range are bidding to be
the next-generation replacement for floppy disk drives. They all use flexible magnetic media and
employ traditional magnetic storage technology.
Without doubt, the most popular device in this category is Iomega’s Zip drive, launched in 1995.
The secret of the Zip’s good performance (apart from its high, 3,000-rpm spin rate) is a technology
pioneered by Iomega. Based on the Bernoulli aerodynamic principle, it sucks the flexible disk up
towards the read/write head rather than vice-versa. The disks are soft and flexible like floppy disks
which makes them cheap to make and less susceptible to shock.
The Zip has a capacity of 94 MB and is available in both internal and external versions. The
internal units are fast, fit a 3.5-inch bay, and come with a choice of SCSI or ATAPI interface. They
have an average 29-ms seek time and a data transfer rate of 1.4 MB/s. External units originally came
in SCSI or parallel port versions only. However, the Zip 100 Plus version, launched in early 1998,
offered additional versatility. It was capable of automatically detecting which interface applied and
operating accordingly. The range was further extended in the spring of 1999 when Iomega brought a
USB version to market. In addition to the obvious motivation of Windows 98 with its properly
integrated USB support, the success of the Apple iMac was another key factor behind the USB version.
Any sacrifice the external version makes in terms of performance is more than offset by the
advantage of portability. It makes the transfer of reasonable-sized volumes of data between PCs a
truly simple task. The main disadvantage of Zip drives is that they are not backward compatible with
3.5-in floppies.
The end of 1996 saw the long-awaited appearance of OR Technology’s LS-120 drive. The
technology behind the LS-120 had originally been developed by Iomega, but was abandoned and
sold on to 3M. The launch had been much delayed, allowing the rival Zip drive plenty of time to
become established. Even then the LS-120 was hampered by a low-profile and somewhat muddled
marketing campaign. Originally launched under the somewhat confusing brand name “a:DRIVE,”
the LS-120 was promoted by Matsushita, 3M, and Compaq. It was initially available only preinstalled on the latter’s new range of Deskpro PCs. Subsequently, OR Technology offered licenses
to third party manufacturers in the hope they would fit the a:DRIVE instead of a standard floppy to
their PCs.
However, it was not until 1998 that Imation Corporation (a spin-off of 3M’s data storage and
imaging businesses) launched yet another marketing offensive. Under the brand name “SuperDisk,”
the product began to gain serious success in the marketplace. A SuperDisk diskette looks very similar
to a common 1.44-MB, 3.5-inch disk. But it uses a refinement of the old 21-MB floptical technology
to deliver much greater capacity and speed. Named after the “laser servo” technology it employs, an
LS-120 disk has optical reference tracks on its surface that are both written and read by a laser system.
These “servo tracks” are much narrower and can be laid closer together on the disk. For example, an
LS-120 disk has a track density of 2,490 tpi compared with 135 tpi on a standard 1.44-MB floppy. As
a result, the LS-120 can hold 120 MB of data.
The SuperDisk LS-120 drive uses an IDE interface (rather than the usual floppy lead) which uses
up valuable IDE connections. This represents a potential problem with an IDE controller which
supports only two devices rather than an EIDE controller which supports four. While its 450-KB/s
data transfer rate and 70-ms seek time make it 5 times faster than a standard 3.5-in floppy drive, its
comparatively slow spin rate of 720 rpm means that it is not as fast as a Zip drive. However, there are
two key points in the LS-120 specification that represent its principal advantages over the Zip. First,
there is backward compatibility. In addition to 120-MB SuperDisk diskettes, the LS-120 can accommodate standard 1.44-MB and 720-KB floppy disks. And these are handled with a 3-fold speed
improvement over standard floppy drives. Second, compatible BIOS allows the LS-120 to act as a
fail-safe start-up drive in the event of a hard disk crash. Taken together, these make the LS-120 a
viable alternative to a standard floppy drive.
Early 1999 saw the entry of a third device in this category with the launch of Sony’s HiFD drive.
With a capacity of 200 MB per disk, the HiFD provides considerably greater storage capacity than
either Iomega’s 100-MB Zip or Imation’s 120-MB SuperDisk. It was initially released as an external
model with a parallel port connector and a pass-through connector for a printer. Equipping the HiFD
with a dual-head mechanism provides compatibility with conventional 1.44-MB floppy disks. When
reading 1.44-MB floppy disks, a conventional floppy-disk head is used. This comes into direct
contact with the media surface which rotates at just 300 rpm. The separate HiFD head works more
like a hard disk, gliding over the surface of the disk without touching it. This allows the HiFD disk to
rotate at 3,600 rpm with a level of performance that is significantly better than either of its rivals.
However, the HiFD suffered a major setback in the summer of 1999 when read/write head misalignment problems resulted in major retailers withdrawing the device from the market.
The 200-MB to 300-MB range is best understood as super-floppy territory. This is about double the
capacity of the would-be floppy replacements with performance more akin to a hard disk than to a
floppy disk drive. Drives in this group use either magnetic storage or magneto-optical technology.
The magnetic media drives offer better performance. But even MO drive performance, at least for the
SCSI versions, is good enough to allow video clips to be played directly from the drives.
In the summer of 1999 Iomega altered the landscape of the super-floppy market with the launch
of the 250-MB version of its Zip drive. Like its predecessor, this is available in SCSI and parallel
port versions. The parallel port version offers sustained read performance around twice the speed of
the 100-MB device and sustained write speed about 50% faster. The actual usable capacity of a
formatted 250-MB disk is 237 MB. This is because, like most hard drive and removable media
manufacturers, Iomega’s capacity ratings assume that 1 MB equals 1,000,000 bytes rather than the
strictly correct 1,048,576 bytes. The Zip 250 media is backward compatible with the 100-MB disks.
The only downside is the poor performance of the drive when writing to the older disks.
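Most of the 13-MB discrepancy is the decimal-versus-binary megabyte gap, with formatting overhead accounting for the small remainder:

```python
def decimal_mb_to_binary_mb(mb: float) -> float:
    # drive makers count 1 MB as 1,000,000 bytes; software as 1,048,576
    return mb * 1_000_000 / 1_048_576

# a "250-MB" Zip disk is 250,000,000 bytes, or about 238.4 binary MB
# before filesystem overhead, close to the 237 MB reported
```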
By the new millennium the SuperDisk format had already pushed super-floppy territory by
doubling its capacity to 240 MB. In 2001 Matsushita gave the format a further boost with the
announcement of its FD32MB technology which gives high-density 1.44-MB floppy users the option
of reformatting the media to provide a storage capacity of 32 MB per disk.
The technology increases the density of each track on the HD floppy by using the SuperDisk
magnetic head for reading and the conventional magnetic head for writing. FD32MB takes a conventional floppy with 80 concentric tracks, increases that number to 777, and reduces the
track pitch from the floppy’s normal 187.5 microns to as little as 18.8 microns.
Hard Disk Complement
The next step up in capacity, 500 MB to 1 GB, is enough to back up a reasonably large disk partition. Most such devices also offer good enough performance to function as secondary, if slow, hard
disks. Magnetic and MO technology again predominates, but in this category they come up against
competition from a number of Phase-change devices.
Above 1 GB the most commonly used removable drive technology is derived from conventional
hard disks. They not only allow high capacities, but also provide fast performance, pretty close to
that of conventional fixed hard disks. These drives behave just like small, fast hard disks, with recent
hard disk complements exceeding 100 GB of storage.
Magnetic storage and MO are again the dominant technologies. Generally, the former offers
better performance and the latter larger storage capacity. However, MO disks are two-sided and only
half the capacity is available on-line at any given time.
Right from its launch in mid-1996 the Iomega Jaz drive was perceived to be a groundbreaking
product. The idea of having 1-GB removable hard disks with excellent performance was one that
many high-power PC users had been dreaming of. When the Jaz appeared on the market there was
little or no competition to what it could do. It allowed users to construct audio and video presentations and transport them between machines. In addition, such presentations could be executed
directly from the Jaz media with no need to transfer the data to a fixed disk. In a Jaz, the twin
platters sit in a cartridge protected by a dust-proof shutter which springs open on insertion to provide
access to well-tried Winchester read/write heads. The drive is affordably priced, offers a fast 12-ms
seek time coupled with a data transfer rate of 5.4 MB/s, and comes with a choice of IDE or SCSI-2
interfaces. It is a good choice for audiovisual work, capable of holding an entire MPEG movie.
SyQuest’s much-delayed riposte to the Jaz, the 1.5-GB SyJet, finally came to market in the
summer of 1997. It was faster than the Jaz, offering a data transfer rate of 6.9 MB/s. It came in
parallel-port and SCSI external versions as well as an IDE internal version. Despite its larger
capacity and improved performance, the SyJet failed to achieve the same level of success as the Jaz.
When first launched in the spring of 1998, the aggressively-priced 1-GB SparQ appeared to have
more prospect of turning the tables on Iomega. Available in both external parallel port or internal
IDE models, the SparQ achieved significantly improved performance by using high-density single
platter disks and a spindle speed of 5,400 rpm. It turned out to be too little too late, though. On 2
November 1998, SyQuest was forced to suspend trading and subsequently file for bankruptcy under
Chapter 11 of U.S. law. A few months later, in early 1999, the company was bought by archrival Iomega.
However, it was not long before Iomega's seemingly unassailable position in the removable
storage arena came under threat from another quarter. Castlewood Systems had been founded in 1996
by one of the founders of SyQuest, and in 1999 its removable media hard drives began to make the
sort of waves that had been made by the ground-breaking Iomega Zip drive some years previously.
The Castlewood ORB is the first universal storage system built using cutting-edge magnetoresistive (MR) head technology. This makes it very different from other removable media drives
that use 20-year-old hard drive technology based on thin-film inductive heads. MR hard drive
technology—first developed by IBM—permits a much larger concentration of data on the storage
medium and is expected to allow areal densities to grow at a compound annual rate of 60% in the next
decade. The Castlewood ORB drive uses 3.5-inch removable media that is virtually identical to that used
in a fixed hard drive. With a capacity of 2.2 GB and a claimed maximum sustained data transfer rate
of 12.2 MB/s, it represents a significant step forward in removable drive performance, making it
capable of recording streaming video and audio. One of the benefits conferred by MR technology is
a reduced component count. This supports two of the ORB’s other advantages over its competition—
cooler operation and an estimated MTBF rating 50% better than other removable cartridge products.
Both will be important factors as ORB drives are adapted to mobile computers and a broad range of
consumer products such as digital VCRs (which use batteries or have limited space for cooling fans).
In addition, in mid-1999 the ORB drive and media were available at prices a factor of two and
three lower, respectively, than those of competing products.
The Blue Laser
The blue laser represents the best chance that optical storage technology has for achieving a significant increase in capacity over the next ten years. The laser is critical to development because the
wavelength of the drive’s laser light limits the size of the pit that can be read from the disc. If a
future-generation disc with pit sizes of 0.1 micrometer were placed in today’s CD-ROM drives, the
beam from its laser would seem more like a floodlight covering several tracks instead of a spotlight
focusing on a single dot.
The race is now on to perfect the blue laser in a form that can be used inside an optical disc
player. Blue light has a shorter wavelength, so the narrower beam can read smaller dots. But blue lasers
are proving a difficult nut to crack. They already exist for big systems and it is likely that they will
be used extensively in the future for making the masters for DVD discs. However, this will require
special laser-beam recorders the size of a wardrobe. They cost thousands and need a super-clean,
vibration-free environment in which to work properly.
The challenge now is to build an affordable blue laser that fits into a PC optical disc drive. Solid-state
blue lasers work in the laboratory, but not for long. The amount of power the laser needs to produce
is a lot for a device hardly bigger than a match-head. Getting the laser to fire out of one end while
not simultaneously punching a hole in its other end has been creating headaches for developers.
Techniques moved forward during 1997 and DVD discs and drives have become a big success.
The current target is to use blue lasers for discs that contain 15 GB per layer. Using current DVD
disc design could automatically produce a double-sided, 30-GB disc, or a dual-layer disc that could
store 25 GB on a side.
From a PC data file point of view, this is an unnecessarily large capacity since most users are
still producing documents that fit onto a single floppy. The main opportunity is the ability to store
multimedia, video, and audio. Even though the desktop PC-based mass storage market is big, moving
optical disc into the video recorder/player market would create a massive industry for the 5-inch
silver disc.
Graphics Cards
Video or graphics circuitry is usually fitted to a card or built on the motherboard and is responsible
for creating the picture displayed on a monitor. On early text-based PCs this was a fairly mundane
task. However, the advent of graphical operating systems dramatically increased the amount of
information to be displayed. It became impractical for the main processor to handle all this information. The solution was to off-load screen activity processing to a more intelligent generation of
graphics cards.
As multimedia and 3D graphics use has increased, the role of the graphics card has become ever
more important. It has evolved into a highly efficient processing engine that can be viewed as a
highly specialized co-processor. By the late 1990s the rate of development in graphics chips had
reached levels unsurpassed in any other area of PC technology. Major manufacturers such as 3dfx,
ATI, Matrox, nVidia, and S3 were working to a barely believable six-month product life cycle! One
of the consequences of this has been the consolidation of major chip vendors and graphics card
manufacturers.
Chip maker 3dfx started the trend in 1998 with its acquisition of board manufacturer STB
Systems. This gave 3dfx a more direct route to market with retail product and the ability to manufacture and distribute boards bearing its own brand. Rival S3 followed suit in the summer of 1999 by
buying Diamond Multimedia, thereby acquiring its graphics and sound card, modem, and MP3
technologies. Weeks later 16-year veteran Number Nine announced the abandonment of its chip
development business in favor of board manufacturing.
All this maneuvering left nVidia as the last of the major graphics chip vendors without its own
manufacturing facility. Many speculated that it would merge with close partner, Creative Labs. While
there had been no developments on this front by mid-2000, nVidia’s position had been significantly
strengthened by S3’s sale of its graphics business to VIA Technologies in April of that year. S3
portrayed the move as an important step in transforming the company from a graphics-focused
semiconductor supplier to a more broadly based Internet appliance company. It left nVidia as sole
remaining big player in the graphics chip business. In any event, it was not long before S3’s move
would be seen as a recognition of the inevitable.
In an earnings announcement at the end of 2000, 3dfx announced the transfer of all patents,
patents pending, the Voodoo brand name, and major assets to bitter rivals nVidia. In effect, they
recommended the dissolution of the company. In hindsight, it could be argued that 3dfx’s acquisition
of STB in 1998 had simply hastened the company’s demise. It was at this point that many of its
board manufacturer partners switched their allegiance to nVidia. At the same time, nVidia sought to
bring some stability to the graphics arena by making a commitment about future product cycles.
They promised to release a new chip every autumn, and a tweaked and optimized version of that chip
each following spring. To date, they have delivered on their promise, and deservedly retained their
position of dominance.
Resolution
Resolution is a term often used interchangeably with addressability, but it more properly refers to the
sharpness, or detail, of a visual image. It is primarily a function of the monitor and is determined by
the beam size and dot pitch (sometimes called “line pitch”). An image is created when a beam of
electrons strikes phosphors which coat the base of the monitor’s screen. A pixel is a group of one red,
one green, and one blue phosphor. A pixel represents the smallest piece of the screen that can be
controlled individually, and each pixel can be set to a different color and intensity. A complete screen
image is composed of thousands of pixels, and the screen’s resolution (specified by a row by column
figure) is the maximum number of displayable pixels. The higher the resolution, the more pixels that
can be displayed, and the more information the screen can display at any given time.
Resolutions generally fall into predefined sets. Table 8.2 below shows the video standards used
since CGA which was the first to support color/graphics capability:
Table 8.2: Summary of Video Standards
Standard                               Resolution     No. of Colors
CGA (Color Graphics Adapter)           640 x 200      2
                                       160 x 200      16
EGA (Enhanced Graphics Adapter)        640 x 350      16 from 64
VGA (Video Graphics Array)             640 x 480      16 from 262,144
                                       320 x 200      256
SVGA (Super VGA)                       800 x 600      16.7 million
XGA (Extended Graphics Array)          1024 x 768     —
SXGA (Super Extended Graphics Array)   1280 x 1024    —
UXGA (Ultra XGA)                       1600 x 1200    —
Pixel addressing that is better than the VGA standard lacks a widely accepted standard. This
presents a problem for everyone—manufacturers, system builders, programmers, and end users. As a
result, each vendor must provide specific drivers for each supported operating system for each of
their cards. The XGA, configurable with 512 KB or 1 MB of video memory, was the first IBM display adapter to use
VRAM. SXGA and UXGA are subsequent IBM standards, but neither has been widely adopted.
Typically, an SVGA display can support a palette of up to 16.7 million colors. But the amount of
video memory in a particular computer may limit the actual number of displayed colors to something
less than that. Image-resolution specifications vary. In general, the larger the diagonal screen
measure of an SVGA monitor, the more pixels it can display horizontally and vertically. Small
SVGA monitors (14 in. diagonal) usually use a resolution of 800×600. The largest (20 in. or greater,
diagonal) can display 1280×1024, or even 1600×1200 pixels.
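The trade-off between resolution, color depth, and video memory described above can be sketched with a quick calculation. This is a hypothetical illustration; the 2-MB card size is an assumed example, not a figure from the text:

```python
# Minimum video memory needed for one screen image of a given mode:
# bytes = horizontal pixels * vertical pixels * bytes per pixel.

def vram_needed(width, height, bits_per_pixel):
    """Return the frame buffer size in bytes for one screen image."""
    return width * height * bits_per_pixel // 8

# Which color depths fit on a card with 2 MB of video memory at 1024x768?
CARD_VRAM = 2 * 1024 * 1024
for bpp, name in [(8, "256 colors"), (16, "high color"), (24, "true color")]:
    needed = vram_needed(1024, 768, bpp)
    verdict = "fits" if needed <= CARD_VRAM else "does not fit"
    print(f"1024x768 at {name}: {needed} bytes, {verdict} in 2 MB")
```

At 1024×768, true color needs more than 2 MB of frame buffer, which is exactly why a card's memory can limit the number of displayable colors.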
Pixels are smaller at higher resolutions. Prior to Windows 95—and the introduction of scalable
screen objects—Windows icons and title bars were always the same number of pixels in size, whatever the resolution. Consequently, the higher the screen resolution, the smaller these objects
appeared. The result was that higher resolutions worked much better on larger monitors where the
pixels are correspondingly larger. These days the ability to scale Windows objects, coupled with the
option to use smaller or larger fonts, yields far greater flexibility. This makes it possible to use many
15-in. monitors at screen resolutions up to 1024×768 pixels, and 17-in. monitors at resolutions up to
1280×1024.
Color depth
Each pixel of a screen image is displayed using a combination of three different color signals—red,
green, and blue. The precise appearance of each pixel is controlled by the intensity of these three
beams of light. The amount of information that is stored about a pixel determines its color depth. The
more bits that are used per pixel (“bit depth”), the finer the color detail of the image.
For a display to fool the eye into seeing full color, 256 shades of red, green, and blue are
required. That is eight bits for each of the three primary colors, or 24 bits in total. However, some
graphics cards actually require 32 bits for each pixel to display true color due to the way they use the
video memory. Here, the extra eight bits are generally used for an alpha channel (transparencies).
High color uses two bytes of information to store the intensity values for the three colors—five
bits for blue, five for red, and six for green. This results in 32 different intensities for blue and red,
and 64 for green. Though this results in a very slight loss of visible image quality, it offers the
advantages of lower video memory requirements and faster performance.
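The 16-bit high-color layout described above (five bits each for red and blue, six for green) can be illustrated with a few bit operations. Placing red in the most significant bits is a common convention assumed here, not something the text specifies:

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit color channels into a 16-bit high-color value.

    Red and blue keep their top 5 bits, green its top 6; red is placed
    in the high bits by convention.
    """
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(value):
    """Expand a 16-bit value back to approximate 8-bit channels."""
    r = (value >> 11) & 0x1F
    g = (value >> 5) & 0x3F
    b = value & 0x1F
    # Scale back to the 0-255 range; the discarded low bits are lost,
    # which is the "very slight loss of visible image quality".
    return (r << 3, g << 2, b << 3)

white = pack_rgb565(255, 255, 255)   # all 16 bits set
```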
256-color mode uses a level of indirection by introducing the concept of a “palette” of colors,
selectable from a range of 16.7 million colors. Each color in the 256-color palette is described by the
standard 3-byte color definition used in true color. Hence, red, blue, and green each get 256 possible
intensities. Any given image can then use any color from its associated palette.
The palette approach is an excellent compromise solution. For example, it enables far greater
precision in an image than would be possible by using the eight available bits to assign each pixel a
2-bit value for blue and 3-bit values each for green and red. The 256-color mode is a widely used
standard, especially for business, because of its relatively low demands on video memory.
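The palette indirection works roughly like this. A minimal sketch; the palette entries and pixel values are made-up examples:

```python
# A palette maps each 8-bit pixel value to a full 3-byte (24-bit) color,
# so an image can use any 256 colors out of 16.7 million.

palette = [(0, 0, 0)] * 256          # 256 entries of (red, green, blue)
palette[1] = (255, 0, 0)             # entry 1: pure red
palette[2] = (0, 0, 255)             # entry 2: pure blue

# Pixels store only the 1-byte palette index, not the color itself.
pixels = [1, 2, 2, 1, 0]

# The display hardware resolves each index through the palette.
colors = [palette[p] for p in pixels]
```

Because each pixel costs one byte instead of three, this is what keeps 256-color mode's video memory demands low.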
Dithering substitutes combinations of colors that a graphics card can generate for colors that it
cannot produce. For example, if a graphics subsystem is capable of handling 256 colors, and an
image that uses 65,000 colors is displayed, colors that are not available are replaced by colors
created from combinations of colors that are available. The color quality of a dithered image is
inferior to a non-dithered image.
Dithering also refers to using two colors to create the appearance of a third, giving a smoother
appearance to otherwise abrupt transitions. That is, it uses patterns to simulate gradations of gray or
color shades, or to achieve anti-aliasing.
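One common way to implement such patterns is ordered dithering with a small threshold matrix. The following is a sketch of one possible technique, using a 2x2 Bayer matrix; it is an illustration, not the method any particular graphics card uses:

```python
# Ordered dithering with a 2x2 Bayer matrix: simulate gray shades using
# only black (0) and white (1) pixels. Each pixel is compared against a
# position-dependent threshold taken from the matrix.

BAYER_2X2 = [[0, 2],
             [3, 1]]

def dither(gray, width, height):
    """Convert a grayscale image (0-255 values, row-major list) to 1-bit."""
    out = []
    for y in range(height):
        for x in range(width):
            # Matrix entry scaled to the 0-255 brightness range.
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) * 255 / 4
            out.append(1 if gray[y * width + x] > threshold else 0)
    return out

# A uniform mid-gray comes out as a pattern with half the pixels on,
# which the eye blends back into gray.
result = dither([128] * 4, 2, 2)
```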
Components of a PC Graphics Card
The modern PC graphics card consists of four main components: graphics processor, video memory,
a random access memory digital-to-analog converter (RAMDAC), and driver software.
Graphics Processor
Instead of the CPU sending a raw screen image to the frame buffer, it sends the graphics card a
smaller set of drawing instructions. These are interpreted by the graphics card's proprietary driver and executed by the card's on-board processor.
Operations including bitmap transfers and painting, window resizing and repositioning, line
drawing, font scaling, and polygon drawing can be handled by the card’s graphics processor. It is
designed to process these tasks in hardware at far greater speeds than the software running on the
system’s CPU could process them. The graphics processor then writes the frame data to the frame
buffer. As there is less data to transfer, there is less congestion on the system bus, and CPU workload
is greatly reduced.
Video Memory
The memory that holds the video image is also referred to as the frame buffer and is usually implemented on the graphics card itself. Early systems implemented video memory in standard DRAM.
However, this requires continual refreshing of the data to prevent it from being lost, and data cannot
be modified during this refresh process. Performance is badly degraded at the very fast clock speeds
demanded by modern graphics cards.
An advantage of implementing video memory on the graphics board itself is that it can be
customized for its specific task. And this has resulted in a proliferation of new memory technologies:
Video RAM (VRAM) – A special type of dual-ported DRAM which can be written to and
read from at the same time. Hence, it performs much better since it requires far less frequent
refreshing.
Windows RAM (WRAM) – Used by the Matrox Millennium card, it is dual-ported and can
run slightly faster than conventional VRAM.
EDO DRAM – Provides a higher bandwidth than normal DRAM, can be clocked higher
than normal DRAM, and manages read/write cycles more efficiently.
SDRAM – Similar to EDO RAM except that the memory and graphics chips run on a
common clock used to latch data. This allows SDRAM to run faster than regular EDO RAM.
SGRAM – Same as SDRAM but also supports block writes and write-per-bit, which yield
better performance on graphics chips that support these enhanced features.
DRDRAM – Direct RDRAM is a totally new, general-purpose memory architecture that
promises a 20-fold performance improvement over conventional DRAM.
Some designs integrate the graphics circuitry into the motherboard itself and use a portion of the
system’s RAM for the frame buffer. This is called unified memory architecture and is used only to
reduce costs. Since such implementations cannot take advantage of specialized video memory
technologies, they will always result in inferior graphics performance.
RAMDAC
The information in the video memory frame buffer is an image of what appears on the screen,
stored as a digital bitmap. But while the video memory contains digital information, its output
medium, the monitor, uses analogue signals. The analogue signal requires more than just an on or off
signal since it is used to determine where, when, and with what intensity the electron guns should be
fired as they scan across and down the front of the monitor. This is where the RAMDAC comes in.
The RAMDAC reads the contents of the video memory many times per second, converts it into an
analog RGB signal, and sends it over the video cable to the monitor. It does this by using a look-up
table to convert the digital signal to a voltage level for each color. There is one digital-to-analog
converter (DAC) for each of the three primary colors used by the CRT to create a complete spectrum
of colors. The intended result is the right mix needed to create the color of a single pixel. The rate at
which the RAMDAC can convert the information, and the design of the graphics processor itself,
dictates the range of refresh rates that the graphics card can support. The RAMDAC also dictates the
number of colors available in a given resolution, depending on its internal architecture.
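The link between the RAMDAC's conversion rate and the refresh rates a card can support can be estimated with a back-of-the-envelope calculation. The 1.32 blanking-overhead factor is a common rule of thumb assumed here, not a figure from the text:

```python
def required_ramdac_mhz(width, height, refresh_hz, blanking_overhead=1.32):
    """Estimate the pixel clock (MHz) a RAMDAC must sustain.

    The overhead factor accounts for horizontal and vertical blanking,
    when the electron beam retraces and no pixels are drawn; ~1.32 is a
    typical rule of thumb.
    """
    return width * height * refresh_hz * blanking_overhead / 1e6

# 1024x768 at an 85-Hz refresh needs a RAMDAC of roughly 88 MHz.
clock = required_ramdac_mhz(1024, 768, 85)
```

Turned around, a card with a 135-MHz RAMDAC cannot offer an 85-Hz refresh at 1600×1200, which is why RAMDAC speed caps the refresh-rate menu at high resolutions.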
Driver Software
Modern graphics card driver software is vitally important when it comes to performance and features. For most applications, the drivers translate what the application wants to display on the screen
into instructions that the graphics processor can use. The way the drivers translate these instructions
is very important. Modern graphics processors do more than change single pixels at a time; they have
sophisticated line and shape drawing capabilities and they can move large blocks of information
around. It is the driver’s job to decide on the most efficient way to use these graphics processor
features, depending on what the application requires to be displayed.
In most cases a separate driver is used for each resolution or color depth. Taking into account the
different overheads associated with different resolutions and colors, a graphics card can have markedly different performance at different resolutions, depending on how well a particular driver has
been written and optimized.
Sound Cards
Sound is a relatively new capability for PCs because no one really considered it when the PC was
first designed. The original IBM-compatible PC was designed as a business tool, not as a multimedia
machine. Hence, it is hardly surprising that nobody thought of including a dedicated sound chip in its
architecture. Computers were seen as calculating machines; the only kind of sound necessary was the
beep that served as a warning signal. For years, the Macintosh has had built-in sound capabilities far
beyond beeps and clicks, but PCs with integrated sound are still few and far between. That’s why
PCs continue to require an add-in board or sound card to produce decent quality sound.
The popularity of multimedia applications over the past few years has accelerated the development of the sound card. The increased competition between manufacturers has led to these devices
becoming cheaper and more sophisticated. Today’s cards not only make games and multimedia
applications sound great, but with the right software, users can also compose, edit, and print their
own music. And they can learn to play the piano, record and edit digital audio, and play audio CDs.
The modern PC sound card contains several hardware systems relating to the production and capture
of audio. The two main audio subsystems handle digital audio capture/replay and music synthesis. The music synthesis subsystem produces sound waves in one of two ways—through
an internal FM synthesizer, or by playing a digitized, or sampled, sound.
The digital audio section of a sound card consists of a matched pair of 16-bit digital-to-analog
(DAC) and analog-to-digital (ADC) converters and a programmable sample rate generator. The
computer reads the sample data to or from the converters. The sample rate generator clocks the
converters and is controlled by the PC. While it can be any frequency above 5 kHz, it is usually a
fraction of 44.1 kHz.
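Sampling at a rate derived from 44.1 kHz can be illustrated by digitizing a sine tone into the 16-bit range the converters use. A simplified sketch; the 441-Hz test tone is an arbitrary choice:

```python
import math

# Digitizing audio: sample a waveform at a rate derived from 44.1 kHz
# and quantize each sample to the signed 16-bit range of the converter.

SAMPLE_RATE = 44_100 // 2        # 22.05 kHz, a fraction of 44.1 kHz
AMPLITUDE = 32_767               # maximum positive 16-bit sample value

def sample_sine(freq_hz, n_samples):
    """Return n_samples of a sine tone as signed 16-bit integers."""
    return [int(AMPLITUDE * math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE))
            for n in range(n_samples)]

tone = sample_sine(441, 100)     # 441 Hz: exactly 50 samples per cycle
```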
Most cards use one or more Direct Memory Access (DMA) channels to read and write the digital
audio data to and from the audio hardware. DMA-based cards that implement simultaneous recording and playback (or full duplex operation) use two channels. This increases the complexity of
installation and the potential for DMA clashes with other hardware. Some cards also provide a direct
digital output using an optical or coaxial S/PDIF connection.
A card’s sound generator is based on a custom DSP (digital signal processor) that replays the
required musical notes. It multiplexes reads from different areas of the wave-table memory at
differing speeds to give the required pitches. The maximum number of notes available is related to
the processing power available in the DSP and is referred to as the card’s “polyphony.”
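Reading the same wavetable at different speeds to produce different pitches can be sketched as follows. This is a simplified model of what the DSP does; the 64-sample table and nearest-sample lookup (real hardware interpolates) are assumptions for illustration:

```python
import math

# Wavetable synthesis: the DSP steps through a stored waveform with a
# fractional phase increment. A step of 2.0 reads every other sample,
# halving the period and raising the pitch by an octave.

WAVETABLE = [math.sin(2 * math.pi * i / 64) for i in range(64)]  # one cycle

def play(step, n_samples):
    """Read the wavetable with a fractional phase increment."""
    phase, out = 0.0, []
    for _ in range(n_samples):
        out.append(WAVETABLE[int(phase) % len(WAVETABLE)])
        phase += step
    return out

base = play(1.0, 128)       # original pitch: 64-sample period
octave_up = play(2.0, 128)  # double the step -> half the period
```

A polyphonic card simply runs many such read pointers over (possibly different) tables at once, which is why polyphony is bounded by the DSP's processing power.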
DSPs use complex algorithms to create effects such as reverb, chorus, and delay. Reverb gives
the impression that the instruments are being played in large concert halls. Chorus gives the impression that many instruments are playing at once, when in fact there's only one. Adding a stereo delay
to a guitar part, for example, can “thicken” the texture and give it a spacious stereo presence.
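A delay effect of the kind described can be modeled in a few lines. This is a simplified sketch; real DSP implementations add feedback and independent left/right delay times, omitted here:

```python
# Delay effect: mix each input sample with a copy of the signal from
# delay_samples earlier, producing an echo.

def apply_delay(samples, delay_samples, mix=0.5):
    """Return samples with a single delayed copy mixed in."""
    out = []
    for i, s in enumerate(samples):
        delayed = samples[i - delay_samples] if i >= delay_samples else 0.0
        out.append(s + mix * delayed)
    return out

dry = [1.0, 0.0, 0.0, 0.0]
wet = apply_delay(dry, 2)   # the impulse echoes two samples later
```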
Portable/Mobile Computing
No area of personal computing has changed more rapidly than portable technology. With software
programs getting bigger all the time and portable PCs being used for a greater variety of applications, manufacturers have their work cut out for them. They must attempt to match the level of
functionality of a PC in a package that can be used on the road. This has led to a number of rapid
advancements in both size and power. By mid-1998 the various mobile computing technologies had
reached a level where it was possible to buy a portable computer that was as fast as a desktop
machine, yet capable of being used in the absence of a mains electricity supply for over five hours.
In mid-1995 Intel's processor of choice for notebook PCs was the 75-MHz Pentium. This was available in a special thin-film package—the Tape Carrier Package (TCP)—designed to ease heat
dissipation in the close confines of a notebook. It also incorporated Voltage Reduction Technology
which allowed the processor to “talk” to industry standard 3.3-volt components, while its inner core,
operating at 2.5 volts, consumed less power to promote a longer battery life. In combination, these
features allowed system manufacturers to offer high-performance, feature-rich notebook computers
with extended battery lives.
The processors used in notebook PCs have continued to gain performance features, and use less
power. They have traditionally been a generation behind desktop PCs. The new CPU’s low power
consumption, coupled with multimedia extension technology, provide mini-notebook PC users with
the performance and functionality to effectively run business and communication applications “on
the road.”
Expansion Devices
Many notebook PCs are proprietary designs, sharing few common, standard parts. A consequence of
this is that their expansion potential is often limited and the cost of upgrading them is high. While
most use standard CPUs and RAM, these components generally fit in unique motherboard designs,
housed in unique casing designs. Size precludes the incorporation of items like standard PCI slots or
drive bays, or even relatively small features like SIMM slots. Generally, the only cheap way to
upgrade a notebook is via its native PC Card slots.
A port replicator is not so much an expansion device; it’s more a means to facilitate easier
connectivity between the notebook PC and external peripherals and devices. The main reason for the
existence of port replicators is the fragility of PC connectors, which are designed for only so many
insertions. The replicator remains permanently connected to the desktop peripherals, so the repeated
connections needed to keep mobile computing devices (notebook, PDA) synchronized with the
desktop PC are made via the replicator's more robust connectors.
A desk-based docking station takes the concept a stage further, adding desktop PC-like expansion
opportunities to mobile computing devices. A full docking station will generally feature expansion
slots and drive bays to which ordinary expansion cards and external peripheral devices may be fitted. It
may also come complete with an integrated monitor stand and provide additional higher-performance
interfaces such as SCSI.
PC Card
In the early 90’s the rapid growth of mobile computing drove the development of smaller, lighter,
more portable tools for information processing. PC Card technology was one of the most exciting
innovations. The power and versatility of PC Cards quickly made them standard equipment in mobile computing.
The rapid development and worldwide adoption of PC Card technology has been due in large part to
the standards efforts of the Personal Computer Memory Card International Association (PCMCIA).
First released in 1990, the PC Card Standard defines a 68-pin interface between the peripheral
card and the socket into which it is inserted. The standard defines three standard PC Card form
factors—Type I, Type II, and Type III. The only difference between the card types is thickness. Type
I is 3.3 mm, Type II is 5.0 mm, and Type III is 10.5 mm. Because they differ only in thickness, a
thinner card can be used in a thicker slot, but a thicker card can not be used in a thinner slot. The
card types each have features that fit the needs of different applications. Type I PC Cards are typically used for memory devices such as RAM, Flash, and SRAM cards. Type II PC Cards are typically
used for I/O devices such as data/fax modems, LANs, and mass storage devices. Type III PC Cards
are used for devices whose components are thicker, such as rotating mass storage devices. Extended
cards allow the addition of components that must remain outside the system for proper operation,
such as antennas for wireless applications.
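The thickness rule for card-and-slot compatibility follows directly from the figures above. A small illustrative sketch; the function name is hypothetical:

```python
# PC Card form factors differ only in thickness, so a thinner card fits
# a thicker slot, but a thicker card cannot fit a thinner slot.

CARD_THICKNESS_MM = {"Type I": 3.3, "Type II": 5.0, "Type III": 10.5}

def card_fits(card_type, slot_type):
    """A card fits any slot at least as thick as the card itself."""
    return CARD_THICKNESS_MM[card_type] <= CARD_THICKNESS_MM[slot_type]
```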
In addition to electrical and physical specifications, the PC Card Standard defines a software
architecture to provide “plug and play” capability across the widest product range. This software is
made up of Socket Services and Card Services which allow for interoperability of PC Cards.
Socket Services is a BIOS-level software layer that isolates PC Card software from system
hardware and detects the insertion and removal of PC Cards. Card Services describes an Application
Programming Interface (API) which allows PC Cards and sockets to be shared by multiple clients
such as device drivers, configuration utilities, or application programs. It manages the automatic
allocation of system resources such as memory and interrupts once Socket Services detects that a
card has been inserted.
CardBus
Derived from the Peripheral Component Interconnect (PCI) Local Bus signaling protocol, CardBus is
the 32-bit version of PC Card technology. Enabled in the February 1995 release of the PC Card
Standard, CardBus features and capabilities include 32 bits of address and data, 33-MHz operation,
and bus master operation.
Battery Technology
Historically, the technology of the batteries that power notebook PCs has developed at a somewhat
slower rate than other aspects of mobile computing technology. Furthermore, as batteries get better
the power advantage is generally negated by increased consumption from higher performance PCs,
with the result that average battery life remains fairly constant.
Nickel-Cadmium (NiCad) was a common chemical formulation up to 1996. As well as being
environmentally unfriendly, NiCad batteries were heavy, low in capacity, and prone to “memory
effect.” The latter was a consequence of recharging a battery before it was fully discharged. The
memory effect caused batteries to “forget” their true charge. Instead, they would provide a much
smaller charge, often equivalent to the average usage time prior to its recharge. Fortunately, NiCad
batteries are almost impossible to find in notebook PCs these days.
The replacement for Nickel-Cadmium was Nickel-Metal Hydride (NiMH), a more environmentally friendly formulation. NiMH has a higher energy density and is less prone to the memory effect.
By 1998 the most advanced commercially available formulation was Lithium Ion (Li-Ion). Lithium
Ion has a longer battery life (typically around three hours) than NiMH and does not have to be fully
discharged before recharging. It is also lighter to carry. Although the price differential is narrowing,
Li-Ion continues to carry a price premium over NiMH.
The main ingredient of a Li-Ion battery is lithium. It does not appear as pure lithium metal, but as:
Lithium atoms within the graphite of the battery's anode (negative terminal)
Lithium-cobalt oxide or lithium-manganese oxide in the cathode (positive terminal)
Lithium salt in the battery's electrolyte
It relies on reversible action. When being charged, some of the lithium ions usually stored in the
cathode are released. They make their way to the graphite (carbon) anode where they combine into
its crystalline structure. When connected to a load such as a notebook PC, the electrochemical
imbalance within the battery created by the charging process is reversed, delivering a current to the
load. Li-Ion is used in all sorts of high-end electronic equipment such as mobile phones and digital
cameras. One downside is that it self-discharges over a few months if left on the shelf, so it requires
care to keep it in optimum condition.
1999 saw lithium polymer (Li-polymer) emerge as the most likely battery technology of the
future. Using lithium—the lightest metal on earth—this technology offers potentially greater energy
densities than Li-Ion. Instead of using a liquid electrolyte, as in conventional battery technologies,
Li-polymer uses a solid or gel material impregnated with the electrolyte. Hence, batteries can be made
in almost any shape, allowing them to be placed in parts of a notebook case that would normally be
filled with air. Cells are constructed from a flexible, multi-layered, 100-micron-thick film laminate
that does not require a hard leak-proof case. The laminate comprises five layers—a current-collecting
metal foil, a cathode, an electrolyte, a lithium foil anode, and an insulator.
Early lithium polymer battery prototypes proved unable to produce currents high enough to be
practical. Bellcore technology (named after the laboratory that invented it) is a variation on the
lithium polymer theme. It addresses the problem by using a mixture of liquid and polymer electrolytes that produces higher levels of current. A number of manufacturers have licensed this technology.
Initially, the principal market for these devices will be PDAs and mobile phones, where space and
weight are at a premium and value is high. This will be followed by increasing use in notebook PCs.
Zinc-Air technology is another emerging technology and a possible competitor to Li-polymer. As the name suggests, Zinc-Air batteries use oxygen in the chemical reaction to produce electricity, thus removing the need to include metal reaction elements. Energy density by weight improves, but volume increases due to the need for air-breathing chambers in the battery. This reduces its appeal for small form factor devices and may relegate it to being a niche technology.
Chapter 8
Requirements Overdose
The PC is facing lackluster revenue growth. In several geographies, revenues and earnings are
declining. Thanks to Moore’s Law, as process technologies continue to shrink and component
manufacturers derive benefits in performance, density, power, and cost, they can pass these on to
consumers. Until just a few years ago, applications running on PCs demanded processing power and
memory densities that the components could never satisfy. This caused consumers to upgrade their
PCs every few years. However, today’s PCs seem adequate in performance and functionality with
current applications. Combined with a large population base (in the U.S.) that already owns at least
one PC, this is causing a slowdown in overall PC demand. Hence, revenues and earnings are falling
for several PC vendors and their component suppliers. These companies are trying to diversify
revenues through new streams such as set-top boxes, gaming consoles, and networking equipment.
New PC Demand Drivers
To enhance demand for PCs, new demand drivers will have to be created. Some of these include
support for new interfaces and the introduction of new and entertaining products and software.
Incremental Changes to the Operating System May Not Be Enough
While Microsoft Corporation has released its next-generation operating system, Windows XP, with
great fanfare, the move will not be enough to pull the PC industry out of its downturn. Beyond XP, a
road map of stepwise refinements planned for desktops over the next couple of years may help the
industry struggle back to growth. But no major architectural leap can be predicted until 2004, and the
massive PC industry will lumber forward for a time without a strong driver.
Windows XP shrinks PC boot time to about 35 seconds while boosting stability and ease-of-use,
especially for wireless networking. Meatier hardware advances will beef up next year’s computers
with USB 2.0, Gigabit Ethernet, Bluetooth, and a greater variety of core logic chip sets. Gigabit
Ethernet’s arrival to the desktop motherboard will rapidly bridge the cost gap between 10/100 and
10/100/1000 Ethernet silicon. In 2004, there is a good possibility of new product concepts when the
Arapahoe replacement to the PCI bus arrives.
153 million PCs shipped worldwide in 2001, down 6% from 2000's record high. However, a 13% rebound in unit shipments in 2002 was based on the strength of cost reductions in Intel Pentium 4-class systems, Advanced Micro Devices' move to 0.13-micron and silicon-on-insulator technology,
and a fresh set of graphics and chip set offerings. In addition, Windows XP will have a new version
that will include native support for the 480-Mb/s USB 2.0 interface as well as the short-range
wireless Bluetooth link.
Some companies will craft their own software to roll out interfaces (e.g., Bluetooth) earlier, but
such moves come with a potential cost in development and compatibility. Working on their own
drivers for Bluetooth will cause compatibility problems. This will also raise software development
costs which are already significantly higher than hardware development costs for some high-volume
PC subsystems. Further, Bluetooth user profiles have yet to be written, and that could delay broad
deployment of Bluetooth for as much as two years.
XP is less prone to crashes because it is based on the more stable Windows NT code base. That
will lower OEM support costs dramatically. And XP’s improved support for networking, particularly
for IEEE 802.11b wireless LANs, is one of its greatest strengths. Dell Computer, IBM, and other PC
manufacturers have been shipping notebooks with optional 802.11b internal MiniPCI cards. XP adds
the ability to recognize access points automatically for users on the move among office, home, and
public WLANs without requiring manual reconfiguration of their cards. This is much better in XP
than in Windows 2000. It will build mass appeal and prompt people to start using wireless LANs
more often.
This shift could open the door to new devices and services that complement the PC. These would
include new systems like tablet PCs and improved versions of MP3 players and Internet jukeboxes. The next step is introducing pieces of Microsoft's Universal Plug and Play (UPnP) initiative for automatically discovering systems on a network. Microsoft's .Net initiative, and the parallel Web-services initiative announced by Sun Microsystems (Sun ONE™), aim to enable Web-based services that mix and match component-based applications to anticipate and automate user wants.
The bottom line is that PCs are moving beyond hardware to software. The real challenges are no
longer in the processor, memory, and graphics. As the hardware becomes more commoditized, it
requires less effort. The effort lies in helping users manage how they deploy, transition, and protect
their data.
More radical hardware changes will come in 2004 with the deployment of the Arapahoe interface, anticipated as a faster serial replacement for PCI.
Arapahoe will put video on the same I/O bus instead of on a dedicated bus, which is a big
advantage. Users will also be able to use cable versions, enabling split designs in which one part sits on the desk and another under it. The desktop could be broken
into multiple pieces, something that was not possible before because fast enough data transfer rates
over an external cable were not available. Users could also have smaller desktops because Arapahoe
has a smaller physical interface than PCI. There will be a significant architectural shift with the rise
of fast interfaces like Arapahoe and Gigabit Ethernet.
Some predict that the hard disk drive will be eliminated from the PC in the next three to five years, leading to an era of network-based storage. But with hard drives being so cheap, it might be better to have a local cache for data, even though users may want to mirror and manage much of it from a corporate network storage system.
Tablet PC
Microsoft has unveiled a new technology to help bring the PC back to its former leading role. Called the Tablet PC, it is a pen-driven, fully functional computer. It is built on the premise that next-generation hardware and software should take advantage of the full power of today's desktop computers. Even sleek portables need the power of a desktop PC.
Companies such as Sun and Oracle envision a more varied future, with most of the Internet’s
complexity placed on central servers. Here, everything from pagers to cell phones will be used to
access the Internet via these servers. However, Microsoft believes that intelligence and computing
power will reside at the consumer end.
The browser-based era of the Internet, where servers do the heavy computations and PCs merely display Web pages, is in decline. The browser model, which has been the focus for the last five years, is really showing its age.
While the computer world has historically been either centrally or locally based, the need now is
for a more balanced approach. Vast amounts of data will reside on the server and powerful desktops
will be needed to sort and interpret the data. However, peer-to-peer computing will not render the
server obsolete.
Server software will enable “information agents” that can filter and prioritize all the different
messages being sent to a person. The world will be filled with lots of medium-powered servers rather
than a smaller number of very powerful ones. A profusion of smart client devices (e.g., tablet PCs,
gaming consoles, PCs in automobiles, handhelds, and smart cell phones) will be able to deliver a
richer Internet experience.
The Tablet PC has a 500- to 600-MHz CPU, 128 MB of RAM, a 10-GB hard disk, and universal
serial bus (USB) ports for keyboard and mouse. It is also based on Microsoft Windows. The Tablet
PC also features Microsoft’s ClearType, a technology that makes text on LCD (liquid crystal display)
screens more readable.
While it will cost more than the equivalent portable model when it comes out, it presents an
opportunity to both grow the portable PC market and become a significant part of that market.
Among the technologies behind the Tablet PC is what Microsoft calls “rich ink.” This feature allows
a person to take notes on the screen as if it were an infinite pad of paper, albeit a pricey one. The
Tablet PC converts handwritten pen strokes into graphics that can be handled similarly to ordinary
computer type. Although the computer doesn’t recognize specific words, it distinguishes between
words and pictures. It also lets people perform some word processing-like tasks such as inserting
space between lines, copying text, or boldfacing pen strokes.
The tablet PC platform is a next-generation wireless device charged with merging the computing
power of a PC with the mobility of a digital consumer appliance. Both Intel and Transmeta microprocessors will power the devices, which may come in a wide range of shapes and sizes. They will be
manufactured initially by Acer, Compaq, Fujitsu, Sony, and Toshiba. A full-function, slate-like
computer, the Tablet, will feature the newly announced Windows XP operating system. It will take
advantage of pen-based input in the form of digital ink.
The Tablet PC will be an even more important advance for PCs than notebook computers were. It
combines the simplicity of paper with the power of the PC. It abandons the traditional keyboard and
mouse for the more ergonomic pen. Microsoft positions this device as more of a PC than as a
consumer appliance. This stance reflects the collapse of "information appliances" as a commercially viable category, with both 3Com and Gateway pulling products from roadmaps and shelves.
It is a real PC with all of the power and ability of a desktop system, and with the portability to
replace laptops. Apple Computer’s Newton and Pocket Pad were ambitious precursors to PDA-like
devices, and the tablet market has been limited to the vertical marketplace. Hence, the success of a
product like the tablet PC in the mainstream market requires an industry-wide effort. Form factors,
price points, and features for Tablet PC devices will be left up to the OEM, with the only unbending
stipulation that the device is a full-function PC.
Given the current state of the PC industry, it’s important to think of new ways to expand the
market. Many of the tablet PC's components are leveraged off the notebook platform, and improvements in power consumption and cost are on the way.
Next Generation Microsoft Office—Microsoft Office XP
The next version of the dominant PC office suite will allow people to control the program and edit
text with their voices via speech-recognition technology. Another new feature known as “smart tags”
allows Office users to dynamically include information from the Internet into documents. Smart tags
pop up after different actions, offering additional suggestions and options. For example, a formatting
options list might appear after a user pastes information into a document. A new version of Office
also will include a Web-based collaboration program known as SharePoint.
Instant Messaging—A New Form of Entertainment
Instant messaging technology is viewed as entertainment as well as a productivity-enhancement tool, and it could become a significant growth driver for Internet-enabled devices and PCs. First introduced by AOL in 1997, IM technology allows people to communicate with each other in real time via the Internet. Several companies, including Microsoft and Yahoo!, have since introduced instant messaging services.
There are three ways commonly used to establish the connection—the centralized network
method, the peer-to-peer method, and a combination of the two.
Centralized Networks – In a centralized network, users are connected through a series of connected servers. When a message is sent from a user, the server directs it through the network to the recipient's server which, in turn, forwards the message to the recipient. The message thus travels entirely within the server network. This method is currently used by MSN Messenger.
Peer-to-Peer – With this method, a unique IP address is assigned to each user on the network. When you log on to the network, you are assigned an IP address. You also receive the IP addresses of all the people on your buddy list who are on the network, establishing direct connections among them. Thus, when a message is sent, it goes directly to the recipient. Consequently, message transfer rates are much quicker than in the centralized network method. ICQ uses the peer-to-peer methodology.
Combination Method – The combination method uses either the centralized network method
or peer-to-peer method, depending on the type of message sent. Currently, AOL Instant
Messaging service (AIM) uses the combination method. When a text message is sent, AIM uses the
centralized network method. When a data intensive message such as a picture, a voice, or a
file is sent, it uses the peer-to-peer connection.
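The three connection methods above can be sketched in code. This is an illustrative model only, not any real IM protocol; the `Server` class, the inbox dictionaries, and the function names are all invented here for the sake of the example.

```python
class Server:
    """A stand-in for the centralized server network."""

    def __init__(self):
        self.inboxes = {}  # recipient -> list of (sender, message)

    def register(self, user):
        self.inboxes[user] = []

    def relay(self, sender, recipient, message):
        # The server network forwards the message to the recipient.
        self.inboxes[recipient].append((sender, message))


def send_centralized(server, sender, recipient, message):
    """MSN Messenger style: every message passes through the servers."""
    server.relay(sender, recipient, message)


def send_peer_to_peer(peer_inboxes, sender, recipient, message):
    """ICQ style: the sender knows the recipient's IP address (modeled
    here as a direct reference to the recipient's inbox)."""
    peer_inboxes[recipient].append((sender, message))


def send_combined(server, peer_inboxes, sender, recipient, message, bulky):
    """AIM style: plain text goes via the servers, while data-intensive
    payloads (pictures, voice, files) go peer-to-peer."""
    if bulky:
        send_peer_to_peer(peer_inboxes, sender, recipient, message)
    else:
        send_centralized(server, sender, recipient, message)
```

The combination method is just a dispatch on message type, which is why AIM could keep its server infrastructure small while still supporting file transfer.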
Notebook/Portable/Laptop PC
The desktop PC may be headed for the geriatric ward, but svelte and brawny notebooks are ready to
take its place as the most significant computer product. Market data paints a dismal future for PCs,
with flat rather than double-digit sales growth. But notebooks are showing surprising resilience. In
fact, after years of being stalled at about 20% of the overall PC market, notebooks are widening their
share at the expense of desktop systems.
Portable PC sales are doing so well that many computer companies are shifting their sales focus
to notebooks and away from desktop PCs to drive market growth. There is a widening gulf between
the two product categories. Worldwide, notebook shipments grew 21% year over year, compared
with paltry desktop PC growth of 1.6%. In the United States notebook shipments increased 6% over
fourth quarter 1999, while desktop PC shipments stalled at 0.1% growth. While desktops may
account for the lion’s share of PC revenue, notebooks are expected to continue closing the gap.
The notebook PC market is also less saturated than the desktop PC market. For example, only 20% of
computer users in the corporate segment rely on notebooks. Every employee who needs a PC already
has one. But of those desktop users, 80% are potential notebook buyers. One way for vendors to
generate sales is to offer notebooks while narrowing the price gap between notebooks and desktops.
In the United States and Europe desktop PCs have shifted to a replacement market from a growth
market. Companies with strong service organizations will use the growing notebook market to bridge
the gap with the more mature desktop PC segment. Instead of selling 2,000 notebooks, a vendor
could build a relationship by offering an upgrade to a new operating system. Even if customers are
not buying as frequently, the vendor could establish relationships for the next time they buy.
Falling component prices, particularly for LCDs (liquid-crystal displays), are also expected to
lead to lower notebook prices. LCD panels typically account for as much as 40% of a laptop’s cost.
Lower prices are expected to have the most impact on consumer sales. But the major factor fueling
notebook sales is increasingly better perceived value over desktop PCs. While there have been no
recent compelling changes to desktop PCs, notebooks are benefiting from advances in wireless
communications, mainly in the form of IEEE 802.11b and Bluetooth networking. Wireless networking is a clear driver of notebook sales, particularly with 802.11b terminals available in 30 airports
around the country and in major hotel chains. The next step is wireless networking appearing in other
public places like restaurants and coffee shops. The value proposition for wireless LAN in public
places is really strong. Apple Computer, Dell Computer, Compaq, and IBM are among the major manufacturers offering wireless networking antennas integrated inside their notebooks.
IBM developed a full-function portable that can also interpret data and drawings written on
notepads. It is also testing a wearable PC as well as a new mobile companion. Apple and Dell have
opted for putting the most performance in the smallest size possible, rivaling some workstations in
terms of power and features. The notebook market will significantly grow as a percentage of the total
PC market.
PocketPC Operating System (OS) Platform
Microsoft Corporation is taking sharper aim at the handheld market by introducing a new PocketPC
2002 platform. It supports fewer processors and promises to serve enterprise-level customers. This
will play a big role in its grand plan to chip away at Palm Inc.’s leadership in the handheld market.
The rollout signals a subtle shift in direction for Microsoft, which until now has tried to build a
handheld platform that appealed to all potential users, from business executives to soccer moms.
However, with the new release the software giant is clearly homing in on customers who can buy the
devices in volume as productivity tools for their companies. The introduction of PocketPC will help
boost the use of the PC as a consumer device. At the same time, Microsoft surprised industry experts
by narrowing the list of processors that the PocketPC will support. Microsoft’s strategy is a solid one,
especially as the economic downturn continues to discourage consumers from buying handheld
computers for personal applications. The corporate market for handhelds is just starting to kick in
now and it is believed that the PocketPC will eventually overtake the Palm OS.
Both PocketPC 2002 and the PocketPC operating system introduced in April 2000 are built on
Windows CE. They act as so-called “supersets” to add a variety of applications and software components to the CE foundation. The PocketPC system is an attempt to right the wrongs of Windows CE
which had been spectacularly unsuccessful in its early attempts to enter the palmtop computing
market. With the newest version of the PocketPC, software will come closer to finding its niche in
the handheld marketplace.
Like the earlier version, PocketPC 2002 is aimed at the high end of the handheld computing
market. Products that employ it are generally larger and have color screens, longer battery life,
higher price tags, and greater functionality than the electronic organizers popularized by Palm.
Pocket PCs also offer far more computing power than the lower-end palmtops. Using Intel's SA-1110
processor, for example, HP/Compaq’s iPaq handheld can operate at 206-MHz clock speeds. In
contrast, low-end Palm systems employ a 33-MHz Motorola Dragonball processor.
However, the newest version of the PocketPC operating system promises even more functionality. Microsoft added support for Windows Media Video, including streaming video and improved
electronic reading capabilities via a new Microsoft Reader. More important, the company has
targeted enterprise customers by introducing the ability to connect to corporate information via a
virtual private network. It has also made major development investments in connectivity by adding
options ranging from local-area networks such as 802.11b and personal-area networks such as
Bluetooth, to wide-area networks. With the Pocket PC 2002 Microsoft is laying the groundwork for
more powerful wireless e-mail capabilities that could make PocketPC handhelds appeal to large
corporations with hundreds or even thousands of traveling employees. Marrying the organizer
features to wireless e-mail capabilities will allow a lot of senior employees to sign up for the service.
Paring down the hardware support from three processor cores to one allows Microsoft and the
developers to streamline their efforts. Using only ARM-based processors simplifies outside application development, cuts costs, and still provides hardware makers with a choice of chip set suppliers.
The company’s engineers are already working with Intel on its SA-1110 and X-Scale processors, as
well as with Texas Instruments on the ARM 920 processor core. Additional ARM-based chip set
suppliers include Parthus, Agilent, STMicroelectronics, and LinkUp. Industry analysts expect the vast majority of Pocket PC hardware makers to opt for Intel's StrongARM processors, which have been the most successful of the ARM-based parts. Microsoft selected ARM-based processors over Hitachi- or MIPS-based processors on the grounds of performance and power efficiency: the Intel StrongARM typically operates at about 1.3 to 1.5 MIPS per mW, while much of the market ranges between 0.5 and 0.9 MIPS per mW.
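As a back-of-the-envelope check of those power-efficiency figures (the numbers come from the text above, not from any datasheet):

```python
# MIPS per milliwatt ranges quoted in the text.
strongarm = (1.3, 1.5)   # Intel StrongARM
market = (0.5, 0.9)      # typical competing parts

# Even the worst-case StrongARM beats the best-case competitor...
worst_case_advantage = strongarm[0] / market[1]   # about 1.4x
# ...and the best-vs-worst comparison is a factor of three.
best_case_advantage = strongarm[1] / market[0]    # 3.0x
```

So on these figures the StrongARM's efficiency advantage ranges from roughly 1.4x to 3x, which explains the lopsided expectations among hardware makers.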
MIPS and Hitachi have both emphasized that their processor cores will continue to be supported
in Windows CE. Even though major manufacturers such as Casio and Hewlett-Packard now use other processors, Microsoft's decision will not have a profound effect on sales of Pocket PCs. The processor choice does, however, have a near-term impact on consumers: the HP/Compaq iPaq already uses the ARM processor, so existing iPaq users will be able to upgrade to Pocket PC 2002, or later to Talisker, whereas owners of other devices will not.
Ultimately, Microsoft's strategies will help the company continue to close the longstanding gap with Palm. In the PDA market, the Microsoft Windows CE and Pocket PC platforms held 30% of the market in 2003, up from a figure in "the teens" during the first quarter of 2001. In 2003, Palm still held a 49% share, down from a steady level in excess of 70% over the preceding several years. By 2006, it is predicted that Palm- and Microsoft-based platforms will each hold a 47% market share. For the same reason, Microsoft's
enterprise focus may continue to be the best way to chip away at Palm’s dominance. No one has hit
on the feature that would be compelling for soccer moms. When they do, the consumer space will be
tremendous for handhelds. Until then, Microsoft is doing the best thing by hammering away at the
enterprise focus.
The growing ubiquity of the Internet and the convergence of voice, data, and video are bringing
interesting applications to the home. Lower PC prices have brought multiple PCs into the home.
However, newer digital consumer appliances that use the Internet to provide particular functions are
arriving in the market. The big question is whether the digital consumer device will replace the PC in
its entirety. Digital consumer appliances provide dedicated functionality, while the PC is an ideal
platform to become the residential gateway of the future. It could provide home networking capabilities to digital consumer devices, PC peripherals, and other PCs. Hence, while digital consumer
devices will continue to penetrate homes and provide the convenience of the Internet, they will not
replace PCs. Both will continue to coexist.
PC Peripherals
A peripheral device is an external consumer device that attaches to the PC to provide a specific
functionality. Peripherals include printers, scanners, smart card readers, keyboards, mice, displays,
and digital cameras. Some devices such as printers, scanners, cameras, and smart card readers serve
as more than just PC peripherals, and devices such as the keyboard, mouse, and display are required
to operate the PC. This chapter discusses these peripherals as well because of their importance and
because of developments in these products. Note that displays are covered in Chapter 10 and digital
cameras are discussed in Chapter 11.
Printers have always been a mainstream output technology for the computing industry. In earlier
times impact and thermal printers were the cornerstone printer technologies most frequently used.
Within the last decade this has dramatically shifted to ink jet and laser technologies, which are
extremely price sensitive. Also, note that most of the printer business is held by a relatively few large
players. In 1998 the worldwide laser printer market was almost 10 million units and the inkjet market
was over 45 million units.
While printers are largely thought of as PC peripherals, leading manufacturers have announced
that next-generation printer products will connect to set-top boxes and media centers to allow
consumers to print from these gateways. This allows consumers to print images and documents
stored on the hard-drive of the set-top box. Sleek and slim printer models are being introduced.
These devices will no longer be hidden in a home office but rather are a part of the entertainment center.
Due to the widespread use of small office/home office (SOHO) operations, and the corresponding need for consolidation of the desktop area, another extension exists—the multifunction
peripheral (MFP). MFPs combine FAX machine, copier, scanner, and printer all into a single unit. An
ink jet or laser printer resides at the heart of an MFP output device.
Laser Printer
In the 1980s dot matrix and laser printers were dominant. Inkjet technology did not emerge in any
significant way until the 1990s. In 1984 Hewlett Packard introduced a laser printer based on technology developed by Canon. It worked like a photocopier, but with a different light source. A photocopier page is scanned with a bright light, while a laser printer page is scanned with a laser. In a laser
printer the light creates an electrostatic image of the page on a charged photoreceptor, and the
photoreceptor attracts toner in the shape of an electrostatic charge.
Chapter 9
Laser printers quickly became popular due to their high print quality and their relatively low
running costs. As the market for lasers developed, competition among manufacturers became
increasingly fierce, especially in the production of budget models. Prices dropped as manufacturers
found new ways of cutting costs. Output quality has improved with 600-dpi resolution becoming
more standard. And form factor has become smaller, making them more suited to home use.
Laser printers have a number of advantages over rival inkjet technology. They produce better
quality black text documents than inkjets do, and they are designed more for longevity. That is, they
turn out more pages per month at a lower cost per page than inkjets do. The laser printer fits the bill
as an office workhorse. The handling of envelopes, cards, and other non-regular media is another
important factor for home and business users. Here, lasers once again have the edge over inkjets.
Considering what goes into a laser printer, it is amazing they can be produced for so little money.
In many ways, laser printer components are far more sophisticated than computer components. For example:
– The RIP (raster image processor) might use an advanced RISC processor.
– The engineering that goes into the bearings for the mirrors is very advanced.
– The choice of chemicals for the drum and toner is a complex issue.
Getting the image from a PC's screen to paper requires an interesting mix of coding, electronics, optics, mechanics, and chemistry.
A laser printer needs to have all the information about a page in its memory before it can start
printing. The type of printer being used determines how an image is transferred from PC memory to
the laser printer. The crudest arrangement is the transfer of a bitmap image. In this case, there is not
much the computer can do to improve on the quality. It can only send a dot for a dot.
However, if the system knows more about the image than it can display on the screen, there are
better ways to communicate the data. A standard letter-size sheet is 8.5 inches across and 11 inches deep. At 300 dpi, that is more than eight million dots, compared with the roughly eight hundred thousand pixels on a 1024 by 768 screen. There is obviously scope for a much sharper image on paper, even more so at 600 dpi, where a page can have more than 33 million dots.
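The dot-count arithmetic above is easy to reproduce for an 8.5 x 11 inch page:

```python
def dots_on_page(width_in, height_in, dpi):
    """Total addressable dots on a page at the given resolution."""
    return round(width_in * dpi) * round(height_in * dpi)

dots_300 = dots_on_page(8.5, 11, 300)   # 2550 x 3300 = 8,415,000 dots
dots_600 = dots_on_page(8.5, 11, 600)   # 5100 x 6600 = 33,660,000 dots
screen_pixels = 1024 * 768              # 786,432 pixels
```

Doubling the resolution quadruples the dot count, which is why 600-dpi printers need four times the bitmap memory of 300-dpi models for the same page.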
A major way to improve quality is to send a page description consisting of outline/vector
information, and to let the printer make the best possible use of this information. For example, if the
printer is told to draw a line from one point to another, it can use the basic geometric principle that a
line has length but not width. Here, it can draw that line one dot wide. The same holds true for
curves that can be as fine as the printer’s resolution allows. This principle is called device-independent printing.
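Device-independent printing can be reduced to a minimal sketch: the page description stores geometry in inches, and only at print time is it converted to device dots at whatever resolution the printer supports. The function name and coordinate convention here are invented for illustration.

```python
def to_device_dots(x_in, y_in, dpi):
    """Map a point in page inches to integer dot coordinates."""
    return (round(x_in * dpi), round(y_in * dpi))

# The same one-inch horizontal line, rasterized at two resolutions.
# At 600 dpi its endpoints land on a grid twice as fine, and the line
# itself can still be drawn just one dot wide.
start_300, end_300 = to_device_dots(1.0, 1.0, 300), to_device_dots(2.0, 1.0, 300)
start_600, end_600 = to_device_dots(1.0, 1.0, 600), to_device_dots(2.0, 1.0, 600)
```

Because the description is resolution-free, the printer, not the PC, decides how finely to render each line or curve.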
Text characters can be handled in the same way since they are made up of lines and curves. But a
better solution is to use a pre-described font shape such as TrueType or Type-1 formats. Along with
precise placement, the page description language (PDL) may take a font shape and scale it, rotate it,
or generally manipulate it as necessary. And there’s the added advantage of requiring only one file
per font as opposed to one file for each point size. Having predefined outlines for fonts allows the
computer to send a tiny amount of information—one byte per character—and produce text in any of
several different font styles and sizes.
An image to be printed is sent to a printer via page description language instructions. The printer’s
first job is to convert those instructions into a bitmap, an action performed by the printer’s internal
processor. The result is an in-memory image specifying where every dot will be placed on the paper. Models
designated “Windows printers” don’t have their own processors, so the host PC creates the bitmap,
writing it directly to the printer’s memory.
At the heart of the laser printer is a small rotating drum called the organic photo-conducting
cartridge (OPC). It has a coating that allows it to hold an electrostatic charge. A laser beam scans
across the surface of the drum, selectively imparting points of positive charge onto the drum’s
surface. They will ultimately represent the output image. The area of the drum is the same as that of
the paper onto which the image will eventually appear. Every point on the drum corresponds to a
point on the sheet of paper. In the meantime, the paper is passed through an electrically charged wire
that deposits a negative charge on it.
On true laser printers, selective charging is accomplished by turning the laser on and off as it
scans the rotating drum. A complex arrangement of spinning mirrors and lenses does this. The
principle is the same as that of a disco mirror ball. The lights bounce off the ball onto the floor, track
across the floor, and disappear as the ball revolves. In a laser printer, the mirror drum spins extremely
rapidly and is synchronized with the on/off switching of the laser. A typical laser printer performs
millions of on/off switches every second.
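That "millions of switches per second" claim can be sanity-checked with the earlier dot counts. The 8-pages-per-minute print speed below is an assumption chosen for illustration, not a figure from the text.

```python
# Dots on an 8.5 x 11 inch page at 600 dpi.
dots_per_page = round(8.5 * 600) * round(11 * 600)      # 33,660,000

# Assume a modest 8 ppm engine for illustration.
pages_per_second = 8 / 60
switches_per_second = dots_per_page * pages_per_second  # about 4.5 million
```

Even a mid-range engine therefore has to modulate the laser several million times a second, in lockstep with the spinning mirror.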
The drum rotates inside the printer to build one horizontal line at a time. Clearly, this has to be
done very accurately. The smaller the rotation, the higher the resolution down the page. The step
rotation on a modern laser printer is typically 1/600th of an inch, giving a 600-dpi vertical resolution
rating. Similarly, the faster the laser beam is switched on and off, the higher the resolution across the page.
As the drum rotates to present the next area for laser treatment, the area written on moves into
the laser toner. Toner is a very fine black powder, negatively charged so that it is attracted to the positively charged points on the drum surface. So after a full rotation, the drum's surface contains the
whole of the required black image.
A sheet of paper now comes into contact with the drum, fed in by a set of rubber rollers. As it
completes its rotation, it lifts the toner from the drum by electrostatic attraction. This is what transfers
the image to the paper. Negatively charged areas of the drum don’t attract toner, so they result in
white areas on the paper.
Toner is specially designed to melt very quickly. A fusing system now applies heat and pressure
to the imaged paper to adhere the toner permanently. Wax is the toner ingredient that makes it more
amenable to the fusion process. It is the fusing rollers that cause the paper to emerge from a laser
printer warm to the touch.
The final stage is to clean the drum of any remnants of toner, getting it ready for the cycle to
start again. There are two forms of cleaning—physical and electrical. With the first, the toner that
was not transferred to the paper is mechanically scraped off the drum and collected in a bin. With
electrical cleaning, the drum is covered with an even electrical charge so the laser can write on it
again. An electrical element called a corona wire performs this charging. Both the felt pad which
cleans the drum and the corona wire need to be changed regularly.
Chapter 9
Inkjet Printers
Although inkjets were available in the 1980s, it was only in the 1990s that prices dropped enough to
bring the technology to the high street. Canon claims to have invented what it terms “bubble jet”
technology in 1977 when a researcher accidentally touched an ink-filled syringe with a hot soldering
iron. The heat forced a drop of ink out of the needle and so began the development of a new printing technology.
Inkjet printers have made rapid technological advances in recent years. The three-color printer
has been around for several years now and has succeeded in making color inkjet printing an affordable option. But as the superior four-color model became cheaper to produce, the swappable
cartridge model was gradually phased out.
Traditionally, inkjets have had one massive attraction over laser printers—their ability to produce
color—and that is what makes them so popular with home users. Since the late 1990s when the price
of color laser printers began to reach levels that made them viable for home users, this advantage has
been less definitive. However, in that time the development of inkjets capable of photographic-quality output has done much to help them retain their advantage in the realm of color.
The down side is that, although inkjets are generally cheaper to buy than lasers, they are more
expensive to maintain. Cartridges need to be changed more frequently and the special coated paper
required to produce high-quality output is very expensive. When it comes to comparing the cost per
page, inkjets work out about ten times more expensive than laser printers.
Since the invention of the inkjet, color printing has become immensely popular. Research in
inkjet technology is making continual advances with each new product on the market showing
improvements in performance, usability, and output quality. As the process of refinement continues,
inkjet printer prices continue to fall.
Like laser printing, inkjet printing is a non-impact method. Ink is emitted from nozzles as they pass
over a variety of possible media. Liquid ink in various colors is squirted at the paper to build up an
image. A print head scans the page in horizontal strips, using a motor assembly to move it from left
to right and back. Another motor assembly rolls the paper in vertical steps. A strip of the image is
printed and then the paper moves on, ready for the next strip. To speed things up, the print head
doesn’t print just a single horizontal row of dots in each pass, but a vertical column of dots at a time.
On ordinary inkjets the print head takes about half a second to print a strip across a page. Since
A4 paper is about 8.25 inches wide, and inkjets operate at a minimum of 300 dpi, this means there are
at least 2,475 dots across the page. Therefore, the print head has about 1/5000th of a second to
respond to whether a dot needs printing. In the future, fabrication advances will allow bigger print
heads with more nozzles firing at faster frequencies. They will deliver native resolutions of up to
1200 dpi and print speeds approaching those of current color laser printers (3 to 4 ppm in color, 12 to
14 ppm in monochrome).
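The timing arithmetic above is easy to verify. The sketch below uses A4's actual width of about 8.27 inches together with the 300-dpi and half-second-per-strip figures given in the text; the exact numbers are illustrative, not specifications of any particular printer.

```python
# Back-of-envelope check of the per-dot timing quoted above.
PAGE_WIDTH_IN = 8.27   # A4 width in inches (210 mm)
RESOLUTION_DPI = 300   # minimum horizontal resolution from the text
STRIP_TIME_S = 0.5     # time for the head to cross the page once

dots_across = round(PAGE_WIDTH_IN * RESOLUTION_DPI)   # roughly 2,480 dots
time_per_dot = STRIP_TIME_S / dots_across             # roughly 1/5000 s

print(f"{dots_across} dots across, {time_per_dot * 1e6:.0f} microseconds per dot")
```

At around 200 microseconds per dot position, the driver electronics have very little time to decide whether each nozzle fires.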
Drop-on-demand (DOD) is the most common type of inkjet technology. It works by squirting
small droplets of ink onto paper through tiny nozzles. It’s like turning a hose on and off 5,000 times a
second. The amount of ink propelled onto the page is determined by the driver software which
dictates which nozzles shoot droplets, and when.
PC Peripherals
The nozzles used in inkjet printers are hair-fine. On early models they clogged easily. On
modern inkjet printers this is rarely a problem, but changing cartridges can still be messy on some
machines. Another problem with inkjet technology is a tendency for the ink to smudge immediately
after printing. But this has improved drastically during the past few years with the development of
new ink compositions.
Thermal Technology
Most inkjets use thermal technology where heat is used to fire ink onto the paper. Here, the squirt is
initiated by heating the ink to create a bubble until the pressure forces it to burst and hit the paper.
The bubble then collapses as the element cools and the resulting vacuum draws ink from the reservoir to replace the ink that was ejected. This is the method favored by Canon and Hewlett Packard.
Thermal technology imposes certain limitations on the printing process. For example, whatever type
of ink is used must be resistant to heat because the firing process is heat-based. The use of heat in
thermal printers creates a need for a cooling process as well, which levies a small time overhead on
the printing process.
Tiny heating elements are used to eject ink droplets from the print head’s nozzles. Today’s
thermal inkjets have print heads containing between 300 and 600 nozzles, each about the diameter of
a human hair (approx. 70 microns). These deliver drop volumes of around 8 – 10 pico-liters (a picoliter is a million millionth of a liter). Dot sizes are between 50 and 60 microns in diameter. By
comparison, the smallest dot size visible to the naked eye is around 30 microns. Dye-based cyan,
magenta, and yellow inks are normally delivered via a combined CMY print head. Several small
color ink drops—typically between four and eight—can be combined to deliver a variable dot size.
Black ink, which is generally based on bigger pigment molecules, is delivered from a separate print
head in larger drop volumes of around 35 pl.
Nozzle density that corresponds to the printer’s native resolution varies between 300 and 600
dpi, with enhanced resolutions of 1200 dpi increasingly available. Print speed is chiefly a function of
the frequency with which the nozzles can be made to fire ink drops. It's also a function of the width
of the swath printed by the print head. Typically these are around 12 kHz and half an inch, respectively. This yields print speeds of between 4 and 8 ppm (pages per minute) for monochrome text and 2
to 4 ppm for color text and graphics.
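Those two parameters translate into page throughput roughly as follows. This is a simplified model under stated assumptions: a firing rate on the order of 12 kHz, a half-inch swath, one drop fired per dot position, and no time lost to head turnaround or paper feed.

```python
# Rough mono print-speed model: the head fires one drop per dot position
# as it crosses the page, printing one swath-high band per pass.
def pages_per_minute(firing_hz, swath_in, page_w_in=8.27, page_h_in=11.69, dpi=600):
    dots_per_pass = page_w_in * dpi          # dot positions in one pass
    time_per_pass = dots_per_pass / firing_hz
    passes = page_h_in / swath_in            # bands needed to cover the page
    return 60.0 / (time_per_pass * passes)

# Assumed figures: ~12 kHz firing rate, half-inch swath
ppm = pages_per_minute(12_000, 0.5)
print(f"{ppm:.1f} ppm")
```

With these assumptions the model lands at about 6 ppm, squarely inside the 4 to 8 ppm range quoted for monochrome text.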
Piezo-electric Technology
Epson’s proprietary inkjet technology uses a piezo crystal at the back of the ink reservoir, similar to a
loudspeaker cone. It flexes when an electric current flows through it. Whenever a dot is required, a
current is applied to the piezo element. The element flexes and forces a drop of ink out of the nozzle.
There are several advantages to the piezo method. The process allows more control over the
shape and size of ink droplet release. The tiny fluctuations in the crystal allow for smaller droplet
sizes and higher nozzle density. Unlike thermal technology, the ink does not have to be heated and
cooled between cycles, which saves time. The ink is tailored more for its absorption properties than
it is for its ability to withstand high temperatures. This allows more freedom for developing new
chemical properties in inks.
Epson’s mainstream inkjets at the time of this writing have black print heads with 128 nozzles,
and color (CMY) print heads with 192 nozzles (64 for each color). This arrangement addresses a
resolution of 720 by 720 dpi. Because the piezo process can deliver small and perfectly formed dots
with high accuracy, Epson can offer an enhanced resolution of 1440 by 720 dpi. However, this is
achieved by the print head making two passes which results in a print speed reduction. For use with
its piezo technology, Epson has developed tailored inks which are solvent-based and extremely
quick-drying. They penetrate the paper and maintain their shape rather than spreading out on the
surface and causing dots to interact with one another. The result is extremely good print quality,
especially on coated or glossy paper.
Color Perception
Visible light is sandwiched between ultraviolet and infrared. It falls between 380 nm (violet) and 780
nm (red) on the electromagnetic spectrum. White light is composed of approximately equal proportions of all the visible wavelengths. When these shine on or through an object, some wavelengths are
absorbed while others are reflected or transmitted. It is the reflected or transmitted light that gives
the object its perceived color. For example, leaves are their familiar colors because chlorophyll
absorbs light at the blue and red ends of the spectrum and reflects the green part in the middle.
The “temperature” of the light source, measured in Kelvin (K), affects an object’s perceived
color. White light as emitted by the fluorescent lamps in a viewing box or by a photographer’s
flashlight has an even distribution of wavelengths. It corresponds to a temperature of around 6,000 K
and does not distort colors. Standard light bulbs, however, emit less light from the blue end of the
spectrum. This corresponds to a temperature of around 3,000 K and causes objects to appear more yellow.
Humans perceive color via the retina, a layer of light-sensitive cells at the back of the eye. The key
retinal cells, called cones, contain photo-pigments that render them sensitive to red, green, or blue light. (The other
light-sensitive cells are called rods and are only activated in dim light.) Light passing through the
eye is regulated by the iris and focused by the lens onto the retina where cones are stimulated by the
relevant wavelengths. Signals from the millions of cones are passed via the optic nerve to the brain
which assembles them into a color image.
Creating Color
Creating color accurately on paper has been one of the major areas of research in color printing. Like
monitors, printers closely position different amounts of key primary colors which, from a distance,
merge to form any color. This process is known as dithering.
Monitors and printers do this slightly differently because monitors are light sources while the
output from printers reflects light. Monitors mix the light from phosphors made of the primary
additive colors—red, green, and blue (RGB). Printers use inks made of the primary subtractive
colors—cyan, magenta, and yellow (CMY). The inks absorb part of the incident white light, reflecting
the desired color. In each case the basic primary colors are dithered to form the entire spectrum.
Dithering breaks a color pixel into an array of dots so that each dot is made up of one of the basic
colors, or left blank.
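The dithering idea described above can be sketched in a few lines. This is a minimal illustration using a standard 4x4 Bayer threshold matrix (a common textbook choice, not the pattern of any particular printer): each pixel's intensity is compared against a position-dependent threshold, so a patch of binary dots approximates a continuous tone.

```python
# Ordered dithering: turn a grayscale patch into on/off dots.
BAYER_4x4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither(gray):
    """gray: 2-D list of intensities in 0..255 -> 2-D list of 0/1 dots."""
    out = []
    for y, row in enumerate(gray):
        out.append([1 if px > (BAYER_4x4[y % 4][x % 4] + 0.5) * 16 else 0
                    for x, px in enumerate(row)])
    return out

# A flat mid-gray patch dithers to about half the dots "on".
patch = [[128] * 8 for _ in range(8)]
dots = dither(patch)
coverage = sum(map(sum, dots)) / 64
```

Viewed from a distance, the half-on pattern reads as gray, which is exactly the trick printers use with each of the primary inks.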
The reproduction of color from the monitor to the printer output is a major area of research
known as color matching. Colors vary from monitor to monitor and the colors on the printed page do
not always match up with what is displayed on-screen. The color generated on the printed page is
dependent on the color system used and on the model of printer used. It is not generated by the
colors shown on the monitor. Printer manufacturers have put lots of money into the research of
accurate monitor/printer color matching.
PC Peripherals
Modern inkjets can print in color and black and white, but the way they switch between the two
varies between different models. The number of ink types in the machine determines the basic
design. Printers containing four colors—cyan, magenta, yellow, and black (CMYK)—can switch
between black and white text and color images on the same page with no problem. Printers equipped
with only three colors cannot.
Some of the cheaper inkjet models have room for only one cartridge. You can set them up with a
black ink cartridge for monochrome printing or with a three-color cartridge (CMY) for color printing, but you cannot set them up for both at the same time. This makes a big difference to the
operation of the printer. Each time you want to change from black and white to color, you must
physically swap the cartridges. When you use black on a color page, it will be made up of the three
colors. This results in an unsatisfactory dark green or gray color, usually referred to as composite
black. However, the composite black produced by current inkjet printers is much better than it was a
few years ago due to the continual advancements in ink chemistry.
Print Quality
The two main determinants of color print quality are resolution (measured in dots per inch, or dpi), and
the number of levels or graduations that can be printed per dot. Generally, the higher the resolution
and the more levels per dot, the better the overall print quality.
In practice, most printers make a trade-off. Some opt for higher resolution and others settle for
more levels per dot, the best solution depending on the printer’s intended use. For example, graphic
arts professionals are interested in maximizing the number of levels per dot to deliver photographic
image quality. But general business users require reasonably high resolution so as to achieve good
text quality as well as good image quality.
The simplest type of color printer is a binary device in which the cyan, magenta, yellow, and
black dots are either on (printed) or off (not printed). Here, no intermediate levels are possible. If ink
(or toner) dots can be mixed together to make intermediate colors, then a binary CMYK printer can
only print eight “solid” colors (cyan, magenta, yellow, red, green, blue, black, and white). Clearly,
this isn’t a big enough palette to deliver good color print quality, which is where half-toning comes in.
Half-toning algorithms divide a printer’s native dot resolution into a grid of halftone cells. Then
they turn on varying numbers of dots within these cells to mimic a variable dot size. By carefully
combining cells containing different proportions of CMYK dots, a half-toning printer can fool the
human eye into seeing a palette of millions of colors rather than just a few.
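The arithmetic behind that claim is simple. An n-dot halftone cell can show n + 1 apparent levels of one ink, and combining the levels of the three chromatic inks multiplies the palette. The figures below are illustrative and ignore black generation and dot-overlap effects.

```python
# Halftone-cell palette arithmetic: trading resolution for tone levels.
def palette(cell_w, cell_h):
    levels = cell_w * cell_h + 1        # 0..n dots "on" per cell
    return levels, levels ** 3          # per-ink levels, CMY combinations

levels, colors = palette(8, 8)
# An 8x8 cell yields 65 levels per ink and ~275,000 mixable CMY shades,
# far beyond the 8 solid colors of a plain binary CMYK device.
```

The cost is spatial: a 600-dpi printer using 8x8 cells effectively renders tones at only 75 cells per inch, which is why finer native resolutions matter.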
There is an unlimited palette of solid colors in continuous tone printing. In practice, “unlimited”
means 16.7 million colors, which is more than the human eye can distinguish. To achieve this, the
printer must be able to create and overlay 256 shades per dot per color, which requires precise
control over dot creation and placement. Continuous tone printing is largely the province of dye
sublimation printers. However, all mainstream-printing technologies can produce multiple shades
(usually between 4 and 16) per dot, allowing them to deliver a richer palette of solid colors and
smoother halftones. Such devices are referred to as “con-tone” printers.
Recently, six-color inkjet printers have appeared on the market targeted at delivering photographic-quality output. These devices add two further inks—light cyan and light magenta. These
make up for current inkjet technology’s inability to create very small (and therefore light) dots. Six-color inkjets produce more subtle flesh tones and finer color graduations than standard CMYK
devices. However, they are likely to become unnecessary in the future when ink drop volumes are
expected to shrink to around 2 to 4 pico-liters. Smaller drop sizes will also reduce the amount of
half-toning required because a wider range of tiny drops can be combined to create a larger palette of
solid colors.
Rather than simply increasing dpi, long-time market leader Hewlett Packard has consistently
supported increasing the number of colors that can be printed on an individual dot to improve print
quality. They argue that increasing dpi sacrifices speed and causes problems arising from excess ink,
especially on plain paper. In 1996 HP manufactured the DeskJet 850C, the first inkjet printer to print
more than eight colors (i.e., two drops of ink) on a dot. Over the years, it has progressively refined its
PhotoREt color layering technology to the point where, by late 1999, it could produce an extremely
small 5-pl drop size and up to 29 ink drops per dot. This was sufficient to represent over 3,500
printable colors per dot.
Color Management
The human eye can distinguish about one million colors, the precise number depending on the
individual observer and viewing conditions. Color devices create colors in different ways, resulting
in different color gamuts. Color can be described conceptually by a three-dimensional HSB model:
Hue (H) – Describes the basic color in terms of one or two dominant primary colors (red or
blue-green, for example). It is measured as a position on the standard color wheel and is
described as an angle in degrees between 0 and 360.
Saturation (S) – Also referred to as chroma. It describes the intensity of the dominant colors
and is measured as a percentage from 0 to 100%. At 0% the color would contain no hue and
would be gray. At 100% the color is fully saturated.
Brightness (B) – Describes the color’s proximity to white or black, which is a function of
the amplitude of the light that stimulates the eye’s receptors. It is measured as a percentage.
If any hue has a brightness of 0%, it becomes black; at 100%, it is at full brightness.
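The HSB model described above (also known as HSV) maps directly onto Python's standard colorsys module, which expects all three components in the range 0 to 1 rather than degrees and percentages. A small adapter makes the correspondence explicit:

```python
import colorsys

def hsb_to_rgb255(h_deg, s_pct, b_pct):
    """Convert HSB given as degrees/percentages to 0..255 RGB."""
    r, g, b = colorsys.hsv_to_rgb(h_deg / 360, s_pct / 100, b_pct / 100)
    return round(r * 255), round(g * 255), round(b * 255)

# 0 degrees, fully saturated, full brightness -> pure red
print(hsb_to_rgb255(0, 100, 100))    # (255, 0, 0)
# 0% saturation -> a neutral gray regardless of hue
print(hsb_to_rgb255(200, 0, 50))
# 0% brightness -> black regardless of hue and saturation
print(hsb_to_rgb255(120, 100, 0))    # (0, 0, 0)
```

The two edge cases mirror the definitions above: zero saturation collapses to gray, zero brightness to black.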
RGB (Red, Green, Blue) and CMYK (Cyan, Magenta, Yellow, Black) are other common color
models. CRT monitors use the former, creating color by causing red, green, and blue phosphors to
glow. This system is called additive color. Mixing different amounts of red, green, and blue
creates different colors. Each color can be measured from 0 to 255. If red, green, and blue are set to
0, the color is black. If all are set to 255, the color is white.
Applying inks or toner to white paper creates printed material. The pigments in the ink absorb
light selectively so that only parts of the spectrum are reflected back to the viewer’s eye. This is
called subtractive color. The basic printing ink colors are cyan, magenta, and yellow. A fourth ink,
black, is usually added to create purer, deeper shadows and a wider range of shades. By using
varying amounts of these “process colors” a large number of different colors can be produced. Here
the level of ink is measured from 0% to 100%. For example, orange is represented by 0% cyan, 50%
magenta, 100% yellow, and 0% black.
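The relationship between the two models can be sketched with the common textbook RGB-to-CMYK conversion. This is only an approximation: real printer drivers use ICC profiles and far more sophisticated black generation than the naive "pull shared gray into black" rule below.

```python
# Naive RGB (0..255) -> CMYK (percentages) conversion.
def rgb_to_cmyk(r, g, b):
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 100.0
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)                      # pull the shared gray into black
    c, m, y = [(v - k) / (1 - k) for v in (c, m, y)]
    return tuple(round(v * 100, 1) for v in (c, m, y, k))

# The orange from the text: ~0% cyan, ~50% magenta, 100% yellow, 0% black
print(rgb_to_cmyk(255, 128, 0))
```

Running it on RGB orange reproduces the 0/50/100/0 recipe from the text (to within rounding), and pure RGB black maps to 100% black ink rather than composite black.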
The CIE (Commission Internationale de l’Eclairage) was formed early in the twentieth century to develop
standards for the specification of light and illumination. It was responsible for the first color space
model. This defined color as a combination of three axes—x, y, and z. In broad terms, x represents
the amount of redness in a color, y the amount of greenness and lightness (bright-to-dark), and z the
amount of blueness. In 1931 this system was adopted as the CIE XYZ model and is the basis for
most other color space models. The most familiar refinement is the Yxy model in which the near
triangular xy planes represent colors with the same lightness, with lightness varying along the Y-axis.
Subsequent developments such as the L*a*b and L*u*v models released in 1976 map the distances
between color coordinates more accurately to the human color perception system.
For color to be an effective tool, it must be possible to create and enforce consistent, predictable
color in a production chain. The production chain includes scanners, software, monitors, desktop
printers, external PostScript output devices, prepress service bureau, and printing presses. The
dilemma is that different devices simply can’t create the same range of colors. All of this color
modeling effort comes into its own in the field of color management. Color management uses the
device-independent CIE color space to mediate between the color gamuts of the various devices.
Color management systems are based on generic profiles of different color devices that describe their
imaging technologies, gamut, and operational methods. These profiles are then fine-tuned by
calibrating actual devices to measure and correct any deviations from ideal performance. Finally,
colors are translated from one device to another, with mapping algorithms choosing the optimal
replacements for out-of-gamut colors that cannot be handled.
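One leg of that "device to CIE" chain can be made concrete: converting sRGB, a monitor-style RGB space, into CIE XYZ. The gamma curve and matrix below come from the sRGB standard; a real color management system would read the equivalent data from each device's ICC profile rather than hard-coding it.

```python
# sRGB (0..255) -> CIE XYZ, the device-independent intermediate space.
SRGB_TO_XYZ = [
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
]

def srgb_to_xyz(r, g, b):
    def linearize(u):
        # Undo the sRGB gamma encoding to get linear light.
        u /= 255
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    lin = [linearize(v) for v in (r, g, b)]
    return tuple(sum(m * v for m, v in zip(row, lin)) for row in SRGB_TO_XYZ)

# sRGB white maps to the D65 white point (X~0.95, Y=1.00, Z~1.09)
x, y, z = srgb_to_xyz(255, 255, 255)
```

A full system would then map the XYZ value into the destination device's gamut (e.g. a printer's CMYK) via the reverse transform from that device's profile.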
Until Apple introduced ColorSync as a part of its System 7.x operating system in 1992, color
management was left to specific applications. These high-end systems have produced impressive
results, but they are computationally intensive and mutually incompatible. Recognizing the problems
of cross-platform color, the ICC (International Color Consortium, originally named the ColorSync
Profile Consortium) was formed in March 1994 to establish a common device profile format. The
founding companies included Adobe, Agfa, Apple, Kodak, Microsoft, Silicon Graphics, Sun
Microsystems, and Taligent.
The goal of the ICC is to provide true portable color that will work in all hardware and software
environments. It published its first standard—version 3 of the ICC Profile Format—in June 1994.
There are two parts to the ICC profile. The first contains information about the profile itself such as
what device created the profile, and when. The second is the colorimetric device characterization, which
explains how the device renders color. The following year Windows 95 became the first Microsoft
operating environment to include color management and support for ICC-compliant profiles via the
ICM (Image Color Management) system.
Whatever technology is applied to printer hardware, the final product consists of ink on paper. These
two elements are vitally important to producing quality results. The quality of output from inkjet
printers ranges from poor, with dull colors and visible banding, to excellent, near-photographic quality.
Two entirely different types of ink are used in inkjet printers. One is slow and penetrating and
takes about ten seconds to dry. The other is a fast-drying ink that dries about 100 times faster. The
former is better suited to straightforward monochrome printing, while the latter is used for color.
Different inks are mixed in color printing so they need to dry as quickly as possible to avoid blurring.
If slow-drying ink is used for color printing, the colors tend to bleed into one another before they’ve dried.
The ink used in inkjet technology is water-based and this poses other problems. The results from
some of the earlier inkjet printers were prone to smudging and running, but over the past few years
there have been enormous improvements in ink chemistry. Oil-based ink is not really a solution to
the problem because it would impose a far higher maintenance cost on the hardware. Printer manufacturers are making continual progress in the development of water-resistant inks, but the results
from inkjet printers are still weak compared to lasers.
One of the major goals of inkjet manufacturers is to develop the ability to print on almost any
media. The secret to this is ink chemistry, and most inkjet manufacturers will jealously protect their
own formulas. Companies like Hewlett Packard, Canon and Epson invest large sums of money in
research to make continual advancements in ink pigments, qualities of light-fastness and water-fastness, and suitability for printing on a wide variety of media.
Today’s inkjets use dyes based on small molecules (<50 nm) for cyan, magenta, and yellow inks.
These have high brilliance and wide color gamut, but aren’t sufficiently light-fast or water-fast.
Pigments based on larger (50- to 100-nm) molecules are more waterproof and fade-resistant. But they
cannot yet deliver the range of colors that dyes do, and they aren’t transparent. This means that
pigments are currently only used for the black ink. Current and future developments are concentrating on creating water-fast and light-fast CMY inks based on smaller pigment-type molecules.
Most current inkjet printers require high-quality coated or glossy paper for the production of photorealistic output, but this can be very expensive. One of the aims of inkjet printer manufacturers is to
make color printing media-independent. The attainment of this goal is generally measured by the
output quality achieved on plain copier paper. This has vastly improved over the past few years, but
coated or glossy paper is still needed to achieve full-color photographic quality. Some printer
manufacturers, like Epson, even have their own proprietary paper that is optimized for use with piezo technology.
Inkjet printers can become expensive when printer manufacturers tie you to their proprietary
consumables. Paper produced by independent companies is much cheaper than that supplied directly
by printer manufacturers. But it tends to rely on its universal properties and rarely takes advantage of
the idiosyncratic features of particular printer models.
A great deal of research has gone into the production of universal paper types that are optimized
specifically for color inkjet printers. PLUS Color Jet paper, produced by Wiggins Teape, is a coated
paper produced specifically for color inkjet technology. Conqueror CX22 is designed for black ink
and spot-color business documents, and is optimized both for inkjet and laser printers.
Paper pre-conditioning seeks to improve inkjet quality on plain paper by priming the media to
receive ink with an agent that binds pigment to the paper. This reduces dot gain and smearing. A
great deal of effort is brought to bear on trying to achieve this without incurring a dramatic performance hit. If this yields results, one of the major barriers to widespread use of inkjet technology will
have been removed.
Manageability and Costs
There is no doubt that the inkjet printer has been one of desktop computing’s success stories. Its first
phase of development was the monochrome inkjet of the late 1980s, a low-cost alternative to the laser
printer. The second phase spanned the arrival of color and its development to the point of effective
photographic quality. It gave the inkjet an all-round capability unmatched by any other printer technology. But when it comes to manageability and running costs, inkjet trails laser technology by some
distance. Inkjet’s third phase of development will focus on improving these aspects of the technology.
Hewlett Packard’s HP2000C inkjet signaled encouraging progress in this direction. Most inkjet
printers combine the ink reservoir and the print head in one unit. When the ink runs out, it’s necessary to replace both, even though print heads can have a lifetime many times that of ink reservoirs.
The HP2000C differs radically from traditional designs by using a modular system in which the ink
cartridges and print heads are kept as separate units. The printer uses four pressurized cartridges
which hold 8 cm³ of ink each. They remain static underneath a hinged cover at the front of the
printer. These are connected by tubes that are integrated with a standard ribbon-style cable that runs
to the print head carriage. Internal smart chips monitor the supply and activate a plunger on the
relevant cartridge when it requires a refill. Each ink cartridge can keep track of how much ink it has
used and how much remains, even if it is moved between printers. The print heads are also self-monitoring. They trigger an alert when they need to be replaced. The whole system can look at the
requirements for a particular print job and start only if there is sufficient ink to complete it.
Wasted ink is also a problem that adversely affects running costs. Printers that combine cyan,
yellow, and magenta inks from a single tri-color cartridge require the replacement of a whole
cartridge if just one reservoir empties. This must be done regardless of how much ink is left in the
other two reservoirs. The solution deployed by a number of printers is to employ a separate, independently replaceable ink cartridge for each color. The downside to this is increased maintenance effort.
An inkjet printer that uses four cartridges typically requires twice the attention of a printer that
combines the three colors.
The HP2000C includes another innovative feature for manageability. It incorporates a second
paper tray so that two different paper types can be kept in the printer to minimize user attention. Like
the ability to warn of impending ink depletion, this is essential in a networked environment.
Print capacities must also improve. Print speeds have exceeded 10 ppm, and increased cartridge
capacities have come with these increased print speeds. Inkjet manufacturers are expected to introduce workgroup color printers with much larger secondary ink containers linked to small primary ink
reservoirs located close to or in the print head. These printers will automatically replenish the small
primary reservoir from the secondary as needed.
Paper is another area where running cost reductions can be made. Expectations are that the
preoccupation with obtaining photographic quality on high-cost glossy paper will diminish. Inkjet
technologies will focus on obtaining better results from plain paper for the next generations of inkjet printers.
Other Printers
While lasers and inkjets dominate market share, there are a number of other important print technologies. Solid ink has a significant market presence because it’s capable of producing high-quality
output on a wide range of media. And thermal wax transfer and dye sublimation play an important
role in more specialized printing fields. Dot matrix technology remains relevant in situations where a
fast impact printer is required, but it is at a big disadvantage in that it does not support color printing.
Solid Ink
Marketed almost exclusively by Tektronix, solid ink printers are page printers that use solid wax ink
sticks in a “phase-change” process. They liquefy wax ink sticks into reservoirs. Then they squirt the
ink onto a transfer drum from where it is cold-fused onto the paper in a single pass.
Once they’ve been warmed up, solid ink devices should not be moved or wax damage may
occur. They are intended to remain powered up in a secure area. They are designed to be shared over
a network so they are equipped with Ethernet, parallel, and SCSI ports that allow for comprehensive connectivity.
Solid ink printers are generally cheaper than color lasers. They are economical to run considering their low component count and the Tektronix policy of giving black ink away free. Output quality
is good with multi-level dots supported by high-end models. But output quality is generally not as
good as the best color lasers for text and graphics. And it’s not as good as the best inkjets for photographs. Resolution starts at a native 300 dpi, rising to a maximum of around 850 by 450 dpi. Color
print speed is typically 4 ppm in standard mode, rising to 6 ppm in a reduced resolution mode.
Solid ink printers are well suited for general business use. They are also good for specialized
tasks such as large-format printing and delivering color transparencies at high speeds. They offer
connectivity and relatively low running costs, and they accept the widest range of media of any color
printing technology.
Dye Sublimation
Dye-sublimation printers are specialized devices used widely in graphic arts and photographic
applications. True dye-subs work by heating the ink so that it turns from a solid into a gas. The
heating element can be set to different temperatures to control the amount of ink laid down in one
spot. This means that color is applied as a continuous tone rather than in dots as with an inkjet. One
color is laid over the whole of one sheet at a time, starting with yellow and ending with black. The
ink is on large rolls of film that contain sheets of each color. For example, for an A4 print the film
roll contains an A4-size sheet of yellow, followed by a sheet of cyan, and so on. Dye sublimation
requires particularly expensive special paper. This is because the dyes are designed to diffuse into the
paper surface, mixing to create precise color shades. Print speeds are low, typically between 0.25 and
0.5 ppm.
There are now some inkjet printers on the market that actually deploy dye-sublimation techniques.
This type of printer uses the technology differently than a true dye-sub. Its inks are in cartridges that
can only cover the page one strip at a time. It uses a heating element to heat the inks to form a gas.
The heating element can reach temperatures up to 500°C—higher than the average dye sublimation
printer can produce. The proprietary Micro Dry technique employed in Alps’ printers is an example
of this hybrid technology. These devices operate at 600 to 1200 dpi. In some, the standard cartridges
can be swapped for special “photo ink” units to produce photographic-quality output.
Thermo Autochrome
The thermo autochrome (TA) print process is considerably more complex than either inkjet or
laser technology. It has emerged recently in printers marketed as companion devices for use with
digital cameras. TA paper contains three layers of pigment—cyan, magenta, and yellow—each of
which is sensitive to a particular temperature. Of these pigments, yellow has the lowest temperature sensitivity, then magenta, followed by cyan. The printer is equipped with both thermal and
ultraviolet heads, and the paper is passed beneath these three times. For the first pass, the paper is
selectively heated at the temperature necessary to activate the yellow pigment. This is then fixed
by the ultraviolet head before passing on to the next color (magenta). Although the last pass (cyan) isn’t
followed by an ultraviolet fix, the end results are claimed to be far more permanent than those
obtained with dye-sublimation.
Thermal Wax
Thermal wax is another specialized technology. It is very similar to dye-sublimation and is well
suited to transparency printing. It uses CMY or CMYK rolls containing page-sized panels of plastic
film coated with wax-based colorants. Thermal wax printers work by melting ink dots. The printers
are generally binary, though some higher-end models are capable of producing multi-level dots on
special thermal paper. Resolution and print speeds are low—typically 300 dpi and around 1 ppm—
which makes them suitable for specialized applications only.
Dot Matrix
Dot matrix was the dominant print technology in the home computing market before inkjet technology emerged. Dot matrix printers produce characters and illustrations by striking pins against an ink
ribbon to print closely spaced dots in the appropriate shape. They are relatively inexpensive but do not
produce high-quality output. However, they can print on continuous stationery and multi-page forms,
something laser and inkjet printers cannot do.
Print speeds, specified in characters per second (cps), vary from about 50 to over 500 cps. Most
dot-matrix printers offer different speeds depending on the desired print quality. Print quality is
determined by the number of pins (the mechanisms that print the dots). Typically, this varies from
9 to 24. The best dot-matrix printers (24 pins) are capable of near letter-quality type.
Figure 9.1 illustrates a typical printer, and Figure 9.2 shows a multi-function peripheral.
Figure 9.1: Printer Block Diagram
Figure 9.2: Multi-Function Peripheral Block Diagram
Scanners
The scanner should be accessible by any PC in the home, enabling the transfer of scanned images
and other files across the home network. Fundamentally, a scanner works like a digital camera. An
image is scanned through a lens and onto either a CMOS sensor or a charge-coupled device (CCD).
A CCD is an array of light-sensitive diodes. The sensor chip is typically housed on a daughter card
along with numerous A/D converters. The CCD and its circuitry create a digital reproduction of the
image through a series of photodiodes—each containing red, green, and blue filters—which respond
to different ranges of the optical spectrum. Once the picture is scanned, the DSP and pixel coprocessor produce a JPEG image that can be displayed on a screen. Most of this work is handled by the
DSP. Scanners are differentiated chiefly by their scanning quality.
Digital imaging has come of age. Equipment that was once reserved for expensive applications is
now commonplace on the desktop. The powerful PCs required to manipulate digital images are now
considered entry level, so it comes as no surprise to learn that scanners are one of the fastest growing
markets today.
The list of scanner applications is almost endless. This has resulted in the development of
products to meet specialized requirements:
High-end drum scanners capable of scanning both reflective art and transparencies, from
35-mm slides to 16 x 20-inch material, at high (over 10,000 dpi) resolutions.
Compact document scanners designed exclusively for OCR and document management.
Dedicated photo scanners that move a photo over a stationary light source.
Slide/transparency scanners that pass light through an image rather than reflecting light off
of it.
Handheld scanners for the budget end of the market, or for those with little desk space.
Flatbed scanners are the most versatile and popular format. They are capable of capturing color
pictures, documents, and pages from books and magazines. With the right attachments, they can even
scan transparent photographic film.
On the simplest level, a scanner is a device that converts light into 0s and 1s (a computer-readable
format). That is, scanners convert analog data into digital data.
All scanners work on the same principle of reflectance or transmission. The image is placed
before the carriage, which consists of a light source and sensor. In the case of a digital camera, the
light source could be the sun or artificial lighting. When desktop scanners were first introduced,
many manufacturers used fluorescent bulbs as light sources, but these have two distinct weaknesses.
They rarely emit consistent white light for long. And while they are on, they emit heat which can
distort the other optical components. For these reasons, most manufacturers have moved to “cold-cathode” bulbs. These differ from standard fluorescent bulbs in that they have no filament. They
operate at much lower temperatures and are more reliable. Standard fluorescent bulbs are now found
primarily on low-cost units and older models.
By late 2000, xenon bulbs had emerged as an alternative light source. Xenon produces a very stable,
full-spectrum light source that’s long lasting and quick to initiate. However, xenon light sources do
consume power at a higher rate than cold-cathode tubes.
To direct the light from the bulb onto the sensors that read light values, CCD scanners use
prisms, lenses, and other optical components. Like eyeglasses and magnifying glasses, these items
can vary quite a bit in quality. A high-quality scanner uses high-quality glass optics that are color-corrected and coated for minimum diffusion. Lower-end models typically skimp in this area, using
plastic components to reduce costs.
The amount of light reflected by or transmitted through the image and picked up by the sensor is
converted to a voltage proportional to the light intensity. The brighter the part of the image, the more
light is reflected or transmitted, resulting in a higher voltage. This analog-to-digital conversion
(ADC) is a sensitive process that is susceptible to electrical interference and noise in the system. In
order to protect against image degradation, the best scanners on the market today use an electrically
isolated analog-to-digital converter that processes data away from the main circuitry of the scanner.
However, this introduces additional manufacturing costs, so many low-end models include integrated
analog-to-digital converters built into the scanner’s primary circuit board.
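The light-to-voltage proportionality described above can be sketched as a simple quantization step. This is an illustrative model rather than any particular scanner's firmware; the reference voltage and bit depth are assumed values.

```python
def adc_sample(voltage, v_ref=3.3, bits=8):
    """Map a sensor voltage in [0, v_ref] to an integer ADC code.

    A brighter spot on the image reflects more light, producing a higher
    voltage and therefore a higher digital code (assumed 3.3 V reference).
    """
    levels = 2 ** bits
    code = int(voltage / v_ref * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp to the valid code range
```

A 10- or 12-bit converter works the same way with more levels, which is one reason electrical isolation matters: noise of a few millivolts is enough to flip the low-order bits of the code.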
The sensor component is implemented using one of three different technologies:
PMT (photo-multiplier tube) – A device inherited from drum scanner technology.
CCD (charge-coupled device) – The type of sensor used in desktop scanners.
CIS (contact image sensor) – A newer technology which integrates scanning functions into
fewer components, allowing scanners to be more compact in size.
Figure 9.3 shows the block diagram of a scanner.
Figure 9.3: Scanner Block Diagram
Smart Card Readers
Instead of fumbling for coins, imagine buying the morning paper using a card charged with small
denominations of money. The same card could be used to pay for a ride on public transportation. And
after arriving at work, you could use that card to unlock the security door, enter the office, and boot
up your PC with your personal configuration. In fact, everything you purchase, whether direct or
through the Internet, would be made possible by the technology in this card. It may seem far-fetched
but rapid advancements in semiconductor technology have made this type of card a reality. In some
parts of the world, the “smart card” has already begun to displace cash, coins, and multiple cards.
An essential part of the smart card system is the card reader, which is used to exchange or transfer information.
Why is the smart card replacing the magnetic strip card? Because the smart card can hold up to
100 times more information than a traditional magnetic strip card. The smart card is
classified as an integrated circuit (IC) card. There are actually two types of IC card—memory cards
and smart cards. Memory cards contain a device that allows the card to store various types of data.
However, they do not have the ability to manipulate this data. A typical application for memory type
cards is a pre-paid telephone card. These cards typically hold between 1 KB and 4 KB of data. A
memory card becomes a smart card with the addition of a microprocessor. The key advantage of
smart cards is that they are easy to use, convenient, and can be used in several applications. They
provide benefits to both consumers and merchants in many different industries by making data
portable, secure, and convenient to access.
Components of a Smart Card
A smart card resembles an ordinary credit card, but it has embedded ICs (memory and microcontroller)
and contacts for the ICs on one side. It may also include a magnetic strip for conventional transactions. The embedded microcontroller enables the card to make computations and decisions, and to
manipulate data. Figure 9.4 shows a block diagram of the smart card internal circuitry.
Figure 9.4: Block Diagram of Smart Card Circuitry
A typical smart card consists of an 8-bit microcontroller (MCU), 16 KB of ROM, 512 bytes of
RAM, and up to 16 KB of EEPROM or flash memory, all on a single device. There are several types
of memory within the card. For temporary data storage, it contains RAM, which is used only when
power is applied (usually when the card is in contact with the reader). It also contains ROM which
stores fixed data and the operating system. The use of non-volatile memory such as EEPROM or
Flash memory is ideal for storing data that changes, such as an account PIN or transaction data. This
type of data must remain stored once power is removed.
Manufacturers of smart cards are moving to a 32-bit microprocessor to increase processing
power and handle more applications.
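The division of labor among the three memory types can be illustrated with a toy model: RAM is cleared whenever power is removed, while ROM and EEPROM contents survive. The layout, sizes, and values below are invented for the example.

```python
class SmartCardMemory:
    """Toy model of on-card memory (illustrative contents, not a real card map)."""

    def __init__(self):
        self.rom = {"os": "card-os 1.0"}               # fixed: operating system, constants
        self.eeprom = {"pin": "1234", "balance": 50}   # non-volatile: PIN, transaction data
        self.ram = {}                                  # volatile: scratch data for one session

    def power_down(self):
        # RAM contents are lost the moment the card leaves the reader.
        self.ram.clear()
```

This is why an account PIN or a stored balance must live in EEPROM or flash: it has to be intact the next time the card is powered up.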
There are two types of smart card:
Contact smart cards – These require insertion into a smart card reader.
Contact-less smart cards – These require only close proximity to an antenna.
The contact smart card has a small gold chip about one-half inch in diameter on the front
(instead of a magnetic strip on the back like a credit card). When the card is inserted into a smart
card reader, it makes contact with the electrical connectors that read information from the chip and
write to the chip.
A contact-less smart card looks like a typical credit card, but it has a built-in microprocessor and
an antenna coil that enables it to communicate with an external antenna. Contact-less smart cards are
used when transactions must be processed quickly, as in mass-transit toll collection. The
“combicard” is a single card that functions as both a contact and contact-less card.
Smart cards have several advantages over traditional magnetic strip cards:
Proven to be more reliable than the magnetic strip card.
Can store up to 100 times more information than the magnetic strip card.
Reduce tampering and counterfeiting through high security mechanisms.
Can be reusable.
Have a wide range of applications (banking, transportation, health care, etc.).
Are compatible with portable electronics (PCs, telephones, PDAs, etc.).
Can store many types of information (finger print data, credit, debit and loyalty card details,
self-authorization data, access control information, etc.).
History of Smart Cards
Bull CP8 and Motorola developed the first “smart card” in 1977. It was a two-chip solution consisting of a microcontroller and a memory device. Motorola later produced a single-chip card called the
SPOM 01.
Smart cards have taken off at a phenomenal rate in Europe by replacing traditional credit cards.
The key to smart card success has been its ability to authorize transactions off-line. A smart card
stores the “charge” of cash, enabling a purchase up to the amount of money stored in the card.
Motorola’s single chip solution was quickly accepted into the French banking system. It served as a
means of storing the cardholder’s account number and personal identification numbers (PIN) as well
as transaction details. By 1993 the French banking industry had completely replaced all bankcards
with smart cards.
In 1989 Bull CP8 licensed its smart card technology for use outside the French banking system.
The technology was then incorporated into a variety of applications such as Subscriber Identification
Modules (SIM cards) in GSM digital mobile phones. In 1996 the first combined modem/smart card
reader was introduced. We will probably soon see the first generation of computers that read smart
cards as a standard function.
In May 1996 five major computer companies (IBM, Apple, Oracle, Netscape, and Sun) proposed
a standard for a “network computer” designed to interface directly with the Internet, with the
ability to use smart cards. Also in 1996 an alliance between Hewlett Packard, Informix, and
Gemplus was launched to develop and promote the use of smart cards for payment and security on
all open networks.
Besides e-commerce, some smart card applications are:
Transferring favorite addresses from a PC to a network computer
Downloading airline tickets and boarding passes
Booking facilities and appointments via Websites
Storing log-on information for using any work computer or terminal
Smart Card Market Potential
Smart card usage worldwide is increasing at an extraordinary rate. There are already 2 billion smart
card units in circulation. A leading smart card manufacturer (Gemplus) predicts that by 2018
there will be 5 billion phone cards alone. And this does not include smart cards for medical information, personal ID, loyalty information, etc. All of these smart cards in circulation require smart card readers.
To date, Europe has dominated the smart card industry in both production and usage. The region
produces as much as 90% of the world’s smart cards and consumes about two thirds. However,
Europe’s share of the smart card market has been declining as the cards have started gaining popularity in other parts of the world. By the end of the decade, smart card usage is expected to be evenly
split among Europe, the Americas, and Asia.
College campus smart cards have been a major success in the U.S. These multi-application smart
card systems provide students with services such as
ATM access
Library check out
Dormitory access
Payment services at vending machines, laundry, telephones, and book stores
The number of students carrying smart cards in the U.S. has grown to more than 1 million since
1996. This represents approximately one in 17 students. The growth in campus cards is producing a
generation of people who already understand the benefits of smart card technology and who may be
more inclined to use them in larger, open system applications.
Smart Card Applications
Smart cards are being used in applications ranging from stored value cards (SVCs) to
transportation, medical, and identification cards. As they become cheaper to produce,
disposable smart cards will become available alongside long-term-use cards such as multi-function credit cards. The low-cost disposable smart card will become commonplace in
applications such as one-day travel cards, flight tickets, and even concert tickets. Smart card
applications are increasing daily in this rapidly growing area. The following list describes
some of the ways smart cards are being used today.
Stored Value Cards (SVCs) – Also known as electronic purses, they are being championed
by companies such as Mondex International (Mondex), Banksys (Proton), and Chipper
International (Chipper). They allow small denominations of money to be stored on the card
in various currencies. SVCs can be used for small value purchases where it is inappropriate
to use a credit card. The SVC needs to be ‘charged’ with cash. As each transaction is
completed, the appropriate amount is deducted until the card is empty. Some companies are
producing small key-ring type readers with a small display that can read the amount left on
a SVC. For example, a single card could be used as a credit card, debit card, SVC, access
card, video rental card, and medical record file. Other applications for SVCs include
vending machines, parking meters, pay TV, cinemas, and convenience stores.
Phone Cards – Public pay phones are beginning to replace the coin slot with a smart-card
reader. Using smart cards to pay for calls reduces the need to carry cash and prevents theft
from the phone company. This type of stored value card counts down the money spent on
each call. It can then be re-charged or disposed of when it’s empty.
Health Care Cards – Health care cards can store pertinent information such as:
• Cardholder’s doctor
• Blood type
• Allergic reactions
• Medications
• Next of kin
• Emergency telephone numbers
• Dental records
• Health card details
• Scheduled medical visits
This type of card has proved to be very popular in Germany, with most of the population
carrying one.
Transportation Cards – It is estimated that there are 20 billion commuter transactions
worldwide. All these transactions take time, so the need for smart card technology is
increasing, especially for contact-less transactions. For example, smart cards could drastically reduce the time it takes to pay for a subway ticket and pass through the security
barrier. As you pass through a turnstile, you could hold your card next to the reader. The
card would be read, money would be deducted, and the entrance barrier would be opened.
Pre-pay Utility Meter Cards – This type of card is very popular in regions with a lot of
seasonal worker movement. They are also good for short-term tenant agreements where
utilities (water, electricity, and gas) must be paid for in advance or as-used. The card is
“charged” with money and then inserted into the utility card reader. Money is deducted for a
certain amount of gas, electricity, or water. The meter usually shows a countdown of how
much fuel is left. This type of utility payment has proved very popular in South Africa.
Personal ATM – Public and private telephones and PCs with smart card readers could make
personal ATMs possible. This would allow users to load funds onto their smart cards from
their bank accounts. Or they could top up the limit of a pre-authorized debit card. Financial
institutions can make these facilities available wherever there is a phone, without the need
for costly traditional ATMs. These telephone transactions can enable funds to be transferred
from person to retailer, bank to account holder, and even person to person.
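The “charge it, then count it down” behavior shared by SVCs, phone cards, and pre-pay meter cards can be sketched in a few lines. The class name and amounts are invented for illustration.

```python
class StoredValueCard:
    """Minimal stored-value card: charge it with cash, deduct until empty."""

    def __init__(self):
        self.balance = 0

    def charge(self, amount):
        """Load value onto the card, e.g. at a personal ATM or bank terminal."""
        self.balance += amount

    def deduct(self, amount):
        """Deduct one purchase; refuse if the card holds too little value."""
        if amount > self.balance:
            return False
        self.balance -= amount
        return True
```

A key-ring reader of the kind mentioned above would simply display `balance`; the card is disposed of or re-charged once `deduct` starts refusing.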
Smart Card System Architecture
The complete smart card system consists of the card, the card reader, and the operating software. The
smart card contains the appropriate ICs to store and manipulate data. When the card is inserted into
a reader, the reader communicates with the smart card microcontroller to perform authentication
functions. The card then performs the requested function such as a deduction of a stored value. The
card is removed when the transaction is complete, and the reader is ready to accept the next card.
Standardization plays an important role in the interaction between the card and the reader. For
example, needing to carry a separate smart card for each merchant would be inefficient.
Consumers already carry several different cards including video rental cards, discount shopping
cards, and credit and debit cards. Standardization in cards and card readers would allow for one card
to be accepted and read by different readers.
A multi-functional operating system is the key to a multi-application card. It would prevent
interference among the programs. JavaCard (Sun Microsystems) and MULTOS™ are two operating
systems that let cards perform multiple functions. Some companies support Java and some support
Multos, but fortunately for users, they are compatible with each other. Microsoft’s Smart Card for
Windows is a third operating system for smart cards.
MULTOS™ is a multi-application operating system for smart cards; the MAOSCO consortium
controls its specifications. The main elements of the MULTOS™ architecture are a virtual machine
(MEL interpreter) and an application loader. Smart card applications are coded in MEL (MULTOS
Executable Language) and are hardware independent. In contrast to the JavaCard, the MULTOS™
virtual machine (VM) is completely realized inside the card. Generally, there is a distinction between
off-card and on-card VMs. MULTOS™ offers a 100% on-card VM with firewalls between the
applications. With this design, MULTOS™ meets the highest security criteria.
The application loader uses a process called dynamic loading and unloading. This process
ensures secure smart card application loading and deletion, to and from the EEPROM. This also
applies to cards already issued. The application loader, the loading procedures, and data formats are
all part of the MULTOS™ specifications.
Recently, many companies have teamed up to provide smart card users with extra benefits., creators of the Web currency beenz, and Mondex International have teamed up to work
on the development of a Multos smart card capable of carrying Mondex e-cash, beenz, and complementary e-commerce services. The companies envision that the card will be used with PCs, wireless
devices, and digital TV, as well as in the high street. Mondex is cash in electronic form and is
particularly suited to high volume, low value payments in the real and virtual worlds. Beenz is a
universal web currency. Beenz cannot be bought directly by consumers, but earned on-line by
consumers visiting, interacting with, or shopping at web sites. It can be spent with participating
merchants on thousands of products. Beenz now have the potential to be earned in the real world as
well as on-line in the virtual world.
With its Blue card, American Express has launched the first mainstream U.S. credit card that has both a
magnetic strip and a smart card chip. The company is offering a free “on-line wallet.” It stores
personal information that can be secured with a free smart card reader that connects to a user’s PC. A
PIN unlocks the wallet on-line when the smart card is inserted into the reader. A major benefit is that
users do not have to type in their account details every time they wish to purchase on-line.
The loyalty card is the predicted killer application for smart cards. This application is really
taking off in the U.S. with drug chain Rite-Aid and movie rental chain Blockbuster leading the way.
The smart card loyalty scheme is transacted at the point of sale (POS) terminal. It offers discounts
and other incentives according to loyalty points earned. Rite-Aid also sells a smart card gift certificate which is redeemable (with stored value programmed into the chip) for merchandise in the store.
What is a Smart Card Reader?
An essential part of the smart card system is the card reader. The card reader exchanges or transfers
information. There are several types of card readers, each specific to a particular application. One
type connects to a PC to make purchases via the Internet (e-commerce) or to load money onto a smart card
through on-line banking. Another type is a handheld wireless terminal of the kind used in taxis and buses.
There are two main categories of smart card reader—contact and contact-less. The difference
between them is that the contact-less version contains an antenna coil. It sends and receives data
without the need to make contact with the smart card. The basic functionality of both types of smart
card reader is the same. With the introduction of the combicard (both contact and contact-less
functions on one card), the card reader may accept both types of card interrogation and interaction.
The contact smart card tends to be used for cash transactions, whereas the contact-less type is used
more for access control applications.
Smart card readers for use with PCs rely on the PC for data input and on its monitor for viewing
card status. Smart card reader functions may also be integrated into cash registers, vending machines,
public payphones, set top boxes, mobile phones, utility meters, and so forth.
Figure 9.5 shows the basic components of a smart card reader.
Figure 9.5: Smart Card Reader Block Diagram
The functional blocks that make up the system are:
Main data processing – This is typically a 16- or 32-bit microprocessor for computational functions.
Memory – Stores data (operating system and variable/data storage) and microprocessor
boot code.
Security logic – Aids data encryption.
Card reader interface – Supports a smart card reader (contact and contact-less) and a
magnetic card reader.
Keypad and keypad decoder – Enters PINs, data, and the associated logic necessary to
decode character input.
LCD display driver
Modem and modem interface – Interfaces to wireless, cellular, and radio modems (usually
PCMCIA type).
To describe the functions of the smart card reader, let’s review a typical consumer transaction.
This is shown pictorially in Figure 9.6.
The merchant inserts the smart card into the card reader and power is applied to the card. The
reader communicates with the smart card MCU to perform the card authentication cycle. During the
initial read function, the smart card interface logic passes the data to the card reader microprocessor
via the security logic. The reader then instructs the user to enter a PIN via a message on the LCD.
The user enters his/her PIN via the keypad. The PIN is verified by the microcontroller in the
card, which compares the entered PIN to the reference PIN stored
Figure 9.6: Typical Customer Transaction
in its non-volatile memory. If the comparison fails, the card refuses to proceed. The smart card keeps track of
how many wrong PINs are entered. If it is over a predetermined number, the card blocks itself
against any future use.
If the PIN is verified, the merchant enters the amount to be deducted. If the smart card holds a
stored value covering the amount, the deduction occurs. If the transaction is not a
SVC transaction, the amount to be debited from the bank account is verified using the modem
(wireless, cellular, or radio). When the transaction is complete, the card is ejected and removed. The
smart card reader is then ready for the next transaction.
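The card-side part of this transaction, PIN verification, a wrong-try counter, self-blocking, and stored-value deduction, can be sketched as follows. The retry limit, PIN, and balance are invented values; a real card runs this logic inside its own microcontroller.

```python
class SmartCard:
    """Card-side logic: verify the PIN, count failures, block after too many."""

    MAX_WRONG_PINS = 3  # assumed limit; real cards set their own

    def __init__(self, pin, balance):
        self._pin = pin        # reference PIN, kept in non-volatile memory
        self.balance = balance
        self.wrong_tries = 0
        self.blocked = False

    def verify_pin(self, entered):
        if self.blocked:
            return False
        if entered == self._pin:
            self.wrong_tries = 0
            return True
        self.wrong_tries += 1
        if self.wrong_tries >= self.MAX_WRONG_PINS:
            self.blocked = True  # card blocks itself against any future use
        return False

    def deduct(self, amount):
        """Deduct a stored value if the card is usable and holds enough."""
        if self.blocked or amount > self.balance:
            return False
        self.balance -= amount
        return True
```

Note that blocking is permanent state on the card itself, which is what makes the scheme robust even when the reader is off-line.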
The flow of information between an interface device and a smart card occurs via transport
protocols in the form of command-response pairs. In most cases the card reader has the role of
master. That is, the card reader generates and processes the commands.
Smart cards are making high-technology applications of the 21st century easier and more secure
for everyone. The smart card market is on the brink of realizing its full worldwide potential. This
will be realized not only in GSM phones, but on a wider scale for cash-less transactions, loyalty
schemes, access control systems, medical record cards, etc. This year we will see personal computers
shipped with smart card readers as standard equipment. This combination will unlock widespread
worldwide acceptance of multi-application smart cards. Handheld battery-powered smart card
readers in taxis and buses will become commonplace.
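The master/slave command-response exchange can be sketched as below. The command names are invented; the status words follow the ISO 7816 convention (0x9000 for success), though a real card defines many more commands and responses.

```python
def card_handle(command):
    """Toy card-side handler: return (response data, status word) for a command."""
    if command == b"GET_BALANCE":
        return b"\x00\x32", b"\x90\x00"   # 0x0032 = 50 units; 0x9000 = success
    return b"", b"\x6d\x00"               # 0x6D00 = instruction not supported

def reader_transact(command):
    """Reader side (the master): send a command, interpret the status word."""
    data, status = card_handle(command)
    return data if status == b"\x90\x00" else None
```

Every exchange is one such command-response pair, with the reader always initiating.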
Next-generation smart card manufacturers are looking to integrate fingerprint sensors. Interest
in fingerprint sensor technology is reemerging after years of being relegated
to a niche application due to cost and technology limitations. There is now evidence that fingerprint ID
devices may be breaking out of their niche. Some players incorporate the sensors into consumer-based systems like laptops, and sensor vendors tout improved ruggedness and accuracy.
Keyboards
A computer keyboard is an array of switches, each of which sends the PC a unique signal when it is
pressed. Mechanical switches and rubber membranes are the two types most commonly used.
Mechanical switches are spring-loaded “push to make” types. They complete the circuit when
pressed, and break it when released. These are the types used in click-type keyboards that provide
tactile feedback.
Membranes are composed of three sheets. The first has conductive tracks printed on it, the
second is a separator with holes in it, and the third is a conductive layer with bumps on it. A rubber
mat covering provides the springy feel. When a key is pressed it pushes the two conductive layers
together to complete the circuit. On top is a plastic housing that includes sliders to keep the keys in place.
The force displacement curve is an important keyboard factor. It describes how much force is
needed to depress a key, and how this force varies during the key’s downward travel. Research shows
that most people prefer 80 g to 100 g. Keys on game consoles may go to 120 g or higher, while other keys
could be as low as 50 g.
The keys are connected as a matrix and their row and column signals feed into the keyboard’s
microcontroller chip. The chip is mounted on a circuit board inside the keyboard and interprets the
signals with its built-in firmware program. For example, a particular key press might signal row 3,
column B which the controller decodes as an ‘A’ and sends the appropriate code to the PC. These
scan codes are defined as standard in the PC’s BIOS, though the row and column definitions are
specific only to that particular keyboard.
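The row/column decoding the controller firmware performs can be sketched as a lookup table. Only the row 3 / column B → 'A' mapping comes from the text; the other entries are hypothetical, and real matrix layouts differ per keyboard.

```python
# Hypothetical fragment of a key matrix; only (3, "B") -> "A" comes from the text.
KEY_MATRIX = {
    (3, "B"): "A",
    (1, "A"): "Q",
    (2, "C"): "S",
}

def decode_keypress(row, column):
    """Translate a row/column contact closure into a key label, as the firmware
    would, before sending the corresponding scan code to the PC."""
    return KEY_MATRIX.get((row, column))
```

The firmware's real table maps each closure straight to a scan code; a label is used here only to keep the sketch readable.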
Increasingly, keyboard firmware is becoming more complex as manufacturers make their
keyboards more sophisticated. It is not uncommon for a programmable keyboard in which some keys
have switchable multiple functions to need 8 KB of ROM to store its firmware. Most programmable
functions are executed through a driver running on the PC.
A keyboard’s microcontroller is also responsible for negotiating with the keyboard controller in
the PC. It reports its presence and allows PC software to do things like toggle the status light on the
keyboard. The two controllers communicate asynchronously over the keyboard cable.
Many ergonomic keyboards angle the two halves of the main keypad to allow the elbows to rest
in a more natural position. Apple’s Adjustable Keyboard has a wide, gently sloping wrist rest that
splits down the middle. It enables the user to find the most comfortable typing angle. It has a
detachable numeric keypad so the user can position the mouse closer to the alphabetic keys. Cherry
Electrical sells a similar split keyboard for the PC, but it is the Microsoft Natural Keyboard that sells in the largest volumes and is one of the cheapest. This keyboard also separates the keys into two
halves. The manufacturer claims its undulating design accommodates the natural curves of the hand.
In the early 1980s the first PCs were equipped with only a keyboard. By the end of the decade a
mouse device had become an essential for running the GUI-based Windows operating system.
The most common mouse used today is optoelectronic. It has a steel ball for weight and it is rubber-coated for grip. As it rotates it drives two rollers, one each for x and y displacement. A third spring-loaded roller holds the ball in place against the other two. These rollers turn two disks that have
radial slots cut in them. Each disk rotates between a photo-detector cell, and each cell contains two
offset light-emitting diodes (LEDs) and light sensors. As the disk turns, the sensors see the light
appear to flash, indicating movement. The offset between the two light sensors shows the direction of movement.
There is a switch inside the mouse for each button, and also a microcontroller that interprets the
signals from the sensors and the switches. It uses its firmware to translate them into packets of data
that are sent to the PC. Serial mice use 12-volt power. They also use an asynchronous protocol from
Microsoft composed of three bytes per packet to report x and y movement and button presses. PS/2
mice use 5 volts and an IBM-developed communications protocol and interface.
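The three-byte Microsoft serial packet can be unpacked as follows. The bit layout used here is the commonly documented one for that protocol, quoted from memory rather than from this text, so treat it as a sketch, not driver code.

```python
def decode_ms_packet(b1, b2, b3):
    """Decode one 3-byte Microsoft serial-mouse packet into button
    states and signed x/y movement deltas."""
    if not b1 & 0x40:                      # bit 6 marks the first byte
        raise ValueError("out of sync: first byte lacks the sync bit")
    left = bool(b1 & 0x20)
    right = bool(b1 & 0x10)
    dx = ((b1 & 0x03) << 6) | (b2 & 0x3F)  # high 2 bits + low 6 bits
    dy = ((b1 & 0x0C) << 4) | (b3 & 0x3F)
    # movements are signed 8-bit two's-complement values
    dx -= 256 if dx > 127 else 0
    dy -= 256 if dy > 127 else 0
    return left, right, dx, dy

print(decode_ms_packet(0x63, 0x3D, 0x05))  # (True, False, -3, 5)
```

Packing the movement into seven-bit bytes like this is what lets the mouse report over a plain asynchronous serial line.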
1999 saw the introduction of the most radical mouse design advancement since its first appearance in 1968. This was Microsoft’s revolutionary IntelliMouse. Gone were the mouse ball and the
other moving parts inside the mouse used to track mechanical movement. They were replaced by a
tiny CMOS optical sensor—the same chip used in digital cameras—and an on-board digital signal
processor (DSP).
Called the IntelliEye, this infrared optical sensor emits a red glow beneath the mouse. It captures
high-resolution digital snapshots at the rate of 1,500 images per second. These images are compared
by the DSP and translated into on-screen pointer movements. The technique, called image correlation processing, executes 18 million instructions per second (MIPS). It results in smoother, more
precise pointer movement. The absence of moving parts means that the mouse’s traditional enemies
(food crumbs, dust, grime, etc.) are all but avoided. The IntelliEye works on nearly any surface such
as wood, paper, and cloth. It does have some difficulty, though, with reflective surfaces such as CD
jewel cases, mirrors, and glass.
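Image correlation can be illustrated with a brute-force search for the shift that best aligns two successive frames. This is a toy model of the idea only; the IntelliEye's DSP does the equivalent comparison in dedicated hardware at thousands of frames per second.

```python
import random

def estimate_shift(prev, curr, max_shift=2):
    """Return the (dx, dy) shift that minimizes the mean absolute
    difference between two grayscale frames over their overlap."""
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float('inf')
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = count = 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        err += abs(prev[y][x] - curr[yy][xx])
                        count += 1
            if err / count < best_err:
                best_err, best = err / count, (dx, dy)
    return best

# Simulate the surface sliding one pixel to the right between frames.
random.seed(1)
prev = [[random.randrange(256) for _ in range(8)] for _ in range(8)]
curr = [[prev[y][(x - 1) % 8] for x in range(8)] for y in range(8)]
print(estimate_shift(prev, curr))   # (1, 0)
```

A featureless, mirror-like surface gives the correlator nothing to lock onto, which is exactly why reflective surfaces trouble optical mice.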
Newer optical mice are more accurate and less fussy than the roller-ball types. With optical
sensors and a DSP chip, the latest Wireless IntelliMouse Explorer has a video sensor. It can take
6,000 video snapshots per second of the work surface, allowing smooth and precise cursor movement
on the screen. And this can all be done wirelessly.
A peripheral device is an external device that attaches permanently or temporarily to a PC to provide
a specific functionality. Devices such as printers, scanners, smart card readers, keyboards, and mice
are examples of peripheral devices. Each device provides a service that enables consumers to input
or output information.
Digital Displays
Many recent advances have been made in flat-panel displays, also known as digital displays. Consumer interest runs high for the sharpest, largest screen money can buy. From high-definition
television to scientific and engineering workstations to handheld personal organizers and cell phones,
end users are asking for high resolution, great color, and small form factor, all at the lowest possible
cost. Such expectations have display designers exploring innovative schemes for representing
information in a flat-panel format. Significantly, the only non-flat-panel technology in use is the
cathode ray tube (CRT), which can still offer a competitive performance/cost ratio but is becoming
increasingly limited in terms of the wide variety of emerging applications. Three essential conditions driving the flat panel display market are its technological and commercial readiness (in viewing quality, manufacturing efficiency, and system integration), substantial capacity investment, and lower selling prices.
To provide such advances, manufacturers are looking at various digital display technologies. Each technology, however, provides advantages that are ideal for only a few applications. Liquid crystal displays may be ideal for notebook PCs and desktop monitors, while Plasma Display Panels (PDPs) provide a good mid-range between high-definition TV and conventional television monitors.
Meanwhile, LCoS (liquid-crystal-on-silicon) chips are looking to provide the essential low-cost
driver for a new generation of rear-projection high-definition TV systems. In a similar vein, vendors
are pursuing organic LED technology as an enabler for robust, small, and low-cost display elements
in portable consumer appliances. Organic LEDs are easily processed onto a variety of substrates, and
produce high-quality images when magnified by compact optical systems. Of course, all of these
technologies will eventually run into more-exotic variants such as active-matrix liquid-crystal panel
and thick dielectric electro-luminescent (TDEL) technologies. These technologies and their manufacturers are after a rich market: production of 8 million units in 2003, with an estimated 13 million units in 2007. Production of digital displays will probably be located in the Asia-Pacific region to gain maximum cost benefits.
Whatever the actual system used to display information, one continually increasing demand is
the amount of information being fed to displays. Larger sizes, higher resolutions, low power, and real-time video requirements all feed into this data bottleneck. Even so, migration from bulky CRT monitors to digital displays continues, driven by the reduced form factor, lower weight, and better pictures and sound of digital displays.
CRTs—Cathode Ray Tubes
In an industry in which development is so rapid, it is somewhat surprising that the technology behind
monitors and televisions is over 100 years old. The CRT was developed by Ferdinand Braun, a
German scientist, in 1897, but was not used in the first television sets until the late 1940s. Although
the CRTs found in modern monitors have undergone modifications to improve picture quality, they
still follow the same basic principles. Despite predictions of its impending demise, it looks as if the
CRT will maintain its dominance in the PC monitor and television market for the near future.
Anatomy of the CRT
A CRT is an oddly shaped, sealed glass bottle with no air inside. Its overall shape begins with a slim
neck at one end, and gradually spreads outward (similar to the shape of a funnel) until it forms a
large base at the other end. The surface of the CRT base, which is the monitor’s screen, can be either
almost flat or slightly convex, and is coated on the inside with a matrix of thousands of tiny phosphor
dots. Phosphors are chemicals that emit light when excited by a stream of electrons, and different
phosphors emit different colored light. Each dot consists of three blobs of colored phosphor, one red
(R), one green (G), and one blue (B). These groups of three phosphors make up what is known as a
single pixel.
The “bottleneck” of the CRT contains an electron gun, which is composed of a cathode, a heat
source, and focusing elements. Color monitors have three separate guns, one for each phosphor
color—R, G, and B. Combinations of different intensities of red, green, and blue phosphors can
create the illusion of millions of colors. This effect is called additive color mixing, and is the basis
for all color CRT displays.
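The "millions of colors" figure follows directly from per-gun intensity control. Assuming 8-bit drive per gun (a typical figure, not stated in the text), the arithmetic is:

```python
LEVELS = 256                      # assumed 8-bit intensity control per gun
print(LEVELS ** 3)                # 16777216 distinct combinations

# Additive mixing: each gun contributes light independently.
white = (255, 255, 255)           # all three guns at full intensity
yellow = (255, 255, 0)            # red + green phosphors lit, blue off
black = (0, 0, 0)                 # all guns off; the screen stays dark
```

More intensity levels per gun multiply through all three channels, which is why even a modest per-gun resolution yields such a large palette.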
Images are created when electrons fired from the electron gun converge to strike their respective phosphor blobs (triads), each of which is illuminated to a greater or lesser extent. When
electrons strike the phosphor, light is emitted in the color of the individual phosphor blobs. The
electron gun must be heated by a built-in heater before it will liberate electrons (negatively charged)
from the cathode. The emitted electrons are then narrowed into a tiny beam by the focus elements.
The electrons are drawn toward the phosphor dots by a powerful, positively charged anode, located
near the screen.
The phosphors in a group are so close together that the human eye perceives the combination as
a single colored pixel. Before the electron beam strikes the phosphor dots, it travels through a
perforated sheet located directly in front of the phosphor layer known as the “shadow mask.” Its
purpose is to “mask” the electron beam, forming a smaller, more rounded point that can strike
individual phosphor dots cleanly and minimize “over-spill,” a condition in which the electron beam
illuminates more than one dot.
The electron beam is moved around the screen by magnetic fields generated through a deflection
yoke that encircles the narrow neck of the CRT. The beam always begins moving from the top left
corner (as viewed from the front), and sweeps from left to right across the row of pixels. The beam is
controlled so that it will flash on and off as it sweeps across the row, or “raster,” allowing energized
electrons to collide only with the phosphors that correlate to the pixels of the image that is to be created
on the screen. These collisions convert the electron’s energy into light. Once a pass has been completed, the electron beam moves down one raster and begins again from the left side. This process is
repeated until an entire screen is drawn, at which point the beam returns to the top left to start again.
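The sweep order described above (left to right along each raster, top raster first) can be written as a simple generator:

```python
def raster_scan(width, height):
    """Yield pixel coordinates in the order the electron beam visits them."""
    for row in range(height):        # one raster line at a time, top first
        for col in range(width):     # sweep left to right across the row
            yield (col, row)
        # horizontal retrace: the beam flies back to the left edge here
    # vertical retrace: the beam returns to the top-left for the next frame

print(list(raster_scan(3, 2)))
# [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```

Blanking the beam during both retrace intervals is what keeps the return sweeps invisible on screen.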
Chapter 10
The most important aspect of any monitor is that it should give a stable display at the chosen
resolution and color palette. A screen that shimmers or flickers, particularly when most of the picture
is showing white, can cause itchy or painful eyes, headaches, and migraines. It is also important that
the performance characteristics of a monitor be carefully matched with those of the graphics card
driving it. It is of no consequence to have an extremely high performance graphics accelerator,
capable of ultra high resolutions at high flicker-free refresh rates, if the monitor cannot lock onto the signal.
A monitor’s three key specifications are the maximum resolution it will display, at what refresh
rate, and whether this is in interlaced or non-interlaced mode.
Resolution and Refresh Rate
The resolution of a monitor is the number of pixels used by the graphics card to describe the desktop
(the display area of the monitor), expressed as a horizontal-by-vertical figure. Standard VGA
resolution is 640 x 480 pixels. The most common SVGA resolutions are 800 x 600 and 1024 x 768.
Refresh rate, or vertical frequency, is measured in hertz (Hz), and represents the number of
frames displayed on the screen per second. Too few, and the eye will notice the intervals in between
frames and perceive a flickering display. The worldwide accepted refresh rate for a flicker-free
display is 70 Hz and above, although standards bodies such as the Video Electronics Standards
Association (VESA) are pushing for higher rates of 75 Hz or 80 Hz.
A computer’s graphics circuitry creates a signal based on the Windows desktop resolution and
refresh rate. This signal is known as the horizontal scanning frequency, or HSF, and is measured in
kHz. Raising the resolution and/or refresh rate increases the HSF signal. A multi-scanning or
“autoscan” monitor is capable of locking on to any signal that lies between a minimum and maximum HSF for the monitor. If the signal falls out of the monitor’s range, it will not be displayed.
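The relationship between resolution, refresh rate, and HSF can be estimated as follows. The 5% blanking allowance is a rough assumption for the vertical retrace overhead, not a VESA figure; real video timings vary by mode.

```python
def horizontal_scan_freq_khz(visible_lines, refresh_hz, blanking=1.05):
    """Approximate HSF: lines drawn per frame times frames per second,
    padded by roughly 5% for vertical retrace (an assumed factor)."""
    return visible_lines * refresh_hz * blanking / 1000.0

# 1024 x 768 at 85 Hz needs roughly a 68.5-kHz horizontal scan rate.
print(round(horizontal_scan_freq_khz(768, 85), 1))   # 68.5
# Raising resolution or refresh rate raises the HSF the monitor must lock to.
print(round(horizontal_scan_freq_khz(1024, 85), 1))  # 1280 x 1024 at 85 Hz
```

A multi-scanning monitor simply advertises the HSF range it can lock to; any mode whose computed HSF falls outside that range goes undisplayed.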
CRT Mask Technologies
The different CRT mask technologies include interlacing, dot pitch, dot trio, aperture grill, slotted
mask, and enhanced dot pitch.
Interlacing
An interlaced monitor is one in which the electron beam draws every other line, say one, three, and
five until the screen is full, then returns to the top to fill in the even blanks (say lines two, four, six,
and so on). An interlaced monitor offering a 100-Hz refresh rate only refreshes any given line 50
times a second, giving an obvious shimmer. A non-interlaced (NI) monitor draws every line before
returning to the top for the next frame, resulting in a far steadier display. A non-interlaced monitor
with a refresh rate of 70 Hz or greater is necessary to be sure of a stable display.
Masks and Dot Pitch
The maximum resolution of a monitor is dependent on more than just its highest scanning frequencies. Another factor is dot pitch, the physical distance between adjacent phosphor dots of the same
color on the inner surface of the CRT. Typically, this is between 0.22 mm and 0.3 mm. The smaller
the number, the finer and better resolved the detail. However, trying to supply too many pixels to a
monitor without a sufficient dot pitch to cope causes the very fine details, such as the writing beneath
icons, to appear blurred.
There is more than one way to group three blobs of colored phosphor—indeed, there is no
reason why they should even be circular blobs. A number of different schemes are currently in use,
and care needs to be taken in comparing the dot pitch specification of the different types. With
standard dot masks, the dot pitch is the center-to-center distance between two nearest-neighbor
phosphor dots of the same color, which is measured along a diagonal. The horizontal distance
between the dots is 0.866 times the dot pitch. For masks that use stripes rather than dots, the pitch
equals the horizontal distance. This means that the dot pitch on a standard dot-mask CRT should be
multiplied by 0.866 before it is compared with the dot pitch of these other types of monitors.
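The 0.866 factor is just cos(30 degrees), which falls out of the triad's triangular geometry, so the conversion is a one-liner:

```python
import math

def stripe_equivalent_pitch(dot_pitch_mm):
    """Convert a dot-mask diagonal pitch to the horizontal pitch that is
    directly comparable with an aperture grill's stripe pitch.
    The 0.866 factor is cos(30 degrees) from the triad geometry."""
    return dot_pitch_mm * math.cos(math.radians(30))

# A 0.28 mm dot pitch corresponds to roughly a 0.24 mm stripe pitch.
print(round(stripe_equivalent_pitch(0.28), 2))   # 0.24
```

This is why quoting raw dot pitch across mask types flatters dot-trio tubes by a few hundredths of a millimeter.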
The difficulty in directly comparing the dot pitch values of different displays means that other
factors—such as convergence, video bandwidth, and focus—are often a better basis for comparing
monitors than dot pitch.
Dot Trio
The vast majority of computer monitors use circular blobs of phosphor and arrange them in triangular formation. These groups are known as “triads” and the arrangement is a dot trio design. The
shadow mask is located directly in front of the phosphor layer—each perforation corresponding with
phosphor dot trios—and assists in masking unnecessary electrons, avoiding over-spill and resultant
blurring of the final picture.
Because the distance between the source and the destination of the electron stream towards the
middle of the screen is less than at the edges, the corresponding area of the shadow mask gets hotter.
To prevent it from distorting and redirecting the electrons incorrectly, manufacturers typically
construct it from Invar, an alloy with a very low coefficient of expansion.
This is all very well, except that the shadow mask used to avoid overspill occupies a large
percentage of the screen area. Where there are portions of mask, there is no phosphor to glow and
less light means a duller image. The brightness of an image matters most for full-motion video, and
with multimedia becoming an increasingly important market consideration, a number of improvements make the dot-trio mask designs brighter. Most approaches to minimizing glare involve filters
that also affect brightness. The new schemes filter out the glare without affecting brightness as much.
Toshiba’s Microfilter CRT places a separate filter over each phosphor dot and makes it possible
to use a different color filter for each color dot. Filters over the red dots, for example, let red light
shine through, but they also absorb other colors from ambient light shining on screen – colors that
would otherwise reflect as glare. The result is brighter, purer colors with less glare. Other companies
are offering similar improvements. Panasonic’s Crystal Vision CRTs use a technology called dye-encapsulated phosphor, which wraps each phosphor particle in its own filter. ViewSonic offers an
equivalent capability as part of its new SuperClear screens.
Aperture Grill
In the 1960s, Sony developed an alternative tube technology known as Trinitron. It combined the
three separate electron guns into one device; Sony refers to this as a Pan Focus gun. Most interesting
of all, Trinitron tubes were made from sections of a cylinder, vertically flat and horizontally curved,
as opposed to conventional tubes using sections of a sphere, which are curved in both axes. Rather
than grouping dots of red, green, and blue phosphor in triads, Trinitron tubes lay their colored
phosphors down in uninterrupted vertical stripes.
Consequently, rather than use a solid perforated sheet, Trinitron tubes use masks that separate
the entire stripes instead of each dot; Sony calls this the “aperture grill.” This replaces the shadow
mask with a series of narrow alloy strips that run vertically across the inside of the tube. Rather than
using conventional phosphor dot triplets, aperture grill-based tubes have phosphor lines with no
horizontal breaks, and so rely on the accuracy of the electron beam to define the top and bottom
edges of a pixel. Since less of the screen area is occupied by the mask and the phosphor is uninterrupted vertically, more of the screen can glow, resulting in a brighter, more vibrant display. The
equivalent measure to dot pitch in aperture grill monitors is known as “stripe pitch.”
Because aperture grill strips are very narrow, there is a possibility that they might move due to
expansion or vibration. In an attempt to eliminate this, horizontal damper wires are fitted to increase
stability. This reduces the chances of aperture grill misalignment, which can cause vertical streaking
and blurring. The down-side is that damper wires obstruct the flow of electrons to the phosphors and
are therefore just visible upon close inspection. Trinitron tubes below 17 inches or so manage with one
wire, while the larger models require two. A further down-side is mechanical instability. A tap on the
side of a Trinitron monitor can cause the image to wobble noticeably for a moment. This is understandable given that the aperture grill’s fine vertical wires are held steady in only one or two places
horizontally. Mitsubishi followed Sony’s lead with the design of its similar Diamondtron tube.
Slotted Mask
Capitalizing on the advantages of both the shadow mask and aperture grill approaches, NEC has
developed a hybrid mask type that uses a slot-mask design borrowed from a TV monitor technology that originated at RCA and Thorn in the late 1970s. All non-Trinitron TV sets use elliptically shaped
phosphors grouped vertically and separated by a slotted mask.
In order to allow more electrons through the shadow mask, the standard round perforations are
replaced with vertically aligned slots. The design of the trios is also different, and features rectilinear
phosphors that are arranged to make best use of the increased electron throughput.
The slotted mask design is mechanically stable due to the crisscross of horizontal mask sections
but exposes more phosphor than a conventional dot-trio design. The result is not quite as bright as an
aperture grill but much more stable and still brighter than dot-trio. It is unique to NEC, and the
company capitalized on the design’s improved stability in early 1996 when it fitted the first ChromaClear monitors to come to market with speakers and microphones and claimed them to be
“the new multimedia standard.”
Enhanced Dot Pitch
Developed by Hitachi, the largest designer and manufacturer of CRTs in the world, Enhanced Dot
Pitch (EDP) mask technology came to market in late 1997. Enhanced Dot Pitch technology takes a
slightly different approach, concentrating more on the phosphor implementation than the shadow
mask or aperture grill.
On a typical shadow mask CRT, the phosphor trios are more or less arranged equilaterally,
creating triangular groups that are distributed evenly across the inside surface of the tube. Hitachi has
reduced the distance between the phosphor dots on the horizontal, creating a dot trio that’s more akin
to an isosceles triangle. To avoid leaving gaps between the trios, which might reduce the advantages
of this arrangement, the dots themselves are elongated, and are therefore oval rather than round.
The main advantage of the EDP design is increased clarity, which is most noticeable in the
representation of fine vertical lines. In conventional CRTs, a line drawn from the top of the screen to
the bottom will sometimes “zigzag” from one dot trio to the next group below, and then back to the
one below that. Bringing adjacent horizontal dots closer together reduces zigzag, and has an effect on
the clarity of all images.
CRT Electron Beam
If the electron beam is not lined up correctly with the shadow mask or aperture grill holes, the beam
is prevented from passing through to the phosphors, thereby causing a reduction in overall pixel
illumination. As the beam scans, however, it may sometimes regain alignment and therefore pass
through the mask/grill to the phosphors. The result of this varying realignment is that the brightness
rises and falls, producing a wavelike pattern on the screen, referred to as moiré. Moiré patterns are
often most visible when a screen background is set to a pattern of dots, as in a gray screen background consisting of alternate black and white dots, for example. The phenomenon is actually
common in monitors with improved focus techniques, because monitors with poor focus will have a
wider electron beam and therefore have more chance of hitting the target phosphors instead of the
mask/grill. In the past the only way to eliminate moiré effects was to defocus the beam. Now,
however, a number of monitor manufacturers have developed techniques to increase the beam size
without degrading the focus.
A large part of the effort toward improving the CRT’s image is aimed at creating a beam with
less spread. This will allow the beam to more accurately address smaller individual dots on the
screen without impinging on adjacent dots. This can be achieved by forcing the beam through
smaller holes in the electron gun’s grid assembly, but at the cost of decreasing the image brightness.
Of course, this can be countered by driving the cathode with a higher current so as to liberate more
electrons. Doing this, however, causes the barium that is the source of the electrons to be consumed
more quickly and reduces the life of the cathode.
Sony’s answer to this dilemma is SAGIC, or small aperture G1 with impregnated cathode. The
SAGIC approach comprises a cathode impregnated with tungsten and barium material, whose shape and quantity have been varied so that a denser electron beam can be produced without the high current that consumes the cathode. This arrangement allows the first element in the grid, known as G1, to be
made with a much smaller aperture, thus reducing the diameter of the beam that passes through the
rest of the CRT. By early 1999 this technology had helped Sony reduce its aperture grill pitch to
0.22 mm, down from the 0.25 mm of conventional Trinitron tubes. The tighter beam and narrower
aperture grill worked together to provide a noticeably sharper image.
In addition to dot size, control over dot shape is also essential; for optimal performance, the electron gun must correct dot shape errors that occur naturally due to the geometry of the tube. The
problem arises because the angle at which the electron beam strikes the screen must necessarily vary
across the screen’s width and height. For dots in the center of the screen, the beam comes straight
through the electron gun and, undeflected by the yoke, strikes the phosphor at a perfect 90 degrees.
However, as the beam scans closer to the edges of the screen, it strikes the phosphor at an increasing
angle, with the result that the area illuminated becomes increasingly elliptical as the angle changes.
The effect is most pronounced in the corners—especially with screens that are not perfectly flat—
when the dot grows in both directions. It is essential that the monitor’s electronics compensate for
the problem in order to maintain image quality.
It is possible to alter the shape of the beam itself, in sync with the sweeping of the beam across
the screen, by using additional components in the electron gun. In effect, the beam is made elliptical
in the opposite direction so that the final dot shape on the screen remains circular.
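A first-order model of this geometry: a beam arriving after being deflected by some angle paints a spot stretched by roughly one over the cosine of that angle. This simple model ignores glass thickness and faceplate curvature, so treat the numbers as illustrative only.

```python
import math

def spot_elongation(deflection_deg):
    """Ratio of the illuminated spot's long axis to the beam diameter
    when the beam meets the faceplate at the given deflection angle.
    A simple 1/cos model; real tube geometry is more involved."""
    return 1.0 / math.cos(math.radians(deflection_deg))

print(round(spot_elongation(0), 2))    # 1.0  (circular at screen center)
print(round(spot_elongation(45), 2))   # 1.41 (stretched near a corner)
```

The monitor's compensation circuitry effectively pre-distorts the beam by the inverse of this factor so the final spot stays round.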
Controls and Features of the CRT
Not so long ago, advanced controls were found only on high-end monitors. Now, even budget models
boast a wealth of image-correction controls. This is just as well since the image fed through to the
monitor by the graphics card can be subject to a number of distortions. An image can sometimes be
too far to one side, or appear too high up on the screen, or need to be made wider, or taller. These
adjustments can be made using the horizontal or vertical sizing and positioning controls. The most
common of the “geometric controls” is barrel/pincushion, which corrects the image from dipping in
or bowing out at the edges. Trapezium correction can straighten sides that slope in together, or out
from each other. Parallelogram corrections will prevent the image from leaning to one side, while
some models even allow the entire image to be rotated.
On-screen controls are also becoming increasingly common. These are superimposed graphics
that appear on the screen (obscuring parts of the main image), usually indicating what is about to be
adjusted. This is similar to TV sets superimposing, say, a volume bar while the sound is being
adjusted. There is no standard for on-screen graphics, and consequently there is a huge range of
icons, bars, colors, and sizes in use, some better than others. The main point, however, is to render
adjustments as intuitively, as quickly, and as easily as possible.
Sizes and Shapes
By the beginning of 1998, 15-inch monitors were gradually slipping to bargain-basement status, and
the 17-inch size, an excellent choice for working at 1,024 x 768 resolution, was moving into the slot
reserved for mainstream desktops. At the high end, a few 21-inch monitors were offering resolutions
as high as 1800 x 1440.
In late 1997 a number of 19-inch monitors appeared on the market, with prices and physical
sizes close to those of high-end 17-inch models, offering a cost-effective compromise for high
resolution. A 19-inch CRT is a good choice for 1280 x 1024—the minimum resolution needed for
serious graphics or DTP, and the power user’s minimum for business applications. It’s also a practical
minimum size for displaying at 1600 x 1200, although bigger monitors are preferable for that resolution.
One of the main problems with CRTs is their bulk. The larger the viewable area gets, the more
the CRT’s depth increases. The long-standing rule-of-thumb is that a monitor’s depth matches its
diagonal CRT size. CRT makers have been trying to reduce the depth by moving from the current 90-degree deflection to 100 or 110 degrees. However, the more the beam is deflected, the harder it is to
maintain focus. Radical measures to reduce CRT depth include putting the deflection coils inside the
glass CRT; they normally sit around the CRT’s neck.
The result of this development effort is the so-called “short-neck” CRT. In early 1998 17-inch
short-neck monitors measuring around 15 inches deep reached the market. The new technology has
taken over the 17-inch, 19-inch and 21-inch sizes, and a new rule-of-thumb was established where
the monitor depth is about two inches shorter than its diagonal size.
The shape of a monitor’s screen is another important factor. The three most common CRT
shapes are spherical (a section of a sphere, used in the oldest and most inexpensive monitors),
cylindrical (a section of a cylinder, used in aperture-grill CRTs), and flat square (a section of a sphere
large enough to make the screen nearly flat). A flat square tube (FST) is standard for current monitor designs.
Flat Square Tubes (FSTs)
Flat Square Tubes improve on earlier designs by having a screen surface with only a gentle curve.
They also have a larger display area, closer to the actual tube size, and nearly square corners. There
is a design penalty for a flatter, more square screen, because the less of a spherical section the screen
surface is, the harder it is to control the geometry and focus of the image on that screen. Modern
monitors use microprocessors to apply techniques like dynamic focusing to compensate for the
flatter screen.
These screens require the use of a special alloy, Invar, for the shadow mask. The flatter screen
means that the shortest beam path is in the center of the screen. This is the point where the beam
energy tends to concentrate, and consequently the shadow mask gets hotter here than at the corners
and sides of the display. Uneven heating across the mask can make it expand and then warp and
buckle. Any distortion in the mask means that its holes no longer register with the phosphor dot
triplets on the screen, and image quality will be reduced. Invar alloy is used in the best monitors
because it has a low coefficient of expansion.
By 2000, completely flat screens had become commonplace. One of the problems with flat
screens is that they accentuate the problem of the electron beam shape being elliptical at the point at
which it strikes the screen at its edges. Furthermore, the use of perfectly flat glass gives rise to an
optical illusion caused by the refraction of light, resulting in the image looking concave. Consequently, some tube manufacturers have introduced a curve to the inner surface of the screen to
counter the concave appearance.
Multimedia Monitors
Sound facilities have become commonplace on many PCs, requiring additional loudspeakers and
possibly a microphone as well. The “multimedia monitor” avoids having separate boxes and cables
by building-in loudspeakers of some sort, maybe a microphone, and in some cases a camera for
video conferencing. At the back of these monitors are connections for a sound card.
However, the quality of these additional components is often questionable, since they add very little to the cost of manufacture. For high-quality sound, nothing is better than good external speakers, which
can also be properly magnetically shielded.
Another development that has become increasingly available since the launch of Microsoft’s
Windows 98, which brought with it the necessary driver software, is USB-compliant CRTs. The
Universal Serial Bus (USB) applies to monitors in two ways. First, the monitor itself can use a USB
connection to allow screen settings to be controlled with software. Second, a USB hub can be added
to a monitor (normally in its base) for use as a convenient place to plug in USB devices such as
keyboards and mice. The hub provides the connection to the PC.
Digital CRTs
Nearly 99% of all video displays sold in 1998 were connected using an analog VGA interface, an
aging technology that represents the minimum standard for a PC display. In fact, today VGA represents an impediment to the adoption of new flat panel display technologies, largely because of the
added cost for the flat panel systems to support the analog interface. Another fundamental drawback
is the degradation of image quality that occurs when a digital signal is first converted to analog and
then back to digital before driving an analog-input liquid crystal display (LCD) panel.
The autumn of 1998 saw the formation of Digital Display Working Group (DDWG) that included computer industry leaders Intel, Compaq, Fujitsu, Hewlett-Packard, IBM, NEC, and Silicon
Image. The objective of the DDWG was to deliver a robust, comprehensive, and extensible specification of the interface between digital displays and high-performance PCs. In the spring of 1999, the
DDWG approved the first version of the Digital Visual Interface (DVI) specification, based on
Silicon Image’s PanelLink technology and using a Transition Minimized Differential Signaling
(TMDS) digital signal protocol.
While primarily of benefit to flat panel displays (which can now operate in a standardized all-digital environment without the need to perform an analog-to-digital conversion on the signals from
the graphics card driving the display device), the DVI specification potentially has ramifications for
conventional CRT monitors too.
Most complaints about poor CRT image quality can be traced to incompatible graphics controllers on the motherboard or graphics card. In today’s cost-driven market, marginal signal quality is not
uncommon. The incorporation of DVI with a traditional analog CRT monitor will allow monitors to
be designed to receive digital signals, with the necessary digital-to-analog conversion being carried
out within the monitor itself. This will give manufacturers added control over final image quality,
making consumer differentiation based on image quality much more of a factor than it has been
hitherto. However, the application of DVI to CRT monitors is not all smooth sailing.
Originally designed for use with digital flat panels, one of the drawbacks of DVI is that it has a
comparatively low bandwidth of 165 MHz. This means that a working resolution of 1280 x 1024 could
be supported at up to an 85-Hz refresh rate. Although this is not a problem for LCD monitors, it is a
serious issue for CRT displays. The DVI specification supports a maximum resolution of 1600 x 1200
at a refresh rate of only 60 Hz—totally unrealistic in a world of ever-increasing graphics card
performance and ever-bigger and cheaper CRT monitors.
The proposed solution is the provision of additional bandwidth overhead for horizontal and
vertical retrace intervals, facilitated through the use of two TMDS links. Digital CRTs that are
compliant with VESA’s Generalized Timing Formula (GTF) would then be capable of easily supporting resolutions exceeding 2.75 million pixels at an 85-Hz refresh rate.
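The bandwidth arithmetic above can be sketched with a rough calculation. The 25% blanking overhead used here is an illustrative stand-in for the real GTF retrace timings, not a figure taken from the DVI specification:

```python
# Estimate the TMDS pixel clock a display mode needs and whether one
# 165-MHz DVI link suffices. The 25% blanking overhead is an assumption.
SINGLE_LINK_MHZ = 165.0

def required_pixel_clock_mhz(width, height, refresh_hz, blanking_overhead=0.25):
    """Pixels per second, inflated for horizontal/vertical retrace, in MHz."""
    return width * height * refresh_hz * (1 + blanking_overhead) / 1e6

def links_needed(width, height, refresh_hz):
    """1 if the mode fits a single TMDS link, else 2 (dual link)."""
    if required_pixel_clock_mhz(width, height, refresh_hz) <= SINGLE_LINK_MHZ:
        return 1
    return 2

print(round(required_pixel_clock_mhz(1280, 1024, 85)))  # ~139 MHz: fits one link
print(round(required_pixel_clock_mhz(1600, 1200, 85)))  # ~204 MHz: needs a second link
```

Under this rough model, 1600 x 1200 at 60 Hz (~144 MHz) just fits a single link, matching the limitation described above, while higher refresh rates spill onto a second TMDS link.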
Another problem with DVI is that it is more expensive to digitally scale the refresh rate of a
monitor than using a traditional analog multisync design. This could lead to digital CRTs being more
costly than their analog counterparts. An alternative is for digital CRTs to have a fixed frequency and
resolution like an LCD monitor, and thereby eliminate the need for multisync technology.
Digital Visual Interface technology anticipates that screen refresh functionality will become part
of the display itself in the future. Using this methodology, new data need only be sent to the display
when changes to that data must be displayed. With a selective refresh interface, DVI can maintain
the high refresh rates required to keep a CRT display ergonomically pleasing, while avoiding an
artificially high data rate between the graphics controller and the display. Of course, a monitor would
have to employ frame buffer memory to enable this feature.
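The selective-refresh idea can be sketched as a delta update against the monitor's buffered copy of the frame. The function below is a hypothetical illustration of the concept, not part of the DVI protocol:

```python
# Sketch of selective refresh: compare the new frame against the copy
# held in the monitor's frame buffer and transmit only changed pixels,
# instead of re-sending every pixel at the full refresh rate.
def delta_update(buffered, new_frame):
    """Return (index, value) pairs for changed pixels; apply them in place."""
    changes = [(i, v) for i, (old, v) in enumerate(zip(buffered, new_frame)) if old != v]
    for i, v in changes:
        buffered[i] = v
    return changes

frame_buffer = [0, 0, 0, 0]          # copy held in the monitor's memory
changes = delta_update(frame_buffer, [0, 9, 0, 0])
print(changes)        # [(1, 9)]: one pixel sent, not four
print(frame_buffer)   # [0, 9, 0, 0]
```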
Digital Displays
Safety Standards
In the late 1980s concern over possible health issues related to monitor use led Swedac, the Swedish
testing authority, to make recommendations concerning monitor ergonomics and emissions. The
resulting standard was called MPR1. This standard was amended and became the internationally
adopted MPR2 standard in 1990, which called for the reduction of electrostatic emissions by infusing
a conductive coating onto the monitor screen.
A further standard, called TCO, was introduced in 1992 by the Swedish Confederation of
Professional Employees. The electrostatic emission levels in TCO92 were based on what monitor
manufacturers thought was possible rather than on any particular safety level, while MPR2 had been
based on what they could achieve without a significant cost increase. The TCO92 standard set stiffer
emission requirements, and also required monitors to meet the international EN60950 standard for
electrical and fire safety. Subsequent TCO standards were introduced in 1995 and again in 1999.
Apart from Sweden, the main impetus for safety standards has come from the U.S. In 1993, the
Video Electronics Standards Association (VESA) initiated its DPMS, or Display Power Management
Signaling standard. A DPMS compliant graphics card enables the monitor to achieve four states: on,
standby, suspend, and off, at user-defined periods. Suspend mode must draw less than 8W so that the
CRT, its heater, and its electron gun are likely to be shut off. Standby takes power consumption to
less than about 25W, with the CRT heater usually left on for faster resuscitation.
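The four DPMS states are signaled over the monitor's horizontal and vertical sync lines, which the sketch below encodes. The wattage ceilings for standby and suspend are the approximate figures quoted above; the sync-line mapping is the standard DPMS signaling scheme:

```python
# VESA DPMS states as signaled via the horizontal/vertical sync lines.
# Standby/suspend power ceilings are the approximate figures from the
# text; no ceiling is quoted for the "on" and "off" states.
DPMS_STATES = {
    "on":      {"hsync": True,  "vsync": True,  "max_watts": None},
    "standby": {"hsync": False, "vsync": True,  "max_watts": 25},
    "suspend": {"hsync": True,  "vsync": False, "max_watts": 8},
    "off":     {"hsync": False, "vsync": False, "max_watts": None},
}

def state_from_syncs(hsync, vsync):
    """Recover the DPMS state from which sync signals are present."""
    for name, sig in DPMS_STATES.items():
        if sig["hsync"] == hsync and sig["vsync"] == vsync:
            return name

print(state_from_syncs(True, False))  # suspend: CRT heater and gun shut off
```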
The Video Electronics Standards Association has also produced several standards for plug-and-play monitors. Known under the banner of DDC (Display Data Channel), they should, in theory,
allow a system to figure out and select the ideal settings. In practice, however, this very much
depends on the combination of hardware.
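Over DDC, the monitor delivers its capabilities as a 128-byte EDID block. The sketch below assumes the standard EDID 1.x layout (a fixed 8-byte header, the PnP manufacturer ID packed into bytes 8-9, and a checksum over all 128 bytes); the fabricated block uses Dell's well-known PnP ID purely as an example:

```python
def parse_edid_manufacturer(edid: bytes) -> str:
    """Decode the 3-letter PnP manufacturer ID from an EDID 1.x block."""
    if edid[0:8] != b"\x00\xff\xff\xff\xff\xff\xff\x00":
        raise ValueError("not an EDID block")
    if sum(edid[:128]) % 256 != 0:
        raise ValueError("bad checksum")
    word = (edid[8] << 8) | edid[9]          # big-endian 16 bits, top bit reserved
    letters = [(word >> shift) & 0x1F for shift in (10, 5, 0)]
    return "".join(chr(ord("A") + code - 1) for code in letters)

# Build a minimal fake block: header, Dell's PnP ID 0x10AC, zero padding,
# and a final byte that makes the 128-byte sum divisible by 256.
edid = bytearray(128)
edid[0:8] = b"\x00\xff\xff\xff\xff\xff\xff\x00"
edid[8], edid[9] = 0x10, 0xAC
edid[127] = (256 - sum(edid[:127]) % 256) % 256
print(parse_edid_manufacturer(bytes(edid)))  # DEL
```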
The Environmental Protection Agency’s (EPA’s) Energy Star is a power-saving standard,
mandatory in the U.S. and widely adopted in Europe, requiring a main power saving mode drawing
less than 30W. Energy Star was initiated in 1993, but really took hold in 1995 when the U.S. government, the world’s largest PC purchaser, adopted a policy to buy only Energy Star-compliant products.
Other relevant standards include:
ISO 9241 part 3 – The international standard for monitor ergonomics.
EN60950 – The European standard for the electrical safety of IT equipment.
The German TUV/EG mark – This means a monitor has been tested to both of the above standards, in
addition to the German standard for basic ergonomics (ZH/618) and MPR2 emission levels.
The quality of the monitor and graphics card, and (in particular) the refresh rate at which the combination can operate, are of crucial importance in ensuring that users who spend long hours in front of
a CRT monitor can do so in as much comfort as possible. These, however, are not the only factors
that should be considered for comfort of use. Physical positioning of the monitor is also important,
and expert advice has recently been revised in this area. It had been thought previously that the
center of the monitor should be at eye level. It is now believed that to reduce fatigue as much as
possible, the top of the screen should be at eye level, with the screen between 0 and 30 degrees
below the horizontal and tilted slightly upwards. Achieving this arrangement with furniture designed
in accordance with the previous rules, however, is not easily accomplished without causing other
problems with respect to seating position and, for example, the comfortable positioning of keyboard
and mouse. It is also important to sit directly in front of the monitor rather than to one side, and to
locate the screen so as to avoid reflections and glare from external light sources.
The Future of CRTs
With a 100-year head start over competing screen technologies, the CRT is still a formidable
technology. It is based on universally understood principles and employs commonly available
materials. The result is cheap-to-make monitors capable of excellent performance, producing stable
images in true color at high display resolutions.
Among the CRT’s most obvious shortcomings are:
High power consumption.
Mis-convergence and color variations across the screen.
Clunky high-voltage electrical circuits and strong magnetic fields that create harmful electromagnetic radiation.
Excessive size and weight.
There seems little doubt that the consumer of the future will demand a larger screen for home
entertainment. Those who have experienced sporting events or PC games on larger screens recognize
that what they get is a more compelling, more immersive experience—and they won’t willingly go
back. Sales of DVD players from manufacturers to retailers continue to grow aggressively, and in
2002 for the first time the sales of DVD players outpaced VCR sales. Many of those purchasers want
to see their movies on the largest screen possible.
Vacuum-tube technology continues to be the conventional display technology of the home due to
its low cost, simple manufacturing process, and good image quality. Yet, as consumers demand larger
screen sizes, CRT displays are taxed for brightness while trying to maintain a sharp image. Their
depth, weight, and sheer volume make them a bulky appliance to place in the home. The market-leading large-screen (39 inches diagonal) direct-view CRT-based TV measures some 28 inches front
to back, and weighs more than 250 pounds.
With even those who have the biggest vested interest in CRTs spending vast sums on research
and development, it is inevitable that one of the several flat panel display (FPD) technologies will
win in the long run. However, this race is taking longer than was once thought, and current estimates
suggest that flat panels are unlikely to account for greater than 50% of the market before the year
2006. Flat panel displays are being developed to realize a thinner unit with low voltage requirements
and low power consumption – objectives not easily achieved with CRTs.
Flat Panel Displays
With many countries finalizing standards and launching broadcast services, digital TV (DTV) is
becoming a force in the worldwide electronics industry. Manufacturers of DTVs and their components such as LCD panels, plasma display panels, and semiconductors are ramping-up production for
this market. Digital TV set sales will increase steadily over the next several years as digital terrestrial, and in some cases satellite, broadcasts become more common and unit prices fall, due to
technological advances and improved economies of scale.
By 2005, annual worldwide DTV set shipments will reach 26 million units, according to
Cahners In-Stat Group, with most sales occurring in North America, Europe, Japan, and several Asia-Pacific nations. Manufacturers are planning to add several new features to the basic DTV set design:
Digital cable ready – Very important for the U.S. market, where over 60 million households
are cable subscribers.
Digital connections – Either IEEE 1394 or DVI.
Hard disk drives – Will be added to DTV sets to enable personal video recording and to
cache data broadcasts.
Internet access – Some manufacturers will integrate 56K dial-up, cable, or DSL modems.
Electronic program guides – Already present in digital cable and satellite systems, some
DTV manufacturers will build EPG software into their sets.
The projected numbers of units for PDPs, LCDs, and rear projection displays are shown in
Figure 10.1, courtesy of Gartner Group.
Figure 10.1: Market Numbers of Flat Panel Displays (FPDs), 2000-2007 (Gartner 2003)
While Japanese producers pioneered FPD manufacturing, they have had limited success in
holding on to market share relative to the more aggressive Korean manufacturers, who have unquestionably benefited from a second-mover advantage. Furthermore, other shifts are also occurring in
the region, as can be seen with Philips, Samsung, and Sharp investing in display centers in Taiwan
and moving design and manufacturing centers to China.
Manufacturers are also exploring new display technology to create brighter pictures by using
alternatives to the CRT. Until very recently, most audio-video (AV) and data communications devices
used CRT monitors as their main display device. These monitors, however, require high voltage for
the emission and angle control of the electron beams, and it is difficult to slim down the size of CRT
units. Flat panel display technology is the perfect answer to demands for more compact displays with
greater energy efficiency and diversity. However, CRTs will continue to be used in the majority of
direct view and rear projection sets due to their lower cost. In addition, CRTs still offer the best
possible level of black reproduction.
Flat panel digital display technologies include:
Liquid crystal displays (LCD) – A liquid crystal display consists of an array of tiny segments (called pixels) that can be manipulated to present information. This basic idea uses
liquid crystal technology and is common to all displays, ranging from simple calculators
and wristwatches to a full color LCD television and computer screen. Manufacturers are
increasing the production of 20- to 30-inch panels to be used as TV displays. The resulting
economies of scale will make these sets more affordable. The different LCD types include:
• Active matrix LCDs (AMLCDs) – AMLCDs give good resolution and, in the color
version, vivid colors because each pixel is controlled by its own transistor. Thus,
AMLCDs have become the standard in the hand-held market. However, AMLCDs are
rather heavy and power-hungry because they are based on layers of glass and require a
backlight. In addition, the contents of the screen can only be viewed if the user is facing
the device head-on. There are two types of AMLCDs: TFT (thin-film transistor) and two-terminal non-linear devices.
• Liquid crystal on silicon (LCoS) – Thomson developed the first LCoS DTV set in
cooperation with Three-Five Systems, Corning, and ColorLink. Liquid Crystal on
Silicon uses three microdisplay imagers, optics, and a prism to enable thinner sets with
a brighter picture.
• Passive matrix LCDs (PMLCDs) – PMLCDs use fewer electrodes, which are arranged
in strips across the screen, as opposed to AMLCDs, which control each pixel individually. While this makes them less expensive and easier to manufacture, the color and
clarity are negatively affected. Passive Matrix LCDs are reasonably well suited to
smaller devices with monochrome screens. They are also known as STN (super twisted
nematic) LCDs.
• Ferro-electric LCD
• Polymer-Dispersed LCD
• Cholesteric LCD (ChLCD) – ChLCDs are differentiated in that once an image is
displayed on the screen, no additional power need be expended to keep it there, thus
saving the battery. Cholesteric LCDs also have a wider viewing angle and can be made of
plastic, improving ease-of-use and cutting weight. However, ChLCDs’ ability to display
color is limited, which makes the technology better suited for low-end applications.
Plasma display panels (PDP) – The term plasma refers to a gas that consists of electrons,
positively charged ions, and neutral particles. The PDP operates by passing electricity
through neon gas, causing it to become temporarily charged; light is produced when the gas
spontaneously discharges. The displays operate at high voltages, low currents, and low
temperatures, resulting in long operating lifetimes. Plasma technology allows larger flat
panel sets than LCD, ranging from 36 to 60 inches. The overwhelming majority of buyers are
corporate, but consumer purchases will increase as prices decline. Plasma display panel
types include DC operation and AC operation (including color AC PDPs).
Organic light emitting diode (OLED) – OLED refers to the light-emitting diode devices
made from organic materials. Types of OLED include monochromatic and full-color.
Digital light processing (DLP) – Developed by Texas Instruments, DLP uses a digital micromirror device to modulate reflected light.
Electro-luminescent (EL) displays – These displays directly convert the electrical energy
into luminous energy. In EL displays, the electrons associated with the chemical elements in
solid material acquire a high energy level from the electric field. Light is produced when the
electrons return to their normal, or ground state, condition. Types of EL displays include AC
thin-film electroluminescent (TFEL) display panels, color AC-TFEL displays, AM-TFEL
displays, AC Hybrid EL displays, DC Powder EL displays, and transparent EL displays.
Organic EL is a self-emitting flat panel display that uses the color radiating character of
organic material by electrical stimulation. Organic EL uses a silicon thin film transistor as
the switching device for each picture element, and has a brighter display, lower power
consumption, and faster response time than TFT-LCD. This type of display is not technologically stable at this time, and it is still under development for commercial production.
Field emission displays (FED) – FEDs are flat, thin vacuum tubes that use arrays of electron
emitters (cathodes) to produce electrons, which are accelerated towards a phosphor-coated
faceplate (anode) that emits visible light.
Vacuum fluorescent display (VFD) – A VFD is a flat CRT that uses multiple cathodes and a
matrix deflection system.
Microdisplays – Microdisplays are very small displays (one inch or less diagonal measurement) that are viewed through the use of optics. These miniature displays are generally the
size of the semiconductors from which they are made (the size of a quarter), and are viewed
close to the eye—either in eyeglasses or an eyepiece—yielding an image that looks like a
full-sized screen. To date, these devices have been limited by their cost, social acceptability,
and time-to-market issues. We think, however, that this method for viewing data will gain
wider acceptance over the next few years as costs come down and the unit’s power efficiency is recognized. The different types of microdisplays include:
• Transmissive microdisplays
• Reflective microdisplays:
- LCoS
- Microelectromechanical Systems (MEMS)
- Digital Micro-mirror Device (DMD)
- Grating Light Valve (GLV)
- Actuated Mirror Array (AMA)
• Emissive microdisplays:
- TFEL microdisplays
- Vacuum Fluorescent on Silicon (VFoS) microdisplays
- OLED microdisplays
• Scanning microdisplays
Direct-drive image light amplifier (D-ILA) – The basis for D-ILA is electronic valve technology
using liquid crystal on silicon. The technology was developed and is owned by JVC.
Inorganic LED displays – Inorganic LEDs were developed in the 1960s as an outgrowth of semiconductor technology. The devices emit light when a forward bias voltage is
applied to the PN junction in a single crystal of gallium arsenide (GaAs) or other III-V
compounds. By appropriate doping and/or use of crystals containing III-V materials, it is
possible to produce emissions of red, green, yellow, and blue light.
Flat-thin CRTs – Because of the stiff competition from flat panel displays, CRT manufacturers
are improving the manufacturing process for thinness, light weight, flatness, and low power.
Thin CRTs are emerging as a viable screen solution because they have wider viewing
angles, they can smoothly display motion on the screen, and they have great color. However,
at this point, they are expensive due to low availability. Long term, they could be a practical
option because they are good for outdoor usage, easy to add to a CRT manufacturer’s
current production lines, and simple to produce.
Light-emitting polymer (LEP) diode screens – Based on plastic, these screens can be
flexible and formed into almost any shape or size. As of 2003, LEP technologies were limited to
prototypes and degraded after 10,000 hours of use.
Electromechanical displays
Rotatable dipole displays
Electro-phoretic imaging displays (EPID)
Electrochromic displays
The most popular display types used in consumer appliances, digital TVs, and PC monitors are
LCDs, PDPs, DLPs, LCoS, and OLEDs, which are described in more detail below.
LCD—Liquid Crystal Displays
Liquid crystals were first discovered in the late 19th century by the Austrian botanist, Friedrich
Reinitzer. The term “liquid crystal,” itself, was coined shortly afterward by German physicist Otto
Lehmann. Liquid crystals are almost transparent substances, exhibiting the properties of both solid
and liquid matter. Light passing through liquid crystals follows the alignment of the molecules that
make them up—a property of solid matter. In the 1960s it was discovered that charging liquid
crystals with electricity changed their molecular alignment, and consequently the way light passed
through them—a property of liquids.
LCDs consist of two glass plates with liquid crystal material (transparent organic polymers)
between them. The LCD has an array of tiny segments (called pixels) that can be manipulated to
present information. Many LCDs are reflective, meaning that they use only ambient light to illuminate the display. LCDs contain transparent organic polymers that respond to an applied voltage. To
form the display, manufacturers deposit a polarizing film on the outer surfaces of the two ultra-flat
glass (or quartz) substrates, with a matrix of transparent indium tin oxide (ITO) electrodes on the
inner surfaces of these substrates. With micron-sized spacers holding the two substrates apart, the
sandwich is joined together. The substrates are cut into one or more displays, depending on the
original size of the substrates (from 12-inch to 22-inch square). The outer edges of each display are
then sealed with a gasket, the interior air is evacuated, and the void is injected with liquid crystals.
The polarizers on the front and back of the display are oriented 90° from one another. With this
orientation, no light can pass through unless the polarization of the light is altered. Liquid crystals
are a means for changing the polarization. When no voltage is applied, liquid crystals can be aligned
in twisted (90°) or super-twisted (270°) configurations. With these configurations the polarity of light
is rotated, allowing the light to pass through the front polarizer, thus illuminating the viewing
surface. When a voltage is applied, the liquid crystals align to the electric field created, the polarity
of the incoming light does not change, and the viewing surface appears dark.
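The behavior of the crossed polarizers follows Malus's law, I = I0·cos²θ, where θ is the angle between the light's polarization and the filter axis. A numeric sketch:

```python
import math

# Malus's law: intensity after a polarizer is I0 * cos^2(theta), where
# theta is the angle between the light's polarization and the filter axis.
def transmitted(i0, theta_deg):
    return i0 * math.cos(math.radians(theta_deg)) ** 2

# Crossed polarizers with no twist: light leaves the first filter at 0
# degrees and meets the second at 90 degrees, so nothing gets through.
print(round(transmitted(1.0, 90), 6))   # 0.0 -> screen dark
# Voltage off, 90-degree twisted cell: the twist rotates the light to
# match the second polarizer, so it passes in full.
print(round(transmitted(1.0, 0), 6))    # 1.0 -> screen lit
```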
All LCDs must have a source of reflected or back lighting. This source is usually a metal halide,
cold cathode, fluorescent, or halogen bulb placed behind the back plate. Since the light must pass
through polarizers, glass, liquid crystals, filters, and electrodes, the light source must be of sufficient
wattage to generate the desired brightness of the display. Typically, the internal complexity of the
display blocks greater than 95% of the original light from exiting on the viewer’s side. As a result,
the generation of unseen light causes a major drain on a battery-operated LCD’s power source.
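As a rough illustration of that drain: if the stack passes only 5% of the light (the figure quoted above), the backlight must be about twenty times brighter than the image the viewer actually sees. The 200 cd/m² target below is an arbitrary example value:

```python
# If the panel stack passes only ~5% of the backlight, the lamp must be
# ~20x brighter than the luminance the viewer sees.
PANEL_TRANSMISSION = 0.05   # from the text: >95% of the light is blocked

def backlight_needed(target_cd_m2):
    """Lamp luminance required for a given on-screen luminance."""
    return target_cd_m2 / PANEL_TRANSMISSION

print(backlight_needed(200))  # 4000.0 cd/m2 at the lamp for a 200 cd/m2 screen
```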
Some LCD systems perform much better than others. The greater twist angle of super-twisted
nematic (STN) liquid crystals allows a much higher contrast ratio (light to dark) and faster response
than conventional twisted nematic (TN) crystals. For color displays, each visible pixel must consist
of three adjoining cells, one with a red filter, one with a blue filter, and one with a green filter, in
order to achieve the red-green-blue (RGB) color standard. While color decreases the resolution of the
display, it also adds information to the display, particularly for desktop publishing and scientific applications.
Since its advent in 1971 as a display medium, the liquid crystal display has moved into a variety
of fields, including miniature televisions, digital still and video cameras, and monitors. Today, many
believe that the LCD is the most likely technology to replace the CRT monitor because there is no
bulky picture tube and a lot less power is consumed than with its CRT counterparts. The technology
involved has been developed considerably since its inception, to the point where today’s products no
longer resemble the clumsy, monochrome devices of old. It has a head start over other flat screen
technologies and an apparently unassailable position in notebook and handheld PCs where it is
available in two forms – low-cost, dual-scan twisted nematic (DSTN) and high image quality thin
film transistor (TFT).
The LCD is a transmissive technology. The display works by letting varying amounts of a fixed-intensity white backlight through an active filter. The red, green, and blue elements of a pixel are
achieved through simple filtering of the white light.
Most liquid crystals are organic compounds consisting of long rod-like molecules that, in their
natural state, arrange themselves with their long axes roughly parallel. It is possible to precisely
control the alignment of these molecules by flowing the liquid crystal along a finely grooved surface.
The alignment of the molecules follows the grooves; if the grooves are exactly parallel, then the
alignment of the molecules also becomes exactly parallel. In their natural state, LCD molecules are
arranged in a loosely ordered fashion with their long axes parallel. However, when they come into
contact with a grooved surface in a fixed direction, they line up parallel along the grooves.
The first principle of an LCD consists of sandwiching liquid crystals between two finely grooved
surfaces, where the grooves on one surface are perpendicular (at 90 degrees) to the grooves on the
other. If the molecules at one surface are aligned north to south, and the molecules on the other are
aligned east to west, then those in between are forced into a twisted state of 90 degrees. Light
follows the alignment of the molecules, and therefore is also twisted through 90 degrees as it passes
through the liquid crystals. When a voltage is applied to the liquid crystal, the molecules rearrange
themselves vertically, allowing light to pass through untwisted.
The second principle of an LCD relies on the properties of polarizing filters and light itself.
Natural light waves are oriented at random angles. A polarizing filter is simply a set of incredibly
fine parallel lines. These lines act like a net, blocking all light waves apart from those (coincidentally) oriented parallel to the lines. A second polarizing filter with lines arranged perpendicular (at 90
degrees) to the first would therefore totally block this already polarized light. Light would only pass
through the second polarizer if its lines were exactly parallel with the first, or if the light itself had
been twisted to match the second polarizer.
A typical twisted nematic (TN) liquid crystal display consists of two polarizing filters with their
lines arranged perpendicular (at 90 degrees) to each other, which, as described above, would block
all light trying to pass through. But the twisted liquid crystals are located in-between these
polarizers. Therefore, light is polarized by the first filter, twisted through 90 degrees by the liquid
crystals, and finally made to completely pass through the second polarizing filter. However, when an
electrical voltage is applied across the liquid crystal, the molecules realign vertically, allowing the
light to pass through untwisted but to be ultimately blocked by the second polarizer. Consequently,
no voltage equals light passing through, while applied voltage equals no light emerging at the other side.
The crystals in an LCD could alternatively be arranged so that light passed when there was a
voltage, and did not pass when there was no voltage. However, since computer screens with graphical
interfaces are almost always lit, arranging the crystals in the no-voltage-equals-light-passing configuration saves power.
Liquid crystal displays follow a different set of rules than CRT displays, offering advantages in terms
of bulk, power consumption, and flicker, as well as “perfect” geometry. They have the disadvantage
of a much higher price, a poorer viewing angle, and less accurate color performance.
While CRTs are capable of displaying a range of resolutions and scaling them to fit the screen,
an LCD panel has a fixed number of liquid crystal cells, and can display only one resolution at full-screen size using one cell per pixel. Lower resolutions can be displayed by using only a portion of
the screen. For example, a 1024 x 768 panel can display at a resolution of 640 x 480 by using only
about 63% of the screen's width and height. Most LCDs are capable of rescaling lower-resolution images to fill the screen
through a process known as ratiometric expansion. However, this works better for continuous-tone
images like photographs than it does for text and images with fine detail, where it can result in badly
aliased objects as jagged artifacts appear to fill in the extra pixels. The best results are achieved by
LCDs that resample the screen when scaling it up, thereby anti-aliasing the image when filling in the
extra pixels. Not all LCDs can do this, however.
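The difference between simple expansion and resampling can be sketched on a single scan line: pixel replication reproduces a hard edge as a jagged step, while linear interpolation fills the new pixels with intermediate values, anti-aliasing the edge. This is an illustrative model, not any particular monitor's scaler:

```python
# Scale one scan line from 4 to 8 pixels two ways: pixel replication
# (simple expansion) versus linear resampling, which interpolates new
# values and so anti-aliases edges.
def replicate(row, new_len):
    return [row[i * len(row) // new_len] for i in range(new_len)]

def linear(row, new_len):
    out = []
    for i in range(new_len):
        pos = i * (len(row) - 1) / (new_len - 1)   # position in the source row
        lo = int(pos)
        hi = min(lo + 1, len(row) - 1)
        frac = pos - lo
        out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out

edge = [0, 0, 255, 255]               # a hard black-to-white edge
print(replicate(edge, 8))             # [0, 0, 0, 0, 255, 255, 255, 255]
print([round(v) for v in linear(edge, 8)])  # intermediate grays soften the step
```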
Unlike CRT monitors, the diagonal measurement of an LCD is the same as its viewable area, so
there is no loss of the traditional inch or so behind the monitor’s faceplate or bezel. This makes any
LCD a match for a CRT 2 to 3 inches larger. A number of leading manufacturers already have 18.1-inch TFT models on the market capable of a native resolution of 1280 x 1024.
A CRT has three electron guns whose streams must converge faultlessly in order to create a
sharp image. There are no convergence problems with an LCD panel, because each cell is switched
on and off individually. This is one reason why text looks so crisp on an LCD monitor. There’s no
need to worry about refresh rates and flicker with an LCD panel—the LCD cells are either on or off.
An image displayed at a refresh rate as low as 40-60 Hz should not produce any more
flicker than one at a 75-Hz refresh rate.
Conversely, it’s possible for one or more cells on the LCD panel to be flawed. On a 1024 x 768
monitor, there are three cells for each pixel—one each for red, green, and blue, which amounts to
nearly 2.4 million cells (1024 x 768 x 3 = 2,359,296). There’s only a slim chance that all of these
will be perfect; more likely, some will be stuck on (creating a “bright” defect) or off (resulting in a
“dark” defect). Some buyers may think that the premium cost of an LCD display entitles them to
perfect screens—unfortunately, this is not the case.
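The arithmetic behind that pessimism is straightforward. The per-cell defect probability below is purely illustrative, not a manufacturing figure:

```python
# Subpixel count for a 1024 x 768 panel, and the chance a panel is
# completely flawless given an assumed per-cell defect probability.
cells = 1024 * 768 * 3
print(cells)                 # 2359296 individual cells

p_defect = 1e-7              # assumption: chance any one cell is bad
p_perfect = (1 - p_defect) ** cells
print(round(p_perfect, 2))   # ~0.79: roughly one panel in five has a flaw
```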
LCD monitors also have other elements that you don’t find in CRT displays. The panels are lit
by fluorescent tubes that snake through the back of the unit, causing a display to sometimes exhibit
brighter lines in some parts of the screen than in others. It may also be possible to see ghosting or
streaking, where a particularly light or dark image can affect adjacent portions of the screen. And
fine patterns such as dithered images may create moiré or interference patterns that jitter.
Viewing angle problems on LCDs occur because the technology is a transmissive system, which
works by modulating the light that passes through the display, while CRTs are emissive. With
emissive displays, there is a material that emits light at the front of the display, which is easily
viewed from greater angles. In an LCD, as well as passing through the intended pixel obliquely,
emitted light passes through adjacent pixels, causing color distortion.
Currently, most LCD monitors plug into a computer’s familiar 15-pin analog VGA port and use
an analog-to-digital converter to convert the signal into a form the panel can use. An industry
standard specification for a digital video port has been created by VESA. Liquid crystal display
monitors will incorporate both analog and digital inputs.
Creating color
In order to create the shades required for a full-color display, there have to be some intermediate
levels of brightness between all light and no light passing through. The varying levels of brightness
required to create a full-color display are achieved by changing the strength of the voltage applied to
the crystals. The liquid crystals in fact untwist at a speed directly proportional to the strength of the
applied voltage, thereby allowing the amount of light passing through to be controlled. In practice,
though, the voltage variation of today’s LCDs can only offer 64 different shades per element (6-bit)
as opposed to full-color CRT displays, which can create 256 shades (8-bit). Color LCDs use three
elements per pixel, which results in a maximum of 262,144 colors (18-bit), compared to true-color
CRT monitors supplying 16,777,216 colors (24-bit).
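The shade and color counts quoted above follow directly from the bit depths:

```python
# Shades per subpixel and total colors for 6-bit LCD drive vs 8-bit CRT drive.
def palette(bits_per_channel, channels=3):
    shades = 2 ** bits_per_channel
    return shades, shades ** channels

print(palette(6))  # (64, 262144): 18-bit color on a typical LCD
print(palette(8))  # (256, 16777216): 24-bit true color on a CRT
```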
As multimedia applications become more widespread, the lack of true 24-bit color on LCD
panels is becoming an issue. While 18-bit is fine for most applications, it is insufficient for photographic or video work. Some LCD designs manage to extend the color depth to 24-bit by displaying
alternate shades on successive frame refreshes, a technique known as Frame Rate Control (FRC).
However, if the difference is too great, flicker is perceived.
Hitachi has developed a technique whereby the voltage applied to adjacent cells to create
patterns changes very slightly across a sequence of three or four frames. With it, Hitachi can simulate
not quite 256 grayscales, but still a highly respectable 253 grayscales, which translates into over 16
million colors, virtually indistinguishable from 24-bit true color.
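The chapter quotes 253 grayscales without showing the arithmetic. One plausible accounting, assuming a four-frame cycle that mixes adjacent 6-bit drive levels (an interpretation for illustration, not a description of Hitachi's exact method), reproduces the figure:

```python
# The panel natively drives each subpixel with 6 bits -> 64 levels.
native_levels = 2 ** 6

# Frame rate control cycles a subpixel between two adjacent native
# levels across a short frame sequence; a 4-frame cycle can average
# to 3 extra perceived levels between each adjacent pair.
frames = 4
extra_per_gap = frames - 1        # 1/4, 2/4, 3/4 duty-cycle mixes
gaps = native_levels - 1          # 63 adjacent level pairs
perceived = native_levels + gaps * extra_per_gap

total_colors = perceived ** 3     # three subpixels per pixel
print(perceived, total_colors)    # 253 16194277
```

The result, 16,194,277 colors, matches the "over 16 million" figure in the text.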
Figure 10.2 shows the anatomy of the LCD projector, and Figure 10.3 shows the block diagram
of a high-resolution LCD monitor controller.
Types of LCDs
TFT-LCD, or active matrix – A flat panel display that uses a thin film transistor (TFT)
formed with silicon as the switching device for each picture element. A silicon layer forms the
thin film transistor on a glass substrate, and can take one of two physical forms, amorphous or polycrystalline:
• Amorphous – A TFT-LCD whose silicon layer is in the amorphous phase is an amorphous TFT-LCD.
Amorphous TFT-LCD has lower electron mobility than the poly type, and is normally
used for ordinary product applications, such as portable/notebook PCs and desktop LCD
monitors, that do not absolutely require very fine pixel pitch.
• Polysilicon – Polycrystalline silicon phases are divided into low-temperature polysilicon
(LTPS) and high-temperature polysilicon (HTPS) by the temperature at which the
silicon layer forms on the glass substrate. The LTPS phase has better electron mobility, which
makes fine pixel pitch possible, and is usually adopted by applications employing small
and midsize, but high-resolution, handheld displays such as those in PDAs and DSCs. The HTPS
phase is used in projector and viewfinder applications.
Figure 10.2: Anatomy of the LCD Projector
Figure 10.3: High-Resolution LCD Monitor Controller
The TFT is the most popular type of LCD in the market today.
TFD-LCD – A flat panel display that uses a thin film diode (TFD) formed with silicon as the
switching device of the picture element. It has a simpler manufacturing process than TFT-LCD, but it did not become the standard product in the market because most vendors focused
on TFT-LCD due to its well-established technology infrastructure.
LCoS – Liquid-Crystal-on-Silicon is primarily used for projector applications.
STN-LCD, or passive matrix – A flat panel display that uses super-twisted nematic liquid
crystal technology, and adopts a passive switching circuit for the picture element. There are
two forms of STN-LCD, monochrome and color:
• Monochrome STN-LCD – Monochrome STN-LCD does not have a color filter and
only displays black-and-white images, with some grayscale output. It is mainly used for
cost-sensitive applications that do not need color displays, such as cellular handsets.
• Color STN-LCD – Color STN-LCD uses a color filter consisting of red, green, and
blue unit cells for color display. It is applied to various applications from small cellular
handsets to large portable PCs, owing to its relatively low price. However, it is gradually losing market share due to its limitations of slow response time and narrow viewing angle.
Twisted Nematic (TN) LCDs – Used for watches and calculators.
Thin Film Transistor Displays
Many companies have adopted Thin Film Transistor (TFT) technology to improve color screens. In a
TFT screen, also known as active matrix, an extra matrix of transistors is connected to the LCD
panel—one transistor for each color (RGB) of each pixel. These transistors drive the pixels, eliminating the problems of ghosting and slow response speed that afflict non-TFT-LCDs. The result is
screen response times of the order of 25 ms, contrast ratios in the region of 200:1 to 400:1, and
brightness values between 200 and 250 cd/m2 (candela per square meter).
The liquid crystal elements of each pixel are arranged so that in their normal state (with no
voltage applied) the light coming through the passive filter is “incorrectly” polarized and thus
blocked. But when a voltage is applied across the liquid crystal elements they twist up to ninety
degrees in proportion to the voltage, changing their polarization and letting more light through. The
transistors control the degree of twist and hence the intensity of the red, green, and blue elements of
each pixel forming the image on the display.
Thin film transistor screens can be made much thinner than passive matrix LCDs, making them
lighter. They also have refresh rates now approaching those of CRTs, because current flows about ten
times faster in a TFT than in a DSTN screen. A standard VGA screen needs 921,600 transistors
(640 x 480 x 3), while a resolution of 1024 x 768 needs 2,359,296, and each transistor must be perfect. The complete matrix
of transistors has to be produced on a single, expensive silicon wafer, and the presence of more than
a couple of impurities means that the whole wafer must be discarded. This leads to a high wastage
rate and is the main reason for the high price of TFT displays. It’s also the reason why there are
liable to be a couple of defective pixels where the transistors have failed in any TFT display.
There are two phenomena that define a defective LCD pixel: a “lit” pixel, which appears as one
or several randomly placed red, blue and/or green pixel elements on an all-black background, or a
“missing” or “dead” pixel, which appears as a black dot on all-white backgrounds. The former
failure mode is the more common, and is the result of a transistor occasionally shorting in the “on”
state and resulting in a permanently “turned-on” (red, green or blue) pixel. Unfortunately, fixing the
transistor itself is not possible after assembly. It is possible to disable an offending transistor using a
laser. However, this just creates black dots that would appear on a white background. Permanently
turned-on pixels are a fairly common occurrence in LCD manufacturing, and LCD manufacturers set
limits, based on user feedback and manufacturing cost data, as to how many defective pixels are
acceptable for a given LCD panel. The goal in setting these limits is to maintain reasonable product
pricing while minimizing the degree of user distraction from defective pixels. For example, a 1024 x
768 native resolution panel, containing a total of 2,359,296 (1024 x 768 x 3) pixel elements, that has 20
defective pixels would have a pixel defect rate of (20/2,359,296) x 100 = 0.0008%. The TFT display
has undergone significant evolution since the days of the early Twisted Nematic (TN) technology-based panels.
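The defect-rate arithmetic in the example above can be reproduced in a few lines (variable names are illustrative):

```python
# Subpixel count and defect rate for an XGA panel, following the
# worked example in the text.
width, height = 1024, 768
subpixels = width * height * 3       # 2,359,296 R/G/B elements
defects = 20
rate = defects / subpixels * 100     # percentage of defective elements
print(subpixels, round(rate, 4))     # 2359296 0.0008
```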
In-Plane Switching
Jointly developed by Hosiden and NEC, In-Plane Switching (IPS) was one of the first refinements to
produce significant gains in the light-transmissive characteristics of TFT panels. In a standard TFT
display, when one end of the crystal is fixed and a voltage applied, the crystal untwists, changing the
angle of polarization of the transmitted light. A downside of basic TN technology is that the alignment of liquid crystal molecules alters the further away they are from the electrode. With IPS, the
crystals are horizontal rather than vertical, and the electrical field is applied between each end of the
crystal. This improves the viewing angles considerably, but means that two transistors are needed for
every pixel instead of the one needed for a standard TFT display. Using two transistors means that
more of the transparent area of the display is blocked from light transmission, so brighter backlights
must be used, increasing power consumption and making the displays unsuitable for notebooks.
Polysilicon Panels
The thin-film transistors that drive individual cells in the overlying liquid crystal layer in traditional
active-matrix displays are formed from amorphous silicon (a-Si) deposited on a glass substrate. The
advantage of using amorphous silicon is that it doesn’t require high temperatures, so fairly inexpensive glass can be used as a substrate. A disadvantage is that the non-crystalline structure is a barrier
to rapid electron movement, necessitating powerful driver circuitry.
It was recognized early on in flat-panel display research that a crystalline or polycrystalline form of
silicon (an intermediate crystalline stage comprising many small interlocked crystals, analogous to a layer of
sugar) would be a much more desirable substance to use. Unfortunately, this could only be
created at very high temperatures (over 1,000°C), requiring the use of quartz or special glass as a
substrate. However, in the late 1990s manufacturing advances allowed the development of low-temperature polysilicon (p-Si) TFT displays, formed at temperatures around 450°C. Initially, these
were used extensively in devices that required only small displays, such as projectors and digital cameras.
One of the largest cost elements in a standard TFT panel is the external driver circuitry, requiring a large number of external connections from the glass panel, because each pixel has its own
connection to the driver circuitry. This requires discrete logic chips arranged on PCBs around the
periphery of the display, limiting the size of the surrounding casing. A major attraction of p-Si
technology is that the increased efficiency of the transistors allows the driver circuitry and peripheral
electronics to be made as an integral part of the display. This considerably reduces the number of
components for an individual display; Toshiba estimates 40% fewer components and only 5% as
many interconnects as in a conventional panel. The technology will yield thinner, brighter panels
with better contrast ratios, and allow larger panels to be fitted into existing casings. Since screens
using p-Si are also reportedly tougher than a-Si panels, it’s possible that the technology may allow
the cheaper plastic casings used in the past—but superseded by much more expensive magnesium
alloy casings—to stage a comeback.
By 1999, the technology was moving into the mainstream PC world, with Toshiba’s announcement of the world’s first commercial production of 8.4-inch and 10.4-inch low-temperature p-Si (LTPS)
displays suitable for use in notebook PCs. The next major advance is expected to be LTPS TFTs
deposited on a flexible plastic substrate, offering the prospect of a roll-up notebook display.
CRT Feature Comparison
Table 10.1 provides a feature comparison between a 13.5-inch passive matrix LCD (PMLCD) and
active matrix LCD (AMLCD) and a 15-inch CRT monitor.
Table 10.1: Feature Comparison Between LCDs and CRTs
(The table’s data did not survive reproduction; recoverable values include response times of 300 ms for the PMLCD versus 25 ms for the AMLCD, and lifetimes of 60,000 hours for the LCD panels.)
Contrast ratio is a measure of how much brighter a pure white output is compared to a pure
black output. The higher the contrast ratio, the sharper the image and the purer the white.
When compared with LCDs, CRTs offer by far the greatest contrast ratio.
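As a worked illustration of this definition (the luminance figures below are hypothetical, not taken from the table):

```python
# Contrast ratio: luminance of full white divided by luminance of
# full black, both in cd/m2.
def contrast_ratio(white_cd_m2: float, black_cd_m2: float) -> float:
    return white_cd_m2 / black_cd_m2

# A panel emitting 250 cd/m2 white and 1 cd/m2 black yields 250:1.
print(contrast_ratio(250.0, 1.0))  # 250.0
```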
Response time is measured in milliseconds and refers to the time it takes each pixel to respond
to the command it receives from the panel controller. Response time is used exclusively when
discussing LCDs, because of the way their pixels are driven. An AMLCD has a much better response
time than a PMLCD. Conversely, response time doesn’t apply to CRTs, because of the way they
handle the display of information (an electron beam exciting phosphors).
There are many different ways to measure brightness. The higher the brightness value (represented in the table as a higher number), the brighter the white the display produces. The life
span of an LCD is quoted as the mean time between failures for the flat panel. This
means that if it runs continuously, it will have an average life of 60,000 hours before the light burns
out, equal to about 6.8 years. On the face of it, CRTs can last much longer than that.
However, while LCDs simply burn out, CRTs get dimmer as they age, and in practice don’t have the
ability to produce an ISO-compliant luminance after around 40,000 hours of use.
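The 6.8-year figure is simple arithmetic on the quoted 60,000-hour life:

```python
# Convert the quoted 60,000-hour backlight life into years of
# continuous (24 x 365) operation.
hours = 60_000
years = hours / (24 * 365)
print(round(years, 1))  # 6.8
```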
LCD Market
The total TFT-LCD revenue for PC-related applications (portable PCs and LCD monitors) is projected to grow to $22.2 billion by 2005. Revenue from non-PC applications (cellular handsets, PDAs, car
navigation systems, digital still cameras, digital video cameras, Pachinko, and portable games)
will increase to $7.7 billion in 2005. The revenue share of non-PC applications in the
entire TFT-LCD market will be 24.5% through 2005.
In 2000, the total TFT-LCD demand reached 31.7 million units, while total supply was 32.2
million units. The demand for portable PCs in 2000 was 26.0 million units, with a 30.4% year-to-year growth rate, and the demand for TFT portable PCs was 24.6 million units, with a 33.6%
year-to-year growth. Meanwhile, LCD monitor demand (including LCD TVs) was recorded at 7.0
million units, with a high 60% growth rate. In 2001, the demand for TFTs was at 42.2 million units.
However, price declines have continued from 2000 through 2004. Prices of 14.1-inch and 15.0-inch
LCDs, the main products for portable PCs and LCD monitors, declined by 24% and 30%, respectively, in 2001. Key applications of TFT-LCDs are segments, such as portable PCs, LCD monitors,
and LCD TVs, that require large LCDs.
The 15-inch grade monitor led the market in 2003, commanding about an 80% share. For the
post-15-inch grade market, aggressive marketing by several LCD manufacturers and the development
of low-end products that possess conventional viewing angle technology will offer a more easily
accessible price level for the 17-inch grade monitor. Larger size models (i.e., greater than 20-inch)
are not expected to see increased demand, and recorded a mere 2% share in 2003. That means bigger
TFT-LCD models will have limited adoption in the monitor segment due to their high price.
Today, portable PCs account for about 80% of the consumption of LCDs. LCDs are practical for
applications where size and weight are important. However, LCDs do have many problems with the
viewing angle, contrast ratio, and response times. These issues need to be solved before LCDs can
completely replace CRTs.
DisplaySearch reports that LCD TV sales accelerated with 182% growth in 2003, after 113% growth in 2002.
A total of 567,530 LCD TV units were shipped in the first quarter of 2003, with
Japan accounting for 69% of the market. LCD TV revenue for the same quarter was
$685 million, and shipments are expected to reach 745,300 units for the full year. The value of LCD TV sets
will increase from $3.4 billion in 2003 to $19.7 billion in 2007. Top vendors for LCD TVs are Sharp
Electronics, LG Electronics, Samsung Electronics, Sony, and Panasonic (Matsushita Electrical). The
optimism regarding the potential growth is based on a number of factors including:
• Larger substrate fabs, in which larger sized panels will be better optimized and more cost effective
• Lower panel prices
• Added competition
• Rapid emergence of HDTV
LCD Applications in PC Monitors
During 2000 through 2002, LCD monitor demand showed a 62% compound annual growth rate
(CAGR), while CRT demand is expected to show a meager 7% CAGR for the coming few years.
Consequently, the portion of TFTs in the total monitor market increased to 4% in 2002 and to 22% in
2004. Liquid crystal display monitor demand, excluding LCD TVs, was 7 million units in 2000, and
reached 12.2 million units in 2002. As LCD module prices continuously decrease, the demand for
LCD monitors will act as a key LCD market driver rather than portable PCs. However, the consumer
market that supports monitor demand seems quite price sensitive, and LCD monitor demand will
maintain its vulnerable position, which can fluctuate easily as a function of pricing and economic conditions.
In terms of display size, the 15-inch grade will undoubtedly lead the market by 2004 with more
than 75% share. For the post-15 inch grade market, the 17-inch grade will gain the second position in
2004 at 14% share, and the 18.1-inch grade will be the third position at 7.5% share due to relatively
conservative marketing.
Total monitor sales were 120 million units in 2001, with CRTs claiming 108 million units and TFT-LCDs claiming 12 million units, for respective market shares of 90% and 10%. By 2004, the
total monitor sales will exceed 157 million units, with CRTs selling 122 million units for a
78% market share and TFTs selling 35 million units for a 22% market share. In terms of the worldwide LCD monitor market demand by region, Japan’s market share continues to decrease. Of the
12.2 million LCD monitor units shipped in 2001, the market share of the respective geographies was:
U.S. at 22.6%, Europe at 16.6%, Japan at 51.5%, Asia/Pacific at 4.5%, and ROW (Rest of the World)
at 4.8%. Of the predicted 34.8 million units of LCD monitors that will be shipped in 2004, the
geographical market share changes to: U.S. – 32.6%, Europe – 20.8%, Japan – 28.2%, Asia/Pacific –
11%, and ROW – 7.3%.
In 2001, the supplies of 14-inch and 15-inch based TFT-LCDs were 68.1 and 54.9 million units,
respectively. In 2002, the market for 14-inch and 15-inch TFT-LCDs reached 90.6 and 75.3 million
units, representing growth rates of 32.9% and 36.8%, respectively. In light of this business situation,
many Japanese vendors accelerated their penetration into new and still profitable applications, such
as cellular phones and PDAs. Due to an overall cellular market recession and the delay of third-generation (3G) mobile services, however, the adoption of TFTs in cellular handsets will not occur
until 2004. During 2001 and 2002, Japanese, Korean and Taiwanese vendors focused on scaling up
the LCD monitor market’s size via reductions in production costs. If production yields catch up with
current a-Si TFTs, LTPS will be a strong solution in the long term to overcome the price declines
caused by a surplus market.
The continued oversupply market has pushed down the pricing of TFTs regardless of differences
in TFT sizes. In 2000, however, among products aimed at portable PCs, the 13.3-inch model recorded the largest price decline (a 38.3% drop) due to its waning demand in the market. In the case
of products for LCD monitors, the 17-inch model recorded the largest decline in 2000 (a 51.5%
drop). However, the prices of large-sized TFTs (i.e., over 17-inch) were abnormally high due to low
production yield. The large price declines were caused by both the oversupply market and yield improvements.
During 2001 and 2002, TFT prices for portable PCs and 15-inch LCD monitors dropped by
about 30–40%, and the larger sizes dropped more by about 45%. The sharp price drops—almost
collapse—prompted most vendors to sell their products for less than their total production cost. From
2Q01, price declines approached the break-even point with the variable cost of new entrants. Large-scale losses may act
as a brake on price declines. From 2001, however, as demand from the LCD monitor segment
mainly pulls the TFT market, and considering that segment’s cost-sensitive demand structure, it appears
difficult for the price trend to level off as long as total supply exceeds total demand.
LCD demand was 42.8 million units in 2001 and 57.1 million units in 2002, representing 33% growth.
The key driver of LCD demand was portable/notebook PC displays.
In 2002, LCD monitor demand was 21 million units and continues to see a CAGR of 55.8%. The
key driver for LCD monitor market growth will be the continuous price declines in TFT-LCDs and
LCD monitor systems. Unlike the portable PC market, which mainly depends on corporate demand,
LCD monitor demand will largely rely on consumer market and CRT replacement requirements. By
2005, TFT will have a 26.1% share of the total monitor market (including CRT and LCD). During
the same period, LCD monitor demand will show a 32.8% CAGR while CRT monitor demand will
show only a 3.6% CAGR. LCD monitor demand, excluding LCD TV, was 13.5 million units in 2001
and will reach 42 million units by 2005. As portable PC demand will show a stable 15.6% CAGR
for the same period, booming LCD monitor demand will mainly drive the total TFT-LCD market from
2001. However, the consumer market that drives monitor demand seems quite price sensitive, and
LCD monitor demand will remain in a vulnerable position that can fluctuate easily with prices and
economic status.
In terms of display size, in 2001, the 15-inch grade had an 80% share in the market and will
keep its leading position through 2005, even though the portion will decrease to 65% in 2005. Owing
to LCD price declines and increasing demand for multimedia environments, larger LCDs with higher
resolution will gain more share by 2005. LCDs larger than 15-inch had a 15% share in 2001, which
will increase to 35% in 2005. However, due to continued difficulties in yield, LCDs
larger than 20-inch will not easily increase in market share, and this segment of the market will reach
only 4% by 2005. In larger products, the 17-inch and 18-inch LCDs will be the main drivers of the
market. The 17-inch grade will get more share than the 18-inch grade due to aggressive promotion
and film technology adoption, but the 18-inch grade will continue to increase its share as production
yield improves.
The different types of LCD monitors, based on inches of screen, are:
12.1-inch – Starting in the first quarter of 2001, the 12.1-inch TFT-LCD changed its staple
resolution from SVGA to XGA in response to market demands, and it was able to maintain
a relatively small price decline compared with other sizes. During 2000 and the first quarter
of 2001, the 12.1-inch model maintained a stable demand increase due to strong market
needs for real portability. However, following the 13.3-inch model’s faster price decline in
March 2001, the 12.1-inch and 13.3-inch TFT-LCDs have maintained an almost-identical
price level. This may be one of the key inhibitors to demand growth in spite of the strong
portability features of the 12.1-inch size. Additionally, reduced legibility caused by the higher
resolution may be a weak point in promotion.
13.3-inch – Sandwiched by the 12.1-inch and 14.1-inch sizes, the 13.3-inch model has been
losing market share and recorded large and continuous price declines in the first quarter of
2001. However, starting in the second quarter of 2001, as the prices of the 13.3-inch and
12.1-inch models came close, it seemed to show an average level of price declines. Despite
lots of negative outlooks in the market, many big portable PC companies such as Compaq,
Hewlett-Packard, IBM, Toshiba, and NEC keep a certain number of 13.3-inch projects, and
even have plans to add new ones due to the size’s recovered competitiveness.
14.1-inch – The 14.1-inch TFT-LCD holds the top market share in the portable PC market,
and this trend will be sustained for a long while. However, it may not avoid price decline
trends despite high market demands. Basically, production facilities for the 14.1-inch model
are also refitted for the production of 17-inch models, and the facilities for 13.3-inch TFT-LCDs are optimized for 15-inch or 18.1-inch TFT-LCDs. This means 14.1-inch-optimized
facilities have difficulties in production control by size mix compared with 13.3-inch-, 15-inch-, and 18.1-inch-optimized facilities due to a premature 17-inch market. The 14.1-inch
prices will decline for a comparatively long period until the 17-inch demand picks up.
15-inch – There have been lots of efforts to promote big screen-size portable PCs since the
market changed to oversupply. However, the market still appears to be reluctant to use that
wide, but quite heavy, product. In that sense, the prices of 15-inch displays for portable PCs
could not differ from the overall market trend. The 15-inch displays for portable PC applications require a comparatively high technology level in development and production, and
new industry entrants still have some difficulties in joining as strong players. Therefore, a
certain level of price premium has been kept compared with 15-inch displays for monitor
applications, and this trend will continue through the year. The toughest competition has
been in the 15-inch-for-monitor applications market because it is almost the only viable
market for Taiwanese new entrants, and giants in Korea also focus on this market. However,
the market’s booming demands seemed to ease the steep price declines in the second quarter
of 2001. Unlike portable PCs, demand for LCD monitors mostly relies on the consumer
market and is quite price sensitive. Consequently, the price movement of the 15-inch
monitor applications market is restricted by a range, in which strong market demands can be
kept regardless of the supply-versus-demand situation.
17-inch – During the first quarter of 2001, 17-inch prices declined relatively little. However,
this was mainly because there was already an aggressive price reduction in the fourth quarter
of 2000. Despite offensive promotion by a Korean vendor and several Taiwanese vendors,
the proportion of 17-inch models in the LCD monitor market is still minor due to the stillhigh market price of monitor sets. As mentioned previously, the 17-inch and 14.1-inch sizes
jointly own production facilities, and, consequently, 14.1-inch-focused vendors need an
aggressive promotion for 17-inch to escape tough market competition. For that reason,
17-inch prices will decline continuously for some time regardless of the market situation.
18.1-inch – Even though the 18.1-inch model hit the road earlier than the 17-inch model,
the 18.1-inch has smaller market share due to a passive promotion by vendors. Basically, the
18.1-inch TFT-LCD shares production facilities with the 15-inch TFT-LCD, and there was
no strong need for aggressive promotion, ignoring the low production yield. However, the
surplus market, steadily improved yield, and already-decided next investments pulled 18.1-inch players into more aggressive promotion. During 2001, the price gap between 18.1-inch
and 17-inch models rapidly narrowed.
Despite signals of a decrease in demand in the PC market, portable PC demand will show stable
growth. Portable PC demand will not be deeply affected by slow demand in the consumer market due
to its focus on the corporate market. In the portable PC market, the dominant 14-inch
TFTs will continue to be the mainstream offering. Meanwhile, 13-inch and 12.1-inch TFTs will
continue their gradual descent. However, the application diversification of portable PCs and the
introduction of new Web-based applications will increase the demand for less-than 10-inch products.
The movement toward bigger display sizes in portable PCs will converge at the 14- or 15-inch level,
and it will strongly drive competition toward display quality that meets multimedia requirements, such
as high resolution, brightness, portability, and power consumption, rather than display size. This trend
will be the penetration barrier to new entrants.
LCDs for Non-PC Applications
Currently, TFT-LCDs are the only flat panel displays that are mass produced and have a large
infrastructure already established. These LCDs are mainly used for portable PC applications, but are
now rapidly evolving into monitors and other more segmented applications. Despite the overall PC
market contraction, the TFT-LCD market is showing a relatively high growth in volume, although
prices have continued to collapse. The TFT-LCD industry is a typical capital-equipment industry with
periodic imbalances between demand and supply. This demand-supply imbalance has caused
difficulties for vendors and customers in production planning and purchasing.
Unlike the portable PC market, LCD monitor demand comes from the consumer market and is
leveraged by low average selling prices. Similar to the memory industry, competition in the TFT-LCD market is capital and cost intensive, and is driven by players in Korea, Japan, and Taiwan.
So far, TFT-LCD vendors have recorded huge financial losses because of the depressed state of
the market. As the profit margins of TFT-LCD manufacturing continue to narrow, production cost
reductions by process optimization and increasing capacity for economies of scale will become
critical for the survival of many of the suppliers. As a result of this trend, many efforts have been
made to extend TFT-LCD applications beyond PC products. The suppliers have been the major
proponents of non-PC applications, mainly due to a strong captive market. Some of these applications are cellular handsets, PDAs, car navigation systems, digital still cameras (DSCs), digital video
cameras, Pachinko, portable games, and LCD TVs.
Cellular Handsets
Looking at the large, fast growing mobile handset market, display manufacturers are targeting
cellular handset displays. The TFT-LCD has been expected to see genuine demand starting in 2003,
following the scheduled mobile service evolution to 2.5-generation (2.5G) and 3G levels, and the
rapid growth of the handset market itself. However, despite the expectations, the following factors
have been slowing the overall cellular handset demand:
• Expectation of the launch of next-generation services (2.5G and 3G) has delayed handset replacement.
• Next-generation service technology is unfinished.
• European service providers spent too much on the 3G-service radio wave auctions, weakening their financial structure and hindering full-scale investment in service infrastructure.
The market stagnancy caused by these factors has pushed most handset vendors to devote
themselves to a survival competition in the low-cost, voice-focused 2G market. Consequently, the
formation of a genuine market for TFT-LCD displays for cellular handsets has been delayed, and the
displays still have comparatively expensive prices and immature operating features. Currently,
several types of 2G services (including technologies such as GSM, TDMA, and CDMA) exist, varying by
service region and provider. This diversity will likely remain in 2.5G services (including technologies such
as GPRS/EDGE, TDMA/EDGE, cdma2000, and W-CDMA). Despite an obscure service commencement schedule and uncertain demand growth potential, 3G services will adopt two standards: W-CDMA
for European players, and cdma2000 for U.S. and Asian players.
In 2001, the overall cellular handset market was around 425 million units, and it will grow to
672 million in 2005, with a 13% CAGR (market research by IDC). In terms of switch-on schedule,
3G service began in Japan and Korea in 2002, because replacement demand is strong due to a short
1- to 1.5-year handset life cycle. The second wave for 3G service will take place in Europe and Asia
around 2004, but the huge burden that radio wave auctions place on European players may make the
service schedule obscure. In the United States, where cellular service users’ growth is slower than it
is in other developed countries, 3G service will be launched during 2004–2005, a relatively late
schedule. Also, 2.5G will become the mainstream solution, not the interim step, due to the huge
investment requirement of 3G service.
3G service began in 2001 at the service-testing level with only minor demand, but the
initial commercial launch of 3G was in 2003. In 2002, 3G demand reached 8.8 million units and took
a 2% share, and 2.5G demand was 66 million units with a 14.9% share, which is much more than
3G. During 2002–2005, 3G demand will show a high CAGR of 155% and will reach 146 million
units in 2005, while 2.5G demand will show an 80.4% CAGR and will reach 388 million units in
2005. The overall cellular handset market will grow at a 14.7% CAGR during the same period, as 3G
service will commence in Japan/Korea, Europe, and the United States. Despite the high growth rate
and the spread of the 3G service area worldwide, 3G will only have a 21.7% share in 2005. The rest
of the market will be taken by 2.5G and even 2G service due to new subscribers who are more
focused on the cost of the handset than on service quality and diversity. The market recession in cellular
handsets will push many players into unsound financial positions and slow down the construction of
widespread 3G infrastructure. The recession will also hold the 3G share beneath its expected level.
A small portion of TFT-LCDs may be adopted in 2.5G handsets, but they will mainly be used in
3G service, where non-voice, visual content can be delivered. Therefore, TFT-LCD demand from
cellular handsets will be directly influenced by the 3G service schedule and its growth.
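The unit forecasts quoted throughout this section follow the standard compound-annual-growth-rate formula, CAGR = (end/start)^(1/years) - 1. As a quick sanity check (an illustrative sketch, not from the source), the 3G figures of 8.8 million units in 2002 and 146 million in 2005 imply roughly the 155% CAGR cited:

```python
def cagr(start_units, end_units, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end_units / start_units) ** (1.0 / years) - 1.0

# 3G handsets: 8.8M units (2002) -> 146M units (2005), a three-year span
print(f"3G CAGR: {cagr(8.8, 146, 3):.0%}")  # 3G CAGR: 155%
```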
The STN-LCD is now commonly used for cellular handsets and had a 96% share, while other
displays, including TFT and organic EL, reached only a 4% share in the 2.5G category in 2001. In
2005, the STN-LCD share will decline to 88%, while the other display types will share the remaining
12%. To smoothly render video content in 3G services, displays must be color with fast response
times; amorphous TFT-LCD will therefore capture the main market among active color
displays. The LPS display, despite better features such as lower power consumption and high
resolution, will still be relegated to a niche market because of the high production cost caused by low
yields. Organic EL will be affected by large barriers such as premature technology and the industry
being unprepared for mass production, and the display type will only take a 2% share in 2004. In
2005, the LPS portion of the market will be larger than conventional TFT-LCD, with a share of 7.8%,
because of the shift in production and yield improvements. Conventional TFT-LCD will have a 7.5%
share. Despite its small share of only 4.4% in 2005, organic EL will show a 151% CAGR during
2001–2005 (higher than any other display), and will gain a meaningful presence because of its excellent
performance and high brightness during outdoor use. However, the high CAGR of non-STN active
displays is caused by their small initial volume, and their demand will be limited in 2.5G and 3G
service. In 2005, overall color displays, TFT, LPS, and organic EL will take a share of 19.7%,
slightly under that of 3G service (21.7%), reflecting the 3G-dependent demand structure of color
displays in cellular handsets.
Personal Digital Assistants (PDAs)
PDAs belong to the smart handheld device (SHD) category. SHDs are primarily grouped into
handheld companions, including PC companions and personal companions, and vertical-application
devices such as pen tablets, pen notepads, and keypads. Most of the displays used in SHDs are
larger than 8 inches, similar to PC products. Only personal companions such as PDAs use the small
and midsize displays.
The PDA market continues to grow because:
Flexible point of access – Rapid progress in the mobile communication environment will
enable a flexible point of access with versatile devices.
Declining manufacturing costs – Continuous declines in the cost of key PDA parts, such as
memories and display devices, will make PDAs affordable.
Until recently, the PDA market was led by dedicated companies such as Palm and Handspring.
However, the success of this category of products has brought PC and consumer device manufacturing giants such as Compaq, HP, Sharp, Sony, and Casio to the market. Market researcher IDC reports
that in 2001 the total PDA market exceeded 15 million units, and will reach 58 million in 2005, with
a 39.2% CAGR. Palm OS has taken the top share of the built-in operating system market, followed
by Windows CE. However, Palm OS will gradually lose share as many new market entrants
adopt Pocket PC because of its compatibility with existing PC systems.
Monochrome STN has the dominant share of the PDA display market due to its unbeatably low
cost. However, active driving displays, such as TFT-LCDs, will gradually gain share through 2005
for the following reasons:
Price – Continuous TFT-LCD price declines due to excessive production capacity
Features – Strong multimedia PDA display feature developments, required to operate
various upcoming application software
Because PDAs are used outdoors and require full-time power to access mobile communications,
they need high brightness and advanced power-saving features; LPS will be the most suitable
candidate among currently available displays. However, higher production costs will force LPS
to concede the top spot in the color display market to amorphous TFT-LCD.
Amorphous TFT-LCD’s share will grow to 40%, and LPS will increase its share to about 25% in
2005. In display quality and specification, organic EL will exceed TFT-LCDs, making it one of the
potential PDA displays. However, due to its immature production infrastructure, the organic EL
market did not develop until 2003, when it garnered an initial 7.4% share; that share will grow to
20.7% in 2005. In 2005, TFT-LCD revenue for PDAs, including LPS, will exceed $2.3 billion,
growth well above the 39% CAGR of overall PDA unit demand and a reflection of the rapidly
rising TFT-LCD adoption rate. During the same period, the amorphous TFT-LCD and LPS markets
will grow at 66.8% and 66.7% unit CAGRs, respectively.
Car Navigation (Telematics) Systems
The car navigation system market initially grew without a direct relation to the mobile
communication environment, and its high system prices made the after-market installation
category its mainstay. In addition, the major market for car navigation systems has been regionally
limited to Europe and Japan, where road systems are quite complicated. The systems could not be
successful in the U.S. market, which, although it has the largest automobile demand, has a relatively
well-planned road system. However, the following factors will offer a driving force for the car
navigation system market:
Increasing demand – Demand for recreational vehicles and family traveling is gradually
increasing.
Better displays – The entrance of a new service concept, “telematics,” tied to the fast-evolving
mobile communications environment, is strongly pushing the introduction of new
display terminals for automobiles and car navigation systems.
Mobile service providers suffering from a slow market and automobile manufacturers coping with
tough competition from excessive production capacity are both eager to create a new business
field. Besides the basic positioning feature, other functions, including total
service provisions for auto travelers (e.g., accommodations, food reservations, and an automatic
rescue signal in emergencies) will be added to the car navigation system, and it will bring a new
business opportunity to both automobile manufacturers and mobile service providers. These functions will accelerate the long-term demand growth of car navigation systems in both the installed
base and after-market segments.
The total car navigation system market was just over 4 million units in 2001 and will reach 10.2
million in 2005, a 25.2% CAGR. Because the car navigation system is used and powered inside the
car, power-saving features are not strongly required; however, it needs a wide viewing angle because
of its slanted position relative to the driver. Therefore, amorphous TFT-LCD, which has a lower
cost structure than LPS and wider viewing angles than STN, will be an adequate solution for car
navigation system displays. Amorphous TFT-LCD will gradually increase its share to 94.2% by
2005. STN LCDs make up the remaining share of telematics systems.
The 5.8-inch and 7-inch wide-format displays will be standard for car navigation systems. In
2001, 5.8-inch displays garnered a 52.8% share, larger than that of 7-inch displays because of the
high TFT-LCD price. However, as TFT-LCD prices drop and features such as DVD playback are
gradually added to car navigation systems, the 7-inch display will garner more market share. In
2004, the market share of the 7-inch display will exceed that of the 5.8-inch display, and the
7-inch display will maintain the larger share through 2005. Total TFT-LCD revenue from car navigation
systems will grow to $891 million in 2005 (IDC research). Among the various non-PC applications,
car navigation systems will use comparatively large TFT-LCDs and consume a relatively large input
substrate capacity. However, the TFT-LCD revenue portion from car navigation systems will be
smaller than that from small applications due to the low price-per-unit.
Digital Still Camera
As PCs add more upgraded features and the Web becomes the fundamental PC working environment,
the demand for digital still cameras will be on track for sound growth. In 2001, the overall digital
still camera market shipped 17.5 million units, and it will reach 38.9 million units in 2005, with a
22.1% CAGR. However, because digital still cameras still trail conventional film cameras in
image quality and price for comparable features, full-scale replacement demand will be hindered.
Therefore, digital still cameras will form a differentiated demand from that of conventional cameras,
and will mainly be used under a PC-related working environment until the cost/performance ratio
exceeds that of conventional cameras. In addition, the fact that a PC is necessary to achieve any
output image will be a barrier to widespread demand.
Displays for digital still cameras require the following:
High resolution for a clear output image
Low power consumption for a portable user environment
Comparatively high brightness to make images visible outside
In 2005, amorphous TFT-LCD demand for digital still cameras will grow to 9.1 million units, a
69% share, while LPS shipments will grow from 2.3 million to 4.0 million units in 2005, maintaining
a similar market share. Currently, the display satisfying all of these needs at an affordable price is
the TFT-LCD, including LPS; by 2005, the TFT-LCD share will increase to 77%. In 2001, amorphous
TFT-LCD had a 27% share, smaller than that of LPS because of LPS's better features and display-cost
position in the digital still camera BOM. However, as more LCD vendors focus on low-cost amorphous
TFT-LCD production and specification, amorphous TFT-LCD demand will show slightly larger
growth than LPS. During 2001–2005, the amorphous type will record a 26.4% CAGR, while LPS
will show a 24.2% CAGR; in 2005, amorphous TFT-LCD will increase its share to 31%.
In 2001, the combined demand for TFT-LCD and LPS was 12.3 million units, and it will reach
29.9 million units in 2005, a 25.0% CAGR. During the same period, the TFT-LCD price will
drop by 31.4%, resulting in a revenue CAGR (13.7%) lower than volume growth. In 2005, total
TFT-LCD revenue for digital still cameras will reach $512 million. In comparison, TFD will lose
market share with the lowest CAGR over the five years; only 8.9 million TFD units will ship
in 2005, compared with 29.9 million units for TFT-LCD and LPS.
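The interplay of the volume, price, and revenue figures above can be checked the same way: a 25.0% annual unit CAGR combined with a 31.4% cumulative price drop over the four years 2001–2005 yields roughly the 13.7% revenue CAGR quoted. A minimal sketch (the even spreading of the price decline across the years is an assumption):

```python
def revenue_cagr(volume_cagr, total_price_drop, years):
    """Annual revenue CAGR from an annual volume CAGR and a total
    price decline assumed to be spread evenly over `years`."""
    annual_price_factor = (1.0 - total_price_drop) ** (1.0 / years)
    return (1.0 + volume_cagr) * annual_price_factor - 1.0

# Digital still camera displays, 2001-2005
print(f"revenue CAGR: {revenue_cagr(0.250, 0.314, 4):.1%}")  # about 13.8%
```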
Digital Video Camera
Based on basic demand for video cameras, the continuously evolving Web environment and PC
upgrades will result in sound growth of digital video camera demand. In addition, replacement
demand for analog products will accelerate the growth of digital video cameras, regardless of the
overall slow demand for consumer electronic goods. In 2001, total digital video camera demand
was 7.2 million units, and it will grow to 13.1 million in 2005, a 15.9% CAGR.
Displays for digital video cameras basically require the following:
Fast response time – Needed for a smooth motion picture
Additional brightness – Required for outdoor use
Low power consumption – Required for a portable environment
Consequently, TFT-LCDs, including LPS, are the only practical solution for digital video
cameras through 2005; despite the entrance of other active-driving displays such as organic EL,
the TFT-LCD's low production cost keeps it in place. TFT-LCD displays, including LPS, will
record $353 million in 2005. During this period, digital video camera unit demand will record a
15.9% CAGR; however, because of continuing price cuts, TFT-LCD revenue will show a lower
CAGR of 5.4%. The TFT-LCD market has been through a few supply shortage/supply surplus cycles
because of its generic equipment industry character, and this market fluctuation will remain for a
time in the PC-related applications market with portable PC and LCD monitors. However, despite the
overall TFT-LCD situation, TFT-LCD prices will continue to decline during 2001–2005 because of
the relatively small size of the market and low demand for customized product designs.
Pachinko Machines
The Pachinko machine has quite a large market, mostly in Japan. Pachinko is a cross between
a slot machine and pinball. Because it is a gambling machine, Pachinko requires comparatively large
displays relative to other non-PC applications and consequently consumes a meaningful portion of input
substrate capacity. However, the specialized environment of the gambling machine industry, which
operates under official control and influence, together with regionally dependent demand, makes this
market difficult to enter.
In 2001, the size of the Pachinko display market was 3.3 million units, which will decrease to
2.8 million in 2005, with –3.7% CAGR as the market continues to lose volume. The Pachinko market
is quite saturated and does not have a large growth potential. Consequently, the upcoming display
demand will be mainly for replacement needs. As prices have continued to decline, most Pachinko
machines have already adopted TFT-LCD displays. For the forecast period, 2001–2005, the overall
Pachinko machine TFT-LCD demand will gradually decrease. Therefore, the Pachinko machine
display market will remain unattractive and inaccessible for non-Japanese LCD vendors.
Pachinko machine screens do not require high-level features such as power saving, but they must be
reliable in a quite severe indoor operating environment. Thus, mainly conventional amorphous
TFT-LCD and even low-cost TFD-LCD screens will be used, along with a smaller number of
STN displays. Continuous TFT-LCD price declines will expel STN displays from this
product category in 2005. In 2001, overall TFT-LCD revenue for Pachinko machines was $353
million, which will decrease to $239 million in 2005, a –9.3% CAGR. The revenue CAGR
decline will be larger than the volume CAGR decline during the 2001–2005 period due to falling
LCD prices.
Portable Game
Like the Pachinko machine market, the portable game market also has unevenly distributed regional
demand, with most of the machines sold in Japan. In other areas, console-box-type game machines
without displays are quite popular. Game machines form a huge worldwide market, and many giant
companies, including Nintendo, Sony, and Microsoft, focus on it with newly developed devices. As
more complicated features are adopted by game machines, portable products with limited specifications and display quality will not grow enough to match the increasing demand in the game machine
market overall.
Researcher IDC expects that the total demand for portable games will continue to decrease from
2003 to 2005. Amorphous TFT-LCD is the dominant portable game display and reported an over
90% share in 2003. The remainder of the market share goes to STN, at 4.7%. A low production cost
and power-saving features are the main reasons that TFT-LCD is dominant. However, it is also
because the specific supply structure has led to a sole vendor covering the market, which effectively
bars LPS displays from portable games. In 2001, the overall portable game market shipped 24.5
million units, and a 95.3% share (23.4 million units) was provided by TFT-LCD. In 2005, the total
market will decrease to 19.5 million units, with a negative 5.5% CAGR, but the market share for
TFT will increase to 99.1% as the TFT-LCD prices gradually decline and replace STN displays in
the market.
In 2001, the overall TFT-LCD revenue coming from portable game products was $815 million,
but it will decrease to $462 million in 2005 as both the market volume and TFT-LCD prices will
decrease. During this period, TFT-LCD revenue will show a negative 13.2% CAGR, a larger
decline than the volume CAGR. However, the portable game
display market is almost entirely covered by one LCD vendor, Sharp; thus, the decline in demand
will not directly affect the overall industry.
Differences between CRTs and LCDs
An important difference between CRT monitors and LCD panels is that the former requires an analog
signal to produce a picture, while the latter requires a digital signal. This fact makes the setup of an
LCD panel's position, clock, and phase controls critical to obtaining the best possible display
quality. It also creates difficulties for panels without automatic setup features, which require
these adjustments to be made manually.
The problem occurs because most panels are designed for use with current graphics cards, which
have analog outputs. In this situation, the graphics signal is generated digitally inside the PC,
converted by the graphics card to an analog signal, then fed to the LCD panel, where it must be
converted back into a digital signal. For the complete process to work properly, the two converters
must be adjusted so that their conversion clocks run at the same frequency and phase. This
usually requires that the clock and phase for the converter in the LCD panel be adjusted to match that
of the graphics card. A simpler and more efficient way to drive an LCD panel would be to eliminate
the two-step conversion process and drive the panel directly with a digital signal. The LCD panel
market is growing from month-to-month, and with it the pressure on graphics adapter manufacturers
to manufacture products that eliminate the two-step conversion process.
One of the most important aspects of a consumer device, whether a PC or a handheld, is
the screen, since it provides the means of reading the device. In the past few years, we have seen a
gradual transition from monochrome gray-scale screens to color in PC companions, PDAs, and smart
phones. Higher cost, power requirements, and the fact that it is hard to see what is on the screen in
high light conditions, such as outdoors, have impeded the use of color screens. The majority of
mobile consumer devices have heretofore been equipped with thin film transistor liquid crystal
displays (TFT-LCDs), which are lighter and thinner, and consume less power than a standard CRT.
As TFT-LCDs have become more prevalent, average selling prices have fallen. There have
been intermittent shortages of LCDs because of pricing and product availability, with manufacturers
from several countries ramping up production of new technologies such as PDPs, OLEDs, Thin CRTs,
and light-emitting polymer (LEP) displays.
Key LCD advantages include:
Sharpness – The image is said to be perfectly sharp at the native resolution of the panel.
LCDs using an analog input require careful adjustment of pixel tracking/phase.
Geometric distortion – Zero geometric distortion at the native resolution of the panel. Minor
distortion is apparent for other resolutions because images must be re-scaled.
Brightness – High peak intensity produces very bright images. This works best for brightly
lit environments.
Screen shape – Screens are perfectly flat.
Physical dimension – Thin, with a small footprint. They consume little electricity and
produce little heat.
Key LCD disadvantages include:
Resolution – Each panel has a fixed resolution format that is determined at the time of
manufacture, and that cannot be changed. All other image resolutions require re-scaling,
which can result in significant image degradation, particularly for fine text and graphics,
depending upon the quality of the image scaler. Therefore, most applications should only
use the native resolution of the panel.
Interference – LCDs using an analog input require careful adjustment of pixel tracking/
phase in order to reduce or eliminate digital noise in the image. Automatic pixel tracking/
phase controls seldom produce the optimum setting. Timing drift and jitter may require
frequent readjustments during the day. For some displays and video boards, it may not be
possible to entirely eliminate the digital noise.
Viewing angle – Every panel has a limited viewing angle. Brightness, contrast, gamma, and
color mixtures vary with the viewing angle. This can lead to contrast and color reversal at
large angles; therefore, the panel needs to be viewed as close to straight ahead as possible.
Black-level, contrast and color saturation – LCDs have difficulty producing black and very
dark grays. As a result they generally have lower contrast than CRTs, and the color saturation for low intensity colors is also reduced. Therefore, they are less suitable for use in
dimly lit and dark environments.
White saturation – The bright-end of the LCD intensity scale is easily overloaded, which
leads to saturation and compression. When this happens the maximum brightness occurs
before reaching the peak of the gray-scale or the brightness increases slowly near the
maximum. It requires careful adjustment of the contrast control.
Color and gray-scale accuracy – The internal gamma and gray-scale of an LCD is very
irregular. Special circuitry attempts to fix it, often with only limited success. LCDs typically produce fewer than 256 discrete intensity levels. For some LCDs, portions of the
gray-scale may be dithered. Images are pleasing but not accurate because of problems with
black-level, gray-level, and gamma, which affects the accuracy of the gray-scale and color
mixtures. Therefore, they are generally not suitable for professional image color balancing.
Bad pixels and screen uniformity – LCDs can have many weak or stuck pixels, which are
permanently on or off. Some pixels may be improperly connected to adjoining pixels, rows
or columns. Also, the panel may not be uniformly illuminated by the backlight, resulting in
uneven intensity and shading over the screen.
Motion artifacts – Slow response times and scan rate conversion can result in severe motion
artifacts and image degradation for moving or rapidly changing images.
Aspect ratio – LCDs have a fixed resolution and aspect ratio. For panels with a resolution of
1280 x 1024, the aspect ratio is 5:4, which is noticeably smaller than the 4:3 aspect ratio for
almost all other standard display modes. Some applications may require switching to a
letterboxed 1280 x 960, which has a 4:3 aspect ratio.
Cost – Considerably more expensive than comparable CRTs.
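The aspect-ratio arithmetic in the list above is easy to verify: dividing width and height by their greatest common divisor gives the reduced ratio. A small illustrative sketch:

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce a pixel format to its simplest width:height ratio."""
    d = gcd(width, height)
    return width // d, height // d

print(aspect_ratio(1280, 1024))  # (5, 4): the native panel format
print(aspect_ratio(1280, 960))   # (4, 3): the letterboxed mode
```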
Further requirements of a display include:
Contrast ratio – The contrast ratio (the brightness of an image divided by the darkness of an
image) contributes greatly to visual enjoyment, especially when watching a video or
television. The LCD panel determines the contrast ratio by blocking out light from the
backlight—the blackness of the black and the brightness of the white define a monitor’s
contrast ratio.
Color depth and purity – The color filters of the LCD panel—the red, green, and blue subpixels—establish the colors shown on a flat panel display. The number of colors a panel can
display is a function of how many bits of information make up each pixel on the screen.
Response time – You need an LCD that has very fast response time when you are looking at
full motion video if the scene is going to look sharp and crisp. Otherwise, you would see
“comet tails” streaming behind moving images. Slow response times are unacceptable for
video and fast animation applications.
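The contrast-ratio requirement above is simply the quotient of white and black luminance. A tiny sketch with hypothetical panel values (the 250 and 0.5 cd/m² figures are illustrative, not from the source):

```python
def contrast_ratio(white_luminance, black_luminance):
    """Contrast ratio = peak white luminance / black luminance (cd/m^2)."""
    return white_luminance / black_luminance

# Hypothetical LCD panel: 250 cd/m^2 white, 0.5 cd/m^2 black
print(f"{contrast_ratio(250, 0.5):.0f}:1")  # 500:1
```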
The following lists the most common video processing algorithms that will be used for LCD TVs:
Aspect ratio conversion
Color compensation – Compensates for variations in the color performance of a display, and
allows any color to be addressed independently and adjusted without impacting other colors.
For example, the color depth of an LCD panel may be only six bits per pixel, providing
262,144 different colors—but an incoming analog computer signal may be eight bits per
pixel, providing 16.7 million colors.
Noise reduction.
Motion artifact reduction.
Video sample rate conversion.
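The 262,144 versus 16.7 million figures in the color-compensation point follow from powers of two, with three subpixels (red, green, blue) per pixel:

```python
def displayable_colors(bits_per_subpixel):
    """Total colors = 2 ** (bits per subpixel * 3 subpixels: R, G, B)."""
    return 2 ** (bits_per_subpixel * 3)

print(displayable_colors(6))  # 262144 - a 6-bit-per-subpixel LCD panel
print(displayable_colors(8))  # 16777216 - an 8-bit source ("16.7 million")
```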
PDP—Plasma Display Panels
Plasma Display Panels (PDPs) consist of front and back substrates with phosphors deposited on the
inside of the front plates. These displays have cells that operate similarly to a plasma or fluorescent
lamp, where the discharging of an inert gas between the glass plates of each cell generates light.
Depending upon the type of gas used, various colors can be generated. In a monochrome display, the
light from the gas discharge is what is seen on the display. To obtain a multicolor display, phosphor
is required. The plasma panel uses a gas discharge at each pixel to generate ultraviolet radiation that
excites the particular phosphor, which is located at each pixel. The PDP requires a wide range of
switching circuits and power supplies to control the on and off gas discharges and to produce consistently high-quality images. It has been a complicated technology, in part because the Y-side driver circuit
must handle both address scanning and image-sustaining functions.
The newest generations of PDPs feature phosphors that are continuously “primed”; that is, the
panel remains illuminated at a low level so it stays active and can respond with immediate luminescence.
But the downside of continued illumination, until recently, was an ever-present low-level glow,
which reduced contrast. The vital design objectives for new generations of maturing PDP technology
became constant illumination combined with high contrast and very high brightness, or luminosity.
Advantages of PDPs over cathode ray tube (CRT) TVs are:
Accurate cell structures – PDPs have cell structures accurate enough to produce a geometrically perfect picture. Standard CRT TVs have a geometric distortion due to the inability of
the electron beam to focus on all points.
High brightness levels – PDPs are evenly illuminated without the typical dark/hot spots
observed from CRTs.
Perfect focus – PDPs can achieve perfect focus, whereas the CRT has regions that are less
focused than others.
No magnetic interference – PDPs are not susceptible to magnetic fields (while a CRT's
electron beam is influenced even by the earth's magnetic field).
Thin profile – A plasma monitor/TV is never more than 4 inches thick and is lightweight.
This allows PDPs to be hung on the wall, making it an excellent solution when dealing with
space restrictions.
Large screen size.
Full color with good color parity.
Fast response time for video capability.
Wide viewing angle of greater than 160 degrees in all directions.
Insensitivity to ambient temperatures.
Overall, PDPs are believed to offer superior cost-to-view for TV applications in form factors
greater than 30-inches in the near term. The TFT-LCD and LCoS alternatives have yet to match the
combination of picture quality, price, form factor, and availability offered by PDPs, and OLED
technology has yet to gain critical mass even in the small-format segments. While PDPs face serious
competition from LCD systems in the sub-40-inch range and from rear- and front-projection in the
60-inch size range, they will find a sweet spot in the 40- to 60-inch range.
Disadvantages of PDPs are:
Costly to manufacture – Plasma technology is complex and the manufacturing process is
time consuming, so production yields are very low. Currently, PDPs cannot take significant
sales volume away from CRT TVs because of their high consumer cost of approximately $10,000.
Pixel burn-out – Older generations of color plasma TVs/monitors have experienced gas
pixels burning out on the panel, creating visible black spots on the screen and frustrating
customers, who must pay for the display's repair. Also, phosphor degradation shortens panel
lifetime (measured as the time to 50% luminance).
Coarse resolution – Barrier ribs are one source of the tradeoffs inherent in PDP display
technology. They are responsible, for example, for the PDP's relatively coarse resolution, which
appears to be limited to just over 30 pixels/inch; a PDP must therefore be at least 42 to 50
inches in size. Some PDP makers are working to optimize their barrier-rib manufacturing
processes (sand blasting, screen printing, etc.) in order to achieve finer geometries and,
thus, higher resolution. But shrinking the barrier ribs has two significant drawbacks. First,
the smaller pixel cell area reduces the brightness of the display; second, the smaller the
ribs, the more fragile they become. The result of broken ribs is either low manufacturing
yield (which raises cost) or the most disturbing of display defects: pixels that are always on
or off.
No analog gray scale – Analog gray scale is impossible with a PDP, since its pixels are
either on or off due to gas physics, with no gradations in between. Gray scale is therefore
generated through digital frame-rate modulation, which introduces its own set of tradeoffs.
To get full gray scale, a single PDP frame may contain up to twelve sub-frames, each
consisting of combinations of so-called sustain pulses. Low gray levels typically require just
a few pulses, while high gray levels may require up to several hundred. The need for
complex frame-rate modulation to achieve gray scale means that PDPs require complex
control circuitry, specifically a heavyweight digital signal processing (DSP) circuit to
generate the required data and a large high-frequency pulse circuit for the PDP's address and
sustain voltages. The additional complexity adds cost, of course, in both parts count
and real estate.
Poor image quality – The image quality of PDP displays has improved radically since they
were first introduced a few years ago, and it is expected to keep improving. Nevertheless,
because of the PDP's inherent tradeoffs, it still lags the video quality and overall
performance of the CRT; the more sub-frames, for example, the less bright the display.
This can be compensated for by making cell sizes bigger, but with the associated tradeoff
of reduced resolution.
Poor performance – Performance issues similar to those associated with LCDs, including
luminance, contrast ratio, power efficiency (luminous efficiency), elimination of motion
artifacts, increased resolution for advanced TV, driving-voltage reduction, poor dynamic
range and black levels, and high pixel count.
Low luminance (not sunlight-readable) – High luminance is required for outdoor use.
Low contrast in high ambient light.
High power consumption and heat generation.
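The frame-rate-modulation scheme described under “No analog gray scale” above amounts to a weighted decomposition: each sub-frame carries a number of sustain pulses proportional to its weight, and a pixel fires only in the sub-frames whose weights sum to its gray level. A simplified sketch using plain 8-bit binary weights (real panels use more elaborate weightings to reduce motion artifacts):

```python
def subframes_for_level(gray_level, weights=(1, 2, 4, 8, 16, 32, 64, 128)):
    """Return the binary-weighted sub-frames that fire for a gray level."""
    fired = [w for w in weights if gray_level & w]
    assert sum(fired) == gray_level  # the weights reconstruct the level
    return fired

print(subframes_for_level(5))    # [1, 4]: low levels need few pulses
print(subframes_for_level(200))  # [8, 64, 128]: high levels need many
```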
Among the flat-panel display candidates for large-screen TVs, the PDP has a clear head start,
with 42-inch-diagonal and larger PDP TVs already available in the commercial market. Plasma
Display Panels are capacitive devices that emit light through the excitation of phosphors. The PDP
adds the complexity of a gas chamber, which necessitates a high vacuum, and a precise three-dimensional cell using an internal barrier-rib structure to define discrete pixel locations.
The cost of PDPs has also come down nicely over the past few years, driven in part by supply
glut and in part by manufacturing efficiencies. This sharp price reduction will continue, but the
ambitious cost reductions that the PDP industry touted early on have been repeatedly pushed back.
Further improvements also continue in areas of picture quality, including contrast ratio and brightness levels. It is expected that PDP TVs will still be prohibitively expensive even by 2004, and will
not make much of an impact on the consumer market. The rather complicated steps like barrier rib
construction, phosphor deposition onto the side walls of the barrier ribs, sealing, and evacuation are
major bottlenecks in the long and expensive PDP manufacturing processes. The PDP process requires
about 70 tools and 70 process steps. A theoretical display fabrication facility with an annual capacity
of 250,000 displays would require about a $250 million investment for PDP. Because the key
components are fabricated using photolithographic or thick-film deposition technology, yield
improvement holds the key to further cost reduction. Furthermore, a large number of driver ICs are
required to create light emissions from each cell, and the plasma discharges consume a high voltage
(150-200V) and a lot of power (300W). Finally, there continue to be technical challenges relating to
picture quality; for example, contrast tends to deteriorate over time because of a continuous occurrence of slight plasma discharge.
PDP Market
Plunging prices, improving quality, and enhanced features have enabled PDP systems to make major
inroads into the consumer market, thus, paving the way for rapid growth over the next several years,
according to iSuppli/Stanford Resources. The year 2002 brought dramatic gains in PDP sales with
revenue increasing by 62% and units sold increasing by 143%. The average factory price of a PDP
system fell to $2,743, down 33% from $4,100 in 2001, with the drop in PDP system pricing a result
of declines in panel pricing and material costs as well as a reduction in channel costs.
The worldwide PDP system factory revenue will grow to $11.2 billion in 2007, up from $2.2 billion in 2002. Unit shipments will rise to 8 million in 2007, up from 807,096 in 2002 and 1 million in 2003. The cost per diagonal inch fell from more than $200 to $150 in 2001, to $100 by the end of 2002, and is predicted to fall to $75 by year-end 2003. It is predicted that by 2007 the consumer sector will account for 81% of systems.
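The growth rate implied by these forecasts is easy to check with a compound-annual-growth-rate (CAGR) calculation; a minimal sketch using the revenue figures from the paragraph above:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start/end forecast."""
    return (end / start) ** (1 / years) - 1

# $2.2B (2002) to $11.2B (2007) implies roughly 38% growth per year.
print(f"{cagr(2.2, 11.2, 5):.1%}")
```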
The main reasons for the increasing popularity of PDP systems among consumers are:
Declining prices – A 50-inch PDP that cost $10,000 in 2001 is expected to cost only $1,200
in 2010.
Performance benefits – Better viewing angles, improved picture quality, and thin form factors compared to competing technologies like CRTs, LCDs, and projectors.
Size availability – Plasma displays are available in screen sizes ranging from 32 inches to 63 inches, covering the needs of different markets and geographies.
Consumer outlet marketing – Availability of PDP TVs at mainstream consumer electronics
outlets rather than only at specialty electronics stores or through audio-video integrators.
Digital TV advancement – Worldwide migration toward Digital TV through cable and
satellite networks and HDTV format. The progressive scanning format of PDPs works well
with DTV content.
The global PDP TV industry is controlled (OEM) mainly by Sony, Panasonic (Matsushita),
Pioneer, NEC, and Fujitsu. The top five plasma panel manufacturers are Fujitsu-Hitachi Plasma,
Matsushita, Pioneer, NEC, and LG Electronics. The existing providers are, however, facing competition from Chinese and Taiwanese companies, which are set to join the market using their ample production capacity for PC monitors.
A combination of factors is responsible for the high prices, including:
Cost of driver ICs – Unless some way of significantly reducing the operating voltage can be found, expensive high-voltage driver ICs will remain necessary.
Driver chip expense – Although column driver chips are less expensive, a large number are required. For example, 60 are needed to drive a 1920 x 1080 display, and they thus represent a significant cost.
Large power supply.
Specialized glass substrates.
Multiple interconnects.
EMI filters.
Metal electrode parts.
Low manufacturing yield – Although it is now approaching 80%.
Material costs – Plasma display costs are driven by materials, such as ceramics and packaging materials.
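The driver-chip arithmetic in the list above follows directly from the column-output count; a minimal sketch, assuming 96 outputs per driver IC (a figure implied by the 60-chip example for a 1920-wide panel, not stated in the text):

```python
import math

def column_driver_count(h_pixels, outputs_per_chip=96, subpixels=3):
    """Estimate the number of column driver ICs for a plasma panel.

    Each pixel column needs one output line per RGB subpixel, so a
    1920-wide panel needs 1920 * 3 = 5760 column outputs in total.
    """
    return math.ceil(h_pixels * subpixels / outputs_per_chip)

print(column_driver_count(1920))  # 60 drivers for a 1920 x 1080 panel
```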
A number of cost reduction possibilities exist for plasma displays, and are listed below:
Cheaper substrate – Use ordinary soda-lime, float-process window glass. However, this
suffers substantial dimensional change during high-temperature processing.
Substrate layer replacement – Replace one of the glass substrates with a sheet of metal
laminated to a ceramic layer.
Different electrodes – Use all-metal electrodes instead of either transparent indium-tin-oxide
or tin-oxide electrodes in parallel with high conductivity, opaque metal bus bars.
Different scan driver scheme – Use single-scan addressing rather than dual-scan designs, which halves the number of data address drivers. However, the increased number of horizontal lines per scan requires more precise address timing within the fixed frame time.
As with LCD TVs, a number of video processing algorithms are commonly used in PDP TVs, listed as follows:
Post-processing – Gamma correction, sharpness enhancement, color correction
Aspect ratio conversion
Noise reduction
Motion artifact reduction
Video sample rate conversion
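Of the post-processing steps listed above, gamma correction is the easiest to illustrate; a minimal sketch of an 8-bit lookup-table approach (the 2.2 exponent and the LUT method are illustrative assumptions, not from the text):

```python
def gamma_lut(gamma=2.2, bits=8):
    """Build a gamma-correction lookup table for 8-bit pixel values.

    Each input code is normalized to 0..1, raised to 1/gamma, and
    scaled back to the integer output range.
    """
    levels = 2 ** bits
    top = levels - 1
    return [round(((v / top) ** (1.0 / gamma)) * top) for v in range(levels)]

lut = gamma_lut()
print(lut[0], lut[255])  # endpoints map to themselves: 0 255
```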
Plasma Display Panels are like CRTs in that they are emissive and use phosphor. They are also like
LCDs in their use of an X and Y grid of electrodes separated by an MgO dielectric layer, and
surrounded by a mixture of inert gases (such as argon, neon, or xenon) that are used to address
individual picture elements. They work on the principle that passing a high voltage through a low-pressure gas generates light. Essentially, a PDP can be viewed as a matrix of tiny fluorescent tubes
that are controlled in a sophisticated fashion. Each pixel, or cell, comprises a small capacitor with
three electrodes. An electrical discharge across the electrodes causes the rare gases sealed in the cell
to be converted to plasma form as it ionizes. Plasma is an electrically neutral, highly ionized substance consisting of electrons, positive ions, and neutral particles. Being electrically neutral, it
contains equal quantities of electrons and ions and is, by definition, a good conductor. Once energized, the cells of plasma release ultraviolet (UV) light that then strikes and excites red, green, and
blue phosphors along the face of each pixel, causing them to glow.
Within each cell, there are actually three subcells, one containing a red phosphor, another a blue
phosphor, and the third a green phosphor. To generate color shades, the perceived intensity of each
RGB color must be controlled independently. While this is done in CRTs by modulating the electron
beam current, and therefore also the emitted light intensities, PDPs accomplish shading by pulse
code modulation (PCM). Dividing one field into eight sub-fields, with each pulse-weighted according to the bits in an 8-bit word, makes it possible to adjust the widths of the addressing pulses in 256
steps. Since the eye is much slower than the PCM, it will integrate the intensity over time. Modulating the pulse widths in this way translates into 256 different intensities of each color—giving a total
number of color combinations of 256 x 256 x 256 = 16,777,216.
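The binary sub-field weighting described above can be sketched directly; a hypothetical illustration (the function name and layout are mine):

```python
def pcm_intensity(level):
    """Sub-field pattern and integrated intensity for an 8-bit level.

    Each of the eight sub-fields carries a binary weight 1, 2, 4, ...,
    128; lighting the sub-fields whose bits are set in `level` gives
    that intensity once the eye integrates over the field period.
    """
    pattern = [(level >> b) & 1 for b in range(8)]
    return pattern, sum((1 << b) * on for b, on in enumerate(pattern))

print(pcm_intensity(255)[1])  # 255: all eight sub-fields lit
print(256 ** 3)               # 16777216 possible color combinations
```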
The fact that PDPs are emissive and use phosphor means that they have an excellent viewing
angle and color performance. Initially, PDPs had problems with disturbances caused by interference
between the PCM and fast-moving pictures. However, this problem has been eliminated by fine-tuning the PCM scheme. Conventional plasma screens have traditionally suffered from low contrast.
This is caused by the need to “prime” the cells by applying a constant low voltage to each pixel.
Without this priming, plasma cells would suffer the same poor response time as household fluorescent tubes, making them impractical. The knock-on effect, however, is that pixels that should be
switched off still emit some light, reducing contrast. In the late 1990s, Fujitsu alleviated this problem
with new driver technology that improved contrast ratios from 70:1 to 400:1. By the year 2000 some
manufacturers claimed as much as 500:1 image contrast, albeit before the anti-glare glass is added to
the raw panels.
The biggest obstacle that plasma panels have to overcome is their inability to achieve a smooth
ramp from full-white to dark-black. Low shades of gray are particularly troublesome, and a noticeable posterized effect is often present during the display of movies or other video programming with
dark scenes. In technical terms, this problem is due to insufficient quantization, or digital sampling
of brightness levels. It’s an indication that the display of black remains an issue with PDPs.
Manufacturing is simpler than for LCDs, and costs are similar to CRTs at the same volume.
Compared to TFTs, which use photolithography and high-temperature processes in clean rooms,
PDPs can be manufactured in less clean factories using low-temperature and inexpensive direct
printing processes. However, with display lifetimes of around 10,000 hours, a factor not usually considered with PC displays—cost per hour—comes into play. For boardroom presentation use this is not a problem, but for hundreds of general-purpose desktop PCs in a large company it is a different matter.
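The cost-per-hour point is simple amortization arithmetic; an illustrative sketch, pairing the ~10,000-hour lifetime with the $2,743 average 2002 system price quoted earlier:

```python
def cost_per_hour(unit_price, lifetime_hours=10_000):
    """Amortized display cost per operating hour."""
    return unit_price / lifetime_hours

# The $2,743 average 2002 system price over a 10,000-hour lifetime:
print(round(cost_per_hour(2743), 2))  # about 0.27 dollars per hour
```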
However, the ultimate limitation of the plasma screen has proved to be pixel size. At present,
manufacturers cannot see how to get pixel sizes below 0.3 mm, even in the long term. For these
reasons PDPs are unlikely to play a part in the mainstream desktop PC market. For the medium term
they are likely to remain best suited to TV and multi-viewer presentation applications employing
large screens, from 25 to 70 inches.
Figure 10.4 shows the plasma display panel controller.
Figure 10.4: Plasma Display Panel Controller
Key Differences Between LCDs and PDPs
Key differences between LCDs and PDPs include form factor (display size), video quality, life span and power consumption, and capital cost.
Form Factor
It is difficult for PDPs to compete effectively with TFT-LCDs in form factors below 30-inch because
of the low manufacturing cost of TFT-LCDs. This is confirmed by the larger LCD shipments in unit
terms for the smaller form factors. It is still unknown which solution will win for form factors
exceeding 30-inch. However, TFT-LCD suppliers such as Sharp and Samsung are devoting substantial resources to overcoming technology and cost hurdles, which could propel TFT-LCD towards
becoming the mainstream TV display technology. Furthermore, it is estimated that TFT-LCDs only
have a 5% penetration within the TV market, and, therefore, the growth path appears to be extremely
promising for the second half of this decade. Currently, the maximum size available for PDPs is 70 inches, versus 54 inches (demonstrated) and 40 inches (in production) for LCDs. Also, a PDP's cost per unit area becomes cheaper above 40 inches, while an LCD's rises sharply with larger screens.
The LCD is lagging behind plasma in the large-screen flat panel TV market as 50-inch plasma
models are already in volume production. Non-Japanese vendors led by Samsung and LG Philips are
moving to supply large-screen LCD TVs as they exhibit 46-inch thin-film (TFT) models at electronic
shows. Samsung’s 46-inch model has picture quality equivalent to HDTV with a Wide Extended
Graphics Array (WXGA) standard resolution (1,280 x 720). It has high specs equivalent to those of
the plasma model, that is, a wide field angle (170 degrees vertically and horizontally), a contrast
ratio of 800:1, and a maximum brightness of 500 candelas per square meter. Furthermore, Samsung has overcome the
slow response time issue of LCD TV with a fast response time of 12 ms—15 ms is generally considered the slowest response time that allows a viewer to watch moving pictures comfortably. Other companies
are innovating in different ways, such as Sharp, which has developed leading edge technology with a
4-inch TFT color display using a plastic substrate. This substrate enables a thinner, lighter, and
stronger display, which is suitable for mobile applications. It will be one-third the thickness of the
glass-based substrate display, one-fourth the weight, and have more than 10 times greater shock resistance.
Video Quality
The PDP approaches CRT video quality. Liquid crystal displays, however, require improvement in
transition speeds and black levels. Currently, TFT-LCDs account for an estimated 66% of total FPD
sales because of their high resolution, color, and refresh rate. Another reason is that the integration of
driver circuits upon the transistor array continues to position the LCD as the best solution for
mainstream applications.
Life Span and Power Consumption
Two of the main criticisms of PDPs are their relatively short lifetimes and high power consumption. However, they have superior picture quality and an attractive form factor, and may therefore become entrenched as the premium technology platform within the ultra-large-area display market.
Capital Cost
Plasma display panel manufacturing facilities have a lower amortized cost. LCD factories, however, tend to be similar in cost to a wafer fab (~$1.5 billion). That is a very risky proposition if the market is limited to a few thousand units, and anything 40 inches or larger is difficult to produce.
Other Considerations
Other considerations in the difference between LCDs and PDPs are viewing angle, light sources, and
color technology, as follows:
Viewing angle – PDPs allow a horizontal viewing angle of 160-plus degrees, while LCDs typically allow a 90-degree vertical viewing angle (up to 160 degrees is possible).
Light sources – PDP has an emissive (internal) light source, while LCDs have a transmissive
(external backlight).
Color technology – PDPs use phosphor (natural TV colors), while LCDs use color filters
(different color system than TV).
Key Differences Summary
To survive in the competitive flat panel market, vendors must continue to improve production
technology. In addition to the need for yield improvement of the panel, other components and parts
in the TV system must be integrated as they account for the major cost portions. For instance, in the
case of LCD TV, no single vendor has a technical edge over the others, and reduction of component
cost holds the key to competitiveness and profitability.
PALCD—Plasma Addressed Liquid Crystal Display
A peculiar hybrid of PDP and LCD is the plasma addressed liquid crystal display (PALCD). Sony is
currently working, in conjunction with Tektronix, on making a viable PALCD product for consumer
and professional markets.
Rather than use the ionization effect of the contained gas for the production of an image,
PALCD replaces the active matrix design of TFT-LCDs with a grid of anodes and cathodes that use
the plasma discharge to activate LCD screen elements. The rest of the panel then relies on exactly
the same technology as a standard LCD to produce an image. Again, this won’t be targeted at the
desktop monitor market, but at 42-inch and larger presentation displays and televisions. The lack of
semiconductor controls in the design allows this product to be constructed in low-grade clean rooms,
reducing manufacturing costs. It is claimed to be brighter, and retains the “thin” aspect of a typical
plasma or LCD panel.
FEDs—Field Emission Displays
Some believe FED (field emission display) technology will be the biggest threat to LCD’s dominance in the panel display arena. The FEDs capitalize on the well-established
cathode-anode-phosphor technology built into full-sized CRTs, and use that in combination with the
dot matrix cellular construction of LCDs. However, instead of using a single bulky tube, FEDs use tiny "mini tubes" for each pixel, and the display can be built to approximately the same size as an LCD screen.
Each red, green, and blue sub-pixel is effectively a miniature vacuum tube. Where the CRT uses
a single gun for all pixels, a FED pixel cell has thousands of sharp cathode points, or nanocones, at
its rear. These are made from material such as molybdenum, from which electrons can be pulled very
easily by a voltage difference. The liberated electrons then strike red, green, and blue phosphors at
the front of the cell. Color is displayed by using “field sequential color,” in which the display will
show all the green information first, then redraw the screen with red, followed by blue.
In a number of areas, FEDs appear to have LCDs beaten. Since FEDs produce light only from
the “on” pixels, power consumption is dependent on the display content. This is an improvement
over LCDs, where all light is created by a backlight that is always on, regardless of the actual image
on the screen. The LCD’s backlight itself is a problem that the FED does not have. The backlight of a
LCD passes through to the front of the display, through the liquid crystal matrix. It is transmissive,
and the distance of the backlight to the front contributes to the narrow viewing angle. In contrast, a
FED generates light from the front of the pixel, so the viewing angle is excellent—160 degrees both
vertically and horizontally.
Field Emission Displays also have redundancy built into their design, with most designs using
thousands of electron emitters for each pixel. Whereas one failed transistor can cause a permanently
on or off pixel on a LCD, FED manufacturers claim that FEDs suffer no loss of brightness even if
20% of the emitters fail.
These factors, coupled with faster than TFT-LCD response times and color reproduction equal to
the CRT, make FEDs look a very promising option. The downside is that they may prove hard to
mass produce. While a CRT has just one vacuum tube, a SVGA FED needs 480,000 of them. To
withstand the differences between the vacuum and external air pressure, a FED must be mechanically
strong and very well sealed. By the late 1990s, six-inch color FED panels had already been manufactured, and research and development on 10-inch FEDs was proceeding apace.
DLP—Digital Light Processor
Digital Light Processors (DLPs) enable TV to be completely digital and provide the superior video performance required by home entertainment enthusiasts. Texas Instruments (TI) has developed and
patented this new technology (the semiconductor chip and the engine).
The DLP mirror chip is one of the most exciting innovations in display technology, as it has
been successfully exploited commercially. Fundamentally, the mirror chip is a standard static
memory chip design. Memory bits are stored in silicon as electrical charges in cells. An insulating
layer with a mirror finish above the cells is added, and is then etched out to form individual hinged
flat squares. When a memory bit is set, the charge in the cell attracts one corner of the square. This
changes the angle of the mirrored surface, and by bouncing light off this surface, pictures can be
formed. The optical semiconductor chip has an array of 480,000 (SVGA), 786,000 (XGA) or
1,310,000 (SXGA) hinged, microscopic mirrors mounted on a standard logic device.
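The mirror counts quoted above correspond to the pixel grids of the named resolutions (the XGA and SXGA figures are rounded in the text); a quick check:

```python
# One hinged mirror per pixel of the target resolution.
RESOLUTIONS = {"SVGA": (800, 600), "XGA": (1024, 768), "SXGA": (1280, 1024)}

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: {w * h:,} mirrors")
# SVGA: 480,000   XGA: 786,432   SXGA: 1,310,720
```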
Color is also a complication, since the mirror chip is basically a monochromatic device. To solve
this, three separate devices can be used, each illuminated with a primary color, and tiny mirrors
operate as optical switches to create a high resolution, full color image. Alternatively, a single device
can be placed behind a rotating color wheel with the chip displaying red, green, and blue components sequentially. The chip is fast enough to do this and the resulting picture looks fine on color
stills, but has problems handling moving images. Complicated optics are needed with DLPs to
convert a picture the size of a postage stamp into a projectable display. Heat is unavoidable because a
lot of light is focused on the chip in order to make the final image bright enough, and a large amount
of ventilation is required to cool the chip. This process is noisy, although the latest projectors have a
chip encased within a soundproof enclosure.
The DLP can be designed into both front and rear projection TVs. While the mirror chip is
currently only available in projectors, it is likely that they will appear in a back-projecting desktop
display eventually. Close to 1 million projectors and displays based on DLP technology have been
shipped in six years, and well over half those shipments have been made in the past two years. Using
only a single panel, for example, it allows a very small, lightweight optical subsystem to be used,
decreasing the size and weight of the projection engine and allowing slim, lightweight, elegant product designs.
Recent DLP products feature contrast ratios of 1,300:1. This performance is due to the unique
method that a DMD (Digital Micromirror Device) uses to digitally switch light on and off. Because
the DMD steers light into or out of the projection path, optical designers can uniquely control and
limit the off-state light, independent of on-state light, which controls overall brightness. Prototype
DLP subsystems in the laboratory show the potential to deliver a contrast ratio in excess of 2,000:1
in the near future—a level of performance which rivals that of CRT and film.
Contrast ratio is not, of course, the only factor in image quality. The overall superiority of DLP
technology in this area can be gauged, however, by the fact that a derivative—DLP Cinema technology—is currently the de facto standard projection technology as the movie industry begins the
transition to a new, digital era. Only DLP Cinema technology reliably delivers digitally accurate
images to a large screen, images that, for most moviegoers, are superior to those they see with traditional celluloid-based film.
Particularly encouraging is the manufacturing cost of DLP technology. The technology has proven highly manufacturable, and yields continue to increase with significant scope for further improvement,
allowing costs to continue to be driven down. It is estimated that by 2005 the bill of materials cost of
a display product based on DLP technology will be comparable to that of a similar product based on
CRT technology. Manufacturers have announced plans to bring to market 42-inch-diagonal screen
tabletop TVs which, at $3,000, are priced comparably with CRT technology-based competitors. And
they will offer equal or superior image quality at a fraction of the CRT weight and footprint.
Figure 10.5 (a) and 10.5 (b) shows the anatomy and system functionality of the DLP. Figure 10.6
shows the digital image processing board of the DLP or LCD projector.
Figure 10.5 (a): Anatomy of the Digital Light Processor
Figure 10.5 (b): Digital Light Processor System
Figure 10.6: LCD/DLP Projector—Digital Image Processing Board
Organic LEDs
When magnified, high-resolution organic LED microdisplays less than one-inch diagonal in size
enable large, easy to view virtual images similar to viewing a computer screen or large TV screen.
The display-optics module can be adapted easily to many end products such as mobile phones and
other handheld Internet and telecommunications appliances, enabling users to access full Web and
fax pages, data lists, and maps in a pocket-sized device. The image can be superimposed on the
external world in a see-through optics configuration; or, with independent displays for both eyes, a
true 3-D image can be created.
Efficiency is always a concern for displays, but it is especially important for microdisplays that
are to be used in handheld appliances or headsets for wearable products. The best match to the range
of microdisplay requirements is achieved by the combination of OLEDs on silicon. The OLEDs are
efficient Lambertian emitters that operate at voltage levels (3 to 10 V) accessible with relatively low-cost silicon. They are capable of extremely high luminance, a characteristic that is especially
important for use in the helmets of military pilots, even though most of the time they would be
operated at much lower levels. Luminance is directly linear with current, so gray scale is easily
controlled by a current-control pixel circuit. Organic LEDs are very fast, with faster response than
liquid crystals, an important feature for video displays.
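Because luminance is linear in current, gray scale reduces to a linear per-pixel current drive; a minimal sketch (the 5 µA full-scale current is a hypothetical figure, not from the text):

```python
def gray_to_current(level, full_scale_ua=5.0, bits=8):
    """Map a digital gray level to an OLED pixel drive current (uA).

    OLED luminance is directly linear with current, so a linear
    current DAC gives a linear luminance ramp; any gamma shaping is
    applied upstream in the video chain.
    """
    if not 0 <= level < 2 ** bits:
        raise ValueError("gray level out of range")
    return full_scale_ua * level / (2 ** bits - 1)

print(gray_to_current(255))  # 5.0 uA at full scale
```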
Fabrication is relatively straightforward, consisting of vacuum evaporation of thin organic layers,
followed by thin metal layers and a transparent conductor oxide layer. A whole wafer can be processed at one time, including a process for sealing, before the wafer is cut into individual displays.
Up to 750 small displays can be produced from a single 8-inch wafer, including interface and driver
electronics embedded in the silicon.
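The displays-per-wafer figure is consistent with simple area arithmetic; a rough sketch (the 40 mm² die area is my assumption, chosen only to show the bound):

```python
import math

def gross_dies(wafer_diameter_mm=200, die_area_mm2=40):
    """Crude upper bound: gross die sites <= wafer area / die area.

    Ignores edge loss, scribe lanes, and defects, so practical counts
    (e.g. "up to 750" per 8-inch wafer) fall below this bound.
    """
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

print(gross_dies())  # 785 gross sites on an 8-inch (200 mm) wafer
```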
Organic LEDs were invented by C. W. Tang and S. A. Van Slyke of Kodak, who found that p-type and n-type organic semiconductors could be combined to form diodes, in complete analogy to
the formation of p-n junctions in crystalline semiconductors. Moreover, as with gallium arsenide and
related III-V diodes, the recombination of injected holes and electrons produced light efficiently. In
contrast to the difficult fabrication of III-V LEDs, where crystalline perfection is essential, organic
semiconductors can be evaporated as amorphous films, for which crystallization may be undesirable.
The prototypical Kodak OLED, which is used in consumer products today by Pioneer, is a down-emitting stack, with light coming out through a transparent glass substrate.
For use on top of an opaque silicon chip, we modify the stack, starting with a metal anode with a
high work function and ending with a transparent cathode, followed by a layer of transparent indium tin
oxide. We also change the active layer to make it a white-light emitter. We do this by using a diphenylene-vinylene type blue-green emitter, which is co-doped with a red dye to yield a white spectrum.
Even though an OLED microdisplay on silicon may have millions of subpixels, the OLED
formation can be rather simple because the complexity is all in the substrate. For each subpixel,
corresponding to a red, green, or blue dot, there is a small electrode pad, possibly 3.5 x 13.5 microns,
attached to an underlying circuit that provides current. The OLED layers can be deposited across the
complete active area, using shadow masking in the evaporator. This includes the cathode, which is
common for all pixels. This simple approach is made possible by the fact that the materials are not
good lateral conductors, so the very thin organic films cannot shunt current from one subpixel to
another. With such thin structures, light also does not leak into adjacent pixels, so contrast is maintained even between neighboring pixels.
Organic LED devices are very sensitive to moisture, which attacks the cathode materials; to a
lesser degree, they are sensitive to oxygen, which can potentially degrade the organics. For this
reason they must be sealed in an inert atmosphere after fabrication and before being exposed to
ambient environmental conditions. Organic LEDs on silicon can be sealed with a thin film. This thin
film seal has been critical to the development of full color OLED microdisplays, since microscopic
color filters can be processed on top of a white OLED using photolithography after passivation with
the thin film seal.
The current through the OLED in a basic active-matrix OLED pixel cell is controlled by an
output transistor, and all emitting devices share a common cathode to which a negative voltage bias
is applied. This bias is set to allow the full dynamic range. Since the output transistor has a limited
voltage capability, depending upon the silicon process used, it is important for the OLED to have a steep luminance-versus-voltage characteristic, so that a large variation in luminance can be achieved within the available voltage swing.
Organic LEDs will become a major competitor to the TFT-LCD market due to the promise of
significantly lower power consumption, much lower manufacturing cost, thinner and lighter form
factors, as well as potentially sharper and brighter images and greater viewing angles. The light
emitting properties of AM-OLEDs eliminate the need for backlight units, backlight lamps, inverters,
color filters, liquid crystal, alignment film, and the manufacturing steps associated with these in LCD
production. A large number of companies are investing in research and development as well as
manufacturing capital in this emerging market; at last count there were approximately 85.
Furthermore, there is a large willingness to cross-license technology in order to speed up commercialization. Organic LEDs are also highly appealing in the upcoming 2.5G and 3G wireless markets.
It is unlikely that OLEDs will be able to compete with incumbent technologies for large display
applications until the second half of the decade. Some of the companies involved in OLED development include Pioneer, Motorola, Samsung, Sanyo, Kodak, and TDK.
LED Video for Outdoors
Ever wonder what exactly goes on behind the scoreboard to keep those giant, super-bright displays in a large outdoor stadium running during a bitter rainstorm? Many large-screen display systems are designed to operate much differently than a standard desktop monitor, as high power consumption, adverse weather conditions, and long operating hours make this market one of the most unique in the electronic display industry.
The popular LED video technology offers some of the best performance characteristics for the
large-venue outdoor sports facility, especially brightness and lifespan. The market for large screen
LED video displays is expected to be worth nearly $1.5 billion by 2007, and is being targeted by all major suppliers, such as Fuji, Matsushita, Mitsubishi, Sony, and Toshiba.
LCoS—Liquid Crystal on Silicon
Liquid Crystal on Silicon (LCoS) technology has received a great deal of attention with its anticipated lower cost structure, and a number of companies have raced to ramp up volume production. However, LCoS development has been derailed at many of these companies by manufacturing problems that proved more difficult than expected, and the technology has reached the end-market consumer only in very small volumes.
One limitation of LCoS imagers is that they rely upon polarization to modulate the light, just as
LCD panels do. This effectively creates an upper limit to the contrast ratio, negatively impacting, for
example, the detail reproduction in dark scenes. Contrast ratio is a historic strength of CRT and film-based projectors. Poor contrast ratio also limits the overall perceived sharpness or clarity of an image.
This is true even in relatively bright scenes, because the magnitude of the difference between bright
and dark areas of a scene is a key factor in how we perceive an image. Liquid Crystal on Silicon, then,
does not compete well with other display technologies in this key area of perceived image quality.
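Contrast ratio here is the full-on/full-off luminance quotient; a minimal sketch (the luminance values are illustrative, not from the text):

```python
def contrast_ratio(white_nits, black_nits):
    """Full-on/full-off contrast ratio, conventionally quoted as N:1."""
    return white_nits / black_nits

# A panel leaking 0.5 nit in the off state from a 400-nit white:
print(f"{contrast_ratio(400, 0.5):.0f}:1")  # 800:1
```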
Considering the fundamental cost structure of LCoS, there is nothing unique in the wafer cost
compared to the wafer cost of any other display technology, such as DLP. Yet, unlike products based
on DLP technology that require only a single panel to deliver outstanding images, LCoS designers
rely on three panels to achieve adequate picture quality. Single-panel LCoS designs have faltered
because of manufacturing difficulties, or because of the problems inherent in complicated triple
rotating-prism optical systems. Liquid Crystal on Silicon technology market penetration will lag
until it can achieve a higher image performance from single-panel architectures.
There is potential for LCoS to build a sizeable niche within the flat panel TV market because of
extremely high resolution, good picture quality, scalability above 30 inches, and competitive long-term
pricing. This is likely to occur somewhere between PDPs at the premium high-end and TFT-LCDs,
which are likely to become the mainstay technology. Furthermore, some analysts believe that LCoS can gain momentum by 2005 due to the existing commercial success of two other microdisplay rear projection systems—Digital Light Processing (DLP) from TI, and High Temperature Polysilicon (HTPS) TFT-LCD. However, this form of display is not as slim as a PDP, although it is of equivalent weight, offers lower power consumption, and provides a substantially longer operating life. Compared with the CRT rear
projection set, the LCoS form factor is far superior as it offers better picture quality, is far brighter,
and provides wide aspect ratios at HDTV resolution.
Comparison of Different Display Technologies
Table 10.2 summarizes the pros and cons for the different display technologies.
Table 10.2: Summary of the Different Display Technologies
Plasma (PDP)
Pros: Thin and alluring: as thin as 3 inches, and light enough to be hung on the wall. Prices, while stratospheric, are dropping.
Cons: Plasma (gas) expands at high altitudes and makes hissing noises. Like CRTs, PDPs lose brightness as they age.

LCD Panel
Pros: Based on the same technologies as laptop PC screens; offers the best picture quality of the group. Brightness and color are superb, and the viewing angle is 175 degrees. Even thinner than PDPs, with no burn-in.
Cons: Slow response times, hence not ideal for fast video. Issues with color consistency, contrast, and the display of true blacks.

Rear Projection (LCD)
Pros: Lowest price per screen inch. Not subject to burn-in issues. Proven technology. Screen sizes up to 45 inches.
Cons: The bigger the screen size, the bulkier and heavier the set. Slow response times and a narrower viewing angle. Not as thin as a PDP or LCD panel.

Rear Projection (DLP)
Pros: Best image quality and response times. Brighter and sharper than most rear projection LCDs. Slender form factor for a projection set.
Cons: Still not as thin as a PDP or LCD panel. Occasional video artifacts; flaws in source material are visible.
Chapter 10
Three-dimensional (3-D) Displays
With flat screens in big demand today, most of the familiar corporate names are working hard to
develop increasingly flatter and thinner display technology. However, much research is also going
into bringing the image to life using three-dimensional (3-D) volumetric techniques.
There are many types of volumetric displays, but they are most often portrayed as a large
transparent sphere with imagery seeming to hover inside. Designers are required to grasp complex 3-D scenes. While physical prototypes provide insight, they are costly and take time to produce. A 3-D
display, on the other hand, can be used as a “virtual prototyping station,” in which multiple designs
can be inspected, enlarged, rotated, and simulated in a natural viewing environment. The promise of
volumetric displays is that they will encourage collaboration, enhance the design process, and make
complex data much easier to understand.
Volumetric displays create images that occupy a true volume. In the still nascent vocabulary of
3-D display technology, it could also be argued that re-imaging displays, such as those using concave
mirror optics, are volumetric, since they form a real image of an object or display system placed inside.
Unlike many stereoscopic displays and old “3-D movies,” most volumetric displays are autostereoscopic; that is, they produce imagery that appears three-dimensional without the use of
additional eyewear. The 3-D imagery appears to hover inside of a viewing zone, such as a transparent
dome, which could contain any of a long list of potential display elements. Some common architectures use a rotating projection screen, a gas, or a series of liquid crystal panels. Another benefit of
volumetric displays is that they often have large fields of view, such as 360 degrees around the
display, viewable simultaneously by an almost unlimited number of people.
However, volumetric displays are difficult to design, requiring the interplay of several disciplines. Also, bleeding-edge optoelectronics are required for high-resolution imagery. In a volumetric
display, fast 3-D rasterization algorithms are needed to create fluid animation. Once the algorithms
‘rasterize’ the scene by converting 3-D data into voxels, the data slices are piped into graphics
memory and then into a projection engine. The term ‘voxels’ is short for “volume elements,” which
are the 3-D analog to pixels. Rasterization is the process in which specialized algorithms convert
mathematical descriptions of a 3-D scene into the set of voxels that best visually represent that scene.
These algorithms, which do the dirty work of mapping lines onto grids, are well-known in the world
of graphics cards and 2-D CRTs.
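As a concrete illustration of this voxelization step, a 3-D line segment can be rasterized into the set of voxels it crosses. The dense-sampling approach, the unit-cube coordinates, and the function name below are illustrative simplifications, not the specialized algorithms the text refers to.

```python
def voxelize_segment(p0, p1, grid=64):
    """Rasterize a 3-D line segment into the set of voxels it passes
    through, by dense sampling (a simple stand-in for a 3-D Bresenham
    walk). Endpoints are coordinates in the unit cube [0, 1)."""
    steps = 3 * grid  # oversample: per-axis motion per step stays under one voxel
    voxels = set()
    for i in range(steps + 1):
        t = i / steps
        # Interpolate along the segment, then snap each axis to a voxel index.
        v = tuple(
            min(grid - 1, int((a + t * (b - a)) * grid))
            for a, b in zip(p0, p1)
        )
        voxels.add(v)
    return voxels
```

A real volumetric renderer would run such a routine for every primitive in the scene, then reorganize the resulting voxel set into the 2-D slices the projection engine consumes.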
A typical high-end volumetric display, as bounded by the performance of SLM technology, might
have a resolution of roughly 100 million voxels. Because it’s difficult to set up 100 million emitters
in a volume, many 3-D displays project a swift series of 2-D imagery onto a moving projection
screen. Persistence of vision blends the stack of 2-D images into a sharp, perceived 3-D picture.
There are many different types of volumetric displays, including:
Bit mapped swept-screen displays – Those in which a high-frame-rate projector illuminates
a rotating projection screen with a sequence of 5,000 images per second, corresponding to
many slices through a 3-D data set.
Vector-scanned swept-screen displays – Those in which a laser or electron gun fires upon a
rotating projection screen.
‘Solid-state’ volumetric displays – Those that rely on a process known as two-step upconversion. For example, 3-D imagery can be created by intersecting infrared laser beams
within a material doped with rare-earth ions.
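For the swept-screen case, the projector frame rate bounds how many radial slices each revolution can hold. The 5,000 images-per-second figure is from the text; the 24 revolutions-per-second rotation rate is an assumed example.

```python
# Rough arithmetic for a bit-mapped swept-screen display: how many
# radial 2-D slices one revolution of the screen can hold.
frames_per_second = 5_000        # projector rate quoted in the text
revolutions_per_second = 24      # assumed rotation rate for illustration
slices_per_revolution = frames_per_second // revolutions_per_second
# 208 distinct 2-D slices are blended by persistence of vision
# into one perceived 3-D image per revolution.
```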
Digital Displays
Three Dimensional LCDs
Three dimensional displays can be used to realistically depict the positional relationships of objects
in space. They will find uses in medical applications requiring detailed viewing of body areas, as
well as in computer-aided manufacturing and design, retail applications, electronic books and games,
and other forms of entertainment. Therefore, several Japanese companies have announced that they
will establish a consortium to promote products and applications for 3-D stereographic LCDs in an
effort to bring the technology into the commercial mainstream. The consortium, which is spearheaded by Itochu, NTT Data, Sanyo, Sharp, and Sony, will aim to standardize hardware and software
to produce stereographic 3-D displays requiring no additional viewing aids, and to build an industry
infrastructure to develop and distribute 3-D content. However, analysts remain skeptical about how
successful the technology can be, and therefore believe that the market will remain a niche for at
least a few more years. The price premium for 3-D displays over standard displays will be a hurdle
until the production volumes increase and technology advances cut cost. Many existing components
such as glass and drivers can be adapted to 3-D displays; however, suppliers of GFX chipsets may
have to re-design their products to support the viewing of stereo-graphic display signals.
The Touch-screen
A touch-screen is an intuitive computer input device that works by simply touching the display
screen, either with a finger or a stylus, rather than typing on a keyboard or pointing with a mouse.
Computers with touch-screens have a smaller footprint, and can be mounted in smaller spaces; they
have fewer movable parts, and can be sealed. Touch-screens can be built in, or added on. Add-on
touch-screens are external frames that have a clear see-through touch-screen, and mount over the
outer surface of the monitor bezel. They also have a controller built into their external frame. Built-in
touch-screens are heavy-duty touch-screens mounted directly onto the CRT tube.
The touch-screen interface—whereby users navigate a computer system by simply touching
icons or links on the screen itself—is the most simple, most intuitive, and easiest to learn of all PC
input devices, and is fast becoming the interface of choice for a wide variety of applications. Touch-screen applications include:
Public information systems – Information kiosks, airline check-in counters, tourism displays, and other electronic displays that are used by many people who have little or no
computing experience. The user-friendly touch-screen interface can be less intimidating and
easier to use than other input devices, especially for novice users, making information
accessible to the widest possible audience.
Restaurant/point-of-sale (POS) systems – Time is money, especially in a fast-paced restaurant or retail environment. Because touch-screen systems are easy to use, overall training time for new employees can be reduced. And work can get done faster, because employees can simply touch the screen to perform tasks, rather than entering complex keystrokes or commands.
Customer self-service – In today’s fast-paced world, waiting in line is one of the things that
has yet to speed up. Self-service touch-screen terminals can be used to improve customer
service at busy stores, fast service restaurants, transportation hubs, and more. Customers can
quickly place their own orders or check themselves in or out, saving them time, and decreasing wait times for other customers.
Control/automation systems – The touch-screen interface is useful in systems ranging from
industrial process control to home automation. By integrating the input device with the
display, valuable workspace can be saved. And with a graphical interface, operators can
monitor and control complex operations in real-time by simply touching the screen.
Computer-based training – Because the touch-screen interface is more user-friendly than
other input devices, overall training time for computer novices, and therefore training
expense, can be reduced. It can also help to make learning more fun and interactive, which
can lead to a more beneficial training experience for both students and educators.
Basic Components of a Touch-screen
Any touch-screen system comprises the following three basic components:
Touch-screen sensor panel – Mounts over the outer surface of the display, and generates
appropriate voltages according to where, precisely, it is touched.
Touch-screen controller – Processes the signals received from the sensor panel, and translates these into touch event data that is passed to the PC’s processor, usually via a serial or
USB interface.
Software driver – Translates the touch event data into mouse events, essentially enabling the
sensor panel to emulate a mouse, and provides an interface to the PC’s operating system.
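To make the division of labor among the three components concrete, here is a minimal, hypothetical sketch of the driver's job: the controller has already digitized the touch into raw coordinates, and the driver maps them onto screen pixels as a mouse-style event. The 10-bit raw range, the function name, and the event tuple format are assumptions for illustration.

```python
def touch_to_mouse(raw_x, raw_y, raw_max=1023, screen=(1024, 768)):
    """Translate one controller report (raw 0..raw_max readings) into
    a mouse-move event in screen-pixel coordinates."""
    px = raw_x * (screen[0] - 1) // raw_max
    py = raw_y * (screen[1] - 1) // raw_max
    return ("mouse_move", px, py)
```

A real driver would also debounce readings, apply a stored calibration, and synthesize press/release events, but the coordinate mapping above is the core of the mouse emulation.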
Touch-screen Technologies
The first touch-screen was created by adding a transparent surface to a touch-sensitive graphic
digitizer, and sizing the digitizer to fit a computer monitor. The initial purpose was to increase the
speed at which data could be entered into a computer. Subsequently, several types of touch-screen
technologies have emerged, each with its own advantages and disadvantages that may, or may not,
make it suitable for any given application.
Resistive Touch-screens
Resistive touch-screens respond to the pressure of a finger, a fingernail, or a stylus. They typically
comprise a glass or acrylic base that is coated with electrically conductive and resistive layers. The
thin layers are separated by invisible separator dots. When operating, an electrical current is constantly flowing through the conductive material. In the absence of a touch, the separator dots prevent
the conductive layer from making contact with the resistive layer. When pressure is applied to the
screen the layers are pressed together, causing a change in the electrical current. This is detected by
the touch-screen controller, which interprets it as a vertical/horizontal coordinate on the screen (x- and
y-axes) and registers the appropriate touch event. Resistive type touch-screens are generally the most
affordable. Although clarity is less than with other touch-screen types, they’re durable and able to
withstand a variety of harsh environments. This makes them particularly suited for use in POS
environments, restaurants, control/automation systems and medical applications.
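The coordinate computation a resistive controller performs reduces to reading two voltage dividers, one per layer. A simplified Python sketch follows; the 3.3 V reference, the screen size, and the function name are assumed values for illustration.

```python
def resistive_xy(v_x, v_y, v_ref=3.3, width=800, height=600):
    """Convert the divider voltages measured across a resistive panel's
    two layers (when pressed into contact) into a pixel coordinate."""
    x = int(v_x / v_ref * (width - 1))   # fraction of full-scale voltage -> column
    y = int(v_y / v_ref * (height - 1))  # fraction of full-scale voltage -> row
    return x, y
```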
Infrared Touch-screens
Infrared touch-screens are based on light-beam interruption technology. Instead of placing a layer on
the display surface, a frame surrounds it. The frame assembly is comprised of printed wiring boards
on which optoelectronics are mounted and concealed behind an IR-transparent bezel. The bezel
shields the optoelectronics from the operating environment while allowing IR beams to pass through.
The frame contains light sources (or light-emitting diodes) on one side, and light detectors (or
photosensors) on the opposite side. The effect of this is to create an optical grid across the screen.
When any object touches the screen, the invisible light beam is interrupted, causing a drop in the
signal received by the photosensors. Based on which photosensors stop receiving the light signals, it
is easy to isolate a screen coordinate. Infrared touch systems are solid state technology and have no
moving mechanical parts. As such, they have no physical sensor that can be abraded or worn out with
heavy use over time. Furthermore, since they do not require an overlay—which can be broken—they
are less vulnerable to vandalism, and are also extremely tolerant of shock and vibration.
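Locating the touch from the interrupted beams is straightforward. In this hypothetical sketch, given the indices of the darkened photosensors on each axis (a finger typically shadows a short run of adjacent beams), the touch point is taken as the center of each shadowed run.

```python
def ir_touch(blocked_cols, blocked_rows):
    """Estimate the touch point from the indices of the column- and
    row-axis photosensors that stopped receiving light."""
    if not blocked_cols or not blocked_rows:
        return None  # beams on both axes must be interrupted for a touch
    x = sum(blocked_cols) / len(blocked_cols)  # center of the shadowed columns
    y = sum(blocked_rows) / len(blocked_rows)  # center of the shadowed rows
    return x, y
```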
Surface Acoustic Wave Technology Touch-screens
Surface Acoustic Wave (SAW) technology is one of the most advanced touch-screen types. The SAW
touch-screens work much like their infrared brethren except that sound waves, not light beams, are cast
across the screen by transducers. Two sound waves, one emanating from the left of the screen and
another from the top, move across the screen’s surface. The waves continually bounce off reflectors
located on all sides of the screen until they reach sensors located on the opposite side from where they
originated. When a finger touches the screen, the waves are absorbed and their rate of travel thus
slowed. Since the receivers know how quickly the waves should arrive relative to when they were sent,
the resulting delay allows them to determine the x- and y-coordinates of the point of contact and the
appropriate touch event to be registered. Unlike other touch-screen technologies, the z-axis (depth) of
the touch event can also be calculated; if the screen is touched with more than usual force, the water in
the finger absorbs more of the wave’s energy, thereby delaying it even more. Because the panel is all
glass and there are no layers that can be worn, Surface Acoustic Wave touch-screens are highly durable
and exhibit excellent clarity characteristics. The technology is recommended for public information
kiosks, computer-based training, or other high-traffic indoor environments.
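The timing computation behind SAW positioning can be sketched as follows: the extra delay before the absorption dip reaches a receiver is proportional to the distance the wave traveled along that axis. The surface-wave speed (roughly 3.1 mm/µs on glass) and the function shape are assumptions for illustration.

```python
def saw_position(delay_x_us, delay_y_us, wave_speed_mm_per_us=3.1):
    """Convert the measured arrival delay of the absorbed portion of
    each axis's wave burst into a distance (mm) along that axis."""
    x_mm = delay_x_us * wave_speed_mm_per_us
    y_mm = delay_y_us * wave_speed_mm_per_us
    return x_mm, y_mm
```

An additional attenuation measurement (how much energy the touch absorbed) would give the z-axis reading the text describes.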
Capacitive Touch-screens
Capacitive touch-screens consist of a glass panel with a capacitive (charge storing) material coating
on its surface. Unlike resistive touch-screens, where any object can create a touch, they require
contact with a bare finger or conductive stylus. When the screen is touched by an appropriate
conductive object, current from each corner of the touch-screen is drawn to the point of contact. This
causes oscillator circuits located at corners of the screen to vary in frequency depending on where
the screen was touched. The resultant frequency changes are measured to determine the x- and y- coordinates of the touch event. Capacitive type touch-screens are very durable, and have a high clarity.
They are used in a wide range of applications, from restaurant and POS use, to industrial controls
and information kiosks.
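An idealized model of the corner-current computation follows. Real controllers infer position from the oscillator frequency shifts described above, but the underlying proportionality, that each corner carries more current the closer the touch is to it, is the same; the function name and normalized coordinates are illustrative assumptions.

```python
def capacitive_xy(i_tl, i_tr, i_bl, i_br):
    """Estimate a normalized (x, y) touch position on a surface-capacitive
    panel from the four corner currents (top-left, top-right, bottom-left,
    bottom-right), in a simplified linear model."""
    total = i_tl + i_tr + i_bl + i_br
    x = (i_tr + i_br) / total  # share of current drawn via the right corners
    y = (i_bl + i_br) / total  # share of current drawn via the bottom corners
    return x, y
```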
Table 10.3 summarizes the principal advantages and disadvantages of each of the described touch-screen technologies.

Table 10.3: Summary of the Different Touch-screen Technologies

Technology               Activated By                   Durability
Resistive                Finger or stylus               Can be damaged by sharp objects
Infrared                 Finger or stylus               Highly durable
Surface Acoustic Wave    Finger or soft-tipped stylus   Susceptible to dirt and water
Capacitive               Finger only                    Highly durable
Digital Display Interface Standards
Efforts to define and standardize a digital interface for video monitors, projectors, and display
support systems were begun in earnest in 1996. One of the earliest widely used digital display
interfaces is LVDS (low-voltage differential signaling), a low-speed, low-voltage protocol optimized
for the ultra-short cable lengths and stingy power requirements of laptop PC systems. Efforts to
transition LVDS to external desktop displays foundered when the rival chipmakers Texas Instruments
and National Semiconductor chose to promote different, incompatible flavors of the technology;
FPD-Link and Flat-Link, respectively. Other schemes such as Compaq’s Digital Flat Panel (DFP),
VESA Plug and Display, and National Semiconductor’s OpenLDI also failed to achieve widespread adoption.
Finally, the Digital Display Working Group (DDWG) came together at the Intel Developer
Forum in September, 1998, with the intent to put the digital display interface standard effort back on
the fast track. With an ambitious goal of cutting through the confusion of digital interface standards
efforts to date, the DDWG—whose initial members included computer industry leaders Intel,
Compaq, Fujitsu, Hewlett-Packard, IBM, NEC, and Silicon Image—set out to develop a universally
acceptable specification. The DDWG is an open industry group with the objective to address the
industry’s requirements for a digital connectivity specification for high-performance PCs and digital
displays. In April, 1999, the DDWG approved a draft Digital Visual Interface (DVI) specification,
and in so doing, brought the prospect of an elegant, high-speed all-digital display solution—albeit at
a fairly significant price premium—close to realization.
While DVI is based on TMDS (transition minimized differential signaling), a differential
signaling technology, LVDS (low voltage differential signaling), a similar technology, is equally, if
not more, relevant to the development of digital display interfaces. Both DVI and LVDS are discussed below.
DVI—Digital Visual Interface
Silicon Image’s PanelLink technology—a high-speed serial interface that uses TMDS to send data to
the monitor—provides the technical basis for the DVI signal protocol. Since the DFP and VESA Plug
and Display interfaces also use PanelLink, DVI can work with these previous interfaces by using
adapter cables.
The term “transition minimized” refers to a reduction in the number of high-to-low and low-to-high swings on a signal. “Differential” describes the method of transmitting a signal using a pair of
complementary signals. The technique produces a transition-controlled, DC balanced series of
characters from an input sequence of data bytes. It selectively inverts long strings of 1s or 0s in order
to keep the DC voltage level of the signal centered around a threshold that determines whether the
received data bit is a 1 voltage level or a 0 voltage level. The encoding uses logic to minimize the
number of transitions, which helps avoid excessive electromagnetic interference (EMI) levels on the
cable, thereby increasing the transfer rate and improving accuracy.
The TMDS link architecture consists of a TMDS transmitter that encodes and serially transmits
a data stream over the TMDS link to a TMDS receiver. Each link is composed of three data channels
for RGB information, each with an associated encoder. During the transmit operation, each encoder
produces a single 10-bit TMDS-encoded character from either two bits of control data or eight bits of
pixel data, to provide a continuous stream of serialized TMDS characters. The first eight bits are the
encoded data; the ninth bit identifies the encoding method, the tenth bit is used for DC balancing.
The clock signal provides a TMDS character-rate reference that allows the receiver to produce a bitrate-sampling clock for the incoming serial data streams.
At the downstream end, the TMDS receiver synchronizes itself to character boundaries in each
of the serial data streams, and then TMDS characters are recovered and decoded. All synchronization
cues for the receivers are contained within the TMDS data stream.
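The two-stage encoding described above, transition minimization followed by DC balancing, can be expressed compactly in Python. This sketch follows the published DVI 1.0 pixel-data algorithm in spirit; the variable names and the exact disparity bookkeeping of real silicon are illustrative.

```python
def tmds_encode(byte, cnt):
    """Encode one 8-bit pixel byte into a 10-bit TMDS character.

    Returns (bits, cnt): bits[0..7] carry the (possibly inverted)
    transition-minimized data, bits[8] flags XOR vs. XNOR encoding,
    bits[9] flags inversion; cnt is the running disparity carried
    into the next character for DC balancing.
    """
    d = [(byte >> i) & 1 for i in range(8)]
    n1 = sum(d)
    # Stage 1: transition minimization - chain the bits with XOR,
    # or with XNOR when the byte is ones-heavy.
    use_xnor = n1 > 4 or (n1 == 4 and d[0] == 0)
    qm = [d[0]]
    for i in range(1, 8):
        x = qm[i - 1] ^ d[i]
        qm.append(1 - x if use_xnor else x)
    qm8 = 0 if use_xnor else 1
    # Stage 2: DC balancing - conditionally invert the character to
    # keep the count of ones and zeros on the wire near equal.
    n1q = sum(qm)
    n0q = 8 - n1q
    if cnt == 0 or n1q == n0q:
        q9, out = 1 - qm8, (qm if qm8 else [1 - b for b in qm])
        cnt += (n1q - n0q) if qm8 else (n0q - n1q)
    elif (cnt > 0 and n1q > n0q) or (cnt < 0 and n0q > n1q):
        q9, out = 1, [1 - b for b in qm]
        cnt += 2 * qm8 + n0q - n1q
    else:
        q9, out = 0, qm
        cnt += -2 * (1 - qm8) + n1q - n0q
    return out + [qm8, q9], cnt

def tmds_decode(bits):
    """Recover the original byte from a 10-bit TMDS character, as the
    receiver does after synchronizing to character boundaries."""
    q = [1 - b for b in bits[:8]] if bits[9] else bits[:8]
    d = [q[0]]
    for i in range(1, 8):
        x = q[i] ^ q[i - 1]
        d.append(x if bits[8] else 1 - x)  # undo XOR or XNOR chain
    return sum(b << i for i, b in enumerate(d))
```

Running every byte value through an encode/decode round trip is a quick way to convince yourself the ninth and tenth bits carry exactly the information the receiver needs.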
A practical transmission limit, sometimes called the “Copper Barrier,” restricts the amount of data that can be squeezed through a single copper wire. The limit is a bandwidth of about 165 MHz, which
equates to 165 million pixels-per-second. The bandwidth of a single-link DVI configuration is
therefore capable of handling UXGA (1600 x 1200 pixels) images at 60 Hz. In fact, DVI allows for
up to two TMDS links—providing sufficient bandwidth to handle digital displays capable of HDTV
(1920 x 1080), QXGA (2048 x 1536) resolutions, and even higher. The two links share the same
clock so that bandwidth can be divided evenly between them. The system enables one or both links,
depending on the capabilities of the monitor.
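The arithmetic behind these figures is easy to check. The sketch below counts active pixels only; a real video timing adds blanking overhead on top, so these are lower bounds on the required pixel rate.

```python
def pixels_per_second(width, height, refresh_hz):
    """Active pixel rate for a display mode, ignoring blanking overhead."""
    return width * height * refresh_hz

SINGLE_LINK_LIMIT = 165_000_000  # 165 Mpixels/s per link, as quoted above

uxga = pixels_per_second(1600, 1200, 60)  # 115,200,000
qxga = pixels_per_second(2048, 1536, 60)  # 188,743,680

assert uxga <= SINGLE_LINK_LIMIT  # UXGA at 60 Hz fits one TMDS link
assert qxga > SINGLE_LINK_LIMIT   # QXGA at 60 Hz needs the second link
```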
Digital Visual Interface also takes advantage of other features built into existing display standards. For example, provisions are made for both the VESA Display Data Channel (DDC) and
Extended Display Identification Data (EDID) specifications, which enable the monitor, graphics
adapter, and computer to communicate and automatically configure the system to support the
different features available in the monitor.
A new digital interface for displays poses a classic “chicken and egg” problem. Graphics adapter
card manufacturers can’t cut the analog cord to the millions of CRTs already on the desktops, and
LCD monitor makers can’t commit to a digital interface unless the graphics cards support it. Digital
Visual Interface addresses this via two types of connector: DVI-Digital (DVI-D), supporting digital displays only, and DVI-Integrated (DVI-I), supporting digital displays with backward compatibility for analog displays.
The connectors are cleverly designed so that a digital-only device cannot be plugged into an
analog-only device, but both will fit into a connector that supports both types of interfaces. The
digital connection uses 24 pins, sufficient for two complete TMDS channels, plus support for the
VESA DDC and EDID services. In fact, single-link DVI plug connectors implement only 12 of the
24 pins; dual-link connectors implement all 24 pins. The DVI-D interface is designed for a 12- or 24-pin DVI plug connector from a digital flat panel. The DVI-I interface accommodates either a 12- or
24-pin DVI plug connector, or a new type of analog plug connector that uses four additional pins,
plus a ground plane plug to maintain a constant impedance for the analog RGB signals. A DVI-I
socket has a plus-shaped hole to accommodate the analog connection; a DVI-D socket does not.
Instead of the standard cylindrical pins found on familiar connectors, the DVI pins are flattened and
twisted to create a Low Force Helix (LFH) contact, designed to provide a more reliable and stable
link between the cable and the connector.
Of course, the emergence of a widely adopted pure digital interface also raises concerns about
copyright protection, since pirates could conceivably use the interface to make perfect copies of
copyrighted material from DVD and HDTV signals. To address this, Intel has proposed the High-Bandwidth Digital Content Protection (HDCP) encryption specification. Using hardware on both the
graphics adapter card and the monitor, HDCP will encrypt data on the PC before sending it to the
display device, where it will be decrypted. The HDCP-equipped DVI cards will be able to determine
whether or not the display device it is connected to also has HDCP features. If not, the card will still
be able to protect the content being displayed by slightly lowering the image quality.
High-Bandwidth Digital Content Protection is really for the day when PCs are expected to be
used to output images to HDTV and other consumer electronic devices. It is even possible that first-run movies will someday be streamed directly to people’s homes and displayed through a PC
plugged into a HDTV. Without something akin to HDCP to protect its content, Hollywood would
likely oppose the distribution of movies in this way.
LVDS—Low-Voltage Differential Signaling
Low-Voltage Differential Signaling (LVDS), as the name suggests, is a differential interconnectivity
standard. It uses a low voltage swing of approximately 350 mV to communicate over a pair of traces
on a PCB or cable. Digital TVs, digital cameras, and camcorders, now an integral part of our lifestyles, are fueling consumer demand for high-quality video that offers a realistic visual experience. Another trend is to connect all of this digital video equipment together so that the devices can communicate with each other. Moving high-bandwidth digital video data within these appliances
and between them is a very challenging task.
There are very few interconnectivity standards that meet the challenge of handling high-performance video data at 400 Mb/s. In addition to performance, all of these applications invariably
require the interconnectivity solution to offer superior immunity to noise and low power, and be
available at a low cost. No solution fits the bill better than LVDS.
Low-Voltage Differential Signaling was initially used with laptop PCs. It is now widely used
across digital video applications, telecom, and networking, and for system interconnectivity in
general across all applications. The integration of LVDS I/O within low-cost programmable logic
devices now enables high-performance interconnectivity alongside real-time, high-resolution image processing.
LVDS Benefits
Low-Voltage Differential Signaling provides higher noise immunity than single-ended techniques,
allowing for higher transmission speeds, smaller signal swings, lower power consumption, and less
electro-magnetic interference than single-ended signaling. Differential data can be transmitted at
these rates using inexpensive connectors and cables. Low-Voltage Differential Signaling also
provides robust signaling for high-speed data transmission between chassis, boards, and peripherals
using standard ribbon cables and IDC connectors with 100 mil header pins. Point-to-point LVDS
signaling is possible at speeds of up to 622 Mb/s and beyond.
Low-Voltage Differential Signaling also provides reliable signaling over cables, backplanes, and
on boards at data rates up to 622 Mb/s. Reliable data transmission is possible over electrical lengths
exceeding 5 ns (30 inches), limited only by cable attenuation due to skin effect.
The high bandwidth offered by LVDS I/Os allows several lower data rate TTL signals to be
multiplexed/de-multiplexed on a single LVDS channel. This translates to significant cost savings in
reduced number of pins, number of PCB traces, number of layers of PCB, significantly reduced EMI,
and lower component costs.
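The multiplexing idea, several slower TTL signals time-sliced onto one fast serial pair, can be sketched as a simple serializer/deserializer pair. The 7-bits-per-clock framing is typical of FPD-Link-style interfaces; the code is an illustrative model, not a hardware description.

```python
def serialize(words, bits_per_word=7):
    """Flatten parallel TTL words (each fitting in bits_per_word bits)
    into the single serial bit stream an LVDS pair would carry."""
    stream = []
    for w in words:
        stream.extend((w >> i) & 1 for i in range(bits_per_word))
    return stream

def deserialize(stream, bits_per_word=7):
    """Rebuild the parallel words at the receiving end of the link."""
    words = []
    for i in range(0, len(stream), bits_per_word):
        chunk = stream[i:i + bits_per_word]
        words.append(sum(b << j for j, b in enumerate(chunk)))
    return words
```

Seven parallel traces collapse onto one differential pair (plus a clock), which is where the savings in pins, traces, and EMI come from.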
Mini-LVDS
Mini-LVDS is a high-speed serial interface between the timing controller and the display’s column
drivers. The basic information carrier is a pair of serial transmission lines bearing differential
serialized video and control information. The number of such transmission-line pairs will depend on
the particular display system design, leaving room for designers to get the best data transfer advantage. And the big advantage is a significant decrease in components—switches and power supplies—
and a simpler design, one of the key parameters for product success in this highly competitive field.
Summary
There is no doubt that digital displays will penetrate the consumer market in products ranging from
TV sets to cell phones. Consumers are attracted to brighter and crisper products based on alternative
display technologies to replace the CRT in their homes. However, there are several contenders in this
market. While today the flat panel market is dominated by LCDs, it seems apparent that in the future different technologies will be ideal for different applications. The home entertainment screen technology
of the future must address four key requirements: it must offer image quality comparable with directview CRT, it must come in an aesthetically pleasing package that will fit in any home, it must be
reliable, and it must be affordable.
Of all these systems, the liquid crystal display is considered to be the most promising and the
display technology of choice, with the world market for TFT-LCDs having reached 46 million units
in 2001. In addition to being thin and lightweight, these displays run on voltages so low they can be
driven directly by large-scale integration (LSI). And since they consume low power, they can run for
long periods on batteries. Additional advantages over other types of flat panel display include
adaptability to full color, low cost, and a large potential for technological development. Its main
applications will be in laptop PCs, desktop monitors, multitask displays for PCs, information panels,
presentation screens, monitor displays for conferences, wall-mount TVs, and more.
Plasma display technology, having evolved to a pivotal stage of high image quality and lower unit cost, is affordable and appropriate for use in consumer applications. For many people,
plasma has become synonymous with their conception of what the TV technology of the future will
look like; it is, after all, the closest thing we have to the sci-fi notion of a TV that hangs on the wall.
Certainly, plasma fulfills the second requirement; plasma panels are, for the most part, attractive
products. Unfortunately, this technology is too expensive in large screen sizes. The case for plasma is
not helped by the fact that the more affordable products announced thus far are at substantially
smaller screen sizes, presumably a function of low factory yields.
Liquid-crystal-on-silicon technology has received a great deal of attention with its anticipated
lower cost structure, and a number of companies have raced to ramp up volume production. However, due to difficult manufacturing problems, LCoS development has been derailed at many of these companies.
Digital Light Processing technology appears to offer the upside of plasma, with its ability to
enable slim, elegant product designs, but without the downside of high price. It appears to offer the
upside of LCoS technology in that it can be inherently inexpensive to manufacture while offering
excellent image quality, but without the downside of uncertainty over the viability of a single-panel
solution or the question marks over its manufacturability.
Portable products such as cell phones, PDAs, web tablets, and wristwatches are also vying for
next-generation display technologies that deliver sharper, full-color images. Output formats will range from
the current low pixel count monochrome reflective graphic images in low priced cell phones, up to
high definition full color video in high-end appliances. The high growth cell phone display market
has already blossomed into a $4 billion business and it promises to be the next competitive battleground between passive and active matrix LCDs. But two new display technologies, microdisplays
and OLEDs, are targeted at carving out a significant piece of the action.
However, even before they have any appreciation for the performance and features of flat-panel
monitors, consumers will buy a digital display only if the price is right; that is, if it costs less than a
CRT. There is some appreciation for the savings in desk space afforded by the thin flat-panel monitor, but few potential buyers place a premium on it, and few envision the wall- or boom-mount
monitors that have been shown as prototypes. Also, until consumers are educated on which technology is ideal for which application, the market confusion will continue.
Digital Imaging—
Cameras and Camcorders
Camera technology is shifting from analog to digital. Most people have piles of photographs lying in
the closet. While they bring back good memories, the storage and sharing of photos has always been
an issue. With improvements in digital photography and the falling prices of hard disk drives and
PCs, the digital camera is gaining in popularity. Digital cameras are replacing 35-mm cameras
because of image quality, ease of use, compact size, and low cost. A digital camera is basically an
extension of the PC. The photos can be viewed on the camera’s built-in LCD screen or on a PC
monitor if the camera is plugged into a computer. The photos can be edited (color adjustments,
cropping, etc.) with a PC graphics application and sent to friends by e-mail, or printed.
Initially, digital cameras had difficulty capturing images of widely varying light patterns. They
generated images that were fuzzy when enlarged or washed out when using a flash. And they carried
a minimum price tag of $500. Although the concept was good and the promise of technology
development was assured, digital cameras still took a back seat to film-based cameras. However, the
market today for digital cameras has broadened significantly. Rapid technology changes in sensors,
chipsets, and pixel resolution contributed to this progress. Advanced semiconductor products, such as
CMOS sensors, helped improve digital imaging and overall camera performance. Meanwhile, the
price of digital camera components—namely charge-coupled devices (CCDs)—continues to fall
dramatically, reducing unit cost. Today, digital camera technology is transitioning from a niche
market to mass consumer market. The unit price for some entry-level digital cameras has dropped
below $50, giving the consumer greater choice in digital photography.
Just as digital still cameras are overtaking analog cameras, traditional video camcorders are migrating to digital formats. Hence, this chapter covers two distinct types of camera devices: the digital
still camera and the digital camcorder.
Digital Still Cameras
Digital still cameras are one of the more popular consumer devices at the forefront of today’s digital
convergence. They enable instantaneous image capture, support a variety of digital file formats, and
can interoperate through an ever-growing variety of communications links. And, with the Internet
providing the superhighway to disseminate information instantaneously, digital images are now at
virtually everyone’s fingertips. From e-mail to desktop publishing, captured digital images are
becoming pervasive.
Chapter 11
Capturing images of precious moments is made even easier with the use of digital still cameras.
Unlike older film-based cameras, they have the ability to store images in digital formats such as
JPEG, which enables instantaneous file exchange and dissemination. This also means that the
information content is less susceptible to degradation as time goes by. Further, the Internet provides
an easy, fast, and no-cost medium to share one’s captured images with friends and family. There is
also a growing trend towards recording video clips and appending audio notes in digital still cameras.
This feature allows the user to have a more complete record of images and demonstrates the increasing value of digital convergence.
Market for Digital Still Cameras
The digital camera market is currently in the midst of unprecedented growth. It ranges from entry-level models up through 5-megapixel (MP) point-and-shoot models, segmented by resolution level: VGA, XGA, 1 MP, 2 MP, 3 MP, and 4 MP. While digital cameras have
eclipsed film-based cameras in the high-end professional camera market, high prices have limited
their appeal to users of film-based cameras and personal computers. But these two groups will soon
see the price of digital cameras significantly reduced, and large unit sales will result. The worldwide
market for digital camera shipments is forecast by Dataquest to grow to over 16 million units by
2004. Semico forecasts digital camera sales growth from 5.2 million in 1999, to over 31 million by
2005. IC Insights reported that digital camera shipments in 2003 reached 29 million units and
associated revenues reached $6.1 billion. iSupply predicts that in 2005 digital camera shipments will
exceed 42 million units and factory revenues will reach $9.5 billion. This compares with the prediction of 35 million units of film cameras in 2005. Hence, 2005 will be the first year digital cameras
will outsell film cameras.
Some of the drawbacks of traditional analog film cameras are:
Cost of film
Cost of film processing
Cost of processing unwanted images
Film development time
Drop-off and pick-up time
Duplication and delivery
Time to retake poor quality photos
Storage of the pictures
The main reasons for digital camera growth are declining average selling prices, an improved
digital imaging infrastructure, and increased awareness of digital imaging. Not only have prices
declined, but new entry-level products have been introduced that allow consumers to sample the
technology without paying high prices. Entry-level models have piqued the curiosity of the larger mass-market segment, and the entry-level segment has been a significant driver of unit shipments for the overall market.
As opposed to just a year ago, digital camera users now have options when deciding what to do
with their digital images. The digital imaging infrastructure has developed so that digital images may
be used in a variety of ways, and not just viewed on a PC monitor. Such examples are:
Personal inkjet printers are less expensive and of higher quality.
Declining PC prices are leading to PC penetration into consumer homes.
Internet sites are offering printing, storing, and sharing services.
Software packages are offering more robust feature sets.
The awareness of such technology has increased dramatically and more consumers are giving
digital imaging a try. Coupled with improved marketing campaigns by camera, software, and Internet
vendors, this has pushed digital imaging into the spotlight as never before.
Some challenges for digital photography include consumer awareness and the
pervasiveness of digital infrastructure. Drawbacks facing the digital camera industry include:
High initial cost
Poor resolution
The need for a TV or PC to see full-size pictures
The need for a quality printer to develop the photo
The formula for success in the digital camera market is multifaceted. It is important to produce a
picture with high-quality resolution and to view it as soon as the photo is taken. Decreasing prices of
imaging sensors and inexpensive, removable picture storage (flash memory cards) are helping the
market to grow. In addition, camera, PC, and IC vendors are working together to come up with
combinations that lower prices and make their systems more attractive. This will produce affordable
systems that provide seamless functionality with other equipment.
Some popular market trends include:
The PC and the Internet – The Internet has become a very important component of the
digital imaging infrastructure. Imaging sites are battling to become the premier imaging
Website and a default site for those looking to print, store, manipulate, or share their
images. One key issue on this front is how people are getting their images to the Internet or
to a specific email address. At this point, the PC remains the epicenter of digital image
processing. Images are downloaded to the PC from the digital camera and sent to their
ultimate location.
Digital camera card readers – In order to see the pictures, one traditionally had to hook
the camera to a cable dangling from a computer port, and then run PC photo software to
download pictures from a camera to the PC. This is doing it the hard way. It requires the
camera to be turned on, which drains the camera battery while performing the download.
Card readers solve this problem. They plug into the computer and accept the memory card
to provide quick access to the photos. This allows the camera to be turned off and put away
while viewing pictures. You simply slip the card into the slot and the photos appear on the
PC screen. This extends the life of the camera’s battery since it does not need to be recharged as often. Depending on the model, readers that plug into a USB port can transfer
photos at speeds up to 80 times faster than transfers through a serial port.
Bluetooth wireless technology – Wireless technology has added a new dimension to the
image-transfer process. The Internet could eclipse the PC as the dominant imaging medium,
and the digital camera could truly become a mobile device. The adoption of Bluetooth
wireless technology within the digital camera market will help the exchange of images
between the digital camera and the PC. While digital camera usability issues are being
worked out, Bluetooth will take a back seat. However, wireless technology—particularly
Bluetooth—will become an important component of digital imaging. Initial incorporation of
Bluetooth is expected via add-on devices because vendors will be slow to build the technology into the cameras. The add-on component could make use of a variety of camera ports
such as the USB port. As the price of Bluetooth declines and digital cameras become more
user friendly and widely used, Bluetooth will be integrated into a great many digital camera
designs. The instant gratification of the digital camera will then be complemented with the
instant distribution of Bluetooth. By 2004, IDC expects Bluetooth to be integrated into 20%
of all U.S. digital camera shipments and 19% of worldwide shipments.
Megapixel – Megapixel digital cameras are breaking through feature and price barriers. Strictly, a megapixel is one million pixels (picture elements) per image. However, the looser and more widely accepted definition requires only that at least one dimension of the image be 1,000 pixels or more. Thus, a 1024 by 768 camera, which technically delivers only 786,432 pixels, still counts as a megapixel unit. Megapixel cameras provide a marked
improvement in resolution compared to early digital camera models. Improved resolution
afforded by these cameras convinced some previously hesitant consumers to buy digital
cameras. As a result, the market for megapixel cameras has grown quickly. Today it represents in excess of 98% of worldwide digital camera shipments.
PDA and cell phone cameras – As it continues to grow into a mass-market product, the
digital camera is seeing convergence with other consumer devices such as PDAs and cell
phones. PDA manufacturers are bringing out modules that allow users to snap images and
make mini-movies which can then be exchanged with friends through Bluetooth or cellular
technologies. The availability of such modules presents manufacturers with a unique value proposition.
MP3 player and digital camera convergence – Since MP3 players and digital cameras
require flash memory and similar processing power, manufacturers are bringing out
consumer devices that combine these functionalities.
Business applications – Business users have more to gain from digital photography than
home users. The technology lets the user put a photo onto the computer monitor within
minutes of shooting. This translates into a huge productivity enhancement and a valuable
competitive edge. Digitally captured photos are going into presentations, business letters,
newsletters, personnel ID badges, and Web- and print-based product catalogues. Moreover,
niche business segments that have relied heavily on traditional photography—such as real-estate agents and insurance adjusters—now embrace digital cameras wholeheartedly. If the
requirement is to capture images in electronic form in the shortest possible time, then a
digital camera is the only choice. In fact, they are ideal for any on-screen publishing or
presentation use where PC resolution is between 640 x 480 and 1024 x 768 pixels. A digital
camera in this resolution range can quickly capture and output an image in a computer-friendly, bitmapped file format. This file can then be incorporated into a presentation or a
desktop publishing layout, or published on the World Wide Web.
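The strict and loose megapixel definitions given above reduce to a simple check. The following Python sketch is purely illustrative; the function name is not from any camera API:

```python
def is_megapixel(width, height, strict=False):
    # Strict definition: one million pixels (picture elements) per image.
    # Looser, widely accepted definition: at least one image dimension
    # is 1,000 pixels or more.
    if strict:
        return width * height >= 1_000_000
    return max(width, height) >= 1000

pixels = 1024 * 768                    # 786,432 pixels
is_megapixel(1024, 768)                # True: 1024 >= 1000 (loose)
is_megapixel(1024, 768, strict=True)   # False: under one million pixels
```

This shows why a 1024 by 768 camera counts as a "megapixel" unit under the loose definition despite delivering fewer than a million pixels.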
Digital Still Camera Market Segments
The digital still camera market can be subdivided into several segments:
Entry-level – This segment includes users whose image quality requirements are not
critical. These cameras are usually low-cost and have limited picture resolution—generally
320 by 240 pixels, though as high as 640 by 480 pixels per exposure in VGA (video graphics array) models. They have limited memory and features. These entry-level digital cameras are also
known as low-end or soft-display mobile cameras.
Point-and-shoot – These cameras provide more functionality, features, and control than the
entry-level types. Point-and-shoot digital cameras also have better images (standard 640 by
480 pixel resolution) than the entry-level cameras. The common characteristics of point-and-shoot digital cameras are enhanced sensitivity, exposure control, photographic controls,
high quality images, and ergonomics design. Some differentiated features include a zoom
lens, color LCD screen, auto focus, auto flash, and removable memory cards. Closely
equivalent to 35-mm film-image quality, photo-quality point-and-shoot digital cameras are
specified with image resolutions ranging from 0.8 to 3.0 megapixels (800,000 to 3 million pixels).
Typical 35-mm film has a resolution of about 4,000 by 4,000 pixels. This segment of the
digital camera market, which is just getting started, is likely to be the fastest growing in the
coming years.
Professional – These digital cameras can produce images that equal or surpass film quality.
They are the finest, most expensive, and sophisticated cameras available to photographers.
These cameras have very high resolution, high memory capacity, and high-speed interface
technologies such as IEEE 1394 and USB 2.0. They consistently produce professional
quality images in 36-bit pixel color (12 bits per sub-pixel)—usually with resolutions greater
than 2.0 megapixels. They have a superb array of interchangeable lens options and a great
user interface complemented by numerous accessories. These cameras are designed to
replace 35-mm film cameras in professional news gatherings and document applications.
The professional-level market features expensive prepress, portrait, and studio cameras that
produce extremely high-resolution images.
Gadget – These cameras are built into toys and personal computers. They do not have as
high an image quality, but they are very good for snapshots and spur-of-the-moment image
captures. These cameras are derivatives of entry-level cameras and have stylized enclosures,
customized user interfaces, powerful imaging capabilities, and enhanced features.
Security – Security digital cameras are aimed at the surveillance market and provide image
capture of the area in focus. They are more rugged than their typical digital camera cousins
and can withstand outdoor weather. These cameras may be wired and/or wireless. They can
have time lapse and continuous scan capabilities. These cameras are rugged, fit into small
camera enclosures for covert placements, and are usually remote controlled.
Industrial digital cameras – These cameras are designed for the industrial environment
where normal non-rugged consumer type cameras would not last long. They function well in
conditions such as high and low temperature extremes, humid or wet environments, corrosive chemical environments, vibration, shock, and vacuum. Hence, they are rated for harsh
environments and have failsafe features. These cameras have automated and remote-controlled operations. They support a variety of industrial interface options such as
Ethernet, CAN (Controller Area Network), I²C (Inter IC), and RS (Recommended Standard)-485.
Web cameras (webcams) – These are entry-level cameras that are low cost and have poorer
picture quality. They are commonly used for sending images across the Internet. They are
also referred to as a PC peripheral scanner. These cameras enable the user to bring the
picture image data directly into a PC application. They can also be used for creating new
types of entertainment with the camera and PC. They serve as a handy personal communication tool for exchanging images, sound, and moving pictures. Higher resolution is necessary
when printing, but is not necessary when viewing images on a TV or PC monitor. Webcams range from the silly to the serious. A webcam might
point at anything from a coffeepot to a space-shuttle launch pad. There are business cams,
personal cams, private cams, and traffic cams. At a personal level, webcams have lots of
productive uses such as watching the house when one is out of town, checking on the
babysitter to make sure everything is okay, checking on the dog in the backyard, and letting
the grandparents see the new baby. If there is something that one would like to monitor
remotely, a webcam makes it easy!
A simple webcam consists of a digital camera attached to a computer. Cameras like these have
dropped well below $30. They are easy to connect through a USB port, whereas earlier cameras
connected through a dedicated card or the parallel port. A software application connects to the
camera and grabs a frame from it periodically. For example, the software might grab a still image
from the camera once every 30 seconds. The software then turns that image into a normal JPEG file
and uploads it to the Web server. The JPEG image can then be placed on any Web page.
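The grab-and-upload loop described above can be sketched in Python. This is a minimal simulation; grab_frame and upload are hypothetical placeholders for the real camera-driver call and the FTP/HTTP transfer to the Web server:

```python
import time

uploaded = []  # stands in for JPEG files placed on the Web server


def grab_frame():
    # Placeholder for the real capture call into the camera driver;
    # here it just returns a token standing in for JPEG bytes.
    return b"<jpeg frame>"


def upload(jpeg_bytes, name):
    # Placeholder for the FTP/HTTP upload to the Web server.
    uploaded.append(name)


def run_webcam(snapshots, interval_seconds):
    # Grab a still image periodically (the text's example uses 30 s),
    # turn it into a JPEG file, and upload it to the Web server.
    for i in range(snapshots):
        frame = grab_frame()
        upload(frame, f"webcam_{i}.jpg")
        time.sleep(interval_seconds)


run_webcam(snapshots=3, interval_seconds=0)  # 0 s here; 30 s in practice
```

In a real deployment the loop would run indefinitely and the interval would match the desired refresh rate of the Web page.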
Inside the Digital Still Camera
In principle, a digital camera is similar to a traditional film-based camera. There is a viewfinder to
aim it, a lens to focus the image onto a light-sensitive device, and some means to store and remove
images for later use. In a conventional camera, light-sensitive film captures images and stores them
after chemical development. Digital photography uses a combination of advanced image sensor
technology and memory storage. It allows images to be captured in a digital format that is available
instantly—there is no need for a “development” process.
Although the principle may be the same as a film camera, the inner workings of a digital camera
are quite different. Once a picture is snapped, an embedded processor reads the light level of each
pixel and processes it to produce a 24-bit-per-pixel color image. Soon after the picture is taken, the
JPEG image is displayed on an LCD on the back of the camera, and it may be compressed and saved to non-volatile flash memory by software. Digital cameras provide speed and the convenience
of instant development.
In the process of creating the pixels, an image is focused through a lens and onto an image
sensor, which is an array of light-sensitive diodes. The image sensor is either a charge-coupled device (CCD) or a CMOS (complementary metal-oxide semiconductor) sensor. Each sensor element
converts light into a voltage proportional to the brightness which is passed into an analog-to-digital
converter (ADC). The ADC then translates the voltage fluctuations of the CCD into discrete binary
code. Color is captured by overlaying the photodiodes with red, green, and blue filters, each passing a different range of the optical spectrum. The sensor chip is typically housed on a
daughter card along with numerous ADCs. The digital reproduction of the image from the ADC is
sent to a DSP which adjusts contrast and detail, and compresses the image before sending it to the
storage medium. The brighter the light, the higher the voltage and the brighter the resulting computer
pixel. The more elements, the higher the resolution, and the greater the detail that can be captured.
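The brightness-to-code relationship can be illustrated with a simple 8-bit quantizer. This Python fragment is a sketch only and does not model any particular ADC's transfer function:

```python
def adc_8bit(voltage, v_ref=1.0):
    # Brighter light -> higher photosite voltage -> larger binary code.
    # Clamp the voltage to the reference range, then quantize to 0..255.
    voltage = max(0.0, min(voltage, v_ref))
    return int(voltage / v_ref * 255 + 0.5)

adc_8bit(0.0)   # 0   (dark pixel)
adc_8bit(0.5)   # 128 (mid-gray)
adc_8bit(1.0)   # 255 (full brightness)
```

A real camera applies the same idea per photosite, typically at 10 or 12 bits per sample before the DSP reduces the result to a 24-bit-per-pixel color image.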
Apart from the image sensors, the semiconductor content of a digital camera’s bill-of-materials
includes embedded micro-logic, flash memory, DRAM, analog, ADC, other logic, and discrete chips.
By design, digital cameras require a considerable amount of image processing power. Microprocessor vendors, ASIC vendors, and DSP vendors all view this requirement as a potential gold mine—a
huge market poised to buy their wares. Semiconductor manufacturers are looking at integrating five
key elements of this technology by incorporating them into the camera’s silicon and software. They
want to do this in order to deal with form factor and to gain maximum silicon revenue per digital
camera. These five key elements of technology and design include:
a sensor
an image processing unit
a microprocessor
memory (DRAM)
digital film storage (flash memory)
Most of these single-chip devices accept images from a CCD or CMOS sensor, process the incoming image, filter and compress it, and pass it on to storage.
Image Sensors
An image sensor is a semiconductor device that converts photons to electrons for display or storage
purposes. An image sensor is to a digital camera what film is to a 35-mm camera—the device that
plays the central role in converting light into an image. The image sensor acts as the eye of the
camera, capturing light and translating it to an analog signal.
Digital image-sensor devices such as CMOS sensors and CCDs are the key components for clear
and bright resolution in digital cameras. The CCD or CMOS sensors are fixed in place and can continue
to take photos for the life of the camera. There is no need to wind film between two spools, which helps
minimize the number of moving parts. The more powerful the sensor, the better the picture resolution.
CCD Image Sensors
Charge-coupled devices are the technology at the heart of most digital cameras. They replace both
the shutter and film found in conventional cameras. The origins of the CCD lie in the 1960s when the
hunt was on for inexpensive, mass-producible memory solutions. Its eventual application as an
image-capture device had not even occurred to the scientists working with the initial technology.
Working at Bell Labs in 1969, Willard Boyle and George Smith came up with the CCD as a way to
store data, and Fairchild Electronics created the first imaging CCD in 1974. It was the predominant
technology used to convert light to electrical signals. Prior to use in digital cameras, CCDs were
implemented in video cameras (for commercial broadcasts), telescopes, medical imaging systems,
fax machines, copiers, and scanners. It was some time later before the CCD became part of the mainstream technology that is now the digital camera.
The CCD works like an electronic version of a human eye. Each CCD consists of millions of
cells known as photosites, or photodiodes. Photosites are essentially light collecting wells that
convert optical information into an electric charge. When light particles, or photons, enter the silicon
body of the photosite, they provide enough energy for negatively charged electrons to be released.
The more light that enters the photosite, the more free electrons that are made available. Each
photosite has an electrical contact attached to it. When a voltage is applied to this contact
(whenever photons enter the photosite), the silicon immediately below the photosite becomes
receptive to the freed electrons and acts as a container, or collection point, for them. Thus, each
photosite has a particular electrical charge associated with it. The greater the charge, the brighter the
intensity of the associated pixel.
The next stage in the process passes this charge to what is known as a read-out register. As the
charges enter and then exit the read-out register, they are deleted from it. Since the charge in each row is coupled to the next, this has the effect of dragging the next row in behind it. The signals are then
passed—as free of signal noise as possible—to an amplifier, and thence on to the ADC.
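The row-by-row transfer into the read-out register can be modeled with a short Python sketch. This is a deliberate simplification that ignores amplification and noise:

```python
def ccd_readout(sensor_rows):
    # Each clock cycle, the bottom row of charge packets is transferred
    # into the read-out register and read out serially; because each row
    # is coupled to the next, the rows above shift down to take its place.
    samples = []
    rows = [row[:] for row in sensor_rows]   # copy; charges are consumed
    while rows:
        readout_register = rows.pop()        # bottom row exits the array
        samples.extend(readout_register)     # serial read to amplifier/ADC
    return samples

ccd_readout([[1, 2], [3, 4]])  # [3, 4, 1, 2]
```

The output ordering shows why the readout is inherently serial: every charge packet must pass through the same register on its way to the amplifier and ADC.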
The photosites on a CCD actually respond to the intensity of light of any hue, not to its color. Color is added to
the image by means of red, green, and blue filters placed over each pixel. As the CCD mimics the
human eye, the ratio of green filters to that of red and blue is two-to-one. This is because the human
eye is most sensitive to yellow-green light. As a pixel can only represent one color, the true color is
made by averaging the light intensity of the pixels around it—a process known as color interpolation.
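Color interpolation can be sketched for the green channel: averaging the four green neighbors of a red or blue photosite is the simplest bilinear demosaicing scheme. The layout and numbers below are illustrative only:

```python
def interpolate_green(mosaic, row, col):
    # A red or blue photosite records no green value of its own; estimate
    # it by averaging its four green neighbors (bilinear demosaicing).
    neighbors = [mosaic[row - 1][col], mosaic[row + 1][col],
                 mosaic[row][col - 1], mosaic[row][col + 1]]
    return sum(neighbors) / len(neighbors)

# In a Bayer layout, the four orthogonal neighbors of a red/blue site
# are green samples:
bayer = [[ 90, 100,  95],
         [110,  60, 120],   # centre (row 1, col 1) is a red/blue site
         [ 85,  80,  92]]
interpolate_green(bayer, 1, 1)  # (100 + 80 + 110 + 120) / 4 = 102.5
```

Production cameras use more elaborate interpolation to avoid color fringing at edges, but the principle of estimating missing color samples from neighbors is the same.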
Today, CCDs are the superior product for capturing and reproducing images. They are excellent
devices in terms of picture resolution and color saturation, and they have a high signal-to-noise ratio (i.e., very little granular distortion visible in the image). On the down side, CCDs are
complex, power hungry, and expensive to produce. Depending on complexity, each CCD requires
three-to-eight supporting circuits in addition to multiple voltage sources. For these reasons, some IC
manufacturers believe CCDs have neared the end of their usefulness, especially in emerging applications that emphasize portability, low power consumption, and low cost.
Recognizing a glass ceiling in the conventional CCD design, Fujifilm (Fuji Photo Film Company) and Fujifilm Microdevices have developed a new, radically different CCD (Super CCD). It has
larger, octagonal-shaped photosites situated on 45-degree angles in place of the standard square
shape. This new arrangement is aimed at avoiding the signal noise that has previously placed limits
on the densities of photosites on a CCD. It is also aimed at providing improved color reproduction,
improvements in signal-to-noise ratio (SNR), image resolution, and power consumption. It also
offers a wider dynamic range and increased light sensitivity, all attributes that result in sharper, more
colorful digital images.
CMOS Image Sensors
CMOS sensor image capture technology first emerged in 1998 as an alternative to CCDs. The CMOS
imaging sensors include both the sensor and the support circuitry (ADC, timing circuits, and other
functions) on the same chip—circuitry needed to amplify and process the detected image. This
technology has a manufacturing advantage over CCD because the processes for producing CMOS are
the same as those currently used to produce processor, logic, and memory chips. CMOS chips are
significantly less expensive to fabricate than specialty CCDs since they use proven, high-yield production techniques with an existing infrastructure already in place. Another advantage is that they have significantly lower power requirements than CCDs, thereby helping to extend camera battery life. They also offer a smaller form factor and reduced weight compared to CCDs. Furthermore, while CCDs have the single function of registering where light falls on each of the
hundreds-of-thousands of sampling points, CMOS can also be loaded with a host of other supporting
tasks such as analog-to-digital conversion, load signal processing, white balance, and camera control
handling. It is also possible to increase CMOS density and bit depth without increasing the cost.
All of these factors will help CMOS sensors reduce the overall cost of the digital cameras. Some
estimates conclude that the bill-of-materials cost for a CMOS-based digital camera design is 30% lower than for a CCD-based design. Other estimates note that CMOS sensors need
1% of the system power and only 10% of the physical space of CCDs, with comparable image
quality. Also, as process migration continues to sub-micron levels, manufacturing CMOS sensors
using these processes will allow greater integration, resulting in increasing performance and lower
costs compared to CCDs. Increasing the on-chip processing of CMOS sensors (while also lowering
costs) provides additional potential for CMOS to outperform CCDs in image quality.
For these and other reasons, many industry analysts believe that almost all entry-level digital
cameras will eventually be CMOS-based. They believe that only mid-range and high-end units (for
PC video conferencing, video cell phones, and professional digital cameras) will use CCDs. Current
problems with CMOS, such as noisy images and an inability to capture motion correctly, remain to
be solved. Currently, CMOS technology clearly has a way to go before reaching parity with CCD technology.
However, developments are ongoing to improve the resolution and image quality of CMOS
image sensors. This, combined with the use of 0.15-and-smaller micron processing, enables more
pixels to be packed into a given physical area, resulting in a higher resolution sensor. Transistors
made with the 0.15-micron process are smaller and do not take up as much of the sensor space which
can then be used for light detection instead. This space efficiency enables sensor designs that have
smarter pixels that can provide new capabilities during image exposure without sacrificing light
sensitivity. With the release of higher quality CMOS sensors, this sensor technology will begin
penetrating digital camera design in high-quality professional markets. These markets include
professional cameras, film scanners, medical imaging, document scanning, and museum archiving.
In the longer term, it is anticipated that the sensor’s underlying technology will migrate down to the
larger consumer markets.
Differences between CCD and CMOS image sensors are described in Table 11.1:
Table 11.1: CCD vs. CMOS Sensors

CCD: small pixel size; low noise; low dark current; high sensitivity. However: multiple chips required; multiple supply voltages needed; specialized manufacturing needed.

CMOS: single power supply; low power; single master clock; easy integration of circuitry; low system cost.
Two types of memory are available for storing images in digital cameras: removable storage and disk storage.
Removable Storage
Just as processing power is critical for high-quality images, flash memory is required to store
digital images. Many first-generation digital cameras contained one or two megabytes of internal
memory suitable for storing around 30 standard-quality images at a size of 640 x 480 pixels. Unfortunately, once the memory had been filled, no more pictures could be taken until they were
transferred to a PC and deleted from the camera. To get around this, modern digital cameras use
removable storage. This offers two main advantages. First, once a memory card is full it can simply
be removed and replaced by another. Second, given the necessary PC hardware, memory cards can
be inserted directly into a PC and the photos read as if from a hard disk. Each 35 mm-quality digital
camera photo consumes a minimum of 800 KB of compressed data, or 3 MB of uncompressed data.
In order to store 24 photos, a digital camera will typically incorporate a 40-MB or 80-MB memory card.
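The figures above imply a simple capacity calculation. This Python sketch is illustrative only, using the text's 800 KB compressed and 3 MB uncompressed sizes per photo:

```python
def photos_per_card(card_mb, kb_per_photo):
    # How many photos fit on a card, taking 1 MB = 1,024 KB.
    return (card_mb * 1024) // kb_per_photo

photos_per_card(40, 800)       # 51 photos at 800 KB compressed
photos_per_card(80, 3 * 1024)  # 26 photos at 3 MB uncompressed
```

At the uncompressed size, roughly two dozen photos fill an 80-MB card, which matches the sizing rule of thumb quoted above.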
To store an equivalent number of images with quality similar to conventional film, a digital
camera will use a flash memory card. Since 1999 two rival formats have been battling for domination
of the digital camera arena—CompactFlash and SmartMedia. First introduced in 1994 by SanDisk
Corporation, and based on flash memory technology, CompactFlash provides non-volatile storage
that does not require a battery to retain data. It is essentially a PC Card flash card that has been
reduced to about one quarter of its original size. It uses a 50-pin connection that fits into a standard
68-pin Type II PC Card adapter. This makes it easily compatible with devices designed to use PC
Card flash RAM, with maximum capacities reaching 512 MB. Originally known by the awkward
abbreviation SSFDC (Solid State Floppy Disk Card) when it first appeared in 1996, the Toshiba-developed SmartMedia cards are significantly smaller and lighter than CompactFlash cards. The
SmartMedia card uses its own proprietary 22-pin connection. But like its rival format, it is also
PCMCIA-ATA-compatible. It can therefore be adapted for use in notebook PC Card slots. The
SmartMedia card is capable of storing 560 high-resolution (1200 x 1024) still photographs, with
cost-per-megabyte being similar to that of CompactFlash.
Devices are available for both types of media to allow access via either a standard floppy disk
drive or a PC’s parallel port. The highest performance option is a SCSI device that allows PC Card
slots to be added to a desktop PC. CompactFlash has a far sturdier construction than its rival,
encapsulating the memory circuitry in a hard-wearing case. SmartMedia has its gold-colored contact
surface exposed and prolonged use can cause scoring on the contact surface. Its memory circuitry is
set into resin and sandwiched between the card and the contact. Storage capacity is becoming an
increasingly important aspect of digital camera technology. It is not clear which format will emerge
as winner in the standards battle. SmartMedia has gotten off to a good start, but CompactFlash is
used in PDAs as well, and this extra versatility might prove an important advantage in the long run.
A third memory technology, Sony’s Memory Stick, is also being adopted for digital still cameras. Smaller than a stick of chewing gum and initially available with a capacity of 32 MB, Memory
Stick is designed for use in small AV electronics products such as digital cameras and camcorders. Its
proprietary 10-pin connector ensures foolproof insertion, easy removal, and reliable connection. And
a unique Erasure Prevention Switch helps protect stored data from accidental erasure. Capacities had
risen to 128 MB by late 2001, with the technology roadmap for the product going all the way up to 1
GB. Infineon (formerly Siemens Semiconductor) and Hitachi announced their combined efforts in the development and production of the new multimedia card. It is essential for vendors to consider these
memory card variations when planning future applications aimed at gaining a large share of the
consumer market.
These cards will require at least 200 MB of flash memory, and will be able to write 12 MB of
memory in a few seconds. At least one flash card is sold with the cameras, and it is used to store
pictures until they can be printed or transferred to a PC. Consumers can purchase greater capacity
flash memory cards as accessories. The highest capacity flash memory card on the market at the time
of this writing is 256 MB. This is capable of storing up to eight hours of digital music, more than 80
minutes of MPEG-4 video, or more than 250 high-resolution digital images.
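The arithmetic behind capacity claims like these is simple division. A minimal sketch in Python (the per-item file sizes are illustrative assumptions, not figures from this chapter):

```python
# Rough capacity estimates for a 256-MB flash card. The file sizes
# are illustrative assumptions: ~1 MB per high-resolution JPEG still
# and ~32 MB per hour of compressed digital music.
CARD_MB = 256

def images_that_fit(card_mb, mb_per_image=1.0):
    """How many stills of a given average size fit on the card."""
    return int(card_mb // mb_per_image)

def music_hours(card_mb, mb_per_hour=32.0):
    """Hours of compressed audio that fit on the card."""
    return card_mb / mb_per_hour

print(images_that_fit(CARD_MB))  # 256 stills at ~1 MB each
print(music_hours(CARD_MB))      # 8.0 hours at ~32 MB per hour
```

At roughly 1 MB per image, the 250-image figure quoted above falls out directly; the 8-hour music figure implies an average audio bit rate of about 71 kb/s.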
Disk Storage
With the resolution of still digital cameras increasing apace with the emergence of digital video
cameras, the need for flexible, high-capacity image storage solutions has never been greater. Some
higher-end professional cameras use PCMCIA hard disk drives as their storage medium. They
consume no power once images are recorded and have much higher capacity than flash memory. For
example, a 170-MB drive is capable of storing up to 3,200 “standard” 640 by 480 images. But the
hard disk option has some disadvantages. An average PC card hard disk consumes around 2.5 watts
of power when spinning idle, more when reading/writing, and even more when spinning up. This
means it is impractical to spin up the drive, take a couple of shots, and shut it down again. All shots
have to be taken and stored in one go, and even then the camera’s battery will last a pitifully short
length of time. Fragility and reliability are also a major concern. Moving parts and the extremely
tight mechanical tolerances to which hard drives are built make them inherently less reliable than
solid-state media.
One of the major advantages of a digital camera is that it is non-mechanical. Since everything is
digital, there are no moving parts, and a lot less that can go wrong. However, this did not deter Sony
from taking a step that can be viewed as being both imaginative and retrograde at the same time.
They included an integrated 3.5-inch floppy disk drive in some of their digital cameras.
The floppy disk media is universally compatible, cheap, and readily available. It is also easy to
use—no hassles with connecting wires or interfaces. However, the integrated drive obviously added
both weight and bulk to a device that is usually designed to be as compact as possible. Sony has
introduced a digital camera that can store images on 8-cm/185-MB CD-R media. A mini-CD provides sufficient capacity to store around 300 images at 640 x 480 resolution using JPEG compression. They have also announced plans to provide support for mini re-writable media as well as
write-once CD-R media. Certain trade-offs (such as longer write times) were unavoidable to keep performance acceptable for digital camera applications. Notwithstanding these constraints, the primary benefit of CD-RW media—its reusability—is fully realized. Users
are able to delete images one at a time, starting with the most recent and working backward, and also
have the option to erase an entire disk via a “format” function.
Despite the trend towards removable storage, digital cameras still allow connection to a PC for the
purpose of image downloading. Transfer is usually via a conventional RS-232 serial cable at a
maximum speed of 115 Kb/s. Some models offer a fast SCSI connection. The release of Windows 98
in mid-1998 brought with it the prospect of connection via the Universal Serial Bus (USB), and
digital cameras are now often provided with both a serial cable and a USB cable. The latter is the
preferable option, allowing images to be downloaded to a PC more than three times faster than using
a serial connection. USB 2.0 offers 480 Mb/s support for transferring data and images. Bluetooth is
the next-generation technology that will allow the wireless transfer of images from the digital camera
to the PC.
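The gap between these interfaces is easy to quantify. A rough sketch of ideal transfer times (the 500-KB image size is a hypothetical example, and real-world throughput is lower than the nominal link rate due to protocol overhead):

```python
# Idealized time to move one image file over different links.
def transfer_seconds(size_bytes, link_bits_per_sec):
    """Seconds to transfer size_bytes at the given link rate."""
    return size_bytes * 8 / link_bits_per_sec

IMAGE_BYTES = 500 * 1024  # a hypothetical 500-KB JPEG

serial = transfer_seconds(IMAGE_BYTES, 115_000)     # RS-232 at 115 kb/s
usb2 = transfer_seconds(IMAGE_BYTES, 480_000_000)   # USB 2.0 at 480 Mb/s

print(f"serial: {serial:.1f} s, USB 2.0: {usb2:.4f} s")
```

Even allowing for overhead, the difference of several orders of magnitude explains why USB quickly displaced serial cables for image downloading.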
Supplying a digital camera with drivers that allow the user to simply download images to a
standard image editing application is also becoming increasingly common. Some digital cameras
provide a video-out socket and S-Video cable to allow images to be displayed directly to a projector,
TV, or VCR. Extending the “slide show” capability further, some allow images to be uploaded to the
camera, enabling it to be used as a mobile presentation tool.
An increasing number of digital cameras have the ability to eliminate the computer and output
images directly to a printer. But without established interface standards, each camera requires a
dedicated printer from its own manufacturer. As well as the more established printer technologies,
there are two distinct technologies used in this field: thermo autochrome and dye sublimation.
Picture quality
The picture quality of a digital camera depends on several factors, including the optical quality of the
lens and image-capture chip, and the compression algorithms. However, the most important determinant of image quality is the resolution of the CCD. The more elements, the higher the resolution, and
the greater the detail that can be captured.
Despite the massive strides made in digital camera technology in recent years, conventional
wisdom remains that they continue to fall behind the conventional camera and film when it comes to
picture quality, though they offer flexibility advantages. However, since this assertion involves the
comparison of two radically different technologies, it is worth considering more closely.
Resolution is the first step to consider. While it is easy to state the resolution of a digital
camera’s CCD, expressing the resolution of traditional film in absolute terms is more difficult.
Assuming a capture resolution of 1280 x 960 pixels, a typical digital camera is capable of producing a frame size of just over 1.2 million pixels. A modern top-of-the-line camera lens is capable of
resolving at least 200 pixels per mm. Since a standard 100 ASA, 35 mm negative is 24 x 36 mm, this
gives an effective resolution of 24 x 200 x 36 x 200, or 34,560,000 pixels. This resolution is rarely
achieved in practice and rarely required. However, on the basis of resolution, it is clear that digital
cameras still have some way to go before they reach the same level of performance as their conventional
film camera counterparts.
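The resolution comparison above can be reproduced directly:

```python
# The chapter's film-resolution estimate: a 24 x 36 mm negative,
# resolved at 200 pixels per mm in each direction, versus a
# 1280 x 960 CCD.
def film_pixels(width_mm, height_mm, pixels_per_mm):
    """Effective pixel count of a film frame at a given resolving power."""
    return (width_mm * pixels_per_mm) * (height_mm * pixels_per_mm)

film = film_pixels(36, 24, 200)  # 34,560,000 pixels
ccd = 1280 * 960                 # 1,228,800 pixels

print(film // ccd)  # film holds roughly 28x the pixel count
```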
The next factor to consider is color. Here, digital cameras have an advantage. Typically, the
CCDs in digital cameras capture color information in 24 bits-per-pixel. This equates to 16.7 million
colors and is generally considered to be the maximum number the human eye can perceive. On its
own this does not constitute a major advantage over film. However, unlike the silver halide crystals
in a film, a CCD captures each of the three component colors (red, green, and blue) with no bias.
Photographic film tends to have a specific color bias dependent on the type of film and the manufacturer. This can have an adverse effect on an image according to its color balance.
However, it is also the silver halide crystals that give photographic film its key advantage. While
the cells on a CCD are laid out in rows and columns, the crystals on a film are randomly arranged
with no discernible pattern. As the human eye is very sensitive to patterns, it tends to perceive the
regimented arrangement of the pixels captured by a CCD very easily, particularly when adjacent
pixels have markedly different tonal values. When you magnify photographic film there will be no
apparent regularity, though the dots will be discernible. It is for this reason that modern inkjet
printers use a technique known as “stochastic dithering” which adds a random element to the pattern
of the ink dots in order to smooth the transition from one tone to the next. Photographic film does
this naturally, so the eye perceives the results as less blocky when compared to digital stills.
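The effect of stochastic dithering can be sketched in a few lines. This is an illustrative simplification, not the algorithm any particular printer uses: each pixel is thresholded against either a fixed value (producing a hard, visible band in a gradient) or a randomized one (producing a dot density that tracks the tone):

```python
import random

def threshold_dither(gray_pixels, fixed=True, seed=42):
    """Convert 0-255 grayscale values into black (0) / white (1) dots.

    With fixed=True every pixel is compared against the same 128
    threshold, so a smooth gradient collapses into one hard band.
    With fixed=False each pixel gets a random threshold, so the dot
    density tracks the tone and the transition looks smooth -- the
    random element that stochastic dithering adds.
    """
    rng = random.Random(seed)
    out = []
    for v in gray_pixels:
        threshold = 128 if fixed else rng.randint(1, 255)
        out.append(1 if v >= threshold else 0)
    return out

gradient = list(range(0, 256, 8))               # a smooth ramp of tones
print(threshold_dither(gradient, fixed=True))   # abrupt jump from 0s to 1s
print(threshold_dither(gradient, fixed=False))  # mixed dots in the mid-tones
```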
There are two possible ways around this problem for digital cameras. Manufacturers can develop
models that can capture a higher resolution than the eye can perceive. Or, they can build in dithering
algorithms that alter an image after it has been captured by the CCD. Both options have downsides,
however, such as increased file sizes and longer processing times.
A color LCD panel is a feature that is present on all modern digital cameras. It acts as a mini GUI,
allowing the user to adjust the full range of settings offered by the camera. It is also an invaluable aid
to previewing and arranging photos without the need to connect to a PC. Typically, this can be used
to simultaneously display a number of the stored images in thumbnail format. It can also be used to
view a particular image full-screen, zoom in close, and, if required, delete it from memory.
Few digital cameras come with a true, single-lens reflex (SLR) viewfinder. (Here, what the user
sees through the viewfinder is exactly what the camera’s CCD “sees.”) Most have the typical
compact camera separate viewfinder which sees the picture being taken from a slightly different
angle. Hence, they suffer the consequent problems of parallax. Most digital cameras allow the LCD
to be used for composition instead of the optical viewfinder, thereby eliminating this problem. On
some models this is hidden on the rear of a hinged flap that has to be folded out, rotated, and then
folded back into place. On the face of it this is a little cumbersome, but it has a couple of advantages
over a fixed screen. First, the screen is protected when not in use. Second, it can be flexibly positioned to allow the photographer to take a self-portrait, or to hold the camera overhead while
retaining control over the framing of the shot. It also helps with one of the common problems in
using an LCD viewfinder—difficulty viewing the screen in direct sunlight. The other downside is that
prolonged use causes batteries to drain quickly.
To remedy this problem, some LCDs are provided with a power-saving skylight to allow them to
be used without the backlight. In practice, this is of limited value: if there is sufficient ambient light for the skylight to work, the chances are that the same light will wash out the LCD and render it unusable.
Digital cameras are often described as having lenses with equivalent focal lengths to popular 35
mm camera lenses. In fact, most digital cameras feature autofocus lenses with focal lengths around 8
mm. These provide equivalent coverage to a standard film camera because the imaging CCDs are so
much smaller than a frame of 35 mm film. On some cameras, aperture and shutter speed control are fully automated but still allow manual adjustment. Although optical resolution is not an
aspect that figures greatly in the way digital cameras are marketed, it can have a very important role
in image quality. Digital camera lenses typically have an effective range of up to 20 feet, an ISO
equivalency of between 100 and 160, and support shutter speeds in the 1/4 of a second to 1/500th of
a second range.
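A lens's 35-mm-equivalent focal length follows from the "crop factor," the ratio of the 35-mm frame diagonal to the sensor diagonal. A short sketch (the sensor dimensions below are a hypothetical example; the text gives only the approximate 8-mm actual focal length):

```python
import math

# Equivalent focal length via the crop factor: the 35-mm frame is
# 36 x 24 mm, so its diagonal is about 43.3 mm. Much smaller CCDs
# make short lenses behave like "standard" 35-mm lenses.
def crop_factor(sensor_w_mm, sensor_h_mm):
    return math.hypot(36, 24) / math.hypot(sensor_w_mm, sensor_h_mm)

def equivalent_focal_mm(actual_mm, sensor_w_mm, sensor_h_mm):
    return actual_mm * crop_factor(sensor_w_mm, sensor_h_mm)

# A small CCD of roughly 5.3 x 4.0 mm (an assumed size):
print(round(equivalent_focal_mm(8, 5.3, 4.0)))  # ~52, a "standard" lens
```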
Zoom capability provides the camera with a motorized zoom lens that has an adjustable focal
length range. It can be equivalent to anything between a 36-mm (moderate wide angle) and 114-mm
(moderate telephoto) lens on a 35-mm format camera. But zoom doesn’t always mean a close up;
you can zoom out for a wide-angle view or you can zoom in for a closer view. Digital cameras may
have an optical zoom, a digital zoom, or both. An optical zoom actually changes the focal length of
your lens. As a result, the image is magnified by the lens (called the optics or optical zoom). With
greater magnification, the light is spread across the entire CCD sensor and all of the pixels can be
used. You can think of an optical zoom as a true zoom that will improve the quality of your pictures.
Some cameras have a gradual zoom action across the complete focal range, while others provide two
or three predefined settings. Digital zoom does not increase the image quality, but merely takes a
portion of an image and uses the camera’s software (and a process called interpolation) to automatically resize it to a full-screen image. Let’s say that one is shooting a picture with a 2X digital zoom.
The camera will use half of the pixels at the center of the CCD sensor and ignore all the other pixels.
Then it will use software interpolation techniques to add detail to the photo. Although it may look
like you are shooting a picture with twice the magnification, you can get the same results by shooting the photo without a zoom, and later enlarging the picture using your computer software. Some
digital cameras provide a digital zoom feature as an alternative to a true optical zoom. Others
provide it as an additional feature, effectively doubling the range of the camera’s zoom capability.
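The 2X digital zoom described above amounts to cropping the central region and interpolating back to full size. A minimal sketch using nearest-neighbor interpolation (real cameras use more sophisticated filters, but the principle is the same):

```python
# A 2X digital zoom: keep the central half of the frame in each
# dimension, then upscale back to full size. Nearest-neighbor
# interpolation (pixel repetition) is used here for brevity.
def digital_zoom_2x(image):
    """image is a list of rows of pixel values; returns the same size."""
    h, w = len(image), len(image[0])
    # Crop the central h/2 x w/2 region of the sensor.
    crop = [row[w // 4:w // 4 + w // 2]
            for row in image[h // 4:h // 4 + h // 2]]
    # Upscale 2x by repeating each pixel and each row.
    out = []
    for row in crop:
        doubled = [p for p in row for _ in (0, 1)]
        out.append(doubled)
        out.append(list(doubled))
    return out

frame = [[r * 10 + c for c in range(8)] for r in range(8)]
zoomed = digital_zoom_2x(frame)
print(len(zoomed), len(zoomed[0]))  # 8 8: same frame size, half the scene
```

No new detail is created, which is exactly why the text notes that the same result can be had by enlarging the picture later in software.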
A macro function is often provided for close-up work. This allows photos to be taken at a
distance of as close as 3 cm. But it more typically supports a focal range of around 10–50 cm. Some
digital cameras even have swiveling lens units capable of rotating through 270 degrees. They allow a
view of the LCD viewfinder panel regardless of the angle of the lens itself.
Some cameras offer a number of image exposure options. One of the most popular is a burst
mode that allows a number of exposures to be taken with a single press of the shutter—as many as 15
shots in a burst, at rates of between 1 and 3 shots-per-second. A time-lapse feature is also common
which delays multi-picture capture over a pre-selected interval. Another option is the ability to take
four consecutive images—each using only one-quarter of the available CCD array. This results in
four separate images stored on a single frame. Yet another is the ability to take multiple exposures at
a preset delay interval and tile the resulting images in a single frame.
Some cameras provide a manual exposure mode which allows the photographer a significant
degree of artistic license. Typically, four parameters can be set in this mode—color balance, exposure compensation, flash power, and flash sync. Color balance can be set for the appropriate lighting
condition—daylight, tungsten, or fluorescent. Exposure compensation alters the overall exposure of
the shot relative to the metered “ideal” exposure. This feature allows a shot to be intentionally under- or over-exposed to achieve a particular effect. A flash power setting allows the strength of the flash to
be incrementally altered. And a flash sync setting allows use of the flash to be forced, regardless of
the camera’s other settings.
Features allowing a number of different image effects are becoming increasingly common. This
allows the selection of monochrome, negative, and sepia modes. Apart from their use for artistic
effect, the monochrome mode is useful for capturing images of documents for subsequent optical
character recognition (OCR). Some digital cameras also provide a “sports” mode that adds sharpness
to the captured images of moving objects, and a “night shooting” mode that allows for long exposures.
Panoramic modes differ in their degree of complexity. At the simpler end of the spectrum is the
option for a letterbox aspect image that simply trims off the top and the bottom edges of a standard
image, taking up less storage space as a result. More esoteric is the ability to produce pseudo-panoramic shots by capturing a series of images and then using special-purpose software to combine
them into a single panoramic landscape.
A self-timer is a common feature, typically providing a 10-second delay between the time the
shutter is activated and when the picture is taken. All current digital cameras also have a built-in
automatic flash with a manual override option. The best cameras have a flash working range of up to
12 feet and provide a number of different flash modes. These include auto low-light, backlight flash, fill flash for shadow reduction in bright lighting, force-off for indoor and mood photography, and red-eye reduction. Red-eye is caused by light reflected back from the retina, which is covered in
blood vessels. One system works by shining an amber light at the subject for a second before the
main burst of light, causing the pupil to shrink so that the amount of reflected red light is reduced.
Another feature now available on digital cameras is the ability to watermark a picture with a
date and time, or with text. The recent innovation of built-in microphones provides for sound
annotation in standard WAV format. After recording, this sound can be sent to an external device for
playback, or played back on headphones using an ear socket.
There are additional features that demonstrate the digital camera’s close coupling with PC
technology. One such feature allows thumbnail images to be emailed directly by camera-resident
software. Another is the ability to capture short video clips that can be stored in MPEG-1 format.
Higher-end models also provide support for two memory cards and have features more commonly
associated with SLR-format cameras. These features include detachable lenses and the ability to
drive a flash unit from either the integrated hot shoe or an external mount.
It is important to note that shooting with a digital camera is not always like shooting with a film
camera. Most units exhibit a 1- to 2-second lag time from when the shutter button is pressed to when
the camera captures the image. Getting used to this problem can take some time, and it makes some
cameras ill-suited for action shots. However, this is an area of rapid improvement and some of the
most recent cameras to reach the market have almost no delay.
Most digital cameras also require recovery time between shots for post-capture processing. This
includes converting the data from analog to digital, mapping, sharpening, and compressing the
image, and saving the image as a file. This interval can take from a few seconds to half-a-minute,
depending on the camera and the condition of the batteries.
In addition to regular alkaline batteries, most digital cameras use rechargeable nickel-cadmium or nickel-metal-hydride batteries. Battery lifetimes vary greatly from camera to camera. As a general rule, the rechargeable batteries are typically good for between 45 minutes and 2 hours of shooting, depending on how much the LCD and flash are used. A set of four alkaline AA cells has a typical lifetime of
1 hour.
Figure 11.1 shows the block diagram of a digital still camera and camcorder.
Figure 11.1: Digital Camera System
The basic components of a typical digital camera include:
Lens system – Consists of a lens, sensor (CMOS or CCD), servo motor, and driver.
Analog front end – Conditions the analog signal captured by the sensor, and converts it into
digital format before passing the signal on to the image processor.
Image processor – Major tasks of the image processor chip include initial image processing, gamma correction, color space conversion, compression, decompression, final image
processing, and image management.
Storage – Typically flash memory.
Display – Typically a color LCD screen.
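Of the image-processor tasks listed, gamma correction is the simplest to illustrate: sensor output is linear in light intensity, and a power-law curve redistributes those values for display. A minimal sketch (the 2.2 exponent is a common display convention, an assumption rather than a figure from this chapter):

```python
# Gamma correction, one step in the image-processing pipeline above.
# Sensor output is linear in light intensity; raising it to 1/gamma
# brightens dark tones so the image looks right on a display.
def gamma_correct(linear, gamma=2.2):
    """Map a linear sensor value in [0.0, 1.0] to a display value."""
    return linear ** (1.0 / gamma)

# Dark tones get the largest boost; highlights are barely touched:
for v in (0.05, 0.25, 0.5, 1.0):
    print(f"{v:.2f} -> {gamma_correct(v):.3f}")
```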
Digital Camcorders
Camcorders, or video camera-recorders, have been around for nearly 20 years. Camcorders have
really taken hold in the United States, Japan, and around the world because they are an extremely
useful piece of technology available for under $500.
Digital camcorders are one of the more popular consumer devices at the front lines of today’s
digital convergence. As such, they enable instantaneous image and/or video capture, typically
support a variety of digital file formats, and can interoperate through an ever-growing variety of
communications links. Additionally, with the Internet providing the medium to disseminate information instantaneously, digital video clips are now at virtually everyone’s fingertips. From e-mail to
desktop video editing, digital video clips are becoming pervasive.
Market Demand for Digital Camcorders
The market demand for video related equipment such as digital camcorders is rapidly growing.
Gartner Dataquest reports that digital camcorders are expected to have a 28.7% CAGR (or compound
annual growth rate) between 2001 and 2004. This compares to analog camcorders, which are expected to see
a 7.5% CAGR for the same period. By 2004, the total market for digital camcorders is expected to
grow to 17 million units and factory revenues of U.S. $10.5 billion.
The Camcorder History
The video cassette recorder (VCR) allowed consumers to watch recorded video media whenever they
wanted. But the development of the VCR led naturally to the development of a combined video
camera and recording technology—the camcorder. A new generation of video cameras that are
entirely digital is emerging, and with it a trend towards totally digital video production. Previously,
desktop video editing relied on “capturing” analog video from Super VHS or Hi-8, or from the
professional level Betacam SP. Then, it would have to be converted into digital files on a PC’s hard
disk—a tedious and “lossy” process, with significant bandwidth problems. The digital videocassette
is small, contains metal-oxide tape, and is about three-quarters the size of a DAT. This cassette
format confers the significant advantage of allowing the cameras, capture cards, editing process, and
final mastering to all remain within the digital domain, end-to-end.
The first glimmer of camcorder things to come arrived in September 1976 when, for the first
time, JVC announced VHS in Japan. Along with its first VHS format VCR, JVC also showed two
companion video cameras, each weighing about three pounds. They could be attached to a 16.5-pound shoulder-slung portable VCR. Two years later, Sony unveiled its first portable Beta format
VCR/video camera combination. And RCA followed with its own two-piece VHS system in 1978.
These early portable video attempts were bulky and required a separate camera and portable
VCR. Almost simultaneously in 1982, two companies announced CAMera/reCORDER, or camcorder, combinations. JVC unveiled its new mini-VHS format, VHS-C. The same year
Sony announced its Betamovie Beta camcorder which it advertised with the catch-phrase, “Inside
This Camera Is a VCR.” The first Betamovie camcorder hit stores in May 1983. In 1984 photo giant
Kodak introduced a new camcorder format with its first 8-mm camcorder, the KodaVision 2000.
Sony followed with its 8-mm camcorder the following January, and the first hi-band 8-mm, or Hi8,
camcorder, in 1988.
Television producers took advantage of both our love of technology and our penchant for
voyeurism by introducing a plethora of “reality-based” TV shows, using camcorder footage submitted by viewers. The first of these was America’s Funniest Home Videos which premiered as a special
in 1989. It was a top 20 show in its first three seasons. The show, spurred on by the popularity of the
camcorder, stimulated even greater sales of the device. The camcorder reached its voyeuristic heights
in 1991 when George Holliday caught a trio of Los Angeles policemen beating a motorist named
Rodney King. The resulting furor prompted police departments across the country to install video
cameras in their patrol cars and consumers to start using camcorders to record civil disturbances.
Many disasters and news events would subsequently be captured by amateur videophiles. As the
quality of the footage produced by camcorders increased, many shoestring cable organizations
started to employ camcorders instead of professional video equipment. At home, film records of
family celebrations such as weddings and bar mitzvahs were replaced by video.
In 1992 Sharp became the first company to build in a color LCD screen to replace the conventional viewfinder. Nearly all camcorders today offer a swing-out LCD panel that keeps the user from
having to squint through a tiny eyepiece. By then, however, a new and improved video format was
also in development—the digital videocassette, or DVC (now simply DV or Mini DV). The DVC
consisted of a quarter-inch digital videotape housed in a cassette about the same size as the standard
digital audiotape (or DAT). This new format was created by a group of manufacturers in a cooperative effort called the HD Digital VCR Conference.
Panasonic, and later Sony, debuted the first Mini DV camcorders in September 1995, followed
by Sharp and JVC two months later. The advantages of the new format were immediate. The cameras
themselves were much smaller than either 8 mm or VHS-C. The resolution was also twice that of VHS, and the digital format meant less generation loss when editing and making copies. And because of the IEEE-1394 interface on all DV camcorders, footage could be downloaded and edited
on a PC.
The IEEE-1394 interface on the Mini DV camcorders then gave rise to personal computer-based
home video editing, which has become as sophisticated as the editing suites found in many TV
studios. It also enabled struggling filmmakers to use video to create their imaginative works with
only an inexpensive camcorder. The best example of this camcorder-made movie trend is the wildly
successful The Blair Witch Project in 1999.
With a DV camera, a traditional video capture card is not needed because the output from the
camera is already in a compressed digital format. The resulting DV files are still large, however. The
trend towards digital video includes digital video cameras now equipped with an IEEE 1394 interface as standard. Thus, all that is needed to transfer DV files directly from the camera to a PC-based
editing system is an IEEE 1394 interface card in the PC. There is increasing emphasis on handling
audio, video, and general data types. Hence, the PC industry has worked closely with consumer
giants such as Sony to incorporate IEEE 1394 into PC systems to bring the communication, control,
and interchange of digital, audio, and video data into the mainstream.
Camcorder Formats
One major distinction between different camcorder models is whether they are analog or digital, so
they have been divided into those two distinct categories.
Analog camcorders record video and audio signals as an analog track on videotape. This means that
every time a copy of a tape is made, it loses some image and audio quality. Analog formats lack a
number of the impressive features that are available in digital camcorders. The main differences between the available analog formats are the kind of videotape used and the resolution.
Some analog formats include:
Standard VHS – Standard VHS cameras use the same type of videotapes as a regular VCR.
One obvious advantage of this is that, after having recorded something, the tape can be
played on most VCRs. Because of their widespread use, VHS tapes are a lot less expensive
than the tapes used in other formats. Another advantage is that they give a longer recording
time than the tapes used in other formats. The chief disadvantage of standard VHS format is
that the size of the tapes necessitates a larger, more cumbersome camcorder design. Also,
they have a resolution of about 230 to 250 horizontal lines, which is the low end of what is
now available.
VHS-C – The VHS-C camcorders record on standard VHS tape that is housed in a more
compact cassette. And VHS-C cassettes can also be played in a standard VCR when housed
in an adapter device that runs the VHS-C tape through a full-size cassette. Basically, though,
VHS-C format offers the same compatibility as standard VHS format. The smaller tape size
allows for more compact designs, making VHS-C camcorders more portable, but the
reduced tape size also means VHS-C tapes have a shorter running time than standard VHS
tapes. In short play mode, the tapes can hold 30 to 45 minutes of video. They can hold
60 to 90 minutes of material if recorded in extended play mode, but this sacrifices image
and sound quality considerably.
Super VHS – Super VHS camcorders are about the same size as standard VHS cameras
because they use the same size tape cassettes. The only difference between the two formats
is that super VHS tapes record an image with 380 to 400 horizontal lines, a much higher
resolution image than standard VHS tape. You cannot play super VHS tapes on a standard
VCR, but, as with all formats, the camcorder itself is a VCR and can be connected directly
to the TV or the VCR in order to dub standard VHS copies.
Super VHS-C – Basically, super VHS-C is a more compact version of super VHS, using a
smaller cassette, with 30 to 90 minutes of recording time and 380-to-400-line resolution.
8 mm – These camcorders use small 8-millimeter tapes (about the size of an audiocassette).
The chief advantage of this format is that manufacturers can produce camcorders that are
more compact, sometimes small enough to fit in a coat pocket. The format offers about the
same resolution as standard VHS, with slightly better sound quality. Like standard VHS
tapes, 8-mm tapes hold about two hours of footage, but they are more expensive. To watch
8-mm tapes on the television, the camcorder needs to be connected to the VCR.
Hi-8 – Hi-8 camcorders are very similar to 8-mm camcorders, but they have a much higher
resolution (about 400 lines)—nearly twice the resolution of VHS or Video8, although the
color quality is not necessarily any better. This format can also record very good sound
quality. Hi-8 tapes are more expensive than ordinary 8-mm tapes.
Digital camcorders differ from analog camcorders in a few very important ways. They record
information digitally as bytes, meaning that the image can be reproduced without losing any image
or audio quality. Digital video can also be downloaded to a computer, where it can be edited or
posted on the Web. Another distinction is that digital video has a much better resolution than analog
video—typically 500 lines.
The main consumer digital formats in widespread use include:
Digital Video (DV) – DV camcorders record on compact mini-DV cassettes, which are
fairly expensive and only hold 60 to 90 minutes of footage. The video has an impressive 500
lines of resolution, however, and can be easily transferred to a personal computer. Digital
video camcorders can be extremely lightweight and compact—many are about the size of a
paperback novel. Another interesting feature is the ability to capture still pictures, just as a
digital camera does.
Digital-8 – Digital-8 camcorders (produced exclusively by Sony) are very similar to regular
DV camcorders, but they use standard Hi-8 (8 mm) tapes which are less expensive. These
tapes hold up to 60 minutes of footage which can be copied without any loss in quality. Just
as with DV camcorders, Digital-8 camcorders can be connected to a computer to download
movies for editing or Internet use. Digital-8 cameras are generally a bit larger than DV
camcorders—about the size of standard 8-mm models. Digital 8-mm technology is the
digital extension of the Hi8/8 mm camcorder format which was invented by Sony and is
now available in several models. Digital 8 offers up to 500 lines of resolution. This is
superior to both Hi8 and 8-mm which have 400 and 300 lines, respectively. Digital-8 also
offers playback and recording for both Hi8 and 8-mm cassettes, and provides CD-like sound
and PC connectivity. Newly designed record/playback heads in Digital-8 camcorders can
detect and play analog 8-mm or Hi8 recordings.
Mini DV – Mini DV images are DVD-quality, and the sound quality is also superior to other
formats. Captured footage can be viewed through a TV (using the camcorder’s standard AV
outputs), a PC (using the camcorder’s IEEE 1394 digital output), or by capturing digital still
frames, then printing them out on a color printer. Also, video clips can be edited and
enhanced on a PC.
Inside the Digital Camcorder
To look inside the digital camcorder, let us explore traditional analog camcorders. A basic analog
camcorder has two main components—a video camera and a VCR. The camera component’s function
is to receive visual information and interpret it as an electronic video signal. The VCR component is
exactly like the VCR connected to your television. It receives an electronic video signal and records it
on videotape as magnetic patterns. A third component, the viewfinder, receives the video image as well
so you can see the image being recorded. Viewfinders are actually tiny black-and-white or color
televisions, but many modern camcorders also have larger full-color LCD screens that can be extended
for viewing. There are many formats for analog camcorders, and many extra features, but this is the
basic design of most of them. The main variable is the kind of storage tape they use.
Chapter 11
Digital camcorders have all the elements of analog camcorders, but have an added component
that takes the analog information the camera initially gathers and translates it to bytes of data.
Instead of storing the video signal as a continuous track of magnetic patterns, the digital camcorder
records both picture and sound as ones and zeros. Digital camcorders are popular largely because
ones and zeros can be copied very easily without losing any of the recorded information. Analog
information, on the other hand, “fades” with each copy—the copying process does not reproduce the
original signal exactly. Video information in digital form can also be loaded onto computers where
you can edit, copy, e-mail, and manipulate it.
Camcorder Components
The image sensor technology behind both camcorders and digital still cameras is the CCD. But since
camcorders produce moving images, their CCDs have some additional pieces that are not in digital
camera CCDs. To create a video signal, a camcorder CCD must take many pictures every second,
which the camera then combines to give the impression of movement.
Like a film camera, a camcorder “sees” the world through lenses. In a film camera, the lenses
serve to focus the light from a scene onto film treated with chemicals that have a controlled reaction
to light. In this way, camera film records the scene in front of it. It picks up greater amounts of light
from brighter parts of the scene and lower amounts of light from darker parts of the scene. The lens
in a camcorder also serves to focus light, but instead of focusing it onto film, it shines the light onto
a small semiconductor image sensor. This sensor, a CCD, measures light with a half-inch panel of
300,000-to-500,000 tiny light-sensitive diodes called photosites.
Each photosite measures the amount of light (photons) that hits a particular point and translates
this information into electrons (electrical charges). A brighter image is represented by a higher
electrical charge, and a darker image is represented by a lower electrical charge. Just as an artist
sketches a scene by contrasting dark areas with light areas, a CCD creates a video picture by recording light intensity. During playback, this information directs the intensity of a television's electron beam as it passes over the screen. Of course, measuring light intensity only gives us a black-and-white image. To create a color image, a camcorder has to detect not only the total light levels, but
also the levels of each color of light. Since the full spectrum of colors can be produced by combining
red, green and blue, a camcorder actually only needs to measure the levels of these three colors to
reproduce a full-color picture.
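The reverse mapping, collapsing the three color measurements back into the single brightness value a black-and-white signal carries, is a simple weighted sum. A minimal sketch in Python; the weights are the standard Rec. 601 luma coefficients from analog television practice, not anything specific to a particular camcorder:

```python
def luminance(red, green, blue):
    """Collapse an RGB triple (0-255 per channel) into one brightness
    value using the Rec. 601 luma weights. The eye is most sensitive
    to green light, so green contributes the most."""
    return 0.299 * red + 0.587 * green + 0.114 * blue

# Pure white collapses to full brightness (within float rounding):
assert abs(luminance(255, 255, 255) - 255) < 1e-9
```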
To determine color in some high-end camcorders, a beam splitter separates a signal into three
different versions of the same image. One shows the level of red light, one shows the level of green
light, and one shows the level of blue light. Each of these images is captured by its own chip. The
chips operate as described above, but each measures the intensity of only one color of light. The
camera then overlays these three images, and the intensities of the three primary colors blend to produce a full-color image. A camcorder that uses this method is often referred to as a three-chip camcorder.
This simple method of using three chips produces a rich, high-resolution picture. But CCDs are
expensive and eat lots of power, so using three of them adds considerably to the manufacturing costs
of a camcorder. Most camcorders get by with only one CCD by fitting permanent color filters to
individual photosites. A certain percentage of photosites measure levels of red light, another percentage measures green light, and the rest measure blue light. The color designations are spread out in a
grid (see the Bayer filter below) so that the video camera computer can get a sense of the color levels
in all parts of the screen. This method requires the computer to interpolate the true color of light
arriving at each photosite by analyzing the information received by the other nearby photosites.
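The neighbor-based interpolation described above can be sketched as follows. This is an illustrative nearest-neighbor average, not the proprietary demosaicing any particular camera uses; edge photosites and the red/blue channels are omitted for brevity:

```python
def interpolate_green(raw, x, y):
    """Estimate the green level at a red or blue photosite by averaging
    its four green neighbors (up, down, left, right). `raw` is a 2-D
    list of single-channel photosite readings in a Bayer layout."""
    neighbors = [raw[y - 1][x], raw[y + 1][x], raw[y][x - 1], raw[y][x + 1]]
    return sum(neighbors) / len(neighbors)

# A red photosite at (1, 1) whose four green neighbors read 100 to 130:
raw = [
    [0, 100, 0],
    [110, 50, 130],
    [0, 120, 0],
]
assert interpolate_green(raw, 1, 1) == 115.0
```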
A television “paints” images in horizontal lines across a screen, starting at the top and working
down. However, TVs actually paint only every other line in one pass (this is called a field), and then
paint the alternate lines in the next pass. To create a video signal, a camcorder captures a frame of
video from the CCD and records it as the two fields. The CCD actually has another sensor layer
behind the image sensor. For every field of video, the CCD transfers all the photosite charges to this
second layer which then transmits the electric charges at each photosite, one-by-one. In an analog
camcorder, this signal goes to the VCR which records the electric charges (along with color information) as a magnetic pattern on videotape. While the second layer is transmitting the video signal, the
first layer has refreshed itself and is capturing another image.
A digital camcorder works in basically the same way, except that at this last stage an analog-to-digital converter samples the analog signal and turns the information into bytes of data (ones and
zeros). The camcorder records these bytes on a storage medium which could be, among other things,
a tape, a hard disk, or a DVD. Most of the digital camcorders on the market today actually use tapes
(because they are less expensive), so they have a VCR component much like an analog camcorder’s
VCR. However, instead of recording analog magnetic patterns, the tape head records binary code.
Interlaced digital camcorders record each frame as two fields, just as analog camcorders do. Progressive digital camcorders record video as an entire still frame which they then break up into two fields
when the video is output as an analog signal.
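The frame-to-field split described above can be sketched as follows. This is an illustrative model; which field is transmitted first varies by video standard:

```python
def frame_to_fields(frame):
    """Split a progressive frame (a list of scan lines) into the two
    interlaced fields a camcorder records: the odd-numbered lines in
    one pass, the even-numbered lines in the other."""
    odd_field = frame[0::2]   # lines 1, 3, 5, ...
    even_field = frame[1::2]  # lines 2, 4, 6, ...
    return odd_field, even_field

frame = ["line1", "line2", "line3", "line4"]
top, bottom = frame_to_fields(frame)
assert top == ["line1", "line3"]
assert bottom == ["line2", "line4"]
```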
The Lens
The first step in recording a video image is to focus light onto the CCD by using a lens. To get a
camera to record a clear picture of an object in front of it, the focus of the lens needs to be adjusted
so it aims the light beams coming from that object precisely onto the CCD. Just like film cameras,
camcorders let the lens be moved in and out to focus light. Of course, most people need to move
around with their camcorders, shooting many different things at different distances, and constantly
refocusing is extremely difficult.
A constantly changing focus distance is the reason that all camcorders come with an autofocus
device. This device normally uses an infrared beam that bounces off objects in the center of the
frame and comes back to a sensor on the camcorder. To find the distance to the object, the processor
calculates how long it took the beam to bounce and return, multiplies this time by the speed of light,
and divides the product by two (because it traveled the distance twice—to and from the object). The
camcorder has a small motor that moves the lens, focusing it on objects at this distance. This works
pretty well most of the time, but sometimes it needs to be overridden. You may want to focus on
something in the side of the frame, for example, but the autofocus will pick up what’s right in front
of the camcorder.
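The distance calculation described above works out as follows. The nanosecond-scale round-trip time in the example is illustrative; real autofocus implementations vary between manufacturers:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def distance_to_subject(round_trip_seconds):
    """Distance implied by the infrared pulse's round-trip time: the
    beam covers the distance twice (out and back), so multiply the
    time by the speed of light and divide the product by two."""
    return round_trip_seconds * SPEED_OF_LIGHT_M_PER_S / 2

# A round trip of about 20 nanoseconds puts the subject roughly 3 m away:
assert abs(distance_to_subject(20e-9) - 3.0) < 0.01
```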
Camcorders are also equipped with a zoom lens. In any sort of camera, you can magnify a scene
by increasing the focal length of the lens (the distance between the lens and the film or CCD). An
optical zoom lens is a single lens unit that lets you change this focal length, so you can move from
one magnification to a closer magnification. A zoom range tells you the maximum and minimum
Chapter 11
magnification. To make the zoom function easier to use, most camcorders have an attached motor
that adjusts the zoom lens in response to a simple toggle control on the grip. One advantage of this is
that you can operate the zoom easily, without using your free hand. The other advantage is that the
motor adjusts the lens at a steady speed, making zooms more fluid. The disadvantage of using the
grip control is that the motor drains battery power.
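The zoom rating follows directly from the focal-length range just described: it is the ratio of the longest to the shortest focal length. A sketch, using a hypothetical 4 mm to 40 mm lens as the example:

```python
def zoom_ratio(min_focal_mm, max_focal_mm):
    """Optical zoom rating: the ratio of the longest focal length
    (maximum magnification) to the shortest (widest view)."""
    return max_focal_mm / min_focal_mm

# A hypothetical lens running from 4 mm to 40 mm is marketed as a "10x" zoom:
assert zoom_ratio(4, 40) == 10.0
```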
Some camcorders also have something called a digital zoom. This does not involve the camera’s
lenses at all. It simply zooms in on part of the total picture captured by the CCD, magnifying the
pixels. Digital zooms stabilize magnified pictures a little better than optical zooms, but you sacrifice
resolution quality because you end up using only a portion of the available photosites on the CCD.
The loss of resolution makes the image fuzzy.
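Why digital zoom costs resolution can be seen in a small sketch: only the cropped central photosites contribute, and each one is replicated to fill the frame. Nearest-neighbor replication is used here for illustration; real camcorders may interpolate more smoothly, but the information loss is the same:

```python
def digital_zoom(image, factor):
    """Crop the central 1/factor of the image and enlarge it back to
    the original size by pixel replication. Only the cropped photosites
    contribute, which is why resolution drops."""
    h, w = len(image), len(image[0])
    ch, cw = h // factor, w // factor
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = [row[left:left + cw] for row in image[top:top + ch]]
    zoomed = []
    for row in crop:
        # Replicate each cropped pixel factor times horizontally...
        wide = [p for p in row for _ in range(factor)]
        # ...and each widened row factor times vertically.
        zoomed.extend([wide] * factor)
    return zoomed

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
# The center 2x2 block (6, 7, 10, 11) fills the whole 4x4 output:
assert digital_zoom(image, 2) == [[6, 6, 7, 7],
                                  [6, 6, 7, 7],
                                  [10, 10, 11, 11],
                                  [10, 10, 11, 11]]
```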
The camcorder can also adjust automatically for different levels of light. It is very obvious to the
CCD when an image is over- or under-exposed because there won’t be much variation in the electrical charges collected on each photosite. The camcorder monitors the photosite charges and adjusts
the camera’s iris to let more or less light through the lenses. The camcorder computer always works
to maintain a good contrast between dark and light, so that images do not appear too dark or too
washed out, respectively.
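The exposure feedback loop just described, watching the photosite charges and opening or closing the iris, can be sketched as follows. The thresholds and the three-way decision are illustrative assumptions, not taken from any real camcorder firmware:

```python
def iris_adjustment(photosite_charges, low=0.2, high=0.8):
    """Decide how to move the iris from normalized photosite charges
    (0.0 = dark, 1.0 = saturated). Thresholds are illustrative."""
    average = sum(photosite_charges) / len(photosite_charges)
    if average < low:
        return "open"    # under-exposed: admit more light
    if average > high:
        return "close"   # over-exposed: admit less light
    return "hold"        # contrast looks balanced

assert iris_adjustment([0.05, 0.10, 0.08]) == "open"
assert iris_adjustment([0.90, 0.95, 0.85]) == "close"
assert iris_adjustment([0.40, 0.50, 0.60]) == "hold"
```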
Standard Camcorder Features
The basic camcorder components include:
Eyepiece – This provides a very small black-and-white screen, viewed through a portal,
where the user can see exactly what is being taped.
Color viewfinder – The only difference between this and the black-and-white eyepiece is the color, which provides a more accurate depiction of what you are recording.
LCD screen – With this feature, the user can watch the color LCD screen instead of
viewing through the eyepiece (some models provide both). The screen can usually swivel in
order to adjust your viewing angle. But LCDs use more power than a simple eyepiece, so they can decrease your battery life.
Simple record button – When you hold the camcorder by the grip, your thumb will rest on
the record button. All you have to do to switch the record mode on and off is press the button. This acts as a sort of pause, so that you continue recording at the exact spot on the tape where you last stopped.
Zoom function – This lets the user magnify images that are farther away. Zooms are
operated with a simple toggle control on the camcorder grip. Optical zoom changes the size
of the object in view without moving the camcorder. Optical zoom is usually limited to about 24x; the larger the optical zoom number, the more flexibility you have when shooting. A high number is especially important if the camcorder's lens is not detachable and some control is needed over the shot.
Digital zoom uses the camcorder’s processor to expand an image beyond optical zoom
capabilities (up to 300x). But using this feature can degrade image quality, and this feature
should be turned off for clear images (the higher the optical zoom number, the clearer the
digital zoom image).
Auto focus – The camcorder senses where objects are in front of it and adjusts the focus
VCR controls – These controls let the camcorder operate as a standard VCR.
Battery and AC adapter – Nickel-cadmium batteries are normally standard, and allow for one to two hours of recording time before they need to be recharged. Smaller or higher-end
cameras may use proprietary batteries specific to the manufacturer, and this can make extra
or replacement batteries expensive. Camcorders come with a rechargeable battery and a
power cord that attaches to a standard 115V outlet.
Audio dub – On most camcorders, the user can record new audio over video that has
already been recorded.
Fade-in and fade-out – This function often works by simply underexposing or overexposing the image to the point that the entire screen is black or white.
Clock – With the correct time and date programmed, the camcorder can display it on your
recorded video.
Headphone jack – This lets the user monitor sound quality as the footage is being shot or
reviewed, using the onboard VCR.
Microphone – Most camcorders come with a built-in microphone to record sound, which is
ideal for personal use. For professional sound quality, look for a camera that accepts an external microphone.
Light – Most camcorders do quite well in low-light conditions without help from additional
lights. Some cameras come with a built-in light source, and many can be programmed to
turn on automatically when conditions require additional light. Some cameras allow the user
to connect an external light source to the unit for greater control over lighting conditions.
Image stabilization – This feature electronically stabilizes the image being filmed, but can
decrease its clarity. Digital stabilization performs the same function by using digital
technology, and can also correct for tilting and panning movement—again, this feature can
decrease image quality. Optical stabilization uses a series of lenses to decrease the effects of
camera movement and vibration on the image being filmed—very important with handheld
cameras and when filming a moving object.
Exposure modes – Most cameras will set the proper exposure mode for the conditions in
which you are filming. Some cameras allow the user to adjust the exposure for certain
conditions, such as low light, backlight, or motion.
Camera control – Some camcorders automatically adjust everything—for amateur
moviemakers, these settings are more than satisfactory. Professional users, or those who
want more control, look for features such as manual exposure control, manual focus, manual
zoom, manual white balance, and so on.
Still image capability – Digital camcorders let the user pick still images out of the video.
Camcorders with a built-in or removable memory device allow still pictures to be taken as
with a digital still camera.
Detachable lens adapter – Some camcorders have a detachable lens adapter. For example,
a wide-angle lens attachment is a common accessory.
Low-light responsiveness – Camcorders come with specifications regarding the minimum
recommended level of light during recording.
Progressive scan – Progressive scan is only available in digital formats, and records an
image with a single scan pass instead of as odd and even fields. This technology increases
image quality, and is especially important in using a camcorder to take still pictures.
Analog video input – A digital camcorder with an analog input allows the user to convert
existing VHS format tapes to digital format for editing or viewing purposes.
16:9 recording mode – This is the “wide-screen” recording mode.
Audio recording formats – Most digital camcorders can support both 32 kHz, 12 bit and 48
kHz, 16 bit audio formats. The 48 kHz, 16 bit audio is better than CD quality.
IEEE 1394 (FireWire, iLink) compatibility – Most of the newer digital camcorders come
with IEEE 1394 compatibility which allows extremely fast downloading to a computer.
Playback features – Almost all camcorders come with VCR-type features like rewind, play,
and pause.
Special effects – Some higher-end camcorders come with special effects features, such as
fade-in/fade-out, and special recording modes like sepia or negative.
Motion/audio sensing – Some camcorders have special sensors that turn the camera on in
the presence of movement or sound. This is useful for security purposes.
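The audio data rates quoted in the feature list above (32 kHz/12-bit and 48 kHz/16-bit) work out as follows. The stereo channel count is an assumption for the example; check a specific camera's manual for its actual channel configuration:

```python
def pcm_bytes_per_second(sample_rate_hz, bits_per_sample, channels=2):
    """Raw (uncompressed) PCM data rate for a given audio mode:
    samples per second x bits per sample x channels, divided by 8."""
    return sample_rate_hz * bits_per_sample * channels // 8

# 48 kHz / 16-bit stereo, the mode the text calls better than CD quality:
assert pcm_bytes_per_second(48_000, 16) == 192_000
# 32 kHz / 12-bit stereo, the lighter-weight mode:
assert pcm_bytes_per_second(32_000, 12) == 96_000
```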
Both camcorders and still cameras moved toward digital technology at the same time. The concept of
digital photography grew out of television and space technologies. In the 1960s NASA developed
digital imaging as a way of getting clearer pictures from space in preparation for the moon landing.
During the 1970s digital imaging was further developed for use with spy satellites. Digital imaging
finally reached the consumer in 1995 when Kodak and Apple both unveiled the first consumer digital
cameras. It took two years for the first million-pixel model to reach stores. Each succeeding year has
seen an additional million-pixel increase in resolution.
It is easy to understand the booming business that digital camera manufacturers are experiencing
these days. The host of easy-to-use personal and business publishing applications, the dramatic
expansion of the Web and its insatiable appetite for visual subject matter, and the proliferation of
inexpensive printers capable of photo-realistic output make a digital camera an enticing add-on.
Those factors, combined with improving image quality and falling prices, put the digital camera on
the cusp of becoming a standard peripheral for a home or business PC. Digital cameras are the 21st
century’s solution to the old buy/process film and wait routine. Budget permitting, one can buy a
digital camera that can produce photos equal to the quality of 35 mm. For a frequent vacationer and
traveler, carrying camera bags jammed full of film, lenses, lens covers, extra batteries, and fully-equipped 35-mm cameras has been a constant source of inconvenience. A digital camera is easy to
photographs can also be retained on a PC hard drive, while others can be deleted. The camera’s
memory can simply be erased and reused. Hence, digital cameras provide conveniences in the
development, storage, and distribution of photographs.
The formula for success in the digital camera market is multifaceted. It is important to produce a
picture with high-quality resolution and to view it as soon as the photo is snapped. Low-cost imaging
sensors and inexpensive, removable picture storage (flash memory cards) will help the market to
grow. In addition, camera, PC, and IC vendors are working together to come up with different
combinations that lower prices and make their systems attractive in terms of providing seamless
functionality with other equipment.
A camcorder is really two separate devices in one box—a video camera and a video tape
recorder. Creating videos of precious moments is made even easier with the use of digital
camcorders. Unlike older analog camcorders, digital camcorders have the ability to store videos in various digital formats. One such format is MPEG, which enables instantaneous file exchange and dissemination. This also means that the information content is less susceptible to degradation over time.
Additionally, the Internet provides an easy, fast, no-cost medium over which digital videos can be
shared with friends, families, and associates.
The image quality available on camcorder playback depends on both the camera and the
recording device. The weak link for analog formats is the tape recorder, but the camera section (CCD
chips, lens, and video processing circuits) is usually the limiting factor for D8 and MiniDV
camcorders where the digital recording is very good. Most MiniDV cameras give a picture quality
that is better than most Hi8 cameras, but this may not always be the case. In particular, the cheapest
MiniDV cameras are in some ways inferior to good Hi8 cameras. Image quality tends to closely
follow the price of the camera. There is also a difference between single-chip and 3-chip cameras.
The less expensive MiniDV cameras use a single CCD chip to capture the image through the lens
which yields less attractive color images than a 3-chip CCD camera. One advantage of digital video
is that the signal can be copied without loss of quality, as inevitably happens with analog formats.
Web Terminals and Web Pads
Web terminals (also known as Web pads) are stand-alone devices used primarily for Web browsing
and e-mail. These appliances are typically based on embedded operating systems, are packaged in an
all-in-one form factor that includes a monitor, and are tethered to a Web connection. Other names for
these devices include Web appliance, countertop appliance, Net appliance, or desktop appliance.
They are marketed as an instant-on, easy-to-use device for browsing the World Wide Web (WWW)
or Web-like services such as e-mail. They do not typically allow a user to install off-the-shelf
applications that run outside a browser.
The all-in-one form factor can include a built-in monitor or can connect to a dedicated external
monitor. It cannot, however, use a TV set for a monitor. While telephony features can be included,
devices with traditional telephone-like form factor and a monitor are called screen phones, and will
be discussed in further detail in Chapter 14.
Market accelerators include:
Web and e-mail access
Quick access and no long boot-up like PCs
Low cost and refined utility
Varied deployment configurations
Vertical markets such as hotels and schools
Major market inhibitors include:
Questions about support from service providers
Lower PC prices and new subsidized distribution models
High price of built-in displays
Additional market inhibitors are poor promotion, lack of consumer awareness, and consumer
confusion. These have come about because:
The message is not effective – Consumers do not comprehend what a Web terminal can
truly offer. They understand it is a stripped-down version of the PC simple enough for a “non-techie,” but do not understand exactly what it can and cannot do.
Changing naming conventions – They have been called Web terminals, Internet terminals, and Web appliances, and none of these names accurately conveys what the device is.
Constant comparison with the PC – The persistent notion that this stripped-down PC will one day replace all PCs.
Using low cost as a draw – Reducing costs through elimination of components has led to
elimination of key functions of the product. For example, not having a hard disk prevents
storage of MP3 music files.
Over-expectations have backfired – The press has now taken the approach that the Web
terminal will never succeed. This is because of the tremendous amount of press that these
products originally received and their failure on the first try. Market analysts predicted that
Web terminals would have shipment volumes similar to the PC, but this has not materialized.
The primary market for Web terminals is the consumer, but they are sometimes used in vertical
applications such as hotels and schools. Plenty of doubt surrounds the Web terminal market as
companies drop out and reported volumes remain low. First generation Web terminal products meet
the expectations of instant-on, easy-to-use appliances, but the designs need further improvement.
Consumers are used to having functionality such as full support of Shockwave, Flash, RealAudio/RealVideo, Windows Media, and other audio/video applications. They have been disappointed when they try to link to a Flash-based greeting card or media stream. Many Web terminals use embedded operating
systems, and many of the third party applications cannot be ported to Web terminals. Web terminals
cannot, therefore, provide a true Web experience for consumers. Most Web terminals only support up
to 56 Kb/s data rates and are not enabled for broadband access and home networking. The Web
terminal also cannot exchange downloaded MP3 files with a portable MP3 player.
The primary focus of the Web terminal is to provide easy Web access for the consumer. But half the households in the U.S. have at least one PC, and accessing the Web from an existing PC is easier than paying over $1,000 for a Web terminal. The selling points for these devices are instant-on, and
quick and convenient access to the Web. However, they are not suitable for prolonged “surfing”
sessions. The Web terminal’s entire value proposition falls through because of its technical shortcomings and the fact that it is overpriced for the mainstream market. However, recent
Web terminal products have added applications such as calendars and scheduling tools that can be
synchronized with PDAs. But other highly desirable applications, such as Sony’s AirBoard (a
wireless video screen for viewing DVD movies), need to be incorporated in order for these products
to succeed. Web terminals could conceptually serve as PC replacements and PC adjuncts, although
they appear functionally as stripped-down PCs. They are therefore targeted at non-PC users who have
little technical sophistication and require only limited Web usage.
Distribution has been a problem because Web terminals are still emerging and are unproven.
Retailers have been hesitant to put Web terminals on shelves with competing devices (such as set-top
boxes and gaming consoles) that have better business models and higher return on investment.
Telephone companies and cable companies have also been hesitant to promote these devices because of the requirement to carry inventory (which they prefer to avoid). Web terminals must be seen as a
well-executed product and be sold through both direct sales and retail channels in order to have
maximum success in the market place.
Key variables that will affect the Web terminal market include:
PCs with near instant-on Web access. The latest generation of Windows operating systems has shorter boot-up times. The PC will become as easy to access as the Web terminal if boot times continue to shorten.
Broadband access to residences is becoming cheaper and more necessary as Internet traffic
expands to include audio, video and images.
Chapter 12
Home networking and residential gateways will allow Internet access to multiple consumer
devices in the home. The PC adjunct consumer devices will also require Internet access and
connection to the other appliances.
Web terminal shipments worldwide are predicted to exceed 2.7 million units and $540 million in
value by 2004.
Web terminals generally use passive matrix color LCD screens around the 10-inch size. Display
variations include LCD panels in sizes of 8-to-12 inches, touch-screen capabilities, and the much
cheaper (and bulkier) CRT monitors. Some Web terminals do not offer screens and allow the user to
choose a PC display. Cost of the LCD screens is the major contributor to the high bill of materials
when manufacturing Web terminals.
Web Pads/Tablets
The Web pad is a wireless, portable, low-cost, easy-to-use, tablet-shaped, consumer-targeted digital
device that usually has a touch-screen display/control. It has a browser-based interface to simplify
and enhance the Internet experience. Web tablets are similar to Web terminals, but with the key
difference that Web tablets are portable. While that makes the Web tablet very appealing, it does add
to the overall cost.
The two separate components of a Web tablet/pad are:
1. A portable LCD built into a tablet shaped device, and
2. A base station with an analog or broadband hard-wired connection to the Internet. The base
station also sends and receives wireless (RF) transmissions to and from the pad/tablet.
Transmissions between the Web tablet and the base station use RF or wireless home networking
technologies such as HomeRF, Bluetooth, or IEEE 802.11b. These technologies are not based on
cellular or mobile telephony. This limits the wireless range of the Web tablet to approximately 150
feet. Multiple tablets can be used with the same base station, and sometimes the residential gateway
can be used in place of a base station. The base station connects to the network through DSL or a
cable modem. The tablet’s portability brings more value than a Web terminal and makes an attractive
product. But it is less usable when configured with a soft keyboard that consumes half of the Web
surfing screen. Handwriting recognition is also being introduced in some products, but is still
emerging as a technology. Long battery life allows consumers to use the Web pad for many hours
without recharging.
Primary applications for Web tablets include:
Internet and Web browsing and surfing
E-Book reading
Combining Internet access with home environment and security controls.
Some Web tablets also have hard disk drives, and can be used in more sophisticated vertical
applications that have intensive computing requirements such as:
Inventory control
Order entry systems—merchandising
Healthcare and hospitals
Field and sales force automation
Government agencies.
Web pads have succeeded in the less price-sensitive vertical markets. Tablet PC vendors are trying to penetrate the same vertical markets as Web tablets due to large volumes and less price sensitivity.
Market Data and Dynamics
Market researcher IDC predicts that the worldwide market for Web pads is expected to exceed 1.5
million units with a value of $1.35 billion by 2005. Allied Business Intelligence (ABI) predicts that
more than 23 million Web pad units will be sold annually by 2006.
Some of the issues preventing Web tablets from becoming a mass-market product are:
High bill of materials (BOM) – The high cost of wireless home networking components,
battery integration, and the LCD display causes significant consumer price issues. A $500
BOM results in discouraging end user pricing of over $800.
Limited value proposition – The Web tablet’s unique value proposition is its wireless mobile connection to the Web. This is not enough to drive mass-market success of the product.
PC competition – The broad value proposition of desktop and notebook PCs overshadows the single, unique value proposition of the Web tablet.
Lack of vendor support – Top consumer electronics and PC manufacturers have not
embraced this technology.
Tablet PCs – These are fully functional PCs that have a touch-screen LCD display.
Other competition – PDAs and PC companions offer some of the functionality offered by
Web tablets and (with wireless connectivity) can be taken along on the road.
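The bill-of-materials arithmetic in the first item above can be sketched with an illustrative channel multiplier. The 1.7 factor is an assumption for demonstration, not an industry figure:

```python
def street_price(bom_cost, channel_multiplier=1.7):
    """Rough end-user price implied by a bill of materials once
    distribution and margin are layered on (multiplier is illustrative)."""
    return bom_cost * channel_multiplier

# A $500 BOM lands above the $800 price point the text calls discouraging:
assert street_price(500) > 800
```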
Failure of Web Pads
This market has seen the failure of several high profile products such as 3Com’s Audrey and
Netpliance. The Web pad provides several nice features such as one-touch information access (to
weather, e-mail, news, and so forth) and the ability to synchronize with PDA products. It is impossible, however, to make reasonable profits by selling a $500 device if one cannot also charge
recurring monthly fees for its use.
Some of the reasons Web pad companies have left the market include:
Crowded market – Multiple vendors announced products that had similar value propositions—primarily access to the Web. However, reduced PC prices and an excessive number
of companies introducing Web pads caused Web pad price reductions that eliminated any
profit margin.
Economy – With the economy faltering after the Internet boom, venture capitalists and the
public market could not sustain business models that showed no profits for years to come.
Failure of strategic partnerships.
Chapter 12
Web pad derivatives are finding applications in refrigerators, home automation, and TV remotes. One
such hybrid is the Nokia MediaScreen, which is a tablet prototype that combines digital TV and the
Internet. It uses mobile phone technology with a 12-inch TFT screen, and its main application is in
cars and trains (not residences). Web tablets are ideal devices for providing e-book services—
portable, dedicated reading devices with flat screens and Internet access (hence requiring
connectivity components).
Integrating Web pad devices (small portable tablets) into traditional home appliances such as
refrigerators and kitchen stoves has been a common application discussion. The Web pad device
would attach to, or be integrated with, the refrigerator door. Several companies are entering this
market due to the high unit volumes of white goods sold globally. They hope to make money through
the sale of services and white goods content provisioning. Appliance manufacturers such as General
Electric, Sunbeam, and Whirlpool are also interested in this market. They believe Web pad integration could be used to reduce the number of visits by technicians to service appliances. For example, a
malfunctioning refrigerator could provide detailed failure information via an Internet connection. A
service technician could then schedule a single visit (or no visit), already knowing the challenge that
lay ahead. Other companies such as Cisco Systems and Sun Microsystems are also joining this
market to provide equipment and services for connectivity.
A future capability of the “fridge pad” could be the ability to monitor expiration bar codes of
perishable and regularly consumed goods. These could then be automatically re-ordered when
required through a favorite online grocery store. Such conveniences make the Web pad seem like an
ideal consumer device with a strong future. One can also download recipes to a screen with Web
access in the kitchen. Recipes could then be e-mailed to friends or saved on the base station hard
disk drive.
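The re-ordering scenario above can be sketched in a few lines. The inventory data, date threshold, and function names below are hypothetical illustrations, not an actual appliance API.

```python
from datetime import date, timedelta

# Hypothetical sketch of the "fridge pad" idea described above: scan stored
# expiration dates and flag regularly consumed items for re-ordering through
# an online grocer. All item data and thresholds are illustrative assumptions.

def items_to_reorder(inventory, today, lead_days=2):
    """Return names of items whose expiration falls within lead_days of today."""
    reorder = []
    for name, expires in inventory.items():
        if expires - today <= timedelta(days=lead_days):
            reorder.append(name)
    return sorted(reorder)

inventory = {
    "milk":   date(2004, 3, 10),
    "eggs":   date(2004, 3, 25),
    "butter": date(2004, 3, 9),
}
print(items_to_reorder(inventory, today=date(2004, 3, 8)))  # milk and butter expire soon
```

The list returned here is what the fridge pad would hand to the online grocery store for automatic re-ordering.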
Companies such as Honeywell and Qubit are also introducing products based on the Web tablet
for home automation. Implementations include tethered, untethered, and wall integrated solutions
that usually have a LCD touch screen. These devices will act as a Web browser and e-mail client, in
addition to providing home control functions.
Another interesting and promising effort is the super remote control. This application integrates
iTV (interactive TV) platforms into Web pads. A constraint of iTV, however, is the physical limitations of the TV set. Since the iTV normally displays the menu of the set-top box, manufacturers are
looking to coordinate the set-top box operation with the Web tablet in order to display an interactive
interface on the tablet screen. This would allow the iTV screen to be unblocked and pristine. The
Web tablet can also download interactive content from the set-top box, and control other entertainment appliances.
Hence the integrated Web pad has other applications in addition to providing access to the Web.
An integrated Web pad could control several appliances around the house, control entertainment,
monitor and manage white goods, control home temperature and more.
Phased Success
Several companies have already been unsuccessful in a Web pad venture. But this market is still in its
infancy and is experiencing growing pains. There are three phases in the evolution of every product
(such as PCs, hand-held devices, and mobile phones) as that product is introduced into the consumer
market. The Web pad has experienced phase one, and will be pushing through phases two and three.
These phases are described as follows:
Phase one – Phase one saw the introduction of several Web pads and business models. The
market began to grow, and there was rampant experimentation. The Web pad market
segment attracted a lot of venture capital. There were targeted acquisitions and new partnerships created. Web pad products have gained some ground in the hands of the consumer, but
acceptance has been shaky. Several companies will cease to exist following their failure in
this market.
Phase two – Better-positioned companies remain intact into phase two and benefit from the
painful but valuable lessons learned during the shakedown in phase one. Those with staying
power take these lessons, refine their business plans, and foster formidable relationships. It
is much easier for a General Electric to introduce a Web pad-based refrigerator to try out the
market, for example, than for a start-up. Bigger companies can always hedge the losses from
one product with the profits made from another. Start-up companies may have only one
product, which is their only chance to survive. Drastic product improvements are sometimes
needed during phase two in order to meet the acceptable usability threshold for more
potential users. Market volume and awareness increase slowly but steadily in this phase. As
the Web pad market grows out of phase one and into phase two, more appealing product
designs, better applications, and sustainable business models will push the Web pad from
experimentation (gadget geeks and early adopters) toward the mainstream market.
Phase three – In phase three, the market finally matures and mass-market consumers begin
to accept the product and the business model. Newer and more elegant products, and better
and more innovative applications have more appeal to consumers. Increased volumes bring
the price down and awareness is spread more widely. Some vendors begin consolidating into
larger entities as the market matures further, possibly pushing the market into a final phase
of saturation.
Consumer products such as PCs, handheld devices, and mobile phones first appealed to, and
succeeded in, the large business community. They then made their way into the consumer’s hands.
The Web pad, in contrast, is directly targeted at the consumer and will not see volume gains through
the business environment. This approach may cause a delay in product market success. Vertical
products based on Web pads (such as health care devices) may also have a future, however.
The average selling price of the Web tablet is expected to fall, which may help its market
growth. This price reduction will be achieved through a drop in the cost of its LCD display and by
unbundling the tablet from the base station before marketing. The price of the tablet as a stand-alone
device (no base station) will appear more attractive to the consumer. Consumers will no longer need
to purchase the Web tablet base station for wireless connection as the residential gateway market
penetration increases. This market is already showing signs of saturation with a very high number of vendors.
The Tablet PC Threat
The recently announced tablet PC is a formidable threat to the Web tablet. The tablet PC is an 8-by-11-inch device with high-powered computing capability for the office and the home, and provides
much more functionality than Web pads. The tablet PC will also support all applications supported
by a standard PC. It can be used to create and edit converted versions of Word and Excel documents
and to view PowerPoint and PDF formatted files. You can browse the Web with Internet Explorer and
perform instant messaging. Multimedia applications include Microsoft Media Player for viewing
video clips, MP3 Player, and Picture Viewer. In fact, the tablet PC is a convenient alternative to a
full-size notebook PC. It is a portable tool for taking notes, editing documents, and managing PIM
data and it delivers more screen area than a PDA. The tablet PC is also a device for vertical-market
deployments, such as health care and insurance.
Some tablet PC products also have handwriting recognition applications. An alternative and
convenient on-screen input mode (virtual keyboard) is more user-friendly than virtual keyboards on
smaller PDAs. The tablet PC also includes both a Type II PC card slot and a Type II CompactFlash
card slot for expansion and wireless communications accessories. Its accessory line includes an external
keyboard and a sturdy docking station with USB upstream and synchronization ports, PS/2 keyboard
and mouse ports, and an RJ-45 Fast Ethernet port. This product is far different from Web tablets that
have embedded operating systems that are produced by third party developers.
However, the pricing and the value proposition of both products (Web tablet and tablet PC) will
determine the success of each. But Microsoft’s formidable presence will give tremendous clout and
brand recognition to the emerging tablet PC market.
The tablet PC comes with the latest generation Microsoft OS and runs nearly any Windows
application. It has a big, bright touch-sensitive screen, a processor, RAM, and an internal multi-gigabyte hard drive. Desktop and notebook PCs have reached the end of their evolutionary
advancement in the corporate market, but they are not going away—they will get faster, smaller, and
cheaper. A notebook PC will still be better for text-entry-focused applications than a Web pad.
Recent introductions of tablet PC variants from a number of manufacturers show significant
advantages of using tablet PCs in the business environment. Tablet PCs are available in two primary
designs: One features an attached keyboard and can be configured in the traditional laptop
“clamshell” mode, and the other uses a variety of detachable keyboard designs in a so-called “slate”
form factor. All tablet PCs are designed to be a user’s primary business PC, recognizing input from a
keyboard, mouse or pen. Tablet PCs are powered by chips optimized for low-power consumption and
longer life from Intel Corp., Transmeta Corp. and Via Technologies Inc. Tablet PCs are available at a
number of retailers throughout the United States.
The tablet PC operating system enables Windows-based applications to take advantage of
various input modes, including keyboard, mouse, pen and voice. With software developed and
optimized by Microsoft for the new platform, the tablet PC can function as a sheet of paper. Handwriting is captured as rich digital ink for immediate or later manipulation, including reformatting and
editing. The link between the pen input process and a wide range of Windows-based applications will
give users new ways in which to collaborate, communicate, and bring their PCs to bear on new tasks.
Its high-resolution display makes the tablet PC ideal for immersive reading and rich multimedia applications.
The tablet PC’s full Windows XP capability enables it to be a primary computer. Utilizing a
high-performance x86-compatible chip architecture, the tablet PC takes advantage of key technology
improvements in high-resolution low-power LCDs, efficient batteries, wireless connectivity, and data
storage to deliver a rich set of functions, with the added dimension of pen-based input. A number of
third-party software vendors are now engaged in developing software specifically for the tablet PC.
Components of a Web Pad
Some of the key components of the Web pad include:
Microprocessor – Its comparatively low power consumption and low heat output are ideal for
Web pad applications. Suitable processors include the National Semiconductor Geode, Transmeta Crusoe, Intel XScale, Intel StrongARM, Intel Celeron, and ARM cores.
Embedded operating systems – Web terminals and Web tablets are limited in their appeal
because they lack large amounts of storage memory and dedicated functionality. Consumers
will not adopt these devices unless manufacturers select operating systems that boot-up
rapidly and are available instantaneously when the screen is turned on. Some of the popular
operating systems for Web tablets and terminals include BeIA (by Be), Linux (open-source), QNX’s RTOS, Windows PocketPC (by Microsoft), VxWorks (by Wind River), and
Jscream (by saveJe). Price and performance are key factors when choosing an operating
system for these products. But the most critical factor in choosing an operating system is the
software applications that the operating system supports. One of the main reasons for the
success of the PC is the number of applications that it can support. The Web pad needs to
provide multimedia capabilities as well. Of key importance, the Web pad requires a Web
browser to provide Web support—the most popular include Microsoft’s Internet Explorer
and AOL Time Warner’s Netscape Communicator. Embedded Web browsers are also
available from Espial, Opera, QNX, and others.
Display – A color touch-screen LCD is the main user interface. This is a simple-to-operate
consumer friendly solution that varies in size between 8 and 10 inches. Display options
include TFT (active matrix) screens for improved resolution. This, however, makes the
finished product even more costly.
Battery – The Web pad can typically operate for 2 to 5 hours on batteries.
Flash memory and RAM
Power management unit
Network Interfaces – This enables communication between the tablet and the base station.
The network interface is provided by wireless technologies such as wireless LANs (like
IEEE 802.11b), HomeRF, Bluetooth, DECT, and others:
• LAN standards – The IEEE 802.11b wireless LAN, or Wi-Fi, is a common method of
connecting portable PCs to the network. The IEEE 802.11b is also gaining ground as
the dominant home networking technology because the same wireless LAN card used in
industry can also be used in the household. But the 802.11b offers 11 Mb/s data rates
without quality of service. Two other LAN standards, the IEEE 802.11a and
HiperLAN2, offer both quality of service and high data rates. They are also well-suited
for home environments that require voice, video and data access. Future Web tablets
will be based on wireless LAN technologies such as IEEE 802.11a and HiperLAN2.
• HomeRF – HomeRF is a popular wireless home networking technology that offers 1.6
Mb/s data rates at a lower price than 802.11b. But reduced HomeRF membership and
the dropping price of IEEE 802.11b make the future of HomeRF look rather bleak.
Several Web tablet vendors have also chosen 802.11b instead of HomeRF as their
wireless technology.
• Bluetooth – Bluetooth is gaining ground as a network interface in the home. It has the
support of over 2000 companies and the promise of a $5 component price, which makes
it attractive. Its personal area networking features also make it an ideal technology to
replace Infrared networking technology. Bluetooth should be ideal for synchronizing
calendars and address books between different consumer devices such as PDAs, PCs,
and Web tablets. However, Bluetooth is not an ideal technology for cordless communication because of its limited range and data rates.
• Digital Enhanced Cordless Telecommunications (DECT) – DECT is a cordless phone
standard that has limited data rate support and a voice-oriented specification. Few Web pads
will make use of DECT technology because of these limitations.
Keypad interface
Built-in smart card reader – A smart card reader has been included in anticipation of
applications like secured online transactions, personalized authentication services, access to
personal data, and e-Cash storage options.
The base station comes with a built-in 56K V.90 modem and an integrated 10/100 Base-TX
Ethernet port as standard. There is also a wireless module that is designed for communication with
the Web tablet using 802.11b or other wireless technologies.
Figure 12.1 shows a block diagram of a Web pad/tablet.
Figure 12.1: Web Pad Block Diagram
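The trade-offs among the wireless interfaces listed above can be illustrated with a simple filter over nominal link properties. The 11 Mb/s and 1.6 Mb/s figures follow the text; the remaining rates and the QoS flags are rough nominal values, and the selection logic is a deliberate simplification.

```python
# Illustrative comparison of the wireless options discussed in the text.
# 802.11b (11 Mb/s, no QoS) and HomeRF (1.6 Mb/s) figures come from the
# text; the other entries are rough nominal values, and the suitability
# test is a simplified assumption, not a real link-selection algorithm.

LINKS = {
    "802.11b":   {"rate_mbps": 11.0, "qos": False},
    "802.11a":   {"rate_mbps": 54.0, "qos": True},
    "HiperLAN2": {"rate_mbps": 54.0, "qos": True},
    "HomeRF":    {"rate_mbps": 1.6,  "qos": True},
    "Bluetooth": {"rate_mbps": 0.7,  "qos": False},
}

def suitable_links(min_rate_mbps, need_qos):
    """Return link names meeting a minimum data rate and, optionally, QoS."""
    return sorted(name for name, props in LINKS.items()
                  if props["rate_mbps"] >= min_rate_mbps
                  and (props["qos"] or not need_qos))

print(suitable_links(10, need_qos=True))  # voice/video in the home favors 802.11a and HiperLAN2
```

This mirrors the text's conclusion: once voice and video with quality of service are required, only 802.11a and HiperLAN2 remain as candidates for future Web tablets.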
Role of the Service Provider
Some Web pads and Web terminals fail primarily because of poor business models. A consumer is
not motivated to pay up to $1,500 for a Web tablet when their PC supports far more applications. The
Web pad will succeed in coming years only if service providers play a key role in their deployment.
These appliances will become customer acquisition and retention tools for ISPs, telecommunication
companies, and cable companies.
Cable companies have generally stayed out of the Web pad and Web terminal markets so far.
They have been focused on set-top boxes as the appliance that provides cable services and Internet
access to the consumer. The Web pad base station, however, could very easily connect through a
cable modem. The telephone companies could also provide DSL service to the base station for high-speed Internet access. Meanwhile, the ISPs are currently driving the Web pad and Web terminal markets.
The long-term goal of all of these companies is to provide high-speed data, voice, and video
access to multiple devices (PCs, TVs, gaming consoles, and Web pads) within the connected home.
The ISPs, telephone companies, and cable companies are all trying to add new users and often try
schemes such as offering free Web cameras to attract customers.
The ISPs try to increase their revenues by offering free online devices like Web pads to new
customers. This translates into additional users who are willing to pay monthly fees to use their
portals. New users then become customers of online retailers and order merchandise through those
portals. The portals, in turn, provide advertising real estate on the
consumer’s screen. Microsoft and AOL Time Warner are the major ISPs with online portals. They
would like to sell easy-to-use, instant-on Web terminals and Web pads in order to attract the PC-shy
user. These ISPs could offer rebates on the Web pads and Web terminals in exchange for a year’s
Internet service at $22 a month—similar to what they charge to connect a PC through a regular
phone line. Some ISPs have offered rebates for the purchase of PCs in order to lock in recurring
revenues for years of subscription. The ISPs are always trying to extend their reach from the traditional PC to consumer devices like TVs, handheld devices, and screen phones. Web terminals and
tablets are simple extensions of this strategy.
The strategy of service providers such as EarthLink and DirecTV that do not have online
properties like portals to sell is to simply earn revenues from subscriptions. They also provide e-mail
services, which brings consumers back to the ISP. They prefer selling the cable or DSL modems
through several direct or indirect channels. Other companies may also offer rebates on appliances
such as Web pads with Internet service through one of these providers. This strategy allows the ISP
to be more focused on technology when providing service. These subscription providers have offered
broadband service far quicker than rival ISPs with portals.
The attraction of online portals that have no ISP services is the quality and quantity of the content
they provide. Portals such as Yahoo! that have no ISP service of their own are becoming the de facto
portal of choice for the consumer by partnering with ISPs.
The rise of the World Wide Web has stimulated an onslaught of Web-based applications and services.
Everything from calendars, shopping, address books, customized news services, and stock trading
can be done over the Web. This has made the standalone PC based on the Windows operating system
less relevant for those who want only Web-based information and services. New machines that are
easier and simpler to use than a PC are on the way, and promise to liberate non-techies from the
headaches of dealing with a PC. Two such attempts are Web terminals and Web pads.
While the initial outlook for Web terminals has been gloomy, they will have their place in the
consumer market. They are currently in the evolution phase of market acceptance. This is where
business models need to evolve and the product value proposition needs to be further strengthened.
Unit volumes will not be enough for the next few years to justify the current number of vendors
building Web terminals. These vendors are all fighting over a very small pie, and several vendors will
exit the market.
The Web pad provides an easy-to-use, portable tool for wireless, high-speed connection to the
Internet and centralized home control applications. Most Web pads/tablets weigh less than three
pounds, measure less than an inch thick, and are about the size of a standard sheet of paper. A user
can surf the Internet, send and receive e-mail, and enter information via a wireless touch screen
display—all this at higher speed and with more features than the Web terminal. The Web pad’s
transmission and reception signals link to an 802.11b wireless access point connected to broadband
cable, DSL, or a dial-up phone connection. It allows the consumer to enjoy the benefits of a high-bandwidth wireless connection anywhere inside or outside the home within 200 feet of an
access point or base station. The tablet is currently fashionable, or “in,” which also helps to sell the
product. The Web pad will become more appealing as word of its usefulness spreads. This will
certainly help sales and accelerate the market for this product.
The basic math still holds: Consumers currently will pay $500 for a PC that gathers dust, but not
for a less powerful, less usable Net appliance. The Web tablet is not to be confused with the
Microsoft Tablet PC. Though they look similar, the Tablet PC is a full-fledged PC with a fast
processor and a Windows operating system that’s marketed toward businesses. The Web tablet is a
wireless Internet device that shares a household’s PC Web connection and other core hardware, and
better yet, one person can surf the Web on the tablet while another connects via the PC.
Web pads and Web terminals have seen discouraging results thus far in the consumer market, but
it is important to note that these products are not completely out of the running. History has shown
that failure of one product configuration does not kill an entire category. Discouraging results so far
could be due to improper product execution and poorly thought-out business models. For example,
the Apple Newton and other handheld devices failed in the PDA market before the Palm Pilot was
finally a success due to its correct product design. Web pads and Web terminals are still waiting for
the perfectly executed product strategy to emerge.
Internet Smart Handheld Devices
Internet smart handheld devices (SHDs) provide direct Internet access using an add-on or integrated
modem. Vendors that have announced Internet SHDs include Palm Computing, Dell Computer,
Nokia, Sony, Samsung, and Hewlett-Packard, with numerous others following suit. Over the last few
years, SHDs have been the hottest and largest growth segment in the consumer market.
Smart handheld devices include handheld companions, smart handheld phones, and vertical
application devices (VADs). The SHD market has grown significantly over recent years because of
market accelerators such as their acceptance by corporations and the presence of content (e-mail and
Internet services such as stocks, news, and weather). Key market inhibitors, however, have been the
slow acceptance of the Windows CE operating system, high cost (and the inability to eliminate the
PC), and the lack of applications for these devices. Market researcher IDC predicts that annual SHD
shipments will exceed 35 million units, with revenues of over $16.5 billion, by 2004.
Vertical Application Devices
Vertical application devices (VADs) are pen- and keypad-based devices that are used in specific
vertical applications in a variety of industries. Key applications for VADs include:
Routing, collecting, and delivering data for a vendor in the transportation industry
Providing physicians access to patient’s records in hospitals
These devices include pen tablets, pen notepads, and keypad handheld segments. The pen tablet
is used to gather data from the field or in a mobile situation, such as entering orders in a restaurant or
collecting data in a warehouse or rental car return lot. Pen tablet products typically weigh two
pounds or more and have a display that measures five inches or more diagonally.
Pen notepads are small pen-based handheld devices that feature a five-inch or smaller display
(measured diagonally). This product is typically used in business data collection applications (e.g.,
route accounting).
Keypad handheld products are used in vertical business data-collection applications. Typically,
these devices are built to withstand harsh environments, such as rain or dust, and they can continue to
work after being dropped three feet onto a hard surface. The worldwide market for vertical application devices, including pen tablets, pen notepads and keypads, will exceed 3.4 million units and $5
billion in revenue by 2005.
Smart Handheld Phones (or Smart Phones)
Smart phones are generally cellular voice handsets that also have the ability to run light applications,
store data within the device, and, in some cases, synchronize with other devices such as a PC. The
ability to run these applications is what differentiates smart phones from handsets that are merely
Wireless Application Protocol (WAP)-enabled (which will be virtually all handsets over the next four
years). The expanded capabilities of the smart phone can include the functions of a personal companion such as PIM (personal information management), as well as the ability to access data through a
WAP-enabled micro browser such as Openwave’s UP browser. Smart phones will initially be able to
take advantage of multiple standards of data connectivity, including:
Short messaging systems (SMS, which will most likely be concentrated in “dumb” handsets)
WAP (Wireless Application Protocol)
Services that pull data from existing web-based content providers and reformat them for
consumption on a small screen.
Smart handheld phones include the emerging enhanced, super-portable cellular phones that
enable both voice and data communications. Some of the applications for smart handheld phones
include cellular voice communications, Internet access, calendar, and Rolodex® data such as names,
addresses, and phone numbers.
The smart phone market is in constant flux as new players (e.g., Hewlett-Packard) enter the
market to capitalize on its explosive growth, and old ones (e.g., Psion) abandon it due to competitive
issues. The constant change in the market dynamics is beneficial to the growth of the market,
however, since the result is an environment that fosters innovation and the adoption of new technologies. Smart phones generally have a larger screen and perhaps smaller buttons than the simple
WAP-enabled handset, and some will include a pen for touch-screen input. Ericsson, Neopoint,
Qualcomm, Kyocera, Samsung, and Nokia have introduced smart phones. The leading operating
systems for smart phones include Palm OS, Symbian’s EPOC, and Microsoft’s Stinger. The worldwide market for smart phones will reach about 18 million units and $7.7 billion by 2004.
Handheld Companions
Handheld companions include personal and PC companions, and personal digital/data assistants
(PDAs). Applications for handheld companions include:
Personal information management (PIM)
Data collection
Light data creation capabilities (such as word processing for memos)
Personal and PC Companions
Personal computer companions normally feature a keyboard, a relatively large screen, a Type I and/or
Type II PC card expansion slot, the Windows PocketPC operating system, data synchronization with a
PC, and, in some instances, a modem for wire-line Internet access, and a pen input for use with a
touch screen. PC companions are generally used for PIM and data creation activities such as e-mail,
word processing, and spreadsheets. This form factor will increase in popularity (but will not come
near the explosive growth of PDAs) as prices fall and as technology currently associated with
notebook computers is added to the product segment. Examples of current products in this category
include the Hewlett-Packard Jornada and the Psion Revo.
The most popular class of devices among handheld companions is the PDA, which is discussed
in further detail below.
Personal Digital (Data) Assistants—The PDA
In the 1980s the Franklin planner or Filofax organizer was the visible sign that you were a busy and
important person. The end of the 1990s replaced that badge of distinction with a digital equivalent—
the Personal Digital Assistant (PDA). A PDA is effectively a handheld PC, capable of handling all
the normal tasks of its leather-bound ancestor—address book, notepad, appointments diary, and
phone list. However, most PDAs offer many more applications such as spreadsheet, word processor,
database, financial management software, clock, calculator, and games.
What made PDAs so attractive to many PC users was the ability to transfer data between the
handheld device and a desktop PC, and to painlessly synchronize data between the mobile and
desktop environments. Early PDAs were connected to the PC by a serial cable. Modern PDAs
connect to the PC via an infrared port or a special docking station.
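The synchronization step described above can be sketched as a minimal "last modified wins" merge. This is one common strategy, not the actual HotSync conduit protocol; the record IDs, timestamps, and values below are illustrative.

```python
# A minimal sketch of one common synchronization strategy ("last modified
# wins") for keeping a PDA's records and a desktop's records consistent.
# Real conduits (e.g., Palm's HotSync) also track per-record dirty flags
# and handle deletions; this illustration uses plain dicts with timestamps.

def sync(pda, desktop):
    """Merge two {record_id: (timestamp, value)} stores; the newer entry wins."""
    merged = dict(desktop)
    for rec_id, (ts, value) in pda.items():
        if rec_id not in merged or ts > merged[rec_id][0]:
            merged[rec_id] = (ts, value)
    # After synchronization both sides hold the identical merged view.
    return merged, dict(merged)

pda     = {"addr1": (105, "Rita - 555-0102"), "memo7": (90, "buy stylus")}
desktop = {"addr1": (100, "Rita - 555-0101"), "cal3":  (110, "lunch Tue")}
merged_pda, merged_desktop = sync(pda, desktop)
print(sorted(merged_pda))      # every record now exists on both devices
print(merged_pda["addr1"][1])  # the PDA's newer copy of addr1 wins
```

The key property, mirrored in the text, is painlessness: the user never chooses sides record by record, because the merge rule resolves conflicts automatically.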
The allure of the PDA in the realm of business is not hard to understand. Small, portable, and
powerful technologies have always held great general appeal. For style-conscious users looking for
the latest and greatest in gadgetry, the PDA is a natural accompaniment to that other essential
business item of the twenty-first century—the mobile phone. The increasing power of PDAs has led
to a growing interest in the corporate arena. If simple data manipulation and basic Internet connectivity are the only applications required, the PDA is an attractive option—likely to be a much lighter
burden to bear than a notebook PC for both the mobile worker and the company bank account.
Because of the PDA’s size, either a tiny keyboard or some form of handwriting recognition
system is needed for manually entering data into a PDA. The problem with keyboards is that they
are too small for touch-typing. The problem with handwriting recognition systems is the difficulty
in making them work effectively. However, the Graffiti handwriting system has proven to be the solution
to the handwriting recognition problem. This system relies on a touch-screen display and a simplified
alphabet (which takes about 20 minutes to learn) for data entry. Typically, PDAs with the Graffiti
system provide the option to write directly onto the display, which translates the input into text, or to
open a dedicated writing space, which also provides on-line examples and help.
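The single-stroke idea behind Graffiti can be illustrated with a toy classifier that looks only at a stroke's dominant direction. The direction-to-character mapping below is invented for illustration; a real recognizer matches whole stroke shapes against per-character templates, not just endpoints.

```python
# A toy illustration of single-stroke recognition in the spirit of Graffiti:
# classify a stroke by its dominant direction. The stroke-to-character
# mapping here is an invented illustration, NOT the real Graffiti alphabet.

def classify_stroke(points):
    """points: list of (x, y) pen samples; return a coarse direction label."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "right" if dx >= 0 else "left"
    return "down" if dy >= 0 else "up"   # screen y grows downward

# Invented mapping for illustration; a real recognizer compares the whole
# stroke shape against a template for each character in the alphabet.
STROKE_TO_CHAR = {"right": " ", "left": "backspace", "down": "i"}

stroke = [(10, 10), (12, 11), (30, 12)]         # mostly rightward pen motion
print(STROKE_TO_CHAR[classify_stroke(stroke)])  # rightward stroke maps to a space here
```

Because every character is a single stroke with an unambiguous start and end, this style of recognizer can run reliably on the modest processors found in PDAs, which is what made the simplified-alphabet approach practical.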
The PDA market has become segmented between users of the two major form factors—devices
that have a keyboard and others that are stylus-based, palm-size devices. The choice depends on
personal preference and the level of functionality required. Following this trend, Microsoft has
evolved its CE operating system into the PocketPC. However, PDAs will still require a universally
sought-after application to make them truly ubiquitous. The advent of multifunction universal
communications tools that combine the capabilities of mobile phones and PDAs may be set to deliver
that. The ability to conveniently and inexpensively access the Internet with a single device is the holy
grail of mobile computing. In fact, Palm Computing brought this ability a significant step closer with
the launch of their wireless Palm VII in the autumn of 1999. One possible outcome is for the market
for the wireless PDA class of device to split into two approaches. One approach would be for those
whose desire is basically for an upscale mobile phone, and who only require modest computing
power such as a built-in web browser. The other would be for those who require a portable computer
and who want to be in touch while in transit.
The PDA has been one of the poster children for putting digital computing into the hands of the
consumer. In general, they are palm-sized devices—about the size of a package of cigarettes. Users
input data by tapping the keys of a mini keyboard pictured on-screen or, more commonly, by writing
Chapter 13
with a stylus on a note-taking screen. The notes are then “read” by handwriting recognition programs
that translate them into text files. The PDAs are designed to link with desktop or laptop PCs so that
users can easily transfer dates, notes, and other information via a docking unit or wireless system.
Personal digital assistants come in many shapes and sizes, and are synonymous with names like
handheld computer, PC companion, connected organizer, information appliance, smart phone, etc. A
PDA or handheld computer is primarily a productivity and communications tool that is lightweight,
compact, durable, reliable, easy to use, and integrates into existing operations. Typically, it can be
held in one hand leaving the other to input data with a pen type stylus or a reduced size keyboard. It
is undoubtedly one of the most successful appliances to have arrived in the palms of the consumer.
Personal digital assistants fall into one of several general categories—tablet PDAs, handheld PCs
(HPCs), palm-size PCs (PPCs), smart phones, or handheld instruments. Ease of use and affordability
are two essential factors driving consumer demand. Handheld computers and PDAs are not a new
concept, but only recently have they begun to find broad appeal. Palm Computing, Inc.’s Palm-brand
“connected organizers,” introduced in 1996, and Microsoft’s later PocketPC devices have revived
consumer demand for PDAs. The PocketPC runs the new generation of the Windows CE operating
system, a stripped-down version of Microsoft’s Windows operating system tailored for
consumer electronics products.
History of the PDA
The idea of making a small handheld computer for storing addresses and phone numbers, taking
notes, and keeping track of daily appointments originated in the 1990s, although small computer
organizers were available in the 1980s. In the late 1990s, a combination of primarily chip, power,
and screen technologies resulted in an avalanche of newer and increasingly sophisticated information
and communication gadgets for both home and mobile use. The growth of what would become the
handheld computing market was driven by the transformation of the corporate environment into an
extended, virtual enterprise. A mobile, geographically dispersed workforce requiring fast and easy
remote access to networked resources and electronic communications supported this transformation.
The emergence of corporate data infrastructures that support remote data access further encourages
the growth of the handheld computing market.
The easy acceptance of mobile computing may well have been influenced by the TV show Star
Trek, where producer Gene Roddenberry forbade pen and paper on the 23rd century starship U.S.S.
Enterprise. This requirement gave rise to the “Tricorder” and the concept of mobile information
devices. In the 1970s, Xerox’s Palo Alto Research Center (PARC) explored the Dynabook notebook computer
concept. The first mobile information device (in the real world) was the Osborne 1 portable computer
in April 1981. The first Compaq appeared in July 1982, and was followed by the Kaypro 2 in October
of that same year. All three “luggable” computers were the size of small suitcases, and each weighed
about 25 pounds. The Kaypro was mockingly nicknamed “Darth Vader’s Lunchbox.”
These early portable computers and their successors, laptop and then notebook computers,
merely served as replacements for their full-sized counterparts. Consumers were seeking a new type
of device—one that would supplement the computer and replace paper-based appointment calendars
and address books.
Internet Smart Handheld Devices
Psion and Sharp
It is generally accepted that UK-based technology firm Psion defined the PDA genre with the launch
of its first organizer in 1984. The Psion 1 weighed 225 grams and measured 142mm x 78mm x
29.3mm—narrower than a large pack of cigarettes, and slightly longer and thicker. It was based on 8-bit technology, and came with 10K of nonvolatile character storage in cartridges. It had two cartridge
slots, a database with a search function, a utility pack with math functions, a 16-character LCD
display, and a clock/calendar. The optional Science Pack turned the Psion into a genuine computer,
capable of running resident scientific programs and of being programmed in its own BASIC-like
language, OPL. The Psion 1 was superseded by the Psion II—500,000 Psion IIs were produced
between the mid-1980s and the early 1990s. Many of these were commercial POS (point of sale)
versions that ran specialized applications and did not include the standard built-in organizer functions.
In 1988, Sharp introduced the Sharp Wizard, which featured a small LCD screen and a tiny
QWERTY keyboard. This, however, saw very little success. The Psion Series 3a, launched in 1993,
and based on 16-bit microprocessor technology, represented the second generation in Psion’s
evolution. The Series 3a was housed in a case that looked remarkably like a spectacle case and opened
in a similar way to reveal a 40-character x 8-line mono LCD and 58-key keyboard in the base. The
Series 3a broke new ground with its ability to link to a desktop PC and transfer, convert, and synchronize data between the two environments. Psion’s domination of the PDA market was assured for
a couple of years. The more powerful Series 3c and the third-generation 32-bit Series 5 were
launched in 1997, building upon the success of the 3a. The Series 5 boasted the largest keyboard and
screen—a 640x240 pixel, 16 gray-scale—of any PDA to date. But these features did not prevent
Psion from losing its PDA market leadership position to 3COM’s groundbreaking PalmPilot devices.
Apple Computer
Psion’s success prompted other companies to start looking at the PDA market. Apple Computer made
a notable attempt to enter this market in mid-1993, when the launch of its first Newton Message Pad
was heralded as a major milestone of the information age. Faster and more functional chips led to the
development of the electronic organizer into the personal digital assistant (PDA), a term coined by
then Apple CEO John Sculley. Several other established electronics manufacturers such as Hewlett-Packard, Motorola, Sharp, and Sony soon announced similar portable computing and communication devices.
An ambitious attempt to support data entry via touch-sensitive LCD screens and highly complex
handwriting recognition software differentiated Apple’s Newton technology from its competitors. In
1997 Apple launched the eMate, a new PDA that continued the Newton technology. But Newton’s
handwriting recognition technology never became fast or reliable enough, even though it had
advanced by leaps and bounds in the years since its first appearance. The Newton was also too large,
too expensive, and too complicated. In 1998 Apple announced its decision to discontinue development of the Newton operating system.
Palm Computing
In 1995, a small company called Palm Computing took the idea of the Newton, shrunk it, made it
more functional, improved the handwriting recognition capability, halved Newton’s price, and
produced the first modern PDA, the Palm Pilot. U.S. Robotics acquired Palm Computing in 1995,
and transformed the PDA market one year later by introducing the company’s keyboard-less Pilot
products. Data was entered into these devices with a stylus and touch-sensitive screen, using the
company’s proprietary Graffiti handwriting system.
Palm products became formidable players in the handheld computing arena following a further
change in ownership in mid-1997, when U.S. Robotics was purchased by 3Com. This led to a
burgeoning PDA market. Their success has also led to a segmentation of the market into users of the
two major form factors: devices that have a keyboard, and stylus-based palm size devices that don’t.
The keyboard-entry devices are increasingly viewed as companion devices for desktop PCs, and
often run cut-down versions of desktop applications. The palm-size form factor retained the emphasis
on the traditional PIM application set, and generally has less functionality. The choice depends on
personal preference and the level of functionality required.
In 1996 Palm Computing, Inc.—then a part of U.S. Robotics—led the resurgence of handheld
computing with the introduction of its Pilot 1000 and Pilot 5000 devices. Designed as companion
products to personal computers, Palm PDAs enable mobile users to manage their schedules, contacts,
and other critical personal and business information on their desktops and remotely. They automatically synchronize their information with a personal computer locally or over a local or wide area
network at the touch of a button. Their most distinguishing features include their shirt-pocket size, an
elegant graphical user interface, and an innovative desktop-docking cradle that facilitates two-way
synchronization between the PC and organizer.
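The two-way synchronization a docking cradle enables can be sketched as a simple record merge. This is an invented last-writer-wins scheme for illustration only, not Palm's actual HotSync conduit protocol; the store layout and record names are hypothetical.

```python
"""Minimal sketch of two-way PDA/desktop record synchronization.
Each store maps a record id to a (timestamp, data) pair; after sync(),
both sides hold the newest version of every record."""

def sync(pda, desktop):
    """Merge the two stores in place, newest timestamp winning."""
    for store_a, store_b in ((pda, desktop), (desktop, pda)):
        for rec_id, (ts, data) in store_a.items():
            # Copy the record over if the other side lacks it or has an
            # older version.
            if rec_id not in store_b or store_b[rec_id][0] < ts:
                store_b[rec_id] = (ts, data)

# Hypothetical contact/appointment records.
pda = {"c1": (10, "Dr. Lee 555-0100")}
desktop = {"c1": (12, "Dr. Lee 555-0199"), "c2": (5, "Dentist Tue 3pm")}
sync(pda, desktop)
# Both stores now hold the newer phone number plus the appointment.
```

Real conduits also had to handle deletions and conflicting edits on both sides, but the one-button "press to reconcile" experience rests on a merge of roughly this shape.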
The Pilot devices introduced the “palm-sized” form factor. Early devices were about the size of
a deck of playing cards and weighed around 155g. By 1999 sizes had become smaller still and the
design was much sleeker; the Palm V weighed in at 115g at a size of 115mm x 77mm x 10mm. At
that time devices were equipped with a 160 x 160 pixel backlit screen and came complete with a
comprehensive suite of PIM software. The software includes a date book, address book, to-do list,
expense management software, calculator, note-taking applications, and games. The software bundle
also included an enhanced version of the award-winning Graffiti power writing software by Palm
Computing, which enables users to enter data at up to 30 words a minute with 100% accuracy.
Functionality has made this easy-to-use device the de facto standard in the handheld computing market.
By the end of 1999 Palm Computing had once again become an independent company from
3Com and had consolidated its market leadership position. It has since led the market with the
launch of its much-anticipated Palm VII device, which added wireless Internet access to the familiar
suite of PIM applications. Several web content providers collaborated with Palm to offer “web-clipped” versions of their sites—designed specifically for the Palm—for easy download. Palm
Computing is undoubtedly set to dominate the palm-size segment for some time to come as the
overall PDA market continues to grow. The Palm Pilot was a hit with consumers because it was small
and light enough to fit in a shirt pocket, ran for weeks on AAA batteries, was easy to use, and could
store thousands of contacts, appointments, and notes.
The Present and the Future
Following Palm’s success in the PDA market, several companies have licensed the Palm operating
system, which in many ways has been the reason for Palm’s success. This licensing trend has also
been stimulated by the existence of a few thousand Palm-based application software developers. A
second camp, based on Microsoft’s PocketPC, has emerged and continues to gain significant ground
as the market looks for PDA applications that are compatible with notebook and desktop PCs.
Microsoft has been part of this market since September 1996 with the introduction of its first PDA
operating system, Windows CE. This operating system, since rebranded as PocketPC, has been licensed to PDA
makers including Casio, Sharp, and Hewlett-Packard.
Today, you can buy Palm-like devices from major PC hardware and consumer electronics
manufacturers. Though originally intended to be simple digital calendars, PDAs have evolved into
machines for crunching numbers, playing games or music, and downloading information from the
Internet. The initial touch-screen PDAs used monochrome displays. By 2000, both Palm and
PocketPC PDAs had color screens. All have one thing in common—they are designed to complement
a desktop or laptop computer, not to replace it.
Meanwhile the PDA market is seeing a growing threat from the mobile phone market. This
market continues to provide similar functions such as address books, calculators, reminders, games,
and Internet access, not to mention voice.
PDA Applications
Early PDAs were designed as organizers and came with little variation on the set of familiar PIM
applications—perhaps with a couple of games. They could store addresses and phone numbers, keep
track of appointments, and carry lists and memos. The traditional PIM functionality has matured and
grown in sophistication over the last few years but continues to be present today. Personal digital
assistants today are more versatile, and can perform the following functions:
- Manage personal information
- Store contact information (names, addresses, phone numbers, e-mail addresses)
- Make task or to-do lists
- Take notes and write memos on the notepad
- Scheduling—keep track of appointments (date book and calendar) and remind the user of appointments (clock and alarm functions)
- Plan projects
- Display world time zones
- Perform calculations
- Keep track of expenses
- Send or receive e-mail
- Access the Internet—limited to news, entertainment, and stock quotes
- Word processing
- Play MP3 music files
- Play MPEG movie files
- Play video games
- Function as a digital camera
- Function as a GPS receiver
Applications in Vertical Markets
Personal digital assistants can help technicians making service calls, sales representatives calling on
clients, or parents tracking children’s schedules, doctor’s appointments, or after-school activities.
There are thousands of specialty software programs available in addition to the PDA’s basic functions. These include maps, sports statistics, decision-making aids, and more. Some applications that show
the versatility of PDAs include uses by health professionals, amateur astronomers, truck drivers, and
service technicians.
Health Professionals
Many health professionals (physicians, nurses, pharmacists) need to regularly keep track of patient
information for medications and treatment. Computer terminals are typically limited in number and
seldom available in a clinic, and especially so at the patient’s bedside. In addition, many health
professionals need access to information about pharmaceuticals (pharmacopoeias, Physician’s Desk
Reference, Clinician’s Pocket Reference). They also need access to emergency room procedures and
other medical or nursing procedures. Doctors and nurses can put all of this information on a PDA
instead of carrying manuals with procedures, or references, or index cards with patient information
in their pockets. They can note patient monitor readings and other patient information at the bedside
on the PDA for later upload into a PC. They can download drug and procedural reference materials
onto a PDA for consulting at bedside. They can also have programmed drug dosage calculators in their PDAs.
Another promising application for mobile technology is the management of home health
services. Hospitals trying to save money and maintain a reasonable staff workload can benefit greatly
by sending patients home as soon as possible. But to ensure proper follow-on care, the traveling
medical staff caring for those patients must be able to schedule appointments on the fly, and communicate with hospital-bound professionals. They also must have remote access to any given patient’s
dossier of current and historical medical information. The cost-reduction benefits of using the PDAs
mobile technology extend beyond the patient’s early departure from the hospital. The constant
communication afforded by a portable device can also reduce the cost of rescheduling nurses and
other professionals in the field. And when a home visit is covered by a healthcare professional
previously unfamiliar with the patient’s case, the ability to instantly access the patient’s complete
medical history via the portable device allows the visiting case worker to get up to speed quickly,
potentially shortening the visit and certainly providing better-informed care.
Amateur Astronomers
The amount of equipment that an amateur astronomer takes out in the field when observing is sometimes
daunting. Not only must they take the telescope, telescope mount, eyepieces, and cold weather gear,
but also star charts, field guides, and notebooks. Some astronomers have to carry a laptop
computer to drive their computer-guided telescopes. Consequently, many of these loose items get
scattered across an observer’s table and are easily misplaced in the dark. If an amateur astronomer
had a PDA, he or she could download a planetarium program into the PDA, which could then serve
as a star chart and field guide. They could then take observation notes on the PDA and later upload
them into a PC. The PDA could also replace a laptop and drive the computer-guided telescope to the
desired celestial coordinates. All of these functions could be done with one PDA instead of several
other items.
Truck Drivers
Truck drivers on the road must frequently communicate with their companies and their homes. They
consult e-mail and keep track of expenses, shipping records, maps, and schedules. Many drivers use
laptop computers in their trucks to take care of these tasks. However, laptops have relatively short
battery life and are bulky. Modern PDA models and software can now handle many of these functions,
ranging from personal information management to mapping and wireless e-mail.
Service Technicians
Utility companies (telephone, cable, etc.) issue PDAs to their customer service technicians to allow
the technicians to receive dispatching information, mapping, and diagnostics. This lets the technicians spend more time in the field with customers. In general, this translates into hours saved,
happier customers, and minimized operating costs.
Vertical and Horizontal Market Considerations
Some firms developed businesses selling PDA automated systems to companies that needed handheld
computers to solve various business problems. Applications included delivery-route accounting,
field-sales automation, and inventory database management in manufacturing, warehouse, health
care, and retail settings. The horizontal-market PDA companies have now recognized there is a very
real need for PDAs within vertical markets.
Unlike horizontal-market buyers, vertical-market PDA buyers are not overly concerned with cost,
because the payback period for the PDA can be calculated. In many vertical-market applications, a
$2,000 cost per PDA makes sense because of the vast cost savings achieved with the implementation of a
PDA-based system. In one well-known example, Avis introduced handheld computers to handle
car rental return transactions. The handheld computers allowed Avis to provide a much higher level
of customer service regardless of possible cost savings. The car-return process was much faster, and
that made renting from Avis attractive to customers. In short, handheld
devices like PDAs either provide a company with a competitive advantage (such as fast check-in for
harried business travelers) or save it a significant amount of money (such as more time available to
spend with customers).
One universal avenue by which PDA marketers might translate the device’s benefits
from vertical applications to the mass market may be literally at hand: the ubiquitous World Wide
Web (WWW). With its enormous, built-in word-of-mouth notoriety and enough plausible remote-access scenarios to support a coherent large-scale marketing push, a PDA permitting low-cost
wide-area web connectivity could be the key to attracting home users. The identification of user
needs will no doubt be crucial to the PDA’s future. Vendors are facing a marketplace where the basic
selling proposition will be to convince a consumer who has already spent $2,500 for a full-service
desktop PC to spend an additional $500 to $1,500 for a wireless device to communicate with the PC.
With virtually everyone a potential customer, the market seems almost limitless. Someday, if the
conventional wisdom holds, grandma will produce digital coupons at the checkout counter, courtesy
of the supermarket’s memory-card advertising flyer. The kids will order interactive music videos via
their handheld link to the set-top TV box. Mom and dad will effortlessly transfer the day’s e-mail
from their traveling PIM to the home desktop for automatic identification, prioritization, and filing.
PDA Market
Personal digital assistant shipments are expected to display impressive growth. From 4 million units
in 1998, annual shipments of PDAs increased to more than 12 million units in the year 2003, and are
expected to reach 13.6 million units in 2004. The worldwide end-user revenue reached $3.5 billion in
2003 and is expected to top $3.75 billion in 2004. While the PDA ASP is expected to continue to
decline, the PDA market is still forecast to expand at a solid rate. Consequently, demand for ICs in
PDA applications will also grow significantly. The average IC content in a PDA, in terms of dollars,
is about 40%; correspondingly, the PDA IC content market is expected to exceed $1.6 billion by
2004. The market for PDAs overall is expected to grow to $4.3 billion by 2004. Global PDA retail
revenues are expected to increase fivefold by 2006, with total unit shipments exceeding 33
million. Low-cost PDAs from Dell and Palm should boost the market for PDAs. The PDA is expected
to take the lead in the device race as the most suitable platform for mobile data services.
The leading manufacturers of PDAs are Casio, Dell Computer, Hewlett-Packard, LG Electronics,
Palm Computing, Nokia, Philips, Psion, Samsung, Sharp, and Sony. Apple Computer, the pioneer in
PDAs and the first to market without a keyboard, discontinued its Newton product line in early 1998.
Apple is also expected to announce its own line of new PDA products. Personal computer manufacturers such as HP, Toshiba, NEC, and Acer are also targeting the PDA market. As the annual growth rate
for the PC industry has started to slow, these manufacturers are looking to boost revenue growth.
PDA Market Trends
A number of trends are shaping the PDA market. These range from form
factor and the efforts of individual vendors to individual device features, wireless connectivity, competing products, and more.
Form Factor
The PDA provides access to important personal and business data via a small, portable, and lightweight handheld device, without the expense and weight of a notebook or laptop computer. It
has become very popular in the consumer market, with much potential in vertical markets. Its popularity
looks set to continue its healthy growth in the consumer market, with further growth anticipated in a
widening range of vertical markets.
Palm Computing
The PDA products from Palm Computing are the most popular PDAs today. The success of Palm has
been tied to its utter simplicity. It was designed to provide a few personal and business “housekeeping”
functions and to serve as a companion to PCs. Palm PDAs have been termed ‘connected
organizers’ by Palm Computing, since it is simple to exchange contact and calendar information
between a PC and the handheld unit. The latest Palm generations can wirelessly connect to the
Internet or to corporate networks through third-party attachments. The success of Palm-based PDAs
has been due to the real-time operating system (RTOS), called Palm OS. Palm OS has garnered wide
support—there are several thousand registered Palm developers. The Palm OS is also established as a
standard operating environment in the industry. As such, it has been licensed to other manufacturers
(such as Sony, IBM, TRG and Symbol) for use in products such as smart phones, pagers, PDAs, and
data collection terminals. Consumers continue to prefer the Palm OS and its clones to the various
handheld and palm-sized computers running Microsoft’s “mini-me” version of its Windows operating
system, Windows CE. Palm OS continues to enjoy a larger market share than Microsoft’s Windows
CE operating system. While Microsoft will gain larger market share in the coming years, Palm OS
will remain the PDA operating system market leader for the next few years.
Four members of the team that developed the first Palm PDA later formed a company called Handspring. While the Handspring PDA looks very similar to the Palm, its most notable difference is a
special slot for hardware and software upgrades. This slot, called the Springboard, allows the easy
addition of extras such as pagers, modems, MP3 players, cell phones, digital cameras, Internet access
programs, GPS receivers, bar code scanners, video recorders, games, portable keyboards, and other
applications and utilities. Palm Computing acquired Handspring in 2003.
Microsoft
Microsoft has provided its Windows CE RTOS (real-time operating system) to various PDA clone
and handheld computer manufacturers. This provided consumers with an interface and applications they are familiar with from the PC. Windows CE was similar to the popular Windows 95/98
operating system, but had more limitations than features. The Windows CE 2.0 generation had
several enhancements and was more widely accepted than its predecessor. It was considered a
potential threat to nonstandard platforms such as Palm, Psion, and Avigo (by Texas Instruments).
However, Microsoft was not able to monopolize the PDA industry with Windows CE as it did the
desktop and notebook PC market with its Windows operating system. However, Microsoft has
become a more formidable player in the PDA market since the introduction (in April 2001) of its
PocketPC operating system.
The PocketPC platform runs a much improved and simplified version of Microsoft’s Windows
CE operating system. Previous versions of this OS had been derided as nothing more than a slimmed
down version of its desktop OS and not conducive to the handheld environment. Some of the key
adopters of PocketPC are Dell Computer, Hewlett-Packard, and Casio. Windows-based PDAs have
advantages over Palm PDAs that include color screens, audio playback, support for more applications,
and better expansion/connectivity capabilities via card slots. The Windows-based PDAs are intended
for users who need most, but not all, of the functionality of larger notebook PCs. Windows PocketPC
is available in larger platforms from companies such as HP, LG Electronics, Sharp, and so forth.
These platforms offer more memory, a keyboard, a track pointer or track-pad, and a color screen;
they weigh about two pounds and cost around $800.
Microsoft is expected to continue to attack this potentially lucrative market, and gain ground in it, as wireless phones, handheld devices, and the
Internet converge into multipurpose devices that are expected to soar in popularity over
the next few years. Windows PocketPC is a small-scale
version of the Windows operating system that has been adapted for use with a variety of diverse
portable devices, ranging from AutoPC systems in cars to cable TV set-top boxes to handheld and
palm-sized computers. Windows PocketPC gets by on relatively little memory, while still offering a
familiar interface to Windows users, complete with a Start button and “pocket” versions of programs
such as Microsoft Outlook, Word, Excel, and PowerPoint. Microsoft also introduced PocketPC 2003
in June 2003. While this is not a major upgrade, some improved features provide a better wireless
connection. Microsoft licensees are launching new models based on this OS and Intel’s XScale
PXA255 processor, which will provide improved speed and battery life.
EPOC and Symbian
EPOC takes its name from the core of Psion’s Series 3 OS. It was called EPOC to mark Psion’s
belief that a new epoch for personal convenience had begun. For the Series 5, EPOC had become
the name of the OS itself and had evolved into a 32-bit open system. It originally ran only on RISC-based processors using the ARM architecture. Subsequently, EPOC32 became portable to any hardware
architecture. Psion did not begin to license its EPOC32 operating system until 1997. Support was
disappointing though, with Philips as the only major manufacturer to show any interest. However, in
mid-1998 Psion joined forces with Ericsson, Nokia, and Motorola to form a new joint venture called
Symbian. Symbian had the aim of establishing EPOC as the de facto operating system for mobile
wireless information devices. It was also seeking to drive the convergence of mobile computing and
wireless technology—enabling Internet access, messaging, and information access all within a
device that fits in a shirt pocket. These three handset manufacturers (Ericsson, Nokia, and Motorola)
would share their development resources with Psion, which would continue to develop the EPOC32
operating system in conjunction with its new partners. Symbian believes that there will either be
smart phones combining communication and PIM functionality, or wireless information devices with
more features that will combine today’s notebook, mobile phone and PDA in a single unit. Symbian
claims that EPOC32 has a number of characteristics that make it ideal for these devices, such as
modularity, scalability, low power consumption, and compatibility with RISC chips. As such,
Symbian plans to evolve its EPOC technology into two reference designs. One will be a continuation
of the current form factor, supporting fully featured PDAs and digital handsets. The other will be an
entirely new design providing support for a tablet-like form factor with stylus operation, handwriting
recognition, and powerful integrated wireless communications—a device that sounds remarkably like
the Palm VII. Color support by EPOC became a reality in late 1999 with the launch of two subnotebook models, the Series 7 and netBook.
Internet Connectivity
Wireless connectivity to corporate data and the Internet via PDA devices provides users with the
ability to obtain information at any time, from anywhere.
Internet over a wireless connection. Instead of downloading entire web pages to your PDA, Palm
devices use a process called ‘web clipping’ to slice out bits of text information and send the text
through the airwaves to your PDA. For example, say that you want to get a stock quote from an
online broker such as E-Trade. You tap the E-Trade icon, fill out a form on your PDA listing the
ticker symbol and tap the Send button. Your text query is sent via a data packet-paging network to an
Internet server. Software on the servers searches the E-Trade site and then transmits the answer back
to your PDA. News headlines, phone numbers, e-mail, and other information can be transmitted in
the same way. Eventually, PDAs will merge with cell phones and use a cellular network to communicate via voice as well as text. It is also likely that PDAs will become faster, have more memory, and
consume less power as computer technology advances.
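The clipping step described above can be sketched in a few lines: instead of shipping a full web page over a slow wireless link, a proxy extracts just the requested fields and returns a tiny text payload. The page format, field names, and quote data below are invented for illustration; the real Palm.Net service used its own compact query and response formats.

```python
"""Rough sketch of the web-clipping idea: server-side extraction of a few
labeled values from a bulky page, so only a few bytes travel to the PDA."""

import re

def clip(html, wanted_fields):
    """Crude 'clipping': pull labeled values out of a page and return a
    compact record instead of the whole document."""
    clipped = {}
    for field in wanted_fields:
        # Match e.g. "Last: 28.75" up to the next tag or line break.
        m = re.search(field + r":\s*([^<\n]+)", html)
        if m:
            clipped[field] = m.group(1).strip()
    return clipped

# A stock-quote page might weigh tens of kilobytes of markup and ads...
page = ("<html><body><h1>Quotes</h1>Symbol: PALM<br>Last: 28.75<br>"
        "Change: +0.50<br><p>lots of ads and markup...</p></body></html>")

# ...while the clipped reply sent over the packet network is tiny.
print(clip(page, ["Symbol", "Last", "Change"]))
```

The asymmetry is the whole point: the heavy fetching and parsing happen on the wired server side, and the wireless hop carries only the answer.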
Wireless LANs
The value proposition of the wireless-enabled PDA becomes even more compelling with the advent
of wireless LANs (WLANs) on PDAs. Wireless LANs (particularly WiFi—Wireless Fidelity, or IEEE
802.11b) allow lower cost-per-minute charges and higher bandwidth for end users when compared to
a wide area connectivity solution such as 3G technology. This also creates revenue opportunities for
carriers and service providers. In usage scenarios where spontaneity is important, the wireless
LAN-enabled PDA presents an attractive form factor for accessing the mobile Internet when compared to
the notebook PC. This is compelling in public WLAN applications such as shopping malls, where
retailers can provide services such as online coupons to consumers. This creates revenue opportunities for WLAN carriers, retailers, service providers, device manufacturers, and software application
developers. In addition, WLAN has the potential to augment the bandwidth of cellular networks. There
is an opportunity for cellular carriers to partner with WLAN carriers to offload the cellular network
in scenarios where calls are made within range of a WLAN network. This would give consumers
lower-cost voice and data calls over the LAN. These scenarios have
the potential to accelerate the adoption of WLAN technology and drive the sales of PDAs. The
proliferation of WiFi “hot spots” entices more people to carry WiFi-capable computing devices with
them more of the time.
Wireless Synchronization Using Infrared
There’s little benefit in having a word processor or similar feature on a PDA without the capability to
transfer and synchronize data back to a desktop system—particularly as relatively few devices
support printing via a parallel printer port. It is no surprise, then, that data transfer and
synchronization have improved significantly in recent years. This improvement has been due to the efforts of
third parties who have developed both hardware accessories for use with PDA docking cradles and
software applications designed to make the synchronization task as comprehensive and as simple to
execute as possible. Most PDAs employ a similar docking design, which enables the device to be
slotted into a small cradle that is connected to the desktop PC via a serial cable. Many cradles also
provide a source of power, as well as facilitating connection to the desktop device, and recharge the
PDA’s battery while the device is docked. The synching feature has proved one of the most popular
features of the PDA. It allows users to update and organize the Palm Rolodex function and calendar
as frequently as they like. Today, synchronization is done through a cable that connects the
PC and the PDA (either directly or in a cradle). This, however, is quite inconvenient.
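The merge at the heart of that synchronization step can be sketched as follows; real PDA conduits track per-record dirty flags and handle conflicts explicitly, so this last-write-wins model is our simplification.

```python
# Sketch of record synchronization between a desktop and a handheld:
# each store maps a record id to (timestamp, data), and whichever copy
# was modified most recently wins.

def sync(pc: dict, pda: dict) -> dict:
    """Merge two record stores; the newest copy of each record wins."""
    merged = dict(pc)
    for rid, (ts, data) in pda.items():
        if rid not in merged or ts > merged[rid][0]:
            merged[rid] = (ts, data)
    return merged

pc  = {"c1": (100, "Alice 555-0100"), "c2": (200, "Bob 555-0200")}
pda = {"c2": (250, "Bob 555-0299"),  "c3": (150, "Carol 555-0300")}
both = sync(pc, pda)
# both keeps Alice's PC copy, takes Bob's newer PDA edit, and adds Carol
```

After a sync, both devices would be written back from the merged store, which is why the cradle (or a wireless link) needs only a brief connection rather than a continuous one.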
Hence, two wireless technologies—Infrared Data Association (IrDA) and Bluetooth—are likely
to play an increasing role in the synchronization task in the future. Since its formation in June 1993,
the IrDA has been working to establish an open standard for short-range infrared data communications. The IrDA chose to base its initial standards on a 115 Kb/s UART-based physical layer that had
been developed by Hewlett-Packard, and an HDLC-based Link Access Protocol (IrLAP) originally
proposed by IBM. It is a point-to-point, narrow-angle (30° cone) data transmission standard designed
to operate over distances of up to 1 meter at speeds between 9.6 Kb/s and 16 Mb/s. IrDA is
commonly used in the mobile computing arena for establishing a dial-up connection to the Internet
between a portable computer and a mobile phone. This standard also specifies the IrLAN protocol
for connecting an IrDA-enabled device to a wired network. Although the worldwide installed base
reached more than 50 million units, many consider IrDA to have been a failure. The manner in which
many manufacturers implemented the standard resulted in numerous incompatible “flavors” of IrDA.
In addition, software support was poor. As a result, IrDA is difficult to use and has never
worked as well as it was intended—which is why so many are hoping that the Bluetooth initiative,
started in mid-1998, will fare better.
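Because IrLAP is HDLC-derived, its low-speed framing relies on the classic byte-stuffing idea: frames are bracketed by flag bytes, and any payload byte that collides with a flag is escaped. A sketch of that mechanism follows; the delimiter and escape values echo the asynchronous IrDA wrapper, but treat the details as illustrative rather than a spec-accurate implementation.

```python
# HDLC-style byte stuffing as used in IrLAP's asynchronous framing.
# Reserved byte values are escaped inside the payload so the receiver
# can always find the frame boundaries unambiguously.

BOF, EOF, ESC = 0xC0, 0xC1, 0x7D   # begin-of-frame, end-of-frame, escape

def wrap(payload: bytes) -> bytes:
    """Escape reserved bytes, then bracket the payload with flag bytes."""
    out = bytearray([BOF])
    for b in payload:
        if b in (BOF, EOF, ESC):
            out += bytes([ESC, b ^ 0x20])  # escape, then flip bit 5
        else:
            out.append(b)
    out.append(EOF)
    return bytes(out)

def unwrap(frame: bytes) -> bytes:
    """Reverse of wrap(); assumes a single well-formed frame."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            out.append(body[i + 1] ^ 0x20)  # undo the escape
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)

msg = bytes([0x01, 0xC0, 0x7D, 0x42])  # payload containing reserved bytes
assert unwrap(wrap(msg)) == msg
```

The escaping step is exactly where incompatible vendor "flavors" could creep in: two stacks that disagree on which bytes to escape, or on timing around the flags, will each see the other's frames as corrupt.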