Mac OS X Maximum Security

Mac OS X Maximum Security
Copyright © 2003 by Sams Publishing
All rights reserved. No part of this book shall be reproduced, stored in a retrieval system, or transmitted
by any means, electronic, mechanical, photocopying, recording, or otherwise, without written
permission from the publisher. No patent liability is assumed with respect to the use of the information
contained herein. Although every precaution has been taken in the preparation of this book, the publisher
and author assume no responsibility for errors or omissions. Nor is any liability assumed for damages
resulting from the use of the information contained herein.
Library of Congress Catalog Card Number: 2001098212
Printed in the United States of America
First Printing: May 2003
06 05 04 03 4 3 2 1
All terms mentioned in this book that are known to be trademarks or service marks have been
appropriately capitalized. Sams Publishing cannot attest to the accuracy of this information. Use of a
term in this book should not be regarded as affecting the validity of any trademark or service mark.
Warning and Disclaimer
Every effort has been made to make this book as complete and as accurate as possible, but no warranty
or fitness is implied. The information provided is on an "as is" basis. The author(s) and the publisher
shall have neither liability nor responsibility to any person or entity with respect to any loss or damages
arising from the information contained in this book.
Bulk Sales
Sams Publishing offers excellent discounts on this book when ordered in quantity for bulk purchases or
special sales. For more information, please contact:
U.S. Corporate and Government Sales
[email protected]
For sales outside of the U.S., please contact:
International Sales
[email protected]
Acquisitions Editor
Shelley Johnston
Development Editor
Damon Jordan
Managing Editor
Charlotte Clapp
Project Editor
Elizabeth Finney
Copy Editor
Margo Catts
Ken Johnson
Eileen Dennie
Technical Editor
Jessica Chapel
Michael Kirkpatrick
Brian Tiemann
Team Coordinator
Vanessa Evans
Gary Adair
Page Layout
Kelly Maish
This book is dedicated to Famotidine and Ibuprofen. Although they are hardly ever mentioned by
authors, they are responsible for the successful and painless completion of many books, including Mac
OS X Maximum Security.
About the Authors
John Ray is an award-winning developer and technology consultant with more than 17 years of
programming and network administration experience. He has worked on projects for the FCC, The Ohio
State University, Xerox, and the State of Florida, as well as serving as IT Director for a Columbus, Ohio–
based design and application development company. John currently serves as Senior System Developer/
Engineer for The Ohio State University Extension and provides network security and intrusion detection
services for clients across the state and country. His first experience in security was an experimental
attempt to crack a major telecom company. Although he was successful, the resulting attention from
individuals in trench coats made him swear off working on the "wrong side" of the keyboard forever.
John has written or contributed to more than 12 titles currently in print, including Mac OS X Unleashed
and Maximum Linux Security.
Dr. William Ray is a mathematician turned computer scientist turned biophysicist who has gravitated to
the field of bioinformatics for its interesting synergy of logic, hard science, and human-computer interface issues. A longtime Macintosh and Unix enthusiast, Will has owned Macs since 1985, and has
worked with Unix since 1987. Prior to switching his professional focus to the biological sciences, Will
spent five years as a Unix programmer developing experimental interfaces to online database systems.
He left this position when his desktop workstation was cracked, then used to attack other businesses'
computers. The incompetence of his employer's system administrators resulted in his being accused of
perpetrating the attacks, and a series of visits from the men in trenchcoats, nice suits, and dark glasses
for him as well. As a result, Will has developed an enduring disgust for employers, system administrators, and users who don't take system security, and their responsibilities with respect to it, seriously.
Shortly after migrating to biophysics, Will developed a Macintosh and Unix-based computational
biology/graphics laboratory and training center for The Ohio State University's College of Biological
Sciences. At the facility, which he managed for five years, Will introduced hundreds of students and
faculty to Unix, and provided training and assistance in the development of productive computing skills
on the paired Macintosh and Unix platforms.
Will is currently an Assistant Professor of Pediatrics at the Columbus Children's Research Institute,
Children's Hospital in Columbus, Ohio, and the Department of Pediatrics, The Ohio State University,
where he is studying tools that work at the interface between humans, computers, and information, and
working to build a core computational research and training facility for his institute.
Contributing Author
Joan Ray is a Unix system administrator and Webmaster for the College of Biological Sciences at The
Ohio State University. Joan has a degree in French from OSU, and is working toward additional degrees
in Japanese and Geology.
Exposure to Apple's Power Macintosh computers at SIGGRAPH '93 transformed Joan from an
unenthusiastic workplace-user of DOS to a devoted Macintosh hobbyist. In 1997, when her husband left
the college's computing facility to concentrate on his doctoral studies, Joan decided to apply to manage
the facility. To her surprise, the interview committee hired her as the new administrator, and Joan began
her training as a Unix system administrator. With her husband as the trainer, it was a rather intensive
training period. There was no rest, even at home.
Now, when she is not helping write computing books, Joan is administering a cluster of SGI and Sun
Unix workstations and servers, helping users with Unix, Classic Mac OS, and Mac OS X questions, providing training, and serving as college Webmaster.
Many thanks to the helpful people at Sams Publishing who made this book possible, and who helped to
ensure the quality and accuracy of the text. Our assorted editors, Shelley Johnston, Damon Jordan, Brian
Tiemann, Elizabeth Finney, and Margo Catts have been instrumental in producing an accurate text,
accessible to a wide range of Mac users with varying levels of security experience.
Special thanks are also due to The Ohio State University's Network Security Group, and particularly
Steve Romig and Mowgli Assor, for their ongoing development and promotion of best practices for
keeping Unix secure.
We Want to Hear from You!
As the reader of this book, you are our most important critic and commentator. We value your opinion
and want to know what we're doing right, what we could do better, what areas you'd like to see us
publish in, and any other words of wisdom you're willing to pass our way.
You can email or write me directly to let me know what you did or didn't like about this book—as well
as what we can do to make our books stronger.
Please note that I cannot help you with technical problems related to the topic of this book, and that due
to the high volume of mail I receive, I might not be able to reply to every message.
When you write, please be sure to include this book's title and author as well as your name and phone or
email address. I will carefully review your comments and share them with the author and editors who
worked on the book.
[email protected]
Mark Taber
Associate Publisher
Sams Publishing
201 West 103rd Street
Indianapolis, IN 46290 USA
Reader Services
For more information about this book or others from Sams Publishing, visit our Web site at www. Type the ISBN (excluding hyphens) or the title of the book in the Search box to
find the book you're looking for.
Computer security—who would ever have thought that Macintosh users would have to worry about
computer security? Macs were the computer for "the rest of us"—for the folks who didn't want to have
to read complicated manuals, learn complicated commands, or worry about complicated technical
subjects. Apple promised us computers that would get out of our way and let us do our jobs, enjoy our
hobbies, or do whatever else we wanted.
For years Apple delivered. For years, Macs were the easiest machines to use. From a security standpoint,
they might as well have been toaster ovens: They didn't have a shred of security built in, and didn't need
it either, because there wasn't a thing you could do to compromise a toaster oven. But we, the users,
weren't satisfied. We didn't want toaster ovens. We wanted more: more power, more functionality, more
accessibility, more software. We heard industry buzzwords like "preemptive multitasking" and
"protected virtual memory," and we wanted our Macs to have these nifty new features. Industry pundits
and the media made fun of Macs because of their "backwards" OS. Worse, after that other big OS
manufacturer finally figured out that users wanted mice and graphical user interfaces, they also started
working on adding other advanced OS features to their systems. We heard the taunts and shouted for
Apple to give us more. How dare that other OS vendor make a system that could legitimately claim to be
"almost as good as a Mac"? Worse, how could their users actually get to enjoy features that were more
advanced than what we had on our Macs?
Apple listened, better than some of us hoped, better than many of us expected. Now we, the users, have
to live with the consequences of getting what we've asked for. It's most definitely not all bad. We've
once again got, hands down, the best OS around, and we've got so much more power and potential
available that it will be a long time before we should need to think about another major revision of the
OS. But, as they say in the movies, with great power comes great responsibility. The price of the modern
operating system features that were requested, and that Apple has provided, is that these features are
much more easily abused and misused than the previous versions of Mac OS, and must be more
carefully defended. As citizens of the modern globally connected Internet, we've a responsibility to
prevent that abuse and misuse of our machines, both for our own protection and for the protection of our
networked friends and neighbors.
The good news is that Apple seems to see the wisdom in following a time-tested security model
borrowed from the same BSD/Unix source as the underpinnings of Mac OS X. Unlike other OS vendors
who loudly shout "You are secure!" to their users while selling them products with intentionally
designed-in security holes, Apple appears to be encouraging you to make your computer secure, and to
be putting the tools and information in your hands to test, verify, and maintain that security. If you do
nothing more than install Apple-recommended software updates, and stay away from certain
troublesome applications, you'll probably be more secure than the vast majority of computer users. You
might not enjoy your previous toaster oven–like invulnerability, but so far, Apple's doing the right
things, the right way. In this environment, the job of this book is to introduce you to security topics that
Apple can't conveniently cover (such as how to mitigate the additional threat that simply running a more
powerful operating system brings to the machine), and to teach you as much as possible about the
computing and security culture you've suddenly been thrust into as a (willing or unwilling) Unix user.
Many of you reading this may not initially think that there could be anything remotely interesting about
keeping your computer secure, and are probably reading this hoping for a collection of "do this, click
that, and you're secure" recipes. Unfortunately, it's not that easy. It now takes a bit more work to protect
your computer and network so that you can use them and so that others can't use them for mischief. The
intent of this book is to make it as easy as possible for you, and give you every possible bit of recipe-like
help, but Unix security requires a certain way of thinking, which isn't something you can approach as
you would a cookbook. Wrapped around the recipes provided here is the much more important
information that will let you see how the vulnerability came about and why the recipe solves the
problem. What you should really focus on is developing an understanding of the fundamental design flaws and other problems that allow vulnerabilities to exist in the first place. By understanding these, you
will also be able to see where similar undiscovered problems might occur in other software or OS
features, and it is this understanding and insight that will allow you to make your computing
environment secure.
Sometimes the tone may seem pedantic, but this is because we really do want you to learn to "Think
Secure." Too many computer users either take their OS vendor's hollow promises of security at face value or dismiss security as a topic not worth worrying about. You, however, are Mac users—you're better than that.
A large part of the satisfaction of writing this book is the knowledge that during the daily bouts of
cursing filtering from computing staff offices regarding idiot computer users who refuse to take security
seriously, you aren't going to be one of the ones they're cursing about.
This book is divided into four conceptual sections. The first three cover the concept of computing
security from different angles, and the fourth outlines tools and principles that broadly apply across the
scope of computing security. The first section focuses on security principles, philosophy, and the basics
that you need to know to develop the skill of thinking secure. In this section you can learn everything
that you actually need to know about security—essentially develop the instinct to know when to be
worried, and about what, and have the sense to act on your concerns. The second section covers the
basic types of attacks and systematic security flaws that are possible. Most of these attacks or flaws are
applicable to a great number of pieces of software and areas of the operating system. In this section,
you'll fill in the gaps in your understanding, in case you aren't naturally paranoid enough to see these
after reading the first section. The third section of the book addresses specific applications and the
vulnerabilities you'll find in them. Although this book covers those areas and applications that are the
source of the most significant flaws, these should be considered to be representative of general security
issues with all similar applications. The discussion of specific application vulnerabilities in the final
section should serve to further reinforce your understanding of the ways that the security principles and
philosophy outlined in the first section apply to the types of attacks and flaws in the second and result in
the variety of vulnerabilities seen in the third. The final section covers computing security tools, both in
the specific sense of certain tools that can be broadly applied across the application domains that have
been previously discussed, and in the general sense of the types of tools and capabilities that you should
be searching for when trying to find good solutions to security problems.
If you've a mind like a steel trap, and a good technical grounding, the first and last sections may be all
the book you need. If you're one of us mere mortals, it may take a bit more time for the import of the
implications from the first section to sink in. That's what the second and third sections are for.
In another fashion, you may think of the first section as laying a groundwork for the understanding of
computing security. The second section segments this ground longitudinally into types of attacks and
flaws, and the third overlays this with a grid of applications, highlighting the flaws that exist in each.
The fourth section then provides both specific tools and an overview of the general field of broad-application security tools that you can use to blanket large areas of this groundwork with various layers
of protections. As new types of flaws are discovered and new attacks are invented, and as you consider
applications other than what we've covered here, you'll need to further subdivide and extend the
groundwork, but the skills you've developed learning to think about the issues covered here should allow
you to do this easily.
If you're a creative problem-solving type, or like mind teaser-type puzzles, you might even find that this
can be fun. It's as competitive as any multiplayer strategy game, and by virtue of the networked nature of
many security issues, it pits you against a nearly unlimited arena of competitors. As long as you can
balance the need to take it quite seriously with not letting it start to feel like work, fixing computer
security can be a quite gratifying pastime or occupation.
If your computer security has already been compromised, stay calm and proceed directly to Chapter 20,
"Disaster Planning and Recovery," and also swing by the security section on www.macosxunleashed.com with your specific security issues. We want you to learn to keep your computer secure, but the
assumptions of the rest of the book are that your computer is starting from an uncompromised state. If
your security has been breached, you need to repair that breach now and repel the intrusion. After you
have things under control, then you can come back and learn how to keep them under control.
Finally, please realize while you read this that there is no such thing as a complete book regarding
computer security. Because of the breakneck speed of the battle between the hoodlums who would do
your computer harm, and computing security hobbyists and professionals, this text, like every other
computing security book, will have outdated information in it before it even hits the bookstore shelves.
Likewise, there will be new attacks and vulnerabilities that we haven't heard about that are in active play
on the Internet before you've finished reading this introduction.
A number of security topics have also, undoubtedly, received less coverage than some readers may think they deserve, whether through page-count concerns, perceived audience interest, or simply the limits of our experience as authors.
We've tried our best in writing this to provide you with the tools and resources to intelligently meet these
new threats and to independently discover and master topics that we might have missed. We hope that
we've provided a solid enough foundation for you to learn how to face new areas without going out and
buying yet another (inevitably outdated) book. To assist you further with this, we are creating a security
section for the website, and we will populate this section with errata to this
book, pointers to new and pressing information that we discover, and any additional topics that we find
need more thorough coverage. Think of this site as a living appendix to this book. If you find new
threats, new techniques, or new security topics that you would like to see covered, please consider
submitting the suggestion to the security forum, so that we can better serve you and other readers who might be interested in the same topic.
Best of luck, and compute safely!
John, Will, and Joan Ray
Computing security is one of the topics about which I cannot write without some of my
severe distaste for certain actions and mindsets coming through in the writing. You may be
surprised to note that I hold those who would invade your system and do it harm to be some
of the most contemptible scum of the computing world, but I can simultaneously admire the
cleverness with which some of the programs are written and some of the methods are
devised. More surprising to some will be the fact that I hold the people who knowingly write
software that facilitates the creation and propagation of such attacks, and who persistently
and insistently continue to produce and sell this dreck, even in the face of the ongoing havoc
and devastation that it creates, to be beneath contempt. Many of the most serious security
threats and compromises that you will meet are unfortunately enabled by the intentional acts
of commercial software vendors, rather than by the clever discovery of flaws in a system's
infrastructure. The people and companies who sell you software with such flaws do so with
the full knowledge that the software contains the flaws, and of the potential harm these flaws
might cause. They do so for one motive only: profit. Allowing the flaws to exist allows them
to provide more apparently "convenient" features, with less investment on their part. These
scabs on the festering underbelly of computing act as the drug dealers of the computing
world, pushing their damaging wares on the unsuspecting, claiming it to be safe like candy,
all for the purpose of bolstering their bottom line and fostering further addiction to their
products, all with a complete disregard for the actual safety, security, and productivity of the
end users.
The people and companies that will do this, however, are allowed to continue only because
the mass population of computer users have thus far been kept in the dark regarding the
security costs of the conveniences that come in their software. You have the power to change
this pattern, to say "no, I won't accept software that makes my life 1% simpler, at the cost of
making my computer 100x more likely to be broken into," and to demand a higher class and
quality of software. For you to protect your computers, improve the state of computing
security, and to become responsible network citizens of the global online community,
educating yourself about the consequences of the software you run, and taking responsibility
for its actions, is a step that you must take. Throughout this book, I will do my best to help
you learn to see these consequences, and to convince you that the benefits of being a
responsible, informed, and proactively secure computer user far outweigh the minor
inconveniences that you will incur as a cost of behaving responsibly.
In short, although I feel I can relatively dispassionately describe to you the threats,
consequences, and solutions to a large cross section of computing security issues, I will not
withhold my venom for those whose "professional" actions are designed to facilitate these
threats, and I will not sugarcoat my descriptions of the consequences of running such
"designed insecure" software. In this day and age of concern for national and personal
security, it would be nearly impossible for a dedicated group of terrorists to engineer more
effective defects into our national computing infrastructure than what is being sold to
consumers every day by major software houses. There are evil people out there who would
do you harm, and there are even more viciously evil people out there who would not only
allow you to come to harm, but facilitate harm's ability to find you, to further their own
corporate goals. If you don't want to hear about them, you're reading the wrong book.
Will Ray
Part I: Mac OS X Security Basics: Learning to
Think Secure
1 An Introduction to Mac OS X Security
2 Thinking Secure: Security Philosophy and Physical Concerns
3 People Problems: Users, Intruders, and the World Around Them
Chapter 1. An Introduction to Mac OS X Security
What Is Security?
Traditional Mac OS Versus Mac OS X
Understanding the Threat
PANIC! This can't be happening, not now! Your hands shake so badly you can hardly be sure you've
typed your password correctly, and the clammy cold sweat down your back fights with the rising
temperature of your brow, making you wonder whether the room's suddenly gotten too hot or too cold.
But there it is again—that quick shimmy of the login box, and your Mac OS X machine rejects your
valid login again. Your machine has been connected to the Internet for a while, and you've been putting
off worrying about making certain it's secure. Now it's rejecting your login—something's wrong,
possibly terribly wrong.
Sometimes it's the electronic equivalent of a bump in the night, the creak of that noisy floorboard in the
hallway, when you know that you're home alone. If you're lucky, maybe the cracker you face is timid,
and a shout and a wave of the shotgun are all that are necessary to save your data. Other crackers are less
cautious, and less subtle; you might be about to experience an electronic mugging. Still others you're
unlikely to ever be aware of, unless they choose to trash your system for amusement after they've
explored it and used it for whatever their purposes might be.
Sound a bit melodramatic? Can't quite wrap your head around the scenario, or why you should be
worried? Imagine that your term paper's in there, and it's due tomorrow. Or perhaps it's April 15th, and
you were just finishing up your income taxes. Now your machine is telling you that it doesn't believe
that you exist. Even if rebooting it cures the login symptom, do you know what's happened? Can you be
sure that someone hasn't been digging around in your system, leaving themselves little back doors with
which to get back in? Is that project you've been working on still intact? What about files belonging to
other users of your machine? Welcome to the world of Unix security, or more aptly, insecurity.
Whether it's a login that's suddenly changed, files that are missing, or a credit card that's been maxed out
by mysterious online charges, the evidence that you've been cracked is rarely pleasant, rarely sufficient
to identify and track the perpetrator, and almost never discovered early enough to prevent the damage.
More insidiously, with networked, multitasking, multiuser computers such as Macintoshes running OS
X, a security hole can allow malicious individuals to make almost undetectable connections through
your machine to other, remote systems, using your machine to disguise their path. It's quite possible—in
fact likely—that crackers using your machine as a stepping stone will leave no trace on your machine,
and the only evidence that they've been there will be the damage that they've done to someone else's,
with all the logs pointing back to you.
If you've come to OS X from a traditional Macintosh background, this last issue is one you've never had
to deal with before, and quite probably one that you don't want to think about dealing with now. Many
users of another major commercial operating system don't want to know about or deal with that problem
either. The result is an ongoing series of network security debacles that have all but brought the entire
Internet to a screeching halt several times over the last year alone, and that regularly cost businesses
and consumers billions of dollars in damage and lost productivity. We'd like to think that Macintosh
users are a more thoughtful breed, and that although you may be even less inclined to want to look at the
ugly underbelly of your operating system than those other guys, you're a bit more concerned about
having a machine that's useful and stable, and also more sensitive to the next guy's need for security, too.
This book teaches you about the ways in which your machine can be insecure, and some of the things
that you can do to protect it. Apple's done the best job we've seen of providing a reasonably secure Unix,
straight out of the box, and we'll help out by providing you with some more tools and tips that will be
useful. The most important tool in your security toolbox, however, and the one we most want to help
you develop, is a certain level of paranoia. In computer system management, confidence in your
system's security is the prime requisite for the occurrence of a security breach. The contents of this book
will enable you to develop and maintain an informed level of caution regarding your system's security,
and to explain and defend that caution to others who might not be as concerned. The target of "a secure
system," however, is constantly changing, so you should take what this book contains as a primer in how
to think about system security, not as a final statement of everything necessary to remain secure for all time.
In some places you may think we're suggesting that you be too paranoid, or that we've waxed overly
melodramatic. Perhaps we have, or perhaps it's just not possible to fully appreciate some of the things
we'll talk about unless you've had the privilege of a visit from the men in nice suits and dark glasses
(yes, G-men really do dress the way they're stereotyped). Either way, the worst thing you can do for your
system's security is to let something that's supposed to encourage you to be wary instead convince you
that it's just not worth it. Yes, many of the people who use that other operating system don't worry at all
about their system security or the damage that the lack of it is doing to others, and the FBI rarely if ever
visits them—but the security world is largely convinced that they're all too naive (actually, security
professionals tend to use less polite terms) to understand security, and educating them is too large a
problem. We're firmly convinced, however, that Mac OS users actually care about security, and that they
are willing to put forth the effort to learn some basic security skills. The way you're going to avoid
security headaches isn't by taking a Valium, zoning out, and playing dumb to security issues; you're
going to do it by being an upright and responsible network citizen, and taking proactive responsibility
for your computer's security. The fact that you're reading a book on computer security is evidence that
you already stand a cut above the crowd. Keep up the good work, and let us help direct your learning.
We're here to teach you to be paranoid like a pro, and to respond to what the paranoia tells you in an
intelligent, measured fashion. At the worst, eschew the paranoia and practice the security measures we
outline: They'll keep you mostly safe, most of the time. The paranoia's good for informing you of issues
that haven't been discovered yet, or that we haven't thought to include, but you can get by better than
90% of the users out there, even if you decide that professional paranoia is too difficult. At the best, who
knows: You might discover that you like this stuff. Computer security is the only information-technology field where salaries keep on rising, and jobs keep on being created without a blip, as though
the bust had never happened.
What Is Security?
Among other definitions, Webster's New Universal Unabridged Dictionary defines "secure" as "free
from danger, not exposed to damage, attack, etc." This is a reasonably good definition of secure for the
purposes of computer security. Unfortunately, in computer security, especially with an operating system
as powerful and as complex as Mac OS X, it's arguably impossible to attain a state that is "free from
danger." "Almost free from danger" may be attained, but only almost, not absolutely. It turns out that for
most computer security issues, there is a sliding scale between usability and security. The closer you get
to making your machine absolutely secure, the less usable it becomes, and the more usable it is, the less
secure it will be. If you disconnect it from the network, it can't be attacked via the Internet. Likewise, if
you disable passworded logins, you don't need to remember your password to use the machine, but
nobody else needs one to access it either. Of course, if your machine is insecure and someone takes
advantage of that to break in and do damage, your machine loses usability as well.
Security for your machine therefore is an ongoing series of trade-offs between making your machine
usable enough to get done what you want to do, and secure enough that it retains its usability. Because
users in different situations have varying needs for stability and usability, there is no single "best"
answer for how you approach system security. You need to evaluate your own needs and make your
own decisions regarding what is "secure enough" for your computer.
Regardless of the level of security you decide on as appropriate for your own use, be aware that if your
machine is connected to a network, even if only by a dial-up connection, you have responsibilities and
security needs with respect to the rest of the computer-using world as well. Even if you can tolerate
extreme insecurities on your machine for the sake of the additional convenience it brings in usability, a
good network citizen does not allow his machine to be used as a stepping stone for attacks on other
computer users. Your responsibilities with regards to other computer users are less flexible than your
responsibilities with respect to your own use. Administrators of traditional Unix systems have
historically been thought of as highly respectable network citizens, because they have a history of
policing their own, and of maintaining a high standard of concern for the well-being of other network
users. Users and administrators of some other operating systems, including some nontraditional Unix
variants, have, on the other hand, become thought of as generally uneducated boors and scofflaws in the
network community. This is because of their general lack of concern for the damage that their unsecured
systems cause to other computers around the world. We hope to see Mac OS X users and administrators
welcomed into the fold of respectable, responsible Unix users, and hope that we can do our part in
helping you to understand how your decisions affect other members of the network community.
Traditional Mac OS Versus Mac OS X
If you've come to Mac OS X from traditional Mac OS, you might be wondering what all the fuss is
about security. Your machine has never been broken into, and you've barely even had to worry about
viruses. Over the years, various groups have hosted "Crack-a-Mac" contests, and even with tens of
thousands of dollars in prize money at stake, and having publicly and vociferously thumbed their noses
at the cracker community, the target Macintoshes mostly survived intact. What could you, with no
incentive to crackers, and no reason for them to single your machine out of the crowd, possibly have to
worry about? Now that you're running Mac OS X, plenty!
Mac OS X may have some of the same user interface fundamentals as traditional Mac OS, and the skills
you use to get around and make use of software may translate between the two, but fundamentally,
traditional Mac OS and Mac OS X are very different operating systems.
One of the biggest differences is that as interesting and useful as traditional Mac OS is, there just isn't
much to it; it is a monolithic system. A single piece of software provides almost all the functionality that
you, the user, experience as the operating system. This might appear to be a security impediment for
traditional Mac OS. If that single piece of software is compromised, the entire system is compromised.
For this reason, some security experts argue that traditional Mac OS is actually a very insecure operating
system: It has almost no real protections, and if you managed to break into it, you'd have complete
control over all it had to offer. Thankfully, traditional Mac OS is insecure like a brick. A brick has no
network security features, and if you could manage to access it remotely, it wouldn't put up a fight, but
generally, a brick can't be accessed remotely. Traditional Mac OS presented only a few opportunities for
a malicious individual to do harm. If they couldn't get to the keyboard, and you weren't running software
that explicitly allowed remote control of your machine, Macintosh, or brick, all that crackers were going
to manage to do was beat their heads against the wall.
There are larger issues, however, with an operating system such as Mac OS X in which the OS is made
up of many small, cooperating pieces of software. Instead of a single avenue of attack against a
monolithic OS, Mac OS X (and Unix in general) provides a plethora of attack opportunities, as each
independent program must be guarded against attack. More critically, to function together, these small
programs need to be able to communicate with each other. Each line of communication is a point at
which an attacker can possibly insert false information, co-opting the behavior of some fragment of the
system into believing that it's performing a task requested by its proper peer, when instead it's acting on
behalf of an attacker.
Compounding the problem is the fact that Unix is an inherently multiuser, multitasking system. Because
it's designed to allow multiple users to use the machine at the same time, and even requires that multiple
virtual users operate simultaneously just in the running of the bare OS, it's inherently less obvious when
something that's not under your control is happening to your machine.
Despite our wanting to instill in you a sense that you have to take greater precautions with your new OS,
historically speaking, Unix-based operating systems have been relatively secure, at least by comparison
with some other operating systems you might be familiar with. Because security flaws in all software arise from the same general causes—human malice or human error—you might expect all operating systems to experience security issues with similar frequency.
To a large extent, the trend against the exploitation of major security holes in Unix is due to a history of
Unix administrators and users taking a relatively inflexible stance toward security issues. If a security
flaw is discovered in a software feature, that flaw is examined, probed, explained, and fixed, just as quickly as the Unix hacking community can manage.
Crackers, not hackers: There seems to be a popular misconception that the term hacker
means someone who breaks into computers. This hasn't always been the case, and annoys the
hackers out there who do not break into computers. To many hackers, the term hacker means
a person who hacks on code to make it do new and different things—frequently with little
regard to corporate-standard good programming, but also frequently with a Rube Goldberg-like, elegant-but-twisted logic. A decent hacker-approved definition is available from the Jargon File. (The Jargon File has moved recently from its longstanding host, and we can't be sure its current URL will work for long, either. If Jargon File entries can't be found there, please use Sherlock, Google, or your favorite search engine to track down a copy. The Jargon File is a wonderful reference to the way that large segments of the hackish crowd think and speak.)
Hackers, to those who don't break into computers, are some of the true programming wizards
—the guys who can make a computer do almost anything. These are people you want on
your side, and dirtying their good name by associating them with the average scum that try to
break into your machine isn't a good way to accomplish this.
Though crackers often seem to describe themselves as hackers, most true hackers consider
them to be a separate and lower form of life.
So, to keep the real hackers happy, we refer to the people who compromise your machine's
security as crackers in this book. We hope you follow our lead in your interactions with the
programmers around you.
One of the primary reasons this has been possible is that the majority of the software that made the Unix
networked community work was developed in an open-source fashion, where nobody in particular
owned the code, and everybody had access to it. This community-based development and testing is the
result of a culture of sharing and cooperation that has existed in the Unix community from the start. In
this culture, nobody stands to profit from the existence of errors, and anybody who might be harmed can
take the source, find the error, and publish a fix. It's usually considered a moral imperative that
recipients of such open software will increase the value of what they have received by improving it, and
returning these improvements to the community, whenever they are able. Unix users, historically, have
done very well by working together, and looking out for each other.
With commercial software and operating systems, users don't have this security advantage. In some
situations with other operating systems, security flaws that are product features are maintained for years,
despite knowledge of the flaw and the required fix. For example, the feature of automatically opening
and executing attachments in an email client seems attractive to certain vendors, probably because they
believe that they can sell more copies of an "easy-to-use email client"—which is also capable of wiping
users' computers out if they receive the wrong email—than they can of a more secure email client that
requires users to think for themselves. A program that automatically executed unknown code on a Unix
machine would have historically been laughed out of existence. To those who are conscious of and
concerned about security, the very presence of such a feature is an unacceptable security risk.
Unfortunately for the security outlook for OS X, mainstream commercial software is a necessity for
creating the seamless, convenient interface and software collection that Mac users have come to expect
over the years. Some of this software comes from the same companies that take the inexcusable position
of believing their users are so gullible as to buy blatantly flawed software because it's "easy." Protecting
your machine in the face of this is not going to be as easy as protecting older Mac OS versions where the
flaws they could introduce were not as critical, and it is not going to be as easy as protecting traditional
Unix installations where commercial software is almost unheard of. We're delighted to see that Apple
has made the core of OS X available as the open-source Darwin project, but to maintain the high level of
ongoing security that has come to be expected from Unix, you, the user, are going to have to be as
steadfast in your demand for secure software as traditional Unix users have been. If a product is simply a
bad risk, for your machine, your users, or your network neighbors, tell the manufacturer, tell the world,
and refuse to buy it. Because you don't have access to the source code, your caution and good sense in
choosing software will be one of your machine's first lines of protection—and your voice online,
speaking as one with the rest of your new network neighbors and peers, will be your best shield against
and best redress toward companies that display a malicious lack of concern for your security.
Understanding the Threat
"Understanding the threat" is the subject of a large portion of the rest of this book—understanding it, and
understanding how you can mitigate or respond to it. We'll cover a large number of known threats,
including both current ones (so that you can fix them right away), and threats of historical significance
(so that you can better grasp where typical vulnerabilities lie). Understanding the threat, however, isn't
simply understanding a particular problem that exists with a particular piece of software, or
understanding a particular vulnerability in a subsystem of Mac OS X.
Understanding the threat involves realizing that it's not about specific problems and specific security
holes; it's about the fact that you now live in a world where your computer isn't limited to just "doing
what you tell it." Your Mac OS X computer is running dozens if not hundreds of programs
simultaneously, from the moment it starts up until the moment it shuts down. A flaw in any one of those
can allow someone else to start making it do what he tells it, as well as what it's apparently doing for
you. Apple could fix every current threat discussed in this book, and the threat would remain because
there are malicious people, who do malicious things. So long as you use a sophisticated system in which
portions are capable of acting without your permission, there will be assorted security vulnerabilities that
you will need to address, and responsibility that you will need to assume. In a sense, yes, this means that
you and your choices are a part of the threat that must be understood. This is true in more ways than one.
We, as users, are often our own worst enemies when it comes to security: Our desire for more features
and more conveniences from our software and computers is often in direct opposition to what will keep
our computers secure. Unless we're willing to live with an entirely Spartan and uninteresting computing
environment (and even this would never completely solve the problem), the best we can do is understand
where we introduce weaknesses by the systems and features we choose, and how we can responsibly
minimize the threats we introduce.
Briefly, threats to the ongoing usefulness of your system and software can be broken down into a few
major classifications, each of which provides a number of different avenues for malicious intent. These
classifications are discussed in a number of contexts and in considerably greater detail in the remainder
of this book.
Theft of your data. Malicious network encroachments, poorly designed software, poorly
conceived algorithms, simple program bugs, or the actual physical theft of hardware from your
system, as well as a number of other avenues of attack, can all allow unintended access to your
information. Theft of data is one of the attacks that's most discussed, and possibly most feared
and/or romanticized, but it actually accounts for only a small percentage of security breaches. It's
definitely the most damaging, whether it involves corporate espionage or the theft of your credit
card number, and putting the pieces back together after you've had your patent stolen or your
credit destroyed is not going to be fun.
Theft of your resources. It's quite simple for an unauthorized individual to slip one more process
onto a multitasking machine in such a way that the additional load is never noticed. This type of
attack is often accompanied by theft of data. The Nimda and Sircam network worms that caused uncounted millions of dollars of damage were resource thieves; some worms of this generation were never even written to disk on the compromised machine—they invaded the computer's memory and began running, leaving no physical trace of their presence—and the load they place on the machine is typically insignificant as well. Worms like these have the goal of stealing the resources of your machine and using them to attack other remote machines—a pattern that is typical of theft-of-resource attacks.
might be minimal, if someone's used your machine to attack someone else, and it's tracked back
to you, you can bet that you will be spending a considerable time with some law-enforcement
officers trying to explain and prove your lack of involvement. If the offense is serious enough—
say someone's used your system to send threatening email—you'll find that the men with nice
suits and dark glasses were not issued senses of humor.
Denial of access to machine resources. Lacking any way to do anything more interesting, many
cracker-wannabees engage in the cracking equivalent of smashing your mailbox—repeatedly.
Because of the inherently networked and collaborative nature of Unix, it's almost impossible to
completely defend against denial-of-service attacks. They can be effected simply by repeated
connections to open services, and if you want remote access to those services, there's no way for
your machine to authenticate valid users without accepting all connections, and then
disconnecting the invalid ones. Occasionally a denial-of-service attack can lead to something
more serious than the consumption of resources, but usually the danger is largely a significant
inconvenience rather than an avenue to allow theft-of-resource or theft-of-data attacks. Still,
consumption of resources can be bad enough, if it's targeted at the right (wrong) places: The mid-2002 attacks on the root name servers that nearly crippled the entire Internet were denial-of-service attacks, just directed at a very important service.
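Because a service must accept a connection before it can decide whether the client is legitimate, your practical defenses are monitoring and throttling. As a rough sketch (the log format and the threshold of 100 are invented for illustration), here is a one-liner that tallies connections per source address and flags heavy hitters:

```shell
#!/bin/sh
# A hypothetical connection log: one line per connection, with the client
# address in the first field. We fabricate one here for illustration.
for i in $(seq 1 150); do echo "10.0.0.1 connect"; done >  connections.log
echo "10.0.0.2 connect"                                 >> connections.log

# Tally connections per source and flag any source with more than 100
# entries -- repeated connections from a single host are the signature
# of a crude denial-of-service or probing attempt.
awk '{count[$1]++}
     END {for (ip in count) if (count[ip] > 100) print ip, count[ip]}' connections.log
# prints: 10.0.0.1 150
```

On a real system you would feed the `awk` stage from whatever your services actually log, and follow up by blocking the offending addresses at your firewall or router.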
Because of the enormous complexity of modern operating systems such as OS X, the problems that
enable each of these types of attacks upon your system's security interact with each other and compound
in an unpleasant fashion that is often difficult to predict except in hindsight. For example, there are
instances where two aspects of the operating system use the network to communicate with each other. A
denial-of-service attack on one of them might tie it up to such an extent that an intruder could
masquerade as that service and speak to the other, claiming to be a part of your OS, and thereby
acquiring sufficient data or access from the second service to be able to hijack your system to attack
some third party. Because of these types of interactions, it's insufficient to consider only one aspect of
your system's security. The vast majority of malicious individuals who will try to compromise your
system are not particularly bright; they're just amazingly numerous. There are, however, some quite
smart minds out there with too much time on their hands, and nothing more exciting to do than to
exercise their creativity trying to find the errors in your operating system, your software, and what
you've done to protect it.
In many senses, computer security is a frame of mind—one that takes a conscious effort to attain and
maintain. It involves deciding to do the right thing, regardless of whether it's the most convenient thing.
If you want to go about it professionally, it often means looking for the worst possible combination of
circumstances, and planning for the worst possible outcomes. This often results in security professionals
being thought of as wildly pessimistic and sometimes draconian in their outlook, but you can be certain
that if you can think of something that can be done to damage your system, someone else can think of it
as well. The worst thing that can possibly happen probably won't happen every day, but not planning for
it is a sure way to not be prepared for it if it does.
Regardless of your thoughts on security, and even if you decide that this book is not for you
and put it back on the bookstore shelf after scanning through this chapter, please keep up
with Apple software updates and other security updates to your software, please turn off any
unnecessary network services and replace those you do run with secure versions when
possible, and please restrict the access to and permissions of services that you do run, as
much as possible. If you do these things, your system will be safe from probably better than
90% of the security vulnerabilities and exposures that come along.
Five hundred more pages of this book—or any other book—will serve only to reiterate,
emphasize, and expand upon these points, and to give you the insight into the mind of the
cracker that will get you from 90% to as close to 100% as you're willing to push.
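On Mac OS X itself, each of those habits has a command-line starting point. The sketch below is illustrative rather than prescriptive: `softwareupdate` is Apple's real command-line update client, but the services present and the exact `netstat` output will vary with your OS version, and the xinetd layout shown is the Mac OS X 10.2 arrangement.

```shell
#!/bin/sh
# A minimal security-hygiene pass, one command per habit.

# 1. Check for pending Apple software updates (skipped silently on
#    systems without the softwareupdate client).
command -v softwareupdate >/dev/null && softwareupdate --list || true

# 2. List network services currently listening for connections; anything
#    here that you don't recognize deserves investigation.
command -v netstat >/dev/null && netstat -an | grep LISTEN || true

# 3. Review which xinetd-managed services are enabled: on Mac OS X 10.2
#    a legacy network service is off unless its file in /etc/xinetd.d
#    says "disable = no", so these files list what's switched on.
grep -l 'disable *= *no' /etc/xinetd.d/* 2>/dev/null || true
```

None of these commands changes anything by itself; they just tell you what state your machine is in, which is the prerequisite to tightening it.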
In this chapter we've tried to convince you that computer security is not something you can ignore.
"With great power comes great responsibility" may have become a cliché, but with powerful operating
systems, the words were never truer. We hope you're beginning to understand that as a user or
administrator of a Mac OS X machine, it's your responsibility to keep that machine secure, both for your
own security and for the good of all of your network neighbors. We'll spend the rest of this book
working to show you that it's also something that you can accomplish without driving yourself insane.
Chapter 2. Thinking Secure: Security Philosophy and Physical Concerns
Physical System Vulnerabilities
Server Location and Physical Access
Server and Facility Location
Physical Access to the Facility
Computer Use Policies
Physical Security Devices
Network Considerations
Although this book concentrates primarily on security issues at a software level, physical security is still
an important issue, especially if you have or will have a publicly accessible computer or network.
Publicly accessible machines are just as prone to physical attack as they are to remote attack, perhaps even more so. This chapter addresses issues involving machine location and physical access,
physical security devices, physical network considerations, and network hardware. These issues
primarily apply to administrators attempting to secure one or more machines as a job function.
Physical security issues also frequently compound network security threats. Network security often uses
unique machine identifiers as partial security credentials. This works well if it forces an attacker to try to
fake a valid machine's identifying characteristics. Unfortunately, it often tempts the attacker to simply
steal a valid machine from which to launch his attacks. No application of encryption, virtual private
networks (VPNs), or one-time-password tokens can protect your network against illegitimate access by
the guy who's just nabbed your CEO's laptop off the carry-on rack on the plane. According to
Kensington, 57% of network security breaches occur
through stolen computers, so it only makes sense to take physical security at least as seriously as you
take network security.
If you're interested in the security of only your own Macintosh, much of this will be of only cursory
interest to you. Keep in mind, however, that Unix administrators are fairly well paid, and that there aren't
going to be many people out there capable of doing the job of administering a Mac OS X system. A
world of Linux security experts cut their teeth banging on Linux boxes in their basements, and Apple
has just created the opportunity for a world of OS X security experts to find their place in the workplace
as well. If you've the inclination, the clever ideas you bring from thinking about issues like these just
might land you a place in that market.
In addition to securing your system against people breaking it, walking off with bits of it, or blocking
you from using it through physical or electronic means, it's important to address an additional security
issue: the issue of "social" security problems. Users are human beings, and despite the best algorithmic
protection, and the best physical barriers, if your system has users other than yourself, they will find
ways to reduce the utility of your system purely through poor behavior. Unless you encourage them to
do otherwise, and have policies in place to prevent them from becoming disruptive, "poor behavior" can
have a significant impact on the usability of your system. Although no written policy can prevent users
from behaving badly, the lack of a written policy can prevent you from acting to stop it when it happens.
In general, you will find most of the discussion in this chapter to really be a matter of common sense,
but there are a lot of issues to think about, and remembering to think about them all without a list
requires uncommon persistence. We strongly encourage you to consider the issues discussed here and
try to put yourself in the mind of a mischievous or malicious individual. Consider your facility, and how
you would go about trying to access data, disrupt use, or otherwise make inappropriate use of your
hardware. What's the most disruptive thing you can think of doing to your system? If you can think of it,
so can someone else who wants to cause trouble for you. The level to which these possible avenues of
attack should be of concern to you will be entirely dictated by your individual situation and the needs of
your system. Consider the discussion here as raw material from which to make a plan that fits your own
unique needs.
Physical System Vulnerabilities
Many people invest a considerable amount of time and thought into securing their network connections
through encryption, and restricting access to their machines via extravagant password systems, yet
neglect physical security. If an intruder wants the contents of a file on your computer and can't break
into it at the login prompt, Apple's "easy open" G4 cases make the job of simply walking away with
your hard drive awfully easy.
If you're concerned about physical security, be glad you've got an "easy open" Macintosh,
and not an early 1990s Silicon Graphics workstation. Older SGIs were so well designed that
all a hardware thief needed to do was open a door on the front of the machine, flip a release
lever, and the drives would slide out the door like toast out of a (sideways) toaster. Apple's
gone a long way toward making the drives in the Xserve as convenient to steal as well, but you're probably more likely to restrict access to your rackmount servers than to your desktop machines.
Some aren't even so considerate as to steal only your drive, and instead are happy to get the spare
computer along with your data when they grab your G4 by those convenient carry handles and head out
the door.
Networks are vulnerable to physical tampering, and in the worst case can allow someone to collect all
traffic into and out of your machine without your knowledge.
If the problems of keeping your hardware from walking away and keeping your machine from being
vulnerable to network attacks aren't enough, a poorly designed facility, or a poorly designed user policy,
encourages crackers to steal your data through routes such as videotaping user logins to capture
passwords, or engaging in social engineering to convince a valid user to give them access to the system.
It's been said that the best response to the question "How can I secure this system?" is "Put the machine
in a safe, pour the safe full of cement, lock it, drop it in the middle of Hudson Bay, and even then, you
can't be sure." Without going to that extreme, you can do a reasonable job of making access to your
hardware and data inconvenient for the would-be cracker. This chapter tries to give you some ideas of
how you can approach the problems of physically securing your machines and user policies that will
encourage your users to help you keep them secure.
Server Location and Physical Access
If you are planning a publicly accessible computer facility or publicly accessible machine, you should
keep some issues in mind. Obviously, the purpose of your facility dictates the hardware and software
you need to consider. Less obviously, it also influences the security measures that you can and should
plan for. These considerations extend from the software security you put in place, to the space where
your facility is located, and how you will limit users' physical access to your hardware.
Server and Facility Location
The types of public facilities where we might expect to see Macintoshes include schools, libraries, and,
as Mac OS X becomes more widely adopted, research labs. These public facilities range from small,
relatively unadvertised clusters of machines to large, 24-hour facilities.
Regardless of the type of environment you're planning, the most important machine in your facility is
the server machine. This machine is best kept in a location separate from the rest of the facility, perhaps
even on a different floor, or if your network is fast enough, in a different building. Access to the server
room should be granted to as few people as possible. At the very least, it should be on its own key,
separate from the building master.
You may want to consider keeping the server in a room with a pushbutton combination lock, or with
some sort of keycard access. You may want some sort of alarm system for it as well. Because the server
room is not likely to have a person monitoring it, you may also want to consider installing video
surveillance equipment. If you're building a small lab on a shoestring budget, the semblance of video
surveillance equipment may be all that you need; several vendors provide fake security cameras for
those who can't afford the real thing. (See Table 2.1 at the end of this chapter for vendors of both real
and fake video surveillance cameras and time-lapse security VCRs.)
Keep backups of the system in a location different from the server room. At the very least, keep them in
a different room; preferably, keep them in storage at an offsite location.
If you don't keep your server machine separate from the rest of the machines, you run the risk that malicious users can access data (such as the userid and password NetInfo maps for your machines) far more easily, by physically removing storage from the server, than if they were only able to attack the server over a network.
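The underlying point is that file permissions are enforced by a running operating system, not by the disk itself. The sketch below is a stand-in demonstration (a tar archive plays the role of the stolen drive, and the password map is faked): a file locked down to mode 600 is still trivially readable by anyone who holds the raw bytes.

```shell
#!/bin/sh
# Create a "protected" file, as the running OS would see it.
mkdir -p /tmp/stolen-demo
echo 'admin:s3cr3t-hash' > /tmp/stolen-demo/passwd.map
chmod 600 /tmp/stolen-demo/passwd.map    # readable only by its owner

# Image the "disk": a tar archive stands in for the stolen drive ...
tar cf /tmp/stolen-disk.tar -C /tmp stolen-demo

# ... and read the secret straight out of the raw bytes. The mode-600
# protection never comes into play, because no OS is enforcing it.
grep -a 's3cr3t-hash' /tmp/stolen-disk.tar
```

The moral: if the data matters, either the hardware must be physically secure, or the data must be encrypted at rest.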
The best location for a public lab varies, depending on how public it is supposed to be and what space is
available in your building. You may not want to have the lab on the ground floor because the ground
floor may seem too easy to access for the casual, malicious user. On the other hand, your building policy
may restrict general public access to only the ground floor, so that casual visitors aren't given the
opportunity to see and start thinking about what may be in the rest of your building.
Regardless of the location, the room should be large enough to accommodate the number of users you
anticipate having. Make sure users have enough space to comfortably work, and make sure that they're
not encouraged to sit so closely together as to be able to snoop on each other's screens by the
arrangement of your hardware.
No matter what type of computing facility you have, make sure you have adequate or,
preferably, considerably more than adequate air conditioning. After spending money on the
hardware and software, there is little point to letting the equipment fry to death.
Physical Access to the Facility
How access to the public facility is granted depends on how public the facility is. For staff during off
hours, you may want to follow some of the common-sense advice for the server room: Allow access
with a key different from the building's master, or use a pushbutton combination key or keycard access.
Access for general users, though, may depend on how public the facility is. For a small facility, it may
be sufficient to issue keys or codes or access cards to the appropriate individuals. Or you may want to
have users sign in during the day, and not grant access during off hours, or grant access to only a smaller
subset of users.
For a larger, more public facility, you might want to consider some sort of keycard access, especially if
your facility is part of an institution that already issues IDs to the individuals associated with it. For a
particularly public place, such as a public library, perhaps using a photo ID and having the user sign a
login sheet would be appropriate. Perhaps it would be better to limit use of such a facility to only the
library patrons.
No matter how public the computing facility, you may find it useful to install video surveillance
equipment, especially if, for whatever reason, it is not practical to restrict access via any other method.
Although video surveillance equipment will not tell you specifically who is in your facility, it could
potentially discourage some malicious users.
Unless your facility is so small that access is available to only select individuals, you should have the
facility monitored by at least one individual at all times. Of course, this is in part to help with equipment
or software problems. However, having the room monitored by an individual is yet another way to
discourage possible malicious users.
Finally, you may want to consider installing an alarm system of some sort that is armed when the facility
is supposed to be closed.
Table 2.1 at the end of this chapter includes several vendors of lock and alarm systems.
Computer Use Policies
Whether you have a large facility or a small facility, you should have computer use policies in place.
Such policies are useful to administrators and users alike.
Lack of written policies leaves your facility open to poor decision-making in the name of expediency. If
you've just discovered that one of your users is using your machines for trading illegal copies of
software in violation of copyright law, what do you do? Remove her account and hand her over to the
authorities? Delete the files and do nothing else? Warn her never to do it again? If the user is the
daughter of the chairman of your department, does your decision change? If you have a well-thought-out
acceptable-use policy, the decision will be clear, and the action clearly prescribed.
Written policies additionally serve as "recipes" to be followed in times of crisis. In the event of a
security breach of your site, who is in charge? Does the system administrator sitting at the console who
sees a break-in happening have the authority to shut the system down? Or is her head going to be on the
chopping-block in front of users whose "important work" was more important (to them) than the security
of the system at large? Does the apprentice system administrator sitting at the console who observes a
break-in in progress even know what to do in a crisis, and is he going to remember how to do it all
properly? A written policy that provides a recipe to follow in case of an emergency and gives the
administrator-du-jour the authority to act covers both these bases.
Written policies should be in place and available for review by both users and administrators alike.
Policies should detail what users can expect from administrators, what administrators can expect from
users, and yes, even what administrators can expect from administrators. You should have policies
detailing the following:
User account request procedures
Acceptable use guidelines for the facility
Acceptable use guidelines for your network
Response procedures for security problems
User responsibilities
User rights
Administrator rights and responsibilities
Mission statement and budgetary policy
For a facility with individual user accounts, you need a user-account application form that solicits
enough information for you to verify the users' identities and to track them down if you need to find
them while they are away from the system. (Keep in mind that using a user's Social Security number for
identification purposes is against the law in some circumstances, though you will find many corporations
that use a copy of the SSN as an ID number as well.) This form should include as much contact
information as possible. Address and phone number are a must. Possibly require the name and signature
of the user's supervisor if you're running a business facility, or a secondary contact if your environment
is something more like a library. I also find it useful to include some space on the form for
administrative comments and data that I need to have on hand when I create the accounts. Include your
"Users who cause trouble get in trouble" policy statement right on the form, so that there's no question
what they're getting themselves into by applying for an account.
The second thing to keep in mind when creating your system's policies is generating some acceptable
use guidelines for your facility. You need a detailed document that gives guidelines for acceptable and
unacceptable user behavior. You need to evolve this policy over time as you encounter new problems
and situations that need to be documented. At a minimum, your acceptable use policy should detail all
the reasons you can think of that would make you want to take drastic actions such as deleting a user's
account. You'll obviously want this to include things such as conducting illegal activities, but you may
want to include things such as sharing passwords, or reading other users' files without permission as
well. If you can think of any situation in which a user should be denied access for an action, it's best if
you codify that in the rules and adhere to the decision with any users that transgress. You should give a
copy of your acceptable use policy to every user that has an account, and to any user who requests an
account. Make certain that users have a paper copy available, even if you also have an online copy.
You also need to consider what the acceptable use guidelines will be for your network. These guidelines
may or may not be considered part of your facility's acceptable use guidelines. Largely, this depends on
whether users can install their own machines on the network. Requirements that you should consider
include such things as disallowing cracking attempts against the facility, or unauthorized sniffing of
traffic on the network. Additionally, the person responsible for machine security should be granted
physical access to any machines on her network segment. You don't want to put the person in charge of
security emergencies in the position of needing a fireaxe to get to your machine when she's been called
out of bed at 3:00 a.m. to deal with a remote break-in that is being perpetrated by your hardware.
When compiling your network's policies, keep in mind what your response procedure will be for
security problems. In addition to the obvious procedural document ("press this button, shut down that
machine, yank that cable"), you need documented personnel procedures. Who is responsible for acting
in response to security problems? Who has the authority to detail this response? If these are not the same
person at your site, you are in for a nightmare of administration problems. Upper-level management
often wants to assign some "director"-level manager to be the person with the authority to make
decisions in a security crisis. This is often a bad idea; frequently the "director"-type isn't available when
a crisis hits, or isn't even a system administrator. Waiting to find and procure authority for something
like an emergency shutdown can cause extensive damage to user files, and long periods of downtime to
make repairs. Don't allow the separation of the person with the responsibility from the authority to act.
Generally, you want the person with hands closest to the keyboard to have complete authority to act
quickly and decisively. Requiring the person with hands on the keyboard to track down a manager-type
before executing an emergency halt in response to a break-in is very similar to requiring a pilot to get
authorization from the navigator before turning the plane to avoid an imminent crash.
The best possible policy is one that gives the administrator closest to the machines absolute dictatorial
control in an emergency, and requires her to immediately notify all senior (in experience) administrators
available. The procedure for transfer of authority to the most senior administrator (and who this is
should be detailed on paper as well) available should also be detailed: The last thing you need in a real
security crisis is confusion over who is calling the shots.
Your network policies should, of course, include guidelines for user responsibilities. When granted an
account at your facility the user should automatically be expected to assume certain responsibilities.
Among these are responsibilities such as keeping the password secure and cooperating with other users
who need to use the machines. Even though some of these ideas will seem like common sense, you need
to spell them out for your users. Things such as keeping passwords private might not sound like a
serious responsibility, and it's one that many users might be inclined to ignore. However, you need that
policy on a piece of paper because you will eventually have the user who doesn't believe password
loaning is a serious problem. He will lend it to a friend who then uses it to try to crack your root
password. A written policy lets you demonstrate to the irresponsible user just how serious a problem it is.
When your policy addresses user responsibilities, it also must address user rights. Along with the
responsibilities users assume when using your machines, they also get some rights. Certain rights
depend on the purpose of your site, such as the right to put up Web pages if your site's purpose is as a
Web server. Others are rights that you need to define for the smooth running of your site, and tend to run
along the lines of enforced common courtesy—such as the right to equal usage of the system (no tying
up all the machines with one user's processes). Finally there are the legal rights that the users have, like
the right to the privacy of their files and electronic communications. You should detail these in a policy
document as well.
Although users have standard rights and responsibilities, at most places administrators don't have many
rights, and they have many undocumented responsibilities. Take the time to work out some detailed
documents regarding administrators' rights and responsibilities. Include things like "We don't make
house calls," and "If you bought it without asking us if it would work, we reserve the right not to fix it."
Who watches the watchers? Administrators need administration too. Who makes decisions about
granting administrative access? Are there "limited" administrative accounts and who can use them?
What limitations are there on the administrator's authority? All these are decisions better committed to
paper than to memory.
Network policies should also include a mission statement and budgetary section. You'll be hard pressed
to run a facility well if you don't have a written statement of what it's there for. A defined budget allows
you to make intelligent purchasing decisions and to decide when and where you can "splurge" on
improvements. At a minimum a defined budget should include money for any salaries you need to pay,
software licenses your facility needs, and funds for hardware maintenance and any consumables you
may have. You should keep in mind that hardware is in fact a consumable and that machines that are the
cat's pajamas today will be barely viable antiques in three years. A reasonable budget must include
money for replacing machines as they become performance liabilities, rotating them out and putting new
machines in on a regular basis.
Depending on the type of facility you have, you should also consider policies and procedure documents
concerning the following:
Your first login—getting started. Include information on how to log in, responsibilities of the
user, how to set a password, and how to get help. It's okay if you don't detail all their rights here,
but you want a document that both sets their boundaries and provides them the help they need to
get started.
Account creation and deletion procedures. How do you create and delete accounts at your site?
What logging procedures do you use? Where do you put the accounts? Who gets notified? And so
on. Creating accounts isn't a very interesting or difficult task, but it will run more smoothly if
there's a written recipe to be followed.
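Part of that written recipe can even be scripted. The sketch below shows only the logging step; the helper name and log format are our own illustration (not from any particular site's procedure), and the account-creation commands themselves are left as comments.

```shell
#!/bin/sh
# Hypothetical logging step for an account-creation recipe.
# The function name and log format are illustrative assumptions.
log_new_account() {
  user="$1"; sponsor="$2"; logfile="$3"
  # Record who the account is for, who vouched for it, and when.
  printf '%s account=%s sponsor=%s\n' \
    "$(date +%Y-%m-%d)" "$user" "$sponsor" >> "$logfile"
  # The creation itself would follow here (on Mac OS X of this era,
  # via the NetInfo tools), plus home-directory setup -- omitted.
}
```

Run the same script for every account and the "who gets notified, where does it get logged" questions answer themselves.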
Shutdown and downtime notification procedures. What sort of notice do you give your users
when you're about to shut the machines down? How do you go about notifying them? It's
downright rude (though sometimes necessary) to simply halt a machine under a user who's
working at it. You should detail scheduled and unscheduled downtime notification procedures
and do your best to stick to them. Users may not like having to work around a scheduled
shutdown, but they'll like it a lot less if you take a machine down while they're in the process of
doing real work. If you have to shut a machine down without advance notice, have a policy that
states under what circumstances and how this is to be done, and how users are to be
communicated to regarding the event.
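A notification procedure is easiest to stick to when it is a script rather than a memory. This is a minimal sketch; the message wording is an assumption, and the commands that actually broadcast and halt are shown as comments because they require root on a live system.

```shell
#!/bin/sh
# Compose the advance-warning message required by the downtime policy.
warn_message() {
  minutes="$1"; reason="$2"
  printf 'SYSTEM SHUTDOWN in %s minutes: %s' "$minutes" "$reason"
}

# In practice (as root):
#   warn_message 15 "disk replacement" | wall
#   shutdown -h +15 "disk replacement"
```

Note that shutdown(8) repeats its own warnings as the deadline approaches, so the script's job is mainly to make sure the first, early warning always goes out.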
Acceptable passwords. Users are notoriously bad at choosing good passwords. Help them out by
explaining the differences between a good password and a bad password. Make it a policy that
users are held responsible for the security of their passwords, and then actually hold them to it.
You don't need the security problems inherent in having a user who just cannot pay attention to
rules and will not choose and maintain passwords securely. In addition, administrators who are
highly concerned about passwords on other Unix flavors have frequently installed "improved"
password software with features such as password aging, prevention of the use of commonly
guessed password patterns, prevention of reuse of old passwords, and so on. We aren't aware of
replacements of this nature being available for OS X at this time. We also aren't certain of the
merits of software that encourages a user to write down his password because nothing vaguely
recognizable is "secure enough" for the system. Still, if you're interested in the ultimate in
password security and control, this is something to keep an eye out for.
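Absent such replacement software, a site can at least screen obviously weak choices when accounts are created. The function below is a minimal sketch under our own assumed thresholds (an eight-character minimum, at least one letter and one non-letter); it is no substitute for real password aging or dictionary checks.

```shell
#!/bin/sh
# Minimal password screen: length plus mixed character classes.
# The thresholds are illustrative assumptions, not a recommendation.
check_password() {
  pw="$1"
  [ "${#pw}" -lt 8 ] && { echo "too short"; return 1; }
  case "$pw" in
    *[A-Za-z]*) ;;                    # must contain a letter
    *) echo "needs a letter"; return 1 ;;
  esac
  case "$pw" in
    *[!A-Za-z]*) ;;                   # must contain a non-letter
    *) echo "needs a digit or symbol"; return 1 ;;
  esac
  echo "acceptable"
}
```

Even a crude screen like this gives you something concrete to point at when a user claims a rejected password was "good enough."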
Administrative staff responsiveness. Users will want a policy that says that administrators must
jump at their every whim, but administrators will want a policy that gives them more control.
You've no reason to give your users less service than you are capable of, but it often seems that
providing virtually instant service only inspires users to demand actually instant service, and to
take offense when service cannot be provided instantly. Because there are many instances when
you have a real need to delay response to some user issue—such as when you need time to study
the impact that complying with that request may have on other system resources—it's often a
better idea for policies to have a built-in delay. If you get things done faster than policy indicates,
you make your users happy, and if you really need the time to address an issue, you'll have it
available to do the work, instead of spending it answering constant questions about why you
aren't finished.
Software installation requests. As you'll read many times throughout this book, a sure route to
disaster is installing software that you don't know the entire function of. Unfortunately, if you're
supporting a system for users who have divergent software needs, it's almost inevitable that
they'll end up asking for software that you don't know, and that you won't be able to thoroughly
test. It's a tough position to be in, but if you're there to support their needs, and they need the
software, you've little choice but to provide it for them. Avoid rush installs, and consider using a
sacrificial machine for software that's truly untested and that has no track record. Try to strike a
reasonable balance on software requests and handle them in a fair way that still allows you the
ability to make certain that the software is properly installed and isn't a risk to the system. A user
who bullies a system administrator into installing a security hole is not unheard of in the business.
Hardware installation requests. Hardware installation requests can be a nightmare, especially if
it's some specialty piece of hardware with which administrators have no experience, but that the
user has purchased and with which he absolutely needs support. A policy that requires
administrative staff to preapprove any purchase before hardware will be supported is a good idea;
otherwise, your facility's administrators could end up spending time on one user's needs and
neglecting the needs of many others.
Backup procedures. Backing up the system can eat a significant amount of time. Outline a
backup procedure and stick to it. After you have the backup procedure outlined, inform your
users so that they know the safeguards they can expect, as well as the limitations that have been
placed on their data security.
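An outlined procedure is easiest to stick to when it is one script run the same way every night. The sketch below uses `cp -Rp` as a lowest-common-denominator copy with a dated log line; a real site would more likely use `rsync -a` or a dump/restore cycle, and all paths here are placeholders.

```shell
#!/bin/sh
# Copy a tree to a backup location and log the run.
# cp -Rp stands in for whatever your real backup tool is.
backup_tree() {
  src="$1"; dest="$2"; log="$3"
  mkdir -p "$dest" && cp -Rp "$src/." "$dest/" \
    && printf '%s backup of %s ok\n' "$(date +%Y-%m-%d)" "$src" >> "$log"
}
```

The log doubles as the record you show users when they ask what safeguards actually ran last night.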
Restore from backup requests. To avoid the potential problem of your facility's administrators'
work being regularly interrupted to restore lost data for users who accidentally deleted their data,
create a policy on restoring data. Maybe restoring data should be done once a week or only after
5:00 p.m.
Guest access. Occasionally users will request that guests be allowed to use the system
temporarily. How you handle this is entirely up to you and the purpose of your facility. If you're
running a high-security lab for a research project, you probably want to have a "no guests, ever"
policy. On the other hand, if you're running a university teaching lab, there's no real reason that
you shouldn't make guest access available so that students can show friends and family all the fun
things they're doing. Whatever your policy, put it down on paper so your users don't get the
impression that you're arbitrarily deciding whether their requests should be granted.
Notes on Making Policies
When writing policies, state in plain English what the rules are. Don't obfuscate the issue by making
vague references to activities you wish to prohibit; state them plainly. A policy that says "Users will
attempt to keep their passwords secret" is much weaker, and much more difficult to enforce than one
that says "Users will not write down or otherwise record or divulge their passwords."
When making rules, it's crucial that you state the results of not following the rules. A policy that says
"The system administrator on call will immediately inform the lead administrator in the event of a break-in" is almost useless. If you examine the part of the policy "will immediately inform the lead
administrator," you should ask yourself "Or what? What happens if they don't?" A policy with no teeth
is almost useless as a policy.
State acceptable practices in terms of hard and fast rules, not personal judgment. A policy that says
"Users will not attach personal Unix machines to the building network unless the network administrator
judges the machine to be secure" leaves the security of the machine up to conjecture on the part of the
network administrator. A devious user could configure her machine to appear to be secure, thereby
passing the "judgment" test, while in fact it remains insecure. Much better would be wording to the
effect "unless the machine adheres to security guidelines as defined by the network administrator."
If your machine or machines are connected to the Internet, and they are intended for Internet access,
remember that the Internet is the great intellectual equalizer, and one of the truly great examples of
freedom in action. This is kind of a "warm and fuzzy" policy comment—unless you've a good reason to
be draconically restrictive, it's probably a good idea for your policies to stand for open-mindedness and
tolerance. A large part of what makes the Internet a great place is the free exchange of ideas. You will
undoubtedly have users with whom you have deep philosophical disagreements. If part of your mission
is providing Internet connectivity for your users, it's not your place to make judgments regarding their
philosophies, skin color, sexual proclivities, or favorite sport. If you're providing Web services for
something like a college research lab, and you allow users to place some personal material on the Web,
you really need to allow them to place any (legal) personal material on the Web. Of course, you can
make it subject to disk-space or bandwidth constraints. Before you decide that this means that you
shouldn't allow any personal material, you need to keep in mind that what makes the Web such a useful
tool, and such a neat place, is people who make useful things available. If the usage isn't affecting your
other users' business or research use of the network or machines, what reason is there for you to not let
your users give something back to the Web community? They might not all have the most brilliant or
useful Web pages, but remember that if you write policy that bans one, you need to ban them all. And if
you ban them all, that might mean banning the one person who really would make a valuable
contribution back to the community. Don't let the "importance" of your site's mission get in the way of
the benefits of letting users express themselves creatively.
And finally, remember that all policies you have must be understood by both administrators and users.
They must be enforced fairly and consistently.
READ THIS PARAGRAPH—READ IT WELL! Never, never, NEVER write policy that
indicates that you will enforce any sort of censorship or up-front control over your users'
activities. Policies that state that you will react appropriately if informed of inappropriate or
illegal actions on the part of your users are fine. Policies that even hint that you will attempt
to prevent such activities can automatically make you liable for their enforcement.
For example, you may be concerned that your users might make use of your system to
illegally pirate software. Writing a policy that states "To prevent piracy, any software placed
on the FTP site will be checked by the administration" makes the administration potentially
liable for any commercial software that is discovered there. You might not like that fact, but
that's the way lawyers work the law, and system administrators who claim to monitor their
user's activities have been successfully sued as a result of those activities. You're much safer
if you write a policy that states "We do not condone software piracy and will cooperate fully
with the authorities in investigations resulting from illegal software found on the FTP site."
Write your policy so that you react to complaints regarding your users' behavior. If you
should be the one to notice that behavior, so be it, but do not write policy that indicates that
you will attempt to preemptively monitor and prevent the activity. This goes for FTP sites,
Web sites, and any other data your users own or activities they perform.
A Sample Security Policy
Here's an annotated copy of a network and security policy that is similar to the one we use at our site,
The Ohio State University College of Biological Sciences. Following each section of the policy are
italicized comments on why that respective section of the policy was written in that particular fashion.
This should give you some ideas for building a policy suitable for your situation.
Guidelines for System Security Requirements
Users wishing to make use of corporate computing resources or network resources agree to have, read,
and understand the following:
A breach of system or network security is any attempt to connect to, view data on, or otherwise access or
utilize computing resources without authorization. This includes, but is not limited to: use of computer
facilities or network facilities without an account or authorization, accessing files belonging to other
users without the written consent of said user, and use of facilities by authorized persons for purposes
outside the intent of the authorization. An action that causes a breach of system or network security
constitutes a direct violation of corporate computing security policy. Employees found to have willfully
violated corporate computing security policies will be remanded to corporate disciplinary affairs.
This is rather draconian, but covers the entire gamut of using others' passwords, trying to break into the
system, dangling personal computers off the corporate network, and otherwise causing trouble. We don't
get to define disciplinary action such as terminating employment, unfortunately.
Account security on the facility Unix machines is the responsibility of the account holder. Passwords are
not to be shared. Passwords are not to be written down or otherwise recorded. Loaning of passwords to
other users will be considered a direct breach of system security and additionally will be considered
grounds for immediate revocation of your account. Discovery of recorded copies of passwords will also
be considered a direct breach of system security and dealt with in the same manner.
We take password security seriously; you should, too. You would not believe the number of times I find
scraps of paper with user IDs and passwords sitting next to machines.
Non-Unix Machines Attached to the Building Network
Execution of system-crashing or system-compromising software such as (but not limited to) "Win-nuke"
and "Nestea," or propagation of viruses, worms such as "Sircam," or Trojan horse applications such as
"Nimda," constitutes grounds for removal of a system from the building network. Intentional execution
of such software constitutes electronic vandalism and/or theft of service, and subjects the person
executing the software to potential legal liability. Users can be assured that facility staff will provide any
assistance necessary to track and prosecute anyone found to be conducting such attacks.
There is a whole genre of "crash a PC" programs out there that won't actually crash Unix machines, but
that can make life annoyingly slow, and make network connections unstable for Unix users. Users on
your network shouldn't be allowed to run this software against either PCs or Unix hardware. Users
caught doing this should be punished—we remove their systems from the network wholesale. Users
caught doing this to remote sites are likely to have remote site administrators calling and threatening
legal action. Users may think it's all fun and games, especially if you're providing support for a college
facility, but the person whose computer crashes and who loses a research grant as a result isn't going to
think it's so funny.
Collection of network traffic that is not destined for the user and the machine in use (via, but not limited
to, such methods as packet sniffing or ARP spoofing) constitutes grounds for removal of a system from
the building network. Collection of network traffic without court authorization or a direct and immediate
need to diagnose network problems constitutes execution of an illegal wiretap. Users should be
comfortable that their data and electronic transactions are secure against eavesdropping.
Suffice it to say, unless you've got a policy in place that says all data on the system may be monitored at
any time, anyone caught monitoring network traffic that isn't intended for him could be in deep trouble—
including you!
Additionally, users can be assured that facility staff will not intercept network traffic without legal
authorization or an immediate need to diagnose an existing network problem.
Always a good idea to tell users what their rights are as well, and the right to privacy is an important
one. Users need to feel comfortable that they are not going to be "found out" if they discuss an
unpopular opinion with a co-worker, or fear reprisals for the content of personal information kept on the system.
Execution of port-scanning software, or other software that attempts to discern or determine
vulnerabilities in Unix or non-Unix hardware without facility staff approval will not be tolerated.
Execution of such software will be considered an attempt to breach system security and will be dealt
with as such.
There is a lot of software out there that's freely available, and which any user can run, which nonetheless
they should be strongly discouraged from running. Among this software is a subset which searches for
vulnerabilities in Unix hosts. Depending on your environment, you may or may not want to treat running
this software as a direct attack on your machines.
Users will not run FTP, HTTP, or other data servers that provide illegal data (commercial software, or
other illegal data types). The facility staff cannot and will not attempt to police the building network for
such behavior, but reports of copyright infringement or other illegal behavior will be forwarded to the
appropriate authorities.
Notice that we explicitly claim that we won't monitor the network for this type of usage. If someone
reports it, we'll certainly act, but we don't want the legal liability that even trying to monitor for it would bring.
Users will not run software designed to provide unintended gateways into services that are intended to
have a limited scope. Depending on the service and the manner in which the service is gatewayed to
unintended users, execution of such software may constitute theft of service. The facility staff cannot
and will not attempt to police the network for the execution of such software, but will cooperate fully in
any investigations brought by users whose services have been compromised.
Again, note that we explicitly refuse to monitor for this sort of activity. The legal status of some of this
software is questionable, and we don't want mixed up in legal troubles. If somebody comes to us with a
problem, we deal with it.
UNIX Machines
Execution of software similar in purpose to any of the software detailed in the "Non-Unix" section will
be dealt with in the same manner as detailed above, and/or users' accounts will be terminated without notice.
Execution of password-cracking software against the computational facility password database will be
considered an attempt to breach system security and will be dealt with as such.
Notice that we prohibit only attempts on the facility password database. This may seem a little peculiar,
but proactively, we're really concerned only with our facility's security. If a user tries to actually break in
to some other machine, then she'll be violating a law and we have something on which we can take
action. Until she does something like that, she's only behaving questionably, and we get back to the
"don't go looking for trouble" idea. If we claimed that we'd prohibit anyone from trying to crack any
password file on our machines, we'd instantly be liable for letting the one we didn't notice get away with it.
Now if we notice one of our users trying to crack someone else's password file, we're likely to tell that
someone else, and we're also likely to very pointedly glare at the back of the user's head until he gets so
self-conscious that he stops, but we're not about to put it in policy that we're going to prevent him from
doing it. Too many cans of worms to go near there.
Users wishing to install Unix machines on the building network can do this in two ways.
Because this policy was designed for a college computing environment, it's important to address that we
allow users to connect both "lab owned" and personal machines to the network. Personally owned Unix
machines have been a big problem, especially the Linux variant, in that lots of people know how to put
the CD-ROM in the drive, but very few know how to manage the machines after they're up and running.
Unfortunately, some Linux versions have shipped with just about the world's largest collection of
security holes, all wrapped up neatly in one box. The end result is that if you let a poorly administered
Linux machine on your network, you've essentially invited the entire world to come watch all your
network traffic and to probe your machines from inside your network.
Machines that are considered by the computational facility System Manager to be of general use and
interest to the facility at large, may, at the discretion of the System Manager, be allowed to be set up as
part of the facility. Machines handled in this fashion will be administered by the facility staff as full
peers in the facility Unix cluster, and system security will be handled through the facility staff. Machines
administered in this fashion remain the property of their respective owners and are to be considered
primarily intended for the use of their owners. As full peers of the facility Unix cluster they may be used
by other facility users (at least remotely) when they are not fully utilized by their owners.
We've found it productive to grow our facility resources by offering administrative services in exchange
for allowing general facility use of the hardware. This seems to be a productive arrangement for other
groups around campus as well. Some go so far as to have an arrangement whereby anybody interested in
buying hardware gives their money to their computational facility staff. The staff then seeks out other
users interested in the same type of hardware, pools all the money it can find and provides a far better
machine to be shared than any of the "investors" could have afforded individually.
If you decide to go this route, make certain that you have discretionary control over what hardware
you'll take on as part of your facility and what you won't. Although we said "avoid personal judgment as
part of the policy" earlier, here you really want to be able to avoid taking on that 16-year-old clunker
that someone dragged out of a dumpster and now thinks is going to be just the ticket for their archive
server. It seems near impossible to write a policy that covers allowing everything you'd want to allow,
and disallows everything you'd want to disallow, so personal judgment will have to do here.
Security for these machines will be handled in the same manner as security for all computational facility
Unix cluster machines. Users can be assured that all reasonable security precautions have been taken,
and that known potential security problems will be dealt with in a timely fashion.
Should a security violation occur involving one of these machines, it will be dealt with by the
computational facility staff and should not require significant time or effort from the owner of the
machine.
The users taking this route to machine acquisition and maintenance need to know that they're getting
something out of the deal as well. Putting this down in writing also helps when you need to point out to
the occasional problem user what the costs and benefits of "working with you" are.
Machines that are to be administered by their owners or their owners' assignees will be maintained at a
level of security at a minimum in compliance with the requirements in this document and with security
guidelines as defined by the computational facility staff.
Notice how it "requires compliance with security guidelines," rather than something like "requires
acceptable security." Don't get caught with wording that's easy to misinterpret, either accidentally or
deliberately. Notice also that the guidelines are defined by the facility staff. The "System Manager"
referenced a few items earlier isn't actually a Unix administrator at our facility; he's an overall director type. So he is not the best person to define acceptable administration guidelines for Unix machines. We
wanted to keep this flexible enough so that the staff who actually had the experience to define the
guidelines could do the job without having users complaining that "person X doesn't have the authority
to tell us we can't…"
Violations of policy laid down in this document are to be dealt with as defined in this document.
Requirements for account maintenance and termination will be strictly enforced.
Administration security guidelines will be based upon current security problems as reported by corporate
network security and the online security community. These guidelines will be provided by the
computational facility staff on a set of Web pages dedicated to building network security. It is expected
that administrators will bring their machines into compliance with the guidelines within seven (7) days
of the guidelines being posted.
The guidelines referenced here are things like: Shut off all those @#$%&! services that Linux
starts automatically. No outside-accessible accounts for nonemployees. Install security patches when
CERT (or the alert site/sites of your preference) publishes information on them. And anything else that
happens to be pertinent at the time.
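A guideline page like the one described can also point self-administering users at a quick self-audit. The sketch below is ours, not part of any facility's actual toolset: it uses a plain TCP connect test to report which of a handful of historically risky services are listening on a machine (the port list is illustrative, not exhaustive).

```python
import socket

# Historically risky services worth questioning on a freshly installed box.
SUSPECTS = {21: "ftp", 23: "telnet", 79: "finger", 111: "portmap",
            513: "rlogin", 514: "rsh", 515: "lpd"}

def open_tcp_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
        finally:
            s.close()
    return found

if __name__ == "__main__":
    for port in open_tcp_ports("127.0.0.1", sorted(SUSPECTS)):
        print(f"port {port} ({SUSPECTS[port]}) is listening -- is it needed?")
```

A connect test like this only sees services that are up at the moment it runs; it's a sanity check for an administrator, not a substitute for a real scanner.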
Also, seven days is probably too long a period to allow for securing against new-found threats, as these
things make their way around the Internet amazingly quickly. We've had whole clusters of machines hit
within 24 hours of the power being turned on here at OSU. If you can get away with being more strict
with this, you probably should.
Computational facility staff will keep a database of independently administrated Unix hardware and its
administrators, and those administrators will be notified immediately when guidelines are updated.
It wouldn't be very fair to the administrators if we didn't make some attempt to notify them when we
discover something they need to update.
Periodic scans of the building network for known security problems will be conducted by computational
facility staff. Results will be made available to administrators of self-administrated machines as soon as
the data is available. Facility machines will be protected against any vulnerabilities found by facility
staff. Independently administered machines will be brought into compliance by their respective
administrators. Failure to bring a machine into compliance will result in the machine being removed
from the building network.
No, we're not scanning for security violations here, but for security holes. If a new way to break into the
sendmail service is discovered, we want to know whether any of our users are running vulnerable
versions, and we want them to fix it if they are.
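A scan of this sort is only useful if the results are compared against what each machine is supposed to be exposing. One way to sketch that comparison in code — every hostname, port, and baseline below is invented for illustration:

```python
# Hypothetical per-host baselines: the services each machine may expose.
baseline = {
    "web1":  {22, 80, 443},       # ssh plus the web service only
    "mail1": {22, 25, 143},       # ssh, smtp, imap
    "lab7":  {22},                # self-administered desktop: ssh only
}

# Hypothetical results of the periodic building-network scan.
scan_results = {
    "web1":  {22, 80, 443},
    "mail1": {22, 25, 143, 23},   # telnet has appeared -- a red flag
    "lab7":  {22, 6000},          # an X11 server open to the network
}

def noncompliant(baseline, scan_results):
    """Return {host: unexpected_ports} for each host out of compliance."""
    report = {}
    for host, open_ports in scan_results.items():
        extra = open_ports - baseline.get(host, set())
        if extra:
            report[host] = sorted(extra)
    return report

print(noncompliant(baseline, scan_results))   # {'mail1': [23], 'lab7': [6000]}
```

The report is exactly what you'd mail to the administrators of the flagged machines, with the seven-day (or shorter) compliance clock attached.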
Because of the normal speed of network security breaches, the potential for rapid damage during a break-in, and the fact that most security problems occur during off hours, the following access abilities are
necessary for computational facility staff:
Computational facility staff may require physical access to any computing or network
hardware in the building at any time. To facilitate such, master-keys for access will be
kept in a sealed package in the facility safe. Access to these keys will be logged,
justification will be given for using them, and the party whose area requires access will be
notified immediately upon opening this sealed package.
You absolutely need a way to get to any computing hardware that's attached to a network for which you
have any responsibility. If there is a security problem and the owner of a machine is not available,
somebody needs to be able to get to the machine and stop the problem. If you don't have a key, get a fire
axe. I'm serious. If a machine under your control is actively being used to cause damage to someone
else's system, and you've got a choice between breaking a $250 door, or allowing the machine to do
possibly untold amounts of damage, what are you going to do?
Computational facility staff may require administrative access to any computing hardware
on the same network at any time. To facilitate such, administrators of independent
machines will provide the computational facility staff with root or other appropriate
administrative passwords, to be kept sealed unless needed in a crisis. Independent
administrators will keep these passwords up to date at all times. Access to these passwords
will be logged, justification will be given for using them, and appropriate administrators
will be notified immediately should use of these sealed passwords be required.
See previous rationale. Also notice that both here and previously, if access is required to hardware
without the user's presence, we've put on paper that we will log the usage and notify immediately. Users
need to feel secure that their offices and machines won't be invaded needlessly. They're reluctant enough
to provide for this type of access, but it's one of the costs of being on the network for them. One of the
costs of accessing their machines, for you, is a lot of paperwork and apologizing, but it will help keep
you on good terms with your users.
Responsibility and authority for maintenance of security guidelines and for definition of, and action
upon, network security threats lies with the computational facility System Manager. In the case that the
System Manager is not, or is unavailable to be, administrating the facility Unix cluster, the responsibility
and authority pass to the facility Unix cluster Lead System Administrator. Facility assistant
administration staff have the authority to deal with immediate crisis situations as necessary until the
Lead System Administrator can be contacted.
Our higher-ups here require that the "System Manager" be at the top of the chain of command. This isn't
an ideal situation in an emergency, because he isn't a Unix administrator, and he's just going to call the
Lead anyway. Therefore we've worded this policy in such a fashion that he'll almost never actually be
the person who has to call the shots, yet he retains the authority to do so if he needs to.
Our facility has one real full-time administrator and a number of assistant administrative staff (students)
who are available on odd schedules. The assistant administrators have the authority, in the absence of
the Lead System Administrator, to deal with any security issues that arise. This, as mentioned
previously, is a trust issue. So they're students—so what? They've proven themselves trustworthy, and
are trusted completely and implicitly whenever they're logged in as root. If we weren't certain we could
trust them and their judgment, we wouldn't have given them the root password in the first place. They're
also trusted, within the bounds of facility-defined policy, to make responsible decisions regarding the
necessity for actions regarding system security.
Physical Security Devices
Unless your facility is a small, hard-to-find one that can rely on security through
obscurity, it should probably make use of some physical security devices. What follows is a
description of some of the available physical security devices. Table 2.1 includes a list of URLs for
manufacturers and/or vendors for these products.
If you want some security while retaining the aesthetic look of your Macintosh, cable lock
systems may be ideal for you. For desktop machines such as the Power Mac G4 and older
Power Mac G3, the systems have an anchor point and a cable that connects the CPU to the monitor. For
the older iMac and the PowerBooks and iBooks, there are cable and lock systems which attach to the
machines and loop through a hole in the desk. Look for ones that use hardened cable of some sort—
we've met some no-name brands where the cable can be cut with a pair of easily concealable wire cutters. On some of the machines the attachment point is a slot about an inch wide and 1/4 inch tall, into
which a proprietary locking connector fits. On others it's literally a hasp through which you can hook a
lock or cable. On the most recent machines, it's a little oval slot about 3/8 inch long into which another
type of proprietary locking connector fits. Having designed it, Kensington seems to have a lock on the
current smaller security slot design and the accessories that use it, though some competitors are
releasing compatible locking products. A number of other vendors
provide the T-shaped locking key that fits the older, larger style security slot, and assorted hardware for
using this slot to secure machines. Prices for the cables or locking connectors that mate to these types of
systems tend to run in the $25–$65 range. You can also find cable kits that are supposed to be able to
attach CPUs, monitors, keyboards, and a varying number of peripherals. Prices for these range from $20–$30.

Intruding somewhat on the aesthetics, with costs varying significantly depending on the machine
and whether you require professional installation, AnchorPad-type security systems are plates that
mount to the desk or table under the machine and provide a way to lock the machine down to the plate.
AnchorPad installation typically is somewhat invasive for the machine, requiring either super-gluing a
lock-plate to it, or bolting the plate on through the bottom of the machine. AnchorPads appear as a 3/4-inch-thick pad with key-slots in the front, atop which your machine sits, so they don't interfere with any
peripheral ports or significantly interfere with the appearance or cooling.
If aesthetics and cost aren't as much of a concern to you, entrapments or enclosures might be of interest
instead. These devices are much like AnchorPads with covers. They are available for the older iMac,
some of the laptops (for when they are not in use), the G4 cube, and the G3/G4 towers. These encase the
computer and attach it to an AnchorPad. Prices range from $100–$145. A similar device that is
supposed to work on any laptop encases a laptop in an open position. It costs $75.
You can also find alarm systems for computers. One system is called PC Tab. With it, a sensor is
attached to each machine. Each machine in turn is attached to a central alarm panel, which can hold up
to 32 machines. Modules can be added to attach more machine clusters to the central alarm. If a sensor is
removed, an alarm goes off. The Phazer Fiberoptic Alarm System and LightGard are similar products.
Alarm units are also available for laptops.
Tracking systems, to allow you to keep track of what hardware you have and what you're actually
supposed to have, are also available. One such system is called STOP Asset Tracking. The system uses a
barcode plate for each of your items. A barcode scanner and software system are available with it to help
you start an inventory database. The barcode plates are linked to the company's international database. If
one of your items is stolen, the company is supposed to help you work with the authorities to recover the
item. You can buy a barcode scanner for the system, but it is supposed to be able to work without it.
Additionally, if you have equipment that you loan out to your users, the system is also supposed to be
able to keep track of items that have been checked out and alert you to overdue items. Information is
stored in a Microsoft Access database.
For the less public environments, you might be interested someday in protecting your machines with
biometric devices—devices that do fingerprint, iris, or facial scans. At this time, biometric devices are
mostly available for Windows operating systems, but many use USB hardware, so they wait only for
some enterprising programmer to write the correct software for OS X. Sony is leading the way for OS X
with a product that's currently called PUPPY Suite. (As of this writing,
the product is still pre-release, and the Web site is only partially functional.) This biometric device is a
fingerprint scanner that can be integrated with the Mac OS X login system so that a user can authenticate
with a fingertip. When such devices are available for the Macintosh, you will have to weigh privacy
issues against security issues. (Frankly, we also have visions of a future where business executives need
to take out extra insurance to cover their thumbs, so you might have other issues to consider as well.)
Although you might not be able to protect your Macintosh itself with such devices at the moment,
biometric access systems are available for buildings and rooms.
If you don't want to attach security cables to everything in your facility, but are still concerned that some
of the smaller items (such as keyboards and mice) might walk away in someone's backpack or duffel
bag, you might also try providing keyed lockers for your users, if space permits. This would give them
somewhere to store most of their possessions while they are using the computer. Place the lockers
somewhere where the users will feel confident that their possessions aren't going to be stolen while they
are using the facility. It won't prevent someone from stealing loose hardware, but if your users aren't
usually leaving their bags and coats on the tables, it's less likely that someone will get the bright idea to
drop his coat on top of that new pro mouse and pick the mouse up with it when he leaves.
Depending on what type of facility you have, you may need to rack mount Macintoshes. Apple, of
course, has provided neatly for this with the release of the Xserve, but retrofit kits are also available for
older Macintoshes. If you would like to rackmount G3 and G4 machines, check into the mounts made by
Marathon Computer. They make one version that enables you to mount G3/G4 towers horizontally, and
another version that enables you to mount the machines vertically. If mounting them horizontally better
suits your needs, they also sell clips for the CD/DVD trays for use on models that do not already have
retaining clips.
Table 2.1 lists URLs for a number of physical security products that you might find useful in
establishing a secure environment for your computers.
Table 2.1. URLs for Manufacturers and/or Vendors of Security Products

Manufacturer/Vendor                            Products manufactured or sold
123 Security Products                          Security cameras, time-lapse
ADT Security Services                          Security services
Advance Security Concepts                      Media safes; electronic door locks
AnchorPad International                        Cable locks, plates
                                               Biometric building access system. (This
                                               URL seems to have died since we started
                                               the book, but the product line was
                                               interesting and worth keeping an eye out
                                               for its reappearance.)
CCTV HQ Network                                Fake security cameras, security camera
                                               systems
Champion Lockers                               Lockers
Computer Security Systems, Inc.                Cable locks, entrapments, plates, alarm
                                               systems, tracking systems, enclosures
Cutting Edge Products, Inc.                    Fake security cameras
Federal Security Camera                        Fake security cameras
                                               Cable locks
Kensington Technology                          Cable locks; alarm unit
                                               Biometric building access
Marathon Computer                              Rack mounts
Minatronics Corporation                        Fiber optic alarm system
Penco Products                                 Lockers
Pentagon Defense Products                      Fake security cameras
Polaris Industries                             Security cameras, multiplexors,
                                               time-lapse
Republic Storage                               Lockers
                                               Cable locks, entrapments, enclosures,
                                               tracking systems, alarm systems
                                               Alarm systems for home/office
                                               Security cameras
                                               Cable locks
Security Tracking of Office Property (STOP)    Tracking system
Secure Systems Services                        Cable locks, entrapments
                                               Alarm units
                                               Alarm units
Network Considerations
When you think of physical security, you might not necessarily think about your network. However, the
network is an important part of anyone's computing experience today. Here we will take a brief look at
traditional and wireless networks and what you can do to make them more secure.
Traditional Networks
Although there are a variety of traditional network topologies, the two most common are bus topology
and star topology.
A bus topology connects all network devices along the same network trunk, the backbone. The backbone
is typically a thinnet cable, also known as 10BASE-2 or coax (for coaxial cable). If the cable is
interrupted at any point, the network goes down. This type of topology tends to have a high collision
rate. Additionally, with all of the machines connected to the same line, network troubleshooting is more
difficult because you can't conveniently isolate parts of the network for testing.
In a star topology, machines are connected to a hub or switch, typically by twisted-pair (also called
10BASE-T) wiring. If you disconnect one or more of the machines, the network does not go down. The
ability to conveniently disconnect machines makes troubleshooting a star topology network more straightforward.
The primary network security concern is someone being able to watch your network traffic as your
network is used. If you aren't using secure software on all machines in your network, things such as user
IDs and passwords will be flying around your network in plain text for anyone with a little too much
curiosity to see. Even if you are using secure software on all the machines in your network, many
network services that your users will use to access the Internet at large will be insecure, and malicious
network eavesdroppers will be able to view the data traveling out to Internet servers.
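To see why this matters, consider how little effort it takes to pull credentials out of captured plaintext traffic. This sketch, built around a fabricated FTP session rather than real capture data, does essentially what a sniffer's post-processing step does:

```python
# Fabricated capture of a plaintext FTP login (not real traffic).
captured = """220 ftp.example.edu FTP server ready
USER jray
331 Password required for jray.
PASS s3cr3t!
230 User jray logged in."""

def harvest_credentials(capture):
    """Pull USER/PASS lines out of an FTP- or POP3-style plaintext session."""
    creds = {}
    for line in capture.splitlines():
        verb, _, rest = line.partition(" ")
        if verb.upper() in ("USER", "PASS"):
            creds[verb.upper()] = rest
    return creds

print(harvest_credentials(captured))  # {'USER': 'jray', 'PASS': 's3cr3t!'}
```

Two string operations and a loop: that is the entire "attack" once the traffic is visible, which is why encrypted replacements such as SSH matter so much.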
A secondary network security concern is that your network wiring is a trivial target for a person who
wants to disable machines on your network. A quick slash with a pocketknife and your entire thinnet
backbone stops working, or a whole branch of your 10BASE-T network loses connectivity to the outside world.
10BASE-2 networks are particularly vulnerable to the sniffing of network traffic because all packets go
everywhere on the network. Removing a machine and replacing it with one that records the traffic
passing by is easily accomplished. If the intruder wants to keep a lower profile, cutting into the cable
and patching in a new connector is the work of but a few seconds.
Generally, we recommend avoiding 10BASE-2 networks whenever possible. They're cheap and easy to
set up for a few machines, but they are fraught with support problems and you'll be a happier person if
you never have to work with one.
Old-style 10BASE-T networks suffer from the same problem of all packets on the network going to all
machines, but the advent of inexpensive smart switches makes this topology inherently much more
securable than a 10BASE-2 network. Building a star (or tree) topology network with smart switches
rather than hubs as the cable connecting hardware restricts network traffic to only those wires that are
absolutely required to carry the data. Smart switches learn what machines are where on the network, and
they intelligently route traffic specifically where it needs to go, rather than send it everywhere, in the
hopes that the machine for which it's destined is listening. This has two benefits. It speeds up the
network considerably, as many high-bandwidth functions—such as printing to a networked printer—will
be restricted to only those network wire segments between the hosts that are directly communicating.
With printing, for example, the data being transmitted only ties up the wires physically linking the
printing computer and the printer. Other branches of the same logical network are unaffected by the
traffic. In many cases, this can limit such traffic to only the wiring of one room, allowing the rest of the
network to function as though the traffic between communicating machines didn't even exist.
Additionally, it helps to prevent any machine plugged into the network from seeing traffic that isn't
destined for it. A machine that wishes to snoop on network traffic in a completely switch-connected
network will see very little data to snoop on: Because nothing knows it's there, there will be no data sent
to it, and nothing will ever be sent down the wire to which it's connected. Clever crackers have ways to
limit the protections that switched networks offer, so they should not be considered to be a panacea for
all network-traffic-sniffing ills. However, switched networks are inherently more difficult to attack than
networks using only simple hubs, and so are a natural tool in the security professional's toolbox.
We recommend using smart switches for as much of your network as possible. They'll
save you many headaches, and the better versions give you nice control over your network, such as the
ability to remotely disconnect the network from a machine that's begun causing problems at 4:00 on a
cold winter morning.
Wireless (802.11b/AirPort, 802.11g/AirPort Extreme) Networks
The conference where Steve Jobs introduced the original iBook in conjunction with the AirPort card and
AirPort base station ushered in an exciting time for Macintosh users. Since then, Macintosh users have
been embracing wireless technology. We are now getting used to being able to surf the web from our
backyards, or taking our laptops from one part of our office building to another without losing network
connectivity. Although wireless networks are indeed convenient, they also have security risks. A
wireless network is conceptually similar to a star topology traditional network, only instead of machines
connecting to a central hub by wire, they connect via radio transmission. It's also similar in that data sent
between a computer and the hub (AirPort card and wireless base station), is visible to all in-range
computers with wireless capability. Although each connection to the base station may be encrypted,
conferring some level of privacy, the network is not "point to point," like a switched 10BASE-T
network. Any computer that cares to snoop can receive the encrypted traffic, log it, and bash on the
encryption to try to break it at its convenience.
WEP Encryption
Wireless networks typically consist of one or more wireless access points attached to a network wire and
some number of wireless clients. In the typical Macintosh case, the wireless access point is the AirPort
Base Station, which can currently support up to 50 users. Yesterday's wireless networks commonly
achieved a data rate of up to 11Mbps, and they broadcast on a 2.4GHz radio frequency. Today we're
moving to 54Mbps on the same broadcast frequency with the 802.11g standard.
The 802.11b and 802.11g standards, the standards upon which the AirPort's and AirPort Extreme's
wireless technologies are based, also include a way to encrypt traffic by using the WEP (Wired
Equivalent Privacy) protocol. This can be configured with no encryption, 40-bit encryption, or 128-bit
encryption. Although 128-bit encryption is better than 40-bit encryption, WEP encryption is overall a
weak form of encryption.
Anyone who wants to decrypt the WEP encryption needs only a Unix box with a package such as
AirSnort or WEPCrack (AirSnort appears to be the package under regular
development). After packets are decrypted, the intruder can collect whatever interesting data passes by,
including usernames and passwords.
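The core weakness is easy to demonstrate. WEP encrypts each packet by XORing it against an RC4 keystream derived from the shared key and a 24-bit initialization vector; with only about 16.7 million IVs, keystream reuse is inevitable on a busy network, and two packets encrypted under the same keystream leak the XOR of their plaintexts. A toy sketch of the arithmetic, using random bytes to stand in for the RC4 keystream:

```python
import os

def xor(data, keystream):
    """XOR data against a keystream, byte for byte (stream-cipher style)."""
    return bytes(d ^ k for d, k in zip(data, keystream))

keystream = os.urandom(24)            # stands in for RC4(IV || shared key)
p1 = b"GET /private/grades.html"      # predictable protocol traffic
p2 = b"USER jray PASS s3cr3t!  "      # the packet the attacker wants

c1, c2 = xor(p1, keystream), xor(p2, keystream)

# XORing the two ciphertexts cancels the shared keystream completely:
leaked = xor(c1, c2)
assert leaked == xor(p1, p2)

# If the attacker can guess p1 (protocol headers usually are guessable),
# the second plaintext falls out immediately:
assert xor(leaked, p1) == p2
```

Real WEP attacks also exploit weak RC4 key-scheduling, which is what tools like AirSnort automate, but the keystream-reuse arithmetic above is the reason a 24-bit IV space was doomed from the start.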
Security Limitations
Along with the weak WEP encryption, wireless networks have other security limitations.
For example, because it's difficult to stop the radio waves carrying a wireless network at the boundaries
of a building, it is easier for an unauthorized client to become a part of the network. Previously, to
physically insert a client into a traditional network, physical access to the interior of the building was
required. Now, someone need only park outside your building and flip open a laptop. With an external
antenna, an unauthorized client can potentially receive the wireless network's signal at a greater
distance; hardware hackers regularly achieve multikilometer connections to AirPort networks by using
Pringles potato chip cans as antennae. If there is no password for the access point, the unauthorized
client just joins the network. If only the default password is in use, the intruder probably knows it and
can still join. An intruder who has joined your network can perform malicious acts from it, including
reconfiguring your AirPort Base Station.
Because of problems such as these, it's common for wireless networks to be set up with a number of
rather severe limitations that are designed to mitigate problems caused by potential unauthorized use.
These range from configuring the antenna placement and broadcast pattern to limit the access area to
carefully defined regions, to setting the system up so that wireless-connected users must use a VPN
client to tunnel into another network before they can gain any substantial functionality. The most
common, and least secure, is simply to consider the wireless network an untrusted segment with respect
to the remainder of the network, but this does nothing to prevent either misuse of the resource or leakage
of sensitive data from trusted machines onto the untrusted segment.
Physical security of your hardware can be as important to the security of your data as the secure
configuration of your operating system. If you consider system security as including the maintenance of
your system in a stable, maximally usable condition, security also involves making intelligent policy
decisions regarding the use and users of your system. Depending on your particular circumstances, you'll
need to take some of the recommendations in this chapter more seriously than others, or you may even
need to consider threats more outlandish than some we've outlined here. Spy films are spy films, but
many of the snooping technologies depicted have some basis in reality. If you're attempting to protect
the boss's computer against industrial espionage, you need to seriously consider which attacks might be
brought to bear against your hardware, your users, and your network. If you're supporting a facility with
student users who have too much free time on their hands, you might be up against an even larger challenge.
Chapter 3. People Problems: Users, Intruders, and the World
Around Them
Your Users: People with Whom You Share Your Computer
The Bad Guys: People Who Would Do Your System Harm
Everybody Else
Users! Users complicate things, and so do the rest of those unpredictable humans who visit, probe, or
just occasionally brush past your computer on the network. If it weren't for the human beings in the
equation, keeping a system secure would be so much easier! This may sound like something demeaning
that fell from the lips of one of the particularly humorless system managers you might know, but until
computers start thinking for themselves, it's really quite true. Behind almost every security threat to your
system, there are the actions of some person or persons. Some of them are consciously malicious
actions, but there are a multitude of situations where the unconscious behavior of users is at fault.
Because of this, in more ways than one, computer security is a people problem, and some of the best
results in terms of absolute increases in your system's security can be obtained by addressing the people
and modifying their behavior. This chapter, the last of the primarily philosophical discussions in this
book, examines the human issues that bear upon your system's security, and where you can best work to
help your system's users work consciously toward, instead of unconsciously against, your goal of system security.
Your Users: People with Whom You Share Your Computer
The users who are legitimately allowed access to your machine are, ironically, the group of people who
are the source of most of your nontrivial security concerns. Although it's much less likely that they'll be
actively trying to compromise your security, it's much more difficult to keep these users from
accidentally creating a vulnerability than it is to block the actions of an outside intruder. And, if they
actually want to create an insecure situation or hole in your security, their actions are much harder to prevent.
It's therefore important that you do what you can to keep your machines' users on your side, and actively
thinking about security. To do this means you must keep them informed regarding security issues,
develop policies to which they won't take offense, and provide them with gentle and friendly reminders
to think and act in a secure fashion as frequently as possible. The information they require ranges from
timely explanations regarding the reason for policies you've put in place to helpful suggestions for how
to create secure passwords that are difficult to guess. Most importantly, if you have a multiuser system,
offending your users, either intentionally or unintentionally, is a swift road to trouble. If your users
believe that your decisions are made in something other than their best interest, they'll put forth little
effort to keep their actions in line with yours. Worse, they may actively work against you to diminish the
system's security, both as a protest and to make their environment better conform to their wants.
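As an example of the "helpful suggestions" category, here is the kind of simple guess-resistance check you might hand your users. The thresholds and the tiny wordlist are ours, purely for illustration; a real deployment would use a much larger dictionary:

```python
import string

# Illustrative only: a real common-password list has thousands of entries.
COMMON = {"password", "letmein", "qwerty", "macintosh", "secret"}

def password_warnings(pw):
    """Return the reasons a proposed password looks easy to guess."""
    warnings = []
    if len(pw) < 8:
        warnings.append("too short (under 8 characters)")
    if pw.lower() in COMMON:
        warnings.append("matches a common-password wordlist")
    classes = sum([
        any(c in string.ascii_lowercase for c in pw),
        any(c in string.ascii_uppercase for c in pw),
        any(c in string.digits for c in pw),
        any(c in string.punctuation for c in pw),
    ])
    if classes < 3:
        warnings.append("uses fewer than 3 character classes")
    return warnings

print(password_warnings("qwerty"))     # all three warnings fire
print(password_warnings("G4x!shore"))  # []
```

Offering users a checker like this, with friendly explanations of each warning, is exactly the sort of gentle reminder that keeps them working with you rather than around you.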
Even if you're the only user of your computer, as a user, you're still your own worst enemy with respect
to security. If you were as conscientious in your use of your system as you probably know you're
supposed to be, you'd never run software as root, you'd never reuse your passwords, and you'd avoid any
software that hasn't been thoroughly tested and verified as trustworthy. Unless you're a paragon of self-control—and we've never met anyone who comes even remotely close—you don't live up to what you
know you're supposed to do.
Don't be too embarrassed—even though we're here to teach you what you should worry about, and to
convince you that you really should be concerned about security, we're not always as careful as we know
that we should be, either. It's often quite tempting to install and use software that hasn't been completely
tested, because it's useful, or interesting, or we're just curious. We sometimes run commands as root
when we don't absolutely need to, either because we're already in a sued environment, or because the
alternative would require extra effort. Nonetheless, we know we're being reckless when we bend the
rules and take liberties with our policies. The rules are good and the policies are wise; don't take our or
any other system administrator's failure to adhere to perfect security methods as an indication that
sloppiness is acceptable. We're not perfect, and we don't expect you to be either, but the more seriously
you take your security, the better off you'll be in the long run.
Yes, we admit it: we, your authors, aren't perfect adherents to what we're going to try to teach
you in this book. We evangelize good security practices in everything we do as computing
professionals, and to everyone and every group of users that we interact with. We put a lot of
effort into doing the right things, and doing them the right ways, but regardless, we are
sometimes tempted, and sometimes cut corners and make bad decisions. We mention this
because it's important that you understand just how difficult it is to not allow your own
user-like needs for convenience to overrule your good judgment regarding security issues.
Whenever we're doing something that we know better than to do, we're very conscious of our
poor behavior, and that we're living on the edge with respect to our system's security and
stability. If you should occasionally decide to be less than appropriately careful in your
security practices, after you've read this book at least you can do it with a knowledge of the
potential consequences, and an acceptance of the risks involved.
The Things They Do Wrong, and Why
The thing that users do most frequently to decrease system security is make decisions regarding the way
they use a computer based on their personal convenience. Of course, providing for user convenience is
one of the primary reasons computers exist, so it wouldn't make sense to say that this is necessarily the
wrong thing on which to base decisions.
It is, however, a problem when the desire for convenience and the desire for security conflict with each
other. When this conflict is manifested in the form of picking a pet's name as a password because it's
convenient to remember, it's crossed the line from making the machine usefully more convenient to
making it irresponsibly less secure.
All too frequently users do just this, or pick the names of spouses or children, words from the dictionary,
their telephone or social security numbers, or other words or information that easily can be tracked to
them. A number of password studies have been done, and since pre-Internet times the proportion of
passwords that can be guessed by applying simple, easily obtainable personal knowledge and
dictionary word lists has held surprisingly constant at about 1 in 3. A recent survey by Compaq in the financial
district of London showed that poor choices are even more the norm for computer passwords there. A
staggering 82% of the respondents said they used, in order of preference, "a sexual position or abusive
name for the boss" (30%), their partner's name or nickname (16%), the name of their favorite holiday
destination (15%), sports team or player (13%), and whatever they saw first on their desk (8%) (Press
Association News, 1997-01-02).
Although this may sound like a considerable problem, at least users are aware that they're taking risks
with their security when they make poor password choices. Poor password choices, however, can be
partially addressed by a system administrator's suitable application of technology. To detect
vulnerabilities such as bad passwords, administrators have started turning the crackers' tools
against their own machines: checking the passwords that users choose against the same word lists
and rules that those tools use, and rejecting or changing any that would fall to such an attack.
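The idea can be sketched in a few lines. The word list, the mangling rules, and the function names below are illustrative inventions, not taken from any real cracking tool; a production checker would use a proper library and a dictionary of many thousands of words:

```python
# Sketch of proactive password checking, in the spirit of turning the
# crackers' dictionary attack into a gatekeeper. The word list and the
# mangling rules below are illustrative stand-ins, not from any real tool.

def candidate_guesses(word):
    """Yield simple cracker-style variants of a dictionary word."""
    yield word
    yield word.capitalize()
    yield word[::-1]                                  # reversed
    yield word + "1"                                  # trailing digit
    yield word.replace("o", "0").replace("e", "3")    # leetspeak swaps

def is_weak(password, wordlist, personal_info=()):
    """True if the password would fall to a first-pass dictionary attack."""
    if len(password) < 8:
        return True
    pw = password.lower()
    for word in list(wordlist) + list(personal_info):
        for guess in candidate_guesses(word.lower()):
            if pw == guess.lower():
                return True
    return False

wordlist = ["secret", "dragon", "fluffy"]     # stand-in dictionary
print(is_weak("Fluffy1", wordlist))           # True: pet's name plus a digit
print(is_weak("fR4-lake%poTato9", wordlist))  # False: survives this pass
```

A password that survives this kind of check isn't guaranteed strong, but one that fails it would fall to the very first pass of a real cracking tool.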
More difficult to address, however, are a host of other conveniences that users take where they are not
aware of the vulnerabilities that they are enabling. These more insidious problems tend to take the form
of software that users run that either intentionally or unintentionally does something other than just what
the user believes it does. For example, most users probably have no intention of setting up their
computers so that anonymous remote users can execute arbitrary software on them, yet a large majority
of users use HTML and JavaScript-enabled mail clients, such as Microsoft Outlook, which provide this
exact misfeature. (The recent widespread propagation of the Sircam and Klez email worms, and the
uncountable costs of containment and repair, could have been completely avoided if users would simply
have chosen to use secure software, instead of willingly wearing blinders to the threat in the name of
extra convenience.)
Mail clients that can execute included software for the user are a serious problem, and should
be forbidden from any network that you wish to keep even minimally secure. We will cover
the problems more extensively in several later chapters, but to illustrate, consider the
following code snippet:
#!/bin/csh -f
# Silently attempts to delete every file the user has permission to remove.
/bin/rm -rf /* >& /dev/null &
If email clients are capable of executing code, at least some users will enable the feature that
allows such execution. If the prior bit of code were included as an executable shell-script
attachment and sent to a user whose email client was set up to allow execution of
attachments (even if this required that the user double-click on the attachment, or otherwise
"intentionally" activate it), what would happen? What if the gullible user had admin/root privileges?
Users also probably have no intention of sending random snippets of the information in other documents
on their drives along with information that they email, yet a similarly large number send Microsoft Word
documents through email without a second thought, not realizing that these documents frequently
contain a considerable amount of unnecessary and potentially private information that is not visible
through Word, but can be extracted with little effort by other programs. (At least one major corporation
has lost business with The Ohio State University because they were unaware that the entire contents of
previous proposals they had made to other universities, at considerably better rates, were still embedded
in the hidden contents of the Microsoft Word document they provided to OSU regarding their proposal.)
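To see how little effort "little effort" really is, consider that a strings(1)-style scan of the raw file is often all it takes to surface text the document's own application no longer displays. The file and its contents below are fabricated for illustration:

```python
import re

def extract_strings(path, min_len=6):
    """strings(1)-style scan: pull runs of printable ASCII out of a binary
    file. This is often enough to surface text that the application's own
    UI no longer displays -- deleted passages, prior revisions, metadata."""
    with open(path, "rb") as f:
        data = f.read()
    pattern = rb"[ -~]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

# Demo with a fabricated stand-in "document": visible text plus a leftover
# fragment the editing program kept in the file but no longer shows.
blob = b"\x00\x01Visible proposal text\x00\x00rate offered to Univ B: 12%\x07\xff"
with open("demo.bin", "wb") as f:
    f.write(blob)
for s in extract_strings("demo.bin"):
    print(s)
```

Against a real fast-saved Word document, the same scan can turn up entire deleted paragraphs.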
Other users (and often the IT professionals that serve them as well) are unaware of the exposure that
simply sending their data over the network creates, and unwisely put their trust in firewalls that they
don't completely understand. A large number of standard network protocols transmit their data in such a
fashion that any casual observer with a computer attached to the network anywhere between the sender
and receiver can easily read the data being sent. Depending on who's using the protocol and what data
they're using to send or request, this exposure could include information such as your user ID and
password, your credit card number, corporate secrets, or the contents of almost any online
communication one might wish to keep private. Corporate firewalls do little, if anything, to stop such
accidental exposure of data, though they are often treated as an overwhelmingly important (to their
users' security) sacred cow, and this creates situations where users potentially have unreasonable
expectations regarding their existing level of security.
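As a concrete instance of how thin this protection is: HTTP Basic authentication, still common on internal sites, sends credentials merely base64-encoded, and base64 is an encoding, not encryption. The captured header below is fabricated for illustration; any observer on the path could reverse it just as easily:

```python
import base64

# A fabricated HTTP request header as an eavesdropper would see it on the
# wire. Basic auth is base64-encoded, and base64 is an encoding, not
# encryption: anyone who can read the packet can reverse it.
sniffed_header = "Authorization: Basic YWxpY2U6aHVudGVyMg=="

encoded = sniffed_header.split()[-1]
user, password = base64.b64decode(encoded).decode().split(":", 1)
print(user, password)    # alice hunter2
```

No firewall between sender and receiver changes this; only encrypting the connection does.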
This problem has become more severe in recent years with the advent of the dot-com boom. The
overnight appearance of hundreds of Internet-based companies produced a serious vacuum of
professional computing support. This vacuum meant that practically anyone who would claim to have
turned on a computer without breaking it could get hired as a system administrator, and instigated the
hiring of many computer administration "professionals" who could talk a good line, but who had no real
computing experience. The problem was further exacerbated by the dot-com bust, which put armies of
these people back on the street, now bearing impressive titles like "network administrator" on their
resumes, while still having no practical experience. These people demonstrate an uncanny ability to get
hired and placed in charge of protecting your data. Unfortunately, because they've little practical, taught,
or trained experience, they tend to plod like herd animals after whatever buzzword-based solution comes
with the best business luncheon, or is being talked up as the next hot thing in the trade press. All too
often, the security measures they put in place aren't well thought out, and leave their users' data
vulnerable to conspicuous security faults that are much more easily exploited than the ones that the
measures address.
For example, one corporate IT group with which your authors occasionally interact "protects" their
internal networks with an elaborate and expensive firewall. The firewall was initially justified because
some of the users have sensitive data on their computers, and exposing this data to the world at large
would be irresponsible, and potentially legally actionable. Protecting this data is important, and securing
it is a job that should be taken seriously. While firewalls are an all-too-common remedy in which too
many security experts put too much confidence, this firewall does a laudable job of blocking incoming
connections, as almost any firewall product will do. Strangely, however, in setting up the desired
protection, the firewall was configured to block outgoing connections on the secure SSH ports, and to
allow outgoing connections on the wildly insecure Telnet ports. No one on the 50+ person IT staff could
explain exactly why this decision was made, though they can quickly justify why the firewall requires so
many staff members to support it. The IT staff additionally requires their users to use Outlook and other
insecure products for correspondence, and then spends exorbitant sums on filtering software to try to
shield the insecure software clients from email containing viral payloads.
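For contrast, a sane outbound policy is only a couple of rules. The sketch below uses ipfw, the packet filter that ships with Mac OS X of this vintage; the rule numbers are arbitrary, and you would run these as root within whatever larger rule set you maintain:

```shell
# Permit outgoing SSH (port 22) and refuse outgoing Telnet (port 23) --
# the reverse of the configuration described above. Run as root; rule
# numbers are illustrative.
ipfw add 1000 allow tcp from any to any 22 out
ipfw add 1010 deny tcp from any to any 23 out
ipfw list    # review the resulting rule set
```

Two lines of policy, and users are steered toward the encrypted protocol instead of away from it.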
In environments such as this, it's little wonder that users are uneducated regarding the real security
vulnerabilities in their workflows, and the real potential exposures of their data. It also seems endemic
to such situations that the users have been convinced of a pair of seemingly diametrically opposed
"truths" regarding their situation: They've usually been convinced that there is no solution to their
security woes, and that therefore they shouldn't expect stability and security from their machines.
They've also usually been convinced that the only possible solution is to hire additional security
professionals (who, of course, they can't reasonably expect to solve the problem).
Throughout this book, we're going to work to convince you of a differing viewpoint: that the success or
failure of a system's security primarily depends on the actions and behavior of the system's users, and
that education of the users is a necessary component in the creation of an environment that is usefully secure.
How to Help Them Do Better
Education. Users need more and better education. Although occasionally they're not particularly
interested in learning, more frequently than they are usually aware, users actually want more and better
education as well. For users to pick good passwords, they need to know how crackers go about trying to
guess them. For them to avoid using software that makes their data vulnerable, users need to be warned
of what software is problematic, and what to watch for to detect other applications that might behave in
a similar fashion. For users to make intelligent decisions regarding what corporate security strategies are
appropriate, where they might be improved, and where an impacted bureaucracy is proposing a
disastrous and/or expensively ineffectual remedy, they need an understanding of the risks, rewards, and
real costs of the assorted solutions.
If you're a system administrator for one or more machines, teaching your users how to think securely,
educating them regarding the realities of security issues, and trusting them with the responsibility to
behave in a secure fashion will result, overall, in better security for your system. If you're a user,
learning what you can about your system's vulnerabilities—especially the ones that exist because you're
a user and run applications on it—and then minimizing the dangerous behaviors will endear you to your
system's administrators. Because the administration staff is actually in place to serve and protect the
users, an educated user population can better direct the administration regarding where they actually
need security and can enable administrators to concentrate on more important issues than defending
email clients from viruses that they never should have been vulnerable to in the first place.
As you're reading this book, you're already working on your own education. Pass what you can along to
the user community around you. Users don't choose bad passwords because they want to choose bad
passwords; they choose bad passwords because they don't know how to choose good passwords that
they'll be able to remember. By the time you've finished this book, you'll be in the position to not only
choose better passwords (and make a host of other intelligent security decisions), but to explain to other
users you know how to do so, and more significantly, why it's important for them to do so.
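One concrete technique worth passing along: passphrases built from randomly chosen words (the "diceware" approach) are both memorable and resistant to the guessing attacks described earlier. The sketch below uses a tiny stand-in word list and invented function names; a real list has several thousand entries:

```python
import secrets

# Diceware-style passphrase: a few words drawn with a cryptographically
# strong random source from a large list are memorable yet hard to guess.
# This tiny word list is a stand-in; a real one has thousands of entries,
# and each extra word multiplies the attacker's work by the list's size.
WORDS = ["copper", "violin", "glacier", "mustard", "pylon", "harbor",
         "tundra", "sprocket", "ember", "quartz", "lagoon", "fathom"]

def passphrase(n_words=4, sep="-"):
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())    # e.g. glacier-ember-pylon-quartz
```

Because each word is chosen at random, no amount of personal knowledge about the user helps the attacker.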
Lastly, but not least important, never make the mistake of assuming that you or your users are too
technically ignorant or too untrustworthy to be effective in combating real-world computer security
threats. You and the users around you do not want or need "security for idiots." Idiots, by definition,
aren't secure. Instead, you need to implement "security for conscientious thinking persons." Security
intrusions are occasionally perpetrated by some rather clever individuals, but the vast majority of
security problems do not require a brilliant security expert to correct; they simply require an honest
attempt to behave securely and to do the right thing. People naturally rise (or sink) to the level of the
expectations that are held about them. If you expect them to behave in a responsible and intelligent
fashion, the vast majority will do their best to not disappoint that expectation. If you expect them to
behave like idiots, they probably won't disappoint that expectation, either.
The Bad Guys: People Who Would Do Your System Harm
We'll call them bad guys, nefarious individuals, malicious persons, or crackers—you can call them
whatever makes you happy. Regardless of what they're called, the vast majority of people who intend to
compromise your system security are nothing more than minimally computer-literate jerks. The
romantic notion of the Robin-Hood-like computer cracker who breaks into a government computer to
expose the evil military experiments that the government is doing, or the well-meaning geek who
infiltrates a corporate network for amusement, leaving behind an explanation of how he did it and how
to plug holes, is a myth that's based in too little reality. The media has done us a disservice by portraying
characters "hacking" computer security in such a positive light, and the fact that so many computer users
have successfully eschewed responsibility for the actions of their machines is not helping the climate,
either. Regardless of the causes, however, most threats to your computer will be caused by the actions of
relatively unsophisticated computing "punks," who are using tools they don't really understand to try to
take your computer for a joyride. Most of these attacks will actually be carried out by "innocent
victims," whose computers have already been compromised and programmed to carry out further attacks
on behalf of the cracker actually pulling the strings. It's not at all uncommon for the person who's
directly responsible for initiating the attacks to have no ability to control it, and no in-depth
understanding of the attack mechanism. Usually they're no more than poorly supervised children with
too much time on their hands, and a program for "l33t kR4k1n6" that they downloaded from the
Internet. The next most prevalent threat will probably be from nonusers of your system who have
acquired access through an error in judgment on the part of one of your users, such as loaning his
password to a "close" friend. Unless there's a reason for one of the few actually sophisticated crackers to
single you out, it's highly unlikely that your system will ever be touched by someone who specifically
wants to enter it, for purposes of theft or mayhem.
Regardless of your situation, you can better protect your machine if you can make educated guesses
regarding the type of security threats you're most likely to encounter. A targeted defense plan won't be
universal, but it will protect you from the vast majority of the likely attacks, with a minimum of ongoing
effort on your part.
Troublemakers and Bad Guys by Type
In "Psychology of Hackers: Steps Toward a New Taxonomy" (presented at InfoWarCon, a
cyber-terrorism topics conference), Marc Rogers, M.A., Graduate Studies, Dept. of Psychology,
University of Manitoba, categorizes hackers into seven
distinct (although not mutually exclusive) groups: tool kit/newbies, cyber-punks, internals, coders, old
guard hackers, professional criminals, and cyber-terrorists. These categories are seen as comprising a
continuum from lowest technical ability (tool kit/newbies) to highest (members of the old-guard,
professional criminal, or cyber-terrorist persuasions). This breakdown is largely predicated on differing
psychological profiles, and is limited to categorizing persons who create security problems or breach
security intentionally. Because the psychological categories are less useful for understanding the types
of threats than the patterns of attack will be, and because you're also interested in protection from
unintentional or undirected threats, we've expanded on this list slightly, and used a few more
conventional names.
Script Kiddies. Rogers's "Tool Kit/Newbie," to whom we refer as the script kiddie, is a relatively
computer illiterate individual who is seeking self- and external affirmation through the act of
displaying his (or her) computer prowess. Script kiddies appear to usually be children with too
much time on their hands and no respect or regard for others' personal property. Typical script
kiddies seek to "own" (that is, take over and control) as many remote computers as possible,
which are then used as evidence to support their claims to "elite hacker" status among their peers.
Because they lack computing sophistication, script kiddies are limited to using tools that others
have written and made available for download from a myriad of sites around the Internet. It is not
at all unusual for script kiddies to have no knowledge of the mechanism of an attack that they're
using, or ability to tweak the instrument of that attack to avoid even the simplest of
countermeasures. This in general places script kiddies fairly low on the list of threats about which
one must be concerned. However, a frighteningly large number of these misguided little twits are
wandering the ether, and there are a distastefully large number of computer owners who haven't
bothered to install the simple countermeasures necessary to thwart the attacks.
Thug, subspecies group. A subset of Rogers's "Cyber-punk," the group-loving thug is a sort of
script kiddie on steroids. These individuals are not satisfied with simply "owning" your machine,
but seem somewhat more anger addicted than simple script kiddies and are bent on malicious
damage to your machine—not just evidence that they've broken into it. Thugs may display
slightly more sophistication in their computer skills than do script kiddies, but quite thuggish
attacks can be carried out by script-kiddie methods, and the majority of thugs do not progress
beyond these means. Their attacks are occasionally directed against specific targets with intent,
but are more frequently along the lines of untargeted vandalism, directed at whatever machines
appear most vulnerable or convenient.
Because it helps to have an adequate mental picture of the people against which you're
trying to defend, it may be useful to understand that both script kiddies and thugs are
tremendously dependent on the fantasy "cult of personality" that they accrue around
their (often made up) legendary exploits. They also fall almost exclusively into the
peculiar group of people who want so badly to be "different" that they must form little
clubs in which to do it. Choosing names for themselves such as "Dark Lord" and
"Mafiaboy," they gather together in clannish groups such as the "Cult of the Dead
Cow," or "The Noid." The communal exploits of these groups are then held as
bragging rights in a pathetically testosterone-deprived variant of pack-animals
competing for the position of alpha-male. Perversely, they don't even seem satisfied to
live within their own rules defining their pecking order, as it's not at all unusual for
members of one group to claim membership in another, with a more "impressive" reputation.
These aren't the daring and creative "hackers" of movie fame, or the overly
enthusiastic "computer geeks" from the sitcoms. They're latent bullies who can't hold
their own on a physical playground, so they take to the Internet where their blows can
be struck from the anonymity of a keyboard. It's a pity corporal punishment has
become so politically incorrect these days, because what these people need most is a
good swift kick in their not-so-virtual pants.
If your computer is connected to the network, script kiddies and group-loving thugs
will account for better than 99% of all attempted attacks against it. Thankfully,
defending against them, at least to the point of preventing intrusion, is relatively easy.
If you follow the recommendations of this book, and keep up with your security
precautions in the future, you should be able to block 100% of their attacks.
Thug, subspecies loner. These individuals make up the remainder of Rogers's cyber-punk
grouping, and are the minimally skilled malcontents who prefer the mystique of the loner to the
pack mentality of the group-loving thug. The loner thug accounts for fewer security problems,
partly because they appear to be a more rare breed, and partly because they've no need to accrue a
"body count" as they've no peer group to parade it in front of. Unfortunately this means that these
attackers tend to be somewhat more targeted in their attacks than the group-loving thugs, who
typically move on to easier prey if your system puts up the slightest resistance. Loner thugs tend
to function most often in "retribution" mode, attacking systems against which they feel they've
some grievance (or whose owners or users they simply don't like for one reason or another). They
can also be quite single-minded and may iterate through all known attacks against a single host,
rather than the more typical pattern of the group thugs, who usually iterate a single attack through
all known hosts.
Opportunistic Tourist. These people are the computer equivalent of the folks who check all of the
pay-phone coin-returns as they walk by, or the manufacturing plant visitor who decides to grab
an unofficial souvenir when nobody's looking. Not (typically) out looking to cause trouble, the
opportunistic tourist takes advantage of a noticeable security flaw, but won't actively work to
create one. Usually, they're more prone to causing accidental damage than to intentional thievery
or harm. However, if a hole in your security is large enough to be noticed by casual bystanders,
it's certainly large enough for someone who's out searching for vulnerabilities.
Occasionally, you may find an accidental tourist who has wandered through security with
absolutely no intention of doing anything wrong. This happens (or at least is observed to happen)
most frequently, it seems, to the most completely naive computer users. The people who can sit
down in front of a machine and start fiddling, with no idea what they're doing, are the most likely
to invoke collections of events that a better-trained user would know to avoid as disallowed. This
isn't likely to happen frequently, and it tends to freak system administrators out when it does, but
do realize that it can happen. If it does, the guy whose neck you're about to wring for goofing
around in a root shell isn't even going to know what root means.
Users. Not specifically a "bad guy", but as noted previously, a user can cause a considerable
amount of damage, even just accidentally.
Admin Kiddies. The equivalent of script kiddies wearing cheap white hats, these are the
inexperienced and incompetent people who have landed in positions of computing security
responsibility. It's not a frequently acknowledged fact in the computing security industry, but
although the majority of computing security violations aren't directly caused by unprofessional
computing security professionals, the violations are often directly enabled by the negligence of
these poseurs. It's sometimes difficult to tell whether these people should be considered to be
"bad guys" or not. They don't often intend to do your security harm, but they are quite frequently
willing to risk the security of your information to their inexperienced administration, and to
charge you quite handsomely for the privilege.
Make no mistake, there are fantastically talented computing security professionals out there, and
a great many more who are thoroughly competent and consummately professional in their
concern for executing their jobs. There are also, however, a large number of people who claim to
be security experts, who are much less concerned with your computer security than they are with
their job and/or paycheck security. Beware any security expert who poses firewalls, or any other
buzzword security solution, as the answer to any and all security ills. Especially distrust those
who promote the use of known-vulnerable software as a corporate standard, and then construct
complicated and expensive methods to protect against that software's inherent flaws.
Admin kiddies prey upon users' ignorance of computing security topics to sell themselves as
experienced professionals, and to sell Cracker Jack-box security solutions as effective defensive
measures. To protect yourself, you need to stay abreast of security topics well enough to
intelligently evaluate the performance of, and remedies proposed by, your computing security
staff. If you don't take responsibility for providing educated supervision, you'll be lucky if your
security staff is really taking responsibility for securing your computers.
User Malcontents. Rogers's "Internals," these are disgruntled employees, former or current
students who didn't make the grade, or any number of other legitimate users of a system who can
attack security from the inside. Legitimate access to your system allows for all manner of
illegitimate activity, from one end of the threat spectrum to the other. A legitimate user can
export sensitive data, tie up resources, or do anything that an external illegitimate attacker can—
only much, much more easily. If you have any reason to expect that one of your users is likely to
take action against your system, remove that user as quickly as you can.
To illustrate the simplicity with which a user can wreak havoc, you might want to try the
following shell script on a machine that you don't mind sacrificing and then watch it grind to a
halt almost instantly. Enter the following into a shell script named bar.csh, make it executable,
and run it:
#!/bin/csh -f
# Each copy of the script launches two more copies of itself in the background.
./bar.csh &
./bar.csh &
Don't be surprised when your shell shortly refuses to execute any more commands. It may not
even come back to life after you kill and restart it, and the behavior of GUI applications will be
unpredictable. Restart your machine to make certain that all running copies are dead.
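One mitigation worth knowing about: per-user process limits blunt this kind of attack before it exhausts the machine. In sh-family shells the knob is the ulimit -u builtin (csh/tcsh use limit maxproc); the value below is illustrative:

```shell
# Cap the number of processes this shell (and its children) may create,
# so a runaway self-spawning script dies instead of taking the box down.
bash -c 'ulimit -u 200; ulimit -u'    # prints 200
```

With a cap in place, the fork bomb's spawn attempts start failing long before the process table fills.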
Explorer/Adventurer. Falling either into Rogers's "Coders" or "Old Guard Hackers," the explorer/
adventurer is closest to the "hackers" of the movies and media. These are typically hackers in the
true sense of the word, often obsessively curious about the workings of computers and networks
with which they aren't familiar. Alternatively they may find the logic and complexity of computer
security systems to be a stimulating mental challenge and may approach trying to outthink the
security system designer as a high-adrenaline sport. Unless they've a reason to want to damage
your system, creating real problems for you is probably very far from these people's minds
because it's directly against the code of ethics to which real hackers frequently subscribe. They
don't, however, have much regard for the privacy of your information, and display what can only
be described to an outsider as a disdain for any security that you might have put in place. (It's
actually much more complicated
than that, but unless you can get inside the way a hacker thinks, there isn't a good word for it.)
Hacker culture makes a semantic distinction between "white hat" and "black hat" hacking and
cracking. The former group religiously believes that if they violate your security, but do no harm
(and potentially even then inform you of the hole, so that you can fix it), that they've definitely
done nothing wrong, and probably done something right. The latter believes—well, it's difficult
to say, exactly, not being in their heads, but something along the lines of "if you're dumb enough
to put your machine online, you deserve what happens to it" is probably close enough.
Hackerdom takes these concepts of good and evil to the farthest definable extreme, producing
loosely organized groups of people who act on the principles of "dark side hackers" and "samurai." These
terms describe the fundamentally opposed forces of malicious hacker-turned-cracker, and the
freelance white hats who see it as their mission to stop the dark-side hackers.
Interdicted Real Programmer. Not someone you want to get in the way of, he's usually the best
programmer on a project, and he's usually annoyed because management has stuck yet another
stupid wall between him and getting his job done. The interdicted real programmer isn't actually a
bad guy, but if your security system is getting in the way of him working on his programming
project, he'll make it look like Swiss cheese in short order. "Real Programmers" (the Jargon
File's entry on the term, and its "Story of Mel," might be enlightening) are typically professional
hackers of the wizard variety, and they usually
consider the current coding project to be the single highest priority in their computing world.
Work on the project takes precedence over anything else, which frequently means that things
such as firewalls, security or access policies, and other "trivial annoyances" that inhibit coding
progress are ignored, subverted, or eliminated, whichever is most expedient.
If you're managing one of these people, it's little use to tell him "security is part of your job." If
he's working on a security product, it's part of his job. If he's worried about whether his
development platform is secure, it's part of his job. If you had the network guy change the rules in
the firewall because you didn't want the sales staff downloading MP3s, and your real programmer
now can't get to a site he needs for some source code, eviscerating your security is now his job.
This isn't to say that you should never hire real programmers if you want to have your system
remain secure. There's abundant evidence that almost all real, significant computing work that
can't be done by a pack of trained monkeys is done by highly skilled individuals, and that one
highly skilled programmer is almost impossible to replace, no matter how many lesser
programmers you add to the project. In The Mythical Man-Month (Addison-Wesley, 1975, ISBN
0-201-00650-2), Fred Brooks postulates Brooks's Law, which states that adding manpower to a late programming project makes it
later, and this has been proven time and time again. Your highly skilled programmers are
therefore not replaceable by less skilled and more docile ones. Instead, you're much better off
learning to let them work in the environment that they require, and allowing them to do the
programming you require while impeding them as little as possible. If security for their
development environment or product is an unavoidable concern, you might be better off letting
them handle as much of it as possible themselves. They'll typically find security measures put in
place by a lesser programmer a passing amusement, but would consider a breach of security that
they had implemented to be a personal insult and a black mark against their reputation, so they're
not likely to take the responsibility lightly.
Spooks and Spies. Yes, Virginia, there really are professional industrial espionage experts. It's
little use worrying much about them, however, because if you've got someone with a professional
interest in getting at your computer's secrets, what you can learn from this, or a dozen more
books, is going to be of only passing utility. You're going to need professional help if you want to
stop a professional. Certainly, the techniques you'll learn in this book will make a pro's life more
difficult, and you can go a long way toward making your machine invulnerable to network
attacks by following some relatively simple precautions. However, if pros are getting paid to steal
your data and they can't do it over the network, they'll have little compunction about breaking in
and simply stealing your computers. Because it seems most pros look at such physical methods as
relatively low class, they might be more likely to have forged some company letterhead for
Apple, or some other major software vendor, and then send you a free software upgrade to OS X
10.3 (or whatever), complete with the backdoor they require to access your machine. Or maybe
they'll just set up a fake company and solicit your employees with the hope of a new, higher-paying job, or even hire them to pick their brains and debrief them of the very information you've
worked so hard to keep private.
If there's little or no money to be made by stealing your information, you probably have little
concern about the professional cracker. Unless they've decided to use your machine as a stepping-stone to cover their tracks into some more important target, they have more important things to do
than to crack into machines that can't pay the bills. On the other hand, if there's an economic
incentive to someone else having your data, there's probably someone else willing to pay at least
that much to steal it from you. People get murdered for a few hundred thousand dollars. If you're
a college researcher working with a pharmaceutical company and your research could make them
millions, how much do you think their competition is willing to pay, and how far would someone
go to get it?
Terrorist. Finally, there are cyber-terrorists, who operate for no reason other than to wreak havoc
upon some target. These people are something like super-thugs, though they're probably much
more selective about their targets. The world to this point (thankfully) hasn't seen much activity
from cyber-terrorists. Most things that the media ascribes to cyber-terrorism seem more likely to
come from thuggish sorts with above average abilities. For example, the recent attacks on the
root nameservers that kept DNS traffic fouled up for long periods during late 2002 could easily
have been intended to be terrorism. It seems more probable, though, that it was nothing more
than the work of some thuggish crackers tackling a more-serious-than-usual target. The attacks
have been neither well enough organized nor effective enough to suit a terrorist. The attacks
could have done considerably more damage, and wreaked considerably more havoc, with only a
little more effort and a bit better planning. Given what we've seen the non-cyber version of
terrorists do, it seems unlikely that they'd be satisfied with making Web pages fail to load one out
of three mouse-clicks. (And frankly, given that a certain OS vendor sells products that through
stubbornly poor design cause more damage than these attacks have, what self-respecting terrorist
is going to formulate an attack that's less effective than simply selling crappy software?)
Occasionally what appear to be honest cyber-terrorists do pop up, mostly (to this point) acting
against small segments of the population by doing things such as defacing government or public
service Web sites, or disabling utility company computer systems. Larger-scale, and/or more
damaging types of attacks can be effected, however, and as it appears that we are moving into a
period of history with heightened terrorist activity, it's reasonable to assume that the terrorists
will make use of the Internet to the best of their abilities. Most likely, your major concern will be
against such automated attacks as the script kiddies and lower-ability thugs produce. Unless your
machine has some unique reason to be singled out for attack, a terrorist is probably not going to
address it personally. If you are maintaining a machine with sensitive government information,
the government is probably going to give you rather specific instructions on how to protect it
from expected attacks. On the other hand, you don't need to be a government entity to be a likely
target of terrorism. If you're maintaining a machine that, if compromised, would negatively affect
your local or national economy, it's probably something that a terrorist would consider trying to
knock down. Only you can decide the degree to which you think you could be a likely target, so
only you can determine whether the precautions we cover in this book will be sufficient, or
whether your machine really needs professional help.
Of course, people don't necessarily fit these distinctions perfectly, so there will always be people causing
mischief who fit some blend of these types. They are, however, reasonably representative of the types of
troublemakers that we've heard of, noticed, had run-ins with, or chased out of our systems over the
years. If you're interested in a semi-real-time picture of the current crop of malcontents roaming the Net,
and the security issues they are causing, we highly recommend staying abreast of a number of security-related resources, such as the relevant newsgroups and mailing lists such as Bugtraq, and even
keeping a finger on the pulse of the troublemakers themselves by watching traffic on the IRC channels
where they congregate to trade stories and software. (Ircle, or any number of other IRC clients, will get you online. We'll cover a brief introduction to using
this tool to look for the troublemakers in Chapter 9.) You'll find many more resources to watch listed by
these sites, as well as more that we've listed in Appendix B.
Even if you're not interested in observing the beasts in the wild, keeping an eye on some of these will
give you advance warning of trouble brewing, and possibly the information you need to protect your machines.
Which Ones, and What to Worry About
What variety of troublemakers you're most likely to find trying to break in to or damage your machine
will, as you've hopefully gathered by considering the profiles already described, depend entirely on the
intent of your machine and the sensitivity and value of your data. The likely suspects, of course, aren't
the only ones that might hammer your hardware, so you do need to consider implications of even the
unlikely attacks, but if you eliminate the likely ones first, you'll be covered against the vast majority of threats.
You're in a much better position to judge the likely attacks against your system than we are, but do
please take the motives and mindsets of the attackers seriously. Almost everyone I've ever known who
has had the thought "Oh, that'll never happen to me, why would they bother my machine?" with respect
to security has had their security cracked, and their machines damaged—sometimes more than once.
Several of these people are computer management professionals who should have known better. They've
been cracked by script kiddies and thugs purely because of laziness in keeping up with software patches.
Others are average home users who've suffered similar attacks via their dial-in or cable modems. The
closest I've come to a cyber-terrorist was someone who mailed a death-threat to the U.S. President's cat
from one of OSU's public-access terminal rooms. Although it seems silly in retrospect, the Secret
Service agents who showed up didn't think it was amusing. And although I never did get the complete
story from the FBI, my desktop machine at the company where I worked in the early 1990s was cracked,
and used in what was probably an incident of industrial espionage. The incompetence of the company's
IT staff cost several people their jobs, but I don't believe that the perpetrators were ever found. These
things happen, they happen frequently, and to everybody—there's nothing about the fact that you don't
want to be cracked, or don't think that you'll be cracked, that will protect you. Seriously considering the
possibilities, and working to protect your machine from those that are likely to happen, will do much to keep you safe.
If you're a home user, you probably won't be beat upon by professional corporate spies, but you will be
subject to attacks by script kiddies and thugs. Probably daily. Or, if trends in security proceed as they
have recently, within a year or so you will likely find attacks hitting your machine several times an hour
when you're online. Opportunistic tourists aren't likely to find their way onto your machine—unless, of
course, you let your houseguests fiddle with your computer. You will also have all the problems
inherent in having users on machines. Even if your machine is otherwise secure, data that you or your
users allow out through careless network transmissions will be picked off the network by the kiddies
who have successfully broken other machines around you that are less well managed.
If you're managing a public access computing cluster, you will probably encounter the opportunistic
tourist more frequently than most because you'll have a never-ending stream of users whose curiosity
about just what they can get away with will override their better sense.
If your data is valuable, you need to consider how valuable it is, and what someone would be willing to
do for that value. Money is a motivator. You've still got all the problems of those who have nothing to
motivate anyone to crack their systems—and they have enough to worry about—as well as the fact that
people actually will benefit if your security fails.
Everybody Else
Finally, there's everybody else out there in the computing and network world. You might not think of
them as a threat, or even an issue in planning your security strategy, but in many cases they play a
significant role.
Consider the fact that in the recent Distributed Denial of Service attack against (Steve Gibson's
own account, at, is an amusing read, and provides good insight into the mind of the script kiddie and
the thug, especially in the excerpted bits of communication between them), machines belonging to 474
random MS Windows users around the Internet participated in knocking the company off the Net. It's
highly unlikely that any of these users intended to attack, or had anything against the
company, yet participate their machines did. This attack and several more following it were perpetrated
by a 13-year-old, self-proclaimed thug using pure script kiddie techniques, namely an IRC attack-bot
written by a considerably more senior cracker. As is typical for the breed, he appears, despite claims to
the contrary, to know little to nothing about how the "bot" works. Also typical, in the ongoing quest for
self-aggrandizement, he made minor modifications to it such as changing the name, then claimed it as
his own work and unleashed its destructive power on an unsuspecting company that he felt had
indirectly insulted him. In the process, it co-opted the resources of 474 "innocent" Windows machines
and turned them into zombies participating in the attack. What can you do about hundreds of other
machines that you've no connection to and no control over? Nothing immediate or direct, but it's
complacent acceptance that allows people to
continue running insecure and vulnerable software that even a clueless script kiddie can crack. Keep up with the security
vulnerabilities in the software that's out there, make very sure you're not running it, and then work to make it
unacceptable for the people around you to run it as well. The fact that "everyone's doing it" isn't an
excuse to continue; it's the reason there's a problem.
Perhaps of greater concern (and also demonstrated conveniently by attacks against, which you
can read about at, there are new methods out there that make use of
machines that haven't even been compromised to execute their attacks. They use defects in the basic
design of various software fundamental to the working of the Internet to perpetrate attacks directly,
rather than to compromise the machines running the software. This will be a more difficult problem to
solve than that of individuals running vulnerable software: The machines effecting the attack may be
(and in the case of the attacks on, were) doing exactly what they're supposed to do, and what
is required of them to carry on the transmission of the normal Internet traffic that we expect of them on a
day-to-day basis. Fixing the problem in a general sense is going to require either rethinking the way we
use our network resources or inventing and installing some clever filtering software on every ISP's
servers. Neither of these is likely to happen overnight, but at the least you can understand the potential
threats, and be supportive of those changes that are likely to effect valuable protections to other network
citizens. Some of the changes might be inconvenient, and are almost certain to be unpopular, but they're
nowhere near as inconvenient as having your machine completely and unpreventably bashed off the Net
by 13-year-old malcontents.
In this chapter, we hope we've given you a good picture of the people you're up against—some of whom
aren't even actually your enemies, but who can still do plenty of damage to your system if you don't take
precautions to protect it against their actions. As we've repeated with other security topics, it's important
to think about these things in terms of the worst possible scenario. Some people might think that you're
being overly paranoid; others might think that you've taken a very negative stance toward other
computer users and the threats they create. You have a choice between listening to them—having your
system be vulnerable and being part of the larger problem—and working to limit the vulnerabilities and
to protect your and their interests in spite of their protestations. Don't let the fact that it's a naturally
distasteful thing to think the worst of your (and other computer systems') users get in the way of your
need to consider what might be done to your machine. It is people, not evil computers, that are behind
attacks on computers. If you refuse to consider the source of the violence because it's distasteful, you'll
be in the same boat as a host of other misguided individuals and groups who refuse to focus on the
human causes of any number of other forms of violence. It's not a good boat to be in. There are evil
people out there, and they'll do evil things, using whatever tools are at their disposal. If you're not
prepared, they might do them to you.
As with all areas of computer security, you can probably eliminate 90% of the likely threats to your
system with little effort beyond conscientiously keeping up with patches to the systems that have been
found to be vulnerable.
Part II: Vulnerabilities and Exposures: How
Things Don't Work, and Why
4 Theft and Destruction of Property: Data Attacks
5 Picking Locks: Password Attacks
6 Evil Automatons: Malware, Trojans, Viruses, and Worms
7 Eavesdropping and Snooping for Information: Sniffers and Scanners
8 Impersonation and Infiltration: Spoofing
9 Everything Else
Chapter 4. Theft and Destruction of Property: Data Attacks
Keeping Data Secret: Cryptography, Codes, and Ciphers
Data-Divulging Applications
Steganography and Steganalysis: Hiding Data in Plain Sight, and How to Find and Eliminate It
Regardless of the intent of your computer, security involves keeping your data correct, private, or both.
Even if you can keep your system completely free of intruders or software that might divulge your data
without you intending it to, if you pass your data across a network, it may be examined and/or modified
in transit. Because it's difficult to be certain that no software on a system might divulge data
unintentionally, it's best to treat all critical data as though it were publicly visible at all times. This
means that even on a machine that you consider otherwise secure, it's wise to strongly encrypt data that
would be damaging or dangerous if it were to become visible.
To protect data from examination either on or off your computer, you need to convert it to a form that
cannot be easily accessed without your permission, and that preferably, if changed, can be easily
detected as corrupted. This is the role of cryptography: the science of developing and applying
techniques that allow authorized persons full access to data while converting it to nothing more than
random noise for those without authorization.
This chapter covers the basic tenets of cryptography, including several cryptographic schemes from
historic to current technology. It then outlines some of the ways that your data can be accessed or made
insecure without actual outside intervention (such as by programs that act with your authority, to do
things that you didn't intend for them to do), though these are so varied in aspect that the best we can do
is warn you of the things you need to watch for, and hope you'll be clever enough to catch problem
applications before they do harm. It also examines steganography, which is the application of techniques
to convert data into an invisible, rather than an unreadable form. Data converted by steganography is
then overlaid into some carrier data stream, with the intent that the carrier will not be sufficiently
perturbed for those observing it to notice the change. Data embedded by steganography is intended to be
hard to find for those who don't know what to look for, and sometimes to be difficult to eliminate from
the content in which it is embedded, but it is not usually intended to make the data difficult to read.
Steganography is frequently applied to embed explicitly noncryptographic data in various files, such as
the watermarking of digital images by embedding copyright information directly into the visual image
itself. In this form it is important that the information be essentially invisible in the image, but that it is
still recoverable easily, even after considerable manipulation of the image.
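To make the overlay idea concrete before we get to the details: here is a minimal Python sketch (a toy illustration invented for this discussion, not any production watermarking scheme) that hides the bits of a short message in the least significant bits of a byte stream. In a real application the carrier would be image pixels or audio samples, where flipping the lowest bit is imperceptible:

```python
def hide(carrier: bytes, message: bytes) -> bytes:
    """Overwrite the least significant bit of each carrier byte with one message bit."""
    bits = [(byte >> (7 - i)) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(carrier), "carrier too small for message"
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the low bit, then set it to the message bit
    return bytes(out)

def reveal(carrier: bytes, length: int) -> bytes:
    """Reassemble `length` hidden bytes from the low bits of the carrier."""
    bits = [b & 1 for b in carrier[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    )

cover = bytes(range(64))      # stand-in for pixel data
stego = hide(cover, b"hi")
assert reveal(stego, 2) == b"hi"
```

Notice that each carrier byte changes by at most 1, which is exactly why an observer looking at the carrier alone is unlikely to notice anything amiss.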
Keeping Data Secret: Cryptography, Codes, and Ciphers
Considering the extent to which we make use of various forms of cryptography in our day-to-day lives, most people display an immense lack of understanding of the implications of this use, and of the conclusions that may be drawn with respect to other encodings or encryptions that are proposed. Codes and ciphers are neither necessarily difficult nor necessarily secure. Few people realize that the
alphabet that they use every day in writing is a written code substituting for spoken language. Many, however, will implicitly and comfortably trust a program or Web site that claims that all of their personal information will be encoded to protect it from illegitimate access. If customers are willing to believe that "encoding" their information makes it secure, but aren't well-enough informed to
question the validity of the claim that encoding their information actually makes it secure, what reason does an admittedly unscrupulous merchant have to actually invest in a secure encryption system? The lesson to be learned, which will be amplified upon later, is that no encoding or encryption should be considered to be secure if it hasn't been exposed to the light of public scrutiny and verified to be
secure in practice. This section details several of the currently popular encoding and encryption systems used in computer security. Some of these apply directly to Mac OS X user passwords, which are covered in the next chapter because they are used in the available authentication systems. Others described here are used in other security systems you may encounter while using your computer, such
as online merchant customer information systems, the Mac OS Keychain, or in encrypting files or network transmission such as email, and are interesting both by way of comparison and as a reference for these other environments.
The lack of general lay understanding of the cryptography field isn't at all aided by the fact that the language used is rather jargonistic, and the meanings of key terms are dependent on the background of the speaker and the context of the discussion. Depending on whether you ask a cryptologist or a mathematician the definition of the word "code," you'll get two different answers, and a computing
security professional might give you a sideways twist on a third.
The cryptologist will tell you that a cipher is a modification of a message (not necessarily a textual message) by the algorithmic substitution of new data based on the content of the original message, and that a code is a cipher in which the substitutions are linguistic in nature. That is to say, that if you were speaking with one friend about a roommate's birthday, and needed to keep the content of your
discussions private because the birthday boy might be listening, you might devise a code for discussing your preparations. You might choose to equate "cake" with "llama", "bake" with "spank", "invite" with "tickle" and "friend" with "badger." A conversation between you and your friend might come out something like this:
"Hi, Jim. Just checking—did you spank the llama yet?"
"You bet, Mary, and I also remembered to tickle the badgers."
Which is unlikely to convey much meaning to your roommate beyond the fact that you're either speaking in code, or slightly nutty. Such a system, however, doesn't need to be particularly invisible to be quite effective. One regularly hears about the FBI's scanning of Internet traffic to find people transmitting information for the purposes of commission of a crime (via its Carnivore system).
It's reasonable to expect that they are looking for and flagging messages with potentially interesting words, phrases, or patterns in them. In light of the recent terrorist activity, it's likely that the FBI is flagging and examining communications containing such words as "bomb," or "nuclear." People wishing to discuss the construction and delivery of a bomb to its target, however, could
almost certainly develop a simple code based around baked goods and grandma's house that would completely remove all suspect words from their communications, while still conveying all the necessary information to a coconspirator about the progress and delivery schedule. This is, in fact, one of the reasons that granting the government broad invasive powers with respect to communications as a
response to National Security needs is a Bad Idea. Such knee-jerk reactions decrease online users' freedoms and liberties, while affording no real increase in security. Ben Franklin had a few choice things to say about those who would try to make that exchange.
A particularly aptly constructed code, with substitutions very carefully chosen to avoid sounding out of place would allow two people to converse without necessarily raising suspicion that they were actually speaking in code. "You have a midterm", for example, would probably make a good substitution for "Invite the guests" in a college setting where "Remember, you have a midterm today" is
unlikely to raise an eyebrow. Noun-for-noun, verb-for-verb or phrase-for-phrase substitutions, however, are not strictly required. You could just as well substitute the numbers "2" for "invite" and "7" for "friend," though "I also remembered to 2 the 7s" is somewhat more obviously encoded.
Speaking mathematically, however, a code doesn't need to be substituted at the level of linguistic components. Morse code, which substitutes intermixed long and short sound periods for letters, is legitimately a code in mathematical parlance, though it would be considered a simple substitution cipher, and not properly a code, by a cryptologist.
Regardless of the exact definition used, codes universally involve the substitution of linguistic or alphabetic components of a message with other linguistic or symbolic components through the use of a dictionary. This dictionary provides equivalences between words or characters in the unencoded, or plaintext, message, and words, character groups, or symbols to be transposed into the encoded
version of the message. The dictionary may be derived in some algorithmic or automated fashion, but there just as easily may be no definable relationship between the unencoded and encoded versions other than through the existence of the dictionary. This implies that a code is likely to be unbreakable without the construction (or other acquisition) of a corresponding dictionary, but that once such a
dictionary has been acquired, any message using that code is easily decodable through the use of the dictionary.
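The birthday code above is nothing more than a dictionary lookup, which the following Python sketch makes explicit (the codebook is the invented one from the example; word-by-word substitution is an illustrative simplification, since a real code might substitute whole phrases):

```python
# The example codebook from the birthday conversation.
codebook = {"cake": "llama", "bake": "spank", "invite": "tickle", "friend": "badger"}
# Decoding is just the same lookup with an inverted dictionary.
reverse = {v: k for k, v in codebook.items()}

def encode(message, book):
    # Substitute only the words present in the dictionary; leave everything else alone.
    return " ".join(book.get(word, word) for word in message.split())

encoded = encode("did you bake the cake yet", codebook)  # "did you spank the llama yet"
decoded = encode(encoded, reverse)                       # round-trips to the original
```

Note that without the dictionary there is no algorithm to attack: the only relationship between "bake" and "spank" is the codebook entry itself, which is why acquiring the dictionary breaks the code completely.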
On the other hand, if you were corresponding with your friend via writing, you might find it easier to use a cipher to hide the content of your message. A cipher uses algorithmic substitution upon the message at the character level (or, more properly, at a level ignoring the linguistic structure), making it much easier to apply. A code dictionary listing equivalences for every possible word you might
want to transmit without your roommate being able to decipher the message would probably be impractical to construct. With a cipher, however, construction of the dictionary is not necessary, as the substitutions are algorithmic in nature, and carried out at the letter or symbol level. You might, for example, choose to substitute every letter you wished to encode with the letter following it in the
alphabet. This wouldn't be a particularly difficult cipher for your roommate to guess, and subsequently decipher, but regardless, the prior discussion in this form would be rendered like this:
"Hi, Jim. Just checking—did you cblf the dblf yet?"
"You bet, Mary, and I also remembered to jowjuf the gsjfoet."
Because application of the cipher isn't limited to a predetermined dictionary of words, it would be equally easy to encipher the entire exchange by using this scheme, reducing the chance that your roommate would be able to make intelligent guesses at the content from the context:
"Ij, Kjn. Kvtu difdljoh—eje zpv cblf uif dblf zfu?"
"Zpv cfu, Nbsz, boe J bmtp sfnfncfsfe up jowjuf uif gsjfoet."
This is, in fact, a very simple method of enciphering text, credited in its initial form to none other than Julius Caesar, who, it is said, implemented it in the form of a pair of rotatable rings on a staff, each containing the alphabet in order. Setting any character in one ring above a different character in the other allows one to encipher and decipher text written at any offset from the plaintext version—
providing, of course, the offset is known.
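Caesar's rotating rings are easy to sketch in code. This Python function (an illustration written for this discussion, not a tool from the book) enciphers at any offset; an offset of 1 reproduces the "next letter" cipher used in the example above, and deciphering is simply the negative offset:

```python
def caesar(text, offset):
    """Rotate each letter by `offset` places in the alphabet; pass other characters through."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + offset) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

ciphertext = caesar("did you bake the cake yet?", 1)   # -> "eje zpv cblf uif dblf zfu?"
plaintext = caesar(ciphertext, -1)                     # -> "did you bake the cake yet?"
```

Because the alphabet wraps around (the `% 26`), any pair of ring settings that differ by 26 are equivalent, which is why there are really only 25 distinct nontrivial offsets to try when guessing.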
A particularly common variant of this type of simple substitution cipher is the rot13 cipher, which, instead of transposing each character by the one following it, transposes for the one rotated 13 characters ahead. An appealing feature of this cipher is that because the English alphabet is 26 characters in length, the same algorithm that creates (enciphers) the enciphered text (ciphertext) can be used to
convert the ciphertext back to the plaintext message. The rot13 cipher is so common in the Unix and Usenet News environment that the ability to encipher to and decipher from it is built into many Usenet News clients and email readers. For those lacking such capability, the tr command or a short Perl script can suffice with only a bit of extra cutting and pasting:
% tr 'a-zA-Z' 'n-za-mN-ZA-M'
Hi, Jim. Just checking - did you bake the cake
Uv, Wvz. Whfg purpxvat - qvq lbh onxr gur pnxr
You bet, Mary, and I also remembered to invite
Lbh org, Znel, naq V nyfb erzrzorerq gb vaivgr
the friends.
gur sevraqf.
% tr 'a-zA-Z' 'n-za-mN-ZA-M'
Uv, Wvz. Whfg purpxvat - qvq lbh onxr gur pnxr
Hi, Jim. Just checking - did you bake the cake
Lbh org, Znel, naq V nyfb erzrzorerq gb vaivgr
You bet, Mary, and I also remembered to invite
gur sevraqf.
the friends.
If you prefer Perl, the following snippet of code will do the same, wrapped in a loop that reads each line from standard input:
while (my $line = <STDIN>) {
    $line =~ y/a-zA-Z/n-za-mN-ZA-M/;
    print $line;
}
The problem with simple substitution ciphers such as this is that the nonrandom patterns of the English language (or actually, it seems, any human language) allow an intelligent person in possession of a sufficient amount of enciphered text, to mount a very precise, and almost inevitably successful attack against the cipher algorithm. Many longtime denizens of the Usenet News hierarchy have
become so familiar with the rot13 cipher that they can read many words in rot13 as easily as they can in plain English. This overwhelming weakness lies in the fact that the substitutions are carried out in a perfectly regular way, and really amount to nothing more than a definition of a new alphabet for the same language. The characters used look just like the characters used in common English
writing, but in the rot13 alphabet, n is pronounced like the English a, o is pronounced like the English b, and so on. If one studies the language that has been encoded, patterns in letter usage begin to emerge, and it becomes relatively simple to determine the encoding algorithm, and by that, the original plaintext message. For example, if you examine letter frequency tables for the English language,
you will find that the letter e is used far more frequently than any other letter. Not coincidentally, if you count up the various characters used in the ciphertext of the messages above, you'll find that the most prevalent letters are r, v, and g, with 13, 8, and 7 uses respectively. Depending on who you ask regarding letter frequency tables, they'll tell you that t, followed by a, o, and i (the latter 3 with very
nearly identical frequency) are the next most frequently used letters in English. With these bits of data it isn't too difficult to start back-substituting letters into the ciphertext, and very quickly arrive at the deciphered plaintext message. Just working with the e, the second line of the ciphertext becomes (with bold uppercase plaintext predictions inserted):
"Lbh org, Znel, naq V nyfb eEzEzoEeEq gb vaivgE guE sevEaqf."
How many three-letter words do you know that end in 'e'? How many single-letter words do you know? If either v or g stands for t, which one is really it? Based on that, what do b and u stand for? As you can see, this type of cipher provides little real protection for a message, though it can make it more difficult to read for those insufficiently immersed in the alternate alphabet to be able to read it fluently.
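The counting step of this attack is easy to automate. The Python fragment below (an illustrative sketch written for this discussion) tallies letter frequencies in the rot13 ciphertext of the example exchange; r, the rot13 image of e, dominates, just as the frequency tables predict, though exact counts can differ by a letter or two depending on which lines you include:

```python
from collections import Counter

ciphertext = ("Uv, Wvz. Whfg purpxvat - qvq lbh onxr gur pnxr. "
              "Lbh org, Znel, naq V nyfb erzrzorerq gb vaivgr gur sevraqf.")

# Count letters only, case-insensitively; punctuation and spaces carry no signal here.
counts = Counter(ch for ch in ciphertext.lower() if ch.isalpha())
for letter, n in counts.most_common(5):
    print(letter, n)
```

With a longer ciphertext the ranking converges on the frequency table of the underlying language, which is exactly the regularity that makes any simple substitution cipher breakable.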
There are a multitude of interesting Web resources devoted to codes and ciphers, one of the most immediately readable being "The Secret Language," by Ron Hipschman. Ron provides a down-to-earth explanation of simple substitution ciphers, as well as transposition ciphers (a technique that is not covered in this book). He also includes a nice
summary of frequency tables for single letters, letter pairs, initial letters, and common words through four letters in length.
Perhaps the most famous literary discussion of ciphers, however, is contained in Edgar Allan Poe's "The Gold Bug," in which the entire plot of the story hinges on the deciphering of a secret message. "The Gold Bug" can currently be downloaded from the Oxford Text Archive.
To make ciphers more secure, a number of methods have been invented to add complexity to the resulting ciphertext, and thereby make it more difficult to decipher. Interestingly, the most successful of these have not relied on making the algorithm obtusely complex or obscure. Rather, they have relied on the use of an algorithm whose action is permuted in some fashion by the use of a key.
The algorithm can then be well publicized, tested, and verified, yet any given ciphertext cannot be decrypted by simple knowledge of the algorithm; any decryption requires access to the key.
One simple but important adaptation of the simple substitution cipher was invented in the 16th century by Blaise de Vigenere from the court of Henry III of France. It was believed to be essentially unbreakable for several centuries. The Vigenere cipher works very similarly to the initial cipher discussed in this chapter, where letters are substituted by the letter immediately following them in the
alphabet. To make the message much more secure, however, the offset to the substitution letter is not fixed at "the following character" (an offset of 1), but instead may be any offset from 0 to 25. This makes deconvolution of the ciphertext message by examining letter frequency nearly impossible, as the first e in a plaintext message may be substituted with an f, but the second could be substituted
with an o, or a z, or any other letter. Counting the occurrence of characters in the ciphertext is therefore almost useless. To decipher a ciphertext written in the Vigenere cipher, and indeed to encipher it, one needs a key.
Despite the difficulty in deciphering a message in this cipher, the algorithm itself is simple to explain. First, one must understand that although a simple substitution cipher essentially replaces one alphabet with another, this cipher replaces the alphabet in which a message is written with many others. To do this substitution, one constructs a matrix of alphabets as shown in Figure 4.1:
Figure 4.1. The 26 alphabets of the Vigenere cipher.
A key is then chosen with which to encipher the message. Typically the key is a short word or phrase. The plaintext message is written down, and the key written out in a repeating fashion over the entire length of the plaintext message. If you pick "birthday" as the key for the previous exchange, the enciphering of the first line of the conversation would begin as shown in Figure 4.2.
Figure 4.2. A plaintext message and key layout for the Vigenere cipher.
Enciphering the message then proceeds on a character-by-character basis. For each character, the appropriate alphabet to use is looked up based on the corresponding letter of the key. For the first character, H, the key character is B; therefore we look to the row of the matrix constructed in Figure 4.1 that starts with a B, and look up the entry that corresponds to the H column from the first row (that is
to say, an I). The second character is an i, enciphered from the alphabet starting with I, so it gets replaced with a q. The ciphertext is constructed in this fashion for the entire plaintext message. Note that the e's are going to be encoded from the alphabets starting with B, H, and D, so the ciphertext is going to contain different characters for them, defeating simple attacks by character frequency in the ciphertext.
Special characters such as spaces can be ignored (causing them to disappear in the output), absorb a character of the key, resolve to a default character from the alphabet, or be explicitly encoded into the cipher with the addition of more columns in the matrix shown in Figure 4.1. Especially for spaces, either ignoring them or encoding them into the alphabet matrix is a better idea than allowing them to
absorb a key character: Spaces that pass through to the ciphertext unencoded give an attacker potentially useful hints through analysis of the likely words based on character count. If you add a space entry to the matrix in Figure 4.1 following the Z entry on each row, and remove the punctuation, the message then encodes as shown in Figure 4.3. Deciphering the ciphertext is exactly the reverse of the method used for
enciphering the text.
Figure 4.3. The message enciphered with the Vigenere cipher and with "BIRTHDAY" used as the key.
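The enciphering procedure just described is easy to sketch in code. The following Python fragment is an illustrative toy, not production cryptography: it implements the Figure 4.3 variant, with the space included as a 27th symbol and all other punctuation dropped.

```python
def vigenere(text, key, decipher=False):
    """Vigenere cipher over A-Z plus space as a 27th symbol;
    all other characters are simply dropped."""
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
    out = []
    i = 0                                  # position within the repeating key
    for ch in text.upper():
        if ch not in alphabet:
            continue                       # punctuation is removed first
        k = alphabet.index(key[i % len(key)].upper())
        if decipher:
            k = -k                         # deciphering reverses the offsets
        out.append(alphabet[(alphabet.index(ch) + k) % len(alphabet)])
        i += 1
    return "".join(out)

# H under key letter B becomes I; i under key letter I becomes Q,
# just as in the worked example:
print(vigenere("HI", "BIRTHDAY"))          # IQ
c = vigenere("HI BOB", "BIRTHDAY")
assert vigenere(c, "BIRTHDAY", decipher=True) == "HI BOB"
```

Note that the two e's of a word like "meet" would draw different key letters and so emerge as different ciphertext characters, exactly the property that defeats single-letter frequency counting.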
Note that spaces still occur in the ciphertext, but that due to the space now being a valid encoding character in each of the new alphabets, a space in the ciphertext rarely corresponds to a space in the plaintext, defeating attempts to use word-size statistics against the encoding. In this case, the ciphertext character with the most occurrences is Q, but this character stands for i, a space, and y in different
places in the ciphertext. It took until 1863 for an officer of the Prussian military to determine a method in which the length of the key could be predicted. If the length of the key can be guessed, then the ciphertext can be broken into <keylength> different cipher-subtexts, each encoded with a different simple substitution cipher. If enough raw ciphertext that uses the same key can be accumulated, then
these subsets of the ciphertext can be successfully attacked by the use of character frequency tables.
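The attack is easy to visualize in code: once a key length has been guessed, slicing the ciphertext into that many interleaved streams yields subtexts, each of which was enciphered with a single fixed alphabet. A minimal Python sketch (the sample string is arbitrary):

```python
def subtexts(ciphertext, keylength):
    """Split a ciphertext into keylength streams; every character in
    stream i was enciphered with the same key letter, so each stream
    is a simple substitution cipher open to frequency analysis."""
    return [ciphertext[i::keylength] for i in range(keylength)]

streams = subtexts("ABCDEFGHIJ", 3)
print(streams)   # stream 0 holds characters 0, 3, 6, 9; stream 1 holds 1, 4, 7; ...
```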
Since that time, a multitude of ciphers have been proposed, used, and broken. The best of them to date are in use protecting your communications in Secure Shell (SSH, discussed in Chapter 14, "Remote Access: Secure Shell, VNC, Timbuktu, Apple Remote Desktop"), on secure Web sites (Chapter 15, "Web Server Security," and Appendix C, "Secure Web Development"), and in applications such as
PGP and GPG (later in this chapter). The most obvious things all successful algorithms have in common are complex keys that must be shared to allow decryption but that must be protected to prevent unauthorized access, and widely published and studied algorithms, allowing cryptography experts to eliminate places where vulnerabilities, such as the ability to recover the key from the ciphertext,
might occur.
To make semantic matters more confusing, although cryptography is properly the practice of employing codes and ciphers, the computing security world has taken to using the term encryption to mean only the application of specific types of ciphers—those being ones that require a key to be exchanged for decryption. Cryptanalysis is therefore the science and practice of studying cryptographic
systems to discover and exploit their weaknesses, and cryptology is the umbrella science encompassing both cryptography and cryptanalysis. More confusingly, the cryptology field uses the terms encoding and encoded in a generic sense to mean the application of any cryptographic technique to transform plaintext data, including the application of ciphers. I could therefore legitimately have used the
term encoded in this chapter where I have used the more precise term enciphered. The computing security field, however, prefers to use encoded to indicate an encoding (or enciphering) that specifically is not meant to obfuscate the content (such as Morse code, or program code in a language like Pascal), and deprecates the use of encipher as a verb entirely, except in cases where cultural bias
makes the use of encrypt unpalatable. This leaves, at least according to IETF RFC 2828, the discussion of proper ciphers that don't meet the RFC definition of an encryption scheme in undiscussable limbo.
Be that semantic soup as it may, we will adopt the only conglomeration of these terms that seems to cover the spectrum while remaining sensible for use in this book.
Encoding is used in the mathematical sense of transformation from one symbol set to another. This may be the encoding of a program's logic as commands in a formal program language, or the substitution of llama for cake. It also includes the possibility of symbolic transformations at the character level, such as the use of Morse code. In transmitting data over the Internet, it's also common to
see information encoded from one form that's incompatible with a transport system into another that is. You'll therefore frequently see binary information, such as images, attached to pure-text transport systems such as email with the image encoded into a textual representation (such as Base64) that the mail transport system can handle.
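As a concrete illustration, Python's standard base64 module performs exactly this kind of transport encoding. Note that this is encoding, not encryption: there is no key, and anyone can reverse it.

```python
import base64

binary = bytes([0, 255, 137, 66])        # arbitrary non-text bytes, e.g. image data
text_form = base64.b64encode(binary)     # safe for text-only transports like email
print(text_form)
# The transformation is freely reversible -- no secret is involved:
assert base64.b64decode(text_form) == binary
```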
Enciphering is used to indicate the application of a character-symbol-level transformation of a plaintext message with the intent to disguise the content.
Encryption is used as RFC 2828 recommends, to indicate the application of an algorithmic cipher that requires key exchange for decoding.
The next few sections briefly outline the currently prevalent encryption methods and how they apply to securing small data tokens such as passwords. While reading these, keep in mind that a message to be encoded (plaintext in the previous examples) does not literally need to be a textual message. Any data stream may be the message, including, but not limited to, images, network traffic, software
executables, or binary data. Nor does the encrypted (ciphertext) output literally need to be textual in nature.
DES: Data Encryption Standard
DES, Data Encryption Standard, is an example of a symmetric, or secret key, cryptosystem. Symmetric cryptosystems use the same key for encryption and decryption.
DES was developed in the 1970s at IBM with some input from the National Security Agency (NSA) and the former National Bureau of Standards (NBS), which is now the National Institute of Standards and Technology (NIST). The U.S. government adopted DES as a standard in July 1977 and reaffirmed it in 1983, 1988, 1993, and 1999. It is defined in the Federal Information Processing Standards
(FIPS) publication 46-3.
DES is a 64-bit block cipher that uses a 56-bit key during execution. In other words, it encrypts data by breaking the message into 64-bit blocks and encrypting them with a 56-bit key. The same key is used for encryption and decryption. It is susceptible to brute-force attacks from today's hardware, and is considered to be a weak encryption algorithm now. In FIPS 46-3 the government allows its use
only in legacy systems, and instead lists Triple DES as the FIPS-approved symmetric algorithm of choice.
Although the algorithm itself is more complex than can be explained in detail here, in basic form it can be understood as follows:
1. The 56-bit key is extended to 64 bits with parity and error-correction data. This becomes key K.
2. The message is segmented into 64-bit blocks.
3. A block B of the message M is passed through a known permutation IP, which rearranges its bits in a predetermined fashion. The result is PB.
4. The PB is injected into a central encryption loop and a counter N is initialized.
5. The incoming block is split into left and right 32-bit halves L and R.
6. 48 bits of K are selected into Kn, based on the contents of K and the iteration count N.
7. A cipher function F is applied to R, using Kn. The output is R'.
8. A replacement PB is assembled from R' and L, in that order (R' becomes the leftmost 32 bits; L becomes the rightmost).
9. The process repeats an additional 15 times from step 5.
10. The final reassembled PB is reversed once more to PB'.
11. PB' is subjected to an inverse transformation, IP', which is the reverse of the initial reordering IP. The output of this operation is 64 bits of the encrypted output for M.
12. The next block of 64 bits from M is chosen as B, and the algorithm repeats from step 3.
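Real DES is too intricate to reproduce here, but the heart of steps 5 through 9 (the Feistel structure) can be shown with a toy stand-in for the cipher function F. The Python sketch below is emphatically not DES; the round function and subkeys are made up. What it does demonstrate is the structural trick of steps 5 through 10: because each round only XORs one half with a function of the other, running the very same network with the subkeys in reverse order undoes the encryption.

```python
def toy_round(half, subkey):
    """Stand-in for DES's cipher function F -- NOT the real one.
    Any deterministic 16-bit mixing function works here."""
    return (half * 31 + subkey) & 0xFFFF

def feistel(block, subkeys):
    """A miniature Feistel network on a 32-bit block: split into
    halves (step 5), mix with a subkey and swap (steps 6-8), repeat
    once per subkey (step 9), and undo the final swap (step 10)."""
    left, right = block >> 16, block & 0xFFFF
    for k in subkeys:
        left, right = right, left ^ toy_round(right, k)
    return (right << 16) | left

keys = [3, 141, 59, 26]                 # made-up 4-round key schedule
c = feistel(0xDEADBEEF, keys)
# Decryption is the same network with the subkeys reversed:
assert feistel(c, list(reversed(keys))) == 0xDEADBEEF
```

DES uses the same structure with 16 rounds, 48-bit subkeys, and a carefully designed F; the decryption-by-reversed-subkeys property carries over unchanged.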
RSA Laboratories has thus far sponsored three DES-cracking challenges, each of which has been solved. Rocke Verser's group at the University of Illinois solved the first challenge in 1997, the Electronic Frontier Foundation solved the second challenge in 1998, and Distributed.net together with the Electronic Frontier Foundation won the third contest in 1999. Information on the latter two contests can be found on the Electronic Frontier Foundation's Web site.
Interestingly, the strength of the basic algorithm behind DES has been affirmed by these cracking attempts, rather than deprecated as insecure. The insecurity lies in its use of only a 56-bit key, which can be attacked in a brute-force fashion quite easily by today's standards. With more than 20 years of scrutiny behind it, there still has been no algorithmic solution devised that can back-calculate the
original message from the encrypted output, or guess the key without an exhaustive search of the entire key space. This, to say the least, is an impressive accomplishment.
DES is being replaced by the Advanced Encryption Standard (AES). AES was adopted as a standard in May 2002 and is defined in FIPS publication 197. AES is a 128-bit symmetric block cipher that uses 128-, 192-, and 256-bit keys. The algorithm can handle additional block sizes and key lengths, but the standard has not
adopted them at this time.
RSA: The Rivest-Shamir-Adleman Algorithm
RSA is an example of an asymmetric, or public key, cryptosystem. Asymmetric cryptosystems use different keys for encryption and decryption: Encryption is done with a public key, but decryption is done with a private key. The basic notion is that each party who wishes to engage in secure communication of information generates a public and a private key. These keys are related in a precise
mathematical fashion such that a piece of information to be exchanged between individual A and individual B that is encrypted with B's public key can be decrypted only with B's private key. Authentication of A as the sender can also be performed if a validation signature is attached to the encrypted data sent to B, both in plaintext and encrypted with A's private key. The encrypted portion of
this signature can be decrypted only with A's public key, thereby validating A as the sender.
RSA was developed in 1977 by Ronald Rivest, Adi Shamir, and Leonard Adleman, and it takes its name from the initials of the developers' last names.
The RSA algorithm is based on factoring. The product of two large prime numbers of similar length, p and q, gives a number called the modulus, n. Choose another number, e, such that it is less than n and is relatively prime to (p–1)x(q–1), which is to say that e and (p–1)x(q–1) have no common factors larger than 1. Choose another number d, such that ((e x d)–1) is divisible by (p–1)x(q–1). The number e is known as the public exponent, and the number d is known as the private exponent. The public key is the pair of numbers (n,e), and the private key is the pair (n,d).
If individual A wishes to send a message M to individual B, A encrypts M into ciphertext C by exponentiation: C = (M^e) mod n, where e and n are the elements of B's public key. ^ is the exponentiation operator; (i^j) is i raised to the power j. mod is the modulus mathematical operator; (i mod j) produces the remainder of i divided by j. B decodes the message also by exponentiation:
M = (C^d) mod n, where d and n are B's private key. The relationship between e and d is what causes this reversal by exponentiation to work, and allows correct extraction of M from C. Because only B knows the private key (n,d), only B can extract the message.
To sign the message, A attaches a signature s to the ciphertext C, and also an encrypted signature es. es is generated by exponentiation with A's private key: es = (s^d) mod n, where d and n are from A's private key. B computes a decrypted signature ds by exponentiation: ds = (es^e) mod n, where e and n are from A's public key, and compares ds with s. They will agree only if es was
encrypted with A's private key, and since only A possesses this key, agreement serves as validation that the message is from A.
This algorithm really is quite trivial in design, and relies on the fact that it is computationally quite difficult to factor large numbers. In practice, this works as follows (only using much larger values). We will calculate a key pair for two individuals A and B, and carry out a simple signed data exchange:
1. A picks p = 11, q = 19. n = pxq = 209.
2. (p–1)x(q–1) = 180. 180 factors to 2x2x3x3x5. Pick e = 11. (11 is prime, and not a factor of 180. e being prime is not a requirement of the algorithm, but it makes the calculations easier to demonstrate. It's also bad form to pick e equal to p or q in practice.)
3. ((11xd)–1)/180 needs to be a whole number. Pick d = 131. (131x11)–1 = 1440 = 8x180.
4. A's public key is then (209,11) and A's private key is (209,131).
5. B picks p = 7, q = 23. n = pxq = 161.
6. (p–1)x(q–1) = 132. 132 factors to 2x2x3x11. Pick e = 7. (Usually, one would want e to be a large number, though still smaller than (p–1)x(q–1). Again, 7 is chosen for convenience.)
7. ((7xd)–1)/132 needs to be a whole number. Pick d = 151. (151x7)–1 = 1056 = 8x132.
8. B's public key is then (161,7), and B's private key is (161,151).
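The arithmetic behind these toy keys can be verified in a few lines. This Python check is an aside, not part of the book's Perl example; the only relationship being tested is that ((e x d)–1) is divisible by (p–1)x(q–1):

```python
# A's numbers from steps 1-4, B's from steps 5-8:
Ap, Aq, Ae, Ad = 11, 19, 11, 131
Bp, Bq, Be, Bd = 7, 23, 7, 151

for p, q, e, d in ((Ap, Aq, Ae, Ad), (Bp, Bq, Be, Bd)):
    phi = (p - 1) * (q - 1)
    # ((e x d) - 1) must be divisible by (p-1)x(q-1):
    assert (e * d - 1) % phi == 0

print(Ap * Aq, Bp * Bq)   # the moduli: 209 and 161
```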
Let us say that A wishes to send the (very short) message 13 to B. The following steps are taken:
1. B sends his public key to A; A sends her public key to B.
2. A encrypts the message with B's public key (161,7) as (13^7) mod 161. 13^7 = 62748517. 62748517/161 = 389742 with a remainder of 55. 55 becomes the encrypted ciphertext C of the message.
3. A also wants to sign the message with the signature 17. A encrypts the signature with her private key of (209,131) as es = (17^131) mod 209.
4. 17^131 = 154457393072366309211243453140265336130679022855532916933912066667508381014824689935939748335468475166201536661429828074002635558976064397547405836087319391984433. (Don't believe us? Install Perl's Math::BigFloat module (you need a more recent version than Apple provides; perl -MCPAN -e shell should get you well on your way) and try out the sample code in Listing 4.1 following this example.)
5. es = 154457393072366309211243453140265336130679022855532916933912066667508381014824689935939748335468475166201536661429828074002635558976064397547405836087319391984433 mod 209 = 6.
A then has a ciphertext C for the message that consists of the value 55, and a composite signature that consists of the nonencrypted signature 17, and the encrypted signature 6. These values are sent to B. No special care need be taken to protect them, because they are both encrypted and signed. They cannot be decrypted without the use of B's private key and A's public key. They cannot be modified
without the use of A's private key. Upon receipt, B applies the following steps:
1. To validate the message, B decrypts the encrypted portion of the signature with A's public key (as it was encoded with A's private key). ds = ( es ^ e) mod n = (6^11) mod 209 = 362797056 mod 209 = 17.
2. B compares ds = 17 to the nonencrypted portion of the signature (sent as the value 17), finds that they match, and can be comfortable that indeed the message came from A.
3. B then decrypts the ciphertext C by using his private key (as the message was encoded with his public key). M = ( C ^ d) mod n = (55^151) mod 161.
4. 55^151 is computed (a 263-digit number, far too large to reproduce here; the code in Listing 4.1 will print it for you).
5. M = (55^151) mod 161 = 13.
And, at this point, B has recovered A's message of 13, verified that A is the sender by determining that A's public key is the one necessary to properly decrypt the signature, and the communication is at an end.
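Modern languages make checking this exchange trivial. Python's built-in three-argument pow() performs modular exponentiation directly; this short check (an aside, doing the same work Listing 4.1 does in Perl) reproduces every number in the exchange above:

```python
n_B, e_B, d_B = 161, 7, 151     # B's modulus, public and private exponents
n_A, e_A, d_A = 209, 11, 131    # A's modulus, public and private exponents

C = pow(13, e_B, n_B)           # A encrypts M = 13 with B's public key
assert C == 55
es = pow(17, d_A, n_A)          # A signs s = 17 with her private key
assert es == 6
assert pow(es, e_A, n_A) == 17  # B validates the signature with A's public key
assert pow(C, d_B, n_B) == 13   # B recovers the message with his private key
print("exchange verified")
```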
Listing 4.1 A Public-Key Encryption Demo. (You will need to install a more current version of Math::BigFloat than Apple provides to run this code.)
use Math::BigFloat;             # This line will break if you haven't updated BigFloat

# These modify their arguments:
#   bmod - modulus ($x % $y) = remainder of $x/$y
#   bpow - power of arguments ($x raised to the $y)
# If you want to keep $x and operate on it at the same time, pass the
# operation through copy() first.

$Ap = Math::BigFloat->new(11);  # Individual A's prime p
$Aq = Math::BigFloat->new(19);  # A's prime q
$Ae = Math::BigFloat->new(11);  # A's public exponent e
$Ad = Math::BigFloat->new(131); # A's private exponent d
                                # in real use, Ae should not be
                                # equal to Ap or Aq.
$Bp = Math::BigFloat->new(7);   # Individual B's prime p
$Bq = Math::BigFloat->new(23);  # B's prime q
$Be = Math::BigFloat->new(7);   # B's public exponent e
$Bd = Math::BigFloat->new(151); # B's private exponent d
$M  = Math::BigFloat->new(13);  # Message M
$s  = Math::BigFloat->new(17);  # A's signature
                                # Message M and Signature s can be
                                # no larger in value than Min(An,Bn)

print "RSA example: person A encrypts and signs a message to person B\n";
print "A's primes:\n";
print " Ap = $Ap\n";
print " Aq = $Aq\n";
print "A's exponents:\n";
print " public exponent Ae = $Ae\n";
print " private exponent Ad = $Ad\n";
print "B's primes:\n";
print " Bp = $Bp\n";
print " Bq = $Bq\n";
print "B's exponents:\n";
print " public exponent Be = $Be\n";
print " private exponent Bd = $Bd\n";

$An = $Ap->copy()->bmul($Aq);   # $An = $Ap*$Aq
$Bn = $Bp->copy()->bmul($Bq);   # $Bn = $Bp*$Bq
print "A's modulus Ap*Aq:\n";
print " An = Ap*Aq = $An\n";
print "B's modulus Bp*Bq:\n";
print " Bn = Bp*Bq = $Bn\n";

print "Message M:\n";
print " M = $M\n";
$C = $M->copy()->bpow($Be);     # ciphertext temporary = (message ^ Be)
print "Encryption begins:\n";
print " M^Be = $M^$Be\n";
print " M^Be = $C\n";
$C->bmod($Bn);                  # ciphertext C = (message ^ Be) mod Bn
print "Encryption finished:\n";
print " C = (M^Be) mod Bn = $C\n";
print "\n\n";

print "A signs C:\n";
print " s = $s\n";
print "Signature Encryption begins:\n";
$es = $s->copy()->bpow($Ad);    # A encrypts s with A's private key (temp)
print " s^Ad = $s^$Ad\n";
print " s^Ad = $es\n";
$es->bmod($An);                 # encrypted sig es = (s ^ Ad) mod An
print "Signature Encryption finished:\n";
print " es = (s^Ad) mod An = $es\n";
print "\n\n";

print "Full message reads:\n";
print "ciphertext    = $C\n";
print "signature     = $s\n";
print "encrypted sig = $es\n";
print "\n\n";

print "B begins decrypting:\n";
print "check the signature:\n";
$ds = $es->copy()->bpow($Ae);   # decrypt using A's public key
print " es^Ae = $es^$Ae\n";
print " es^Ae = $ds\n";
$ds->bmod($An);                 # decrypted sig ds = (es ^ Ae) mod An
print " ds = (es^Ae) mod An = $ds\n";
if($ds->bcmp($s) == 0)          # sig s and decrypted sig ds match if zero
{
    print " ds = $ds = $s = s : Signatures match - sender verified\n";
}
else
{
    print " ds = $ds != $s = s : Signatures do not match, message forged\n";
}
print "decrypt the message into dM:\n";
$dM = $C->copy()->bpow($Bd);    # decrypt using B's private key
print " C^Bd = $C^$Bd\n";
print " C^Bd = $dM\n";
$dM->bmod($Bn);                 # message dM = (C ^ Bd) mod Bn
print " dM = (C^Bd) mod Bn = $dM\n";
print "Message decrypted, sender verified, end of communication.\n";
print "\n\n";
The Perl code in Listing 4.1 requires a modern version of the Math::BigFloat package. You can install this through the CPAN module if you choose, by entering
perl -MCPAN -e shell
install Math::BigFloat
This may die with an error complaining about a missing prerequisite, as there is (as of this writing) an inconsistency in the archived packages and requirements. If you get this error, reissue the install command as
force install Math::BigFloat
Typically, messages would be composed of ASCII (American Standard Code for Information Interchange) text or binary data. From the point of view of the algorithm, it makes no difference. ASCII, or any other digital storage code for textual data, provides a conversion between individual characters in the alphabet and numeric values. If the message were text such as "A!", the numeric values that
correspond to the alphabetic characters are what would be used. In ASCII, "A" has the decimal value 65, and "!" has the value 33. The message can be considered to be a 2-byte value with "A" being the high-order byte, and "!" being in the low-order byte.
In decimal, then, the value would be (65x256) + 33 = 16673. This is the value to which A would apply B's public key. One can directly encrypt any string that is less than the minimum value of A's and B's moduli. Data streams or strings greater than this value (longer than the bitstream length encoding this value) must be broken into blocks of this length or shorter and sequentially encoded.
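The packing just described is ordinary base-256 positional notation, as this short sketch (an aside, not from the book) shows:

```python
message = "A!"
value = 0
for ch in message:                 # high-order byte first
    value = value * 256 + ord(ch)  # ord('A') is 65, ord('!') is 33

assert value == 65 * 256 + 33 == 16673
print(value)                       # 16673, the number RSA would encrypt
```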
Although this example demonstrates the mechanism of RSA encryption, the practice is often somewhat different. For example, the nonencrypted portion of the signature should not be passed as plaintext, because if it is, then all an attacker must do is steal that signature and the encrypted version to be able to forge A's signature on other messages. Instead, the
nonencrypted version of the signature would typically be passed in the data encrypted with B's public key—it is therefore no longer "nonencrypted," but is rather encrypted with a different key, preventing anyone not in possession of both A's public key and B's private key from making the comparison.
Also, it is common practice to use RSA as an encryption envelope, rather than to encrypt the entire message. In this use, some symmetric secret-key encryption algorithm (for example 3DES) is used to encrypt the main data stream; then the key used for this encryption, along with a checksum calculated for the original data, is passed to RSA as data to encrypt and
sign. Receipt of the RSA-encrypted information along with the 3DES data stream enables the recipient to recover the key necessary to decrypt the 3DES stream, the RSA signature verifies its origin, and the checksum verifies the transmission integrity.
If you would like to see what happens when someone tries to forge a signature block, change the value of $es between the encryption and decryption sections of the code.
RSA Laboratories recommends that for general corporate use, 1024-bit keys should be sufficient. For truly valuable data, 2048-bit keys should be used. For less valuable data, 768-bit keys may be appropriate. These key sizes actually refer to the size of the modulus, n. The primary reason to choose a shorter key length over a longer one is that as keys grow, the required calculations
become more costly in computational time. As machines become faster, this becomes less of an issue, and 1024- or 2048-bit keys are not problematic on modern hardware.
As with DES, RSA Laboratories has sponsored contests for cracking RSA. The most recent contest was the factorization of the 155-digit (512-bit) RSA Challenge, which was solved in 1999; the effort took 35.7 CPU-years. RSA Laboratories is currently sponsoring more RSA factoring challenges, ranging from a 576-bit challenge to a 2048-bit challenge.
The RSA algorithm is vulnerable to chosen plaintext attacks, where the attacker can have any text encrypted with the unknown key and then determine the key that was used to encrypt the text. It is also vulnerable to fault attacks in the cryptosystem itself, particularly in private key operations. Causing one bit of error at the proper point can reveal the private key.
Note that the strength of the algorithm relies on the fact that it is computationally exceedingly difficult to factor very large numbers. If one must consider every prime up to the square root of some number n as a potential factor of n, the task quickly becomes astronomically intensive computationally as n becomes large. With n on the order of 1024 bits, and well-chosen primes near 512 bits in size, the number
of prime numbers that would need to be examined by a brute-force iteration through all primes is on the order of 10^150. That is somewhere around 10^70 times as many primes as there are electrons in the known universe.
It is interesting to compare the philosophy of the RSA algorithm with that of DES. In the case of DES, the algorithm has no known weaknesses, but it fails to be secure due to the limited length of the key. RSA, on the other hand, has a very significant known weakness: If a method could be found for quickly factoring large numbers without resorting to brute-force methods, d could be recovered
from e and n in the public key. RSA, therefore, is secure only as long as the keys are extremely large, and there is no known way to factor them quickly. So long as large numbers remain difficult to factor, however, RSA's weakness is unexploitable. Thankfully, although there are faster ways to factor numbers than the brute-force examination of every possible prime, none appear likely to bring this
computational task into the realm of affordable desktop computing capabilities.
MD5: Message Digest Function
MD5 is a message digest function developed by Ronald Rivest in 1991. It is defined in RFC 1321.
MD5 is a one-way hash function that takes an arbitrary-length message and returns a 128-bit value. The algorithm works in five steps.
1. The message is padded so that its length plus 64 bits is divisible by 512. That is to say, it is extended an arbitrary length, until it is 64 bits shy of being an exact multiple of 512 bits in length.
2. A 64-bit representation of the length of the message before it was padded is appended to the result of the first step. If the message is longer than 2^64 bits (not a particularly likely occurrence, as this is more than 2 million terabytes), only the low-order 64 bits of the length are used. The length of the result of this step is a multiple of 512 bits. As the length is a multiple of 512 bits, it is also a multiple of sixteen 32-bit words (16x32 = 512).
3. Four 32-bit words are used to initialize the message digest buffer.
4. Four functions are defined that each take three 32-bit words as input, and each returns a single 32-bit word as output. The message is passed, in 16-word blocks, through a predetermined Boolean combination of these four functions. The output of each iteration is 4 32-bit words, which initialize the buffer for the next iteration.
5. The final 4 words written to the buffer are combined into a single 128-bit value, which is returned as the result of the hash.
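You don't need to implement these steps yourself; most environments provide MD5 directly. In Python, for example, the standard hashlib module exposes it (the input strings here are arbitrary):

```python
import hashlib

digest = hashlib.md5(b"Hi Bob").hexdigest()
print(digest)
assert len(digest) == 32                           # 128 bits = 32 hex characters
# The same input always hashes to the same value...
assert hashlib.md5(b"Hi Bob").hexdigest() == digest
# ...and any change to the input almost certainly changes the digest:
assert hashlib.md5(b"Hi Bob!").hexdigest() != digest
```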
Upon consideration, it should be obvious that the operation is lossy, and does not constitute a compression or encryption of the data that can be reversed by application of a suitable key. No matter the size of the input, the result is always 128 bits of data. The utility in this is that it is improbable (a 1 in 2^128 chance) that two randomly chosen pieces of data will result in the same hash output. The
unlikelihood of this occurring allows the MD5 hash to be used to predict whether two pieces of information are in fact the same, or more commonly, to determine whether a piece of information is the same as it once was.
This does not imply that two documents cannot have the same MD5 checksum. The checksum usually contains less information than the document, and there are many more potential documents than MD5 checksums. In fact, there are an infinite number of possible documents if the document size is infinitely variable, whereas there are only a finite number of
possible 128-bit MD5 checksums. This implies that there are many documents that would hash to any given MD5 checksum. The MD5 checksum space is large enough, however, that for practical purposes, you are unlikely to meet two documents that hash to the same value by chance. If you'd like an estimate of the probability that two documents in a large
collection might hash to the same value, do some reading on the birthday problem. If my calculations are correct, if you have 17,884,898,788,718,400,000 documents in your database, there's about a 50%
chance that two will hash to the same value via MD5. That's many fewer than 2^128, but it's still a larger number of documents than most of us will ever see.
As an example of the first use, consider the task of trying to create a central data repository of all documents used in some place of business. If the environment is like most, there are probably hundreds of unique documents, but also dozens of copies of identical documents, kept by dozens of different people. If you were to try to build a database containing only one copy of each, it could become very
time consuming to compare every bit of every document against every bit of every other document to determine which are duplicates and which are new, unique, and need to be added to the database. Instead, you might calculate an MD5 checksum for each unique document as you add it to the database. An incoming document then could be checked for potential duplication by calculating
its MD5 checksum and comparing only the checksum, rather than the entire document, against the checksums existing in the database. If the incoming document's checksum does not already exist in the database, the document is definitely unique, as MD5 always hashes a given input to the same output. If the incoming document's checksum does already exist, it might be the same as an
existing document, or it might be one of the many possible documents that hash to that particular MD5 checksum value. In this case, the document needs to be checked on a bit-for-bit basis with the possible duplicate, or duplicates, in the database to determine whether it needs to be added or discarded. Because you have only a 1 in 2^128 chance of any document hashing to the same value as another,
it's unlikely that you will find many that have the same hash value, even if you store a very large number of documents.
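This duplicate-detection scheme is easy to try at the command line with the openssl tool that ships with Mac OS X. The following is a minimal sketch; the file names and contents are made up for illustration:

```shell
# Create a document, an exact duplicate, and a near-duplicate that
# differs by only a few bytes.
printf 'quarterly report, final draft' > doc1.txt
cp doc1.txt doc2.txt
printf 'quarterly report, FINAL draft' > doc3.txt

# Compute the MD5 checksum of each document.
sum1=$(openssl dgst -md5 < doc1.txt)
sum2=$(openssl dgst -md5 < doc2.txt)
sum3=$(openssl dgst -md5 < doc3.txt)

# Matching checksums flag a probable duplicate; differing checksums
# are proof that the documents are not identical.
if [ "$sum1" = "$sum2" ]; then echo "doc1/doc2: possible duplicates"; fi
if [ "$sum1" != "$sum3" ]; then echo "doc1/doc3: definitely different"; fi
```

Note that a matching checksum only nominates a candidate duplicate; as described above, you would still confirm with a bit-for-bit comparison before discarding the incoming document.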
This use is also commonly applied as a method to check whether a user has entered a password correctly. The system stores the MD5 checksum of each user's actual password in a table, and when a user attempts to authenticate, the MD5 checksum of the password is compared with the stored checksum value. If they match, the user has probably entered the proper password. Because many entered
values will hash to every possible MD5 checksum, however, it's entirely possible with this scheme that the user has entered some other password that also hashes to the same checksum value. Possible, but unlikely.
In the second use, the MD5 checksum is used to verify that a piece of information has not been changed by some action. For example, if you are archiving valuable data, you might want to calculate MD5 checksums for the data and store them separately. When the data needs to be examined at a later date, MD5 checksums can again be calculated. If they agree, there is a high probability that the data
has not been changed. If they disagree, something has changed in the data since the original checksum was calculated. This technique is useful both for verifying archival storage and for providing a method for others to verify that network transmissions have been received unaltered. For example, MD5 can be used to provide a digital signature when a large file must be compressed before being
encrypted with a private key in a public key cryptosystem. Arriving at a duplicate checksum after decryption and decompression is strong evidence that the file has not been tampered with.
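You can try the archive-verification technique yourself with the openssl command included with Mac OS X. This sketch uses a hypothetical file name:

```shell
# At archive time: record the checksum and store it separately.
printf 'important archival data' > archive.dat
openssl dgst -md5 archive.dat > archive.dat.md5

# Later: recompute the checksum and compare it with the stored value.
openssl dgst -md5 archive.dat | cmp -s - archive.dat.md5 \
    && echo "checksums agree: data probably unaltered" \
    || echo "checksums differ: data has changed"
```

If even a single bit of archive.dat changes between the two steps, the recomputed checksum will almost certainly differ from the stored one.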
As a signature, MD5's weakness is that it is a hash, and does not produce a unique value for every possible input. Therefore, it is possible for an attacker to construct and insert a forged message that hashes to the same MD5 checksum as the original. Attacks on MD5 therefore are attacks to find collisions, multiple inputs that can produce the same hash. Structural algorithmic solutions to allow an
attacker to do so easily have not at this time been discovered, but brute-force attacks have been demonstrated to be possible with a sufficient investment of computational resources. MD5 has not yet been declared an insecure checksum solution, but its strength is in doubt. In 1995 Hans Dobbertin, in "Alf Swindles Ann," CryptoBytes (3) 1, showed how to defeat MD4, which shares many features with
MD5, via hash collisions. P. van Oorschot and M. Wiener, in "Parallel Collision Search with Application to Hash Functions and Discrete Logarithms," Proceedings of 2nd ACM Conference on Computer and Communication Security (1994), calculate that for $10 million in computing hardware, a hash collision could be determined in 24 days or less (in 1994 dollars and time). Dobbertin, in "The
Status of MD5 After a Recent Attack," CryptoBytes (2) 2 (1996), extends the techniques applied to break the MD4 hash in under a minute to the compression function of MD5. Each of these results suggests that although MD5 may remain suitable for applications such as the detection of possible database entry duplications, its utility as a seal against unauthorized data modification may be limited, or
may be eliminated completely in the near future.
Other Encryption Algorithms
A plethora of other encryption algorithms and software is available, though only a few have stood the test of time and public examination long enough to be considered reasonably secure. The following list includes a number of the more popular variants and brief descriptions of them:
3DES. Triple-DES is a variant of DES that applies DES three times. Encryption is generally performed in an encrypt-decrypt-encrypt sequence with three independent 56-bit keys, for a total key length of 168 bits. It is considered more secure than DES, but is slower.
Blowfish. A 64-bit block cipher with variable-length keys up to 448 bits. It was developed by Bruce Schneier and designed for 32-bit machines. It is much faster than DES. Dr. Dobb's Journal sponsored a cryptanalysis contest of Blowfish in April 1995. Results included an attack on a 3-round version of Blowfish (Blowfish has 16 rounds), a differential attack on a simplified variant of
Blowfish, and the discovery of weak keys. The attack based on weak keys is effective only against reduced-round variants of Blowfish. For details on the results of the contest, see Schneier's paper at The original paper on Blowfish is available at Overall, Blowfish is generally considered to be secure.
CAST-128. A 64-bit block cipher with variable-length keys, up to 128 bits. It is a DES-like encryption. CAST takes its name from the original developers, Carlisle Adams and Stafford Tavares. CAST-128 is also part of Pretty Good Privacy (PGP). CAST-256 is an extension that uses a 128-bit block size with keys up to 256 bits.
RC4. RC4 is a variable key-size stream cipher designed by Ronald Rivest. The algorithm is fast and is used in a variety of products, including SSL and WEP implementations. The algorithm was considered to be a trade secret until code that was alleged to be RC4 was posted to a mailing list in the mid-1990s. This code, Alleged RC4, is known as arcfour.
Arcfour. Arcfour is a stream cipher that is compatible with RC4. It can be used with a variety of key lengths, with 128-bit keys being common. An expired draft on arcfour is available at
DSA. Digital Signature Algorithm is a public key signature-only algorithm, based on the Diffie-Hellman discrete logarithm problem. It is the underlying algorithm of the DSS, Digital Signature Standard, which was endorsed by NIST as the digital authentication standard of the U.S. government in 1994. DSA is defined in FIPS 186-2, which is available at
IDEA. International Data Encryption Algorithm, IDEA, is a redesign of an encryption algorithm called PES (Proposed Encryption Standard). PES was designed by Xuejia Lai and James Massey, and its revision, IDEA, was designed by Xuejia Lai, James Massey, and Sean Murphy. IDEA is a 64-bit block cipher that uses a 128-bit key. As with DES, the same algorithm is used for encryption and
decryption. The design of IDEA enables it to be easily implemented in hardware or software. The software implementation is considered comparable to DES by some people, and faster than DES by others. Although a class of weak keys has been discovered for it, IDEA is generally considered to be secure.
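Several of these ciphers can be exercised directly with the openssl command that ships with Mac OS X. Here is a hedged sketch using 3DES in its usual encrypt-decrypt-encrypt (EDE) CBC mode; the file names and passphrase are invented for illustration:

```shell
# Encrypt a file with Triple-DES, then decrypt it and verify the roundtrip.
printf 'attack at dawn' > plain.txt
openssl enc -des3 -salt -pass pass:s3cret -in plain.txt -out cipher.bin
openssl enc -des3 -d   -pass pass:s3cret -in cipher.bin -out plain2.txt
cmp plain.txt plain2.txt && echo "roundtrip succeeded"
```

The -salt option mixes random salt into the passphrase-to-key derivation so that identical files encrypted with the same passphrase do not produce identical ciphertext.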
Mac OS X Cryptography Applications
As Mac OS X matures, we are beginning to see a significant increase in the available cryptographic offerings for the platform. The first two of these we will present provide almost identical functionality: implementation of the Pretty Good Privacy (PGP) public-key encryption system. The third is rather different, in that it's really an interface to the built-in encryption capabilities that exist in the
libraries on which OS X is based.
PGP is the 1991 brainchild of Phil Zimmermann. Although a rather clever programmer, Phil overestimated the good sense and underestimated the unbelievable lack of computing sophistication of the U.S. government and justice system when he developed and published a system for implementing public-key cryptography in a simple fashion for exchanging information through email. Phil essentially
said, "Hey, cool! Implementing this public-key stuff in a relatively seamless and easy-to-use form isn't so tough—we can make this into a worldwide resource!" Shortly afterwards, the U.S. government slapped him with charges alleging that he was exporting weapons technology to enemies of the United States. It may seem peculiar, but what he was accused of was writing and distributing software
containing the RSA algorithm in a way that someone might export it out of the country. Phil didn't export it, and even if he had, one might think that the fact that such algorithms were well known and could be implemented by any competent programmer on the planet would make it not all that big a deal to possibly make it available to someone who might export the binary. Not so. The government
classified Phil's action in the same category as exporting tanks and nuclear weapons. The case was eventually dropped in 1996, but for years a large contingent of Internet and security-conscious individuals fought on Phil's behalf, both collecting money for his defense, and satirizing the situation in an attempt to make the justice system see just what folly the case actually was. http://people.qualcomm.
com/ggr/about_pgp.html provides an insight into the thinking of the day. At the same time this was going on, it was established that books containing the source code or algorithms describing encryption technology could be exported (so we shouldn't have to ask you to help bail us out of jail for that bit of Perl we showed you previously) because they were literary works, but that nonreadable (that is,
executable) versions were weapons. For a while, there was a T-shirt available with the source for the algorithm printed on it, and a bar-code that encoded the executable. The shirt was advertised with the slogan "For the price of this T-shirt, you too can become an International Arms Dealer." The manufacturer promised to refund anyone's money who was convicted, and many wondered whether you
could be convicted of carrying a concealed weapon if you wore the T-shirt inside out.
Shortly after the government dropped its case against Zimmermann, he formed a company and began distributing PGP as a commercial application. There were some particularly nice features available, such as the ability to PGP-encrypt an entire disk partition and have anything written to that disk encrypted, but the company was soon absorbed by Network Associates, the former distributors of
McAfee AntiVirus. This business group did a reasonable job of upkeep on the product until 2001, when it decided to drop PGP from its product line. For a while it looked like the very nice PGP product line was going to disappear, but thankfully the original principals of PGP Corporation managed to work out a deal to reacquire the rights in June of 2002, and development, including the production of
a Mac OS X version, has begun once again.
Commercial and freeware versions of PGP are available at Various bundled software packages are offered commercially, depending on your needs. A freeware PGP package is also available, with limited capabilities that are probably sufficient for the home individual. Most importantly, the freeware version allows you to make and manage keys and encrypt and decrypt files.
Download and install PGP. The documentation recommends rebooting the machine afterward. After it is installed, generate a set of keys for yourself. The option to generate a set of keys appears the first time you launch the application. If you do not make a set of keys then, you can either select New Keys from the Keys menu of the PGP application, or select New Keys from the PGPkeys window
that opens when you launch PGP. You can choose to generate your keys in Expert mode. If you do not choose Expert mode, your keys are generated using the default parameters that you can see in Expert mode. The Setup portion for the Expert mode is shown in Figure 4.4.
Figure 4.4. Expert mode for the Setup step in key generation in PGP.
In Normal mode, you need to supply only a name and email address. In the Expert mode, you also specify a key type, key size, and key expiration. The default key type is Diffie-Hellman/DSS. You can also choose RSA or RSA Legacy. The default key size is 2048 bits, but you can specify anything from 1024 to 4096. The default expiration is none, but you can specify an actual expiration date, if
you prefer.
After setting up the key, you are asked to provide a passphrase to protect your private key. As you type your passphrase, a horizontal bar displays an approximation of the quality of your passphrase, much like a progress bar. Then your PGP key pair is generated.
In the PGPkeys window, which opens when you launch PGP, your new key is listed, as shown in Figure 4.5. From this window you can do a number of other things, including revoking your key, deleting it, sending your public key to a keyserver, exporting your keys to a file, and searching for public keys. If the PGPkeys window is not open when you need it, simply select that window under the
Window menu.
Figure 4.5. Your new key appears in the PGPkeys window.
In Preferences, you can specify a number of options, including editing the keyserver listing, shown in Figure 4.6, and specifying encryption algorithms and where your keys are stored.
Figure 4.6. The keyserver preferences in PGP.
After you have your set of keys, you can experiment with encrypting and decrypting something. A convenient program to use for testing encryption and decryption is the TextEdit application. Type something and then select it. Under the Services option of the TextEdit menu, select PGP, then select Encrypt. Figure 4.7 shows how everything looks as you start to do this.
Figure 4.7. Encrypting text in TextEdit using PGP.
Select a recipient in the next window that appears. If you haven't imported anyone else's public keys, then yours is the only option. The text is then encrypted, as shown in Figure 4.8.
Figure 4.8. An encrypted version of the text shown in Figure 4.7.
To decrypt the encrypted text, select Decrypt from the PGP menu under the Services menu in TextEdit. Enter your passphrase. The decrypted text then appears in a PGP window, as shown in Figure 4.9.
Figure 4.9. The encrypted text is shown decrypted in the PGP window.
You can easily use PGP for encryption or decryption in applications that use the Services menu. As you might expect, Mail is one of those applications.
To exchange PGP-encrypted files or messages, you first need to get the public key for anyone with whom you want to exchange encrypted information. If the person's public key is on a public keyserver, search the keyserver, then drag the correct result from the search window to the PGPkeys window. If you have received the person's public key as a file some other way, choose Import in the
PGPkeys window and select the correct file. When you are encrypting the data, select your recipient. Then send or exchange the data in whatever way was predetermined. The recipient can decrypt the data with his private key.
To receive encrypted data, you have to make your public key available to others. You can either send it to a keyserver from the PGPkeys window, or you can export your public key to a file and exchange it some other way. If you choose to send it to a keyserver, it does not matter which one you send it to. The keyservers regularly sync their databases.
GPG, GNU Privacy Guard, is a GNU replacement for the PGP suite of tools. GPG received a real boost from the apparent death of PGP, especially as far as Mac OS X development is concerned, and should be considered an interesting study in the Open Source community's response to a perceived lack in a software functionality niche. It's designed as a drop-in replacement for PGP, allowing the same
content to be transmitted and decoded with the same key sets.
A version of GnuPG for Mac OS X is available from This site includes GnuPG and various GUI tools. The version of GPG that is available for Mac OS X 10.2 is slightly newer than the version that is available for Mac OS X 10.1. The site also has links to some other tools, such as Gpg Tools and tools for getting various mail programs to work with GPG.
From the Mac GnuPG site, download the version of GPG suitable to your version of Mac OS X and the quarterly files. Download separately anything that may have been updated after the quarterly download was made available.
If you install GPG by using the installer, it installs in /usr/local/bin by default, with documentation in /Library/Documentation/GnuPG/. The documentation also suggests that you can install it via fink if you prefer. fink will probably put it somewhere under the /sw hierarchy. Install the other applications as well.
After GPG is installed, you are ready to generate a key pair. You can do this either by selecting Generate under the Key menu in GPGkeys, or by running gpg --gen-key at the command line. GPGkeys is a hybrid application that opens a terminal and runs gpg --gen-key at the command line for you. When you are generating a key pair, you have to answer a few questions. You select an
encryption algorithm, the key size, and an expiration date. Then you provide a name, email address, and a comment, if any, to identify the key. Sample output follows:
% gpg --gen-key
gpg (GnuPG) 1.2.1; Copyright (C) 2002 Free Software Foundation, Inc.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions. See the file COPYING for details.
gpg: keyring `/Users/sage/.gnupg/secring.gpg' created
gpg: keyring `/Users/sage/.gnupg/pubring.gpg' created
Please select what kind of key you want:
(1) DSA and ElGamal (default)
(2) DSA (sign only)
(5) RSA (sign only)
Your selection? 1
DSA keypair will have 1024 bits.
About to generate a new ELG-E keypair.
minimum keysize is 768 bits
default keysize is 1024 bits
highest suggested keysize is 2048 bits
What keysize do you want? (1024)
Requested keysize is 1024 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct (y/n)? y
You need a User-ID to identify your key; the software constructs the user id
from Real Name, Comment and Email Address in this form:
"Heinrich Heine (Der Dichter) <[email protected]>"
Real name: Sage Ray
Email address: [email protected]
You selected this USER-ID:
"Sage Ray <[email protected]>"
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit?
You need a Passphrase to protect your secret key.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: /Users/sage/.gnupg/trustdb.gpg: trustdb created
public and secret key created and signed.
key marked as ultimately trusted.
1024D/190CF224 2002-12-31 Sage Ray <[email protected]>
Key fingerprint = 8FD6 DC90 D32C F092 7F04 DD18 1D97 A19F 190C F224
1024g/D5B2A9F2 2002-12-31
A basic configuration file is created in ~/.gnupg/gpg.conf. GPGPreferences is a GUI application that installs in the Other section of the System Preferences. As of this writing, GPGPreferences edits ~/.gnupg/options, but the version of GPG available for Mac OS X 10.2 no longer uses that file for preferences. However, you can watch changes it makes to the options file and make those
changes to ~/.gnupg/gpg.conf instead.
You may want to consider specifying key searching behavior and a keyserver that GPG should search. You can uncomment a line for one of the ones listed in gpg.conf, or add a line for a different server of your preference. You can find some key servers listed in GPGPreferences, and you can also check for keyserver listings.
For a keyserver entry, you might have a line like this:
keyserver ldap://
If you specify a keyserver, include options for keyserver functions, such as the following:
keyserver-options auto-key-retrieve
The gpg.conf file contains a list of keyserver options and their functions. The preceding option automatically fetches keys as needed from the keyserver when verifying signatures or when importing keys that have been revoked by a revocation key that is not present on the keyring.
After you have generated a key, it is added to the list of keys that GPGkeys can manage. Figure 4.10 shows a sample of the Public tab after generating a key pair. GPGkeys adds that key to your public keyring, which is ~/.gnupg/pubring.gpg.
Figure 4.10. A key now appears in GPGkeys.
As with PGP, you can test encrypting and decrypting in TextEdit from the Services menu. Type some text in TextEdit and select it. Then select either the encryption option under the GPG menu or the encryption option under the Gpg Tools menu under Services, if you installed Gpg Tools. Although either method should work for encryption and decryption, we find the Gpg Tools interface to be
slightly easier to work with. Figure 4.11 shows the encryption interface using Gpg Tools. Figure 4.12 shows a sample of the encryption done by GPG. This is an encryption of the same text that was used in the PGP example.
Figure 4.11. Starting the encryption process using the Gpg Tools interface to GPG.
Figure 4.12. The text has been encrypted by GPG.
As with PGP, you can easily use GPG for encryption or decryption in applications that use the Services menu, such as TextEdit or Mail. Remember that the Mac GnuPG site also has scripts available to allow certain other mail programs to work with GPG.
After you have tested that encryption and decryption are working for you, you are ready to exchange keys and share encrypted data. You can either make your public key available via a public key server or export it to a file and exchange it some other way. To send it to a keyserver, you can choose the Send to Keyserver option under the Key menu in GPGkeys.
As with PGP, to send encrypted data to someone, you need that person's public key. You can either get that person's public key from a keyserver or you can exchange it by some other means and import it. You can search a keyserver from GPGkeys and select the correct key to be retrieved. Following is sample output from importing a public key served by a keyserver.
% gpg --search-keys rumpf
gpgkeys: searching for "rumpf" from LDAP server
Keys 1-10 of 11 for "rumpf"
Rumpf Thomas <[email protected]>
4096 bit DSS/DH key 2A9F0F7893D0AE13, created 1997-08-26
Jean-Claude Rumpf <[email protected]>
2048 bit DSS/DH key 43A5FBC5D172566C, created 1999-09-14
Johannes Rumpf <[email protected]>
2048 bit DSS/DH key 74C5630780911F16, created 2000-05-01, expires 2001-01-02
Robert Wolfgang Rumpf <[email protected]>
2048 bit DSS/DH key CA663A997E4B35DF, created 1999-07-14
Michael Rumpf <[email protected]>
2048 bit DSS/DH key DE0D93830EF28E0E, created 1999-11-14
Michael Rumpf <[email protected]>
2048 bit DSS/DH key 838DD3F7CBB774F2, created 1999-11-14
Beate Rumpf <[email protected]>
2048 bit DSS/DH key 3C6D928B10B7807C, created 2000-07-07
Beate Rumpf <[email protected]>
2048 bit DSS/DH key 712971B918247DDC, created 2000-07-08
Beate Rumpf <[email protected]>
2048 bit DSS/DH key 29661EA322068AE3, created 2000-07-23
Patrick Rumpf <[email protected]>
3072 bit DSS/DH key 73C41FE244791040, created 2000-08-03
Enter number(s), N)ext, or Q)uit > 4
gpg: key 7E4B35DF: public key "Robert Wolfgang Rumpf <[email protected]>" imported
gpg: Total number processed: 1
imported: 1
If you exchange keys with individuals in a more personal fashion, simply select the appropriate key file by using the Import option in GPGkeys. If you have trouble, make sure the key does not have any extraneous information, such as mail headers.
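All the GUI operations above are wrappers around the gpg command, so the entire generate-encrypt-decrypt cycle can also be exercised in a terminal. The sketch below assumes a newer GnuPG than the 1.2.1 release shown earlier (the --quick-gen-key option appeared in the 2.x series), and it uses a throwaway keyring and an empty passphrase purely for illustration; never use an empty passphrase on a real key:

```shell
# Use a scratch keyring directory so we don't touch ~/.gnupg.
export GNUPGHOME=$(mktemp -d)

# Batch-generate a throwaway key pair (empty passphrase, illustration only).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key "Sage Ray <ray@killernuts.org>" default default never

# Encrypt a file to that key, then decrypt it and compare.
echo 'meet me at the usual place' > note.txt
gpg --batch --yes --trust-model always -r ray@killernuts.org \
    -o note.txt.gpg -e note.txt
gpg --batch --pinentry-mode loopback --passphrase '' \
    -o note.out -d note.txt.gpg
cmp note.txt note.out && echo 'decrypted text matches original'
```

The --trust-model always option skips the web-of-trust check, which is convenient for a scratch key but defeats the point of key validation when you are dealing with keys fetched from a keyserver.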
If you are not interested in installing PGP or GPG, but would still like some encryption capabilities, you might be interested in PuzzlePalace, a shareware package available from
PuzzlePalace makes use of functionality available in OpenSSL, making it possible to exchange encrypted data with anyone who has access to OpenSSL. There is also an option to post-process the file with Base64 to send it as an attachment.
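Because PuzzlePalace drives OpenSSL, the other party does not necessarily need PuzzlePalace at all; equivalent results can be had from the openssl command line, provided the cipher and options match what PuzzlePalace used. The following is a sketch with invented file names and passphrase; the -a flag performs the Base64 step mentioned above, so the result can travel as a mail attachment:

```shell
# Encrypt with 3DES and Base64-armor the result; then reverse both steps.
printf 'the plans are in the usual spot' > secret.txt
openssl enc -des3 -salt -a -pass pass:0penSesame -in secret.txt -out secret.asc
openssl enc -des3 -d   -a -pass pass:0penSesame -in secret.asc -out secret.out
cmp secret.txt secret.out && echo "decrypted copy matches"
```

The recipient only needs the passphrase (exchanged by some secure means) and any system with OpenSSL installed.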
PuzzlePalace is easy to use. Just select the type of encryption you want to use and drag the file that you want to encrypt. If you only want to create a Base64-encoded file, and don't want to apply any additional encipherment, set the cipher to None. If you've selected encipherment, you are then asked to protect the file with a passphrase, and to retype the passphrase for verification. To use the file, drag
it out of the PuzzlePalace window. Figure 4.13 shows the PuzzlePalace interface after it has just encrypted a file. The icon for an encrypted file has a lock.
Figure 4.13. A file has just been encoded with Triple DES encryption in PuzzlePalace.
To decode the file, drag it to the PuzzlePalace window and enter the passphrase for the file. PuzzlePalace then decodes the file. To read it, just drag the file out of the window and read it. The icon for a decrypted file in the PuzzlePalace window is the file's normal icon.
Data-Divulging Applications
Whether you keep your data encrypted or not, you also need to worry about software that might allow
others access to it when you didn't explicitly intend for them to have this access. Software that you are
running typically has all the privileges that you have as a user, and so it is allowed to do anything that
you are allowed to do. This means that it has the same access to sensitive data that is stored on your
computer that you, as a user, have. Usually, this is a good thing—you don't want to manually access
each and every bit of data on your computer through one single privileged application. Instead you want
to be able to open a word processor (any word processor, preferably) and read any text document you've
constructed. You also want to be able to access these through your email client so that you can mail
them to other people, access them with cryptography programs so that you can encrypt them, and so on.
Unfortunately, this means that all this software has the ability to access these (or any other of your) files
when it's been invoked by you, but isn't under your direct and immediate control as well, and this ability
is sometimes used for purposes that are not explicitly conveyed to you as a user.
Features Run Amok
Sometimes such capabilities and functions exist as an intentionally designed feature of the software. For
example, while you are accessing a Web page, there is usually a large amount of data exchanged
between your computer and a Web server that you are not party to. From your point of view, you send a
query to the Web server that says, "Hello, please send me the file index.html," and
the server responds with a Web page. Behind the scenes, the dialog is much more complex, and your
computer sends along quite a bit more information than you probably know. Most of this data is simply
part of a dialog intended to allow the server to better understand what you're requesting, but even so, it
may contain information that you're not intending to divulge. The following output shows some of the
data that any request to a Web server makes available to any software that is called as a result of that request:
HTTP_USER_AGENT="Mozilla/4.0 (compatible; MSIE 5.22; Mac_PowerPC)"
Notice that it knows what type of CPU I have, what operating system I'm using, my IP address and port,
and also my browser. All that I knowingly sent was a request for the file /cgi-bin/printenv.cgi
(a little program I've written for my Web server so that I can check these things). Without my knowledge
or permission, the browser has added some additional information to the exchange.
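A CGI like this printenv.cgi takes only a few lines of shell. The following is a hypothetical reconstruction, not the author's actual script:

```shell
#!/bin/sh
# printenv.cgi (sketch): dump the environment the Web server hands to CGIs.
# Variables such as HTTP_USER_AGENT, REMOTE_ADDR, and REMOTE_PORT carry the
# "extra" information the browser and TCP connection divulge.
echo "Content-type: text/plain"
echo ""
env | sort
```

Install it in your server's cgi-bin directory, make it executable with chmod +x, and request it with a browser to see what your own machine is sending along.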
Although none of this looks to be particularly threatening information, it is the fact that it can be, and is
divulged without your control, that should get you to thinking. If I am using an email application that
supports HTML email (such as Apple's mail client, and in fact most other email clients other than Mail
in the terminal, Mailsmith, and Powermail), and the HTML mail includes an embedded image linked to
a remote Web site, simply reading that email is going to send information such as shown above to the
remote site, without my knowledge, without my permission, and without me having any significant control over the process.
In fact, this very technique is what has a large portion of the Internet in an uproar over "Web
bugs" at this point. The misguided masses who clamored for HTML-capable email clients, so
that they could bloat their email messages with ugly fonts and useless images, have, in the
last year, suddenly realized that for their email client (or Web browser) to be able to read
HTML email with images that come from remote sites, the client must contact that site, and
thereby divulge the client's IP, and potentially other interesting facts about the user. Of
course, the companies that provide these clients, and the companies that send HTML email,
had figured this out somewhat sooner than the consumers, and so have been sending email
containing links to invisible "one pixel" images now for years. The reader's HTML-capable
email client dutifully contacts the remote server to retrieve the image, divulging the reader's
IP, OS, CPU, and other interesting facts.
What's worse, because you're accessing a Web server using a Web browser (never mind that
it's called an email client, unless your email client specifically doesn't do HTML; nowadays
it's a Web browser), the Web server can set and retrieve cookies from your browser. Cookies
are small files containing information sent by Web servers, and a source of great
consternation to users who understand that they contain possibly private information, but are
confused about how it got there and who can access it. A cookie can contain only
information that's sent by a Web server, so it's unlikely that a Web-bug will be able to
divulge the contents of your private files through this mechanism, but the information that
could be available is quite varied. In fact, because most people don't know what cookies have
been sent to their computers from what Web servers (and usually most users have hundreds
of them stored), there could be almost anything in there: Anything that any Web server
you've ever visited has known might be stored in the cookies on your computer, from your
user ID on eBay, to your search preferences for Google, to the credit card number you
entered on an online merchant's ordering page. If you've entered your actual name and
address on some Web page that cooperates with the numerous Web-bug client tracking
agencies, every time you read an HTML-containing email in an HTML-capable email client
(or browse a Web-bugged page in a Web browser), the advertising company may be getting
a record of what email you've read, what Web pages you've visited, and what products
you've ordered, all linked to you, the physical consumer at your home snail-mail address. As
more software becomes automatically HTML enabled and allows the embedding of links and
Web content, this technology for tracking the documents you access becomes even more
pervasive. According to The Privacy Foundation's report on document bugging, Microsoft Word, Excel, and PowerPoint have
been susceptible to being bugged since the '97 versions. (The Privacy Foundation report also
includes a nice Flash animation showing the mechanism of Web bugs, and a fairly
comprehensive FAQ.)
For companies that store in a database the fact that you read the mail, opened the document, or clicked on the Web page, and that elicit cooperation from a multitude of Web sites and commercial emailers, this data enables them to track consumer response to various types of email advertising, chart browsing habits around the Web, and build considerably more powerful customer profiles than would be possible using only what a customer would willingly divulge in a survey.
Of course, consumers are now acting indignant over the fact that the technology that they unthinkingly embraced is doing exactly what it was designed to do, and want assurances that companies won't, or preferably can't, use it for that. It's like asking for a super-heavy, macho-man hammer for pounding nails, and then complaining that it's not soft enough when you hit your thumb. So long as they're capable of gathering the information, and so long as it
appears to be of economic benefit for them to do so, businesses are going to take every
opportunity to collect and analyze information regarding their potential customers.
Historically, this has been paid for by the businesses and consists of sending surveys and
collecting demographics based on targeted advertising. The Web has made the arrangement
much sweeter for businesses because it's the consumer who pays for the opportunity to send
the information necessary to build the databases themselves. If you're a business, what could
be better?
If you don't want companies to know that you've accessed their computers, don't access
them. It's as simple as that.
If I were using one of those newfangled Pentium chips with the built-in digital serial number, and
Microsoft's newfangled XP operating system with its irrevocably linked copy of Explorer, would my
CPU's serial number and my OS's registration number be embedded in exchanges of information with
Web servers as well? Short of setting up my own Web server and figuring out how to write the software
to extract the content of the dialog, would there be any way for me to tell whether they were? Don't put
your trust in some misguided notion that only fly-by-night companies would mine your personal
information in this fashion. It's been well-documented that Microsoft's Windows Media Player phones
home with information regarding every DVD that you play in your computer, and Microsoft's disclosure
of this fact has been notably lacking. You
might be inclined to think "well, if you've got nothing to hide, why would it bother you?" I challenge
you to try that argument on the next woman you meet who's carrying a purse. Grab it, dump the contents
and start rifling through them. Against her protestations, use the argument "If you've got nothing to hide,
you shouldn't be bothered." She'll explain to you, in much more graphic detail than I can convey here,
why that argument doesn't fly.
As a probably larger problem, because I'm running it, my browser has access to any information I've
entered in its preferences, my system settings, and actually all my files as well, all with my permissions,
and all without my necessarily knowing that it's accessing or divulging this information. With the advent of cookies, practically anything I've ever entered or seen on a Web page could be sent back and forth to sites that I access, without my knowledge or consent.
Some of this information exchange is completely necessary for the applications to function. It would be impossible for a Web server to send you a Web page that you've requested without receiving your IP address, which it needs to know where to send the page. Likewise, cookies have an
important purpose in making your Web experience seamless and convenient because without them
there's no convenient and relatively secure method for a Web server to keep track of things such as your
shopping cart at an online merchant. Unfortunately, it's exceedingly difficult to separate those exchanges
that are in your best interest from those that are taking advantage of the fact that you're unaware of the
dialog going on.
Our best advice is that it serves most people well to be cautious, but not overly concerned about the
information being transmitted in this fashion. The majority of reputable commercial entities on the Web
work hard to avoid storing cookies with sensitive data, or to make certain that the cookies are destroyed
when the browser leaves the site. Those sites that use Web bugs can track some pages you visit, or
emails that you read, but what are they going to do with the information? They're going to use it to more precisely target annoying pop-up ads at you, so that you get fewer ads for junk you're completely uninterested in, and more ads for junk that you might possibly be interested in, but probably aren't. The technology can be misused, and it is misused frequently, but unless you have a real reason to be worried about information you've sent to a Web site, or about other people knowing what pages you've browsed or spam email you've read, it's unlikely that these technologies are going to significantly endanger your privacy.
Unintentional Misfeatures
On the other hand, it turns out to be absurdly easy to write software with accidental behaviors, or features that can be abused, that cause it to perform functions that aren't in its list of intended features. Some of these
allow unintended execution of programs, or other types of direct software compromises of your system,
but others, insidiously, can result in your system divulging information at your command that is either
random, or worse, exactly the data that you thought you were protecting in the process.
By way of example, for quite a while there was a problem with Microsoft Office applications, in that the documents had a tendency to absorb random pieces of information from around a user's system into themselves, without the user's permission or desire. At least a few people claim that this is an ongoing problem, but Microsoft issued a patch for the '98 releases, and to the best of our ability to discern has fixed this in the Mac OS X versions of their software.
Another example, unfortunately with the same suite of software, is a certain sloppiness about managing
the data stored within its files. For some time, it was not uncommon for a Word document to contain
portions of many different earlier versions of the document.
This may not appear to be a critical misfeature, and in fact I have on occasion come to rely upon this
unintentional archive of data to recover the data from Word documents that have become corrupted and
refuse to open, or in which I've accidentally deleted important information. With a 10-month-old daughter in the house, I would have lost many pages of Mac OS X Unleashed without the ability to recover data from files that had mysteriously changed into something much less useful. However, we
just recently heard from one of OSU's computer-savvy legal staff that this misfeature has cost at least
one vendor (one that's incredibly apropos in this instance) a lucrative contract with the University. The
vendor in this instance sent their business proposal to OSU as a Microsoft Word document.
Unfortunately (for them), they were less knowledgeable than they should have been regarding the
potential pitfalls of their software of choice, and when the OSU lawyers carefully examined the file, they
discovered the remnants of an earlier version of the contract lurking in the file that had been sent. In the
earlier version of the contract, the company was offering the same services to another university, but at a
much better price. Needless to say, this seemingly insignificant misfeature cost someone a fair sum of
money, though we don't know whether it ended up costing anyone his or her job.
Based on a quick scan through the files we've used to compose this book, it appears that this problem
doesn't seem to be as prevalent as it once was. However, with only a few iterations of opening and
closing files under Word version X, Service Release 1 on OS X 10.2, I managed to produce a file that
looks like Figure 4.14 in Word, but that when looked at with the strings command from the
command line, contains all the data shown in Listing 4.2. The dangerous part is near the bottom of the
listing. I constructed this document as though I were mailing off instructions to meet for a surprise party
for some friends, and then had reused the driving directions part of the document by deleting the party
announcement and replacing it with a request for help to my friend. That last line of the message body in
the listing is the original text from the beginning of the document that would have gone to the people
attending the party. I deleted it from the Word document, and it doesn't show up anywhere in what Word
shows me on the screen, or in what prints out when I print it, yet it obviously remains in the binary
contents of the file itself. If I had really constructed the document this way and sent it off as a Word document to some friends, then did a bit of editing and sent a different version off to my rather computer-literate friend Adam, I might have been rather embarrassed.
Listing 4.2 Some of the interesting strings that are actually contained in the file after it's been
edited a few times, even though there's no way to see some of them from within Word itself.
Adam, remember, I need some help this evening chasing that wiring
reroute down
at Marijan. I've got a tracer rented until 7:30, but I can't get
there with
it before 7, so we're going to have to work fast.
Directions to Marijan park: North on 315 to 161. Get off the highway
west on 161 and take the first left onto the road paralleling the
Head south a mile or so until you see the sign for the park entrance
on the
left. Take both right turns on the park road to get down to the
pavilion area.
See you there!
Everybody, tonight's Adam's surprise party - I'll get Adam there and
keep him
busy 'til 7:30
Will Ray
Will Ray
Microsoft Word 10.1
Figure 4.14. A Word document that I might have sent to someone.
The danger, of course, is not unique to Microsoft products. These problems are inherent in the notion
that software that you run can act upon the system with your permission and authorization. If a
programmer makes a simple error that accidentally causes an application to store data it wasn't supposed
to, or to read another part of the disk space you control, there's nothing in the action of the system to
either inform you of the error or to allow you to prevent it. Your most useful defense is to pay attention
to the warnings that people send to security newsgroups and Web sites, and to act upon them by
avoiding or patching software that displays problems of this nature. It doesn't hurt to take a look in files
that you think you know the contents of, either—you'll probably be surprised at what you sometimes
find there.
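The strings utility mentioned above can be approximated in a few lines, which makes its mechanism clear: it simply scans a file's raw bytes for runs of printable characters, which is exactly how the leftover text in Listing 4.2 can be spotted. The following is a minimal Python sketch of that idea; the sample "document" bytes here are invented for illustration, not real Word file contents.

```python
# Minimal sketch of what the Unix `strings` utility does: scan raw bytes
# for runs of printable ASCII, which exposes "deleted" text still lurking
# in a binary file.
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Return runs of printable ASCII at least min_len characters long."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

if __name__ == "__main__":
    # Invented stand-in for a binary document carrying leftover text.
    blob = b"\x00\x01visible text\x00\x02\x03tonight's surprise party\x00\xff"
    for s in extract_strings(blob):
        print(s)
```

In practice you would run the real `strings` command against a suspect file, but the sketch shows why no amount of deleting text within Word helps once the bytes are still physically present in the file.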
Steganography and Steganalysis: Hiding Data in Plain Sight, and How to
Find and Eliminate It
Steganography, literally "secret writing," is the science of developing and/or applying techniques for
concealing one message within another. In its earliest form, it was a highly popular diversion in the form
of mind games, with writers encoding the answer to puzzles within the puzzle itself, or signing
anonymous poems with a steganographically hidden signature.
One famous example, Hypnerotomachia Poliphili, is a curious work of fiction from 1499 that blends a
treatise on architectural and landscape design with political theory and erotica in a dreamscape setting
where the protagonist wanders, in search of his love Polia. The book is left without any identified author
(though some ascribe it to Leon Battista Alberti, a contemporary author of architectural theory), but if
one takes the illuminated letter from the beginning of each chapter, one finds the sentence "Poliam frater Franciscus Columna peramavit." MIT makes the manuscript available online, including, conveniently, the illuminated headletters as indices to the chapters.
David Kahn, in "The Codebreakers: The Story of Secret Writing," notes that this translates as "Brother
Francesco Colonna passionately loves Polia." Francesco Colonna was a Dominican monk, still alive
when the book was published, lending some credence to the thought that he might have wished such a
work to be published anonymously.
Another example, perhaps more interesting, though currently thought of as infamous rather than famous,
concerns the apparent lack of solid historical evidence for the author known as Shakespeare. There has
long been a contention that the William Shakspere, or Shaxpere, Shagspere, Shackspere or Shakspre
(depending on which of the various legal documents of the time you might guess to contain his real
signature, if any of them do) of Stratford-on-Avon, with regards to whom a few concrete historic
documents exist, is not the same Shake-speare who wrote "Shakespeare's" Sonnets (this is the way the
Sonnets are attributed in their original printing—with a hyphenated name). Curiously, there are a
number of good arguments that suggest that the Shakspere recorded in the historical documents was
illiterate. One of the best of these was put forward by Sir Edwin Durning-Lawrence in his treatise "The Shakespeare Myth." There is simultaneously an abundance of suggestions, though whether by way of coincidence or
intent it cannot be said, that Sir Francis Bacon may have in fact been the author (http://www.sirbacon.org/links/evidence.htm). Included among these is the fact that Bacon considered Pallas Athene (Athena,
the "Spear Shaker") to be his muse. Recently, and rather controversially, there has been discovered what
appears to be a steganographically encoded cipher in the title page and dedication to the Sonnets.
Deciphered, the message reads "nypir cypphrs bekaan bacon" ("The Second Cryptographic Shakespeare," Penn Leary, Weschesterhouse Publishers).
As spelling was mostly a phonetic exercise at the time, and not bound by today's absolute rules, the
curious spellings are possibly excusable. It should be noted that John Napier (also Napeir, Nepair, Nepeir, Neper, Napare, Napar, and Naipper, depending on which documents of his you read) was a
contemporary of Bacon who developed the notion of mathematical logarithms, and that Bacon was
fascinated with cryptology and steganography, and the possibilities that Napier's mathematical ideas
brought to these fields.
More currently, steganography has made its way into the world of high-tech watermarking, where its
goal becomes not simply the hiding of some message, but doing it in such a way that even if the original
is altered, the message can still be detected. The primary driving force behind this move is the
proliferation of digital forms of many types of media, such as audio, video, and still imagery. Artists and
copyright holders are concerned that these items can be easily duplicated, depriving them of the rights and royalties to which they are legally entitled. In response, companies have tried to solve this problem by finding ways to embed digital signatures and watermarks into digital media. With sufficient industry cooperation, hardware and software can be designed so that the watermarks are detected, and the media rejected as an invalid file to copy, tape to duplicate, or CD to record.
Contrary to popular belief, these actions aren't specifically abridging consumer rights. They may be
making exercise of certain rights more difficult, but the widely held belief that it's legal to copy one's
CD if one doesn't sell it to someone (that is, "for personal use"), isn't upheld by what the laws actually
say (U.S. Code, Title 17). The law says that the copyright holder has the exclusive right to make and authorize copies (U.S. Code, Title 17, Chapter 1, Section 106). It makes no exception for "personal use." A number
of people wave the flag of the "Fair Use" doctrine as supporting their claim that personal use is
acceptable under the law, but they are either ill-informed, or deliberately attempting to confuse the issue.
Fair Use (U.S. Code, Title 17, Chapter 1, Section 107)
provides for certain conditions under which the copyright holder's rights may be circumvented without
the use being against the law. These include a number of very specific cases where the circumvention is
allowable, such as for satire, political commentary, to provide educational material in a classroom
setting, or to use the work as a reference in a critique. None of these apply, or come anywhere near
applying to copying a work for one's personal convenience. Thankfully, penalties for violation of a
copyright holder's rights are based largely on the potential revenue lost due to the action of the violator
(U.S. Code, Title 17, Chapter 5), so it appears unlikely
that publishing houses are going to invest in the legal expenses necessary to recover the $12 one might
have deprived them of by copying a CD. In response to this, many companies are endorsing
technologies that allow consumers to make a single copy of a digital original, but that prevent
subsequent copying of the copy.
Unfortunately, the large corporate publishing concerns that have purchased most of the copyrights to
valuable contemporary commodities have seen this as such a large threat that they have railroaded a
number of bad laws through the U.S. government, putting legal barricades in place against consumers
exercising rights that are clearly granted to them by other sections of the U.S. code. Chief among these
poorly thought-out packages of legalese is the act known as the DMCA—the Digital Millennium
Copyright Act (U.S. Code, Title 17, Chapter 12, Section 1201, and other sections). Among the ludicrously anticonsumer ideas put into law by this
boondoggle is a section that makes it illegal to "traffic in" any technology (that is, "invent, discuss, etc.")
that is primarily useful for circumventing a technological measure put in place to control access to a copyrighted work.
When coupled with the fact that the copyright for any material work you create and fix in some medium
automatically belongs to you from the moment of creation (though the copyright is not registered), the
result is that if any technology is used to protect any work you create (encrypted email, for example), it
immediately becomes illegal for anyone to "traffic" in any technologies that could circumvent that
protection. Put another way, at the moment some technology is first used to protect any copyrighted
work, it thenceforth becomes illegal for anyone to examine the protections in that technology, attempt to
break it, improve it, or even discuss it.
The upshot of this is that some lawyer out there would probably throw a hissy fit if we were to tell you
much at all about the field of steganalysis, which is the study of ways to find and eliminate such
watermarking or protection from works. At least one academician, a professor from Princeton, has already been threatened with legal action if he published a paper on a flaw he discovered in an encryption algorithm, and several Web sites have been successfully sued, forcing them to remove any mention of security holes that have been found in other products. The
silliness has even extended to some companies suing others over links being placed to their Web sites,
claiming that bypassing their front page by pointing a visitor directly to interesting internal content was
circumventing their ability to properly indoctrinate the visitor with the information from the earlier
higher pages and thereby "defeating a technological measure protecting a work." Perhaps most informative of the ulterior motives in these
cases, the legal beagles in at least one of the cases likened the practice to "changing the channel, or using
a VCR to skip over commercials in broadcast television," as such use of a VCR was clearly a violation
of the broadcaster's right to earn revenue.
Many of the major software and hardware manufacturers are jumping on the bandwagon to implement
methods to force the consumer to adhere to these laws (and to make it that much more illegal for the
consumer to circumvent them). Microsoft recently demonstrated an audio watermarking technology that
embeds an audio watermark into music so solidly that the watermark can still be detected and recording
prevented even if the source is being played aloud in a crowded room, and the recording is attempted
from this "live" source (,
news/print/0,1294,43389,00.html). See where this is going? Personally, we prefer the TiVo to the VCR, but they want to keep us from skipping commercials by making it illegal? Thank whatever God or gods you believe in for the hackers, the crackers, and the company that said "1984 won't be like 1984"! Apple has thus far stood impressively far from the crowd in their refusal to implement such anticonsumer technologies in their products, appearing at this point to be comfortable with allowing their customers to be responsible for their own actions, rather than treating them like a priori criminals.
Perhaps the saddest outcome, however, is that the application of the law has risen to the level of absurdity that cryptology and steganography experts were predicting, and much faster than anyone expected. It is now illegal to possess a certain prime number, as that prime number happens to be related in an interesting fashion to a method of breaking the particularly poor encryption that was ill-advisedly employed to protect DVDs from copying by the DVD consortium, who apparently expected that they could pull an algorithm out of thin air and have it remain secure. The following number is not a prime
(we don't want to get in trouble by possessing or distributing an illegal number), but there's an
interesting prime near it that can be decoded into a crack for the DVD encryption algorithm. If you'd like
to see how hard that prime-number factoring business that we discussed earlier in this chapter really is,
you can see whether you can figure out what the prime is with a few well-chosen guesses. It's only 1401
digits—shouldn't be that hard...
9118647666 2963858495 0874484973 7347686142 0880529442
As mentioned earlier, the only way to develop secure algorithms is by subjecting them to constant and
vigorous public scrutiny. The end result of this critically flawed law will be that we, the consumers, are
going to be forced to accept much poorer quality encryption software to protect our data, while the
criminal element that would circumvent the protections for profit will have it much easier, as the
algorithms will have not been subjected to rigorous academic review.
Mac OS X Steganography Products
At this point, there isn't much software out there to allow you to experiment with steganography, but you
can expect that there will be more toys coming, as the battle heats up between corporate interests trying
to protect their right to a profit and the hackers interested in advancing the arts of cryptography and
steganography. In the meantime, you can examine a few interesting (or in one case, at least amusing)
products if you're interested in experimenting a bit, or using steganographic techniques to conceal
textual data.
- Adobe's Photoshop includes a pair of Digimarc filters named "Embed Watermark" and "Read Watermark," which respectively embed a copyright statement into an image as a watermark or decode it again.
- Precious Gem software distributes Corundum (Corundum/Corundum.html), which allows you to steganographically hide textual information in images.
- Spam Mimic provides the steganographic service of hiding short text messages in email that looks like spam. Nobody looks at spam email, right? Where better to hide something than in a plain-sight email transmission that people will do their best to avoid seeing?
These products conveniently display complementary features. The purpose of the Photoshop filter is to
allow watermarking images so that the copyright information is indelibly embedded in the image. This is
an attempt to provide for some way of proving that commercial digital images are copyrighted, so that,
for example, as images are snagged off of a Web site and transferred around the Internet, the ownership
information remains with the file. The copyright watermark embedded by Digimarc is immune to most
simple transformations of the image: Cropping it, rotating it, mirroring, skewing, mild blurs, noise, and
most other simple adulterations won't erase the watermark. Unfortunately, in a brilliant display of what
we've been repeating about security not being secure unless it's open, tested, and verified, the Digimarc/
PictureMarc system is tragically flawed in its ability to actually secure an image. As Fabien Peticolas
demonstrates at, it takes only a few minutes
of brute-force attempts against PictureMarc 1.51 images to guess the code necessary to remove or
modify the original watermark and replace it with one of your own. Trying to eradicate it through the
use of image manipulations probably takes longer.
The Corundum application, on the other hand, is designed with the intent of embedding information so
that it can't be found, but not so that it can't be damaged by modifying the image. We'd show you before
and after images of what things look like with the data encoded into them, but there's really little point—
to the limits of reproduction in this book, the images with data encoded into them are indistinguishable
from the originals. While you're waiting for some of the two dozen or so steganography projects
underway on Sourceforge to make their way to the Mac, you might want to check out the Web pages of
Johnson and Johnson Technology Consultants, who have published and provided some very nice papers on steganographic techniques, steganalysis, and the current state of the art. Specifically, Neil F. Johnson's papers should be interesting to those considering how this technology might be either helpful or damaging to their computing security.
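To give a sense of how a tool like Corundum can hide text in an image so that the result is indistinguishable from the original, here is a hedged sketch of the classic least-significant-bit technique. It operates on a plain bytearray standing in for pixel data; a real tool would read and write an actual image format, and Corundum's actual encoding may differ.

```python
# Least-significant-bit steganography sketch: each "pixel" byte donates its
# lowest bit, a change far too small to notice in a real image.

def hide(pixels: bytearray, message: bytes) -> bytearray:
    """Embed message bits, preceded by a 16-bit length, in the low bits of pixels."""
    bits = []
    length = len(message)
    for i in range(15, -1, -1):          # 16-bit big-endian length header
        bits.append((length >> i) & 1)
    for byte in message:
        for i in range(7, -1, -1):       # message bytes, MSB first
            bits.append((byte >> i) & 1)
    if len(bits) > len(pixels):
        raise ValueError("carrier too small")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return out

def reveal(pixels: bytearray) -> bytes:
    """Recover a message hidden by hide()."""
    bits = [b & 1 for b in pixels]
    length = 0
    for bit in bits[:16]:
        length = (length << 1) | bit
    data = bytearray()
    for i in range(length):
        byte = 0
        for bit in bits[16 + 8 * i:16 + 8 * i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return bytes(data)
```

At 8 carrier bytes per message byte plus a short header, even a modest 640x480 grayscale image could carry roughly 38KB of hidden text, with no pixel value changing by more than one.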
Spam Mimic, by way of comparison, doesn't try so much to encode the information in an invisible fashion as to make the carrier so common and ugly that no one will give it a second look. If you enter a short message such as the following:
Hi Joan, I'll be home by 5:30.
You'll be presented with a result that you can email that starts off:
Dear E-Commerce professional ; This letter was specially selected to
be sent to
you . This is a one time mailing there is no need to request removal
if you
won't want any more . This mail is being sent in compliance with
Senate bill
1624 , Title 2 ; Section 304 ! This is different than anything else
seen ! Why work for somebody else when you can become rich within 57
Who's going to bother looking at that to see whether it has interesting information in it?
Although it's not available for Mac OS X yet, another application you might want to keep an eye out for
is Hydan from Columbia University student Rakan El-Khalil. SecurityFocus reports that at a small
computing security conference held in February 2003, Rakan demonstrated an application that uses
other executable applications as a place to hide information.
Cleverly, this application does not hide the information by stuffing it into unused nooks and crannies in
the executable, but instead by subtly modifying the way that the executable host performs its
calculations, and thereby using the code of the application itself as the carrier for the hidden information.
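Hydan's idea can be shown in miniature: wherever a program offers two semantically equivalent ways to express an operation (say, adding k versus subtracting -k), each occurrence can silently carry one bit. The snippet below is a toy sketch of that principle applied to lines of assembly-like text; it is not Hydan's actual x86 rewriting.

```python
# Toy illustration of Hydan's principle: encode bits by choosing between
# two equivalent forms of the same instruction.

def encode(bits):
    """Emit one instruction line per bit; both forms add 5 to eax."""
    return ["add eax, 5" if b == 0 else "sub eax, -5" for b in bits]

def decode(lines):
    """Recover each bit from which equivalent form was chosen."""
    return [0 if line.startswith("add") else 1 for line in lines]
```

Because both forms compute the same result, the rewritten program behaves identically, and only someone who knows which instruction choices are significant can read the hidden channel.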
The take-home lesson that you should get from this chapter is that if you want your data to be proof against others looking at it, you need to take some fairly strong precautions to keep others from seeing the content. If you try to hide it via encryption, you need to make certain that the encryption is strong enough to stand up to a concerted attack. Although any key may theoretically be guessed with
sufficient computing power, with a secure algorithm it is possible to make the amount of power required
to guess the key so large that it cannot be practically accomplished, regardless of the length of time or
amount of software thrown at the problem.
To accomplish this level of protection, however, requires an algorithm that has no weaknesses, that can
allow the key to be back-calculated from the ciphertext only through the application of large amounts of
computing power, and in which the key really is as secure as its size or complexity would imply.
Experience has shown, time and time again, that unless an algorithm has been exhaustively examined
and tested by a large population of experts, it is very likely to be trivially compromised within a short
time of its release. For every algorithm that proves to be secure against attacks for a few years,
thousands are developed, tested, revised, tested, and eventually abandoned as insecure. Security
implementations arrived at and not tested to the point of exhaustion are found to be lacking, over and
over and over. The top authors of security methods will tell you, without hesitation, that you shouldn't
trust even their algorithms until they've been proven for at least a few years in the field.
Mac OS X provides you with several alternatives for how you might accomplish such protection, and
more are likely to become available. However, as cryptography and steganography become the next hot
arena of conflict for the hacker and corporate cultures, many of these might not be things that the
government and/or the corporate lobbyists want you to see.
The government and mega-media-conglomerate corporations would have you believe that legal restrictions on the development or exploration of cryptography, cryptanalysis, and other information-hiding or -disguising technology are for your own good. They would insist that such restrictions protect the American economy, protect you from evil crackers on the Internet, and, in the latest round of attempting to capitalize on any available bandwagon, that darn it, they're just plain patriotic and will help the country fight the terrorists. When you consider whether you want to believe them or not, don't even think about the
the terrorists. When you consider whether you want to believe them or not, don't even think about the
fact that in the last government foray into attempting to legislate cryptography, they would have
mandated a "private" backdoor into any encrypted data by imposing a legal requirement that all
encryption be done by one government-sanctioned application, to which they held the keys, and making
all others illegal. Just keep in mind that the encryption methodology that the government would have imposed upon everyone uses a 40-bit key, and that the DES algorithm with its 56-bit key can be brute-force cracked in as little as 22 hours (depending on how much hardware you have to throw at it). A 40-bit key should be able to be cracked 2^16 times faster. That's about 65,000 times faster. Reports are that in 1996, when some of this lunacy was being proposed, it could be done in about 5 hours for $20 in hardware from Radio Shack, or in about 26 seconds if you were willing to spend a little more.
Make no mistake; it is not impossible to invent a secure algorithm "in the dark," and those who make the
laws aren't universally in collusion with either the mega-corporate interests or with the ultraconservative
faction who wants a camera in every room of every house. Nor are they universally ignorant. Past
experience, however, indicates that it is incredibly unlikely that good security can be either pulled from
thin air or legislated into existence. Therefore, by this point in the chapter you should have come to understand that legal proscriptions against examining security, such as those put in place by the DMCA, are nothing but an attempt to protect others' agendas at your expense. This is one case where the people evading or
completely disobeying the laws regarding computing security just might be your best hope for a secure
computing future.
Chapter 5. Picking Locks: Password Attacks
Typical Password Mechanisms
Testing Password Security
Improving Password Security, and Alternatives to the Standard Password Mechanisms in Mac OS X
Passwords—what an idea. Using a small word as a shared secret to identify the bearer. In the security
world, one often hears of identifying oneself by either what you have, what you know, or what you are.
A password is an attempt to prove identification through proving what you know, and passwords have
been a standard at the gates to castles, the doors to clubs, and the prompts of computers for years. Unix
passwords can be a bit more complex than spoken passwords, but the standard scheme limits them to 8
characters in length. When extended to uppercase and lowercase letters, nonsense words, and special characters,
there are 6,634,204,312,890,625 passwords enterable from a Pismo PowerBook keyboard available for
use in the standard Unix authentication scheme (that is, 95 possible keyboard characters in each of 8
positions—95x95x95x95x95x95x95x95 = 6,634,204,312,890,625 possible passwords).
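You can reproduce the password-space figure directly, assuming 95 printable keyboard characters in each of 8 positions:

```shell
#!/bin/sh
# 95 printable keyboard characters in each of 8 password positions.
space=1
for position in 1 2 3 4 5 6 7 8; do
    space=$((space * 95))
done
echo "$space"   # 6634204312890625
```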
That's almost seven million billion possibilities—sounds like a lot, right? Perhaps in the days when
passwords were spoken, and the consequences of giving the wrong one at the castle gate were that
someone took a swing at your head with an axe, seven million billion possibilities was a serious
deterrent. Unfortunately, although we keep using the idea of passwords to identify ourselves, computers
can make the act of testing passwords much faster, and much safer for an intruder. Initially, the Unix
password authentication system was designed to make randomly testing passwords difficult. The
algorithm was intentionally designed to be extremely slow, so that although the consequences of trying a
password and getting it wrong might be less drastic, it still required a very long time to crack passwords
by simple guessing. Unfortunately, it didn't take long for people to write faster versions. Now, a simple
brute-force approach to password cracking can test anywhere from fifty thousand (G3, 450MHz, our test
results drawn from using John the Ripper on a Sage iMac DV+) to millions (reported speeds on top-end
Intel P4 boxes) of possibilities in less than a second. It still takes upwards of a half
billion seconds (that's over ten years) to crack every possible password on a high-end desktop processor.
But, as machines become more powerful, that time period inevitably will be reduced. In fact, if
processor speed increases continue to track the trends that have been established over the last 20 years,
in 5 or so years we'll be able to explore the entire password space with an investment of less than a
single year of CPU time. More discouragingly, shorter passwords, or passwords chosen from a smaller
character set or using predictable patterns can be guessed in a correspondingly shorter time period. Kurt
Hockenbury estimated in 1997 that a 10-machine flock of then high-end Pentium Pro computers could
crack every 8-character password composed of lowercase and numeric characters in just over half a
year. By now, with both faster processors and faster algorithms available, that password space should
fall easily with less than a month of processor time expended.
Hockenbury's crypt3.html page details his findings.
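The lowercase-plus-numeric space Hockenbury examined is far smaller than the full 95-character space; with 36 possible characters per position it works out to about 2.8 trillion passwords:

```shell
#!/bin/sh
# 26 lowercase letters + 10 digits = 36 choices in each of 8 positions.
space=1
for position in 1 2 3 4 5 6 7 8; do
    space=$((space * 36))
done
echo "$space"   # 2821109907456
```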
Of somewhat more concern, if someone really wants to break into your system, it doesn't take a top-secret
government agency to bring mountains of computing power to bear on a problem. Like the
SETI@home project, which linked spare CPU cycles on millions of users' computers together to scan
deep space for signs of alien intelligence, today's crackers can harness the power of computers all over
the Internet. Most recent computer viruses have been distributed via the Internet (predominantly through
poorly designed email clients), and have been designed to take over the targeted computer and give
control of the "zombie" machine to a "master" system that can direct its actions over the Internet. The
payload in the recent viruses has been software intended to knock other computers on the Net offline
with denial of service attacks (see Chapter 9, "Everything Else"), but it could just as well be a password-cracking
application. In one of the recent viral incidents, it is estimated that the Code Red worm
(technically a worm, because the only action it requires on the user's part is ignorance) managed to infect
360,000 machines within its first 14 hours of life (
newsletters/feb2002/classof2001.htm). If the payload had been a password cracker, that could have been
360,000 machines working simultaneously on cracking your password. Somehow, a year of CPU time
doesn't sound like that much when it's spread over 360,000 machines. Actually, it comes out to
something under a minute and a half.
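The per-machine arithmetic is straightforward: a year is roughly 31.5 million seconds, and divided across 360,000 machines that comes to well under a minute and a half apiece:

```shell
#!/bin/sh
# Seconds in a (non-leap) year, split across 360,000 cooperating machines.
year=$((365 * 24 * 60 * 60))
echo "$year"               # 31536000
echo $((year / 360000))    # 87 seconds -- under a minute and a half each
```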
To make matters worse, people don't naturally make use of anything resembling the entire password
space. A statistician might express the randomness available in those 6.6 million billion possible password
choices by saying that a password chosen from the space has approximately 53 bits of entropy. That is, 2
raised to the power 52.558844 is approximately 6,634,204,312,890,625.
Using entropy in this fashion is a way of expressing the actual randomness in what is intended to be a
random string. For example, if you know that your users have chosen passwords that are eight characters
long, you might suppose that they've randomly chosen amongst the full 6.6 million billion possible
passwords in the password space. If, in fact, your users have chosen passwords composed of only
lowercase a and b characters, they would still be chosen from the full password space, but only out of a
256-member subset. In this case we would say that the users' passwords display 8 bits of entropy (2^8
= 256).
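The subset arithmetic: two possible characters in each of eight positions gives only 2^8 distinct passwords, no matter how large the surrounding space is:

```shell
#!/bin/sh
# Only 'a' or 'b' in each of 8 positions: 2^8 = 256 distinct passwords,
# i.e. 8 bits of entropy instead of the ~53 bits of the full space.
echo $((1 << 8))   # 256
```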
If users could enter any of the 128 characters in 7-bit ASCII, the result would be 8-character passwords
with 56 bits of entropy. Each of the 95 enterable keyboard characters carries ~6.6 bits of entropy,
but it is quite common for less than that amount of entropy to show up in final password choices. The
written English language, for example, because it tends to use certain characters more frequently than
others, and certain character combinations much more frequently than others, appears to have roughly
only 1.3 bits of entropy per character. (Applied Cryptography: Protocols, Algorithms, and Source Code
in C, by Bruce Schneier, covers Shannon's famous estimate, as well as other cryptographic statistics and
plain old interesting theory and practice.) This means that if a user chooses, as many do, a dictionary
word for their password, they've accumulated only some 10.4 bits of total entropy—a much smaller
amount of randomness (10.4 bits, assuming they used the full 8 characters available for the password,
that is—shorter passwords are logarithmically weaker). Because clever password-guessing software can
make use of the predicted character-use patterns to search the most likely combinations first, a badly
chosen password can reduce the problem of searching for passwords from one that takes several years to
accomplish to one that takes only a fraction of a second. Put another way, to create a password that is as
strong as one using 8 characters of completely random data, a password composed of English-language
words would need to be over 40 characters in length—a large fraction of the length of one of the lines of
text in this book. Of course, any sentence-like structure in the password would significantly reduce the
entropy available, again making it easier to guess.
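The 40-character figure follows from the entropy estimates: matching the ~53 bits of an 8-character random password at 1.3 bits per character of English takes roughly 40 characters. awk handles the floating-point arithmetic:

```shell
#!/bin/sh
# Bits of a random 8-char password: 8 * log2(95) ~= 52.6.
# Characters of English needed at 1.3 bits/char: 52.6 / 1.3 ~= 40.4.
awk 'BEGIN {
    bits = 8 * log(95) / log(2)
    printf "random bits: %.1f, English chars needed: %.1f\n", bits, bits / 1.3
}'
```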
Even people trying to be random don't produce particularly random results. Several studies have
determined that strings of characters "randomly" chosen by people aren't actually all that random, and
the variability in each position does not approach the ~6.6 bits of available entropy. This most likely has
to do with the unconscious associations we form with respect to letter patterns and usage in written
language, and manifests itself in "randomly" chosen strings displaying significant word-like structure
when constructed by a person.
This chapter covers the flaws in the basic notion of password protection for services, files, and other
resources, as well as what you can do to limit your vulnerability when picking passwords. Not all
password schemes will necessarily conform to the exact encryption or storage standards we'll cover here,
but they're all subject to the same basic conceptual flaws.
Typical Password Mechanisms
Although there are almost as many different mechanisms in place for using and storing passwords as
there are applications that use them, there are quite a few commonalities between the mechanisms.
Username and Password
Most password mechanisms require that you know both who you are and some token to prove that you
really are who you claim to be. That is, you are required to provide a username and a password to
confirm that choice. This is obviously useful in enabling the system to distinguish between different
users, as many users can securely have exactly the same password, as long as none of them knows the
others are using it as well. In a way, the user information can also be thought of as adding extra entropy
to the password. If an attacker has to guess at both a user ID and a password, it's more difficult than
guessing at the password alone. It's not a whole lot more entropy, and usernames almost certainly
display even less entropy than dictionary words, but for every bit of additional entropy, a brute-force
attack is going to have to try twice as hard. To make this additional potential entropy work for you,
disguising the list of users on the system is necessary. With Mac OS X, this means turning off the iconic
login window and making people type their usernames as well as a password. Users will initially find
this bothersome, but in the long run it usually saves them time, as well as makes things just a bit more
secure. If you have more than a handful of users on the system, it takes many of them a considerable
amount of time to find their user ID in the login window scrolling list, and from observation, the
frustration involved with this step is much greater than that involved in typing a user ID. It also means
that you need to disable any services that might provide information to the network regarding what user
IDs are available on the system. Finger is one of the primary culprits, but NIS and NetInfo (covered
in later sections of this chapter) are also potentially troublesome.
Of course, if every system has a user named root on it, one with superuser capabilities, that's an obvious
target for password attacks. Many systems therefore disable root login both from the network and the
console. Users are allowed to su to root, but direct login as root is prohibited. This is a good idea as a
general practice because root is typically a disembodied account with no specific person attached to it.
Requiring users to su to root, rather than allowing root logins, provides an additional level of
traceability of action because it prevents anyone from logging into the machine and anonymously using
root powers.
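As a concrete sketch of this practice: on systems running OpenSSH, network root logins are commonly refused with a single directive in the sshd_config file (the path varies by system; this is an illustration, not a configuration taken from this book's examples):

```shell
# sshd_config (often /etc/sshd_config or /etc/ssh/sshd_config)
# Refuse direct root logins over the network; administrators must log
# in as themselves and then su (or sudo) to gain root powers.
PermitRootLogin no
```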
Password Storage
Password storage schemes are wide and wildly varied. The worst store the password in plaintext form,
doing only the most trivial job of hiding them from prying eyes.
The best encrypt the passwords well enough that they can't be decrypted at all from the stored value,
providing only a one-way encryption of the information and necessitating brute-force methods to
compromise the information. Remember, however, that no matter how carefully the passwords are
protected, there just aren't that many of them, especially when people choose poor passwords with little
entropy, and this makes the brute-force approach applicable to any scheme.
Plaintext Storage
As bad an idea as it sounds, a few programs are written in such a fashion as to store passwords in
plaintext. Thankfully, these tend to be applications such as mail servers that hold user accounts as
separate from system accounts. CommuniGate Email server from Stalker Software (http://www.stalker.
com/CommuniGate/) is one package that can be configured to behave in this fashion. CommuniGate
stores user IDs and passwords in files on a per-user basis, typically in the directory /var/
CommuniGate/Accounts/, where its behavior can be configured separately for each user. One
possible configuration is storing the user ID and password as cleartext. For example, in the following
configuration, the user jray's account is set to use the cleartext password mysecret.
# cat /var/CommuniGate/Accounts/jray.macnt/account.settings
AccountEnabled = YES;
LoginEnabled = YES;
MaxAccountSize = unlimited;
MaxWebFiles = 0;
MaxWebSize = unlimited;
Password = mysecret;
PasswordEncryption = clear;
PWDAllowed = YES;
RealName = "John Ray";
RequireAPOP = NO;
RPOPAllowed = YES;
RulesAllowed = Any;
UseAppPassword = YES;
UseExtPassword = YES;
UseSysPassword = YES;
For CommuniGate, the user IDs and passwords stored in this way are mail-account-only user IDs and
passwords, so if the cleartext file is read without authorization, the only information the culprits get is
names and passwords for mail accounts. This means that they could access and change users' email, but
the system passwords that allow your real OS X users to log in are not necessarily jeopardized. We say
"necessarily" because users have a bad habit of using the same password for multiple different accounts,
and so they very well might have chosen the same passwords for their logins as for their mail. Necessity
—and system administrators working in its name—don't do much to discourage this behavior at all by
forcing users to change their passwords frequently. Experience indicates that a large fraction of users
who have both email and system accounts will have chosen the same passwords for each.
This particular configuration is set up with cleartext passwords because this example system has a large
population of non-computer-savvy users who require both dial-in and email access, and these needs are
serviced with separate software solutions. The users really need to have everything made as convenient
as possible, so mandating that they choose two good passwords for the system is impractical—they're
disinclined to choose even one.
We are, as they say, stuck between a rock and a hard place: If we synchronize the systems to make
things easier for them, they have only one password, and both systems are vulnerable if one is
compromised. If we keep them separate, but force them to make good passwords, chances are they'll
make just two passwords, and swap them back and forth each time their password expires, which is
probably worse than one good password. Overall, there's no good solution because, as you should be
coming to understand, passwords are a poor solution to the general problem of proving that you are who
you say you are.
A better storage system for passwords, and the one that's typically used in Unix systems, is encrypting
the password and storing the ciphertext. The encryption may vary between quite weak and very strong
encryption, but the general purpose remains the same: to prevent people from using what they might see
(if they were to look at the stored passwords) to guess what the passwords are. Sometimes, such as with
the Digimarc 2-digit "PIN" key that was mentioned in the steganography section of the previous chapter,
it makes almost no difference whether the password is encrypted because brute-force attacks against the
system are so simple.
When the key is more complex, however, a strong encryption system can make brute-force attacks much
more difficult to carry out. With the best of these systems, the encryption algorithm not only makes the
password difficult (or better, impossible) to decrypt, but also is computationally expensive, so as to
make brute-force attacks impractical because of the time required to execute the attack. If the only route
to a crack is through brute-force examination of every possible key, each additional bit of key length
doubles the computational cost involved in discovering the key. Because of differences in exactly what
different algorithms mean when they discuss key length, you can't take this as an absolute method of
comparison between the strength of different algorithms, but it's a good first-order approximation.
In many cases, the authentication system is set up so that the stored password cannot (through any
known mechanism) be decrypted. This may seem counterproductive, in that one wants to be able to
compare the password provided by the user with the one that was stored. This comparison, however, can
be carried out just as easily with the encrypted password. The user's password is stored encrypted, and
the password entered by the user as he attempts to authenticate is encrypted, then these two encrypted
values are compared. It is possible with some storage schemes (MD5 checksums, for example) that there
are multiple passwords that hash to the same stored value, but finding a hash collision is no more likely
than finding the correct password, so this is of only minor concern.
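The store-and-compare idea can be sketched in a few lines of shell. This uses md5sum (as found on most Linux systems; Mac OS X ships a similar md5 tool) purely to illustrate one-way comparison; it is not the actual crypt(3) scheme, and an unsalted hash like this would be far too weak for real password storage:

```shell
#!/bin/sh
# Store only a one-way hash of the password, never the password itself.
stored_hash=$(printf '%s' "mysecret" | md5sum | awk '{print $1}')

# At "login" time, hash whatever the user typed and compare the hashes;
# the stored value is never decrypted.
check_password() {
    attempt_hash=$(printf '%s' "$1" | md5sum | awk '{print $1}')
    [ "$attempt_hash" = "$stored_hash" ]
}

check_password "mysecret" && echo "access granted"
check_password "guess123" || echo "access denied"
```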
For example, the standard Unix password system stores user IDs and passwords in a plaintext file, which
any user can read. The passwords stored there are encrypted through an application of the DES
encryption algorithm (Chapter 4, "Theft and Destruction of Property: Data Attacks"). A typical
password file might look like this:
ftp:*:99:20:anonymous ftp:/store/ftp:/bin/csh
guest:2dj7u1tEvDLpk:101:8:Guest Account:/home/guest:/bin/csh
ray:OPA7l1cyVYwrs:515:10:William C. Ray:/priv/home/ray:/bin/tcsh
skuld:CZ4a7y1AlpgMW:100:5:System Software Administrator:/usr/local/
joray:pA2lf397a0mYa:2001:21:Joan Ray:/priv/home/joray:/bin/tcsh
jray:a3kSXs1ROd17p:2002:21:John Ray:/home/jray:/bin/tcsh
Here guest, ray, skuld, joray, and jray have active accounts with passwords set, whereas
ftp (like system accounts such as nobody, daemon, sys, bin, and sync) shows a * character instead of an encrypted password.
This effectively locks the account out (root can still su to it) because the * is not a valid password
encryption, and therefore no entered password can ever match this value. In traditional Unix, this file is
/etc/passwd (or /etc/master.passwd in Mac OS X, if you've enabled "BSD Configuration
Files" from the Directory Services utility, and set it to use the password file), and each user's encrypted
password is the value immediately following the first colon of the line. In an attempt to make the
decryption even more difficult, the standard system works in reverse of the way that might initially seem
to be appropriate. Instead of encrypting the password through the use of some key, and storing the
encrypted value—which would subject the entire scheme to failure if the key should happen to be
discovered—in the standard use, the password itself is used as a key to encrypt a known value.
The form of this use is worth repeating, as it's a clever way to use an encryption system with
a strong algorithm. As the developers of the DVD encryption system discovered the hard
way, if you embed the key to your encryption algorithm in your software, a single failure
that allows someone to discover that key can lead to a failure of the entire system. It is
therefore a poor design to store passwords (or any other data) encrypted with a key that
you're also storing in the same software. To alleviate this concern, for data that is on the
order of the size of passwords, one can reverse the usual use, and use the provided password
as the key, and encrypt a known value instead. For the purposes of comparing the stored
value to the encrypted result of what the user enters, which is encrypted and which is the key
does not matter: Any particular key and data value will still always encrypt to the same
result, but using the password as the key prevents the key from being a stored value, and
therefore limits any compromise that guesses a key from compromising anything but that
individual value.
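Given the colon-separated format shown earlier, pulling fields out of a password entry is a one-line awk job. A sketch, run against a sample line rather than a live password file:

```shell
#!/bin/sh
# Field 1 is the username, field 2 the encrypted password (or * for a
# locked account), field 3 the numeric UID.
line='jray:a3kSXs1ROd17p:2002:21:John Ray:/home/jray:/bin/tcsh'
echo "$line" | awk -F: '{ printf "user=%s hash=%s uid=%s\n", $1, $2, $3 }'
# prints: user=jray hash=a3kSXs1ROd17p uid=2002
```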
NIS (Network Information System) Passwords
The previous section illustrated password storage in the classical single-password-file on a single
machine style. More typically in clustered environments, systems use a server to transfer the passwords
between all the clients, enabling each client to have identical information regarding each user. In the
traditional Unix environment, this is facilitated by the NIS (Network Information Services, formerly
known as the YP—yellow pages—system), which functions as a network database to serve user ID and
password information to any collection of machines that want to cooperate as a cluster.
There are vulnerabilities specific to this service, but the general concepts apply in some
fashion and at some level to any service that provides system passwords from remote servers.
NIS information is served as collections, each of which is given a unique (one should hope) NIS domain
name. Any machine that wants to participate as a peer in a cluster subscribes to the domain by name. It
does so by broadcasting a request on the network for any server that claims to serve a domain by that
name. The system was designed to make it simple to build cooperative clusters with provisions for
redundant service for user authentication, while requiring as little human intervention in copying data in
circles as possible. For this purpose it works well: Any machine can join the cluster simply by
requesting information from the named domain and then acting upon it. Likewise, the cluster can easily
be made fault tolerant, because the clients aren't locked into a particular server for the authentication
information. Because clients query for authentication information by broadcast, slave NIS servers can be
configured to automatically mirror information from the master, and to respond to broadcast queries for
the information if they don't see a response from the master.
This scheme, although widely used with clustered Unix machines, is insecure on two fronts. It was
devised in the days when all Unix machines were run by trained and trusted system administrators, and
there was an implicit understanding that if a Unix machine were attempting to connect, it was very likely
to have a trusted operator at the helm. Therefore, the only effective security built into the system is the
capability to restrict responses to NIS information requests to only particular trusted subnets, and the
notion that keeping the NIS domain name secret will prevent unauthorized queries to that domain. Like
many early security measures built into Unix, these ideas have been antiquated by the advent of desktop
Unixes such as Mac OS X and Linux. If an unauthorized client wants to access the information in the
domain (such as to gain access to the password file), it has only to broadcast a request for that domain
and be within one of the subnets that the server is configured to believe is secure, and a server will
respond with the requested information. Unless we have absolute control over all machines on the
"trusted" subnets, and absolute certainty that no other machines can be added, in today's environment we
cannot responsibly believe that only authorized machines will request the information. Likewise,
although the NIS domain name can be considered to be a password of sorts, it's unwise to rely on its
remaining unknown as a strong protection against unauthorized access to the information available
through the domain.
If you have an NIS domain to which you can subscribe, it is quite simple to access the password (and
other) information stored in it and use this information for authentication on your machine. You need to
contact the administrator of the system serving the domain to determine the domain name, whether your
machine is permitted access, and to confirm that your subnet is listed among the trusted subnets for that
domain on the server. After you have done this, follow these steps to configure your machine to make
use of the information:
1. Configure your machine to accept user information from NIS.
2. Configure the NIS domain name for your machine.
3. Bind your machine as an NIS client to the domain.
Each of these steps needs to be done while you are su'd to root, or via sudo. After you are finished,
you will be able to log in to your machine as any user specified in the domain, using the passwords that
the domain specifies and the home directory and shells configured in that domain for the users, just as
though you had created the users locally in your own NetInfo system. As of this writing (OS X 10.2.2),
however, there's a bug with the GUI login application that prevents it from using the NIS information.
(NIS login and other information functions properly through ssh (slogin) login and su functionality.)
Using NIS for Lookups
The first step of the process is accomplished by adding a flag specifying that lookupd should obey
NIS-provided information for users. You can accomplish this by creating the file /etc/lookupd/
users and specifying appropriate options for LookupOrder, or by creating the NetInfo directory /
locations/lookupd/users and populating it with the same information, and then restarting
lookupd. If you prefer the file route, create the file /etc/lookupd/users and place in it the
following line:
LookupOrder: NIAgent YPAgent
This tells lookupd that when it's looking for user authentication information it is to first go to your
NetInfo system and inquire there (NetInfo is a NeXT-inspired service that provides more sophisticated
network-information-database-type services than NIS, and is the default information service for OS X),
and if matching information is not found, to subsequently query the NIS system. If you prefer to add the
information to your NetInfo domain, you can do so either by using the NetInfo Manager application in
your Applications/Utilities directory, or by using the command-line niutil commands shown here.
(niutil is covered in greater detail in the next section.)
niutil -create . /locations/lookupd/users
niutil -createprop . /locations/lookupd/users LookupOrder NIAgent YPAgent
Alternately, you can create the content for the directory as a text file and then insert the content by using
the niload command to populate the directory. First create a text file containing the following content:
"name" = ( "lookupd" );
"name" = ( "users" );
"LookupOrder" = ( "NIAgent", "YPAgent" );
Because lookupd actually controls the search order for a number of different types of services, it
might be better for you to specify a default order as well. Also, including the CacheAgent as the first
source to check can speed lookups on your system considerably in some situations. These changes make
the file look like the following:
"LookupOrder" = ( "CacheAgent", "NIAgent", "YPAgent" );
"name" = ( "lookupd" );
"name" = ( "users" );
"LookupOrder" = ( "CacheAgent", "NIAgent", "YPAgent" );
In either case, the changes get loaded into your NetInfo domain with the following commands (assuming
you've named your text file lookupd.txt):
niutil -create . /locations/lookupd
niload -r /locations/lookupd . < lookupd.txt
Selecting an NIS Domain
The second step is to configure your machine to subscribe to a particular NIS domain. It is possible to
subscribe to (and switch among) multiple domains, but this is beyond the scope of this book. For a
single domain, the process is simple: Just issue the domainname command, where <domainname> is
the name of the domain you plan to subscribe to:
domainname <domainname>
If you want to make this subscription to the domain permanent, then edit the /etc/hostconfig file
so that its NIS entry reads:
NISDOMAIN=<domainname>
If you make this change, the next time the machine is rebooted, all this will be configured automatically.
Binding Your Client to the NIS Domain
The third step is to bind your machine to the domain. If you are on the same broadcast subnet as the
server for the domain, to subscribe to a domain you need to know only the NIS domain name.
(Remember, this is not the same thing as your DNS domain.) If you are on a different subnet, you need
to know both the domain name and the server's IP address. If you are on the same subnet, simply issue
the following command:
ypbind
If you need to bind to a server on a different subnet, the process is only a little more complicated (though
it may be considerably more frustrating—the first step sometimes takes a while to respond when it can't
talk to the NIS server immediately). If your machine is on a different subnet from the server, use the
following commands instead:
ypbind -ypsetme
ypset <ip address of NIS server host>
Taken all together, if you were trying to bind your machine to the NIS domain rubbermonster, and
the server were at a known IP address, the process after adding NIS to the lookupd configuration would look
like this:
domainname rubbermonster
ypbind -ypsetme
ypset <ip address of NIS server host>
The second step may take some time. If you find it annoying, start another terminal window, log in as
root, and run the ypset command from it. After these commands have all been executed successfully,
you should be able to then run the ypcat command and get the same sort of output shown earlier as
being in the /etc/passwd file. Here I've left my full prompts in the output to more clearly display
the sequence of operations:
Racer-X ray 7# nidump -r /locations/lookupd .
"name" = ( "lookupd" );
"name" = ( "users" );
"LookupOrder" = ( "NIAgent", "YPAgent" );
Racer-X ray 8#
Racer-X ray 9#
domainname rubbermonster
ypbind -ypsetme
(I got bored and ran ypset in another window)
Racer-X ray 10#
Racer-X ray 11#
ypcat passwd
ftp:*:99:20:anonymous ftp:/store/ftp:/bin/csh
guest:2dj7u1tEvDLpk:101:8:Guest Account:/home/guest:/bin/csh
ray:OPA7l1cyVYwrs:515:10:William C. Ray:/priv/home/ray:/bin/tcsh
skuld:CZ4a7y1AlpgMW:100:5:System Software Administrator:/usr/local/
joray:pA2lf397a0mYa:2001:21:Joan Ray:/priv/home/joray:/bin/tcsh
jray:a3kSXs1ROd17p:2002:21:John Ray:/home/jray:/bin/tcsh
At this point, I'm able to log in to my machine as any of these users, and it's just as though I created
them in my own NetInfo system through the Accounts control pane.
The potential security failures of the system should be obvious: I've just snagged a copy of the password
file from a remote server, without doing anything beyond asking it politely. If I were trying to steal
information, it couldn't get much easier than this, could it? Perhaps more
insidiously, if someone wanted access to my machine, all they'd need to do is whack the actual server
and stick in a server that served a domain by the same name. I'd possibly never know the difference, and
the culprits would be able to insert any user IDs and passwords into my list of users. This would enable
them to enter my system at will. This attack becomes trivially easy with the broadcast version of the
protocol, where the client doesn't even have an IP address for the server. Just disabling the server's
network connection and popping an impostor system online for a few moments can insert users into my
system, whom my machine will then allow access. There's nothing to prevent them from inserting a user with
a user ID equivalent to root's (UID 0). This could be devastating.
LDAP (Lightweight Directory Access Protocol) Passwords
Like the NIS, LDAP is a network service designed to provide user account information across a cluster
of cooperating machines. LDAP has traditionally been used more frequently in clustering "desktop
workgroup"-type computers than in clustering Unix boxes, but it is becoming more common in this
environment as well. OS X, trying to interoperate in both worlds seamlessly, provides access to either or
both systems. Unlike NIS, in which the database and query system are integrated, LDAP is technically
just a specification for a standard for carrying out the on-network dialog between a client requesting and
a server providing directory-type information. LDAP can theoretically be used as the interface to just
about any network directory service, and Apple has used it to build their OpenDirectory service package
on top of NetInfo (discussed in the next section) for OS X.
In the case of LDAP, OS X's support currently appears to be seamless in some situations and strewn
with cavernous gulfs in others. There are five different interfaces (or rather, five that have been found so
far) through which the system can be configured to communicate with LDAP services:
Two interfaces are available through Directory Access in Applications/Utilities. LDAPv2 and
LDAPv3, although they require similar information, are configured through two completely
dissimilar interfaces. These theoretically configure the system to accept LDAP information for
logins and/or address-book purposes.
On the other hand, there are also two different (both between themselves and from the Directory
Access configuration) places in which to configure lookupd, which is the general directory
services information daemon where the system (supposedly) gets its normal user information.
lookupd can either read a series of configuration files from /etc/lookupd/ or use the
NetInfo directory /locations/lookupd/ from your local NetInfo hierarchy. Unfortunately,
it seems there's a file missing from the system because attempts to directly configure lookupd
to use LDAP, in a fashion similar to what was done with NIS in the previous section, result in
errors being logged to /var/log/system.log regarding a missing LDAP.bundle file.
Finally, there are configuration files living in /etc/openldap/ that appear to carry the
configuration for an assortment of LDAP command-line tools such as ud (a textual interface to
LDAP services).
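For example, the defaults those command-line tools read from /etc/openldap/ldap.conf look something like the following sketch; the search base and server name here are hypothetical placeholders, not values from any real configuration:

```
# /etc/openldap/ldap.conf -- client defaults for the OpenLDAP
# command-line tools (server and search base are hypothetical)
HOST    ldap.example.com
BASE    dc=example,dc=com
```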
Annoyingly, the way that these interoperate (if at all) is not clearly documented, and none of the three
systems (and their five interfaces) appear to be aware of the existence of the others. Also annoyingly, the
man pages for each appear to have originated on systems where that interface was the interface to
LDAP, leaving the administrator with considerable confusion if she isn't trying to connect to an Apple
server for which LDAP is mostly all autoconfiguring. Fortunately, in a book regarding security, our
place is not to teach you to make sense of or configure this morass, but to point out the deficiencies in
the system from a security standpoint.
Because LDAP is a considerably more complex system than NIS, one might expect it to suffer from more
potential security problems than NIS. A quick survey of the CVE and SecurityFocus databases would lead
one to believe that this is likely to be true. CVE itemizes 24 LDAP vulnerabilities to 17 for NIS.
SecurityFocus, on the other hand, lists 225 entries for LDAP and 358 for NIS. It's a little difficult to pick
apart these numbers to make sense of any trends: SecurityFocus includes links both to
nonvulnerability articles mentioning various security topics and to potentially multiple articles regarding
each vulnerability, whereas CVE tries to catalog distinct security issues, so the CVE distribution is
probably more reflective of reality. When considering the relationship between LDAP's 24 CVE issues
and NIS's 17, it should be kept in mind that NIS has been in use for well over twice as long as LDAP.
On the other hand, many of NIS's vulnerabilities are core security flaws in the protocol, whereas LDAP's
tend more toward programming errors or misinterpretations of the implementation details.
Probably the largest security problem with LDAP, however, is one that's shared with NIS: It provides
user authentication information to remote machines, just for the asking. With LDAP a server
administrator has the option of requiring a user ID and password to access LDAP information from the
server, but as can be seen from the documentation for OS X's LDAP interfaces (lookupd in particular),
there's not necessarily much protection provided for these authentication tokens. This allows the same
variety of unauthorized access to user authentication information, and the same potential for spoofing
the information from the server as is suffered by NIS clients.
Unlike with NIS, LDAP clients easily available on the Internet allow anyone easy access to
explore LDAP servers. For example, Jarek Gawor's LDAP Browser/Editor (
ldap/) is a cross-platform Java client that enables you to easily explore any LDAP server you can
identify. It even comes preconfigured to connect to the University of Michigan's LDAP server. This
server doesn't seem to be particularly communicative without additional information, but if you'd like to
use the client to explore other public repositories of information, Notre Dame's Office of Information
Technologies makes a convenient list available at
NetInfo Password Database
Finally, we come to NetInfo, which is Apple's implementation of a hierarchical network directory
service. NetInfo is considerably more complex than NIS, which should imply that there are more
potential security flaws. Thankfully, it was designed by programmers more in touch with their paranoid
inner selves. There are a few specific security issues, but by and large, NetInfo has not divulged a
significant number of flaws at this point. SecurityFocus and CVE barely even know it exists, with only
21 and 2 entries respectively—not bad for a protocol that's been around since the early 1990s, though the
lack of penetration in the market has probably reduced somewhat the level of stress-testing to which it
has been subjected.
Conceptually NetInfo is similar to NIS and LDAP, in that it provides a range of data types divided into a
number of different classes. The fact that it's hierarchically arranged as a directory structure, rather than
as flat files like NIS, makes it somewhat more complex to access, but its primary difference is that the
servers providing the information can themselves be hierarchically arranged. In a clever abstraction of
information services, each client functions as its own NetInfo
server, can further serve its NetInfo directory structure, and can subscribe to others.
Covering NetInfo in sufficient depth to do it justice would take a book or two worth of writing, so we'll
touch on only the basics that apply to the user ID and password system here. NetInfo stores user
information in the NetInfo directory /users. You can list directories by using the -list option to the
niutil command, in the form niutil -list <NetInfo domain> <directory>, where
<NetInfo domain> is typically the root domain . on your machine, and <directory> in this
case is /users.
Racer-X ray 5% niutil -list . /users
You can read any particular record by using the -read option to niutil, in the form niutil -read <NetInfo domain> <directory>/<property>, where <property> is the name of
the property you want to query.
% niutil -read . /users/software
authentication_authority: ;basic; ;LocalWindowsHash;
picture: /Library/User Pictures/Nature/Nest.tif
uid: 503
_writers_passwd: software
realname: Skuld
_writers_hint: software
gid: 100
shell: /bin/tcsh
name: software
_writers_tim_password: software
passwd: xwNc7eG6/lR4.
_writers_picture: software
home: /Users/software
sharedDir: Public
Alternatively, you can dump any record (or all records) from the database by using the nidump
command. The nidump command conveniently enables you to search for records with specific
properties, by using the syntax nidump -r <directory name>=<desired dir>/
<property name>=<desired property> <NetInfo domain>.
% nidump -r /name=users/uid=5002 .
"authentication_authority" = ( ";basic;" );
"picture" = ( "/Library/User Pictures/Nature/Zen.tif" );
"_shadow_passwd" = ( "" );
"hint" = ( "Urusei!" );
"uid" = ( "5002" );
"_writers_passwd" = ( "skel" );
"realname" = ( "skeleton account" );
"_writers_hint" = ( "skel" );
"gid" = ( "99" );
"shell" = ( "/bin/tcsh" );
"name" = ( "skel" );
"_writers_tim_password" = ( "skel" );
"passwd" = ( "AtcBqXhZAfJ7A" );
"_writers_picture" = ( "skel" );
"home" = ( "/Users/skel" );
"sharedDir" = ( "Public" );
If your domain is a tagged domain, you can use the syntax nidump -r <directory
name>=<desired dir>/<property name>=<desired property> -t <NetInfo
domain> instead.
% nidump -r /name=users/name=ray -t localhost/local
"authentication_authority" = ( ";basic;" );
"picture" = ( "/Library/User Pictures/Fun/Orange.tif" );
"_shadow_passwd" = ( "" );
"hint" = ( "AA-Megamisama" );
"uid" = ( "501" );
"_writers_passwd" = ( "ray" );
"realname" = ( "Will Ray" );
"_writers_hint" = ( "ray" );
"gid" = ( "20" );
"home" = ( "/Volumes/Wills_Data/ray" );
"name" = ( "ray" );
"_writers_tim_password" = ( "ray" );
"passwd" = ( "8R83ac3Pl1FxQ" );
"_writers_picture" = ( "ray" );
"shell" = ( "/bin/tcsh" );
"sharedDir" = ( "Public" );
In each of these cases, note that I have been able to read or list the user and password information
without needing to be root. This is a problem; it facilitates the type of attacks against the password
system that are demonstrated in the next section.
Consolidated Password Storage Mechanisms
With the recent massive proliferation of software, services, and online systems that want to use
passwords or passphrases, many users have started either using the same password for everything or
looking for a way to consolidate all their passwords into some interface that does the remembering for
them. Early on, Web browsers started incorporating this ability to remember things that you entered into
Web forms for you so that you didn't have to enter them again. Other software quickly sprang up to fill
the void for remembering passwords where Web browsers couldn't do the job. By now, you should
immediately see the idea of encrypting and storing all your identity tokens in a single container,
protected by a single key, as a rather dangerous one. Nevertheless, Apple's provided a version of this
functionality as a built-in component of OS X in the form of the Keychain.
The Keychain facility, managed through the Keychain Access program in the Utilities folder under
Applications, provides a centralized service through which any application can store identification
tokens for the user. As a matter of fact, it's a bit difficult to prevent applications from storing
identification tokens in the Keychain: I am aware of having told only one application that I wanted it to
store information in the Keychain, yet my copy seems to have a dozen keys stored in it.
Too many Keychains—just like my pants pockets!
Discussing the Keychain facility, Keychain Access, and Keychains is a bit difficult
because we have a Keychain facility that holds Keychains and a Keychain Access
application that enables you to configure which Keychains are held by the Keychain facility.
If you're comfortable with the idea of storing all of your passwords or other identification tokens in a
single application where they can all simultaneously be exposed by a single compromise of the storing
application, then the Keychain can provide some relatively sophisticated functions for managing this
type of information.
Despite the reality of the potential for a single-point compromise of all of your security
information, should you use Keychain to store it all, it's not really all that bad a solution for
many purposes. If you've a choice between remembering and using one actually strong
password or passphrase to remember and protect all your identity tokens, and getting lazy
and using dozens of weak passwords that you've had to remember yourself, using something
like Keychain is probably the lesser of two evils.
The problems inherent in the system are real, and should be a significant concern. The
problems with identity tokens in general, however, are rather severe, and there is simply no
good and practical solution. If you're not going to be superhumanly conscientious with
respect to picking and remembering your security tokens, you're going to be making
compromises, and it may be that using an application such as Keychain is the best of the
possible compromises in your situation.
Only you can make this decision, but you need to make it based on a clear understanding of
the potential problems.
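The tradeoff is easy to demonstrate outside the Keychain itself. The sketch below builds a toy "keychain" with openssl (the filenames, passphrase, and cipher choice are our own illustrative choices, not Apple's mechanism): many credentials, one passphrase, and anyone who learns that one passphrase gets everything at once.

```shell
# A toy single-passphrase credential store. Everything below --
# filenames, passphrase, and secrets -- is invented for illustration.
printf 'mail=hunter2\nftp=s3cret\nbank=letmein\n' > secrets.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:onering \
    -in secrets.txt -out secrets.enc
rm secrets.txt                      # only the encrypted copy remains
# One passphrase now unlocks every credential simultaneously:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:onering -in secrets.enc
```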
Most uses of the Keychain are almost transparent to the user. An application might ask you, when you
enter a password for some function, whether you want to store the information in the Keychain. If you
tell it yes, it will store the plaintext of the information in the Keychain (as an encrypted value), and will
retrieve and use that value from the Keychain whenever it would require you to enter the information.
All the information that's stored in the Keychain is encrypted, but by default the Keychain is decrypted,
and the plaintext contents made available to the applications that need them when you log in. In its
default use, the Keychain uses your login authentication as the authorization token by which it decrypts
your information and makes it available, so the strength of your login directly affects the level of
protection that any information stored in the Keychain is afforded.
Figure 5.1 shows the Keychain Access interface to the Keychain. In the panes shown in this interface,
you have access to a list of the keys stored in the currently open Keychain, and are shown the attributes
of any key you choose. Checking the Show Passphrase box will, depending on the settings for that
particular item, either immediately display the plaintext password or passphrase in the box immediately
below it, or it will pop up a requester dialog asking you to enter the password or passphrase for that
Keychain, and asking under what circumstances the display of the passphrase should be allowed.
Figure 5.1. The Keychain Access Keychain facility control application.
Figure 5.2 shows the dialog requesting the password, to enable Keychain Access to display the password/
passphrase. Keychain Access is requesting permission from the Keychain facility to display the
passphrase. It must obey the access restrictions that have been configured for the passphrase just as any
other application needs to do, so this same dialog appears when any other application requests this
information as well. The password it wants is the password for that particular Keychain, which, if you're
using the default, is your system login password. What's going on here might be moderately confusing
initially: You've asked Keychain Access to show you the plaintext of a passphrase, and the running
system Keychain facility has popped up a dialog box to ask you whether it should allow Keychain Access
to decrypt and display that information for you. If you select Allow Once, the information is
displayed in the text box below the check box, but the permission will not be permanent: Closing
Keychain Access and requesting the same information again results in you being queried for your
password again. If you select Always Allow, the next time you open Keychain Access, you can
display that passphrase without needing to enter your system password. If you've a tendency to leave
your console while you're logged in, this is probably a dangerous selection to make.
Figure 5.2. When an application requests access to a Keychain item, you are asked whether to
allow it to read the data.
The Access Control tab gives you control over general access policies for the selected item. Under this
tab, the options shown in Figure 5.3 enable you to control whether any application should be allowed
access whenever it asks, whether the Keychain facility should query for each application when it requests
access, whether to require the Keychain password for access to the item, and which (if any) applications
should be allowed unconditional access to the information. In the case of the id_rsa application
password shown, it's set to allow unconditional access to the SSH Agent program, which is yet another
key-serving application that enables secure encrypted network transmissions (covered in greater detail in
Chapter 14, "Remote Access: Secure Shell, VNC, Timbuktu, Apple Remote Desktop"), and to otherwise
query you regarding whether any application is allowed to access the information if it requests it.
Figure 5.3. The Access Control tab enables you to configure access policies for this piece of information.
The Keychain can actually store almost any small snippets of information that you can put in a short
textual note. The Note button of the interface brings up a dialog where you can enter brief messages or
information, such as credit card numbers, that you'd like to protect. This is protected with the same
encryption with which the passwords it contains are protected, and is subject to the same vulnerabilities
if your password should be compromised, or if you allow the information to be displayed without
requiring confirmation. The Keychain facility also enables you to store multiple separate Keychains for
each user, each protected with a different password or passphrase. Additional options available under the
Keychain Access menus enable you to configure other defaults for the Keychain facility, such as
whether it should expire a user's access if she's been idle for some period, or if the system has gone to
sleep.
Testing Password Security
Testing the security of a password is a simple exercise—just see if you can guess it. The only difficulty
is that if you're guessing, and you're guessing randomly, you're just as likely to guess any one string as
another, and just as likely (or unlikely) to guess any given password, no matter whether it's a dictionary
word or a completely random string. With one guess, then, it's difficult (impossible, actually) to produce
any statistic regarding whether a password is secure or not. To usefully test a password's security, you've
got to try the sort of guessing that a cracker might apply, and see whether the password succumbs.
OK, so testing passwords is easy to describe, but not so easy to implement from the keyboard. Trying
seven million billion combinations by hand isn't practical. Even trying enough to check against likely
dictionary words and personally significant data, such as phone numbers and birth dates, isn't something
that any sane person would spend time on manually. Crackers download their software to break into
your system from helpful sites around the Internet, so why not use their own tools to see whether they're
going to be successful?
Because any password is guessable, and will eventually fall to a sufficiently devoted attack, it's useless
to question whether a password can be cracked. Instead you need to ask how long it's likely to take a
cracker to guess it. Unfortunately, because many people pick similar passwords (poor ones), and because
crackers know this, they can focus their attempts into a very small region of the password space and be
relatively certain of guessing at least some passwords on almost any system. Fortunately, knowing this,
you can level the playing field a bit. If you test your passwords to see whether they will easily fall to the
techniques of a cracker going after poor passwords, and then eliminate those that do, the average cracker
won't be able to crack your passwords easily, and may well go on to bother someone else instead. Of
course, if they're dedicated to the task of breaking your passwords, it's certain that they can succeed, and
likely that they will succeed eventually. Your only real security lies in making certain—by using their
tools—that breaking your password takes a long time, and then in changing your passwords frequently
enough that it's unlikely that they've been guessed.
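That "long time" is just arithmetic over the size of the password space. A quick sketch using awk, at an assumed rate of 40,000 guesses per second (roughly the c/s figures John the Ripper reports later in this chapter; the rate is illustrative):

```shell
# Keyspace sizes and exhaustive-search times at 40,000 guesses/sec.
awk 'BEGIN { print 26^6 }'    # 6 lowercase letters: 308915776 candidates
awk 'BEGIN { printf "hours to try all 26^6: %.1f\n", 26^6/40000/3600 }'
awk 'BEGIN { printf "%.0f\n", 62^8 }'    # 8 mixed-case alphanumerics
awk 'BEGIN { printf "years to try all 62^8: %.0f\n", 62^8/40000/86400/365 }'
```

The difference between a couple of hours and well over a century is why password length and character variety matter far more than any single clever substitution.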
One caution while considering increasing your system security by checking the strength of
your passwords: If you're not specifically empowered to check that facet of your system's
security, make very sure you have permission while taking this approach. Randal Schwartz,
Internet-famous co-author of Programming Perl and author of Learning Perl,
found out the hard way that employers quite often have neither any idea of what computer
system administration and security actually entail, nor any sense of reasonable behavior or
level of response to a perceived error. In 1993, while he was working for Intel, he attempted
to increase system security by requiring users to use strong passwords. To do this, he
checked the password strength on Intel machines he was administrating by running a
commonly available password cracking package named Crack. To be sure, Randal made
some mistakes in how he went about this, and in how he presented himself regarding the
experiments he was running, but in thanks for him doing his job, Intel managed to get him
indicted on three felony counts, sentenced to time in jail and community service, and
required to pay thousands of dollars of restitution.
Make no mistake—this sucks. System administrators are called upon to maintain security,
and then often blamed when their security precautions fail, usually because they've yielded
to a user who has begged for leniency on some policy issue. Management, however, rarely
knows what maintaining security entails, and will turn on you in an instant if the suits and
the government think that you'd make a good example if they crucify you. The only way to
protect yourself from this is to help set policy when you can, including legal policy at the
government level. As a system administrator, the rights you enjoy will be only those that
you're willing to fight alongside other system administrators to maintain.
Dumping Your Password File
Fortunately for you, the designers of the standard Unix login procedure were smart enough to design the
process so that it takes a long time to check each login attempt. This makes attacking your passwords
from remote systems impractical (but not impossible). To avoid this impediment, crackers will try to get
a copy of your passwords off your system and onto one where they can throw as much processor power
at it as is available. Usually this requires them to have some access to your system already, but whether
they can get that far is a matter of careful system design: There have been poorly thought-out systems
that allowed random visitors from the Internet to download their password files through their Web servers.
Unfortunately, if crackers have access to your system and want to steal other passwords, it's usually not
too difficult to get at your stored passwords. The NetInfo system will typically be happy to give a listing
in exactly the format that a cracker needs to run through a password cracking program:
% nidump passwd .
nobody:*:-2:-2::0:0:Unprivileged User:/dev/null:/dev/null
root:NDdqVoM4ttK4o:0:0::0:0:System Administrator:/var/root:/bin/tcsh
daemon:*:1:1::0:0:System Services:/var/root:/dev/null
unknown:*:99:99::0:0:Unknown User:/dev/null:/dev/null
smmsp:*:25:25::0:0:Sendmail User:/private/etc/mail:/dev/null
www:*:70:70::0:0:World Wide Web Server:/Library/WebServer:/dev/null
mysql:*:74:74::0:0:MySQL Server:/dev/null:/dev/null
sshd:*:75:75::0:0:sshd Privilege separation:/var/empty:/dev/null
ray:8qC3acDsl1xFQ:501:20::0:0:Will Ray:/Volumes/Wills_Data/ray:/bin/
skel:AtCAqzHurP27A:5002:99::0:0:skeleton account:/Users/skel:/bin/tcsh
jim:NFf1oqq0ePYTk:505:99::0:0:Jim Emrick:/Users/jim:/bin/tcsh
james:apqpjRMfGyA3U:600:70::0:0:Sweet Baby James:/Users/james:/bin/
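From a cracker's point of view, most of that listing is noise: accounts whose password field is * can't be logged in to at all. A simple filter (run here against a hypothetical saved copy of such a dump) picks out the 13-character crypt hashes actually worth attacking:

```shell
# Keep only accounts with a real 13-character crypt(3) hash in field 2;
# entries whose hash field is "*" are disabled logins. Data is inline.
cat > dumped.passwd <<'EOF'
nobody:*:-2:-2::0:0:Unprivileged User:/dev/null:/dev/null
root:NDdqVoM4ttK4o:0:0::0:0:System Administrator:/var/root:/bin/tcsh
www:*:70:70::0:0:World Wide Web Server:/Library/WebServer:/dev/null
skel:AtCAqzHurP27A:5002:99::0:0:skeleton account:/Users/skel:/bin/tcsh
EOF
# prints root:NDdqVoM4ttK4o and skel:AtCAqzHurP27A
awk -F: 'length($2) == 13 { print $1 ":" $2 }' dumped.passwd
```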
One of the recommended protections against this is to chmod 700 your nidump command. Normal
users don't typically have a need to look at your NetInfo database, anyway. This change does put
perhaps the slightest of impediments in the way of a malicious individual who's after your passwords,
but making this permission change is no more than a rumble strip on the information highway to anyone
intent on getting to this data, and should not be taken as a serious protective measure. niutil lists
individual users' entries from the NetInfo database. The NetInfo Manager GUI interface in
Applications/Utilities also provides access to this information. Trying to block access by
setting each possible interface so that it can't be used by normal users is the most naive variety of
stupidity in implementing security. It's an altogether too-frequently used method, but that doesn't make it
any less a practice of idiocy. If the source of the data is not protected from the users, the data is not
protected. In the case of the system passwords, all that a user would need to do is install her own copy of
a utility such as nidump, and she'd again have complete access. If she didn't have convenient access to
a copy of OS X from which she could grab another copy, she could always download something like
Malevolence, and
use it to read the password information instead.
Because the NetInfo database is so fundamentally integrated into the OS X user experience, completely
blocking access to this resource is not a solution that's likely to be practical. The result is that so long as
there are users who can log in to your machine, you should assume that they can access your encrypted
passwords.
Cracking the Passwords: John the Ripper
After a cracker has access to your password file, the next thing he needs is some software with which to
run a large number of potential guesses against the encrypted passwords contained in it.
Over the years a number of password-cracking tools have been designed, with the primary differences
being the evolution of more efficient versions of the encryption algorithms used. In one form or another,
they all make sequential guesses at the encrypted passwords. Some use dictionaries of likely password
words (such as dictionaries of the English language). Others use any personal information that can be
gleaned about the users from readable material in their accounts. Some can be configured to make
essentially random guesses at the password, modified by a collection of rules derived from real
passwords regarding letter frequency patterns and other statistical measures based on observed usage.
Yet others take the completely brute-force approach of trying every possible string as a guess until they
manage to find a match.
Although a variety of tools are available to crack passwords today, one of the most powerful and
configurable is John the Ripper. John is a combined
dictionary- and rules-based brute-force cracker, and one of the fastest to use against normal password
files.
To demonstrate the facility with which passwords can be guessed, we've added a user with a relatively
poor password to the user list. User Ralph has the password asdzxc. This isn't a dictionary word, but
it's all lower case, so if a cracker is going at the easy passwords first, it's likely to fall quickly. Running
John the Ripper on the password file results in output similar to the following:
% /usr/local/john/john passwords
Loaded 8 passwords with 8 different salts (Traditional DES [32/32 BS])
guesses: 0 time: 0:00:11:39 (3) c/s: 46859 trying: cobwsy - cobwhy
guesses: 0 time: 0:00:17:21 (3) c/s: 39483 trying: speamble - speamcre
guesses: 0 time: 0:00:23:49 (3) c/s: 36787 trying: dmv479 - dmv412
guesses: 0 time: 0:00:40:21 (3) c/s: 31506 trying: snke91 - snkean
guesses: 0 time: 0:00:57:18 (3) c/s: 29473 trying: tttcy5 - tttc99
guesses: 0 time: 0:01:55:29 (3) c/s: 26610 trying: Maiko5 - Maik.s
guesses: 0 time: 0:04:08:44 (3) c/s: 25018 trying: g1by26 - g1byst
guesses: 0 time: 0:20:00:13 (3) c/s: 37012 trying: cu87co - cu87d4
guesses: 0 time: 0:21:56:54 (3) c/s: 38124 trying: coondprs - coondpon
guesses: 0 time: 0:22:16:03 (3) c/s: 37996 trying: drotty39 - drottyph
guesses: 0 time: 0:23:18:05 (3) c/s: 37148 trying: Rjry6d - Rjryca
guesses: 0 time: 1:00:20:07 (3) c/s: 36877 trying: tspaniqV - tspanete
guesses: 0 time: 1:01:53:19 (3) c/s: 37101 trying: Bfrbrer - BfrbriS
guesses: 1 time: 2:20:26:11 (3) c/s: 28608 trying: coy202x -
guesses: 1 time: 3:19:04:38 (3) c/s: 27282 trying: STohlso -
guesses: 1 time: 4:02:17:18 (3) c/s: 26991 trying: logbma5 -
guesses: 1 time: 4:23:14:38 (3) c/s: 26389 trying: JA2c03 - JA2cED
guesses: 1 time: 5:17:59:33 (3) c/s: 26000 trying: romfjook -
guesses: 1 time: 5:17:59:34 (3) c/s: 26000 trying: romfdra1 -
guesses: 1 time: 6:06:42:54 (3) c/s: 25781 trying: btmphfg -
guesses: 1 time: 6:19:22:28 (3) c/s: 25579 trying: hintgns1 -
guesses: 1 time: 6:19:22:52 (3) c/s: 25579 trying: hinsube_ -
Session aborted
% /usr/local/john/john -show passwords
1 password cracked, 7 left
While John is running on a file of passwords, hitting the spacebar will give you a brief status message,
so in this output we've hit the spacebar several times over the course of a few days to see how
John's doing at cracking the 8-line password file. John managed to guess Ralph's password at
somewhere between 1 day, 1 hour out, and 2 days, 20 hours out, and after 6 days and almost 20 hours
total, had not managed to guess any of our more complex passwords.
To demonstrate how much more quickly dictionary word-based passwords can be guessed, we added
Ralph's password to the end of a 234,937 word dictionary. In processing this dictionary John can be
configured to use only the words in it directly, or to do things such as adding numbers after the words,
capitalizing them, trying l33t-speak-like transmutations (the letter-shape/phonetic transformation code
in which "elite crackers," warez-dudes, script kiddies, and their ilk converse), and making other
variations on the words in the list. When run using this dictionary against our password database, the
results return much more quickly.
% /usr/local/john/john -w:/usr/share/dict/words passwords
Loaded 8 passwords with 8 different salts (Traditional DES [32/32 BS])
guesses: 1 time: 0:00:01:17 100% c/s: 18019 trying: Zyryan - asdzxc
One minute, 17 seconds to crack Ralph's password. Of course, Ralph's password was in the dictionary
with no variations in capitalization or other modifications, so let's see how quickly the guessing goes when it
needs to check word-variants as well. John the Ripper's -rules option enables checking variants based
on common password patterns.
% /usr/local/john/john -w:/usr/share/dict/words -rules passwords
Loaded 8 passwords with 8 different salts (Traditional DES [32/32 BS])
guesses: 1 time: 0:00:21:05 100% c/s: 24973 trying: Zymining - Asdzxcin
The time has gone up to slightly over 21 minutes, but this is still a much shorter time than the period
within which you'd probably like to change your password.
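These dictionary-run times are consistent with simple arithmetic: with 8 different salts loaded, John must compute roughly one crypt per word per salt (the exact count varies as cracked passwords are removed, so treat this as an order-of-magnitude check, not an exact prediction):

```shell
# 234,937 words x 8 salts at ~18,000 crypts/sec -- about two minutes,
# the same order of magnitude as the 1:17 wordlist run shown above.
awk 'BEGIN { printf "seconds: %.0f\n", 234937 * 8 / 18019 }'
```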
John has quite a number of options that we don't have the room to demonstrate here, including the
ability to use other password encryption schemes than the normal one used in Unix, making it a very
flexible program for password guessing in many applications. The command-line options are shown in
Table 5.1.
Table 5.1. The Command-Line Options for John the Ripper
-external:<mode>  Enables an external mode, using external functions defined
in the [List.External:MODE] section of ~/john.ini.
-format:<name>  Forces ciphertext format <name>. Allows you to override
the ciphertext format detection. Currently, valid format
names are DES, BSDI, MD5, BF, AFS, and LM. You can
use this option when cracking or with -test. Note that
John can't crack password files with different ciphertext
formats at the same time.
-groups:[-]<gid>[,..]  Loads the specified group(s) only. A dash before the list
can be used to invert the check.
-incremental[:<mode>]  Enables the incremental mode, using the specified ~/
john.ini definition (section [Incremental:MODE],
or [Incremental:All] by default).
-makechars:<file>  Makes a charset, overwriting <file>. Generates a charset
file, based on character frequencies from ~/john.pot,
for use with the incremental mode. The entire ~/john.
pot is used for the charset file unless you specify some
password files. You can also use an external filter()
routine with this option.
-restore[:<file>]  Continues an interrupted cracking session, reading point
information from the specified file (~/restore by
default).
-rules  Enables rules for wordlist mode.
-salts:[-]<count>  Sets a password per salt limit. This feature sometimes
enables you to achieve better performance. For example,
you can crack only some salts by using -salts:2 faster,
and then crack the rest using -salts:-2. Total cracking
time will be about the same, but you will get some
passwords cracked earlier.
-savemem:<level>  Enables memory saving, at <level> 1..3. You might
need this option if you don't have enough memory, or don't
want John to affect other processes too much. Level 1 tells
John not to waste memory on login names, so you won't
see them while cracking. Higher levels have a performance
impact: You should probably avoid using them unless John
doesn't work or gets into swap otherwise.
-session:<file>  Sets session filename to <file>. Allows you to specify
another point information file's name to use for this
cracking session. This is useful for running multiple
instances of John in parallel, or just to be able to recover an
older session later, not always continue the latest one.
-shells:[-]<shell>[,..]  Loads the specified shell(s) only. This option is useful to
load accounts with a valid shell only, or not to load
accounts with a bad shell. You can omit the path before a
shell name.
-show <password_file>  Shows the cracked passwords in a convenient form. You
should also specify the password files. You can use this
option while another John is cracking, to see what it did so
far.
-single  Enables the single crack mode, using rules from the
[List.Rules:Single] section of ~/john.ini.
-status[:<file>]  Prints status of an interrupted or running session. To get
up-to-date status information for a detached running session,
send that copy of John a SIGHUP before using this option.
-stdin  Reads words from stdin to use as wordlist.
-stdout[:<length>]  No cracking; writes words to stdout.
-test  Benchmarks all the enabled ciphertext format crackers, and
tests them for correct operation at the same time.
-users:[-]<login>|<uid>[,..]  Loads specified user(s) only. Allows you to filter a few
accounts for cracking and so on. A dash before the list can
be used to invert the check (that is, loads all the users that
aren't listed).
-wordfile:<file> (abbreviated -w:<file>)  Uses the specified word list for wordlist mode.
John the Ripper also includes a number of utility programs that work with it to put data into it, get data
out of it, or otherwise massage or act upon your input and output. Table 5.2 lists these additional
utilities. (Note that unafs, unique, and unshadow are actually links to the John program itself.)
Table 5.2. Utilities in the John the Ripper Suite
unshadow <password-file> <shadowfile>
Combines the passwd and shadow files
(when you already have access to both) for
use with John. You might need this because if
you used only your shadow file, the GECOS
information wouldn't be used by the single
crack mode, and also you wouldn't be able to
use the -shells option. You'll usually want
to redirect the output of unshadow to a file.
unafs <database-file> <cell-name>
Gets password hashes out of the binary AFS
database, and produces a file usable by John
(again, you should redirect the output to a file).
unique <output-file>
Removes duplicates from a wordlist (read
from stdin), without changing the order.
You might want to use this with John's -stdout option, if you have a lot of disk space
to trade for the reduced cracking time.
mailer <password-file>
A shell script to send mail to all the users who
have weak passwords. You should edit the
message inside before using.
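The effect of unshadow is easy to picture: it substitutes the real hashes from the shadow file for the "x" placeholders in the passwd file, so that John sees GECOS data and hashes together. The following Python sketch is a simplified illustration of that merge (it is not the actual unshadow source, and the sample account data is invented):

```python
# Simplified illustration of what unshadow does: replace the "x"
# placeholder in each passwd entry with the hash from the shadow file,
# so GECOS information and the hash end up in one file for John.

def unshadow(passwd_lines, shadow_lines):
    # Map each login name to its password hash from the shadow file.
    hashes = {}
    for line in shadow_lines:
        fields = line.rstrip("\n").split(":")
        hashes[fields[0]] = fields[1]
    merged = []
    for line in passwd_lines:
        fields = line.rstrip("\n").split(":")
        # Substitute the real hash for the placeholder, if we have one.
        fields[1] = hashes.get(fields[0], fields[1])
        merged.append(":".join(fields))
    return merged

# Hypothetical sample entries, for illustration only.
passwd = ["ray:x:501:20:Ray Jones:/Users/ray:/bin/tcsh"]
shadow = ["ray:ab0WTbsEes1Hk:12000:0:99999:7:::"]
for entry in unshadow(passwd, shadow):
    print(entry)  # ray:ab0WTbsEes1Hk:501:20:Ray Jones:/Users/ray:/bin/tcsh
```

As the table notes, you would normally redirect the merged output to a file and hand that file to John.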
Cracking Nonsystem Passwords
Security for your computing environment doesn't stop at using a reasonably strong password for your
login prompt. Other computers that yours interacts with, and systems that you and your computer's users
use are likely to require passwords as well. If you're like most people, there's a very high probability that
you or your users have reused your system login password on another, less
secure system, with fewer safeguards to protect it from crackers. For example, in a moment of lax
thinking, you might have chosen to use your login password to also password-protect a FileMaker Pro
database, which you subsequently have given to other people. Now, they don't need to get into your
system to get a copy of your encrypted password and crack it; with the appropriate software, they can
directly attack the password in your FileMaker document, and from it, learn your system password.
Table 5.3 lists a number of password-cracking programs for a range of nonsystem passwords, as well as
some that can be used for system passwords. Be careful when using any of the applications that these
programs can crack, and understand that almost any other application might be on this list in the next
edition of this book. Disclosing your password by leaking it through reuse in a weak, crackable
application would be embarrassing.
Table 5.3. Password-Cracking Programs for Nonsystem Passwords
Cracking Software
An SSH account brute-force auditing tool.
Recovers lost passwords on MS Access 97 mdb files.
AOL Instant Messenger decoder.
Aim Recover 2.0 decrypts AIM passwords when they are stored
locally. Can also import Buddy Lists.
Advanced Lotus Password Recovery recovers lost or forgotten
passwords created in IBM/Lotus applications (all versions):
Organizer, Word Pro, 1-2-3, and Approach. Passwords are
recovered instantly; multilingual passwords are supported.
Recovers lost passwords for Microsoft Word, Excel, Access,
PowerPoint 97, Project, Money, Outlook, Backup, Schedule+,
Mail, IE 3, 4, and 5, Visio 4 and 5, and others.
Another Password Cracker is designed to brute-force Unix
passwords with a standard dictionary-based attack.
Simple brute-force Unix password cracker. Tries all combinations
of every printable 7-bit ASCII character.
Simple brute-force Unix password cracker. Tries all combinations
of every printable 7-bit ASCII character.
Advanced Archive Password Recovery can be used to recover lost
or forgotten passwords to ZIP (PKZip, WinZip), ARJ/WinARJ,
RAR/WinRAR, and ACE/WinACE archives.
ASMCrack is a Unix password security tool. It supports five
cracking modes.
Brute-force HTTP authentication cracker.
Advanced Zip Password Recovery supports PKZip 1.0 through
modern WinZip, all compression methods. Can work with a single
file; self-extracting archives are supported. Includes a number of
brute-force options.
Password cracker.
Various BIOS crackers.
Brute-force Linux-PAM password cracker.
Obtains the usernames/passwords through a simple dictionary attack.
Tries to break in remotely using password brute-forcing for
Telnet, FTP, and POP3 protocols.
A GUI multithreaded application that can be used to recover
various passwords on Windows 95/98.
Coherent Light Bruteforce Toolkit contains IRCrack, a tool that
connects directly to an IRC server and uses a word list to brute-force a channel key, and Boomcrack, a brute-force FTP account cracker.
Unix password cracker.
Decrypts the safe passwords of NcFtp.
Text file that explains how to decrypt Windows 9x passwords that
are stored in the Registry.
Cisco scanner for Windows, which scans a range of IP addresses
for Cisco routers that haven't changed their password from the
default value of Cisco.
Exploits the weak encryption scheme utilized in CuteFTP.
Default password list, last updated July 10, 2000. Contains more
than 820 passwords, including default passwords for BIOSes,
network devices, appliances, applications, Unix, VMS,
HP2000/3000, OS/400, CMS, PBX systems, Windows NT,
Novell, Oracle, and many more.
List of default passwords for many network switches and devices.
Last updated July 7, 2000.
Exploit script for the Sawmill File Access and Weak Encryption vulnerability.
Tool that tries to guess Lotus Domino HTTP passwords.
Dictionary file creator (DOS).
Tool for decrypting passwords.
Distributed, Keyboard, Brute-Force program, for Linux clusters.
Attacks Windows NT Lanman and NT hashes by using the
Message Passing Interface (MPI) to distribute the L0phtCrack
Exploit script for the Visible Systems Razar Password File vulnerability.
IRC (bot) brute-force password cracker.
Password cracker for eggdrop (blowfish) passwords; uses a word list.
Attempts to find the enable password on a Cisco system via brute force.
EliteSys Entry v2.05 is a remote brute-force security auditing
utility, designed to crack passwords for FTP, HTTP, and POP3 servers.
Program that weeds out all the cached passwords, such as domain,
mail, MAPI, Windows network, dial-ups, ie-passwords, and so on,
on local Windows 95/98 machines.
Brute-force Zip cracker.
FTP brute-forcer.
Brute-force Hotmail hints cracker. Requires
A SQL Server password auditing tool that runs brute force and
dictionary attacks to guess passwords.
Password cracker similar to Crack.
ftp_crack.tar.gz brute-forces FTP servers.
Front end for Gammaprog.
Config files for Gammaprog.
Gammaprog, a brute-force password cracker for Web-based email
accounts as well as regular POP3 email accounts. Requires JRE.
Script that exploits PowerScripts PlusMail password vulnerability.
Tool to crack Hotmail hints through a dictionary attack.
Attempts to brute-force Hotmail accounts from a dictionary file.
Perl script that executes dictionary file-based brute-force attacks
on POP3 account passwords.
Hypnopaedia, a brute-force POP3 password cracker that uses a
dictionary file. With GUI, for Windows.
Cracks the weak hash encryption on stored Citrix ICA passwords.
A Windows program that reads information, including the
password, out of ICQ.DAT (versions 99a and 99b).
JCon is a security and brute-force password breaking tool. It can
scan ports, FTP daemons, and mailer daemons and check for CGI vulnerabilities.
Unix, DOS, Win32, and BeOS.
Password cracker for Windows.
Perl script that decrypts passwords found in the Kaufman Mail
Warrior accounts file (MW35_Accounts.ini).
An NT password auditing tool that computes NT user passwords
from the cryptographic hashes that are stored by the NT operating system.
Latest version of L0phtCrack. Password auditing and recovery
application for Windows.
Lepton crack, a password cracker that works on Cygwin and
Linux and cracks MD4 hashes, MD5 hashes, NTLM, and HTTP
password hashes from Domino R4.
An OpenLDAP brute-force auditing application that brute-forces
Manager passwords.
User-friendly dictionary C API that eases dictionary handling for
the development of open source security audit tools.
Flexible, easy-to-use password cracker for Linux/Unix that uses a
dictionary file.
Tool for analyzing password strength of user accounts on a Lotus
Domino Web server system by using dictionary attacks.
A CGI scanner for the Macintosh that scans for 130 vulnerabilities
and can use 45 of them to retrieve a password file.
An exploit that allows users to view an unshadowed version of the
password file on Mac OS X.
A brute-forcer for MD5 hashes, which is capable of breaking up to
6-character passwords within hours, and 8-character passwords
within two days.
Distributed multihosted Unix password cracker that runs on all
platforms where Perl is installed.
University of Minnesota POPMail password cracker.
A program that attempts to crack a user's account on an AppleTalk
Windows ActivePerl source to a script that proves that the
encryption being used by MSN Messenger 4.6 is weak. Does a
Base64 decode of the Registry.
Windows ActivePerl executable that proves that the encryption
being used by MSN Messenger 4.6 is weak. Does a Base64
decode of the Registry.
MS SQL 6.5/7.0 brute-force password-cracking tool.
Script that exploits the MySQL Null Root Password & Bind Address Configuration vulnerability.
MySQL brute-force password cracker that uses a dictionary attack.
Program to brute-force valid Newspro logins/passwords.
Perl-based brute-force attack on Telnet.
NTSweep brute-forces NT passwords.
A simple, fast, and effective password cracker for Unix systems.
Og-Brute is a Perl package to brute-force POP3 and FTP account
passwords and probe SMTP for valid logins with Wingate support.
Brute-force password cracker for mIRC.
Password cracker.
Exploit script for the Sawmill File Access and Weak Encryption
Simple password generator for generating passwords of uppercase
and lowercase letters and numbers.
PalmCrack, password-testing tool for the Palm computing
platform. Can check Unix and NT passwords against a dictionary,
and decrypt certain Cisco router passwords.
Exploit script for the PCAnywhere Weak Password Encryption vulnerability.
Uses smbclient to brute-force NT shares and passwords.
PGPPass is a dictionary attack program for use against PGP secret
key rings.
PHP script that uses curl to brute-force SSL-protected Web site
login screens.
POP3 password cracker.
Pop3 Crack is a POP3 account brute-forcer written in Perl.
Exploits a flaw in the share-level password authentication of
Windows 95/98/ME in its CIFS protocol to find the password of a
given share on one of these machines.
Password generator.
A program that allows passwords contained in the Windows PWL
database to be viewed under Unix.
Unix password brute-forcer written in Perl.
Revelation password cracker.
RiP FTP Server, a Win32 program that extracts plaintext
passwords from FTP server client software, such as .ini or
Registry settings.
Remote Password Assassin is a network password cracker using
brute-force attacks.
rm-brutal.tar.gz, a Perl program that tries to get valid accounts on a
remote server by using a POP3 brute-force method through
TCP/IP distributed network password auditing tool for NTHASH
(MD4) and POSIX LibDES Crypt(3) passwords.
Searches out the password from LM/NTLM authentication
information (LanManager and Windows NT challenge/response).
Snap Cracks POP, a POP3 and FTP cracker written in Java.
TCP/IP distributed network password auditing tool for NTHASH
(MD4) and POSIX LibDES Crypt(3) passwords. DOS version.
Globalscape's CuteFTP, a popular FTP client, uses a weak
encryption scheme, allowing plaintext login and password
recovery from the address book. Includes cuteftpd.c, which
calculates the plaintext.
Reconstructs a password file from the shadow file.
Windows 95/98/NT/2000 program intended for the analysis of IP
networks. Program includes attacks and password-guessing for
POP3 and FTP.
A password auditing tool for Windows and the SMB platform that
makes it possible to exploit the timeout architecture vulnerability
in Windows 2000/XP.
Fast SNMP brute-forcer.
Share Password Cracker acquires the list of shared folders of a
Windows 95/98/ME machine on the network and shows those
folders' passwords. This tool acquires the list of the shared folders
also for Windows NT/2000 machines, but it distinguishes only
folders that have no password. Shared Password Cracker exploits
the Share Level Password vulnerability.
Unix Sequence Password Generator creates password files and
allows on-the-fly cracking when used with other tools.
Source for auditing the strength of Microsoft SQL Server
passwords offline. Can be used either in Brute Force mode or
Dictionary Attack mode.
MSSQL server brute-force tool.
A brute-forcer that guesses root's password without being logged.
A multipurpose tool for Windows that does the work of 30
separate programs. Includes an .htaccess brute-forcer,
anonymous FTP scanner, list of Bios master passwords, country
codes list, dictionary generator, FTP brute-force service scanner,
cached ISP password retriever, and more.
Script that exploits the Strip Password Generator Limited
Password-Space vulnerability.
Converts Unix-style passwords in a Serv-U.ini file to standard
Unix password style for cracking.
Perl script that brute-forces Telnet.
Brute-force password cracker for Angelfire password reminder.
Requires JRE or JDK.
Brute-force password cracker for Hotmail password reminder.
Requires JRE.
Tool to crack the secret passwords on Cisco routers.
Brute-force cracker for password reminder.
Requires JRE.
University of Minnesota SLIP Password Cracker.
unfburninhell1.0.tar.gz A burneye cryptographic layer 1 and 2 cracker that can work
together with John the Ripper for password generation.
Unsecure is an HTTP auth brute-force cracker.
Brute-forces accounts via FTPD. Works best against Linux
systems with traffic on a fair bandwidth.
V-Crack Zero++ (Unix), a poly-alphabetic XOR cipher cracker.
V-Crack Zero++ (DOS), a poly-alphabetic XOR cipher cracker.
Velocity Cracking Utilities, a suite of utilities for cracking Unix
password files.
Unix password cracker. It generates password combinations for
the character sets and lengths you specify.
Patch to VNC that allows a brute-force dictionary attack.
Decrypts the password for VNS, a PCAnywhere-like program.
A VNC attack program ported to Windows. It features cracking of
the password in the Registry, online brute force against a VNC
server, or cracking a sniffed challenge/response handshake.
HTML brute-force attacker.
Password cracker designed to brute-force login/password
combinations for Web sites that use HTTP-based password authentication.
Extracts WinGate administrator passwords from Windows 9x/NT
machine Registries and decodes them.
Windows program designed to generate wordlists recursively from
all files in a directory.
WordMake, a dictionary file creator.
Automates the process of trying to crack logins/passwords for
WWW sites that use basic HTTP authentication.
Unix/Linux password cracker coded in Perl.
Perl script that exploits the Xpede Password Exposure vulnerability.
Password cracker.
Decodes the password from FTP Expert, which are stored in
Brute-force password cracker for ICQ. Requires JRE.
zipcracker-0.1.1.tar.gz Cracks Linux password-protected Zip archives with brute force.
A large number of password cracking tools are available around the Internet, and they can compromise your
passwords in ways that you might not have thought of. Use Sherlock to search for these by name and find
the most recent source.
Improving Password Security, and Alternatives to the Standard Password
Mechanisms in Mac OS X
It should be obvious from the discussion earlier in this chapter that there is nothing that can be done to
make passwords completely secure. The best that can be done is to make them reasonably secure, for a
reasonable amount of time. To do so, one must pick strong passwords that can't be trivially guessed with
dictionary attacks or brute-force approaches on a small subset of the password space.
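To see why a trivially guessable password offers so little protection, consider how a dictionary attack works: hash each candidate word and compare it against the stolen hash. The sketch below uses unsalted MD5 from Python's hashlib purely for illustration (real Unix password hashes are salted crypt(3) hashes, and real crackers also permute each candidate); the "stolen" hash and word list are invented:

```python
import hashlib

def md5_hex(word):
    # Hash a candidate password; MD5 stands in for a real password hash here.
    return hashlib.md5(word.encode()).hexdigest()

# The "stolen" hash of a weak, dictionary-word password.
stolen_hash = md5_hex("oatmeal")

def dictionary_attack(target_hash, wordlist):
    # A dictionary attack simply hashes each candidate and compares.
    for word in wordlist:
        if md5_hex(word) == target_hash:
            return word
    return None

words = ["apple", "cookie", "oatmeal", "raisin"]
print(dictionary_attack(stolen_hash, words))  # prints "oatmeal"
```

A password that appears in any word list falls in microseconds this way, which is why the cracker's first pass is always a dictionary run, long before brute force is attempted.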
The usual recommendation is to create passwords by coming up with a phrase that you'll be able to
remember, but that can't easily be tied to you. Take the first letter of each word in the phrase and
combine them into a nonsense password. Add some random capitalization. Transpose likely alphabetic
characters to numerals, and sprinkle in a bit of punctuation. For example, if you start with something
like the phrase "My sister likes oatmeal raisin cookies," you might end up with a password that looks
like "mS1.0rc", where the "L" and "O" characters are replaced with one and zero respectively. A
recent study has suggested that these patterns are essentially just as difficult for today's password
cracking software to guess as passwords chosen completely randomly from the entire password data
space. There is, however, reason to suspect that this conclusion is less than completely accurate. In The
Memorability and Security of Passwords: Some Empirical Results,
rja14/tr500.pdf, Yan et al. report that mnemonic phrase-based passwords are stronger than typical
passwords generated by users, while being easier to remember. They also conclude that there is no
observable difference between the strength of mnemonic phrase-based passwords and completely
randomly chosen passwords. However, both the methodology and conclusions of the study ignore the
fact that there may be patterns to be found and exploited in these phrase-based passwords. In conducting
the test, the password cracker was configured to attack only word-like passwords. The search was based
on dictionary and user personal information, and permutations of these that include interspersed
numerals. Because a mnemonic phrase would only result in a word-like pattern by random chance (as
could a completely randomly chosen password), it's obvious that most phrase-based passwords, and
most random passwords, would not be found by such a search. Any disparity in the results would
necessarily result from a difference in the percentage of phrases whose initial letters spelled words,
versus the percentage of random passwords that are dictionary words. Such an analysis is outside the
scope of this book, but it should be obvious that there may be patterns to the initial letters of words in
phrases, just as there are to the usage of letters in written languages. These patterns can be exploited to
develop cracking software targeted at phrase-based passwords, making them almost inevitably weaker
than random passwords at some level. How much weaker we won't know until something like John the
Ripper comes along with rules to exploit phrase-like patterns, and then we can see how fast such
passwords typically fall when dealt with directly, instead of through brute-force methods.
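Mechanically, the phrase-to-password recipe described earlier is trivial to express. The sketch below uses a small, purely illustrative substitution table; a real password should use a phrase, capitalization, and substitutions of your own devising, precisely because any fixed table like this one could itself become a cracking rule:

```python
# Turn a memorable phrase into a nonsense password: take the initial
# letter of each word, then transpose some letters to numerals and
# punctuation. NOTE: this substitution table is an illustrative
# assumption; do not copy it verbatim for a real password.
SUBS = {"l": "1", "o": "0", "i": "!", "s": "5"}

def phrase_to_password(phrase):
    initials = [word[0] for word in phrase.split()]
    # Keep each initial's original case; substitute where a rule exists.
    return "".join(SUBS.get(ch.lower(), ch) for ch in initials)

print(phrase_to_password("My sister likes oatmeal raisin cookies"))  # M510rc
```

Note that this deterministic version produces "M510rc" rather than the hand-tuned "mS1.0rc" from the text; the random capitalization and punctuation steps are exactly the parts you should not automate with a fixed rule.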
A useful additional protection is to limit the by-password access to your machine as much as you can. If
a remote cracker cracks your password database, but there is no way for them to connect to your
machine using the information, it's almost as good as if they hadn't cracked your password at all. For this
reason, we strongly recommend disabling passworded remote logins whenever possible. SecureShell, for
example, has a provision to allow only passphrase logins, and to reject password logins even if the
person issues the correct password. Configuring this mode is covered in Chapter 14, on remote access. A
passphrase can be considerably longer than a password, and it can be considerably harder to guess by
brute force. Although a normal password can ultimately be cracked with only a few years of CPU time,
the size of the passphrase space is so large that if it is well chosen, a passphrase could take longer than
the age of the universe to guess.
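The difference in search-space size is easy to quantify. The figures below are illustrative assumptions (95 printable ASCII characters, a modest 2,000-word vocabulary for a six-word passphrase, and an attacker managing a billion guesses per second), not measurements:

```python
# Size of an 8-character password space over 95 printable characters,
# versus a six-word passphrase drawn from a 2,000-word vocabulary.
password_space = 95 ** 8     # 6,634,204,312,890,625 combinations
passphrase_space = 2000 ** 6  # 64,000,000,000,000,000,000 combinations

# Assumed attacker speed: one billion guesses per second.
guesses_per_second = 10 ** 9

print(password_space / guesses_per_second / 86400)          # ~77 days
print(passphrase_space / guesses_per_second / 86400 / 365)  # ~2,000 years
```

Even this small passphrase vocabulary buys four orders of magnitude; a genuinely free-form passphrase of, say, ten words from a 10,000-word vocabulary gives 10^40 possibilities, which at any plausible guess rate does indeed exceed the age of the universe.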
The Future: PAM
Despite the inevitability of today's password space eventually falling into the nearly instantly crackable
range, the future is not so bleak. Apple has moved to make the Linux-PAM (Pluggable Authentication
Modules) system a part of OS X. This system is designed to be an expandable, adaptable authentication
system, whereby programs such as login that require the capability to authenticate a user's identity are
not locked into a single authentication scheme. With this system, applications that need to verify a user's
identity make use of a centralized authentication system that can be updated by the use of plug-in
software. The plug-ins can be easily written and added to the centralized system, and this allows
authentication to be done by almost any scheme that can be conceived. If you want longer passwords,
simply use a plug-in that takes 12-character passwords instead of today's 8 characters. If you want to
prevent users, a priori, from choosing passwords that can be easily cracked, use a plug-in that checks new
passwords against the ruleset by which John the Ripper (or any other password cracker) makes its
guesses, and have it refuse to set users' passwords to anything that would be easily cracked. Prefer to
move into the 21st century with respect to user identification? Find a fingerprint or retina scanner that
you can hook up to your USB port, and write your own PAM to speak to it (or get the OpenSource
community to write one for you) and perform authentication that way.
Currently a multitude of PAM modules are available on the Internet, and about two dozen have been ported to
Darwin. Of these, 11 are currently in use under OS X 10.2. Unfortunately, not all the software that
requires user authentication has been updated to use PAM yet, so the system is, at this time, not
particularly useful. For example, the passwd program has not been PAM-ified, so even though there's a
very nice PAM that can enforce picking strong passwords (Solar Designer's pam_passwdqc module), it's currently of no use under OS X.
We expect that support for these back-end functions is among the things that Apple's working the
hardest on right now, so it's probably worth checking out the pam.conf man page, looking in
/etc/pam.d/, and checking out the types of PAM available around the Net.
libs/pam/modules.html is a good place to start.
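PAM configuration is line-oriented: each service file in /etc/pam.d/ stacks modules, one per line, under management groups such as auth and password. The fragment below is a hypothetical example of how a quality-checking module like pam_passwdqc would be stacked ahead of the standard Unix module once passwd is PAM-ified; the min/max values are pam_passwdqc's documented defaults, but the file as a whole is an illustration of the syntax, not a working OS X 10.2 configuration:

```
# /etc/pam.d/passwd (hypothetical illustration)
# Check the quality of a proposed new password first...
password   requisite   pam_passwdqc.so   min=disabled,24,11,8,7 max=40
# ...and only then let the standard module store it.
password   required    pam_unix.so       use_authtok
```

The requisite control flag means a password rejected by the quality check never reaches the storage module at all.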
Create and use strong passwords, and change them often enough that it's unlikely that a cracker would
have managed to guess them. Make any users on your machine do the same. Run the same tools that the
crackers are going to use against your passwords yourself, and find out where your weaknesses are.
Always assume that the people who want into your machine have got more CPU to throw at the problem
than you do, and always assume that they've already got a copy of your password file and that the clock
is ticking. You aren't doing anyone any favors by allowing users to use weak passwords because they're
computer novices and strong passwords will be too difficult for them to remember. When their accounts
are broken into and trashed because they used poor passwords, it'll bother them a lot more than having to
remember strong passwords.
On the other hand, don't trust that anyone has your best interests in mind while you're working to protect
their best interests. The legal system has only the faintest clue how to apply the existing laws to
computer issues, and the legislative branch of the U.S. government is going out of its way to pander to
the deep-pocket special interests to write yet more bad laws regarding computer security. Make
absolutely certain that you know what you're expected and allowed to do on the systems you're securing,
and get it in writing. It might save you a fair chunk of change in legal fees someday.
Oh, and although I shouldn't have to mention it, people who write down their passwords on post-it notes
and stick them to their monitors, inside their pencil holders, or in their desk drawers, should have their
computer (as well as several more life-critical) privileges revoked. People who think they're clever and
stick them to the bottoms of their desk drawers aren't much better. Don't write down, share, or otherwise
let your passwords out of your skull. Mandate enforceable sanctions against users who do any of these
things, and then enforce them.
Password security isn't fun, and as long as we've got to live with the simplicity of the small key space
and ever increasing CPU power, it's going to get less fun as time goes on, but it's something we've got to
do. People who refuse to take the issue seriously are endangering their own data and security, as well as
that of every other person on the system, and are acting in a manner completely disrespectful of your
and other users' very reasonable security concerns. If they've not the slightest shred of consideration for
you, you've no obligation to show the slightest consideration to them, as you boot their sorry behinds off
the system.
Chapter 6. Evil Automatons: Malware, Trojans, Viruses, and Worms
Defining Software Behavioral Space
Malware Threats
Solving the Problem
Software does things; this is why you use it. Usually, you're hoping that it does something or some
things that are of use to you in some fashion. The things that any given application does, however, are
rarely limited to exactly the single thing that you think of that application as doing. When you want to
read a file onscreen (using TextEdit, cat, more, less, or whatever viewing software you prefer), you
probably think of the software's action as "displaying a file on the screen." This action, however, is made
up of a number of subactions, such as asking the OS to locate the file on your drive, opening the file,
reading data from it, asking the OS to open a window or display characters to a terminal, and so on. The
subactions performed are not often things that end users think about, and are also not always things that
end users expect or desire. Even the perceived main purpose of a piece of software is sometimes not
exactly in line with its actual primary visible function, sometimes leaving end users of the application in
situations that they did not expect.
Gentle Reader:
Virus, worm, and malware problems are one of the areas in which I cannot disguise my
disgust for the way that some people think about computing security. The individuals who
write various bits of malware are only a minor part of the problem. No amount of effort
applied to stopping them will be sufficient to prevent people from writing and releasing
software such as computer viruses, and regardless of what effort is expended, such software
will continue to be created and will continue to be a threat to undefended systems.
Those who write and sell software that is designed to facilitate the action and propagation of
such malware are the real problem, and should be the focus of everyone's efforts on changing
computing culture. Effective and damaging malware should require the discovery of
software bugs or faults in communication-system designs. Unfortunately, today it does not,
because there are major software vendors out there who are willing to sell you software that
has features (not bugs, but intentionally designed-in features!) designed specifically to allow
the action of software such as viruses. Certain vendors, for example, sell email clients that
come preconfigured to execute arbitrary software that is sent over the Internet, without
informing the user or requesting permission to execute the code. They do this because it
makes some features they'd like to sell you ever so slightly more convenient to implement,
and they think that you're gullible enough to buy the software and take their assurances of its
safety as valid. They know full well that it's not safe, because they're not actually stupid
enough to have missed the lessons of 14 or more years of hard-won network security battles.
However, they think that you are, and because those features also enable some
"conveniences" that their competitors don't have, that you'll buy their software and not think
about the consequences of the convenience you've bought.
So far, they're right. By far the majority of networked users seem to have been taken in by
this ploy, and it's biting them every day. The software writers, however, don't care. Rather
than fix the problem, they release patch after patch after ineffective patch. And if your
machine has been "owned" by some 13-year-old kid who's wiped out your financial records
and the patches don't fix it, well, hand over your credit card, because the next yearly update
certainly will solve the problem.
This must change, and the only way it's going to change is if you, the consumers of
computing software, speak out. So long as you, your coworkers, and your friends choose
software that favors convenience over security, some software vendors will be happy to sell
it to you. So long as you grudgingly use software that you know is a problem because
"everybody uses it, so I am obliged to as well," (nearly) everybody will indeed be using it.
Defining Software Behavioral Space
The end result of this lack of perfect correspondence between what you want software to do and what
actions it actually takes is that with some frequency, software will do things that you don't really want
and don't intend for it to do. Any of the "things" that software might do when used lie along a continuum
from useful, through almost invisible, to truly painful. If the effect is not exactly what you desired, it
may still be useful for your purpose, but sometimes effects may be inconsequential or of minor
annoyance, such as the creation of log files in places you didn't want them to go, or the modification of
timestamps on files leaving you unable to determine when they were really created. In other instances,
the effects can be potentially devastating, such as the erasing of your disk drives or corruption of
important files. In other instances, the effects could be somewhere in between, such as a breach of
security that results in the nondestructive use of your machine as a remote base of operations for an
activity such as pirated software distribution, or as a base of attack for yet other remote systems.
Likewise, your interest (or lack thereof) in the action of the software can range from actively desiring it
to cause an event, through complete disinterest in whether certain events happen and how, to frantic
determination to prevent the event or events. If you're trying to send email, you probably have an active
interest in your email client sending the mail, whereas most users are completely unconcerned with the
mechanics of the process that enables their Web browsers to retrieve all the information necessary to
display a Web page. On the other hand, if you've ever accidentally clicked "OK" to a "Should I format
this disk?" dialog query, you have an idea of the urgency one can muster to get the program stopped.
Finally, the actual intended action of the software can differ from the advertised intent. To a degree, this
happens with most software, but the minor annoyance of finding that the features advertised on a
product box don't exactly match the features provided by the software doesn't compare to the problems
caused by applications that masquerade malicious intent behind a claim of benign effect. Users of
Microsoft email clients are probably familiar with the Microsoft Transport Neutral Encapsulation
Format (ms-tnef/WINMAIL.DAT) option for email file attachments that purports to be "transport
neutral," but in fact packages the attachment in a fashion that is Microsoft-proprietary, and defies
extraction by all but MS products (and a few Open Source applications that have been written to gut the
contents from mail sent with this abomination). Other less fortunate users have been burdened with
considerably more grief than needing to resend email using a nonproprietary format when they mistook
Tetricycle (also Tetracycle) for the game it was advertised to be, instead of the Trojan virus
installer it really was.
Taken together, these distributions of behavior define a sort of three-dimensional behavioral space.
Different applications might lie anywhere in this space, and in fact many have aspects that spread across
several regions. The majority of the ones that you're most likely to want to use, however, are those that
do what they advertise (and little else), and for which you've a need for that advertised behavior. Most of
the day-to-day desktop applications you use probably fall into this category, providing functions that are
near what they advertise and near what you need, with few annoying consequences. The population that
intentionally causes harm at your command, and that you'd deliberately run for this effect, are likely to
be fewer, yet not completely nonexistent. For example, in writing this book we've intentionally run quite
a bit of software that we knew was going to crash or damage our machines. If you're a network
administrator, you might use couic (discussed in Chapter 8, "Impersonation and Infiltration:
Spoofing") for exactly its damaging effect on network communications. For a large percentage of the
software that runs to make up the operating system on your computer, you're probably unaware of, and
usually uninterested in (unless it stops), its ongoing, primarily beneficial operation.
There are also areas of the behavioral space that describe software that acts against your intent, and if
you were aware of its real—rather than advertised—function, you would probably be quite interested in
stopping its action. Unfortunately, software exists to fit these niches. Programs that lie in these problem
areas—that is, typically malicious software —have been dubbed malware.
Malware is typically subcategorized into Trojans, viruses, and worms, which are discussed in
the following sections. The terms are frequently misused in the lay literature, and this is only partially
due to a lack of comprehension on the part of those misusing them. With increasing frequency, malware
is blurring the lines between these by functioning in multiple modes, or by working on the boundary of
areas where operating systems and utility software are automating tasks that were previously the realm of
user behavior.
To be sure, there were viruses, Trojans, and worms before Robert Morris scratched his head
and said, "Huh, they couldn't have been that dumb" (or something to that effect), and wrote a
bit of software to test what he perceived to be a flaw in the design of the predominant
Internet mail-delivery system. Robert, however, ushered in, in an almost prophetic way, the
current age of Internet insecurity. When he released his test code, it got away. It got away in
a big, bad way, and it started breaking systems that were part of the backbone of the Internet.
When he realized what was happening, Robert tried to release information to sites on how
they could neutralize it, but unfortunately, his test worm broke the mailing infrastructure for
the Internet, and his message did not get through until the damage was done. Robert was
eventually convicted of violation of the Computer Fraud and Abuse Act, and sentenced to
three years of probation, four hundred hours of community service, and a fine of $10,500.
Although his act was certainly irresponsible, many think it's hard to justify the sentence in
context. What Robert did, almost any one of us looking at network security at the time could
have ended up doing. It was inevitable that someone would notice the fault, and in a climate
where the concept of network-borne self-replicating code was novel, it was almost as
inevitable that whoever noticed the possibility would test it, just to be sure he wasn't seeing things.
To be sure, today's network viruses and worms are malicious software, written at best by
uncaring, and more probably by truly evil, persons. If the perpetrators could be found, $10,500
would be a slap on the wrist compared to what they deserve. By all the evidence, however,
Robert's worm, the one that started it all, was an experiment gone accidentally and terribly
wrong.
Trojans (or, sometimes, Trojan horses) are applications that claim to do one thing while in fact doing
something else, usually something malicious. Trojans get their name from the famous Trojan horse of
ancient Greek legend, by which the Greek army overcame the impenetrable fortress of Troy by the
subterfuge of a troop carrier disguised as a horse-shaped monument, and given as a gift. Trojans are not
typically self-replicating, instead relying on the gullibility or malice of humans to distribute them to
other systems. The malicious payload of Trojan software can be almost anything that can be believably
packaged into something that looks, at least at first glance, like a beneficial application. There have been
games that were actually Trojan installers for viruses, shell scripts that were Trojan installers of back
doors into systems, even mail servers and security software applications that were actually malicious
security exploits that attacked other systems on the network.
Writing a Trojan is abysmally simple work. If you were to write the following shell script and distribute
it on the Web or through email (or better yet, through some facility such as Hotline or Carracho, where
software pirates who are out looking for "WaReZ" live) to people as iDVD4_Beta4.tgz, you'd
undoubtedly end up erasing a number of drives.
#!/bin/tcsh -f
/bin/rm -rf /* >& /dev/null &
exit 1
If you make it executable, tar and gzip it, the vast majority of people who download it without
looking at the size are likely to run it without ever looking inside to see what's lurking there. If you're
concerned that they might think something's up because the file is so much smaller than what one might
expect, just tack on a pile of junk comments at the bottom. If you really want to be massively nasty, use
something like ScriptGUI (available from and wrap it up
as a Scriptlet so that it's convenient for people to run it by double-clicking in the Finder.
If you make it big enough to be believable, package it as a double-clickable application so that it can be
downloaded, decompressed with UnStuffit, and run by double-clicking in the Finder, and then distribute
it in an appropriate 0-day ("zero day," as in "fresh") warez group, the thieving little leeches (http://www. will probably trade it around for days before someone catches on
to what it's doing and still has enough computer left to tell anyone about it.
The fact that Trojans are so trivially simple to construct, and users are so ready to believe and execute
almost any application that's handed to them is what makes Trojans such a threat. If users (and system
administrators) lived by the adages never install any software as root, and never install any software
that you haven't read the code for yourself, Trojans would be stopped almost dead in their tracks. The
first of these admonitions is moderately painful, especially given the way that most OS manufacturers
(including Apple) are packaging nonvendor components in what should be vendor-only directories, but
it is one that you should strive to obey. As a matter of fact, you should avoid running as root any
software for which root isn't absolutely required (this includes avoiding the use of sudo). Running
software installers as root, however, offers a prime way for Trojan software to do massive damage to
your system or install back doors for crackers to use for other mischief.
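A minimal habit that helps here: look inside an archive before anything in it ever executes. The sketch below uses the hypothetical filename from the text, and its first two lines simply build a harmless stand-in archive so the example is self-contained; in real life you'd start with the file you actually downloaded.

```shell
#!/bin/sh
# Demo setup: build a harmless stand-in archive so this runs anywhere.
printf '#!/bin/tcsh -f\necho "I could have been rm -rf /*"\n' > installer
tar czf iDVD4_Beta4.tgz installer

# 1. List the archive's contents -- nothing is extracted or run.
tar tzf iDVD4_Beta4.tgz

# 2. Dump the members to stdout and actually read the code before
#    ever marking it executable or double-clicking it.
tar xzOf iDVD4_Beta4.tgz
```

Two commands' worth of caution is cheap insurance against the three-line Trojan shown earlier.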
It's impractical to obey the admonition never to install anything you've not read the code for, but
implicit in the fact that you acknowledge this impracticality is an understanding that you will, at some
point, be running software (installers and applications) on your system for which you really have no idea
what they're going to do. This should worry you, and unless you have a very good reason to trust some
particular application or installer, you probably should explicitly think about the fact that you're tossing
the dice, and hoping that it's not a Trojan. Sometimes the dice come up craps; I've seen it happen. A
network administrator I've worked with rolled the dice on an unverified copy of tcpwrappers (in its
normal state a wonderful security suite that we've discussed in several places in this book), and ended up
costing his company three months of system downtime while they tried to recover from the mess caused
by the Trojan he had installed.
Two popular Open Source packages have recently been Trojaned. In July 2002, version 3.4p1 of the
OpenSSH package, which is covered in Chapter 14, "Remote Access: Secure Shell, VNC, Timbuktu,
Apple Remote Desktop," was Trojaned. Fortunately, because the developers noticed a different
checksum for the distribution, the Trojan was discovered quickly. In the Trojan version, the makefile
included code that, when compiled, opened a channel on port 6667, an IRC port, to a specific machine. It
could then open a shell running as the user who compiled the program. In September 2002, Sendmail
8.12.6 was also Trojaned. The Sendmail Trojan was similar to the OpenSSH Trojan. It contained code
that was executed at compile time, and also opened an IRC channel to a specific, already cracked host.
Unlike the OpenSSH incident, about a week had gone by before the Trojan was discovered. In this case,'s FTP server was modified so that the Trojan code was distributed with every tenth
download, without ever modifying the original package. The owner of the cracked machine that the
Trojan used as a communications hub lost all of his data, about seven or eight years' worth, including
financial records, when the controller of the Trojan tried to erase his tracks. Although these recent
Trojan examples didn't cause local damage to the machines with the installed Trojans, they do show that
some of the most important software we rely on can indeed be Trojaned. The next time, the Trojans
could be more malicious and not just inconvenient. You can read more about these Trojans at http:// and
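Checking a distribution against its published checksum before building it is exactly the habit that caught the OpenSSH Trojan. A sketch, using a stand-in file so it's runnable as-is (compare the digest you compute against the one the developers publish, ideally fetched from a different server than the tarball itself):

```shell
#!/bin/sh
# Stand-in for a downloaded source tarball so the sketch is runnable.
echo "pretend source distribution" > openssh-3.4p1.tar.gz

# Compute the digest; compare it, by eye or by script, with the
# checksum published by the developers.
openssl md5 openssh-3.4p1.tar.gz

# Better still, verify a detached PGP signature when one is offered:
#   gpg --verify openssh-3.4p1.tar.gz.sig openssh-3.4p1.tar.gz
```

A checksum fetched from the same compromised server as the tarball proves nothing, which is why a signature from the developers' own key is the stronger check.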
Viruses are microapplications that can embed themselves in documents or software in such a way that
when the documents are opened or the software run, the microapplication is also allowed to run. When
executed in this fashion, the virus replicates itself into other documents or applications. A key feature to
note is that viruses are self-replicating, but require some action on the part of a user to become active
and to propagate. In the early days of personal computers, viruses lived in assorted application files or in
various file system structures on floppy disks and replicated either when the documents were opened or
by the action of reading the floppy. Propagation, however, was solely by the transfer of files from one
computer to another, or in the case of floppy-embedded viruses, by the transfer of a floppy disk itself.
Today, viral embeddings are similar, but email attachments are allowing viral distribution to proceed
considerably faster and further than was ever possible with floppy-borne viruses. Viruses typically carry
some executable payload in addition to their self-replicating functions. This payload is often malicious
in nature, but a number of viruses have been written in which the payload was intended to be nothing
more than an amusing pop-up message or screen display on a certain date.
There has also been a considerable discussion in the security community over the notion of viruses with
potentially beneficial payloads. In "The Case for Beneficial Computer Viruses and Worms: A Student's
Perspective" (, Greg Moorer provides a neat
overview of the subject. For those who might find the idea of a beneficial virus completely unnatural,
consider the idea of an OS update—perhaps a patch for some security hole that could be distributed as a
self-replicating virus. A large fraction of you who are reading this probably have your Software Update
Control Panel configured to autodownload and install updates from Apple. What do you do about your
computers that aren't networked, or aren't convenient to get online? If you're like most people, you put
off connecting them to the phone line or getting them on the network until it happens to be convenient
for some other reason, and then you let them update themselves. Your machines probably have contact
with computers other than Apple's more frequently than they do with Apple's, though. What if important
updates were distributed as virus payloads, so that after an update was downloaded to one of your
machines, the update would be available to any other machines that talked to it, in addition to the ones
that could get to Apple directly? It's hardly a large step from the current automated software update
mechanism, which many of you already implicitly trust; the only additional step is making the update
self-replicating between machines.
Unfortunately, although considerable benefits could be gained from such a system for propagating
useful software updates, the real problems inherent to it are considerable, and are likely to outweigh the
possible benefits by a distinct margin. Probably the most serious practical complaint is the simple fact
that there would be no way to prevent authors of malicious viruses from claiming that their viruses were
beneficial, essentially Trojaning a bad virus into the system under the guise of a friendly update.
Conceptually, however, the issues with unauthorized modifications of software and data, even if the
modifications are done with the best of intent, are considerably greater. There is an interesting parallel
here between the computing world and the real world. We are entering an age where we can create
designer physical viruses that can spread between humans just like cold or flu viruses. Real-world
viruses are descriptively very similar to computer viruses. They are functionally genetic microprograms
that invade cells, insert themselves into the control mechanisms of the cellular machinery, and start
producing duplicates of themselves. It is the payload of genetic programming that the virus carries, in
addition to its replicative ability, that generally is the harmful part and is thought of as the disease. It is not
difficult to remove the harmful payload of a human virus, and insert into it genetic material that would
be useful instead of harmful to the human host. These designer viruses are being used already in a
number of specific gene-therapy situations and in experimental mutational and genetic-engineering
strategies (,
OldArchive/bbs.neuwelt.html). However, the current uses of designer viruses are much like Software
Update, in that the "infection" with the new useful genetic program is not supposed to spread beyond the
(human) host that requested it. There is nothing inherent to the viral delivery system that limits it in this
fashion, though. If a genetic solution could be found that made people invulnerable to ("cured") the
common cold, it could probably easily be delivered by use of a virus that also spread just like the
common cold. The question is, even if it could be made 100% effective, and 0% harmful (which will
never be the case—all medical regimens are ineffective or harmful to some small population), would you
want to contract cures when random people sneezed on you? You've little choice about whether you're
going to get sick if you've just been exposed to the cold virus, but if the guy who just sneezed on you
was actually spreading the cure—one that was going to modify you genetically—would you be more or
less happy? Just as there are no medical regimens that are 100% safe for every person, there is no
software that works perfectly on every computer. If a virus came along and tried to "cure" your
computer, what would you think?
One of the most insidious aspects of viruses is that they're effectively run by you (or whatever user is
executing the software containing the virus or reading the document), whether you (or they) know it or
not. This gives viruses the permission to do whatever you have permission to do, and to pretend to other
applications and systems that they are doing it with your authorization. This means that if you've run an
infected application, whatever that virus does, it's just done it with all the authorizations and permissions
you have. If you want to avoid having email viruses send copies of themselves to every person in
your address book while masquerading as you, you need to stay away from email software
that's susceptible to viruses. If you don't, the email viruses you receive will have free run to do whatever
you can do with your permissions.
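The point is trivially easy to demonstrate: any code you run, no matter who wrote it, inherits your identity and can touch anything you can. A two-line sketch:

```shell
#!/bin/sh
# Any program you execute runs with your user ID and your permissions.
# A virus embedded in that program gets exactly the same rights.
echo "Running as user: $(id -un)"
echo "Which means write access to, for example: $HOME"
```

Substitute a virus payload for the echo statements and nothing about the permission model changes; the system cannot tell the virus's actions from yours.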
One virus that you probably encountered was the Sircam virus, which was especially prevalent during
the summer of 2001. The virus propagated via email. It attempted to send itself and local documents
to the users listed in the Windows Address Book and to any email addresses left in a user's Web browser
cache. The message carrying the Sircam virus, in either English or Spanish, claimed to be asking
for your advice. As a Mac user, you probably found this junk mail a bit annoying, but your Windows
friends may have experienced local damage, including loss of their data or their hard drive space filling
up. You can find out more about this virus at
More recently, in October 2002 the Bugbear virus was prevalent. However, as a Mac user, you may not
even have noticed it because the virus normally propagated itself in email messages with a variety of
names and content, although it could also propagate via network shares. It contained a Trojan that could
disable antivirus and firewall processes, provide access to a remote attacker, and log keystrokes.
Consequently, it could potentially email confidential information from the recipient's email account.
You can find out more about this virus at and http://www.
Worms are much like self-propagating viruses that do not require any human interaction to allow them
to move from system to system or to replicate. Worms also do not require a "host" application in which
to embed themselves, though they often propagate themselves by wrapping themselves in some
document for the purpose of transmission. With the advent of email applications that automatically
execute code contained in email messages, we now see a class of malware that is difficult to categorize
cleanly between these types. They are self-propagating only as a result of wildly poor programming and
configuration decisions, essentially allowing the mail client to act as the user and autoexecute content,
and would function only as viruses without the benefit of this brain-damaged programming. Similarly,
there are autoexecute capabilities provided by most modern operating systems for a variety of types of
removable media, which would allow a proper worm to distribute itself via sneakernet
(, if only it could think up a way to get itself onto a Zip disk or
CDR without needing to hide in an already-existing data file.
Most issues with worms are comparable to virus issues, with the predominant difference being that
worms don't necessarily run as (aren't always run by) normal users. Worms typically deliver themselves
via the network, and although some have recently done so by using users' email clients, it's also common
for them to do so by using system-level facilities. If the service that the worm is using to propagate has
root permissions, the worm will have root permission when it runs.
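You can get a feel for this exposure on your own machine by looking at which long-running processes hold root's identity; a worm that exploited any network-facing one of them would run as root. A quick sketch (the `ps` format options below are the common BSD/procps ones; adjust for your platform):

```shell
#!/bin/sh
# List processes owned by root. A worm that breaks into any network
# service in this list inherits root's permissions when it runs.
ps axo user,pid,command | awk 'NR == 1 || $1 == "root"' | head -10
```

The shorter that list, and the fewer of its members that listen on the network, the less a worm stands to gain from any single exploit.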
A couple of recent worms that you have probably heard about include the Code Red worm and the SQL
Slammer worm. In the summer of 2001 two variants of the Code Red worm regularly made the
headlines. Code Red exploited buffer overflow vulnerabilities in Microsoft's IIS web server. On an
infected machine the worm existed only in memory, making it difficult for a victim machine to detect its
presence. However, infected machines often contained a defaced Web page stating that they had been
"Hacked By Chinese." After the worm had infected a machine, it searched for other vulnerable systems
to infect. This resulted in network slowdown. If you were running a non-IIS Web server at the time, you
discovered that your Web server's logs were growing with requests for the file default.ida. You can
read more about this worm on the various antivirus vendor and news sites.
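Spotting Code Red probes in a non-IIS server's logs was as simple as searching for that telltale request. A sketch against a fabricated log file, so it's self-contained; point the same grep at your server's real access_log:

```shell
#!/bin/sh
# Fabricated Apache-style log entries standing in for a real log.
cat > access_log.sample <<'EOF'
10.0.0.1 - - [19/Jul/2001:10:00:00 -0400] "GET /default.ida?NNNNNNNN HTTP/1.0" 404 284
10.0.0.2 - - [19/Jul/2001:10:00:05 -0400] "GET /index.html HTTP/1.0" 200 1043
10.0.0.3 - - [19/Jul/2001:10:01:12 -0400] "GET /default.ida?XXXXXXXX HTTP/1.0" 404 284
EOF

# Count the Code Red probe attempts.
grep -c 'default\.ida' access_log.sample    # prints: 2
```

Each match is an infected IIS machine somewhere blindly knocking on your door, which is why even non-IIS administrators watched their logs swell.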
In January 2003 you may have read about airports having to ground some flights and banks having
problems distributing cash through ATMs. Such problems were the result of the SQL Slammer worm,
which took advantage of a vulnerability in Microsoft's SQL Server 2000 package that had been
discovered six months earlier, and for which Microsoft had even made a patch available. Fortunately,
the worm did not carry a destructive payload, so there was no resulting damage to infected machines.
However, it quickly tied up the Internet with the traffic it generated during its search for vulnerable
systems, making it the most damaging Internet attack in the 18 months prior to the incident. This attack
occurred on a Friday, giving administrators who had not yet patched their SQL Servers the opportunity
to apply the patch before the start of the business week. You can read more about this incident at http://
Although not actually a type of malware, various Internet hoaxes are sometimes almost as damaging as
actual malicious software. It's not at all uncommon for emails warning of the great and impending
danger of some virus to make their way around the network almost as fast as a real virus, and to disrupt
network and computer usage nearly as effectively. Most often these are messages like the Good Times
warning, telling the user of some dangerous email virus that they must delete immediately if they see it
in their mailboxes. The warnings then go on to suggest that the user forward the message to as many
people as they can, so that the rest of the world can be similarly saved. When this happens, network
administrators typically suffer from receiving hundreds upon hundreds of useless copies of the warning,
diluting the information they have for finding the real ongoing problems of the system. In a twistedly
amusing sense, many email virus hoaxes function as viruses themselves. The "software" that they invade
to get themselves replicated is the stuff in the human skull, but the effect is the same: They show up,
convince something to replicate them, and then pop off to infect other units that can replicate them
further. For this reason it's useful to consider hoaxes in the general scheme of malware, even though the
operating system they run on is the human brain.
Some viruses or worms, though, may seem to be hoaxes, but are really malicious software. For example,
one of the variants of the Klez worm can send itself in an email message that claims that it is a free
immunity tool to defend systems against itself. The worm exploits a vulnerability in Internet Explorer
that enables it to infect a machine when an email message that contains it is opened or previewed, but
even in a system where Explorer or Outlook Express aren't executing code without a user's permission,
the claim that the software is beneficial can be sufficient to cause the gullible to execute it themselves.
Klez can propagate via network shares by copying itself into RAR archives, and by sending itself to
addresses in the Windows Address Book. When it propagates by email it often includes a local file as an
attachment, possibly sending sensitive information. It can also disable antivirus software. You can read
more about Klez at the various antivirus software vendor sites.
Interestingly, until recently (or until the late 1990s, at least), any email warning of an email-borne virus that you "had to delete immediately without looking at it" was generally
considered a hoax. Until then, email applications would open messages and tell you that you
had an attachment. They'd give you the option to save it, and then you could open it in the
appropriate application. Email warnings of things such as the Good Times virus (a hoax)
would start propagating, and you'd turn to CERT and plain as day would be a general
advisory that viruses weren't known to propagate automatically through email. Not since
Robert Morris demonstrated that mailers with autoexecute capability were a Bad Idea had
anyone been blitheringly stupid enough to allow an email application to automatically
execute arbitrary software that was delivered to it over the wire. Then came Microsoft
Outlook and Microsoft Explorer, and everything changed.
Although Melissa (
melissa.a.html) wasn't the first email virus, it's the first time I remember getting a message to
"Watch out for this email virus, and mail this warning on to all your friends," that when I
turned to CERT to check the status of the world, I found a warning saying, "This time it's
real, guys!" I still remember the sinking feeling in the pit of my stomach as I deleted my
long-standing explanation of viruses, worms, and the fact that there was no need to panic
because users were safe from infection so long as they simply avoided running software or
opening documents that appeared in their mail. I'd been mailing that explanation to calm my
users and cut down on the repeated mailings of random hoaxes around the college for nearly
10 years, and here was a virus that blew it all away. Maddeningly, it wasn't that the virus or
virus writers had gotten smarter, it was that the programmers who wrote the software, or the
marketroids that controlled the features the programmers added, had put their companies'
bottom lines above the safety and security of their customers and all other users on the
Internet, and they'd made their software stupider. They removed the essential protection that
network clients should never be allowed to execute code without the user's permission, and
they'd set the default behavior in their software to do exactly the wrong thing.
The rest, as they say, is history. One outbreak of the Morris worm was enough to teach the
Internet to use secure (or at least not blatantly vulnerable) mailing software for nearly 10
years. Four years after Melissa debuted (and according to Symantec, in certain variants she's
still going strong), we're still stuck in a world where some email clients pathologically insist
on doing something that no sane programmer would ever let an email client do.
Hoaxes, however, aren't limited to virus warnings. One still sees occasional flurries of emails kindly
asking that get-well postcards be sent to Craig Shergold, a boy dying of cancer in England who
wanted to be in the Guinness Book of Records for having received the most cards. The initial request went
out in 1989, and by 1991 he was not only in the book, but also cured. The emails requesting postcards,
and, of course, that the request be again forwarded to as many people as the recipient can, however,
refuse to die. To date, some 200 million postcards have been received, and Craig's house has had to be
assigned the British equivalent of its own ZIP code (
Like real viruses, this one is spread by humans, from human to human, and it mutates along the way.
There are now a number of variants circulating the Net, requesting postcards be sent to various places
around the world. One directs people to send them to the Shriners Hospital in Cincinnati,
Ohio, where they're now down to only 10,000 or so pieces of unwanted mail per week from their high of
50,000 per week in mid-2001.
Although Craig's story and its spin-offs prey on human kindness, others just as effectively prey on
the darker side of human nature. Neiman Marcus department stores have been the subject of a similar
note requesting its recipients to "tell all their friends" a rather different story. This one purports to be the
story of a poor woman who was charged $250 for a cookie recipe at a Neiman Marcus store when she
asked a waiter for a copy of the recipe for a cookie she tried and liked. Supposedly she's now
distributing the cookie recipe for free as a form of vengeance against Neiman Marcus. Never mind that
Neiman Marcus had to invent a cookie to serve to curious customers who came in to ask about this
silliness; neither that, nor the fact that Neiman Marcus didn't have such a restaurant at the purported
location, nor that they're happy to give recipes away for free, seems able to quell the armchair-revenge
spread of this hoax (
Some of the computer-related hoaxes don't even need computers to spread. Our published news sources,
always looking for a juicy tidbit to use in fearmongering about technology they don't understand, or to
toss mud in the face of the "evil" government, often practically trip over each other to print unverified
misinformation if it's juicy enough. In one notable 1992 case regarding Operation Desert Storm/Desert
Shield, U.S. News and World Report ran a story, "Triumph Without Victory: The Unreported History of
the Persian Gulf War," in which they reported that the National Security Agency had intercepted
computer printers bound for Iraq and inserted chips into them that made the printers give Iraqi
computers viruses, which then shut down their air defense system during Operation Desert Storm. The
report was picked up by a number of news services and widely distributed as fact. TV anchor Ted
Koppel even opened a Nightline broadcast with news of this dastardly U.S. subterfuge. Put aside the
absolute gullibility required to believe that the NSA had somehow come up with all the necessary
information, software, and probably black magic required to get a bug in a printer to shut down the Iraqi
air defense system. Then you still have to accept that after they managed all this engineering they had no
better way to deliver the payload than through a French printer that was intercepted by chance. The fact
still remains that an almost identical story was run the year before in Infoworld, as an April Fool's joke
( and
Other Stuff
A host of other applications might be considered malware in certain circumstances; we won't be
covering them here. For example, a keystroke logger being run by one of your users without permission is
certainly a poison pill. It's not, however, within the scope of what we're going to discuss in this chapter, as
it's under the relatively direct control of another person and isn't acting autonomously. These
applications are covered in the various chapters that detail the vulnerability types that they exploit.
Keystroke loggers, because they're most useful for stealing passwords, are covered in Chapter 5,
"Picking Locks: Password Attacks," with the rest of the password security-related material.
On the other hand, there are many situations where software may do you harm, without it being malware
in any sense. If you've mistakenly typed \rm -rf /* at the prompt, the rm command isn't a Trojan, and it's
not malware. The system is simply going to eviscerate itself at your command. If you weren't aware of
what rm was going to do, that's not rm's fault—the man pages are very clear.
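The rm lesson generalizes: before running a destructive command against a glob, look at what the glob actually expands to. Here's a minimal sketch of that habit (the preview_rm helper is our own illustrative invention, not a standard utility):

```shell
#!/bin/sh
# Hypothetical helper: print what a glob would remove, without removing it.
preview_rm() {
    for path in "$@"; do
        echo "would remove: $path"
    done
}

# Expand the pattern through the preview first; run the real rm only
# once the printed list looks right.
preview_rm /tmp/scratch/*
```

The shell expands the glob before rm ever sees it, so inspecting the expansion is the only reliable way to know what a pattern will match.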
Likewise, a bug in a program doesn't make it malware. The exceedingly poor interaction between
Tenon's original XTools for OS X release, and the then-current version of Apple's installer (an
interaction which corrupted the system so badly that nothing short of wiping the drive and doing a clean
install seemed to fix it) was a bug, not an incident of either Tenon's software or Apple's installer being malware.
Finally, there is malware that's not actually capable of doing damage or harm. Often this is because it
was written on a different platform, or with the expectation of different installed software, leaving it
dormant on your system. For Macintosh users, the vast majority of email-borne viruses and worms fall
into this category today, because the software has been written to function on a Windows platform, and
on the Mac it's just nonsensical garbage. Usually these types of malware are dormant on an incompatible
host, but can come to life again if they are transferred from the incompatible host to another that they
were designed to operate on. There are also instances of broken malware—software that's designed to
invade or damage your system, but that's so poorly designed or written that it can't perform its intended
function. Frequently, this is a good thing because it prevents the software from causing the damage that
it otherwise would. For example, the old Antibody HyperCard virus was intended to remove the
MerryXmas HyperCard virus, but it could trigger an error that would cause your stack to quit.
Sometimes, though, errors of the exact same nature turn a bit of malware that didn't have an intentionally
damaging payload into an actually harmful threat. For example, the old INIT17 virus was a benign virus
that when triggered should have done nothing more than display a message that read, "From the Depths
of CyberSpace." However, on 68k Macs, it caused crashes.
Malware Threats
Today's malware threats are many, but thankfully they aren't typically directed against the Mac OS, or
against Mac OS X. This section includes a commented table of the most interesting worm, virus, and Trojan software that has been known to hit either the Macintosh or Unix platforms, for as far back as we can find reliable information. Table 6.1 shows these, many of which are no longer
threats unless you install antiquated versions of the operating system or of various services software. It is
the self-assumed (and we thank them for it!) responsibility of the antivirus vendors to officially name
these. As you might guess, the various vendors have their own naming conventions, and we include a sampling of the different names for you. You can think of the aliases, when listed,
as cross-references—names used by various vendors.
Table 6.1. Select Viruses, Worms, and Trojans
Mac OS 9 or previous
Red Hat
Discovery Date
January 1987
The source code for this virus was widely available, enabling it to be used to create numerous variants.
When an infected application is run, it infects the System file. After the computer is infected, the virus
becomes memory-resident every time the computer starts and infects any applications it comes in
contact with. In some variants, after a certain number of reboots or application relaunches, the virus
causes the system to beep. In one variant, the MacinTalk sound driver is used to speak the words
"Don't panic." Another deletes system files.
Variants: AIDS, f__k, Hpat, Jude, MEV#, CLAP, MODM, nCAM, nFLU, kOOL_HIT, prod, F***
Aliases: nVIR
Macintosh emulator
December 1987
This virus affects Atari and Amiga computers running Macintosh emulators. Frankie-infected files can
be run on Macintoshes without spreading. The virus was distributed in a document transfer utility by
Aladdin producer Proficomp, to attack pirated versions of the Aladdin emulator, but it infects all
Macintosh emulators on Atari and Amiga. When triggered, the virus draws a bomb icon and displays
this message: Frankie says: No more piracy! The computer then crashes. The virus infects
applications, including the Finder, and can spread only under System 6. Infected applications do not
need to be run to spread the virus.
Aliases: MacOS/Frankie
December 1987
This virus infects System files only. Infection is spread either via a HyperCard stack called New Apple
Products, or from contact with an infected system. A universal message of peace with an American
symbol is displayed on March 2, 1988, and then the virus destroys itself. Infected systems, however,
can display a variety of problems.
Aliases: MacOS/Peace, Aldus, Brandow, DREW, Peace, Drew
June 1988
When an infected application is run, the virus is duplicated and attaches to the System, Notepad and
Scrapbook. In the System folder it makes two invisible files, Scores and Desktop. Two days after
infection the virus becomes active and begins to infect all applications when they are opened. Four to
seven days after infection frequent system error messages appear.
Aliases: Eric, Vult, ERIC, NASA, San Jose Flu, Mac/Scores, NASA VULT
June 1988
This virus affects the system, application, and data files. Infection occurs when an infected application
is run. An application does not have to be running to be infected. Only the system and applications can
spread the infection, and things can be infected multiple times. The virus overwrites existing INIT 29
resource. This causes printing problems, memory problems, and other odd behavior.
Variants: INIT 29A, INIT 29B
Aliases: Mac/INIT-29, INIT-29, INIT 29
February 1989
This virus can spread and cause damage under System 6. Under System 7 it can infect one file, but
can't spread. It infects applications and application-like files. It generally is not destructive, but some
applications cannot be completely repaired.
December 1989
This virus family infects the desktop items on machines running System 4.1 and higher, but not System
7 and higher. A machine becomes infected when an infected disk is inserted. The virus copies itself to
the Desktop files on all connected volumes. The machine experiences beeping, corruption, incorrect
display of fonts, and crashing.
Variants: WDEF A, WDEF B
Aliases: Mac/WDEF, WDEF
March 1990
This virus family infects Macintoshes with 512K or smaller ROMs, running System 4.1 or later. It
infects applications, including the Finder. Whenever an infected application is run, it looks for another
application—which does not have to be running—to infect. After a certain time period of infection,
dependent on the variant, the virus is triggered. The virus can cause erratic cursor motion, such as
moving diagonally across the screen when the mouse button is held down, a change in Desktop
patterns, and long delays and heavy disk activity. If the Finder becomes infected, the machine becomes
Variants: ZUC-A, ZUC-B, ZUC-C
Aliases: ZUC, MacOS/ZUC
May 1990
This virus family infects Macintoshes running System 4.1 and higher. There are four variants: A, B, C
and D. A, B, and C infect the System file and applications whenever any infected file is run. D can
infect only applications. Applications infected with MDEF tend to have garbled pull-down menus. The
virus can also cause system crashes and other odd behavior.
Variants: MDEF-A (Garfield), MDEF-B (Top Cat, TopCat), MDEF-C, MDEF-D
August 1990
This virus can spread under System 6 and 7, but causes damage only under System 6. It infects by
adding a CDEF resource to the invisible Desktop file. It can infect the Desktop file of a System 6 drive as soon as an infected disk is inserted or an infected volume is mounted, and it copies itself to the Desktop files on the first three connected volumes. The virus spreads via shared infected floppy disks. The virus can
cause system crashes, printing problems, and other odd behavior.
Aliases: CDEF
March 1991
This is a HyperCard virus whose damage occurs in systems using a German calendar between
November 11–30 or December 11–31 in any year from 1991 to 1999. 17 seconds after activating an
infected stack, a message that says Hey what you doing? appears. After 2 minutes, "Muss I
denn" is played and repeated every 4 minutes. After 4 minutes, "Behind the Blue Mountains" is played
and the system may shut down afterward. If not, 1 minute later the virus displays HyperCard's pop-up
menus Tools and Patterns. If you close those, they are opened every minute. After 15 minutes, a
message that says Don't panic appears.
Aliases: HC virus, 2 Tunes, Two Tunes
October 1991
This is a HyperCard virus family with many variants. The virus appends code to the end of the stack
script. When an infected stack is run, it first infects the HyperCard Home stack. Stacks that are then run
receive the infection from the Home stack. It can cause unexpected Home stack behavior. The virus
contains an XCMD that can shut the system down without saving open files, but it does not contain
any code that executes it. It displays messages and plays sounds.
Aliases: Crudshot, Lopez, Merry2Xmas
February 1992
This virus family infects applications as well as system files under System 6, System 7, and Mac OS 8.
It uses the MBDF resource to infect files. All Macintosh models except the Plus and SE models are
affected. After an infected application is run, it infects the System file. However, it takes such a long
time to write to the System file that users may think that their Macintosh has hung and reboot the
machine. Rebooting the machine during this process leaves the System file damaged. The computer
experiences crashes and seems unstable after this, or is not bootable. When the virus successfully
completes writing to the System file, the computer also experiences crashes and seems unstable.
The virus was originally distributed in versions of the games Obnoxious Tetris and Ten Tile Puzzle, as
well as a Trojan game called Tetricycle.
Variants: MBDF-A, MBDF-B
Aliases: Tetricycle, Mac/MBDF-A, MBDF
March 1992
This virus affects System 4.1 and higher. It infects system extensions when a machine is booted on
Friday the 13th. The virus randomly renames files and changes file types and creator codes.
Additionally, creation and modification dates are changed to January 1, 1904. Files that can't be
renamed are deleted. Older Macs experience a crash at startup.
Aliases: MacOS/INIT1984, Mac/INIT-1984
April 1992
This virus affects System 6 and System 7. In System 6 with MultiFinder, only the System and
MultiFinder are infected. In System 6 without MultiFinder, it can also spread to other applications. In
System 7, it can infect only the System file. Between January 1 and June 5, the virus infects
applications and the System. Between June 6 and December 31, it displays this message whenever an
infected application is run or an infected system is booted:
You have a virus.
Ha Ha Ha Ha Ha Ha Ha Ha
Now erasing all disks...
Ha Ha Ha Ha Ha Ha Ha Ha
P.S. Have a nice day
Ha Ha Ha Ha Ha Ha Ha Ha
(Click to continue)
The virus can cause crashes.
Aliases: D-Day, Mac/CODE-252
June 1992
This virus infects applications and the Finder or System files, depending on the variant. When it infects
the System file, extensions may not load. The virus can cause some machines running System 7.0.1 to
be unbootable. After an infected application has infected 10 other applications, it displays the message:
Application is infected with the T4 virus and also displays a virus icon. The virus
attempts to disguise its presence by renaming an application Disinfectant. If the application
Disinfectant, an antivirus package, is actually present on the system, it is renamed Dis. A couple of the
variants were distributed in the Trojan games GoMoku 2.0 and GoMoku 2.1.
Variants: T4-A, T4-B, T4-C, T4-D
Aliases: T4, MacOS/T4
April 1993
This virus infects applications, the System file, and Preferences files in System 7 or higher. The virus
creates a file in the Preferences folder called FSV Prefs. The virus is triggered on Friday the 13th,
when it renames files and folders, changes creation and modification dates to January 1, 1904, and
deletes files that can't be renamed. Sometimes a folder or file may be renamed to Virus MindCrime.
Aliases: INIT M, Mac/INIT-M, MindCrime, MacOS/INIT-M
April 1993
This virus infects System and application files. The virus resides in INIT 17 resource. It is triggered
when a machine is rebooted the first time after 6:06:06 PM on October 31, 1993. The first time an
infected machine is rebooted after the trigger date, this message is displayed: From the Depths
of CyberSpace. Errors in the virus code can cause file damage and crashes, especially on older Macs.
Aliases: MacOS/INIT17
November 1993
This virus is triggered if a user boots a machine on October 31. It renames the hard drive to Trent
Saburo. Applications are infected as they run, and they try to infect the system. The virus can cause
system crashes.
Aliases: Mac/CODE-1, Mac/CODE1
March 1994
This virus affects applications and the Finder on Italian versions of System 6 and 7. When an infected
application is run, an invisible file called Preferenze is created and placed in the Extensions folder in
System 7 or the System folder in System 6. When the machine is rebooted, the invisible file is executed
and infects the Finder. Upon the next reboot, the infected Finder removes the invisible extension and
starts to infect applications. After a time determined from the number of infections and the system
time, the virus overwrites the startup volume and the disk information of attached drives over 16MB in size.
Aliases: SysX, MacOS/INIT9403, Mac/INIT-9403
Trojan Unix
April 1994
Source code for versions 2.2 and 2.1f, and possibly earlier versions of the software, contains a Trojan that allows an intruder to gain root access to the host running the Trojaned software. The recommended solution was to disable the current FTP server and replace it with the latest version, 2.4, after verifying the integrity of the source.
Trojan Macintosh
December 1994
This Trojan disguises itself as a program called New Look, a program for modifying the display. If the
Trojan is run, it modifies the System file. Under System 7, upon reboot, the user can no longer type
vowels (a, e, i, o, u). Under System 6, the System file is modified, but this does not affect the keyboard.
Aliases: NVP
October 1997
This is a HyperCard virus that goes from stack to stack, checking for the MerryXmas virus. If the
MerryXmas virus is found, Antibody installs an inoculating script to remove the virus. It spreads only
to open stacks and/or the Home stack, but not to stacks in use. Unexpected behavior could occur.
January 1998
This virus spreads from application to application. Before infecting an application, it copies it, gives it
a random name, and makes it invisible. Then it infects the original application. If the application is run
on a Monday or August 22, there is a 25% chance of triggering damage. The virus draws worms with
yellow heads and black tails over the screen. Next a large red pi sign appears in the middle of the
screen, and then this message appears in changing colors: π You have been hacked by the
Praetorians! π The virus also tries to delete any antivirus software.
Aliases: Mac/CODE-9811, CODE 9811
worm L RH 4.0-5.2
May 1998
Linux-specific worm that exploits a buffer overflow bug in old versions of BIND. An infected host has
a w0rm user with a null password. /etc/hosts.deny is deleted, and /bin/sh is copied to /tmp/.w0rm with the setuid bit set. /var/log is empty or the log files are small with large time gaps, and index.html files are replaced with The ADM Inet w0rm is here! The infected
host then scans for other vulnerable hosts.
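The indicators above lend themselves to a quick manual check. The following is our own illustrative sketch, not an official detection script; it simply tests for the artifacts the description mentions:

```shell
#!/bin/sh
# Check for the ADMw0rm artifacts described above: a w0rm account,
# the /tmp/.w0rm copy of /bin/sh, and a deleted /etc/hosts.deny.
check_admw0rm() {
    found=0
    if grep -q '^w0rm:' /etc/passwd 2>/dev/null; then
        echo "w0rm account present in /etc/passwd"
        found=1
    fi
    if [ -e /tmp/.w0rm ]; then
        echo "/tmp/.w0rm exists"
        found=1
    fi
    if [ ! -e /etc/hosts.deny ]; then
        echo "note: /etc/hosts.deny is missing"
    fi
    return $found
}

check_admw0rm
```

A script like this only spots the known artifacts of one worm; it is no substitute for keeping the vulnerable service (here, old BIND) patched.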
AutoStart 9805
worm Macintosh
May 1998
This is a PowerPC-specific worm that takes advantage of the CD AutoPlay feature in QuickTime 2.5
and later, if it is enabled. The worm copies itself to any mounted volumes and to an invisible
background application in the Extensions folder.
Variants: There are six variants. Variants A, B, E, and F destroy data, with the type of data changing
with the variant. The data is overwritten with garbage and can be recovered only from backups.
Variants C and D are intended to remove the destructive variants. Both delete themselves when they
are done, except for the running copy.
Aliases: Autostart Worm, MacOS/AutoStart.worm, Hong Kong Virus
June 1998
This virus infects Macintosh applications by modifying or adding MDEF resources. It adds an
extension called 666, preceded by an invisible character. Some variants add a new INIT resource to the
System. Generally there is no damaging payload with this virus. The most common variant, Graphics
Accelerator, deletes all nonapplication files started during the sixth hour of the 6th or 12th day of any
month. Variant B deletes all nonapplication files every 6 months.
Variants on the virus are A-J. Graphics Accelerator is variant F. Variant C was the first polymorphic
virus for the Macintosh. The D variant is polymorphic and encrypted. It is the first variant of this virus
to modify the contents of the WIND resource.
Aliases: 666, Graphics Accelerator, Mac/SevenD, Mac/Sevendust, MDEF 666, MDEF 9806, MDEF E,
TCP Wrappers 7.6
Trojan Unix
January 1999
On January 21, 1999 a Trojan horse TCP Wrappers was distributed on FTP servers. The Trojan horse
version provides root access to remote users connecting on port 421 and sends email to an external
address providing information on the site and the user who compiled the program. The solution was to
download a replacement copy and verify the integrity of the new sources.
worm L RH 6.2, 7
January 2001
The worm attempts to exploit remote vulnerabilities in wu-ftpd, lpd, and rpc.statd. The worm
contacts a randomly generated IP address and checks the FTP banner to determine which version of
Red Hat is running so that it can determine which vulnerabilities to try. After it has access to the
machine, it downloads a .tgz copy of itself that is extracted to /usr/src/.poop/, and it appends
a line to /etc/rc.d/rc.sysinit. The worm replaces index.html with a file containing the
text Hackers looooooooooooooooove noodles. It edits /etc/inetd.conf or
overwrites /etc/xinetd.conf as part of the process that ensures its propagation. Additionally, the
worm scans for more vulnerable hosts, and sends a message to anonymous Yahoo! and Hotmail
accounts specifying the IP address of the infected host.
Aliases: Linux/Ramen, Linux.Ramen, Linux.Ramen.Worm, Worm.Linux.Ramen, Elf_Ramen
worm L
March 2001
It infects machines vulnerable to a root access vulnerability in bind. It attacks the remote host and
downloads and installs a package, which contains the worm and the rootkit
t0rnkit. The rootkit replaces many system binaries, such as ps, ifconfig, du, top, ls, and
find, with Trojanized versions, and this helps disguise the worm's presence. The worm stays active
through reboots because it adds lines to /etc/rc.d/rc.sysinit. It deletes /etc/hosts.deny
and adds lines to /etc/inetd.conf to allow root shell access. The worm also sends /etc/passwd, /etc/shadow, and output from ifconfig -a to [email protected]
Aliases: Linux/Lion, Linux/Lion.worm, 1i0n, Lion worm
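Rootkits like t0rnkit work precisely because the tools you would use to investigate (ps, ls, find) have themselves been replaced. A minimal defense is a checksum baseline recorded while the system is known to be good; dedicated tools such as Tripwire do this properly, but the idea can be sketched in a few lines (function names here are ours, and a real baseline belongs on read-only or offline media):

```shell
#!/bin/sh
# Record MD5 checksums of critical binaries while the system is clean,
# then compare later; a t0rnkit-style replacement changes the hashes.
BASELINE=${BASELINE:-/tmp/binary-baseline.md5}

record_baseline() {
    md5sum "$@" > "$BASELINE"     # 'md5' on Mac OS X, 'md5sum' on Linux
}

verify_baseline() {
    md5sum -c "$BASELINE"         # reports OK/FAILED per file; nonzero exit on mismatch
}
```

Typical use: run record_baseline /bin/ps /sbin/ifconfig /usr/bin/top right after a clean install, store the baseline offline, and run verify_baseline from trusted media when you suspect a compromise.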
worm L
April 2001
Targets vulnerabilities found in default installations of Linux. Exploits vulnerabilities in wu-ftpd,
lpd, bind, and rpc.statd to gain root access and execute itself.
The worm replaces ps, adds a cron job to help carry out its activities, adds users ftp and
anonymous to /etc/ftpusers, and replaces klogd with a backdoor program that allows root
shell access. The worm sends a message to two of four addresses in China with information including
the compromised host's IP address, process list, history, hosts file, and shadow password file. Then it
searches for other hosts to infect.
Aliases: Linux.Red.Worm, Linux/Red, Linux.Adore.Worm
worm Sol thru Sol 7 Microsoft
May 2001
SadMind exploits an old buffer overflow vulnerability in the Solstice sadmind program from 1999 to
infect Solaris machines. It installs software that then exploits a vulnerability in Microsoft IIS 4 and 5
from 2000 to attack Microsoft IIS Web servers. On the IIS machines, it replaces the front page with a
page that profanes the U.S. government and PoizonBOx and says to contact [email protected]
Additionally, it automatically propagates to other Solaris machines. It also adds ++ to root's .rhosts file. After compromising 2000 IIS systems, it also modifies index.html on the Solaris machine to have the same message as the IIS machines.
Aliases: Backdoor, Sadmind, BoxPoison, Sadmind.worm, sadmind/IIS, Unix/AdmWorm, Unix/
worm L
May 2001
This worm attempts to be good. It searches for systems infected with Linux.Lion.Worm and attempts to fix the security hole that allowed replication. It blanks any lines in /etc/inetd.conf that contain /bin/sh and scans for other systems infected by Linux.Lion.Worm.
Aliases: Linux/Cheese, Cheese
MacOS/[email protected]
worm Macintosh
June 2001
This is an AppleScript worm designed to spread with Mac OS 9.0 and higher and Microsoft Outlook
Express 5.0.2 or Entourage. It arrives as an email attachment to a message with the subject Secret
Simpsons Episodes! Running the attachment causes Internet Explorer 5 to go to http://www., and causes the script to copy itself to the StartupItems folder. This infects
the local machine. The worm spreads by sending itself via email to contacts listed in the infected user's
address book.
Aliases: Mac.Simpson, Mac/[email protected], Mac.Simpsons, AplS/Simpsons
February 2002
This virus attempts to infect all ELF executables in the current working directory and in /bin/. The
virus also attempts to open a UDP socket on port 5503 or higher to wait for a certain packet from the
attacker, and then opens a TCP connection with the attacker and starts up a shell for the attacker to use.
March 2002
This virus attempts to infect 200 ELF binaries in the current working directory and in /bin/. The size
of infected binaries is increased by 8759 bytes. If the virus is executed by a privileged user, it attempts
to open a backdoor server by opening a socket on port 3049 or higher and waiting for specially
configured packets that contain the backdoor program.
Aliases: Linux/OSF-A, Linux.Jac.8759
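Both of these viruses bind a listening socket at or above a fixed port, so spotting unexpected listeners is the practical countermeasure. The filter below is an illustrative sketch that reads netstat -an style output on stdin; the function name and threshold handling are our own:

```shell
#!/bin/sh
# Flag LISTEN sockets whose local port is at or above a threshold,
# the pattern the backdoors above use (ports 3049+ / 5503+).
flag_high_listeners() {
    threshold=$1
    awk -v t="$threshold" '/LISTEN/ {
        n = split($4, a, /[.:]/)   # local-address column ends in the port
        port = a[n]
        if (port + 0 >= t) print "listener on port " port
    }'
}

# Typical use: netstat -an | flag_high_listeners 3049
```

Remember that on a rootkitted host netstat itself may lie; scanning the machine from a second system with nmap gives an independent view.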
worm FreeBSD
June 2002
BSD/Scalper.worm affects FreeBSD 4.5 running Apache 1.3.20-1.3.24, although it is
recommended that all Apache users upgrade to the latest version. It exploits the chunked transfer-encoding vulnerability in Apache to infect a machine. The worm scans for vulnerable hosts, transfers
itself in uuencoded form to /tmp/.uua, decodes itself to /tmp/.a, and then executes the decoded
file. Each worm keeps a list of all the IPs infected from it.
It includes backdoor functionality that allows a remote attacker to launch denial of service attacks.
Additionally, a remote attacker can execute arbitrary commands, scan files for email addresses, send
mail, access Web pages, and open connections on other ports.
Aliases: ELF/Scalper-A, Linux.Scapler.Worm, Linux/Echapa.worm, Scalper-A, Scalper.worm, Echapa.
worm, ELF/Scalper-A, FreeApworm, FreeBSD.Scalper.Worm, ELF_SCALPER_A
OpenSSH 3.4.p1
Trojan Unix
July 2002
Trojan horse versions of OpenSSH 3.4p1 were distributed from the FTP server that hosts ftp. from approximately July 30 or 31 until August 1. The Trojan version contains
malicious code in the makefile that at compile time opens a channel on port 6667 to a specific host and
also opens a shell as the user who compiled OpenSSH. The solution is to verify the integrity of your existing sources, or simply to download the sources again and verify them.
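Several of these Trojan incidents end with the same advice: verify the integrity of what you downloaded before building it. A minimal sketch of the checksum comparison follows; in practice you check the MD5 sum or PGP signature published over an independent channel, and the function name here is our own:

```shell
#!/bin/sh
# Compare a downloaded file's MD5 sum against the value published by
# the project. A mismatch means the file should not be built or installed.
verify_download() {
    file=$1
    expected=$2
    actual=$(md5sum "$file" | awk '{print $1}')   # 'md5' on Mac OS X
    if [ "$actual" = "$expected" ]; then
        echo "checksum OK: $file"
    else
        echo "CHECKSUM MISMATCH: $file"
        return 1
    fi
}

# Usage: verify_download openssh-3.4p1.tar.gz <published md5 sum>
```

A checksum fetched from the same compromised server as the tarball proves nothing, which is why projects publish sums on separate hosts and sign releases with PGP keys distributed well in advance.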
worm L: RH, Deb, Su, Man, Sl
September 2002
This worm uses an OpenSSL buffer overflow vulnerability to run a remote shell to attack specific
Linux distributions. It sends an initial HTTP request on port 80 and examines the server header
response. It spreads over Apache with mod_ssl installed.
The worm uploads itself as a uuencoded source file, decodes itself, and compiles itself into an ELF
binary, which executes with the IP address of the attacking computer as a parameter. This is used to
create a peer-to-peer network, which can then be used to launch a denial of service attack. All worm
files are stored in /tmp.
Variants: Slapper-A, Slapper-B, Slapper-C, Slapper.C2
Aliases: Linux/Slapper-A, Apache/mod_ssl worm, ELF_SLAPPER_A, Worm/Linux.Slapper, Linux/
Slapper, Linux.Slapper.a.worm, Slapper.source, Slapper-A
Trojan Unix
September 2002
Trojanized versions of sendmail 8.12.6 were distributed on FTP servers between September 28 and October 6, 2002. Versions distributed via HTTP do not appear to be Trojanized. However, if you obtained the sendmail 8.12.6 distribution during that time, it is best to get another copy of the sendmail distribution. See Unix/Backdoor-ADM for details on the malicious code
that is executed.
Trojan Unix
September 2002
Backdoor code that is executed when the Trojanized sendmail 8.12.6 is compiled. The code forks a
process that connects to on port 6667. It allows an attacker to open a shell with the
privileges of the user who compiled sendmail. The process is not persistent with a reboot, but is
reestablished if sendmail is recompiled.
Aliases: Unix/sendmail-ADM
worm L: RH, Deb, Su, Man, Sl
September 2002
This uses the same exploit as the Slapper worm and its variants. It sends an invalid GET request to
identify a vulnerable Apache system.
The worm consists of four files:, sslx.c, devnull, and k. The first three are used to
spread the worm, and k is a backdoor Trojan IRC server that can be used to launch a denial of service attack.
Aliases: Linux/Slapper.E, Linux.Kaiten.Worm, Worm.Linux.Mighty, Linux/Slapper.worm.d, Linux.
worm L
November 2002
This worm attempts to exploit buffer overflows in some versions of bind, popper, imap4, and
mountd to gain access to a system. If it succeeds, it downloads and uncompresses mworm.tgz to /tmp/..../ and sends a message to [email protected] The worm has 46 files. When it has
infected a machine, it begins to attack a random IP address. Additionally, the worm opens a backdoor
remote shell on TCP/1338 for the attacker.
Aliases: Linux/Millen
tcpdump 3.6.2
tcpdump 3.7.1
libpcap 0.7.1
Trojan Unix
November 2002
From November 11–13 Trojan horse versions of tcpdump and libpcap were distributed. The Trojan
horse tcpdump contains malicious code that is executed at compile time. The malicious code connects
to a specific host on port 80 and downloads a file called services. This file generates a C file that is
compiled and run. The resulting binary makes a connection to a specific host on port 1963 and reads a
single byte. The action taken can be one of three things. If it reads A, the Trojan horse exits; D, the
Trojan forks itself, creates a shell, and redirects the shell to the connected host; M, the Trojan closes
the connection and sleeps for 3600 seconds. To disguise the activity, a Trojan libpcap (libpcap is the
underlying library for tcpdump) ignores all traffic on port 1963. The solution is to download new
sources and verify their integrity.
Trojan L Su 8.0 Sl .8.0
January 2003
This Trojan is a malformed .mp3 file. When played with a specific version of mpg123 player, it
recursively deletes all files in the current user's home directory.
Aliases: Exploit-JBellz, JBellz, TROJ_JBELLZ.A
This table includes every Macintosh or Macintosh-related virus (excluding MS Word macroviruses) known by Symantec and McAfee, two of the foremost antivirus software vendors. In it are 26 Mac
viruses. There are roughly 600 Microsoft Word macroviruses that are not covered, the vast majority (530
or so) of which are functional on the Mac.
By way of comparison, depending on who you ask, there are anywhere between 50,000 and 62,000
viruses in total, with the predominantly affected platform being Windows machines, and the
overwhelming majority being directed at Microsoft Office products such as Outlook, Internet Explorer,
Word, Excel, and PowerPoint. As I type this, CNN has yet another story of a Microsoft product run
amok, with the SQL Slammer worm, mentioned earlier in this chapter, taking down ATMs, and
banking and airport scheduling networks around the planet. Coincidentally, CNN's also running an article quoting Bill G. as saying that "security risks have emerged on a scale that few in our industry fully anticipated."
One has to give him credit for noting, in the email he's being quoted from, that passwords are "the weak
link," but I think it's rather disingenuous of him to call every computing professional outside Microsoft
"few in our industry."
Solving the Problem
Two simple steps are all that are required to entirely solve the malware problem:
Don't run software unless you're absolutely certain what it's going to do. If you know what it's going to do, and it damages your system, it's your own fault for running the software. If you don't want a damaged system, don't run it!
Don't run software that can run software for you, unless you're absolutely certain what it, and the
software it will run, are going to do. Software that can run other software for you, especially
anonymous software that's been sent to it by anonymous users on the Internet, is so obviously
unsafe that we shouldn't have to say this. Millions upon millions of computer users around the
world who ignore this rule, though, prove that we need to say it anyway.
If only you would live by these two rules, and force anyone else who uses your computer to do the same,
you'd be completely safe from all forms of malware.
Unfortunately, it's not practical to live your computing life 100% by these rules. Even if you tried 100%
to abide (and we recommend that you do try!), there would be times when a bug in a program allowed it to do something unexpected, and you'd miss your 100% success mark. It's useful to add a few other
things to the mix, in addition to trying to abide by rules 1 and 2, as often as you can:
Pick your software for its known performance in the real world. There's an old computing adage
that says that every program has at least one bug, and at least some extraneous code. The joking
corollary to this adage is that this implies that every program can be reduced, until it's a single
line of code that doesn't work. Jokes or no, a history of bugs and design misfeatures is likely to
imply a future of them as well. Promises are promises, and infected computers are infected
computers, promises or no.
Use virus/worm/Trojan detection software to catch malware entering your system before it has a
chance to activate. Even if the software you're using is securely designed, you don't want to be
the first person on whom a new virus that targets a recently discovered bug is tried. You also
don't want to be redistributing viruses to other, less security-conscious users just because they're
dormant on your system.
Keep up with your vendor's software patches. This means both Apple's patches and patches that
other software vendors make available for their software. Even if your software is designed securely, the bugs will get you. Patches fix bugs.
These supplementary rules, however, are useless if you don't try to apply rules 1 and 2. If you're running
software that's vulnerable by design, you can be hit by every new virus or worm that comes along. Virus
scanner updates and vendor patches come out after the vulnerability has been found, and usually after it
is exploited.
Table 6.2 includes a listing of interesting antiviral solutions for Mac OS X. Some of these can be run to
scan all the files on your machine, or on removable media as it's inserted. Others monitor the network
(specifically the mail system) and try to pry viruses out of email messages before they're even delivered.
Apply any and all that are appropriate in your situation. The fact that Mac OS has not historically been
the target of considerable malware is partly a feature of its overall small market share, but it's probably
more a feature of the older Mac OS design—virus and worm attacks weren't easy. Now, with Unix, they
can be. Whether they will be depends on whether you choose to run software that makes it hard or
easy for them to exist.
Table 6.2. Antiviral (and Other Antimalware) Solutions for Mac OS X
Virus Barrier
Mac OS X 10.1.1 and higher
Mac OS 8.1 and higher
Sophos Anti-Virus
Mac OS X
Mac OS 8.1 and higher
Mac OS X 10.1 and higher
Anti-Virus 8.0
Mac OS 8.1 and higher
Virex 7
Mac OS X 10.0.3 and higher
Open AntiVirus Project
JRE 1.3 or later
The Open AntiVirus Project includes the ScannerDaemon, VirusHammer, and PatternFinder projects,
which compose a Java-based virus scanner. The project warns that it is still under development and
should not be used as the only virus protection. Does not detect polymorphic viruses.
Clam AntiVirus
Linux, Solaris, FreeBSD,
OpenBSD, NetBSD, AIX, Mac
OS X, Cobalt MIPS boxes
Virus scanner written in C. Uses the virus database from the Open AntiVirus project. Can also detect
polymorphic viruses.
Some Virus Scanning/Virus Blocking/Mail Filtering Packages for Mail Servers
Mail Transport Agent
CGvirusscan 1.0
CommuniGate Pro
A program that interfaces CommuniGate Pro with Virex. Requires Mac OS X with Perl. Written by
one of this book's authors, John Ray.
RAV Anti-Virus for Mac OS CommuniGate Pro
Antivirus, antispam, content filtering package. 1.0b1
CommuniGate Pro
Mail filtering program that can be used to filter viruses. Requires a Unix with Perl.
AMaViS—A Mail Virus
A program that interfaces a mail transport agent with virus scanners. Tested on Linux, Solaris, *BSD,
AIX, HP-UX. Expected to be portable to other Unixes.
An email filter that can be used to filter viruses. Tested on Linux. Requires Perl 5.001 or higher,
various Perl modules, and Sendmail 8.12.3 or higher.
Perl module for writing filters for milter, the mail filter API for Sendmail.
Email scanner that can be used to scan for viruses. Linux, FreeBSD, or Solaris.
Email filter that enables a mail transport agent to interface with virus scanners. Linux, Solaris, or
Any RFC-compliant MTA
SMTP proxy that keeps out viruses, spam, and mail relaying. Unix with an ANSI C compiler.
This chapter has covered the general range of automated, autonomous malicious software. While reading
it, we hope you've also come to understand the process by which this malware propagates—in
particular, how poorly designed server and client software facilitates their spread. In some places we
might be accused of being moderately melodramatic, but the malware problem is largely a social one,
and the incredible number of users who remain ignorant of the consequences of their software choices
drives us to be sometimes rather strident in our attempts to communicate. Users need to be educated;
users need to understand the software they use, both for the benefits that it provides and for the problems
it creates; and users need to make an informed choice regarding this software. We're trusting that after
you're informed, you'll make a good choice, and take your responsibilities as a member of the networked
community seriously.
Chapter 7. Eavesdropping and Snooping for Information:
Sniffers and Scanners
Eavesdropping and Information Gathering
Monitoring Traffic with tcpdump
Sniffing Around with Ettercap
Network Surveys with NMAP
Other Information-Gathering Tools
Ethics of Information Gathering
Additional Resources
There's a common saying that if you want a computer secure from network attacks, you should unplug
the network cable (or remove your AirPort card). Although this is obviously not a feasible solution, the
fact remains that no matter how tightly secured your computer configuration is, the moment information
is transmitted over the network, it's an open target for eavesdropping—and the attacker need never
directly "attack" your computer to glean information such as credit card numbers, passwords, and other sensitive data.
Eavesdropping and Information Gathering
Try as we might, if information leaves our hands, it is no longer within our ability to protect it. If we
place a letter in the mailbox and an unscrupulous individual removes it without our knowledge, there's
very little we can do about it. Of course, there's always a possibility of tracking the individual down
later, but, for the most part, we have to rely on faith in our fellow human beings and hope that our mail
won't be in the wrong place at the wrong time. Alternatively, we can develop secret coding systems and
write all our letters in code, but when we send the message, it still needs to go through the postal service
and is still out of our control. It may even be intercepted by someone with a decoder ring who can break
our code.
Network security is very much analogous to the postal system. We hope that information reaches its
destination without being intercepted, but it's very possible that someone can and will sneak a peek
somewhere along the route.
This chapter discusses two types of "nonattacks" that, while not directly harmful to your system,
ultimately may place it at risk of compromise:
Sniffing— A sniffer works by listening to all network traffic, rather than just the data specifically
addressed to the computer on which it's running. In doing so, it can record conversations on
critical machines such as mail, file, and database servers. Modern sniffers even provide password-parsing
filters to create a ready-to-use list of usernames and passwords pulled from the traffic they capture.
Scanning— Scanning, like sniffing, does not directly target a computer's weakness. Instead, it
attempts to identify the active services a computer is running. Attackers typically scan entire
networks to locate potential target machines. As soon as the targets are located, a real attack can be launched.
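The connect-style scan just described is simple enough to sketch in a few lines of Python. This is a hypothetical illustration, not how nmap or other real scanners work (they typically use raw packets and stealthier techniques):

```python
import socket

def scan(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        # connect_ex() returns 0 on success instead of raising an exception
        if s.connect_ex((host, port)) == 0:
            open_ports.append(port)
        s.close()
    return open_ports

# Demo: stand up a listener on an ephemeral port so the scan finds something.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 0))
server.listen(5)
target_port = server.getsockname()[1]
found = scan("", range(target_port - 2, target_port + 1))
print(found)  # the listener's port appears in the list
server.close()
```

An attacker runs the same loop across an entire subnet and the well-known port range; the result is a list of candidate targets.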
The reaction of many administrators to these threats is one of indifference. Sure, they sound bad,
but what are the chances that they'll actually happen to you? How many people are talented
enough to write the code to do this sort of thing? The answers may startle you. First, chances are
extremely high that any Internet-connected machine will be scanned—it's a virtual certainty. If an
exploit can be found or an account hacked, it's very likely that a sniffer will be used. Second, no
talent, programming, or networking experience is required to eavesdrop on a network. For
example, consider this information: -> 140.254.xx.xx:110
USER: kinder
PASS: sef1221 -> 140.254.xx.xx:110
USER: defi
PASS: camping
pop3 -> 140.254.xx.xx:110
USER: jmiller
PASS: dogg119
pop3 -> 140.254.xx.xx:110
USER: thalheimer
PASS: ggghhh
These are (obviously) a collection of POP passwords, but where did they come from? The answer is a
piece of software that works like this:
1. You start it.
2. You tell it to record passwords.
3. You take the recorded password file and go.
The user doesn't need to know anything other than how to run a program. In this example, the four
collected passwords were taken from an idle network at roughly 2:00 a.m. over a period of about 10
seconds. The sniffer can run from any machine on the same network as the traffic that is to be watched.
With the advent (and poor security) of wireless networks, the potential sniffer could be someone with a
laptop sitting in a car outside your building.
The good news is that the same information gathering techniques that attackers use to crack your
network can also be used to help secure it. Sniffers can be used to monitor network traffic for signs of
attack (see Chapter 18, "Alarm Systems: Intrusion Detection"), whereas portscanning can identify
potential targets for attackers as well as identify users who may be violating your security policies.
The majority of this chapter will be spent exploring the available information-gathering software for
Mac OS X and demonstrating its use.
The Five-Minute TCP/IP Primer
Scanners and sniffers have migrated from the realm of tools for administrators to menu-driven utilities
accessible by novices. Although a background in networking isn't necessary, it's still helpful in
interpreting the results of the applications that are examined shortly. Let's take a look at what TCP/IP is
(and isn't) now.
TCP/IP is the protocol suite that powers the Internet. It was designed by the DoD to be a robust
communications standard for linking multiple individual networks into what was originally called the
ARPANET. These original LANs were built through the government's standard "lowest bidder"
contracting method, and consequently were incapable of speaking to one another. TCP/IP was designed
to link these systems regardless of their operating systems and communications mediums. Over time
ARPANET expanded to universities and finally grew to what we know as the Internet.
The OSI (Open Systems Interconnection) Network Model is often used to describe networks. This works
particularly well for TCP/IP in that it shows how the different protocols within the TCP/IP suite work
with one another and with the physical devices used to generate the actual data transmission.
There are seven layers to the OSI model, as shown in Figure 7.1, each building on the one before it.
Figure 7.1. The OSI model is made up of seven layers.
The lowest of the layers, the Physical layer, is composed of the hardware that connects computers and
devices—network cards, wiring, and so on. For wireless networks, this includes the AirPort card and the
carrier frequency of the 802.11a/b/g network. Everything that makes communication possible is
included in the Physical layer, except for the actual data transmission standards. TCP/IP does not yet
come into play at this layer.
Data Link
The Data Link layer defines a means of addressing and communicating with other network devices
connected via the same Physical layer. For Ethernet networks, each device has a unique factory-assigned
"MAC" (media access control, not Macintosh!) address and communicates with other devices by
dividing data into chunks called frames. Each frame contains a source and destination MAC address, a
type, a data payload, and a CRC value for error checking.
Other Data Link methodologies such as SLIP and PPP use different addressing standards, but operate in
much the same manner.
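The frame layout just described can be made concrete by packing an Ethernet II header by hand. This sketch covers only the 14-byte header (the CRC is normally appended by the hardware), and the MAC addresses used are made up:

```python
import struct

def frame_header(dst_mac: bytes, src_mac: bytes, ethertype: int) -> bytes:
    """Pack the 14-byte Ethernet II header: destination MAC, source MAC, type."""
    return struct.pack("!6s6sH", dst_mac, src_mac, ethertype)

hdr = frame_header(bytes.fromhex("ffffffffffff"),   # broadcast destination
                   bytes.fromhex("003065a1b2c3"),   # invented source MAC
                   0x0800)                          # type 0x0800 = IPv4 payload
print(len(hdr))  # 14
```

The data payload and trailing CRC would follow these 14 bytes on the wire.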
This layer works to handle packet collisions and signaling errors as it sends data across the Physical
layer. It is still very low level, however, as we have yet to touch TCP/IP, which technically starts at the
next layer—the Network layer.
The Network layer introduces the "IP" (Internet Protocol) in TCP/IP. An IP address provides a
higher-level, "arbitrary" user-assigned address; the mapping between the Data Link layer's addresses
(MAC addresses on an Ethernet) and IP addresses is handled by the Address Resolution Protocol (ARP).
By creating a standard addressing scheme that isn't dependent on the underlying hardware, the Network
layer also makes possible the routing of information from one network (perhaps using an entirely
different Physical and/or Data Link standard) to another.
Besides handling high-level addressing, the Network layer introduces the IP packet header. An IP packet
header contains source and destination IPs along with additional information for routing and error
checking. The packet payload itself is formed by the higher-level layers, but each TCP/IP packet must
include the IP header information to reach its destination. An interesting notion that is also handled
within this layer and carried as part of the IP header is packet fragmentation. Although this might sound
like a "bad" thing, packets must often be fragmented as they move across different Data Link layers that
define different data sizes for transmission. By dividing data into the appropriately sized chunks for each
Data Link layer, the Network layer can bring together many physically dissimilar networks.
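As an aside on the "error checking" mentioned above: the IP header is protected by a simple ones'-complement checksum, defined in RFC 1071. A minimal sketch, run against a sample header drawn from a commonly used worked example:

```python
import struct

def ip_checksum(header: bytes) -> int:
    """Ones'-complement sum of 16-bit words, complemented (RFC 1071)."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total > 0xFFFF:                 # fold the carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Sample IPv4 header with its checksum field (bytes 10-11) zeroed out:
hdr = bytes.fromhex("4500003c1c46400040060000ac100a63ac100a0c")
print(hex(ip_checksum(hdr)))  # 0xb1e6
```

A receiver repeats the sum over the full header, checksum included; a result of zero means the header arrived intact.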
Within the Network layer, the Internet Protocol defines a mechanism for communicating information
about errors and routing between devices. This is known as ICMP,
the Internet Control Message Protocol. ICMP messages can be transmitted to inform clients of routing
information, errors in transmission, or other network problems. The most typical "human" use of ICMP
is to ping another host to see whether it is reachable:
% ping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=255 time=3.673 ms
64 bytes from icmp_seq=1 ttl=255 time=5.479 ms
64 bytes from icmp_seq=2 ttl=255 time=2.581 ms
64 bytes from icmp_seq=3 ttl=255 time=2.548 ms
64 bytes from icmp_seq=4 ttl=255 time=2.514 ms
--- ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 2.514/3.359/5.479 ms
ICMP packets, although useful for determining device and network status, are not a foolproof means of
diagnosing network conditions. Many firewalls are configured to block ICMP packets because they can
be used by remote attackers to probe a network's state.
More information about the Internet Protocol can be found in RFC 791.
At the transport layer of the network model fall the protocols that are used to move most data from one
machine to another via the Internet Protocol. Although the primary focus in this chapter is on the TCP
and UDP protocols, there are actually dozens of protocols that piggyback on top of IP. You can see a list
of many of these protocols by viewing the contents of /etc/protocols, or visiting http://www.iana.
Most TCP/IP communication is performed (surprise!) by TCP, the Transmission Control Protocol. The
purpose of TCP is to provide reliable data transmission and maintain a virtual "circuit" between devices
that are communicating. It is responsible for ensuring that all packets arrive in order and reach their
destinations successfully. It accomplishes this by providing a sequence number with each packet it
sends, and requiring that an acknowledgement (ACK) be sent by the destination computer for each
packet successfully received. If an ACK is not sent by the receiver within a reasonable amount of time,
the original packet is retransmitted.
Another popular protocol in the TCP/IP suite is UDP. Unlike TCP, which requires that each packet be
received and acknowledged, UDP's purpose is to send information as quickly as possible. Streaming
video, games, and other noncritical Internet applications use UDP to provide the "best possible" time-sensitive information given current network conditions. For example, while watching streaming video of
a live telecast, it makes little sense to stop playback because a few frames were lost here or there.
Instead, UDP sends a continuous stream of data and the remote receiver gets as much of it as it can.
There is, obviously, a loss of quality on poor network connections, but the end result is better than
having to wait for each and every packet to be verified and acknowledged.
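The fire-and-forget nature of UDP is easy to demonstrate with two sockets on the loopback interface. Note that there is no listen(), no accept(), and no acknowledgement anywhere:

```python
import socket

# Receiver: just bind a UDP socket; no connection setup is needed.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("", 0))
port = rx.getsockname()[1]

# Sender: each sendto() is an independent datagram. Nothing is acknowledged,
# and the sender never learns whether the data arrived.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"frame 1", ("", port))
tx.sendto(b"frame 2", ("", port))

data, addr = rx.recvfrom(1024)
print(data)  # b'frame 1' (delivery happens to be reliable on loopback)
tx.close()
rx.close()
```

On a congested real network, either datagram could simply vanish, and neither side would be told.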
TCP and UDP packets contain within their information a set of ports. A port can be considered a virtual
outlet on your computer that other machines can "plug into" to receive a particular service. Ports are
identified by a number, which, in turn, can be mapped to the service that the port provides. For example,
common ports used by the Mac OS X services include
FTP—21 and 20
Appleshare over TCP/IP—548
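The name-to-port mapping comes from a simple text database. Here is a minimal parser for the /etc/services format, run against a few sample lines reproduced inline so the sketch doesn't depend on any particular machine's file:

```python
def parse_services(text):
    """Map (name, protocol) -> port for lines like 'ftp 21/tcp'."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        name, portproto = line.split()[:2]
        port, proto = portproto.split("/")
        table[(name, proto)] = int(port)
    return table

sample = """
ftp-data   20/tcp
ftp        21/tcp   # control connection
afpovertcp 548/tcp  # AppleShare over TCP/IP
"""
svc = parse_services(sample)
print(svc[("ftp", "tcp")])  # 21
```

To query the real database you could feed it open("/etc/services").read(), or simply call Python's socket.getservbyname().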
A list of the ports and the services that typically run on them is provided in the file /etc/services
on your computer. To make a connection, a remote machine opens an arbitrary network port on its end,
then connects to a known service port on the remote computer. By specifying a port along with the data
being sent, the Transport layer protocols achieve multiplexing, or the ability to have multiple
simultaneous connections. The port numbers, along with the source and destination addresses, make up a
socket, which uniquely identifies a connection.
During the lifetime of a TCP/IP connection, there are multiple states that can exist. These are defined in
RFC 793:
LISTEN. Waiting for a connection request.
SYN-SENT. A connection request has been sent and the device is waiting for an acknowledgement.
SYN-RECEIVED. The connection acknowledgement is received and the device is waiting for a
final confirmation.
ESTABLISHED. A connection exists and data can be transmitted and received.
FIN-WAIT-1. Waiting for a connection to be terminated or an acknowledgement of its request
to terminate a connection.
FIN-WAIT-2. Waiting for a connection termination request from the remote device.
CLOSE-WAIT. Waiting for a termination request from an upper network layer.
CLOSING. Waiting for a connection termination acknowledgement from remote device.
LAST-ACK. Waiting for the final acknowledgement of connection termination.
TIME-WAIT. Waiting for a given timeout to be sure that a termination acknowledgement has
been received.
CLOSED. Connection is closed.
Some denial of service attacks take advantage of the acknowledgement wait states that exist in TCP/IP
connections by "starting" to open connections, then not following through with the appropriate
termination or acknowledgements, resulting in the remote device having to wait for a connection timeout
before its network resources can be released.
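The state list above can be read as a state machine. The table below is a heavily simplified sketch covering only an active open followed by an active close; RFC 793 defines many more transitions than are shown here:

```python
# Simplified TCP state transitions (active open, then active close).
TRANSITIONS = {
    ("CLOSED",      "send SYN"):     "SYN-SENT",
    ("SYN-SENT",    "recv SYN+ACK"): "ESTABLISHED",
    ("ESTABLISHED", "send FIN"):     "FIN-WAIT-1",
    ("FIN-WAIT-1",  "recv ACK"):     "FIN-WAIT-2",
    ("FIN-WAIT-2",  "recv FIN"):     "TIME-WAIT",
    ("TIME-WAIT",   "timeout"):      "CLOSED",
}

def run(events, state="CLOSED"):
    """Walk a sequence of events through the transition table."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

print(run(["send SYN", "recv SYN+ACK"]))  # ESTABLISHED
```

A SYN flood, in these terms, is an attacker sending many "send SYN" events and never supplying the follow-up, leaving the victim parked in SYN-RECEIVED until each connection times out.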
The OSI model defines the Session layer as being responsible for maintaining point-to-point
communications between devices. In TCP/IP, this is handled directly by TCP/UDP and the use of
sockets. These protocols span both the Transport and Session layers of the OSI model, so we're left with
the real "meat" of TCP/IP communications: the Presentation and Application layers.
The TCP/IP Presentation layer is where most of what we consider Internet protocols (not to be confused
with the Internet Protocol—IP) are found. Protocols such as POP, HTTP, IMAP, and so on are
implemented at this layer and generally consist of a network "language" for exchanging information. For
example, SMTP, which runs on port 25, receives incoming messages after an intermachine exchange
similar to this:
% telnet 25
Connected to
Escape character is '^]'.
220 NOTE: Port scans are logged.
HELO
250 we trust you
MAIL FROM: [email protected]
250 [email protected] sender accepted
RCPT TO: [email protected]
250 [email protected] will relay mail from a client address
DATA
354 Enter mail, end with "." on a line by itself
This is a test message.
.
250 1810071 message accepted for delivery
In this example, the SMTP responds to the commands HELO, MAIL FROM, RCPT TO, and DATA.
Each protocol's language varies depending on its purpose, and many cannot be as easily read as SMTP.
Nevertheless, this is where most of the "important" information about what is taking place on your
network can be gleaned, and is the primary focus for most network sniffers.
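You can mimic the exchange above entirely in software. The following sketch runs a toy server that speaks just enough of the SMTP "language" to accept one message; the hostnames, addresses, and reply strings are all invented for the demo:

```python
import socket
import threading

def toy_smtp_server(server_sock):
    """Accept one connection and speak a minimal, fake SMTP dialogue."""
    conn, _ = server_sock.accept()
    f = conn.makefile("rwb")
    def reply(line):
        f.write(line + b"\r\n")
        f.flush()
    reply(b"220 toy.example.com SMTP ready")
    while True:
        cmd = f.readline().strip().upper()
        if cmd.startswith(b"HELO"):
            reply(b"250 toy.example.com")
        elif cmd.startswith((b"MAIL FROM", b"RCPT TO")):
            reply(b"250 ok")
        elif cmd == b"DATA":
            reply(b'354 Enter mail, end with "." on a line by itself')
            while f.readline().strip() != b".":
                pass                     # swallow the message body
            reply(b"250 message accepted for delivery")
        elif cmd == b"QUIT":
            reply(b"221 bye")
            break
    conn.close()

server = socket.socket()
server.bind(("", 0))
server.listen(1)
threading.Thread(target=toy_smtp_server, args=(server,), daemon=True).start()

client = socket.create_connection(("", server.getsockname()[1]))
cf = client.makefile("rwb")
responses = [cf.readline().strip()]      # 220 greeting
def send(cmd):
    cf.write(cmd + b"\r\n")
    cf.flush()
    responses.append(cf.readline().strip())

send(b"HELO client.example.com")
send(b"MAIL FROM: <[email protected]>")
send(b"RCPT TO: <[email protected]>")
send(b"DATA")
send(b"This is a test message.\r\n.")    # message body, then a lone "."
send(b"QUIT")
print(responses[-2])  # b'250 message accepted for delivery'
client.close()
```

Because the entire dialogue is plain text over a TCP connection, a sniffer sitting anywhere along the path reads it as easily as the client and server do.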
The Application layer "hides" everything that falls underneath it. This layer is made up of the
applications that use the Presentation layer protocols. This includes your Web browser (Chimera or
Mozilla, of course), Apple's Mail application, and so on.
From here on out, we're going to be looking at the tools that can be used to listen in on the traffic on
your network. Most of these utilities require that you run them as the superuser so that they can place
your network card in promiscuous mode. Normally a networked computer passes information up the
TCP/IP stack only if the packets are addressed specifically to that machine or are broadcast to the entire
network. In promiscuous mode, all traffic will be visible to the computer, regardless of how it is addressed.
Switched networks are the de facto standard for modern LANs. They utilize switches rather
than hubs to provide connections to multiple machines. A switch caches the MAC addresses
it sees on a given port (in its MAC address table, sometimes loosely called an ARP cache), and
forwards traffic for only those addresses to the port. Machines that are located on different switch ports cannot see one
another's traffic. This makes network monitoring a bit troublesome using legitimate tools
unless you can monitor traffic upstream from the switch. There are ways around this
problem, such as overflowing a switch's ARP cache so that it "gives up" and passes all traffic
like a hub. These techniques, however, are often disruptive to other network devices and are
not a transparent means of eavesdropping.
The remainder of this chapter assumes that you've read through the TCP/IP introduction and
have spent some time reading through the appropriate RFCs. TCP/IP, like many subjects in
this book, is not a 5- or 10-, or even 100-page topic. We're attempting to give you the tools
and background to get started and understand what you're seeing, not to become a network
technician in 35 pages. For more information on TCP/IP, you may want to read Special
Edition Using TCP/IP, by John Ray, from Que Publishing.
Monitoring Traffic with tcpdump
The first utility that we'll look at for monitoring network traffic is tcpdump. tcpdump outputs the headers of all
packets seen by your network interface. It features a sophisticated filter language for limiting the output to a specific
host, source, destination, subnet, or any combination thereof.
The simplest use of tcpdump is to start it (as root or with sudo) via the command line with no arguments:
# tcpdump
1: tcpdump: listening on en1
2: 15:42:28.853450 >
51668+ CNAME? (29)
3: 15:42:28.869908 > client0.poisontooth.
51668* 1/4/4 CNAME (217)
4: 15:42:28.870331 >
30577+ A? (29)
5: 15:42:28.913477 > client0.poisontooth.
com.49295: 30577
9/4/4 CNAME[|domain]
6: 15:42:28.914158 >
48116+ AAAA? (29)
7: 15:42:28.940969 > client0.poisontooth.
com.49295: 48116
1/1/0 CNAME (124)
8: 15:42:33.440973 > S
4139514020(0) win 32768 <mss 1460,nop,wscale 0,nop,nop,timestamp 34160 0>
(DF) [tos 0x10]
9: 15:42:33.494618 > S
1497469520(0) ack 4139514021 win 5840 <mss 1460>
10: 15:42:33.494767 > . ack 1
win 33580 (
DF) [tos 0x10]
11: 15:42:33.525710 > P 1:8
(7) ack 1 win
33580 (DF) [tos 0x10]
12: 15:42:34.497076 >
56757+ PTR? (39)
13: 15:42:34.499827 > client0.poisontooth.
56757* 1/1/1 PTR[|domain]
14: 15:42:34.687262 > P 1:13
(12) ack 1
win 33580 (DF) [tos 0x10]
15: 15:42:34.734041 > . ack
13 win 5840
16: 15:42:36.568825 > FP 1:148
(147) ack
13 win 5
For TCP packets (most of what you'll see), the output of the tcpdump can be read as
Time Source-IP.Port > Destination-IP.Port TCP-Flags Segment-Number ack window (receive buffer size) <tcp options>
UDP traffic, including name server resolution, is slightly different, as are other transport layer protocols. The
tcpdump man page defines the output format for several different protocols and is required reading if you want to
fully exploit the software.
In the example output, tcpdump shows a connection between and carrot3., where client0 is requesting a DNS lookup on the domain (lines 2-7). After
receiving a response, client0 proceeds to open a connection with and begin communicating
(lines 8-16). Note that the port is shown as the actual protocol being used, such as http instead of 80. This
substitution is made automatically by tcpdump when possible.
A more useful example of how tcpdump can be used is with a filter to limit the traffic to a specific type. For
example, rather than viewing everything on the network, how about simply watching all HTTP communications
coming from a given host (in this case You do this by adding the filter expression
src host and dst port 80 to the command. This example also introduces
the -q flag to hide extraneous protocol information:
# tcpdump -q src host and dst port 80
tcpdump: listening on en1
0 (DF)
0 (DF)
518 (DF)
0 (DF)
0 (DF)
0 (DF)
18:30:50.089552 tcp 0 (DF)
18:30:50.141997 tcp 0 (DF)
18:30:50.204663 tcp 445 (DF)
18:30:50.415602 tcp 0 (DF)
> tcp 0 (DF)
> tcp 0 (DF)
> tcp 267
> a209-249-123-244.deploy.
> a209-249-123-244.deploy.
> a209-249-123-244.deploy.
> a209-249-123-244.deploy.
Here, tcpdump reports that HTTP requests originating from have been made to
the hosts,, and a209-249-123-244.deploy.akamaitechnologies.
The Boolean expression to filter traffic can be built using and (&&), or (||), and not (!) and the constructs given in
the tcpdump man page. The most useful of these expression primitives are reproduced for your reference
in Table 7.1.
Table 7.1. tcpdump Expression Primitives
dst host <host/ip>
Match packets headed to a hostname or IP.
src host <host/ip>
Match packets from a hostname or IP.
host <host/ip>
Match packets to or from a given hostname or IP.
ether dst <ethernet address> Match packets to a given ethernet address.
ether src <ethernet address>
Match packets from an ethernet address.
ether host <ethernet address> Match packets to or from an ethernet address.
gateway <host/ip>
Match packets by using the given host or IP as a gateway.
dst net <network>
Match packets headed to a given network.
src net <network>
Match packets from a specified network.
net <network>
Match packets to or from a specified network.
net <network> mask <netmask>
Specifies a network by using an address and a 4-octet netmask.
net <network/mask>
Specifies a network by using an address followed by a / and the number
of bits in the netmask.
dst port <port>
Match packets to a specific port.
src port <port>
Match packets from a specific port.
port <port>
Match packets to or from a specific port.
less <length>
Match packets less than the given size.
greater <length>
Match packets greater than a given size.
ip proto <name/number>
Match packets of IP named or numbered (as in /etc/protocols),
such as tcp, udp, icmp, and so on.
ether broadcast
Match ethernet broadcast packets.
ip broadcast
Match IP broadcast packets.
ether multicast
Match ethernet multicast packets.
The capability to create custom filter expressions is one of the most powerful features of tcpdump and other utilities
that use libpcap. You may find a number of additional flags and switches useful, all accessed via the tcpdump
syntax: tcpdump [options] [expression]. The common switches are provided in Table 7.2.
Table 7.2. Common tcpdump Switches
-a Convert network numbers to names.
-c <packet count> Exit after receiving the specified number of packets.
-n Don't convert numbers to names.
-F <filter file>
Use the contents of the named file as the filter expression.
-i <interface>
Listen on the named network interface.
-l Buffer standard out.
-q Quick/Quiet output. Leave out most extra information beyond source, destination, and protocol.
-r <filename>
Read packets from file (see -w).
-t Don't print a timestamp on each line.
-v, -vv, -vvv
Increasingly verbose output.
-w <filename>
Write packets to a file for later analysis (see -r). This is better than trying to analyze a
high-bandwidth/high-activity network in real time, which is likely to result in packet loss.
As you've seen, tcpdump can provide extremely targeted or very general information about your network traffic. It
fits the definition of a sniffer but does not provide tools for attacking a network; its purpose is to help you uncover activity
that may violate your network policy or diagnose unusual network communication problems. A Mac OS X GUI for
tcpdump (MacSniffer) can be downloaded from
If you'd like to try a somewhat fun use for tcpdump, download TrafGraf from http://trafgraf.poisontooth.
com/. This is a simple network traffic grapher that will present a visual snapshot of the communications
on your network and help identify high-volume hosts. I wrote it almost four years ago, but, as long as you
install the Perl modules mentioned in the readme, it will work fine on Mac OS X 10.2.
Sadly, it is next to impossible to detect sniffers on a modern network because they are designed to be completely
passive. Errors existed in earlier versions of Linux and Windows that allowed administrators to probe for interfaces
in promiscuous mode. Unfortunately, these are long gone. (See for details.)
To test whether your machine may be running a sniffer (and is thus compromised), use ifconfig to display
the information for your active network interface. For example:
% /sbin/ifconfig en1
en1: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
inet6 fe80::230:65ff:fe12:f215%en1 prefixlen 64 scopeid
inet netmask 0xffffff00 broadcast
ether 00:30:65:12:f2:15
media: autoselect status: active
supported media: autoselect
Here you can see the PROMISC flag for en1, indicating that this interface is in promiscuous mode.
Although sniffers are tough to find and remove, the use of secure protocols (SSL, IPSec, and so on) can foil most sniffers.
Sniffing Around with Ettercap
Although your computer comes with tcpdump, it is not the sort of tool that modern attackers have in
their arsenal—after all, it requires that you know how to type. For the lazy attacker, there are much
better options, such as ettercap ( Ettercap hides many of the details of
sniffing behind an easy-to-use interface, and can even sniff switched networks.
A few of the "cool features" listed for version 0.6.7 on the Web site include
Character injection in an established connection— A user can insert characters into a connection
between another machine on the local network and a remote host, effectively hijacking the session.
SSH1 support— Even though SSH is known for security, ettercap can sniff the encrypted data
from SSH1 sessions, easily retrieving usernames and passwords.
HTTPS support— Like SSH1 support, ettercap can easily break into SSL-encrypted HTTP sessions.
Plug-ins support— Developers can take advantage of the ettercap API to create their own plug-ins.
Password collector— If a password is detected for the protocols TELNET, FTP, POP, RLOGIN,
IMAP 4, VNC, LDAP, NFS, SNMP, HALF LIFE, QUAKE 3, MSN, or YMSG, ettercap can log it.
OS fingerprint— Easily identify remote operating systems and the network adaptors they use.
Kill a connection— See a connection between two machines that you want to terminate? Ettercap
can do it.
Passive scanning of the LAN— Passive scanning can provide information about your network
and the attached hosts without actively sending packets to the machines.
Check for other poisoners— You have ettercap and so do the "bad guys." Detects other ettercaps
and spoofing attempts on your network.
What makes ettercap truly dangerous is its capability to carry out ARP poisoning. ARP poisoning
exploits the stateless nature of ARP. Normally, machines send out an ARP request asking for the
address of another machine, and presumably receive a reply. In ARP poisoning, the replies are sent
without a request being made. The sniffer essentially tells the rest of the network that it is every other
machine on the network—and subsequently receives traffic for those machines. This type of attack is
easily identified by viewing the ARP cache of a machine on the poisoned network (arp -a). If
multiple machines map to a single MAC address, you may be viewing the result of ARP poisoning.
There are perfectly legitimate reasons for multiple IPs and hostnames to be attached to a
single MAC address. Multihomed servers and terminal servers, for example, often have
multiple IP addresses assigned to a single network interface.
Ettercap, for all the attack capabilities built in, also features an extremely useful ARP poisoning
detector, making it an effective tool against the very people who use it inappropriately.
You can download a precompiled .pkg version of the latest ettercap release from http://ettercap.
After downloading and installing, start ettercap with /usr/sbin/ettercap. If you have multiple
active network interfaces, use the switch -i <interface> to choose which will be used.
When starting, ettercap initially scans your network and collects IP and MAC addresses for all active
machines, and displays two identical columns with the located devices. Figure 7.2 shows the initial
ettercap screen.
Figure 7.2. All active LAN devices are shown.
Ettercap is a cursor-based program that you can navigate with your keyboard's arrow keys. Help can be
shown at any time if you press h. In the initial display, the left column is used to pick a potential source
for sniffing, whereas the right column is the destination. You can choose one, both, or neither (to sniff
everything) by moving your cursor to the appropriate selection in one column, pressing Return, then
doing the same in the other. To deselect an address, press the spacebar. Figure 7.3 shows a selected
source and destination pair.
Figure 7.3. Choose the targets you want to sniff, or leave either field empty to sniff all hosts.
After choosing your targets, you can use the key commands in Table 7.3 to start sniffing and monitoring
the hosts. Several commands do not require a source or destination and simply operate on the entire
network, or on the actively highlighted machine.
Table 7.3. Ettercap Sniffing Commands
Start sniffing using ARP poisoning.
Sniff based on the IP addresses of the selected machines.
Sniff based on the MAC addresses of the selected machines.
Poison the ARP caches of the chosen machines, but do not sniff.
Forge a packet (including all headers, payload, and so on) from the chosen source
computer to the destination.
Delete an entry from the list; it will not be subjected to ARP poisoning if unlisted.
Fingerprint the remote OS.
Passively identify hosts on the network.
Check for ARP poisoners; ettercap will identify all hosts responding to more than one IP address.
Refresh the host listing.
After ettercap has started sniffing, the screen will refresh with a new display listing all the active
connections that you can monitor, as shown in Figure 7.4. In this example, there are three connections:
SSH, NetBIOS, and FTP.
Figure 7.4. Choose the connection to monitor.
Use the arrow keys to choose an interesting connection (such as the FTP session shown in the figure),
then press Enter, or use one of the other options shown in Table 7.4.
Table 7.4. Choose What to Sniff
Sniff the connection.
Forge a packet within a connection.
Filter the connections based on packet attributes, such as port numbers.
Turn on active password collection.
Log all collected passwords (FTP, SSH, Telnet, and so on) to a file. The file will be
named with the current date and time.
Kill the selected connection.
Resolve the selected IP address.
Refresh the list.
Assuming you chose to sniff the connection, the right side of the screen will refresh with a log of the
data coming from the source, whereas the left will contain the responses sent from the destination.
Figure 7.5 shows a sniffed FTP login that has failed.
Figure 7.5. Watch the Presentation layer data in real time.
Use the key commands in Table 7.5 to control the sniffed connection.
Table 7.5. Control the Sniffed Connection
Kill the connection.
ASCII view of the sniffed data.
Hex view of the sniffed data.
Join the source and destination windows.
Display only text (readable) characters.
Inject characters into the connection.
Log the sniffed data to a file.
Bind the sniffed data to a port. (You can then connect to this port on the sniffer to monitor
the connection.)
Exit to the previous screen.
As you can see, ettercap is easy to use and extremely powerful. I highly suggest that you read through
the available help screens and be very careful with what you choose to do. The default distribution of
ettercap includes a number of external plug-ins ranging from ettercap detectors to DoS attack
implementations (for informational purposes only).
Using the cursor-controlled interface is the most user-friendly means of operating ettercap,
but for those who want to run it via script or launch it as a daemon, all functions are
accessible via the command line. Type /usr/sbin/ettercap -h to display the
command-line syntax.
Network Surveys with NMAP
Although sniffing is often an effective means of gathering information about (and from) machines on a
network, sniffing that relies on ARP poisoning is easily detected, and passive eavesdropping is easily
defeated by a switched network. Sniffing is also limited to machines located along the path of the
communication: A machine in California, for example, cannot sniff a machine in Ohio unless the Ohio
traffic is being routed through the California network.
For remote information gathering, most attackers rely on portscanning. A portscan is simply a report of
the open ports on a remote machine. For example, this is a portscan of a remote mail server:
# /usr/local/bin/nmap -sS -O
Starting nmap V. 3.00 ( )
Interesting ports on (
(The 1583 ports scanned but not shown below are in state: closed)
Remote operating system guess: Mac OS X 10.1 - 10.1.4
Uptime 26.951 days (since Mon Nov 18 11:30:09 2002)
Not only are the open services displayed, but also information about the operating system version and
system status. There are a number of ways to determine which ports are open on a machine, the most obvious
being to create and then tear down a connection to a remote machine (a "TCP Connect" scan). This is the
approach that is taken by Apple's portscan tool within the Network Utility application (also accessible
from the command line as /Applications/Utilities/Network
Utility.app/Contents/Resources/stroke <address> <start port> <end port>).
The trouble with this approach is that the connection, because it is completed, is easily logged and tracked
by the operating system. For a remote attacker who wants to catalog the computing inventory of an entire
university, attempting a TCP connect scan will very quickly lead to discovery.
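For illustration, a crude TCP connect "scan" can be improvised with nothing but the shell; every probe below completes a full connection, which is exactly what makes this style of scan so easy to log. The target and ports are arbitrary placeholders for the example:

```shell
# A minimal TCP connect probe using bash's built-in /dev/tcp device.
# Each successful redirect completes a real TCP connection to the target,
# which the remote operating system can log like any other connection.
connect_scan() {
  local host=$1 port
  for port in "${@:2}"; do
    # Opening /dev/tcp/<host>/<port> succeeds only if the port accepts
    # a full connection; the subshell closes it again immediately.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "port $port open"
    else
      echo "port $port closed"
    fi
  done
}

# Probe a few low ports on the local machine.
connect_scan localhost 1 7 9
```

This is the same class of test that stroke performs, just slower and noisier; stealth scans exist precisely to avoid completing the connection this way.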
Rather than taking this direct (and detectable) route, attackers employ a variety of "stealth" scans that do
not carry out the process of setting up a complete connection. Instead they typically send a packet to
begin setting up a connection, wait to see the response, then drop the connection attempt. For example,
the SYN stealth scan "knows" that a SYN packet sent to a given port is required by RFC 793 to be
answered with either a SYN/ACK or an RST packet. If an RST is received, the port is closed, whereas a
SYN/ACK indicates an open port. This sequence takes place low enough in the TCP/IP stack that it is not
logged on many machines, and is not made available to intrusion detection software that simply watches
for connections.
The "cream" of the portscanning crop is NMAP—a program designed to be the ultimate remote
reconnaissance tool. NMAP supports more than 10 unique scans, including an "idle" scan that does not
require any packets to be sent between the scanner and the scannee. A nice introduction to NMAP
scanning can be found on the insecure.org Web site.
In addition to simply scanning a remote host, NMAP also makes it possible to scan entire subnets, hide
behind decoys, and fingerprint remote operating systems. After a scan is completed, an attacker can
simply take the NMAP output (including operating systems and versions) and cross-reference it with
available exploits, creating his or her own personal guide to chaos. For the administrator, however,
NMAP can provide a list of machines that need attention—either to be locked down or upgraded to a
later version of the operating system.
Before you say, "But shouldn't the administrator already know what's on the network?"
consider that many universities and companies have employees that bring their own personal
computers to work, or carry laptops back and forth from home. Unless networkwide
authentication is in place, it is virtually impossible to fully control the computing
environment of a large institution.
So, should NMAP be considered a risk? Consider the comments of John Green, U.S. Naval
Surface Warfare Center:
"The intelligence that can be garnered by using NMAP is extensive. It provides all the
information that is needed for a well-informed, full-fledged, precisely targeted assault on a
network. Such an attack would have a high probability of success, and would likely go
unnoticed by organizations that lack intrusion detection capabilities."
Thankfully, most stealth portscans can be detected and blocked by intrusion detection
software (see Chapter 18) or simply defeated by a firewall (see Chapter 17).
Remember that a portscan is an information-gathering device, not an exploit in and of itself.
If your operating system and software are secured, an attacker will still not be able to gain
access to the system.
Installing NMAP
To install NMAP, download the latest release, unarchive it, and enter the NMAP distribution
directory:
% curl -O
% tar zxf nmap-3.00.tgz
% cd nmap-3.00
Next, configure the source with configure --mandir=/usr/share/man:
% ./configure --mandir=/usr/share/man
checking for gcc... gcc
checking for C compiler default output... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking build system type... powerpc-apple-darwin6.2
checking host system type... powerpc-apple-darwin6.2
Finally, compile and install the software with make followed by make install:
% make
Compiling libnbase
cd nbase; make
gcc -g -O2 -Wall -g
-c -o snprintf.o snprintf.c
gcc -g -O2 -Wall -g
-c -o getopt.o getopt.c
Compiling libnbase
rm -f libnbase.a
ar cr libnbase.a snprintf.o getopt.o getopt1.o nbase_str.o nbase_misc.o
ranlib libnbase.a
# make install
Compiling libnbase
cd nbase; make
make[1]: Nothing to be done for `all'.
./shtool mkdir -f -p -m 755 /usr/local/bin
/usr/share/man/man1 /usr/local/share/nmap /usr/local/share/gnome/apps/
./shtool install -c -m 755 nmap /usr/local/bin/nmap
If the next command fails -- you cannot use the X front end
test -f nmapfe/nmapfe && ./shtool install -c
-m 755 nmapfe/nmapfe /usr/local/bin/nmapfe &&
./shtool mkln -f -s /usr/local/bin/nmapfe
/usr/local/bin/xnmap && ./shtool install -c -m 644 nmapfe.desktop
make: [install] Error 1 (ignored)
./shtool install -c -m 644 docs/nmap.1 /usr/share/man/man1/nmap.1
./shtool install -c -m 644 nmap-services /usr/local/share/nmap/nmap-services
./shtool install -c -m 644 nmap-protocols /usr/local/share/nmap/nmap-protocols
./shtool install -c -m 644 nmap-rpc /usr/local/share/nmap/nmap-rpc
If you'd rather not compile NMAP by hand, it is available as part of the Fink project for easy
installation. See the Fink Web site for details.
Using NMAP
The NMAP syntax is nmap [scan type] [options] [hosts or networks…]. Table 7.6
contains the most common and useful of the NMAP scan types and options. For a lengthy insight into
the different scan types, read the NMAP man page.
Table 7.6. Basic NMAP Options
-sS— TCP SYN stealth port scan (default if root).
-sT— TCP connect scan (default if unprivileged, that is, not root).
-sU— UDP port scan.
-sP— Ping scan.
-sF— Stealth FIN scan.
-sX— Stealth Xmas scan.
-sN— Stealth Null scan.
-sR— RPC scan.
-sA— ACK scan.
-sW— Window scan.
-sL— List scan.
-I— Identd scan (identify the user owning the remote service).
-O— Use TCP/IP fingerprinting to identify the remote operating system.
-p <port list>— A port or range of ports specified in the format <start>-<end>,<port>,<port>,....
-F— Only scan ports listed in NMAP's nmap-services file.
-v— Increase verbosity of output. Can be used twice for maximum output.
-P0— Don't ping hosts.
-D <decoy host>,<decoy host>,...— Use spoofed decoys to make the scan appear to come from multiple different machines.
-T <Paranoid|Sneaky|Polite|Normal|Aggressive|Insane>— Set the timing policy. Paranoid scans are slow and difficult for intrusion detection systems to detect, whereas the opposite end of the spectrum, Insane scans, happen as quickly as possible and may even lose data.
-n— Never perform DNS resolution.
-R— Always perform DNS resolution.
-oN <filename|->— Output a normal logfile, or use - to output to standard out.
-oX <filename|->— Output an XML logfile, or use - to output to standard out.
-oG <filename|->— Output a greppable (searchable) logfile, or use - to output to standard out.
-oS <filename|->— Output the results in "Script Kiddie" format.
-iL <filename|->— Read the target list from the named file, or use - to read the target list from standard in.
-e <interface>— Specify the network interface.
--interactive— Enter interactive mode.
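As a small illustration of why the greppable -oG format is handy, the sketch below filters a grepable logfile for hosts with a particular open port. The logfile contents are fabricated for the example; a real one would come from a command such as nmap -sS -oG scan.grep against a subnet:

```shell
# Fabricated grepable (-oG) output for two hypothetical hosts.
cat > /tmp/scan.grep <<'EOF'
Host: 10.0.1.2 ()  Ports: 22/open/tcp//ssh///, 80/open/tcp//http///
Host: 10.0.1.5 ()  Ports: 25/open/tcp//smtp///
EOF

# Print the address (field 2) of every host with an open ssh port.
awk '/22\/open/ {print $2}' /tmp/scan.grep
# -> 10.0.1.2
```

This kind of one-line filtering is exactly what makes the greppable format useful for auditing large networks.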
For example, to perform a simple stealth scan with fingerprinting on a host,
one could use /usr/local/bin/nmap -sS -O with the host's name or address:
# /usr/local/bin/nmap -sS -O
Starting nmap V. 3.00 ( )
Interesting ports on (
(The 1594 ports scanned but not shown below are in state: closed)
Remote operating system guess: Mac OS X 10.1 - 10.1.4
Uptime 32.086 days (since Wed Nov 13 09:49:51 2002)
More useful is the ability to map an entire network. This can be done by specifying ranges within an IP
address (10.0.1.1-254), wildcards (10.0.1.*), or an IP address/network mask (10.0.1.0/24). The
notations 10.0.1.*, 10.0.1.1-254, and 10.0.1.0/24, for example, are three nearly identical ways of
describing the class C subnet 10.0.1.x. Coupling a network scan with a handful of spoofed decoys can
be an effective means of mapping a network and confusing intrusion detection systems along the way.
For example,
# /usr/local/bin/nmap -sS -D,,, -O
would create a map of all device services on the subnet, including an OS fingerprint, and would spoof
additional scans appearing to come from the three decoy machines to help cover one's tracks.
The spoofed decoys are used in addition to the machine originating the scan. Because
packets need to return to the scanner for it to analyze the data, the decoys serve only to
muddy the water for remote intrusion detection systems and logs. They do not fully cover
your scan.
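As a quick aside on subnet sizes, a class C subnet such as the hypothetical 10.0.1.x network contains 254 usable host addresses, which is why a full sweep of even one subnet generates a noticeable amount of traffic:

```shell
# Enumerate the usable host addresses in the 10.0.1.x class C subnet.
# (10.0.1.0 and 10.0.1.255 are the network and broadcast addresses,
# so the usable hosts run from .1 through .254.)
for i in $(seq 1 254); do
  echo "10.0.1.$i"
done | wc -l
# -> 254
```

A scan of a class B network multiplies that by 256 subnets, which is part of why portscans of large address spaces are so readily noticed.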
NMAP offers two interfaces to its operation: the command-line interface you've seen, and an interactive
mode that is accessible when you start nmap with the --interactive flag. If you'd prefer a more
"GUI" approach to your scanning, check out NMAP v.X and NmapFE. Both utilities wrap an Aqua GUI
around the NMAP command-line software.
Other Information-Gathering Tools
A number of other tools are available for Mac OS X and Unix systems that can be used against your
network, or to help protect it. Much of this software uses sniffing and portscanning as a means of
gathering information, but rather than leaving it up to you to interpret the results, it provides highly
specialized reporting features to simplify analysis. Rounding out this chapter, we'll take a look at a few
of these packages so that you can have the best tools for defense, or at least see what crackers will be
using against you.
Wireless Sniffing: KisMAC
The advent of wireless networks and their inherent security risks (access points generally left open,
weak or crackable encryption, and so on) has inspired a new hacking sport: "wardriving." Wardriving
involves packing your wireless laptop into your car, then driving around town while it scans for open
wireless access points. There are a number of Windows and Linux applications for this purpose, but the
Mac had been lacking until recently.
The application KisMAC provides wireless sniffing capabilities and can identify access points, clients,
and hardware manufacturers, and can even decrypt WEP passwords. KisMAC's network status and client
identifier are shown in Figure 7.6.
Figure 7.6. KisMAC can sniff out wireless networks and even crack WEP encryption.
To determine the amount of activity on a network (that is, whether it's worth your time to watch),
KisMAC even provides usage graphing under a separate tab, as shown in Figure 7.7.
Figure 7.7. KisMAC graphs wireless traffic in real time.
Unless a wireless network uses additional authentication or security beyond the 802.11b standard, it is at
risk from tools such as KisMAC.
Folks in search of something a bit less flashy may be interested in Mac Stumbler, a competing (and also free) wireless sniffer.
Security Audits
As was mentioned earlier, performing a portscan and then cross-referencing the information returned
with known attacks for the given platform provides a good starting point for any attack. Couple that with
the ability to check service ports for specific information and you've got a tool that can identify open
services, the risks they present, and the exploits that could affect them.
Security scanners are a popular tool for administrators to audit the security of their networks. A scanner
can pull together in a few minutes information that would take an administrator hours to gather by hand
with NMAP and a few online security references. Although there are a number of Unix tools that
perform this function, few combine the ability to trace connections visually, watch Internet traffic, detect
DoS attacks, perform brute-force password attacks, and so on. The software MacAnalysis does exactly
this, and more.
Pulling together more than 1300 exploits and dozens of network tools, the $50 MacAnalysis, shown in
Figure 7.8, is an excellent investment for administrators of open networks.
Figure 7.8. MacAnalysis can identify hundreds of security holes on your computer.
MacAnalysis can be used to configure your computer's firewall and uses industry-standard tools such as
NMAP and Snort for portscans and intrusion detection. (These are wrapped within the MacAnalysis GUI
—you'll never even know they're there.)
For an Open Source alternative, try Nessus. Nessus features an up-to-date library of exploits, and can
produce a security report for an entire network, along with potential fixes and BugTraq/CVE IDs.
Ethics of Information Gathering
Network information gathering is a bit like telephone eavesdropping and, while often unethical, can go
unnoticed indefinitely. By sniffing a network, one can uncover personal data, conversations, passwords,
and potentially embarrassing topics. Whether or not this is appropriate to your network depends entirely
on your network policy. Users should be made aware of the monitoring capabilities of your network, log
destinies, and so on.
Our systems, for example, log all data in and out of the LAN. After a storage period of several weeks has
elapsed, the data is removed and the storage space recycled. The logs are not browsed or made available
to anyone unless an attack is being investigated or packet logs are needed as supporting evidence of an
incident.
We do not use the information to conduct "witch hunts" among our own users. No matter how hard you
try, you will inevitably get a browser pop-up window that points you to an inappropriate Web site, or
receive emails that are not "company business." These random events may come across in your network
administrator's packet logs as evidence of less-than-desirable behavior, even though the user's actions
were entirely innocent.
In general, if trust exists between the administrators and the users, "spying" on individuals is
unnecessary. If you can't trust the people on your own LAN, there are bigger problems afoot than a
sniffer can uncover.
Portscans, also an information-gathering tool, are more likely to be used on a day-to-day basis than a
sniffer. They can uncover unauthorized services running on network machines, and help audit large
hardware installations where no one person is responsible for equipment purchases. Unlike sniffers,
portscan tools do not present the ethical dilemma of uncovering private information. They are, however,
widely recognized as an attack by intrusion detection systems and administrators in general. You should
never run a portscan on a network that you do not administer. If you do, you're likely to find your ISP or
security group knocking on your door.
A friend learned the hard way that portscans are frowned upon when he accidentally
transposed two octets in his network address when attempting to scan his own subnet. Within
an hour, he was under investigation by his own security group as a potential attacker. A
quick look at the target subnet revealed the obvious (and, in retrospect, amusing) mistake.
Although portscans are considered "legal," it is difficult to prove that the intent was not malicious.
Additional Resources
For your continued reading, you may wish to visit these resources for more information on information-gathering techniques and software.
Packet Storm: Packet Sniffers, a collection of specialized packet sniffers
"Security of the WEP Algorithm," by Nikita Borisov, Ian Goldberg, and David Wagner
"How to Watch Spyware Watching You," by Steven Gibson
"Sniffing Out Packet Sniffers," by Brien Posey
EtherPeek, an Ethernet packet analysis tool by WildPackets
NetMinder, Ethernet protocol analysis and alerting by Neon Software
"Sniffers: What They Are and How to Protect Yourself," by Matthew Tanase
"TCP/IP for the Uninitiated," by Erik Iverson
"Introduction to TCP/IP"
Example Web Packet Sniffer (written in Perl)
"Remote OS Detection via TCP/IP Stack FingerPrinting," by Fyodor
"Port Scanning Methods"
Summary
It's your network (presumably)—shouldn't you know what's going on? This chapter introduced several
sniffing and portscanning tools that can be used to audit your network security, help detect inappropriate
use of your network resources, and even uncover others using similar tools against you. The tcpdump
utility is part of Mac OS X and can easily display all packet activity on your network. The ettercap
sniffer, on the other hand, provides attacker-style features including password filters, ARP poisoning,
and more. Finally, portscan tools such as NMAP can help an administrator (or an attacker) map out
vulnerable systems on their network by identifying active services and OS versions.
Chapter 8. Impersonation and Infiltration: Spoofing
Spoofing Attacks
Spoofing Defenses
So your Mac OS X computer is sitting there minding its own business, and along comes another
machine that says "Hi there, remember me? Want to go out and share a file?" If the visiting computer
knows the correct information to mount your machine's drives, or to authenticate as one of your users,
what's your machine to do, except believe that it is what it says it is, acting on the authority of who it
says that it is? This is a fundamental problem with the idea of a computer network,
and one that, in many senses, almost nothing can be done about. We and our computers identify
ourselves and other systems over the network via mechanisms that range from visual branding identity
to passwords, and from digital signatures to serial number information extracted from CPUs, network
cards, and system boards. Whether we're trying to identify software across the network or are identifying
ourselves to it, some pieces of selected information are considered sufficient to prove those identities. If
some imposter system out there can replicate all these pieces of information and provide them on
demand, that imposter will be believed to be the system that it is impersonating. This is called spoofing.
In a sense, any form of identity misappropriation can be considered spoofing. An intruder using your
user ID and password is spoofing your identity on the system. A remote machine that is serving up an
NIS domain with the same name as your normal server, in the hopes that your machine will listen to it
rather than the password master it was intended to obey, is spoofing your NIS domain master. In recent
Internet history there have been a number of fly-by-night businesses that have set up Web sites located
at common misspellings of popular e-commerce sites with visual duplicates of the real site. These spoofs
of the actual online retailer's sites have taken orders at elevated prices and then passed the order off to
the real business and skimmed the pricing difference. PayPal users, as well as customers of a host
of other online businesses, have been bilked out of money by "reregistering" at the request of spoofed
email indicating that their accounts had been compromised and giving them a link at which they could
reenter their personal and credit card information.
Because in many cases the deception doesn't constitute fraud, even big business gets in on the act
occasionally. When AT&T put together its 1-800-OPERATOR campaign, MCI cashed in on the
opportunity and picked up 1-800-OPERATER, directing it to its own operator service, and supposedly
raked in $300,000 a month of free money by using AT&T's advertising. A classic demonstration that
computer security cannot be implemented without user education was carried out via a spoof by a tiger
team hired to test a U.S. Military installation's security. When the team discovered that they couldn't
find a way to break into the system through application of network techniques, they spoofed an official
system patch document from IBM, packaged it with a software backdoor to the system, and had it
delivered to the installation, which dutifully installed it, letting them in. An enterprising individual who
wanted to compete in the world of domain name registrations came up with the clever(?) plan to steal
InterNIC's traffic and customers by spoofing InterNIC's registration service. Heck, there's a whole
flippin' country full of people pretending to be the children of Nigerian diplomats in desperate need of a
foreign partner to move millions of dollars out of hidden bank accounts.
Formally, spoofing is defined as providing false identity credentials for the purpose of
carrying out a deceit. In some circles it's considered a requirement that this deceit be with the
intent to obtain unauthorized access to a system or its resources, whereas others consider
maintaining plausible deniability to be spoofing oneself as a trustable entity. Finally, pure
theft of identity, although it meets almost everyone's definition, doesn't tend to be called
spoofing unless the theft of identity was for the purpose of forging that identity: Just using
someone else's password to access a system wouldn't commonly be called spoofing unless
the purpose was to pretend that that person was performing some actions.
Clear as mud? Don't worry—after you get used to the way the term is used, it'll all make
sense in context.
The issue boils down to a matter of establishing trust, and fixing a set of credentials by which a protocol
or system may establish the identity of some other user or system. If these credentials can be forged or
replicated, then a third untrusted system may spoof itself as a trusted entity to any system that relies on
the replicable credentials for identification. The parallels between the computing world and the human
world are, in this case, many, as the problem of establishing trust and identity is a basic issue of human
existence as well. We take thumbprints or retina scans at the entrance to a top-secret facility (or at least
in James Bond's world we do); we compare handwritten signatures with originals; banks ask us for our
mothers' maiden names; and every day we compare our remembered copies of our associates' voices
with what our ears are telling us, so that we can identify the speaker. As humans we have the same
problems our computers have. We need to trust some collection of information as being sufficient to
identify others around us as who they are. No matter how large this collection of information is, it will
always be possible in some way for someone to fake these credentials, but we trust them because we
believe that they are, for our purposes, sufficient. If we're overly naive, we may place our trust on too
few credentials, and we may be fooled with too great a frequency. If we're overly paranoid we may
decide that this means that we can't actually trust anyone, ever. Neither of these extremes is practical
for human interactions, nor are they practical for computer interactions. Trust must be established in
some way, and it will always be possible for the mechanism for establishing that trust to be deceived. It
is simply an issue that we and our computing systems must live with.
Spoofing Attacks
What do you trust to establish identity? What harm can be done to you or your computer if a mistake is
made in establishing identity? If you got an email like the one shown in Figure 8.1, would you believe
it? Would you act upon it? In my default view, you don't even see that replies aren't going to go to the
apparent Apple address, but instead to a different address entirely. This information becomes visible
only if I explicitly view the headers, or pay careful attention to the reply window if I click Reply.
Figure 8.1. A spoofed email message.
Did you know that if you have sendmail running, you can send this same message to yourself by issuing
the following commands? Heck, a small amount of lying to your email client will produce much the
same result! Sending it to someone else is no more difficult, but I wouldn't recommend it, as some
people might think they need to see you in court about that.
% telnet soyokaze 25
Trying ...
Connected to
Escape character is '^]'.
220 ESMTP Sendmail 8.12.8/8.12.7; Tue, 4 Mar 2003 03:04:54 -0500 (EST)
HELO soyokaze
250 Hello soyokaze.biosci.ohio-state.edu [], pleased to meet you
MAIL FROM: [email protected]
250 2.1.0 [email protected] Sender ok
RCPT TO: [email protected]
250 2.1.5 [email protected] Recipient ok
DATA
354 Enter mail, end with "." on a line by itself
Subject: Warning, possible security breach
Reply-To: [email protected]
Dear customer,
A breach in our network security has recently come to our
attention. We believe that a number of our customers
computers may have been compromised through data transmitted
to our servers while accessing our .mac service.
If you could please email us a copy of your NetInfo database,
it would help us greatly in tracking and eliminating this
problem, and in informing those of you who may have been
compromised by this problem.
Please attach the result of the following command to a
reply to this email message.
nidump passwd .
Thank you for your time and assistance,
The Apple Security Team
.
250 2.0.0 h2484sxq027196 Message accepted for delivery
QUIT
221 2.0.0 closing connection
Connection closed by foreign host.
In this case we've spoofed some content by lying to the mail transport mechanism. It's neither
particularly difficult to do nor particularly difficult to see through, but if the message content is
believable, you might not be alerted to pay careful attention to the headers, and that might lead you to
believe that content comes from one source when it actually comes from another. A good number of
people thought they were actually reentering and verifying their user information for the PayPal online
payment site when they responded to an email that appeared to come from PayPal. Instead they were
giving their credit card information to a thief who was spoofing PayPal's identity to establish trust and
steal their financial information.
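One simple sanity check that would catch the message shown earlier is comparing the From and Reply-To headers before answering. A sketch, with fabricated addresses standing in for the spoofed message's headers:

```shell
# Fabricated headers in the style of the spoofed message above.
cat > /tmp/headers.txt <<'EOF'
From: security@apple.com
Reply-To: attacker@example.net
Subject: Warning, possible security breach
EOF

# Extract each header value with sed.
from=$(sed -n 's/^From: //p' /tmp/headers.txt)
reply=$(sed -n 's/^Reply-To: //p' /tmp/headers.txt)

# A Reply-To that differs from the From line is not proof of spoofing,
# but it is exactly the red flag present in the message shown earlier.
if [ -n "$reply" ] && [ "$reply" != "$from" ]; then
  echo "warning: replies go to $reply, not $from"
fi
```

Of course, as the telnet session demonstrates, the From line itself can be forged just as easily, so this check catches only the lazier spoofs.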
IP-Address Spoofing
Other types of identity can be spoofed as well. For example, the TCP/IP by which information moves
around the Internet needs some way for a receiving system to know where to reply to information that is
sent to it. In this system each piece of information to be transmitted is broken down into bite-sized
chunks that are manageable for network transmission. These bite-sized chunks are called packets, each
one of which gets stamped at creation time with a return address (an IP address) saying where it came from.
If you're paranoid about security, you may already be thinking this is a bad idea. If each packet is
stamped with a return address, what's to keep someone from forging the return address, or altering it en
route and claiming a packet came from somewhere it didn't, right? If you're not that security paranoid
yet, you might be thinking "Why bother—what's to be gained?" Well, for one thing, the stolen return
address would let a cracker falsify his network identity, so that you wouldn't know who's attacking you.
If he can falsify the return address, there's no way for you to track incoming traffic back to the actual
origin. For another, it makes for an interesting way to create a massive network-swamping denial of
service attack: Someone could forge your return address onto their packets and then attack many random
machines around the Internet. Everyone sees your machine as the attacker, and quite often can be
induced to retaliate or at least respond (even automatically as a required part of the TCP/IP), resulting in
devastating network consumption or outages for your machine.
The first scenario is something like the network version of sending your enemies (or friends, if you've a
twisted sense of humor) free tickets or invitations to events that don't actually exist. The tickets or
invitations have to be sent in an envelope with a falsified return address, because the lack of a return
address might seem suspicious to the recipients. Those who try to go to the nonexistent event have no
idea who actually sent the information on the event, because at this point, they know that the information
certainly did not come from the address specified as the return address. They'd be mad, and possibly out
some resources for travel time and dinner, but would have no idea who they were mad at. The second is
more like writing bad checks on someone else's account. The recipients of the bad checks think the
culprit is the name on the check and pursue payment accordingly, which probably has a negative impact
on the account holder's credit. In either case, the misdirection as to the real source of the information can
lead to considerable trouble for either the party who is misled or the party who is spoofed as the sender.
In reality, although the idea of using return addresses attached to each packet sounds like a bad idea at
first blush, it's an almost inevitable consequence of the way networks work. Data isn't transmitted
directly, on an unbroken wire, from the sending machine to the receiving machine. Instead it hops from
machine to machine to machine along the network, being handed from one to the next, always (in the
ideal, anyway) getting closer to its target, until it finally reaches the machine that it's intended for. This
model for data transmission is a result of the impossibility of wiring every machine on the planet directly
to every other machine, and many of the vulnerabilities in today's networks are an inherent part of the
model. Return addresses on packets, for example, are a consequence of the need for a packet at hop 5 of
a 10-hop journey to have some idea of where it's come from and where it's going. The fifth machine
down the line doesn't have any direct connection to either the originating machine or the receiving
machine, and so it must get the information regarding where a packet came from and where it's going
from somewhere. That "somewhere" might as well be information directly contained in the packet,
because as easy to forge as it is, there's no other less-forgeable method for keeping the information with
the packet. One might argue that if there was auxiliary information kept beside the packet, instead of in
it, that perhaps the first machine to receive a packet from a sender could fill in the "this packet came
from:" field with the originator's IP address, or some other identifying token. All a cracker intent on
mischief would need to do then is control the machine at the first hop and get it to forge the information,
and we're right back in the same boat. Because such schemes increase the complexity of the system
without meaningfully increasing the trustability of the information, the TCP/IP system uses the simple,
seemingly naive—but no worse than any other—solution of return addresses contained in the packets themselves.
If a machine lies about its IP address in packets that it sends, there may very well be no information to
disagree with this identification. What's to say that the information is a lie? If it hasn't originated in some
place where a machine of that IP really shouldn't exist, what's to say that the packet didn't really come
from a machine with that IP address? Nothing. It's as if you, as a person, were expected to trust completely in the identity of the sender of a letter based on the return address typed on the envelope.
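Just how little stands behind that "return address" is easy to make concrete. The following Python sketch (the addresses in it are invented for illustration) packs a minimal IPv4 header and writes an arbitrary source address into it; nothing in the header itself lets a receiver verify that field.

```python
import socket
import struct

def ip_checksum(header: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src: str, dst: str, payload_len: int = 0) -> bytes:
    """Build a minimal 20-byte IPv4 header. The source field is simply
    whatever the sender chooses to write into it."""
    ver_ihl = (4 << 4) | 5              # version 4, header length 5 words
    header = struct.pack(
        "!BBHHHBBH4s4s",
        ver_ihl, 0,                     # version/IHL, type of service
        20 + payload_len,               # total length
        0x1234, 0,                      # identification, flags/fragment
        64, 6, 0,                       # TTL, protocol (TCP), checksum = 0
        socket.inet_aton(src),          # the forged "return address"
        socket.inet_aton(dst),
    )
    checksum = ip_checksum(header)
    return header[:10] + struct.pack("!H", checksum) + header[12:]

# A header claiming to come from a machine the sender does not own:
forged = build_ipv4_header("10.0.1.5", "10.0.1.1")
print(socket.inet_ntoa(forged[12:16]))  # the source a receiver would trust
```

The checksum even validates, because the checksum protects only against accidental corruption, not against deliberate forgery.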
If you try to set your machine's IP address to one that already exists on the network, you'll be presented
with a dialog that looks something like Figure 8.2, indicating that the IP that you've selected is already
claimed on the network.
Figure 8.2. Macs check for other machines on the network with the same IP address they're
trying to claim when they try to initialize their network interfaces.
Because there is no guarantee that other machines on the network are engaging in communications that would allow the software to detect duplicates passively, some active step must be taken to determine whether any other machines are claiming the IP address your machine wants. While initializing the network interface, therefore, the Mac is sending out some sort of
network broadcast query and inviting other machines to tell it that it's not allowed to have the IP it
requested. What's to say that there's really a machine out there with your IP address? What if someone
just wanted to keep you from using your network connection? If they simply watch for the check from
your machine on the network, and spoof the response to indicate that the IP is already taken, wouldn't
that constitute an effective, and quite annoying, denial of service attack?
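In outline, the duplicate-address check (and the attack on it) reduces to very little logic. This Python sketch, with invented addresses, models a host that treats any claiming reply to its probe as a conflict; a conflictd-style attacker only has to answer every probe it sees.

```python
def probe_for_conflict(my_ip: str, replies: list) -> bool:
    """The duplicate-address check a host performs when initializing its
    interface: broadcast "who has my_ip?" and treat ANY reply claiming
    that address as a conflict. `replies` stands in for whatever answers
    arrive on the wire -- the host cannot tell real ones from forged."""
    return any(reply["claimed_ip"] == my_ip for reply in replies)

# Honest network: nobody answers, so the host keeps the address.
quiet_wire = []
# Attacked network: a spoofer answers every probe it observes.
forged = [{"claimed_ip": "10.0.1.7", "mac": "de:ad:be:ef:00:01"}]

print(probe_for_conflict("10.0.1.7", quiet_wire))  # False: address is free
print(probe_for_conflict("10.0.1.7", forged))      # True: denial of service
```

The victim has no way to distinguish the forged reply from a genuine conflict, which is the whole attack.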
In fact, the conflictd program provides a convenient way to test this theory. It requires a slightly out-of-date version of the libnet library; I chose libnet-1.0.2.tgz, but other pre-1.1 versions are likely to work (if you find an updated conflictd, it may work with the current libnet version, which is most easily installed using fink). It also requires libpcap, and a
few header files that aren't normally included with Jaguar. The easiest route to installation is to use
fink to install libpcap and libnet, then build the older version of libnet by hand and install it
in /usr/local/ rather than in /sw. Then edit the Makefile so that it includes and links from /
usr/local/include and /usr/local/lib before it checks the /sw hierarchy. In its
unadulterated form conflictd only annoys Macs, but the effect on Windows machines of the 95 and
98 persuasion is pronounced. Issuing the following command:
# ./conflict-DoS en1
Using interface en1
Each dot is 10 popups..
causes the targeted WinTel box to display the dialog shown in Figure 8.3. Actually, it causes it to display the dialog 200 times, and the machine is effectively useless until the OK button is
clicked on each of them. All this program is doing is forging a storm of reply packets to send to the
Windows machine as though it had enquired about the availability of its IP address. Modifying the
program to work as a daemon, watching for the Mac version of the conflict-resolution request, and
spoofing equally damaging responses for Mac OS X would not be particularly difficult. The
daemonization code is already provided in the conflictd source.
Figure 8.3. Wintel boxes check when the network interface is initialized, but also pop up annoying conflict dialogs whenever any other machine tries to register a duplicate IP at a later time.
More insidiously, packets carry not only their own return address that must be assumed to be correct, but
they also can carry a return route via which responses are supposed to be delivered. This was a
considerably more naive idea than having packets carry their own return address. Allowing packets to
specify their own source routing allows an attacker to insert packets into a network with a spoofed IP
address, so that you don't know who or where they're really coming from, and also to convince the
network to return responses to places that the normal routing software for the Internet wouldn't send them
—places that wouldn't typically be able to receive the response. Using this capability, an attacker can
slip packets into your network with spoofed IP addresses—packets that claim to be from other trusted
machines on your network. They can also specify that responses be sent to some other host outside your
network, rather than to the machine on your network that matches the IP specified in the packet.
Thankfully, almost all sane network software is now configured to block source-routed packets because
they have a very limited potential for positive use and considerable potential for abuse.
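Blocking source-routed packets is mechanically simple, which is part of why nearly all stacks do it. As a sketch (not any particular firewall's code), the check walks the IPv4 options area looking for the loose and strict source-route option types (131 and 137) and rejects the packet if either is present.

```python
LSRR, SSRR = 131, 137   # IPv4 loose/strict source-route option types

def has_source_route(ip_header: bytes) -> bool:
    """Walk the IPv4 options area looking for source-route options --
    the check a filtering router applies before dropping such packets."""
    ihl_bytes = (ip_header[0] & 0x0F) * 4       # header length in bytes
    options = ip_header[20:ihl_bytes]
    i = 0
    while i < len(options):
        opt = options[i]
        if opt == 0:                # End of Option List
            break
        if opt == 1:                # No-Operation (single-byte option)
            i += 1
            continue
        if opt in (LSRR, SSRR):
            return True
        length = options[i + 1]     # length byte covers type+len+data
        if length < 2:
            break                   # malformed option; stop parsing
        i += length
    return False

# A 24-byte header (IHL=6) carrying an LSRR option gets flagged:
routed = bytes([0x46]) + bytes(19) + bytes([131, 3, 4, 0])
print(has_source_route(routed))
```

A plain 20-byte header, having no options area at all, passes the check untouched.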
It should be noted that both these problems with TCP/IP packets stem from the fact that packets are
delivered like snail-mail letters, with no direct connection between the sender and the receiver. This
allows the information in the packets to be forged with no possibility for verification of the contents.
Even a complex interaction between sender and receiver with each replying about the contents of the
packets received and cross-checking with the other as to the validity is not proof against a man-in-the-middle attack, whereby a machine somewhere on the Net spoofs both connections and pretends to be each machine to the other. The specifics of man-in-the-middle attacks are discussed in more detail in
Chapter 9, "Everything Else," but the basic mechanism should be understood as spoofing oneself as a
piece of wire along the network, while actually monitoring and potentially altering communications
traveling through that wire. The fault is then, as noted previously, one of establishing and verifying trust.
Any credentials can be duplicated. Which ones are sufficient?
ARP Spoofing
The IP address contained in the packet is one form of return address for the packet, and is used by the
network at large to determine where the packet belongs. More accurately, routing devices on a network
determine whether a packet belongs on another network by examining the packet's IP address, and move
it to that other network if it does. Internal to a particular network, however, any machine is free to
examine the packet and read it. Whether a machine is supposed to read the packet is based on a packet
field that contains another form of identification, the MAC address. MAC (Media Access Control) addresses are hardware IDs assigned to ethernet interfaces that are (supposed to be) unique across all ethernet interfaces ever made. As a packet is delivered from network to network, the
delivering router fills in a hardware ID (MAC address) that it has determined to be correct for the next
machine that is supposed to receive the packet. This MAC address may be for the next routing machine
that is supposed to handle the packet. Or, if the packet is being delivered into the final network where
the machine with the IP address matching the packet's target is supposed to reside, it may be the MAC
address of the target machine itself. The router's determination of the MAC address that corresponds to a
destination IP is based on the Address Resolution Protocol (ARP). Packets placed on the network that
are bound for a local machine (ones that don't need to be sent through a router), also have a MAC
address that corresponds to the ethernet interface of the destination IP. This, too, is based on ARP
information acquired by each host on the network.
Unfortunately, the determination of the values to place in an ARP table is based on "observed reality" on
the wire: The routers and machines collect a table of IP addresses and their associated MAC addresses
from the values they see in packets on the network. It's all too easy to forge packets with false MAC
addresses as well as false IP addresses, and this can easily cause traffic entering a network that is
supposed to be delivered to one machine to instead end up elsewhere.
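Stripped to its essence, the cache that gets poisoned is just a mapping updated on faith. A minimal Python model (the IP is hypothetical; the MAC addresses are the ones used in this chapter's examples) shows why the last reply always wins:

```python
class ArpTable:
    """A miniature ARP cache: it believes whatever replies it observes,
    which is exactly the property ARP spoofing exploits."""
    def __init__(self):
        self.entries = {}

    def observe_reply(self, ip: str, mac: str):
        self.entries[ip] = mac      # no authentication: last reply wins

    def lookup(self, ip: str):
        return self.entries.get(ip)

cache = ArpTable()
cache.observe_reply("10.0.1.2", "0:30:65:aa:37:ae")   # the genuine host
cache.observe_reply("10.0.1.2", "0:30:65:19:3c:54")   # a forged reply
print(cache.lookup("10.0.1.2"))   # frames now go to the forger's interface
```

Real ARP caches add timeouts and per-interface state, but nothing in the protocol adds authentication, so the core behavior is as above.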
Ettercap: Ethernet Monster
The ettercap program, which was covered in more detail in the previous chapter (available through fink, or as a Macintosh .pkg file directly from the authors), is capable of quite a few nifty (if you're not the admin) ARP spoofing–based tricks, including
interposing itself into and decrypting SSH1 communications. For example, in Figure 8.4 ettercap is
shown sniffing the contents of an FTP session that's occurring between a pair of hosts on a different
branch of a switched network.
Figure 8.4. Ettercap sniffing traffic in a switched network via ARP spoofing.
After seeing ettercap's sniffing capabilities in the previous chapter, this probably doesn't seem
surprising. Note, however, that I've just said that this sniffing is occurring on a switched network, which
appears to be in contradiction to what we've previously written regarding switched networks. Here,
traffic that isn't supposed to be going to my machine (Racer-X in this example) is
clearly managing to get out of the physical branches of the network to which it's supposed to be
restricted by the switches, and coming to my machine as well. The traffic isn't doing this in
contravention of the switches' intent to restrict it to certain wire segments, however; it's doing it because
my machine has lied to the switches and other machines on the network. Because the ARP-based routing
information is gleaned from what's seen on the wire, and because most switches route solely based on
the MAC addresses they've seen, the other hardware believes the lies my machine has told, and dutifully
routes traffic onto my network segment, even though I'm not the actual intended recipient.
In this case, what is happening is that ettercap has spoofed ARP-response packets onto the network, claiming that the MAC address of my machine is associated with the IP addresses of both of the hosts that want to communicate (Sage-Rays-Computer and mother). The ARP table on my machine, however, has been left uncorrupted. Ettercap has effectively interposed itself between Sage's computer and mother. Whenever Sage's computer wants to speak to mother, it looks up mother's MAC address from its ARP
table and builds packets with that MAC address as their destination. Unfortunately for Sage's computer,
my machine has poisoned Sage's ARP table. Instead of getting the MAC address that really belongs to
mother, Sage's computer gets the MAC address of my machine, and dutifully puts this incorrect
information into the outgoing packets. The switched network knows no better, and dutifully routes the
packet to Racer-X, where ettercap lies in wait. Upon receipt, ettercap looks at the packet to see whether
there's anything interesting in it, such as usernames or passwords, and then rewrites an identical
outgoing packet, only with the correct MAC address for mother. The network conveys the packet to
mother, who reads it and has no idea that it's been misrouted, read, and altered along the way. Replies
back from mother suffer the same fate because mother's ARP table has also been poisoned so as to
associate Racer-X's MAC address with Sage's computer's IP. As you can see from the following output,
the only computers with poisoned ARP information regarding Sage's computer or mother as destinations
are Sage's computer and mother themselves. Other computers on the network have the correct
information, and are largely oblivious to the machinations of ettercap.
[Sage-Rays-Computer:~] sage% arp -a
? ( at 0:30:65:19:3c:54
? ( at 0:30:65:39:ca:1
? ( at 0:30:65:19:3c:54
[root@mother ~]# arp -a
( at 08:00:3E:19:47:CA [ether] on eth1
? ( at 00:30:65:19:3C:54 [ether] on eth0
? ( at 00:30:65:19:3C:54 [ether] on eth0
ray 154> arp -a
? ( at 0:20:af:15:2a:d
? ( at 0:30:65:19:3c:54
? ( at 0:30:65:aa:37:ae
Racer-X conflictd 87# arp -a
? ( at 0:20:af:15:2a:d
? ( at 0:30:65:39:ca:1
? ( at 0:30:65:aa:37:ae
Racer-X conflictd 88# ifconfig
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
inet netmask 0xff000000
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
mtu 1500
inet6 fe80::230:65ff:fe19:3c54%en1 prefixlen 64 scopeid 0x4
inet netmask 0xffffff00 broadcast
ether 00:30:65:19:3c:54
media: autoselect status: active
supported media: autoselect
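The interposition shown above is, at bottom, a rewrite-and-forward step. The following Python sketch (a simulation with made-up frames, not ettercap's actual code) captures the logic: a frame drawn to the attacker by the poisoned ARP entry is inspected, then re-emitted with the MAC address it should have carried.

```python
MY_MAC = "0:30:65:19:3c:54"                 # Racer-X, the interloper
REAL_MACS = {"mother": "0:20:af:15:2a:d"}   # the truth the victims lost

harvested = []

def relay(frame: dict) -> dict:
    """Inspect a misdirected frame, then forward a corrected copy so
    that neither endpoint notices the detour."""
    assert frame["dst_mac"] == MY_MAC       # poisoning delivered it to us
    if "PASS" in frame["payload"]:          # e.g. an FTP password command
        harvested.append(frame["payload"])
    forwarded = dict(frame)
    forwarded["dst_mac"] = REAL_MACS[frame["dst_host"]]  # true recipient
    return forwarded

frame = {"dst_host": "mother", "dst_mac": MY_MAC, "payload": "PASS hunter2"}
out = relay(frame)
print(out["dst_mac"], harvested)
```

Because the payload is forwarded unchanged (or changed at will), neither endpoint sees anything but a normal conversation.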
Ettercap can not only monitor traffic in connections it's watching in this way, but it can also inject into
the connection traffic that the receiving computer will believe originated from the machine with which
it's supposedly talking directly, forge almost any packet onto the network, kill most types of network
connections, perform pattern-based substitutions into packets as they pass by, and extract passwords
from a large number of protocols.
Fortunately, this type of manipulation of the routing information is only mostly invisible to the rest of
the network, not completely invisible. ARP tables are usually updated only when some piece of software
needs to use the data in them, so it's probable that some computers will never see any effect from the
ARP spoofing going on, and their ARP tables will never contain poisoned data. However, it's also
possible for a conscientious admin to run software that specifically watches for inconsistencies in the
ARP replies bouncing around the network, and that throws up warnings if a given IP address or MAC
address seems to be changing its apparent identity regularly. The arpwatch package, described in more
detail in the next chapter, is one utility that is very nice for this type of examination. In its typical
configuration, it runs as a daemon, and sends email to root whenever it sees something suspicious
happening with MAC<->IP mappings on the network. For example, in this set of arpwatch debugging output, arpwatch is reporting on the continuing argument between Sage's computer (really MAC 0:30:65:aa:37:ae) and Racer-X (really MAC 0:30:65:19:3c:54), and between mother (really MAC 0:20:af:15:2a:d) and Racer-X, over exactly what their real MAC addresses (here called ethernet addresses) are, as each sends out ARP responses claiming an IP<->MAC address match for other machines on the network to see:
Racer-X arpwatch 5# arpwatch -d
From: arpwatch (Arpwatch)
To: root
Subject: flip flop
hostname: <unknown>
ip address:
ethernet address: 0:30:65:19:3c:54
ethernet vendor: <unknown>
old ethernet address: 0:20:af:15:2a:d
old ethernet vendor: <unknown>
timestamp: Monday, January 20, 2003 13:14:44 -0500
previous timestamp: Monday, January 20, 2003 13:14:19 -0500
delta: 25 seconds
From: arpwatch (Arpwatch)
To: root
Subject: flip flop
hostname: <unknown>
ip address:
ethernet address: 0:20:af:15:2a:d
ethernet vendor: <unknown>
old ethernet address: 0:30:65:19:3c:54
old ethernet vendor: <unknown>
timestamp: Monday, January 20, 2003 13:14:44 -0500
previous timestamp: Monday, January 20, 2003 13:14:44 -0500
delta: 0 seconds
From: arpwatch (Arpwatch)
To: root
Subject: flip flop
hostname: <unknown>
ip address:
ethernet address: 0:30:65:aa:37:ae
ethernet vendor: <unknown>
old ethernet address: 0:30:65:19:3c:54
old ethernet vendor: <unknown>
timestamp: Monday, January 20, 2003 13:14:44 -0500
previous timestamp: Monday, January 20, 2003 13:14:44 -0500
delta: 0 seconds
From: arpwatch (Arpwatch)
To: root
Subject: flip flop
hostname: <unknown>
ip address:
ethernet address:
ethernet vendor: <unknown>
old ethernet address:
old ethernet vendor: <unknown>
timestamp: Monday, January 20, 2003 13:15:14 -0500
previous timestamp: Monday, January 20, 2003 13:14:44 -0500
delta: 30 seconds
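The detection logic behind those reports is small: remember which MAC was last seen for each IP, and complain whenever a later packet disagrees. A Python approximation (not arpwatch's actual C source) of the core check:

```python
def watch(events):
    """Flag "flip flops": for each (ip, mac) observation, compare against
    the MAC previously recorded for that IP and report any change."""
    last_seen = {}
    alerts = []
    for ip, mac in events:
        old = last_seen.get(ip)
        if old is not None and old != mac:
            alerts.append((ip, old, mac))   # one "flip flop" report
        last_seen[ip] = mac
    return alerts

# Two machines fighting over one IP generate a report per reversal:
events = [("10.0.1.5", "0:20:af:15:2a:d"),
          ("10.0.1.5", "0:30:65:19:3c:54"),
          ("10.0.1.5", "0:20:af:15:2a:d")]
print(watch(events))
```

The real daemon adds vendor lookups, timestamps, and rate limiting, but the flip-flop test itself is exactly this comparison.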
Couic—The Connection Cutter
IP and MAC-address spoofing isn't limited in its "utility" to sniffing connections, though. A spoofed packet can serve any purpose to which a genuine response packet from or about a given host might be put. For example, RST (reset) packets that appear to be from some working machine on the network
can be spoofed onto the wire by another machine. This effectively knocks the spoofed machine off the
wire because any machine in communication with it will receive periodic resets to the connection, and
believe that the spoofed machine has decided to drop the connection itself. The couic program bills itself as a "connection cutter" or "active filter," used to enforce network policy rules. It certainly can be used to watch for and sever
almost any sort of network communication from or to any host on its local network, but it seems highly
likely it'll be used for mischief. The author notes that inappropriate use in France can land the
mischievous in jail for three years. Couic compiles relatively easily with the same libnet and
libpcap libraries as required for conflictd. After it is compiled, it is run as couic -i
<interface> -t <deny rules>. For example, the following command will deny TCP traffic to
or from
couic -i en1 -t tcp and host
In fact, running this command while I've got an SSH connection up to that host results in the following error output on my machine, as soon as I try to do anything that talks to it:
[Sage-Rays-Computer:~] sage% Read from remote host
Connection reset by peer
Connection to closed.
Racer-X ray 36>
Trying to reestablish the connection while couic's still running results only in a similar error:
Racer-X ray 36> slogin -l sage
write: Broken pipe
While this is happening, couic itself is reporting its activities. According to the author, arpwatch
sees these attacks as well, but at this time we haven't been successful at getting arpwatch to display
this information.
The rule language is in flux, so for topics beyond simple protocol denial as shown here, we recommend
visiting the author's Web page.
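A forged reset is a small thing to build, which is why tools like couic are possible at all. The sketch below (ports and sequence number invented) packs a bare 20-byte TCP header with only the RST bit set; the checksum, which in real use must be computed over a pseudo-header, is left at zero here.

```python
import struct

def build_tcp_rst(src_port: int, dst_port: int, seq: int) -> bytes:
    """A minimal 20-byte TCP header with only the RST flag set. To be
    accepted, a real forgery also needs source/destination ports and a
    sequence number matching the victim connection."""
    offset_and_flags = (5 << 12) | 0x04     # data offset 5 words, RST bit
    return struct.pack(
        "!HHIIHHHH",
        src_port, dst_port,                 # forged to match the connection
        seq, 0,                             # sequence, acknowledgment
        offset_and_flags,
        0,                                  # advertised window
        0, 0,                               # checksum (zero here), urgent
    )

rst = build_tcp_rst(22, 49152, 123456)
flags = struct.unpack("!H", rst[12:14])[0] & 0x3F
print(len(rst), hex(flags))
```

Everything the forger needs (ports, addresses, an in-window sequence number) is visible to anyone who can sniff the connection, which is why a local attacker can cut connections at will.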
LaBrea Tarpit
Despite some writers' insistence that spoofing lacks any possible legitimate purpose (, at least one inspired system administrator has come up with a way to use
spoofing as a tool for the good of the network. If you remember, we've mentioned several times that a
typical attack mechanism for script kiddies is to run automated attacks against all IPs on a network, and
you can typically defend against these by simply keeping up on patches and making your machines
difficult to invade. It's always seemed like there should be an active way to defend against these, instead
of simply passively ignoring the attacks. For example, a nice active defense might be to track down the
attacking machines and kick them off of the network. Unfortunately, most of the time, the machines
actively attacking yours aren't actually the attacker's machines; they're just zombies that the attacker has
taken over. Also unfortunately, because their owners are rarely aware that their machines are taking part in an attack, they rarely take kindly to your implementing such an active defense against them. Or, to put it more simply, you're likely to land in legal trouble if you attempt to mitigate attacks against your machine by attacking back.
Fortunately, system administrator Tom Liston, in a flash of inspired cleverness, realized that although
the owners of a remote system might take litigious offense at your contacting their system and initiating
an attack, they'd be on awfully shaky ground if it was their system that contacted yours, and yours
simply refused to hang up the phone.
Tom realized that one of the signatures of many automated attacks (and specifically the Code Red worm
that was traveling the Net at the time) is that the attacker rarely knows the specific IPs that are on your
network. Therefore, they tend to try to attack every IP address on your IP range. This can be used
against them. What if you could put a machine at each of the unused IP addresses, and have it try to tie
up the attacker's connection by simultaneously being invulnerable to the attack while also refusing to
hang up when it was contacted? Many networks have more unused IP addresses than used ones, so if you could populate them all with machines that would refuse to let an attacker go on to
another target, it might seriously slow down the propagation of an attack. Nobody wants to set up that
many "tarpit" machines just to mire inbound connections, but what if you didn't need a machine?
Because the only thing the attacker gets back is network responses from the machine, why not just spoof
the responses? Put together a server that watches for connections trying to make their way to machines
that don't exist and spoof responses to them as though those (nonexistent) machines wanted to talk.
ARP even provides a convenient mechanism by which to perform this spoofery. When a router needs to
contact a machine for which there isn't an entry in its ARP table, it sends out an ARP request asking for
some machine to claim that it has that IP. The inbound router for a network segment certainly doesn't
have any ARP entries for machines that don't exist, and nonexistent machines aren't going to respond
and populate that table. This makes it easy for a tarpit program that wants to pretend to be "any machine
that doesn't exist" to do so; all it needs to do is watch the network for ARP requests with no matching
responses, and respond to them with spoofed packets itself. The router then routes all further information
for that IP to the tarpit server.
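The claim-what's-unclaimed decision can be captured in a few lines. This Python sketch is a simulation, not LaBrea's source, and the retry threshold is an assumption; the fake MAC address is the one LaBrea is shown answering with later in this section.

```python
FAKE_MAC = "0:0:f:ff:ff:ff"   # the bogus hardware address LaBrea answers with

def tarpit_step(ip, request_counts, answered_ips, threshold=3):
    """One decision step of a LaBrea-style tarpit: if an IP has now been
    ARPed for `threshold` times with no real machine answering, send a
    spoofed ARP reply claiming it. Returns the reply, or None."""
    if ip in answered_ips:
        return None                         # a real machine exists; stay out
    request_counts[ip] = request_counts.get(ip, 0) + 1
    if request_counts[ip] >= threshold:
        return (ip, FAKE_MAC)               # spoofed reply: "that's me"
    return None

counts, answered = {}, set()
results = [tarpit_step("10.0.1.9", counts, answered) for _ in range(3)]
print(results)   # unanswered probes, then a spoofed claim on the third
```

The same structure explains the misfire described later: a slow real host can answer after the threshold is reached, which is why LaBrea also needs a way to relinquish addresses it grabbed too eagerly.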
Tom implemented this idea as the LaBrea package, which sits on a network and watches for incoming
ARP requests that go unanswered. These it reasonably presumes to be looking for machines that don't
exist. When it sees them, it spoofs responses that essentially fool the attacking zombie into thinking it's
found a fresh host to infect, and then keeps it hanging on the line indefinitely. This ties up the attacker's
resources, and prevents at least one zombie resource from making further attacks until the connection to
your machine drops. In the grand scheme of things, it's not much, but it's far better than just shuffling the
attack off onto someone else. As one system administrator put it, "You can come into work in the
morning, look at your logfiles, and say 'Wow—I'm actually saving the world.'" Saving the world might
be a bit over the top, but one analysis suggested that with only a few hundred average networks
cooperating by running a single LaBrea tarpit server, roughly 300,000 zombied machines could be
held at bay indefinitely, for nothing more than a few percent of each cooperating site's bandwidth. If I can contribute my half-percent of helping to stop the spread of Internet worms, and I get to annoy script kiddies in the process, that's as close to saving the world as I need to get to feel I've done my job for the day.
Until recently, LaBrea had been available from its original home, but due to recent DMCA legislation, it is not currently available at that address. It's interesting how legislation that protects corporate security interests can interfere with protecting your security interests. Fortunately, though, you can now find a version elsewhere. The current version
requires some modification to get it running properly on OS X. Like conflictd, it requires a slightly out-of-date version of libnet, libpcap, and some headers that are missing from Jaguar. In addition, part
of libpcap doesn't seem to work quite the same on OS X as on some other operating systems, and the
timeout field of pcap_open_live() seems to need a value greater than zero for it to properly
capture packets. Upping this value to 5ms appears to cause LaBrea to function as designed.
Although installing and configuring most network security packages is a significant task, for the benefit
it provides, using LaBrea is quite simple. It's almost completely autoconfiguring to a useful default
state, so although a number of command-line options are detailed in Table 8.1, you can produce a
completely functional tarpit server by running the LaBrea executable with only verbose logging, and
flags to direct it to talk to the proper interface and to write to the terminal (STDOUT) instead of
syslogd. (You'll have to read the documentation that comes with it to learn about the -z flag, however;
the author wants it that way.) For example, if I run LaBrea on Racer-X as follows, it will quite
effectively tarpit any attempted connections to machines that don't exist. Again, we've left the prompts
intact in these captured dialogs so that you can more easily keep track of who's doing what to whom.
Racer-X LaBrea2_3 50# ./LaBrea -vbozi en1
Initiated on interface en1
/etc/LaBreaExclude not found - no exclusions
On another computer on my network, my ARP table currently looks like this:
[Sage-Rays-Computer:~] sage% arp -a
? ( at 0:30:65:39:ca:1
? ( at 0:30:65:19:3c:54
If I then try to ping a machine that doesn't exist, Sage's computer is going to try to find it via ARP:
[Sage-Rays-Computer:~] sage% ping
PING ( 56 data bytes
--- ping statistics ---
7 packets transmitted, 0 packets received, 100% packet loss
Meanwhile, tcpdump, looking at what's going on with ARP, has this to say:
Racer-X arpwatch 17# tcpdump -i en1 arp
02:17:57.741882 arp who-has tell
02:17:58.742136 arp who-has tell
02:17:59.742338 arp who-has tell
02:18:00.742510 arp who-has tell
02:18:00.755537 arp reply is-at 0:0:f:ff:ff:ff
The eventual response assigning 0:0:f:ff:ff:ff (a rather fake-looking MAC address) comes from
LaBrea, which has meanwhile detected that there were no responses to those four ARP requests, and has
decided to reply with a spoofed machine. The following line appears on Racer-X's console:
Tue Jan 21 02:18:00 2003 Capturing local IP:
Sage's ARP table now looks like this:
[Sage-Rays-Computer:~] sage% arp -a
? ( at 0:30:65:39:ca:1
? ( at 0:30:65:19:3c:54
? ( at 0:0:f:ff:ff:ff
If I further try to ping a machine that does exist, but that isn't already in Sage's ARP table, something
different happens. tcpdump shows this log:
02:18:46.206856 arp who-has tell
02:18:46.215544 arp reply is-at 0:0:f:ff:ff:ff
02:18:49.302675 arp reply is-at 0:20:af:15:2a:d
And LaBrea shows another captured IP address:
Tue Jan 21 02:18:46 2003 Capturing local IP:
Yet my ping proceeds as normal:
[Sage-Rays-Computer:/Users/sage] sage# ping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=255 time=1.146 ms
64 bytes from icmp_seq=1 ttl=255 time=0.913 ms
64 bytes from icmp_seq=2 ttl=255 time=0.929 ms
64 bytes from icmp_seq=3 ttl=255 time=0.928 ms
Pinging the IP address that LaBrea captured works because the machine at that address is a bit slow (it's a Linux box running on a 486 chip, but it makes a fine router), and it responded to the ARP request sent by Sage's machine more slowly than LaBrea running on Racer-X did. In anticipation that such things
might happen, LaBrea has been written so that it can function as a temporary packet rerouter to fix
things if it has inadvertently made a mistake and grabbed an IP too quickly. In this case, after it sees the
proper response from, LaBrea redirects packets that are being written with the faked
MAC address to their proper recipient until the ARP tables are updated properly and traffic for that IP is
no longer being directed to the fake MAC. LaBrea can also use this functionality to discover when a new
machine has appeared on the network at one of the IP addresses it's claimed, and to seamlessly
relinquish spoofed IP addresses as necessary.
In short, LaBrea fights fire with fire by turning attackers' own techniques against them. To help you fight
fire with fire, Table 8.1 provides a listing of command-line options for LaBrea.
Table 8.1. Command-Line Options for the LaBrea Tarpit Server
-i <interface> Sets a nondefault interface.
-t <datasize>
Sets connection throttling size in bytes. Default is 10.
-r <rate>
Sets ARP timeout rate in seconds. Default is 3.
Safe operation in a switched environment.
Logs activity to syslog. With version 2.0.1+ you can use kill -USR1
<LaBrea_PID> to toggle logging. If logging was not enabled at start, this sets
the -l flag. If logging (-l | -v) is set, this saves the value and turns off
logging. If logging is presently toggled off, it restores the saved level (-l | -v).
Verbosely logs activity to syslog. With version 2.0.1+ you can use kill -USR1 <LaBrea_PID> to toggle logging. If logging was not enabled at start, this sets the -v flag. If logging (-l | -v) is set, this saves the value and turns off logging. If logging is presently toggled off, it restores the saved level (-l | -v).
-F <filename>
Specifies a BPF filename. Connections specified by the BPF will also be
tarpitted. These connections must be firewalled to drop inbound packets or this
won't work.
Hard-captures IPs. /etc/LaBreaHardExclude should contain an IP list
that you never want LaBrea to hard-capture. Only necessary with the -h option.
Disables IP capture.
Specifies a netmask. The network number and netmask are normally loaded
from the interface. If you're using an interface that has no IP, you'll have to
provide both these numbers. These must be correct or bad things may happen.
Specifies a network number. The network number and netmask are normally
loaded from the interface. If you're using an interface that has no IP, you'll have
to provide both of these numbers. These must be correct or bad things may happen.
Prints version information and exits.
Does not respond to SYN/ACKs and PINGs. By default, LaBrea "virtual
machines" respond to an inbound SYN/ACK with a RST and are "pingable."
The -a option eliminates this behavior.
Does not report odd (out of netblock) ARPs.
Test mode—prints out debug information but does not run.
Soft restart—waits while recapturing active connects.
-p <maxrate>
Persistently captures connect attempts. LaBrea will permanently capture
connect attempts within the limit of the maximum data rate specified (in bytes/second).
Logs bandwidth usage to syslog.
-d Does not detach process.
Sends log information to stdout rather than to syslog. This option also implies
and sets the -d option.
Persists mode capture only.
Beta "Linux" window probe captures code.
NOTES on control files:
LaBrea also uses two files to control its operation:
/etc/LaBreaExclude contains a list of IPs (one per line) to exclude from LaBrea's attention.
/etc/LaBreaHardExclude contains a list of IPs that LaBrea won't hard-capture. (Use with the
-h option.)
The IP addresses can be specified either as single addresses or as a range
of addresses (two addresses separated by a hyphen).
LaBrea should never capture an IP that has an active machine sitting on it. These two files are used to
give you control over "empty" IP addresses. However, it certainly doesn't hurt to "exclude" active IPs.
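As a sketch only, an /etc/LaBreaExclude file in the one-address-per-line format described above might look like the following (the addresses themselves, and the exact form of the hyphenated range, are illustrative assumptions rather than values taken from the LaBrea documentation):

```
192.168.1.20 - 192.168.1.30
```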
Sending LaBrea a SIGHUP will cause it to reread the "exclusion" files and report packet statistics.
LaBrea can even be configured to work across a switched network, where one would normally conclude
that such a server would see incoming ARP requests but not the returned reply. To overcome this
difficulty, when used on a switched network, LaBrea watches for incoming ARP requests and then sends
a duplicate request of its own for the same information. It does not consider an IP for building a tarpit in
this situation unless its own requests for an ARP reply go unanswered. This mode may also be useful on
networks where there are slow or transiently out-of-contact machines, as it induces LaBrea to more
frequently check the status of apparently nonexistent machines that are enquired about often.
Spoofing Defenses
In a sense, there is no real defense against spoofing; there is only the ability to take precautions against
being taken in by the spoofed information. If someone is spoofing IP addresses in her TCP/IP packets,
unless she's on your local network, there's little you can do to prevent her from doing so. What you can
do (possibly) is prevent your machines from accepting or believing the falsified information.
To perform such prevention, you need to be able to detect that information is being falsified. Software
such as arpwatch and tcpdump can provide you with useful logs from which you might detect spoofing
incidents. Although they usually can't tell you which part of the data is actually false, they often can provide
useful hints. Realize that what such software is doing is examining auxiliary data that is correlated to the
information in question, and providing you with a warning that something unexpected has been seen.
Arpwatch can't tell you which host that's claiming an IP is actually lying; all it can tell you is that it
appears that two hosts are competing for the same address. If you've kept a table of IPs that you've
assigned, and the MAC addresses that go with them, you might be able to determine who's the IP
address thief. On the other hand, it's possible that both conflicting addresses are being spoofed, in which
case arpwatch can tell you that something's up, but you'll lack all ability to determine what information
to believe.
Another tool you might use to monitor this spoofery and track it to its source would be a good switched
network with usefully manageable switches. (Those frequently billed as "layer 3 switches," though they
technically provide all the functions of an internetwork router for every connection, are the ones to look
for. 3Com has a nice white paper on the subject at
tech_paper.jsp?DOC_ID=5298.) Such an environment can provide you with a hardware-level view of
where the conflicting information is coming from, and let you quickly track it to specific machines on
specific wires.
Similarly, employing a firewall might be of some utility in gathering and using auxiliary information. If
a packet shows up at your firewall that says that it's from an IP address on your internal network, the
firewall can employ information entirely outside the packet to determine whether to believe it—namely,
on which wire did the packet appear? The one for the internal network, or the one from the external
network? Employing such technology enables your internal machines to more seriously consider the
idea of using a packet's supplied source IP as an identifying credential because it can preclude anyone
from outside the local network from injecting traffic with local IPs. Of course, it does nothing to prevent
an internal machine from spoofing IP credentials onto your internal wire, so again, a firewall configured
to provide this type of barrier is only one more way to increase your trust. It cannot be used to prove identity.
When looking at how to establish trust (and detect spoofs) in any particular exchange of information,
concentrate on data regarding the communication itself that can be brought to bear on the examination.
This might be private data such as a password that only you and the sender are expected to know,
external information such as what a firewall can tell you about the origin of a packet, a network
handshake that proves that you have sufficiently correct network information to carry on a dialog with the
sender, an encoding scheme such as a private protocol or secret sequence exchanges that must be
performed, or the use of public or private key exchanges. Each type of verification for identity and
validation of trust can be checked through the use of different mechanisms. Each can also be spoofed
and false but believable credentials provided. For each exchange you want to trust, you need to examine the
types of credentials that are being examined, how they can be spoofed, and what tests you might apply
to detect such falsification.
Each user and computer installation is likely to be different in its needs, and many protocols already
provide some mechanism for checking at least some sort of credentials. You will need to examine your
situation and determine whether the mechanisms already in place give you a sufficient level of trust, or
whether you need to add hardware or run additional software to increase your level of trust in
information that is being conveyed to you.
In this chapter you've seen some of the issues of computing trust and verification, and places where trust
and security can be compromised by a person or computer that is providing false identification tokens to
your system. Here we've concentrated on how basic network information can be falsified, and the
consequences of such spoofing, but the issue of trustable information extends to any sort of computing
data in a similar fashion. The information by which you recognize a familiar auction Web site can be
spoofed by someone who has copied the page's layout and design.
Unless you're terrifically alert, you probably pay much more attention to what the page looks like than to
the URL that appears in your Web browser's location bar. Emails can be spoofed; the identifying
information that gives machines' names on the network can be spoofed; practically any data that is
delivered over the network can be falsified by someone, in some way. What you need to determine for
yourself and for your systems is what identifying information is sufficient for you to trust, and how
much trust you're willing to have, based on that amount of information. The answers will be different for
almost every user and almost every context. We can only help to show you where the problems might
lie, and try to illustrate potential consequences that might not be readily apparent. You will need to make
the hard decisions yourself regarding who to trust and how much, based on your own needs and the
sensitivity of whatever information might be at stake.
It's important when considering the possibilities for spoofing and the areas where it may cause harm to
remember that spoofing is essentially impossible to completely prevent. Regardless of the credentials
you require to verify identity, there will be some way that someone might be able to provide false
credentials and establish a false identity. What you can do is make it harder to generate believable false
credentials, and take advantage of any auxiliary data that is available to attempt to corroborate the
identification. Use firewalls, monitoring software, and identification protocols for which the credentials
cannot be easily stolen or duplicated. Make certain that the identification credentials you choose to
accept are kept up to date, and that software that uses them has been patched against any possibility of
information leakage.
Many network services and many types of data already have software written to help you with the task
of establishing a trust level for various credentials, and there will undoubtedly be more that appear as the
realities of network commerce mature. Examine your computing trust needs, what the pitfalls in any
particular trust situation might be, and what avenues present themselves for establishing the veracity of
credentials that are presented to you. In some cases there will be few to no safeguards in place, but if the
communication or identity verification is one that is common in the networked world, someone will
probably be working on solutions to assist in detecting and avoiding spoofed information. Some of these
may not be inexpensive, but as we've reiterated throughout this book, there are bad people out there who
want to do bad things. Only you can decide how important the truth of any communication is, and
how much effort, time, or money you're willing to invest in increasing the level of your trust in that communication.
Chapter 9. Everything Else
Denial of Service
Buffer Overflows
Session Hijacking
Everything Else
Additional Resources
If you've read sequentially through the book to this point, you might be asking yourself, "How can there
possibly be anything else to worry about?" The number of attacks that can be launched against your
system and services grows every day. As developers work to improve their defenses, attackers find new
ways around them. This chapter discusses several broad classes of attacks, how they work, and why
vigilance is one of your best defenses.
Although DoS may be considered an attack class all its own, DoS does not refer to early PC operating
systems. DoS—or Denial of Service—attacks are designed to stop a machine or network from providing
a service to its clients. Depending on your network, this can mean just about anything: keeping your
mail server from delivering mail, stopping your Web server, or even keeping you from logging in to
your machine to work locally. The attack itself can take on a number of forms, from a direct network
attack to a local attack instigated by a user account (or someone with access to a user account) on your
system. No matter what the target or how it takes place, DoS attacks usually hit hard, are difficult to
diagnose, and can put you out of business until the attack has ended.
Many DoS attacks are launched against a particular application or service. Throughout the
book wherever there has been (or is) a recent DoS for a Mac OS X daemon or application,
we will attempt to provide a description and CVE reference for further information. This
section is meant to provide a more general overview of DoS attacks.
Network Target DoS Attacks
The most effective and costly DoS attacks take advantage of holes in an operating system's TCP/IP
implementation to bring down the entire machine. These are the "holy grails" of attacks—simple, fast,
and affecting everything they touch. The first widespread DoS that affected individual users is the now
infamous Ping of Death.
Ping of Death
In 1996, it was discovered that an IP packet (see Chapter 7, "Eavesdropping and Snooping for
Information: Sniffers and Scanners"), constructed with a length greater than 65,535 bytes (an illegal size)
could crash, disable, or otherwise disrupt systems running Windows 95, Mac OS 7.x, Windows NT,
Linux, Netware, and a wide range of printers, routers, and other network devices. Windows machines
proved to be an especially useful launching platform for the attacks because the Windows ping
command allowed these illegal packets to be sent directly from the command line with ping -l
65510 <hostname or IP>. Software quickly followed for other operating systems that could scan
an entire network, sending the invalid packets, spoofing the source address, and literally forcing
hundreds of users to reboot within seconds of the attack starting.
Operating system vendors responded quickly to the situation, issuing patches shortly after the problem
was discovered (less than 3 hours, for Linux!). A complete description of the PoD, the machines
affected, and example attack source code can be found at
Another "popular" TCP/IP DoS attack affected Windows machines by the thousands. Windows 95/NT
machines were found to be easily crashed by packets with the OOB (out of band) flag set on an
established connection. Because most Windows machines ran file sharing services, these attacks were
commonly aimed at port 139, the standard NetBIOS/IP port. A remote attacker would connect to the
port, then send a packet with the OOB flag set. The result? An instant Blue Screen of Death. "Winnuke"
software was released for other users to take advantage of this flaw, resulting in much glee from
Macintosh and Unix users.
Strangely enough, after Microsoft issued their initial patches for the problem, Macintosh computers
could still crash remote systems with WinNuke. The Macintosh TCP/IP implementation set a TCP
Urgent flag within outgoing TCP packets. Windows machines could not handle this additional flag, and
continued to crash at the hands of Macintosh users until Microsoft released a second patch. Information
on the Windows OOB bug and WinNuke code can be found at
SYN Floods
The previous two attacks have used bugs in the TCP/IP implementation, but it is also possible to exploit
the properties of TCP/IP itself to create a DoS condition. A popular means of doing this is through a
SYN flood. SYN packets, introduced in Chapter 7, are used by a client computer to initiate a connection
with a remote machine.
As demonstrated in Figure 9.1, upon receiving a SYN packet and sequence number, a server allocates
resources for the connection and sends a SYN packet, acknowledgement, and another sequence number.
The client should then respond with a sequence number and second acknowledgement at which point the
connection will be opened. If the client does not acknowledge immediately, the server must wait a
specific amount of time before it can assume the connection is dead and de-allocate resources.
Figure 9.1. Initial TCP/IP conversation.
SYN floods work by overflowing a server with SYN packets (often from spoofed source addresses),
causing it to allocate more resources than it can manage (thereby running out of memory), or more
connections than it allows (forcing valid connections to be denied). The end result in either case is that
the server becomes inaccessible for at least the duration of the SYN flood.
You can use netstat to display the status of connections on your Mac OS X box.
You can detect SYN floods by looking for abnormally numerous incoming SYN packets in relation to
general TCP/IP traffic. High ratios of SYN packets can trigger the operating system to drop incoming
SYN packets (denying connections) until the flood has ended, or take a more complex but less disruptive
approach—such as using SYN cookies.
SYN cookies now protect most modern operating systems, such as Linux and BSD, against SYN floods.
During a TCP/IP conversation, the server and client exchange sequence numbers to maintain a
synchronized conversation. A SYN cookie is a specially calculated sequence number that is based on the
remote client's IP and is not guessable by the client. When making the initial acknowledgement of the
SYN packet, the server sends back this special cookie (SEQ(B) in Figure 9.1) and then, for all intents
and purposes, forgets about the conversation. If it receives an acknowledgement with the appropriate
sequence number (SEQ(B+1)), it knows the connection is legitimate and allows it to proceed.
In a simple DoS assault, the attacker uses his or her PC or a compromised machine to send packets to the
victim. Although a single computer is capable of causing plenty of problems, it is also reasonably simple
to block at your router or network firewall.
Attackers quickly discovered that to create a sustained DoS situation, they would need to hit from
multiple locations simultaneously, and the DDoS, or Distributed Denial of Service attack, was born. A
DDoS attack typically uses multiple compromised computers, or zombies, that are "waiting" for an
attack to be triggered—from a master computer, as seen in Figure 9.2. The zombies do not need to use
spoofed addresses and can be used much more aggressively against the target because they are not the
actual attacker. DDoS attacks need only flood the attacked network with packets, rendering it inaccessible.
Figure 9.2. A DDoS attack uses multiple zombies to attack a victim.
DDoS attacks are difficult to stop because of the number of routes that may be used during an attack.
Recent attacks have used ICMP traffic to synchronize the states of the zombies, making the detecting
and firewalling of potential zombies even more troublesome. Successful DDoS attacks are usually
carried out only after a significant amount of upfront work has been performed to prep the zombies and
scout the targets.
For examples and analysis of DDoS attacks, read the following analyses:
Tribal Flood Network.
Global Threat.
While most large-scale DDoS attacks do require zombies, a very popular attack—Smurf—
does not. The Smurf attack works by forging the source (reply-to) address on an ICMP echo request
sent to an entire network's broadcast address, with the goal of tricking the machines on that network
into replying simultaneously to the victim address and overwhelming it with data. Because a
single packet is used to generate multiple attacking responses, this is called a magnification attack.
More information about Smurf attacks can be found at
dos-smurf.shtml, and example code can be found at
At this time, there are no known instances of Mac OS X being used in a DDoS attack, but there is no
guarantee that the clients for distributed attacks won't be made available at some time in the future.
Denial of service attacks are difficult to stop because they are unpredictable and often exploit protocol or
network weaknesses and do not require the remote machine to be compromised. A completely secure
Mac OS X machine could be crippled by a network DDoS attack no matter how well the administrator
has protected the network. The Internet has a finite amount of bandwidth on a finite number of routes
available for a given host. If those routes are flooded by thousands of machines simultaneously throwing
packets at the host, it doesn't make any difference what software, operating system, or security the target
machine has in place.
Protection comes from prevention and understanding. Following these guidelines will help protect you
from DoS attacks:
Keep your software up to date and secured by using the software's own security mechanisms.
This is repeated throughout the book and cannot be stressed enough. Even though most DDoS
attacks are the result of compromised clients, a sendmail daemon with relaying accidentally left
active could be used as a client in a DDoS email attack without any compromise being necessary
(see Chapter 13, "Mail Server Security").
Firewall, if possible, network clients. DDoS clients often require a trigger from the attacker to
start operating (sometimes sent via IRC). If the trigger cannot be received, the compromised
client doesn't pose a threat (see Chapter 17, "Blocking Network Access: Firewalls"). Blocking
broadcast and multicast traffic and verifying packet source addresses are good overall
preventative measures.
Implement intrusion detection. Intrusion detection software can alert you to DoS attacks and help
prevent system compromises from taking place (see Chapter 18, "Alarm Systems: Intrusion Detection").
Understand your network topology and upstream provider. An attack aimed at an entire network
or the network's routing/firewall/etc. equipment cannot be stopped by an end user sitting in front
of his or her Mac. Your upstream provider may be able to block the attack, or, at the very least,
you can disable your feed until the attack has subsided.
Buffer Overflows
Another common means of attacking a system or service is by way of a buffer overflow. Buffer
overflows are caused by errors in the programming of system services and can be exploited in any
number of ways, from gaining elevated privileges to executing arbitrary code, depending on the service
being exploited.
A buffer overflow is nothing more than a matter of loose or nonexistent bounds checking on a buffer.
Say what? What does that mean? Don't worry, we're getting there. The C programming language, which
is used to build almost all of Mac OS X, requires developers to keep very close tabs on variables and the
amount of memory they require. A C string, for example, must be allocated to contain the number of
characters in the string, plus one null terminator. For example, the string "John Ray" requires 9 bytes of
storage space, as shown in Figure 9.3.
Figure 9.3. C strings require storage for one byte per ASCII character + 1 null terminator.
Although this might seem straightforward enough, the C language includes a number of functions such
as strcmp(), strcpy(), and gets() that do not attempt to verify that the string on which they are
operating is the appropriate length before executing. Consider the following program (overflow.c)
which reads and prints a user's name:
#include <stdio.h>

int main()
{
    char name[9];

    printf ("Enter your name: ");
    gets (name);
    printf ("Your name is %s\n", name);
    return 0;
}
Compile (gcc -o overflow overflow.c) and then execute the program:
% ./overflow
warning: this program uses gets(), which is unsafe.
Enter your name: John Ray
Your name is John Ray
For an execution with a string that fits within the 9-byte buffer, the program acts exactly as one would
hope. Change the length of input and things are nearly as peachy:
% ./overflow
warning: this program uses gets(), which is unsafe.
Enter your name: The Noble and Powerful Ezekial Martograthy
Bartholomew III
Your name is The Noble and Powerful Ezekial Martograthy Bartholomew
Segmentation fault
Now, instead of executing cleanly, the program crashes with a segmentation fault. What has happened is
that the input data, which should have been contained within nine bytes, has overwritten part of the
executable in memory, causing it to crash.
Although crashing is certainly a common effect of buffer overflows, they can sometimes be exploited
with far more dramatic effects. A good example of a buffer overflow exploit is demonstrated in Listing
9.1 (buffer.c), written by Mark E. Donaldson and provided here with a few tweaks for Mac OS X.
Listing 9.1 Source Code for buffer.c
 1: int main()
 2: {
 3:     char name[8];
 4:     char real_passwd[8]="test";
 5:     char password[8];
 6:
 7:     // retrieve the user information
 8:     printf ("Enter your name: ");
 9:     gets (name);
10:     printf ("Enter your password: ");
11:     gets (password);
12:     printf ("Your name and password are %s and %s.\n",name,password);
13:     printf ("The real password for %s is %s.\n",name,real_passwd);
14:
15:     // Authenticate password against real_passwd
16:     authenticate (password,real_passwd);
17:     return 0;
18: }
19:
20: void authenticate (char* string1, char* string2) {
21:     char buffer1[8];
22:     char buffer2[8];
23:
24:     strcpy (buffer1,string1);
25:     strcpy (buffer2,string2);
26:     if (strcmp(buffer1,buffer2)==0) printf ("Access allowed!\n");
27: }
The program logic is simple. Lines 3-5 allocate storage for name, real_passwd (the password
we're going to test for, hard-coded as test), and password (what the user will type in). Lines 8-13
input the name and password and print what the program thinks it has received from the user. Line 16
calls the authentication routine (lines 20-27), which checks password against real_passwd and
prints Access allowed! if they match.
Again, compile (gcc -o buffer buffer.c) and execute the program, using valid (seven or fewer
characters) input. First, the program is run with an invalid (wrong) but properly sized username and password:
% ./buffer
warning: this program uses gets(), which is unsafe.
Enter your name: jray
Enter your password: frog
Your name and password are jray and frog.
The real password for jray is test.
The output is exactly as expected: the passwords do not match, so the access message is not displayed.
Next, run the program using the correct password (test):
% ./buffer
warning: this program uses gets(), which is unsafe.
Enter your name: jray
Enter your password: test
Your name and password are jray and test.
The real password for jray is test.
Access allowed!
Again, the output is precisely what one would expect: the passwords match and access is allowed. Now
look at what happens when the invalid lengths are used. This time, instead of a valid username, enter
123456789ABCDEF (you'll see why we chose these values shortly) and test for the password:
% ./buffer
warning: this program uses gets(), which is unsafe.
Enter your name: 123456789ABCDEF
Enter your password: test
Your name and password are 123456789ABCDEF and test.
The real password for 123456789ABCDEF is 9ABCDEF.
Did you notice that even though test was used as the password, the program didn't correctly
authenticate the user? In addition (and more importantly), did you notice that the program is now
claiming that the REAL password is 9ABCDEF? This provides very useful information about the
program and its flaw. First, you can tell that the input for the user's name (name) has obviously
overwritten the real password (real_passwd) in memory. Because the real password starts with the
9ABCDEF input, it can be inferred that the buffer size for name is eight bytes long, and that the name
and real_passwd data structures are located sequentially in memory (just as they are defined on lines
3 and 4 of the source code).
So, how can this newfound knowledge be exploited? Simple: By overflowing the name buffer, you can
set the password to any value you want, and subsequently authenticate against it:
% ./buffer
warning: this program uses gets(), which is unsafe.
Enter your name: jrayXXXXHack
Enter your password: Hack
Your name and password are jrayXXXXHack and Hack.
The real password for jrayXXXXHack is Hack.
Access allowed!
Obviously, no real-world program should be written like this, and developers should take the time to
perform bounds checking on their code if it is not handled automatically by the compiler. The goal of
most attackers is to gain elevated permissions or execute arbitrary code on a server. To do this, they must
understand the code they are attacking and the underlying operating system. Just because a piece of code
suffers from a buffer overflow does not mean it can be exploited beyond a simple crash. A complete
play-by-play of a buffer overflow is documented in Mark Donaldson's "Inside the Buffer Overflow
Attack: Mechanism, Method, and Prevention,"
Buffer overflows are caused by errors in the software you are running, not from a lapse in your abilities
as an administrator. Protecting against a buffer overflow, like DoS attacks, is more a matter of vigilance
than active prevention. Keep the following points in mind:
If developing, use modern languages with built-in bounds checking such as Java, Ruby, and Perl.
C programmers should avoid the standard C string functions or, at the very least, perform bounds
checking (sizeof()) before and after string operations.
Be aware of buffer overrun errors that have been reported for your system, servers, and libraries
and take measures to patch or limit access to the affected functions (that is, keep up to date!).
Identify SUID applications. SUID tools are the most likely target for buffer overflows, because
they execute with root permissions and may provide root access in the event of an overflow.
Run intrusion detection software. It is very unlikely that if you suffer a buffer overflow attack, it
will be launched by the person who discovered or wrote the exploit. It is far more likely to be a
script kiddie running a piece of code picked up from an IRC chat room, code that probably has a
recognizable attack signature.
Session Hijacking
The next type of attack to examine is session hijacking. Another "nonattack," session hijacking is a
means of either gaining total or partial control over an established TCP/IP connection. Session attacks
rely on a trusted connection between two computers to be in place, and then work to either modify
packets traveling between the machines or take the place of one of the two computers.
Session hijacking is an effective means of gaining control over a machine or process that otherwise
would not be accessible. Most authentication (such as AFP, Kerberos, and so on) takes place during the
initiation of a connection. After authentication, the conversation is considered trusted. This is the point
at which attackers want session hijacking to take place.
The most common and effective form of session hijacking is called man-in-the-middle and works by
placing a computer in the "middle" of an established connection. Both ends of a connection must be
"convinced" that to speak to the other side, they need to go through the attacking computer.
To do this, the attacker must be located on the same network as either of the victim computers. As you
might guess, spoofing is required to bring all the pieces together. Consider Figure 9.4, which displays a
simple network with an established connection in place.
Figure 9.4. The initial (unhijacked) network state.
Computer A (MAC address 00:30:65:12:f2:15) speaks to Computer B (00:30:65:12:f2:25)
via the Router (00:30:65:12:f2:01), while Computer C (00:30:65:12:f2:50) sits idle
on the same network as A. For all intents and purposes, the session that is being hijacked is between
Computer A and the router; the location of B is irrelevant. In the initial established state, Computer A
has an ARP table entry mapping Computer B's IP address to the MAC address 00:30:65:12:f2:01;
in other words, to communicate with a machine at the IP address of Computer B,
Computer A must send packets to the MAC address 00:30:65:12:f2:01, the router. Likewise, the
router's ARP table maps Computer A's IP address to the MAC address 00:30:65:12:f2:15.
To hijack the session, Computer C implements a spoofing attack in which it purports to be both the
remote victim (Computer B) and the local victim (Computer A), and advertises this via unrequested ARP
replies to Computer A and the router, respectively. The result can be seen in Figure 9.5.
Figure 9.5. The hijacked network connection.
Now, Computer A has an ARP mapping between 00:30:65:12:f2:50 (Computer C) and Computer B,
while the router has a mapping between Computer A and 00:30:65:12:f2:50 (Computer C). The result is
that Computer A transmits data to what it believes to be the remote Computer B by way of the router,
but instead transmits it to Computer C. Computer C can modify the packet (or do anything it likes) and
retransmit it to the router, purporting to be Computer A.
Session hijacking, at one time, was very difficult to implement. Today it is a matter of finding the right
software and choosing your targets. The software ettercap, for example, can be used to implement a
man-in-the-middle attack without its user having any knowledge of networking or spoofing.
As discussed in Chapter 7, you can use ettercap to start a man-in-the-middle attack by choosing a
source and destination address, pressing A to enter ARP sniffing mode (poisoning the ARP caches of both
the source and destination), choosing the connection to hijack, and then pressing I to start injecting
characters into the packet stream, as shown in Figure 9.6.
Figure 9.6. Conduct a man-in-the-middle attack by using ettercap.
Other methods of session hijacking may employ other mechanisms such as DoS attacks against one side
of the connection. After bringing down one side, the attacker spoofs the original connection, completely
replacing the original.
Protecting against session hijacking is reasonably straightforward. Following these guidelines will
greatly reduce your chances of being hijacked:
First and foremost, do not use unencrypted protocols, such as Telnet and FTP. Encrypted
communications via IPSec or SSH are not easily forged, although full SSH1 sniffing is
implemented in ettercap. SSH2 is preferable because it is much more secure.
If possible, use static ARP mappings for your critical systems (for example, arp -s mygateway
00:30:65:12:f2:01, using the gateway's MAC address).
Use software such as ettercap or arpwatch to detect clients that may be involved in a
spoofing attack.
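Even without dedicated tools, a quick manual check for poisoning is to look for one MAC address claiming more than one IP address in your ARP cache. The sketch below runs awk over a saved snapshot of arp -a output; the sample addresses are hypothetical, and in practice you would pipe the real command through the same filter.

```shell
# Save a snapshot of the ARP cache; here a hypothetical sample stands in
# (in practice: arp -an > /tmp/arp_snapshot.txt)
cat > /tmp/arp_snapshot.txt <<'EOF'
? (192.168.1.1) at 0:30:65:12:f2:1 on en1
? (192.168.1.20) at 0:30:65:12:f2:50 on en1
? (192.168.1.30) at 0:30:65:12:f2:50 on en1
EOF

# The MAC address is the 4th field; one MAC claiming several IPs is suspicious
awk '{print $4}' /tmp/arp_snapshot.txt | sort | uniq -c |
    awk '$1 > 1 {print "suspicious MAC: " $2}'
```

A legitimate network can produce duplicates too (a router answering proxy ARP, for example), so treat a hit as a prompt to investigate rather than proof of an attack.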
Using arpwatch
The arpwatch software is a simple-to-use solution you can use to detect spoofing without needing to
manually monitor your network. Arpwatch maintains a database of host/ethernet mappings and
automatically logs changes to syslogd and email.
For a network with static IP addresses (or static DHCP mappings), the only changes that should be
detected are when a system is added or replaced on the network. Other changes can be interpreted as a
potential spoofing attack.
Download arpwatch from
Unarchive and enter the source code distribution:
% tar xf arpwatch-2.1a4.tar
% cd arpwatch-2.1a4/
Next, configure with ./configure powerpc:
% ./configure powerpc
loading cache ./config.cache
checking host system type... powerpc-unknown-none
checking target system type... powerpc-unknown-none
checking build system type... powerpc-unknown-none
checking for gcc... (cached) gcc
checking whether the C compiler (gcc ) works... yes
checking whether the C compiler (gcc ) is a cross-compiler... no
Now you need to make a few changes to the source for it to install and work correctly on Mac OS X.
Edit the file addresses.h to define your email address. By default, the WATCHER constant is set to
root; you should replace this with the address to be notified when changes are detected.
Next, open Makefile and change the default SENDMAIL location from
SENDMAIL = /usr/lib/sendmail
to
SENDMAIL = /usr/sbin/sendmail
For this to work, sendmail must be configured to send email. It does not need to run as a
daemon, but you'll need to follow the steps in Chapter 13 for it to be functional.
Finally, perform a search on the string (once again in Makefile) -o bin -g bin and replace all
occurrences with -o root -g admin. You're now ready to compile and install:
# make
-DSTDC_HEADERS=1 -DARPDIR=\"/usr/local/arpwatch\"
-DPATH_SENDMAIL=\"/usr/lib/sendmail\" -I. -c ./db.c
# make install
/usr/bin/install -c -m 555 -o root -g admin arpwatch /usr/local/sbin
/usr/bin/install -c -m 555 -o root -g admin arpsnmp /usr/local/sbin
# cp arpwatch.8 /usr/share/man/man8/
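If you'd rather not edit the Makefile by hand, both changes can be scripted with sed. This is a sketch that assumes the stock arpwatch 2.1a4 Makefile contains the patterns exactly as shown above; it demonstrates the substitutions on a miniature stand-in file, and in the real source tree you would run the same sed line against ./Makefile (after backing it up).

```shell
# Miniature stand-in Makefile to demonstrate the edits
cat > /tmp/Makefile.demo <<'EOF'
SENDMAIL = /usr/lib/sendmail
INSTALL = /usr/bin/install -c -m 555 -o bin -g bin
EOF

# Swap the sendmail path and the bin/bin ownership in one pass
sed -e 's|/usr/lib/sendmail|/usr/sbin/sendmail|' \
    -e 's|-o bin -g bin|-o root -g admin|g' \
    /tmp/Makefile.demo > /tmp/Makefile.edited
cat /tmp/Makefile.edited
```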
After installing, arpwatch can be invoked as root from /usr/local/sbin/arpwatch. It
requires the support file arp.dat to be created beforehand to contain the mapping database. You may
also need to invoke arpwatch with one of its switches, shown in Table 9.1.
Table 9.1. arpwatch Command-Line Switches
-d                     Debugging mode. Arpwatch does not fork and outputs all data to
                       stderr rather than email or syslogd.
-f <pathname>          The path to the arp mapping file. This file must exist before you
                       start arpwatch.
-i <network interface> Set the network interface to monitor. The default is en0.
-r <pathname>          Read the packet information from a tcpdump file rather than a
                       network interface.
For example, I can start arpwatch on my network interface en1 like this:
# touch /var/db/arp.dat
# /usr/local/sbin/arpwatch -i en1 -f /var/db/arp.dat
After it runs for a few seconds I start to see messages like this in /var/log/system.log and my
email:
Jan 20 00:37:59 Computer arpwatch: listening on en1
Jan 20 00:38:03 Computer arpwatch: new station 0:90:27:9a:4c:
Jan 20 00:38:04 Computer arpwatch: new station 0:30:65:4:
Jan 20 00:38:25 Computer arpwatch: new station 0:30:65:2d:c7:
Jan 20 00:38:42 Computer arpwatch: new station 0:30:65:
Jan 20 00:41:25 Computer arpwatch: new station 0:30:65:12:
During the first several minutes, arpwatch will build the mapping table and generate messages for
each machine on the network. After the table is built, however, the only messages logged will be those
indicating a change in the network.
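When a change does occur, arpwatch reports it with a "changed ethernet address" message. A sketch of pulling those reports out of the log; the sample lines below are hypothetical stand-ins for /var/log/system.log, and in practice you would grep the real log file.

```shell
# Hypothetical log excerpt standing in for /var/log/system.log
# (in practice: grep arpwatch /var/log/system.log)
cat > /tmp/system.log.sample <<'EOF'
Jan 20 00:38:03 Computer arpwatch: new station 192.168.1.20 0:30:65:12:f2:25
Jan 21 09:12:44 Computer arpwatch: changed ethernet address 192.168.1.20 0:30:65:12:f2:50 (0:30:65:12:f2:25)
EOF

# "changed ethernet address" lines are the ones worth investigating
grep 'changed ethernet address' /tmp/system.log.sample
```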
Everything Else
By now it should be pretty clear that there is no real "everything else." There will always be
other attacks and other means for third parties to modify or disrupt your system. Before ending the
chapter, however, you should be familiar with a few other concepts that are mentioned throughout the
book.
Root Kits
A rootkit isn't a compromise, per se; it is a set of tools that are installed after a system has been
compromised and root access gained. The purpose of a rootkit is to provide tools for covering the
attacker's tracks so that his or her presence will not be detected on the machine. In some cases, rootkits may
contain additional software for carrying out further attacks.
For example, common rootkit components include the following:
du/ls/find. Modified to hide files.
top/ps. Modified to hide processes (sniffers and so on).
netstat. Modified to hide remote connections.
ssh/xinetd/inetd/tcpd. Modified to allow remote access and ignore deny lists.
syslogd. Modified to hide log entries.
wtmp/utmp/lastlog. Software included to modify or erase the system accounting logs.
killall. Modified to not kill attacker processes.
lsof. Modified to hide connections and files.
passwd/sudo/su. Modified to provide root access.
More information on the logs and rootkit modifications can be found in Chapter 19, "Logs and User
Activity Accounting." If a rootkit has been installed, the operating system cannot be trusted. You must
make the decision to either attempt to repair the damage or reinstall the operating system. The only cases
where a repair should be attempted are those where a piece of software such as Tripwire is installed and
can verify which files have changed from the default installation.
A quick and dirty solution is to use md5 to calculate a checksum for the critical system files, which can
be stored offline and used for comparison in the event of an attack:
# md5 /usr/bin/*
MD5 (/usr/bin/CFInfoPlistConverter) = be7664241c675f06424961d8773d19c1
MD5 (/usr/bin/a2p) = 6f0ff3f32ffc295cc00ea2ecf83b1143
MD5 (/usr/bin/aclocal) = 6b204bce8a0151c330649cb3ff764a43
MD5 (/usr/bin/aclocal-1.6) = 6b204bce8a0151c330649cb3ff764a43
MD5 (/usr/bin/addftinfo) = e55d67a1e4848a4a4abd75c9d78106dc
MD5 (/usr/bin/addr) = 2c4824c5fa6a9ee332a5e4ab14787e42
MD5 (/usr/bin/aexml) = 788248a22bdfac053473af90728efca1
MD5 (/usr/bin/afmtodit) = 7da2c8b3d85e10500bd01b59ae52780b
MD5 (/usr/bin/appleping) = 0cda19ad69004d8fd1b6b66e7159ece4
The calculated 128-bit signatures will be identical only if the files are identical. Files altered by a rootkit
display a different signature from the initial calculations.
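The baseline-and-compare workflow looks like this in practice. The sketch below runs against a scratch directory so it can be tried safely; for real use you would point it at /usr/bin and store the baseline somewhere an attacker can't reach (a CD or another machine). The md5sum fallback is an assumption for non-Mac systems that lack the md5 command.

```shell
# Pick the local MD5 tool: md5 on Mac OS X, md5sum on most other systems
MD5=md5; command -v md5 >/dev/null 2>&1 || MD5=md5sum

# Demonstration directory standing in for /usr/bin
mkdir -p /tmp/bin_demo
printf 'original binary' > /tmp/bin_demo/tool

$MD5 /tmp/bin_demo/tool > /tmp/baseline.md5    # store this copy offline

printf 'trojaned binary' > /tmp/bin_demo/tool  # simulate a rootkit swapping the binary

$MD5 /tmp/bin_demo/tool > /tmp/current.md5
diff -q /tmp/baseline.md5 /tmp/current.md5 >/dev/null ||
    echo "checksum mismatch: /tmp/bin_demo/tool"
```

The crucial point is that the comparison is only trustworthy if both the baseline file and the md5 binary itself come from read-only media; a rootkit that replaces md5 can lie about any checksum.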
At this time, there are no known "popular" rootkits for Mac OS X, although it is certainly possible that
anyone with access to a Mac and a compiler has recompiled any of the popular Linux rootkits for the
operating system. A rootkit checker, a port of the ChkRootKit software, has already been compiled
for Mac OS X. ChkRootKit is capable of detecting common rootkit changes and
reporting the files that have been compromised.
Trusted Data Sources
Too often (and with reason), users trust data if it is processed by software they know and trust. Someone
with an understanding of their internals, however, can often trick these programs into falsifying or
providing "unexpected" data.
A simple example of this is the system date and time. Files are stamped with a creation date and time
that cannot be changed by end users. The modification times are easily updated with touch, but the
creation date is inaccessible. Tar and other backup utilities can restore files to the system with their
original timestamps. As a result, it's easy to modify a file's creation time stamp by tarring it, adjusting
the time stamps within the tar file, then untarring it again.
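The mechanics are easy to demonstrate with the modification time, since touch can back-date a file directly and tar faithfully restores whatever timestamp the archive recorded. The paths here are scratch files:

```shell
cd /tmp
printf 'evidence' > demo.txt
touch -t 200001011200 demo.txt   # back-date the file to Jan 1, 2000
tar cf demo.tar demo.txt         # the archive records the old timestamp
rm demo.txt
tar xf demo.tar                  # extraction restores the old timestamp

# demo.txt's timestamp is now older than the archive we just created
[ demo.txt -ot demo.tar ] && echo "timestamp preserved"
```

The lesson is that a file's timestamps prove nothing by themselves; anyone with write access and ordinary Unix tools can make a file appear as old (or as new) as he likes.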
Although this particular form of data falsification is trivial now that everyone on the planet can run a
Unix box (and have access to their own system clocks), it is an example of an exploit that allows a user
to falsify what is normally considered trusted information.
A Mac OS X 10.2.2 vulnerability was caused by a similar means of slipping information in "under the
radar." By creating a disk image as an administrator on a remote system, then opening it under a
nonadmin account on another Mac OS X machine, the user opening it can receive elevated administrator
privileges (CVE: CAN-2002-1266). Like using tar to modify time stamps, an attacker using this
technique need only let the operating system utilities perform the attack for him. Don't worry; this
vulnerability was corrected in Mac OS X 10.2.3.
Trickery and Deceit
There are any number of other attacks that can be directed at you personally rather than at your machine.
Through simple but effective tricks, attackers can gain information about you by just letting you use
your computer. These are just a few more things to lose sleep over.
The most obvious piece of information that can be gleaned without a direct attack is whether or not you
have and actively read a given email address. It is commonly believed (and in many cases it's certainly
true) that spammers use the links in a piece of spam (including the unsubscribe links) to verify that the
email address that was used is valid. Unfortunately, all that is needed in most cases is for you to read
your mail in order for the spammer to know that your address is active. Most spam is HTML-based and
includes images, which are loaded by default in Apple's Mail application. Images can easily be
embedded in the HTML so that they pass information back to a script:
<img src="
&[email protected]">
This might, for example, pass the name of an image to be displayed (ad.gif) along with my email
address, to the remote host. The potential for this type of information gathering to take place can be
eliminated by shutting off viewing of inline HTML objects within Apple Mail's Viewing Preferences. Of
course, if you do that, you'll miss out on the graphic-intensive ads for products to make you richer,
smaller, or larger.
Even more devious schemes have been designed by some attackers to convince you to willingly provide
sensitive information under the pretense of a secure connection. Browser windows, for example,
typically indicate a secure connection via the status bar at the bottom of the window. Attackers, through
a variety of means (spoofing, compromised site forwarding, and so on), can redirect users to insecure
sites for the purpose of collecting information. What the attacker cannot do, however, is (easily) get a
site certificate that will be authenticated with your browser. As a result, they cannot create a secure
connection with the victim. To "trick" the remote user into believing that a secure connection is in
place, the server can simply spawn a new browser window without a status bar, then draw its own!
Using DHTML it would even be possible for an enterprising cracker to develop a complete browser
interface within a window. Obviously, the browser wouldn't match everyone's interface (chances are,
Mac users would be safe), but the illusion could be very convincing.
Additional Resources
If you aren't scared yet, here is some additional reading material to help. DoS and buffer overflow
attacks are two of the most common and dangerous attacks that can be launched against your system. It's
a good idea to have as much knowledge about the cracker as they have about you.
SYN Flood, Internet Security Systems
Denial of Service Attack Resources Page
CERT/CC Denial of Service, CERT
Strategies to Protect Against Distributed Denial of Service Attacks, Cisco
Help Defeat Denial of Service Attacks: Step-by-Step, SANS Institute
Inside the Buffer Overflow Attack: Mechanism, Method, & Prevention
Writing Buffer Overflow Exploits—a Tutorial for Beginners
Simple Buffer Overflow Exploits, fides
Attacking Servers Through a Buffer Overflow
Introduction to ARP Spoofing, Sean Whalen
Man-in-the-Middle Attack—A Brief, Bhavin Bharat Bhansali
Session Hijacking, Internet Security Systems
A Simple Active Attack Against TCP/IP, Laurent Joncheray
Dave Dittrich's Security Links
Hunt—Hijacking Tool, Pavel Krauz
Recognizing and Recovering from Rootkit Attacks, David O'Brien
Understanding Rootkits, Oktay Altunergil
Analysis of the T0rn Rootkit, SANS Institute
Rootkit FAQ, Dave Dittrich
Summary
Cracks, attacks, exploits—there's more than enough to worry about. This chapter covered some of the
common attacks that you may encounter on an Internet-accessible or multiuser system. Denial of service
attacks are aimed at disrupting legitimate use of a computer or its services. Buffer overflows take
advantage of poorly debugged code that allows the contents of memory to be overwritten with arbitrary
data. Session hijacking allows another computer to take control of your (seemingly) private conversation
with a remote system.
There will always be something new on the horizon, and it is unlikely that bugs will ever disappear
completely from source code. Users demand new features, new flexibility—they want it now and they
want it cheap. Programmers must pump out code quickly and are often given neither the time nor the incentive
to produce error-free code. Remember that the next time you hear someone complaining about Apple
charging for a system update.
Part III: Specific Mac OS X Resources and How to
Secure Them: Security Tips, Tricks, and Recipes
10 User, Environment, and Application Security
11 Introduction to Mac OS X Network Services
12 FTP Security
13 Mail Server Security
14 Remote Access: Secure Shell, VNC, Timbuktu, Apple Remote Desktop
15 Web Server Security
16 File Sharing Security
Chapter 10. User, Environment, and Application Security
Adding a New User
Using the NetInfo Database to Customize a User
Sane User Account Management
Skeleton User Accounts
Command-Line Administration Tools
Restricting User Capabilities
Users are people, and as Chapter 3, "People Problems: Users, Intruders, and the World Around Them,"
noted, people are your overall largest security problem. So it's important that you know and practice
good system administration principles when creating and managing the user population on your
computers. The choices you make when configuring the users on your machines will define their normal
abilities. It won't prevent them from trying to extend those abilities outside what you intended, but your
choices will affect their capabilities and range of action there, as well.
When configuring users, keep in mind the types of security issues and vulnerabilities that we've
discussed through the previous six chapters, and how the options you have in user configuration can
affect the capabilities of a user who is determined to cause mischief. Likewise consider the effect on
their capacity to unintentionally cause damage through these routes, either to themselves or to others.
This chapter covers typical user configuration administrative options, as well as how to partially
automate some parts of the process. If you're going to be in a position where you need to manage a large
collection of users, pay close attention to the automation parts; they will enable you to more easily keep
your user creation process and configured options standardized, and will provide you with ways to make
configurations that are outside of what Apple's Accounts system preferences pane can create.
In the end, the most secure user is the one you don't create, but that user has no utility. The instant you
begin to give users the power to do things with your system, you begin to give them the power to do
things that you don't want, and that will compromise your security to some extent. Where the balance is
between nonusers with no capabilities and full administrative users who can su to root will depend on
your needs and those of your users.
We strongly recommend that you limit your users' capabilities to at least some extent; giving
every user root access is a recipe for sure and swift disaster. On the other hand, we also
strongly recommend against unnecessarily limiting your users' privileges.
Too many administrators place limits on their users not because of any increase in security
that those limits provide, but out of some twisted desire to prepunish users for potential
transgressions, in the hope that this will cow them into more quiescent obedience to the
rules. This system may work in the military, where training a soldier to be instantly and
unquestioningly obedient to orders is an important part of success, but your users are
(probably) not soldiers, and you've no need to demonstrate that you can limit their
permissions just because you can.
All that will be accomplished by creating artificial and unnecessary limitations will be a
sense of resentment, a lack of serious effort in adhering to those policies that you really do
require for maintaining security, a disinterest in performing in the spirit of those policies
when situations are discovered that the policies do not cover, and subsequent work to
undermine the security measures you have in place.
Although it's a rare trait in administrators, we'd like you to consider treating your users as
though they deserve the trust and responsibility they've been granted by being given
accounts. Too many administrators give out user accounts and then immediately act as
though the users are betraying a trust when they make use of the abilities they've been given.
If you truly want to prevent an action, it absolutely must be clear to the users, and it is best if
more than a policy rule prevents it.
Actions such as running Crack on a password file, however, are rather equivocal as far as
the user's behavior is concerned. Perhaps the user is actually trying to steal your system
passwords, or perhaps is curious how quickly Crack runs on your system compared to the
machine at home. Users occasionally do things that a cracker would do, simply out of idle
curiosity. This does not make them crackers, and does not make them any less trustworthy
than they were when you granted them the account.
If you observe a user doing something that is questionable, do your best to determine why
they're doing it. Examine whether they still appear to be deserving of the same trust they
were when you granted their accounts, and if they are, and their explanations are innocent,
then it's probably in your best interest to believe them. All the good hackers of the world got
there by experimentation and investigation of the way the system works. The oddball user
that you tolerate in his experimentation might not turn out to be the next Wietse Venema
(, but on the other hand, he might. If you
shepherd his curiosity to more useful subjects, and don't crucify him over transgressions that
are actually only conceptual violations, you're likely to have a much more positive result. At
the worst, he'll probably eschew most further experimentation because he won't be sure how you'll
react. At the best, however, you'll find that in deference to your
consideration in not tossing their butts out on the street, these individuals will take it upon
themselves to be helpful above and beyond the call, and will function as some of your best
security warning devices for your system. At the very least, he'll probably avoid using your
internal corporate politics and the brain damage in the design of your security systems as
topics in the security book he writes ten years later.
Adding a New User
The initial user you create when setting up your Mac OS X machine is an administrator account.
Because it can be used to modify the machine settings or install software, the administrator account is
actually a rather powerful account. When you add a new user, you have the choice of adding a regular
user or adding one with administrator capabilities. Although it is helpful to have more than one user with
administrator capabilities, do not give administrator access to every user account that you create.
Otherwise, every user on the machine will be able to modify your system.
Add a new user by following these steps:
1. Open the Accounts pane in System Preferences.
2. Click the Make Changes lock icon if it's set not to allow changes, and enter your administrator
username and password. Under the Users tab, select New User, which brings up a sheet for
creating a new user. Figure 10.1 shows the Users tab of the Accounts pane.
Figure 10.1. Select New User under the Users tab of the Accounts pane to create a new
user.
Name. This is where you enter your user's name. In Mac OS X, this is a name that the user
can use to log in to the machine.
Short Name. The short name is the username—that is, the name of the account. This is
also a name that the user can use to log in to the machine. This name can be up to eight
characters in length, must have no spaces, and must be in lowercase letters. This name is
used by some of the network services. For example, I use jray as my (John Ray) short name.
New Password. The password should be at least four characters. Many systems
recommend at least six characters with a variety of character types included in the password.
Verify. This is where you reenter the password for verification purposes.
Password Hint. This is an optional field. The password hint is displayed if the user enters
an incorrect password three times. If you include a hint, make sure that the hint is not so
obvious that other users can guess the password.
Picture. Select a picture that can be displayed with this user's name in the user listing at
login time. Either select one of the default images or choose a custom picture elsewhere
on your machine by selecting Choose Another.
Allow User to Administer This Computer. This is the box you check to grant a user
administrative privileges. Check this box only for a trustworthy user who you feel should
be allowed to have administrative privileges. As a security precaution, this box is not
checked by default.
Allow User to Log In from Windows. Check this box to allow the user to log in from
Windows machines.
Figure 10.2. Complete the fields in the new user sheet when creating a new account.
3. Click OK.
You are returned to the Accounts pane, which now lists your new user by name. You have created a new
user. In the section on customizing a user, you learn how to create a specific user called software
with a specific user ID and group ID.
The Capabilities option, shown in Figure 10.3, enables you to configure some of the actions a user is
allowed to perform. You can restrict a user to use Simple Finder or to use only certain applications.
Additionally, you can control whether a user is allowed to remove items from the Dock, open all System
Preferences, change her password, or burn CDs or DVDs.
Figure 10.3. Certain capabilities can be set for a user with the Capabilities option.
You can also edit user information, such as the password, under the Users tab of the Accounts pane of
System Preferences. Just select the user account that needs to be edited and click Edit User, which brings
up an already completed sheet identical to that for a new user.
If you so choose, you can set an auto login user with the Set Auto Login option. Provide the user's name
and password at the Auto Login sheet. However, when you have multiple users, we recommend that you
disable the automatic login to get a login window instead. If you don't make that modification, the
automatic login account can be modified by whoever sits at the machine.
To delete a user account, simply select the account to be deleted and click the Delete User button. A
sheet appears, asking you to confirm the action and telling you that the user's home directory will be
stored in the Deleted Users folder. The deleted account is stored as a disk image (allowing for easy
archiving). The Accounts pane does not allow you to delete the original administrator user account.
Using the NetInfo Database to Customize a User
In this section you learn to use the Accounts control panel to create a user, but then customize the user
by editing information in the NetInfo database.
The example makes a user that will be a general software user. This is a specialized user whose account
you want to use when compiling software for the system, but this user should not be one of the
administrators for the machine. The user is to belong to a group called tire with group ID 100. You'd
also like to have a specific user ID, 502, for the user, whose account you intend to call software. To
create this user, do the following:
1. Open the Accounts control pane in System Preferences. Click the lock icon if it's set not to allow
changes. Add a new user with a short name of software. The software user's display name is
skuld. Choose whatever password you prefer. Don't give your software user admin privileges.
2. Open NetInfo Manager and select the local domain if it's not already selected. Click the lock to
make changes and enter the administrator username and password.
3. Click the groups directory and scroll through the list. Because tire is not a default group that
comes with the system, you should not see a group called tire. Therefore, you must make a
new group. Click any group to see what values are typically included in a group. Figure 10.4
shows the types of properties that belong to a group.
Figure 10.4. Looking at the staff directory, you can see that the typical properties for a
group are passwd, name, gid, and users.
4. Click groups. From the Directory menu, select New Subdirectory. A new directory called
new_directory appears. Edit the name property and add other properties as follows:
name: tire
passwd: *
gid: 100
users: software
The * in the passwd field means that a group password is not being assigned. So far, you have
only one user in your group: the user named software. As the term group implies, you can
have more than one user in a group.
5. Select Save from the Domain menu. A question to Confirm Modification appears. Click Update
This Copy. Now new_directory has become tire, as shown in Figure 10.5.
Figure 10.5. We now have a new group called tire with gid 100. At this time, only one
user, software, belongs to the group.
6. Click users and then click software. Now the default information about user software
appears in the bottom window. If this is one of your first users, UID 502 might already be the
user ID; otherwise, you can change software's UID shortly. A group ID of 20 is probably
what was made. If you look at the values section for software, you can see that the Accounts
pane added quite a bit of information about software to the NetInfo database. The password
you see is an encrypted version of the password.
Because software was not one of the first users on my system, I already have a user with UID
502. Therefore, I have to either change the UID of my original user or delete the user. Because
my original user with UID 502 was simply a demonstration user to run various commands, I
chose to delete it. If I want to keep my user, I could change the UID of the original user to one
that wasn't already taken, and then change the UID of software to 502.
If I had decided to rearrange UIDs instead of simply deleting the user, I would also
have had to change the ownership of all the files that belonged to my previous user to
belong to their new UID. File ownerships are stored based on numeric UID. Changing
a user to a previously used UID gives that user access to and ownership of any files
that still belong to that numeric UID.
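Finding every file that still belongs to an old numeric UID is a job for find. The sketch below uses a scratch directory and the current user so it can be run safely; the real cleanup, shown in the trailing comment, would search the whole disk as root, with 505 standing in for the old UID and software as the new owner.

```shell
# Demonstration: locate files by owner in a scratch directory
mkdir -p /tmp/uid_demo
touch /tmp/uid_demo/oldfile

find /tmp/uid_demo -type f -user "$(id -un)" -print

# The real migration, run as root (505 = old UID, software = new owner):
#   find / -xdev -user 505 -exec chown software {} +
```

The -xdev flag keeps find on one filesystem, so repeat the command for each mounted volume that might hold the old user's files.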
For your purposes, the user ID for software might not be important. Because you want to share
some of your resources with another machine that also has a user called software and whose
UID is 502, it's important to make software's UID 502 for compatibility purposes. In both
cases, you want the user software to belong to group tire. Change the GID to 100. Change
the UID as appropriate for your situation. Select Save from the Domain menu, and click Update
This Copy in the Confirm Modification box. Figure 10.6 shows the updated information for the
user software.
Figure 10.6. Now the user software has uid 502 and gid 100. You can see from this
information that user software has been assigned a password, a home directory of
/Users/software, and a default shell of /bin/tcsh.
7. Click the lock to save your changes and end your ability to make further changes.
8. Open a Terminal window, go to software's home directory, and look at the directory's
contents. Take note that the directory was created by the Users pane with the default values. The
update to the information in the NetInfo database, however, was not entirely reflected in the
system. So you must manually implement those changes. Here's the default information for the
software user that was created on our system:
[localhost:~software] joray% ls -al
total 8
drwxr-xr-x  11 505   staff  330 Jan 30 18:17 .
drwxr-xr-x   8 root  wheel  228 Jan 30 18:17 ..
-rw-r--r--   1 505   staff    3 Jan 30 18:17 .CFUserTextEncoding
drwx------   3 505   staff  264 Jan 30 18:17 Desktop
drwx------   2 505   staff  264 Jan 30 18:17 Documents
drwx------  15 505   staff  466 Jan 30 18:17 Library
drwx------   2 505   staff  264 Jan 30 18:17 Movies
drwx------   2 505   staff  264 Jan 30 18:17 Music
drwx------   2 505   staff  264 Jan 30 18:17 Pictures
drwxr-xr-x   3 505   staff  264 Jan 30 18:17 Public
drwxr-xr-x   4 505   staff  264 Jan 30 18:17 Sites
In the example, software's original UID was 505. If you didn't change your software user's UID,
you should see software in that column, not 505. The default GID that the Users pane used for creating
software was GID 20, which is the staff group on Mac OS X. So the information that you see for
software's home directory is the information that was originally assigned to software. You have to
update the information to software's directory to reflect the new information.
As root, in the /Users directory, recursively (chown -R) change the ownership of software's
directory to the software user in group tire:
[localhost:/Users] root# chown -R software.tire software
Check the results:
[localhost:/Users] root# ls -ld software
drwxr-xr-x  11 software  tire  330 Jan 30 18:17 software
[localhost:/Users] root# ls -al software
total 8
-rw-r--r--   1 software  tire    3 Jan 30 18:17 .CFUserTextEncoding
drwx------   3 software  tire  264 Jan 30 18:17 Desktop
drwx------   2 software  tire  264 Jan 30 18:17 Documents
drwx------  15 software  tire  466 Jan 30 18:17 Library
drwx------   2 software  tire  264 Jan 30 18:17 Movies
drwx------   2 software  tire  264 Jan 30 18:17 Music
drwx------   2 software  tire  264 Jan 30 18:17 Pictures
drwxr-xr-x   3 software  tire  264 Jan 30 18:17 Public
drwxr-xr-x   4 software  tire  264 Jan 30 18:17 Sites
If you changed the UID of a user who was originally assigned UID 502, look at that user's home
directory and make the appropriate ownership changes.
Sane User Account Management
Like the tire group created earlier, which houses nonadministrative accounts used for system
maintenance, it's very useful to add groups to your system for any logically related collections of users on
your system. The Unix privilege system underlying Mac OS X contains a mechanism to allow groups of
users to mutually share access to files within their group, while protecting those files from other users on
the same system.
To enable this capability, you must create groups for those users to belong to, and you must add their
usernames to the group's users value list. A single user can be a member of any number of groups, and
can assign files that he owns to be visible to any one of (or none of) the groups to which he belongs. To
make use of this capability, the user must use the command-line group ownership tools, such as chmod,
chown, and chgrp, or edit the Ownership & Permissions information in the Finder.
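As a quick sketch of how that sharing works at the command line, the following commands set up a directory that members of one group can use while everyone else is locked out. The group name web and the scratch path are purely illustrative; so that the sketch runs on any account, it substitutes your own primary group for the real one:

```shell
#!/bin/sh
# Set up a directory shared by one group, hidden from everyone else.
# "web" is a hypothetical group name; the sketch uses your own primary
# group and a scratch directory so it can run anywhere.
dir=$(mktemp -d)              # stand-in for a real shared directory
chgrp "$(id -gn)" "$dir"      # in practice: chgrp web "$dir"
chmod 2770 "$dir"             # owner+group get rwx, others nothing;
                              # the setgid bit makes new files inherit
                              # the directory's group
ls -ld "$dir"                 # permissions now read drwxrws---
rm -rf "$dir"
```

The setgid bit (the leading 2 in 2770) is the piece that makes the sharing low-maintenance: files created inside the directory belong to the directory's group automatically, so members don't have to remember to chgrp their own work.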
Another change that you'll probably find useful to make to the groups NetInfo directory is the creation
of a users group into which you can assign users who don't logically seem like staff users. Apple's
Accounts pane creates users as members of the staff group, and you're welcome to leave them with
this default group. There's a logical distinction between staff users and normal users on your other
Unix systems, though, so you may find it convenient to create yet another group to which to assign new,
nonstaff users. On our Mac OS X machines, we create this as GID 99, with the group name users.
Skeleton User Accounts
If you're going to have any significant number of users on your machine (or machines), you'll soon find
that being able to provide a more customized environment than what comes out of the Accounts system
preference pane by default is a benefit.
Apple has provided a convenient tool you can use to perform some customization of accounts as created
by the Accounts control pane. This is the inclusion of a User Template directory, from which the
accounts made by the pane are created by duplication. The family of User Template directories,
individualized by locale, are kept in /System/Library/User Template. This system works for
simple configuration settings that you might like to configure for each newly created user, but it has
some limitations if you'd like to work with more complex setups. The largest logical limitation is that if
you're trying to set up complicated startup scripts and sophisticated environment settings, using a real
user account as your default template is nice, because then you can log in for testing and tweaking. The
largest practical limitation is that Apple has put the default templates in the /System/ hierarchy,
where they're Apple-sacrosanct (Apple can modify this directory as they please), and system updates are
likely to tromp on any customizations that you might make.
The easiest way to solve all the problems at once is to create a skeleton user account as a real user
account, and to keep it up to date with any environmental customizations that you want to provide for
new users when you create accounts. If you create the skeleton user as simply another user account, you
can log in to it and then conveniently tweak its settings. Using this method, you can create as many
skeleton accounts as you need for different collections of settings.
Even if you prefer to use the Accounts control pane, the creation of skeleton users as real users on the
system can be useful for you. You can configure skeleton users, log in for testing, and then populate
the /System/Library/User Template directories (if you don't mind incurring the wrath of the
Apple installers), as required for customizing the configuration of users under the Users pane.
Alternatively, you can create the accounts with Apple's default templates, and then overwrite the actual
user directories with data from your skeleton account.
Every user's shell environment is configured by the .login and .cshrc (presuming you're using the
tcsh or csh shell) scripts in the user's home directory. You might also want to provide a more
customized starter Web page or assorted bits of default data.
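For tcsh users, the skeleton account's ~/.cshrc might look something like the following. Every path, alias, and setting here is only an example of the kind of thing you might standardize; whatever you put in the skeleton's dotfiles is what each new user will inherit:

```shell
# ~/.cshrc for the skeleton account (contents are illustrative).
set path = ( /usr/local/bin $path )    # site-local tools first
set prompt = "%n@%m:%~%# "             # user@host:directory prompt
set history = 100
alias ll        ls -l
setenv EDITOR   vi
```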
After you configure an account in the fashion you'd like your new users to have, the hard part is done. It
would be nice to have a way to use this account directly from the Users pane as the seed for new
accounts as they are created, but unfortunately, we aren't yet so lucky. Instead, you have two options for
how to use the starter account information. First, you can create a new user through the Accounts control
pane. After the account is created, you can replace the user's home directory (that the Accounts control
pane created) with a copy of the skeleton account home directory.
Your other option is to ignore the Accounts control pane, and create a new user by duplicating an
existing user node from the NetInfo hierarchy, making a copy of the skeleton account home directory for
the new user's home directory, and then editing the copy of the NetInfo entry for the new user to reflect
the correct information for that user.
The first option is probably easier, but the second has the benefit of being something you can do from
the command line with nidump and niload, and therefore, of being automatable.
For the rest of the discussion, it will be assumed that you've created a skeleton account in which you
have made any customizations that you want to install for all new users. The account UID will be
assumed to be 5002, with a home directory of /Users/skel and a GID of 99. It will also be assumed
that you've added the group users to your NetInfo groups directory, with a GID of 99, and that you
want to use this GID for normal, nonprivileged users.
To implement the first method of providing local customization for a new user, follow these steps:
1. Create the new user with the Accounts control pane. Make any necessary changes to the user's
configuration, such as the default GID, using NetInfo Manager as shown in Chapter 9,
"Everything Else."
2. Become root (su, provide password).
3. Change directories to the skeleton user's directory (cd ~skel).
4. tar the contents of the current directory, using the option to place the output on STDOUT (tar
-cf - .) and then pipe the output of tar into a subshell. In the subshell, cd to the new user's
directory, and untar from STDIN (| ( cd ~<newusername> ; tar -xf - ) ). The
complete command is tar -cf - . | ( cd ~<newusername> ; tar -xf - ).
5. Change directories to one level above the new user's directory (cd ~<newusername> ;
cd ../).
6. Change the ownership of everything in the new user's directory to belong to the new user and,
potentially, to the user's default group if it's not the same as the skel account default group
(chown -R <newusername>:<newusergroup> <newuserdirectoryname>).
For example, if you've just created a new user named jim, assigned to the group users with the
Accounts control pane/NetInfo Manager, and want to put the skel account configuration into jim's
home directory, you would enter the following:
su (provide password)
cd ~skel
tar -cf - . | ( cd ~jim ; tar -xf - )
cd ~jim
cd ../
chown -R jim:users jim
If you'd rather create new users from the command line, either because you can't access the physical
console conveniently or because you want to use what you know about shell scripting to automate the
process, you can use the second method suggested earlier. You might find this method more convenient
for creating users in a NetInfo domain other than localhost/local. The Accounts preference pane
in the nonserver version of OS X seems incapable of creating users in other NetInfo domains, and this
makes using it for managing cluster users difficult.
This process creates a new user by manipulating the NetInfo database directly, so you should
remember to back up your NetInfo database directory (/private/var/db/netinfo) before you begin.
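Backing that directory up can be as simple as a tar archive. On a real system the directory is /private/var/db/netinfo and you would run this as root; the archive name is arbitrary, and the sketch below works on a scratch copy so that it runs anywhere:

```shell
#!/bin/sh
# Archive a NetInfo database directory before editing it by hand.
# On a real system: (cd /private/var/db && tar -czf netinfo-backup.tar.gz netinfo)
db=$(mktemp -d)                       # stand-in for /private/var/db
mkdir "$db/netinfo"                   # ...which contains local.nidb, etc.
touch "$db/netinfo/local.nidb"        # stand-in for the database itself
(cd "$db" && tar -czf netinfo-backup.tar.gz netinfo)
tar -tzf "$db/netinfo-backup.tar.gz"  # verify the archive's contents
rm -rf "$db"
```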
To implement the second method, follow these steps:
1. Become root (su, give password).
2. Change directories to the directory in which you'd like to place the new user's home directory
(cd /Users, for example).
3. Make a directory with the short name of the user you're about to create (mkdir
<newusername> to create a directory for a new user named <newusername>).
4. Change directories to the home directory of the skel account (cd ~skel).
5. tar the contents of the current directory, and use the option to place the output on STDOUT
(tar -cf - .).
6. Pipe the output of the tar command into a subshell. In the subshell, cd to the new user's
directory, and untar from STDIN (| ( cd <pathtonewuserdirectory> ; tar -xf
- )). Note that you can't use ~<newusername> because <newusername> doesn't actually
exist on the system yet. The entire command syntax is tar -cf - . | ( cd
<pathtonewuserdirectory> ; tar -xf - ).
7. Dump your skel account (UID 5002 here, remember) NetInfo entry, or some other user's
entry, into a file that you can edit (nidump -r /name=users/uid=5002 -t
localhost/local > ~/<sometempfile>). As an alternative to the uid search, you
could specify the skel account with /name=users/name=skel.
8. Edit ~/<sometempfile>, changing the entries so that they are appropriate for the new user
you want to create. You'll want to change at least _writers_passwd,
_writers_tim_password, uid, _writers_hint, _writers_picture, gid,
realname, name, passwd, and home. It's probably easiest to leave passwd blank for now.
9. Use niutil to create a new directory for the uid that you've picked for the new user
(niutil -p -create -t localhost/local /name=users/
uid=<newuserUID>; give the root password when asked).
10. Use niload to load the data you modified in ~/<sometempfile> back into the NetInfo
database (cat ~/<sometempfile> | niload -p -r /name=users/
uid=<newuserUID> -t localhost/local).
11. Set the password for the new user (passwd <newusername>). Provide a beginning
password for the account when prompted.
12. Change back to the directory above the new user's home directory (cd ~<newusername>;
cd ../).
13. Change the ownership of the new user's directory to the new user's <newusername> and
<defaultgroup> (chown -R <newusername>:<defaultgroup>
<newuserdirectoryname>).
If you've made a mistake somewhere along the way, just restore your NetInfo database from the backup
that you made before you started this. You also might need to find the nibindd process, and send it an
HUP signal (ps -auxww | grep "nibindd"; kill -HUP <whatever PID belongs to
nibindd>, or killall -HUP nibindd, if you prefer to do things the easy way).
Resource forks get lost in the tar!
The BSD tar distributed by Apple doesn't understand file resource forks, and some
software vendors haven't caught on to the idea of using plists properly yet. The unfortunate
consequence is that if you've built a highly customized skeleton user (or are trying to use
these steps as an example of how to move a real user account), and some of the user's
preferences are stored in the resource fork of the preference files, tar is going to make a
mess of things when you use it to duplicate the user's directory to the new location.
To overcome this problem, you currently have two options:
Metaobject has developed hfstar, a GNUtar derivative that supports HFS+,
allowing it to properly handle resource forks, type and creator codes, and so on.
GNUtar is functionally very similar to BSD tar, but the differences that do exist are
significant enough that you won't want to simply replace the tar that Apple provides
with the hfstar from Metaobject. Instead, it'd be better to keep them both around,
and use them for their respective strengths. Metaobject's hfstar can be downloaded
from the company's Web site.
Use Apple's already supplied ditto command. ditto doesn't provide nearly the
power of tar, but it'll do for copying user directories.
To produce results similar to those from the first method earlier, the following example creates a new
user with the username of james, UID 600, GID 70, with home directory /Users/james. This
again assumes the skel account with UID 5002 and characteristics as described earlier. (Comments
or instructions are shown in italics.)
su (provide the password)
cd /Users
mkdir james
cd ~skel
tar -cf - . | ( cd /Users/james ; tar -xf - )
nidump -r /name=users/uid=5002 -t localhost/local > ~/skeltemp
vi ~/skeltemp
Then, in vi, change the contents from this:
"authentication_authority" = ( ";basic;" );
"picture" = ( "/Library/User Pictures/Nature/Zen.tif" );
"_shadow_passwd" = ( "" );
"hint" = ( "Urusei!" );
"uid" = ( "5002" );
"_writers_passwd" = ( "skel" );
"realname" = ( "skeleton account" );
"_writers_hint" = ( "skel" );
"gid" = ( "99" );
"shell" = ( "/bin/tcsh" );
"name" = ( "skel" );
"_writers_tim_password" = ( "skel" );
"passwd" = ( "AtbvqXhZrKJ7A" );
"_writers_picture" = ( "skel" );
"home" = ( "/Users/skel" );
"sharedDir" = ( "Public" );
to this:
"authentication_authority" = ( ";basic;" );
"picture" = ( "/Library/User Pictures/Nature/Zen.tif" );
"_shadow_passwd" = ( "" );
"hint" = ( "boggle" );
"uid" = ( "600" );
"_writers_passwd" = ( "james" );
"realname" = ( "Sweet Baby James" );
"_writers_hint" = ( "james" );
"gid" = ( "70" );
"shell" = ( "/bin/tcsh" );
"name" = ( "james" );
"_writers_tim_password" = ( "james" );
"passwd" = ( "" );
"_writers_picture" = ( "james" );
"home" = ( "/Users/james" );
"sharedDir" = ( "Public" );
Now load the skeleton into NetInfo:
# niutil -p -create -t localhost/local /name=users/uid=600
# cat ~/skeltemp | niload -p -r /name=users/uid=600 -t localhost/local
# passwd james (fill in a good starting value)
# cd ~james
# cd ../
# chown -R james:www james (GID 70 is group www on this machine)
Depending on whether your NetInfo daemon is feeling well, you might have to HUP the
nibindd process (killall -HUP nibindd) to get it to recognize that you've made the
change. Remember that you can always restore your NetInfo database backup to get out of a
mess, if you've created one.
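Because the second method is entirely command-line driven, the record-editing step lends itself to scripting. The sketch below is our own (the function name and sed patterns are assumptions, and it presumes the skel account values shown earlier: name skel, UID 5002, GID 99, home /Users/skel). It only rewrites a dumped record; you would still run niload, copy the home directory, and set a password as in the steps above:

```shell
#!/bin/sh
# Sketch: rewrite a dumped skel NetInfo record for a new user.
# rewrite_record <name> <uid> <gid> reads nidump output on stdin and
# writes the edited record on stdout.
rewrite_record() {
    name="$1"; uid="$2"; gid="$3"
    sed -e "s/( \"skel\" )/( \"$name\" )/g" \
        -e "s/( \"5002\" )/( \"$uid\" )/" \
        -e "s/\"gid\" = ( \"99\" )/\"gid\" = ( \"$gid\" )/" \
        -e "s|/Users/skel|/Users/$name|g" \
        -e "s/\"passwd\" = ( \"[^\"]*\" )/\"passwd\" = ( \"\" )/"
}
# Example: rewrite_record james 600 70 < ~/skeltemp > ~/jamestemp
# (realname and hint still need a manual edit afterward.)
```

This is exactly the kind of building block that the shell-scripting chapters prepare you for; wrap it with the mkdir, tar, niutil, and niload steps and you have the beginnings of an automated adduser for Mac OS X.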
If you need to delete a user account from the command line, you can destroy the NetInfo
information for the user by using the command niutil -p -destroy -t
localhost/local /name=users/uid=<userUIDtobedeleted>. Then rm -rf the user's home
directory to delete it and all of its contents from the system.
Just to make sure that your user has been created as you think it should have been, you can use niutil
to list the /users NetInfo directory. (Don't be surprised if your listing doesn't look quite like this—this
is simply the list of users configured on my machine, so your users are likely to be different.)
[localhost:/Users/ray] root# niutil -list -t localhost/local /users
As shown, james does now exist in the NetInfo /users directory, although this listing shows only the
NetInfo node numbers, rather than the users and property values. To see whether james has the
properties intended, you can use niutil to read the info from the node named james:
[localhost:/Users/ray] root# niutil -read -t localhost/local /users/james
authentication_authority: ;basic;
picture: /Library/User Pictures/Nature/Zen.tif
hint: boggle
uid: 600
_writers_passwd: james
realname: Sweet Baby James
_writers_hint: james
gid: 70
shell: /bin/tcsh
name: james
_writers_tim_password: james
passwd: aynyjRMfGyH7U
_writers_picture: james
home: /Users/james
sharedDir: Public
Command-Line Administration Tools
There are a number of command-line tools that are of assistance in the configuration and maintenance of user
accounts. Some of these have functionality duplicated in graphical tools and some do not. For truly
sophisticated user management, we again suggest looking to Mac OS X Server because it provides tools that
are considerably more powerful.
NetInfo Utilities
The nidump, niutil, and niload commands are particularly useful for user account creation and
deletion. It's also a good idea to be familiar with the tar command for backing up NetInfo databases. We
wouldn't be surprised if someone creates a graphical tool that scripts the sort of account maintenance that has
been shown in this chapter and makes it available on the Net. If we managed to pique your interest in shell
programming in the earlier chapters, this would be an ideal problem to attack as a learning experience.
Because NetInfo is so vital to the operation of the machine, we recommend that you verify, by using print
statements, that the scripts you create output exactly what you want—before you turn them loose on the
NetInfo database.
Common BSD Tools
In addition to the NetInfo commands for creating and modifying user accounts themselves, you have access to
a number of standard BSD utilities. Primarily, these allow you to operate on the files in user accounts; but one,
the passwd command, inserts crypted passwords into the NetInfo user record. (This is a little odd because
Apple has circumvented most BSD tools of this nature and incorporated their functionality into the NetInfo
commands. It wouldn't be too surprising if Apple replaces or supercedes this command with another in the
Changing File Ownership: chown
The chown command is used to change the ownership of files. Only the root user can execute the chown
command. The simplest form, and the one you'll end up using most frequently, is chown <username>
<filename>, which changes the ownership property of <filename> to belong to the user <username>.
The command can optionally be given as chown <username>:<groupname> <filename> to change
the user and group at the same time. Additionally, -R can be specified after the command to cause a recursive
change in an entire directory, rather than to a single file. The command documentation table is shown in
Table 10.1.
Table 10.1. The Command Documentation Table for chown
chown Changes file owner and group.
chown [-R [-H | -L | -P]] [-fh] <owner> <file1> <file2> ...
chown [-R [-H | -L | -P]] [-fh] :<group> <file1> <file2> ...
chown [-R [-H | -L | -P]] [-fh] <owner>:<group> <file1> <file2> ...
-R Recursively descends through directory arguments to change the user ID and/or group ID.
-H If -R is specified, symbolic links on the command line are followed. Symbolic links encountered in
tree traversal are not followed.
-L If -R is specified, all symbolic links are followed.
-P If -R is specified, no symbolic links are followed.
-f Forces an attempt to change user ID and/or group ID without reporting any errors.
-h If the file is a symbolic link, the user ID and/or group ID of the link is changed.
The -H, -L, and -P options are ignored unless -R is specified. Because they also override each other, the last
option specified determines the action that is taken.
The -L option cannot be used with the -h option.
It is not necessary to provide both <owner> and <group>; however, one must be specified. If group is
specified, it must be preceded with a colon (:).
The owner may be either a numeric user ID or a username. If a username exists for a numeric user ID, the
associated username is used for the owner. Similarly, the group may be either a numeric group ID or a group
name. If a group name exists for a group ID, the associated group name is used for the group.
Unless invoked by the superuser, chown clears set-user-id and set-group-id bits.
Changing File Group Ownership: chgrp
The chgrp command functions like the chown command, except that it changes only the group ownership of
a file. This can be particularly useful when you want to give a user, or group of users, access to files owned by
a number of different users. Instead of changing the ownership of each, or issuing a separate chown
<userid>:<groupid> for each file, you can instead change the file's group en masse to one that the
intended user or group can read, while not affecting the actual ownership of the files.
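For example, to hand an entire project tree to a single group in one pass: the group name web and project path are made up, and so that the sketch actually runs, it substitutes your own primary group and a scratch tree:

```shell
#!/bin/sh
# Give one group access to every file in a tree without touching the
# files' owners.  "web" and the project path are hypothetical.
proj=$(mktemp -d)               # stand-in for /Users/Shared/project
touch "$proj/index.html" "$proj/notes.txt"
chgrp -R "$(id -gn)" "$proj"    # in practice: chgrp -R web /Users/Shared/project
ls -l "$proj"                   # every file now lists the new group
rm -rf "$proj"
```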
The command documentation table for chgrp is shown in Table 10.2.
Table 10.2. The Command Documentation Table for chgrp
chgrp Changes group.
chgrp [-R [-H | -L | -P]] [-fh] <group> <file1> <file2> ...
-R Recursively descends through directory arguments to change the group ID.
-H If -R is specified, symbolic links on the command line are followed. Symbolic links encountered in
tree traversal are not followed.
-L If -R is specified, all symbolic links are followed.
-P If -R is specified, no symbolic links are followed.
-f Forces an attempt to change group ID without reporting any errors.
-h If the file is a symbolic link, the group ID of the link is changed.
Unless -h, -H, or -L is specified, chgrp on symbolic links always succeeds and has no effect.
The -H, -L, and -P options are ignored unless -R is specified. Because they also override each other, the last
option specified determines the action that is taken.
The group may be either a numeric group ID or a group name. If a group name exists for a group ID, the
associated group name is used for the group.
The user invoking chgrp must belong to the specified group and be the owner of the file, or be the superuser.
Unless invoked by the superuser, chgrp clears set-user-id and set-group-id bits.
Setting a User's Password: passwd
The passwd command, somewhat unexpectedly, changes a user's password. If you look at the man page for
passwd, you will see that there are a number of related password and account management commands that
come from BSD Unix. With the exception of the passwd command, all the others appear to operate on the
local files only, and do not seem to affect the NetInfo database information. Because the local authentication
files (such as /etc/passwd and /etc/group) are used only in single-user mode, none of the other
commands currently have any significant use in OS X. (We'd like to think that Apple is working on making
more of them operate with the NetInfo database, but we really have no idea whether the BSD utilities are
coming or going.)
Simply issued as passwd, with no other options, the passwd command enables a user to change her
password. The root user can issue passwd <username> to force the password for the user <username>
to change. The command documentation table for passwd is shown in Table 10.3.
Table 10.3. The Command Documentation Table for passwd
passwd Modifies a user's password
passwd [-l] [-k] [-y] [<user>]
passwd changes the user's local, Kerberos, or YP password. The user is first prompted for her
old password. The user is next prompted for a new password, and then prompted again to retype
the new password for verification.
The new password should be at least six characters in length. It should use a variety of lowercase
letters, uppercase letters, numbers, and metacharacters.
-l Updates the user's local password.
-k Updates the Kerberos database, even if the user has a local password. After the password has been
verified, passwd transmits the information to the Kerberos authenticating host.
-y Updates the YP password, even if the user has a local password. The rpc.yppasswdd (8)
daemon should be running on the YP master server.
If no flags are specified, the following occurs:
If Kerberos is active, the user's Kerberos password is changed, even if the user has a local password.
If the password is not in the local database, an attempt to update the YP password occurs.
To change another user's Kerberos password, run kinit (1) followed by passwd. The superuser is not
required to supply the user's password if only the local password is being modified.
Restricting User Capabilities
Beginning in Mac OS X 10.2, Apple has provided a convenient means of controlling logged-in users'
capabilities, such as what applications they can run and whether they can modify system preferences.
This works beyond the Finder and also guards against invocation of applications from the Terminal. In a
lab or community setting, this can work to provide effective guest access without requiring extensive
user and group administration.
Editing User Capabilities
To access the user capability editor, you must first create a standard user account via the Accounts
System Preferences panel. Administrative users cannot have capability restrictions. Select the user to
modify in the Accounts list of the Users tab, then click the Capabilities button, as shown in Figure 10.7.
Figure 10.7. Edit capabilities of standard user accounts within the Accounts system
preferences panel.
The Capabilities sheet, shown in Figure 10.8, is displayed. Choose from these limitations on the account:
Use Simple Finder. Removes all but basic file launching capabilities from the account. This will
be covered in more detail shortly.
Remove Items from the Dock. Lab and teaching settings work best if there is consistency across
accounts. To keep a static base set for the dock (no applications, files, or folders can be added or
removed), click the Remove Items from the Dock check box. Note: The label for this option is
misleading. It prevents any changes to the Dock, rather than simply removing items.
Open All System Preferences. If unchecked, users can access only the system preferences related to
their accounts: all Personal category preferences, Universal Access, Keyboard, Mouse, and so on.
Change Password. Allow the user to change the account password. For a guest or kiosk account,
this would be disabled.
Burn CDs or DVDs. To eliminate the capability to burn optical media on the system, uncheck
this option.
Use Only These Applications. When checked, the user will be restricted to running only the
applications or application categories checked in the list at the bottom of the pane. You can
expand a category with the disclosure arrow in front of its name to show the applications it
contains. Unchecking an application or category removes access from that item. You can add
applications by clicking the Locate button or dragging their icons into the list.
Figure 10.8. Restrict what the user can access.
User account restrictions (capabilities) do not modify the permissions of the applications that
they control. They also do not enable you to control command-line applications. You must
use traditional user/group permissions in combination with Apple user capabilities to
administer command-line and GUI tools effectively.
If a user violates an application restriction, she is warned with an error message, shown in Figure 10.9,
and the attempt is logged via syslogd in /var/log/system.log as follows:
Jan 27 16:41:11 John-Rays-Computer ./Navigator: CG/CPS:
The application with pid 615 is not in the list of permitted
applications and so has been exited.
Figure 10.9. Users are warned if they attempt to launch an application beyond their accounts'
capabilities.
Although there isn't a reporting function built in to monitor these sorts of attempted policy violations,
the logsentry product documented in Chapter 19, "Logs and User Activity Accounting," will easily
automate violation tracking on your system.
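Until you have logsentry in place, the message is easy enough to pull out of the log by hand. On a live system you would grep /var/log/system.log itself; the sketch below works on a scratch copy of the sample entry so that it is self-contained:

```shell
#!/bin/sh
# Find capability-violation messages in the system log.  On a real
# machine: grep "not in the list of permitted" /var/log/system.log
log=$(mktemp)                   # scratch copy of the sample entry
cat > "$log" <<'EOF'
Jan 27 16:41:11 John-Rays-Computer ./Navigator: CG/CPS:
The application with pid 615 is not in the list of permitted
applications and so has been exited.
EOF
grep "not in the list of permitted" "$log"
rm -f "$log"
```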
Simple Finder
The "Simple Finder" is, as the name implies, a simplified version of the Mac OS X Finder. It provides a
static dock with access to the applications chosen in the Capabilities setup, the user's Documents folder,
the Shared folder, and Trash, as shown in Figure 10.10.
Figure 10.10. The Simple Finder limits access to most navigation and file management
functions.
Users navigate through multiple screens of files by using the buttons at the bottom of the window.
Documents must be saved to the ~/Documents folder to be easily accessible through the interface.
There is no direct means of navigating to other folders, but this should not be taken as a form of security.
Access to the Terminal still enables a user to open files located elsewhere on the system. Alternatively,
adding folders to the Login Items pane in System Preferences causes them to be opened in the Simple
Finder at login.
Properly configured, the Simple Finder can be an effective means of providing a simple "launcher" for
children or kiosk applications. It should not, however, be assumed to be secure without proper
configuration of the applications that can be launched, such as restricting access to applications such as
the Terminal and System Preferences.
Summary
This chapter covered the tools and techniques necessary for you to customize your users' capabilities to
match those you require them to have. Make good use of these tools to enforce the policies that you've
created for your system. Remember that your users will make mistakes and occasionally display poor
judgment when using their accounts. Don't take an incident of a user running up against one of your
security barriers as a sufficient condition to consider her a danger to your system. Good users sometimes
make bad mistakes; this doesn't necessarily make them untrustworthy. On the other hand, if you don't
trust an individual, for goodness sake, don't give him an account!
Plenty of the most important users in computing history—people such as Randal Schwartz, Dan Farmer,
and even ACM Turing Award winner Ken Thompson—have done things that most would consider questionable, and in some circles even criminal, in
their work, experimentation, and learning with computers. Without these people, the computing world
would be a much poorer place, and we owe them a debt of gratitude for what they've done for us, in
some cases after having been thoroughly undone by their administrative staffs.
In the global community of computing security, generating the next generation of computing greats is
everybody's problem. It might not always seem like it, but it's usually in your best interest to contribute
to the community what you can, when you can. Sometimes, that contribution will be in the form of well-educated users, who've learned enough about computing security to develop and implement the next
generation of computing security tools.
Chapter 11. Introduction to Mac OS X Network Services
What Is a Network Service?
Network Service Vulnerabilities
Controlling Mac OS X Network Service Processes
Protecting inetd with TCP Wrappers
Increasing Security with xinetd
A Mac OS X machine, or for that matter, any Unix machine, derives much of its powerful functionality
from abstracting how the many small programs that cooperate as the "operating system" communicate
with each other. It is frequently useful for these programs to be able to communicate with programs on
remote machines, and so they require the ability to communicate over the network. In a typical display
of Unix "do it the simple way" design principles, the result is predictable: If the software is already
required to have network capabilities, don't bother writing separate code to handle single-machine
connections; just use the network for local connections as well. This produces a powerful and flexible
operating system, but one that is by its very nature more vulnerable to network attacks than a monolithic
operating system such as Mac OS 9. Apple has done a surprisingly good job of providing protection for
services that might be problematic, and of leaving those that are less easily protected turned off. Still, if
you want to use your machine to its fullest capabilities, you need to dig into the configuration and make
those changes that suit your particular network and usage requirements.
What Is a Network Service?
Most generally, a network service is a combination of a program and a network protocol, which together
allow remote networked machines to access some sort of functionality on a server machine. Web
servers, FTP servers, print servers, terminal access servers, and other such capabilities are, by this
general definition, network services. If one were to turn off all network services on a machine, a fairly
good argument could be made that it would be secure from network attacks. Unfortunately, the Unix
way of doing some things results in some useful capabilities being provided by a machine to itself via
the network. For example, in a normal installation a Unix machine prints, even to printers connected
directly to itself, by way of the network. Denying all network services would therefore result in a
machine that was not particularly useful. Instead, picking which network services are important to keep
and securing them against attack is a necessity.
Network services are typically started through one of two mechanisms. Either they are standalone
services that are started by a startup script in /System/Library/StartupItems or /Library/
StartupItems, or they are started by yet another network service, inetd, which watches for
network connections attempting to access particular services and starts the appropriate software to
accommodate them.
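The inetd side of this is driven by /etc/inetd.conf, where each line names a service, its socket type and protocol, whether inetd should wait for the server to exit, the user to run the server as, and the program to launch. A representative entry looks like the following; Apple ships most such lines commented out, and the exact fields on your system may differ:

```shell
# /etc/inetd.conf fields:
# service  socket-type  proto  wait/nowait  user  server-program  arguments
ftp     stream  tcp     nowait  root    /usr/libexec/tcpd       ftpd -l
```

Note that the server program here is /usr/libexec/tcpd rather than ftpd itself; routing connections through the TCP Wrappers daemon is the protection mechanism discussed later in this chapter.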
Standalone services that are started by a startup script are running all the time, no matter how often they
are used. They control their own configurations, and so the user that runs the service may also vary. On
the other hand, services that run from inetd do not run all the time. They run only when they are
needed, and are started by inetd, often with the permissions of inetd. A given service, though, may
also have its own configuration. Running these services through a superserver also enables you to
consolidate server processes.
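A quick way to see which inetd-managed services are enabled is to look for uncommented lines in /etc/inetd.conf. Here is a minimal sketch, shown against a two-line sample rather than the real file so it can run anywhere:

```shell
# List enabled (uncommented) inetd services. The sample stands in for
# /etc/inetd.conf; on a real machine, pipe in the actual file instead.
sample='#ftp    stream tcp nowait root /usr/libexec/tcpd ftpd -l
telnet  stream tcp nowait root /usr/libexec/tcpd telnetd'
enabled=$(printf '%s\n' "$sample" | grep -v '^#' | awk '{print $1}')
echo "$enabled"   # prints: telnet
```

For the standalone services, `ls /System/Library/StartupItems /Library/StartupItems` shows the startup-script side of the picture.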
This chapter covers the starting of general network services, and how those started from inetd may be
partially protected by the correct configuration of inetd, or its replacement, xinetd. Specific security
measures for individual services are covered in the chapters later in this book.
Network Service Vulnerabilities
Individual network services suffer from a great number of specific attacks, which are covered in later
chapters devoted to the services themselves. Thankfully, the startup-script way of starting network
services is fairly invulnerable to attacks. The inetd method suffers primarily from one well-known
problem. As its purpose is to watch for network requests and start the appropriate servers to handle those
requests, inetd could hardly have been better designed to facilitate denial of service (DoS) attacks.
Here is a brief listing of some recent reports on inetd vulnerabilities:
Denial of Service Vulnerability in inetd in Compaq Tru64 Unix (BugTraq ID 5242). A denial
of service vulnerability exists in some versions of Compaq Tru64 Unix. Details on the
vulnerability are not available, but patches to fix it are.
Denial of Service Vulnerability in inetd in Compaq Tru64 Unix (VU# 880624). A denial of
service vulnerability exists in Compaq Tru64 Unix 5.1. As a result, inetd could stop accepting
connections. Compaq made a patch available to fix this vulnerability.
Denial of Service Vulnerability in inetd in Red Hat 6.2 (CVE-2001-0309). The inetd
distributed with Red Hat 6.2 does not properly close sockets for the internal services, such as
chargen, daytime, and echo. This can result in a denial of service attack through connections
to the internal services. Red Hat provided a patch to fix the problem.
FreeBSD inetd wheel Group File Read Vulnerability (CVE-2001-0196, BugTraq ID 2324).
The inetd distributed with FreeBSD incorrectly sets up group privileges on child processes for
identd. The vulnerability can allow attackers to read the first 16 bytes of files readable by
group wheel. Attackers could potentially read the first entry of the encrypted password file and
use that information to gain access to or elevated privileges on the local host. A FreeBSD patch
was made available to fix the vulnerability.
With the most serious of the recent vulnerabilities, CVE-2001-0196, an attacker could potentially gain
access to the host. In such situations, it is especially important to apply patches that fix the problem, or
to apply the recommended workaround if one is available before a patch is released.
Not surprisingly, however, most of the vulnerabilities involve denial of service. In the vulnerability
documented by CERT's VU# 880624, a DoS vulnerability for Compaq Tru64, the DoS attack results in
inetd refusing to accept further connections and the machine losing all network connectivity. It's
arguable whether this is a worse vulnerability than having inetd continue to accept connections and
run the machine out of process resources instead, but almost any version of Unix will be affected in
some fashion by an attempted DoS attack against inetd.
In an attempt to mitigate this mode of attack, xinetd has been created as a replacement for the
functionality of inetd. xinetd, pronounced, according to its authors, "zye-net-d," has been designed
to allow for some protection in spite of the inherently DoS-friendly nature of the job it does. Among its
features, discussed in greater length later in this chapter, are the capability to throttle particular
connections to prevent overwhelming the machine with service requests, and to place limits on the
number of connections to particular services, automatically blocking remote machines that appear to be
attempting to conduct DoS attacks against it.
Controlling Mac OS X Network Service Processes
Mac OS X provides some GUI controls for managing service processes. Additionally, you can
control processes manually, which makes it convenient to adjust settings from a remote
terminal. In general, we recommend being completely conversant with the
manual configuration and control of network services, even if you choose to use the various System
Preferences panes to make quick changes on a day-to-day basis. Manual configuration is sometimes less
convenient, but so long as your machine is accessible, you will be able to control and configure it this
way. Server problems do not limit themselves to occurring when you're sitting at your machine.
GUI Controls
Apple ships OS X with many services disabled. In many of the Unix operating systems, you have to be
careful to disable some common services that are turned on by default, but that you may not actually
need. With OS X, however, many of these common services are wisely disabled. Instead of requiring an
expert to secure the machine in its default state, OS X starts off fairly well configured for a novice
administrator; rather, it takes a bit of expertise to turn on useful but potentially dangerous services
such as the remote login service.
Under the Services tab of the Sharing pane of the System Preferences panel you find controls to enable
or disable the FTP, SSH, World Wide Web, AppleShare, Windows File Sharing, Remote Apple Events,
and the Printer Sharing servers. Choose only those services that you really need. Remember, the more
services you turn on, the more vulnerable to attack your machine becomes. Figure 11.1 shows the
Services tab of the Sharing pane.
Figure 11.1. From the Services tab of the Sharing pane, a number of services can be enabled or disabled.
The inetd service, the Internet services daemon, is configured by the /etc/inetd.conf file and is
itself a service that starts and controls other services. It's not practical to start an unlimited number of some
types of network services and leave them running, right from startup. Depending on the use of your
machine, some services might be needed in great numbers—for example, the ftpd FTP server
processes, if you serve particularly interesting data and have many people connecting simultaneously.
Others might be used hardly at all, such as the sprayd network diagnostic daemon. On your system,
the pattern might be the opposite, but regardless of the use, patterns are likely to vary over time. For
many of these types of services, the system relieves you of the task of trying to provide the proper
number of these servers in some manual configuration process by using the inetd daemon to configure
and run them on an as-needed basis.
As was mentioned earlier, xinetd can be used as a replacement for inetd. As a matter of fact,
starting with Mac OS X 10.2, xinetd is the default Internet services daemon. However, because
inetd is a ubiquitous Unix service and its configuration file is easier to read for familiarizing yourself
with the basic network services involved, we look at inetd first.
The inetd.conf file, then, is the file that tells inetd which services it should start in response to
network requests, and how (at the command-line level) to start them. The inetd.conf file has the
form of a set of lines, each line containing a specification for a service. The service specification lines
consist of a set of fields separated by tabs or spaces. The fields that must occur on each line are shown in
the following list, with a brief description of the data that belongs in them.
Service name (used to look up service port in /etc/services map)
Socket type (stream, dgram, raw, rdm, or seqpacket)
Protocol (tcp or udp, tcp6 or udp6, rpc/tcp, or rpc/udp)
Wait/nowait (for dgrams only—whether or not the socket should wait for additional
connections; all others get nowait)
User (user for which the service is run)
Server program (actual path to the binary on disk)
Server program arguments (how the command line would look, if typed, including server name)
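Put together, a single (uncommented) service line carries those fields in order. Here is the ftp line annotated field by field; the layout is standard inetd.conf syntax:

```
# service  socket  proto  wait/    user  server program      arguments
# name     type           nowait         (path on disk)      (incl. program name)
ftp        stream  tcp    nowait   root  /usr/libexec/tcpd   ftpd -l
```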
The default inetd.conf file as it comes from Apple is shown in Listing 11.1. The # symbol in front
of each item indicates that the line is commented out and will not be run. Apple very wisely leaves all
these network services off by default. Many of them can be security holes, and it's best if you enable
them only as you need and understand them.
Listing 11.1 A Typical /etc/inetd.conf File
 1 #
 2 # Internet server configuration database
 3 #
 4 #	@(#)inetd.conf	5.4 (Berkeley) 6/30/90
 5 #
 6 # Items with double hashes in front (##) are not yet implemented in the OS.
 7 #
 8 #finger    stream  tcp     nowait  nobody  /usr/libexec/tcpd   fingerd -s
 9 #ftp       stream  tcp     nowait  root    /usr/libexec/tcpd   ftpd -l
10 #login     stream  tcp     nowait  root    /usr/libexec/tcpd   rlogind
11 #nntp      stream  tcp     nowait  usenet  /usr/libexec/tcpd   nntpd
12 #ntalk     dgram   udp     wait    root    /usr/libexec/tcpd   ntalkd
13 #shell     stream  tcp     nowait  root    /usr/libexec/tcpd   rshd
14 #telnet    stream  tcp     nowait  root    /usr/libexec/tcpd   telnetd
15 #uucpd     stream  tcp     nowait  root    /usr/libexec/tcpd   uucpd
16 #comsat    dgram   udp     wait    root    /usr/libexec/tcpd   comsat
17 #tftp      dgram   udp     wait    nobody  /usr/libexec/tcpd   tftpd /private/tftpboot
18 #bootp     dgram   udp     wait    root    /usr/libexec/tcpd   bootpd
19 ##pop3     stream  tcp     nowait  root    /usr/libexec/tcpd   /usr/local/libexec/popper
20 ##imap4    stream  tcp     nowait  root    /usr/libexec/tcpd   /usr/local/libexec/imapd
21 #
22 # "Small servers" -- used to be standard on, but we're more conservative
23 # about things due to Internet security concerns.  Only turn on what you
24 # need.
25 #
26 #chargen   stream  tcp     nowait  root    internal
27 #chargen   dgram   udp     wait    root    internal
28 #daytime   stream  tcp     nowait  root    internal
29 #daytime   dgram   udp     wait    root    internal
30 #discard   stream  tcp     nowait  root    internal
31 #discard   dgram   udp     wait    root    internal
32 #echo      stream  tcp     nowait  root    internal
33 #echo      dgram   udp     wait    root    internal
34 #time      stream  tcp     nowait  root    internal
35 #time      dgram   udp     wait    root    internal
36 #
37 # Kerberos (version 5) authenticated services
38 #
39 ##eklogin  stream  tcp     nowait  root    /usr/libexec/tcpd   klogind -k -c -e
40 ##klogin   stream  tcp     nowait  root    /usr/libexec/tcpd   klogind -k -c
41 ##kshd     stream  tcp     nowait  root    /usr/libexec/tcpd   kshd -k -c -A
42 #krb5_prop stream  tcp     nowait  root    /usr/libexec/tcpd   kpropd
43 #
44 # RPC based services (you MUST have portmapper running to use these)
45 #
46 ##rstatd/1-3   dgram   rpc/udp wait    root    /usr/libexec/tcpd   rpc.rstatd
47 ##rusersd/1-2  dgram   rpc/udp wait    root    /usr/libexec/tcpd   rpc.rusersd
48 ##walld/1      dgram   rpc/udp wait    root    /usr/libexec/tcpd   rpc.rwalld
49 ##pcnfsd/1-2   dgram   rpc/udp wait    root    /usr/libexec/tcpd   rpc.pcnfsd
50 ##rquotad/1    dgram   rpc/udp wait    root    /usr/libexec/tcpd   rpc.rquotad
51 ##sprayd/1     dgram   rpc/udp wait    root    /usr/libexec/tcpd   rpc.sprayd
52 #
53 # The following are not known to be useful, and should not be enabled unless
54 # you have a specific need for it and are aware of the possible consequences.
55 #
56 #exec      stream  tcp     nowait  root    /usr/libexec/tcpd   rexecd
57 #auth      stream  tcp     wait    root    /usr/libexec/tcpd   identd -w -t120
Briefly, the intent of the services on each line is as follows:
Line 8. The fingerd daemon enables external users to finger a user ID and find out whether the
ID exists; if it does, how recently and on what terminals the ID has been logged in.
Line 9. The ftpd daemon provides an FTP (file transfer protocol) server.
Line 10. The login service provides service for the rlogin remote login terminal program.
Don't turn this on.
Line 11. The nntp service is a Usenet newsgroups server. If your machine is configured to
receive news from other servers, you can point your newsreader to your local machine to read news.
Line 12. The ntalk (new protocol talk) daemon provides for real-time chat services. If you're
familiar with ICQ, iChat, or IRC, this service is somewhat similar.
Line 13. Provides remote shell service—another way to remotely access machines. This service
is required to use certain remote services, such as remote tape archive storage. Because Apple
hasn't provided all the software necessary to make full use of these services, we suggest that this
be left off as well; it's almost as large a security risk as rlogin and telnet.
Line 14. Provides the telnet daemon to allow remote telnet terminal connections. Don't
turn this on. Mac OS X already provides SSH, which can be used more securely for terminal access.
Line 15. The uucpd service implements the Unix-to-Unix copy protocol. This is an antiquated
method for networking Unix machines that can't always be connected to the network. Essentially,
it enables network traffic between two sites to be queued until both sites are available on the
network, and then exchanges the data. This service is of very limited utility today, and presents a
significant security risk because it hasn't really been maintained since the days of 1200-baud modems.
Line 16. The comsat daemon provides notification of incoming mail to mail-reader clients.
Line 17. tftp is trivial file transfer protocol, and is one of the methods of providing file service
to completely diskless network clients. You won't need to enable this service unless you're
providing network boot services for diskless Unix clients.
Line 18. bootp is a way of transmitting network configuration information to clients. Chances
are you'll use DHCP for this, if you have a need to do so, although it's possible that OS X Server
could use bootp for netboot clients.
Line 19. pop3 is a POPMail (Post Office Protocol Mail) server. In the file, Apple indicates that
this service is not yet available. This server would potentially be used if you were running a mail
server and installed a third-party POPMail server.
Line 20. imap4 is an IMAP mail server. Again, this service is not available as of the 10.2
release. This server would potentially be used if you were running a mail server and installed a
third-party IMAP mail server.
Lines 26–33. Provide a number of network and network-software diagnostic servers. Unless you
are performing network diagnosis and specifically need these, leave them off. They do not cause
any known security problems, but if you're not using them, they occupy resources needlessly.
Lines 34 and 35. Provide the time service. (Some servers require both stream and datagram
connectivity, and these must be defined on separate lines.) If you want your machine to be a time
server, these can be turned on.
Lines 39–42. Start a number of Kerberos (security authentication)–related servers, but most are
unavailable from Apple as of the 10.2 release. The krb5_prop service (starting kpropd) is the
server that propagates a master Kerberos server's database to slave servers.
Line 46. The rstatd daemon allows systems to connect through the network and get machine
status information.
Line 47. The rusersd daemon allows systems to connect through the network and to find
information about this system's users. This is generally considered to be a Bad Idea.
Line 48. The walld daemon enables users to write to the screens of all users on the system. This
facility is nice if you're root and need to tell your users that the machine is going to go down for
maintenance. It's annoying if one of your users starts using it to incessantly ask anyone connected
to the machine for help with trivial Unix problems.
Line 49. The pcnfsd daemon provides service for a PC network file system product named
pcnfs. Almost everybody uses samba instead nowadays.
Line 50. The rquotad daemon provides disk quota information to remote machines, so that
they can enforce quotas that your machine specifies on disks that it is serving to them.
Line 51. sprayd is another network diagnostic server. Simply put, it responds, as rapidly as it
can, to packets placed on the network by some other machine's spray process, which places
packets on the network as fast as it can. This one would be nice if Apple provided it in a later
release because it can be very useful for finding problem hardware in your network.
Line 56. The rexecd daemon allows for the remote execution of parts of programs. Apple
claims that it isn't known to be useful, but a programmer can make very good use of this service
to perform distributed processing tasks by sending parts of programs to many different machines.
Of course, it is also a security risk.
Line 57. Another service that Apple considers to be of no practical use. The identd daemon
provides a method for a remote machine to verify the identity of a user causing a connection,
inasmuch as any identity can be verified over the network. The service was created because it is
very easy for a user accessing, for example, a remote FTP site, to pretend to be a different user on
your system, and potentially cause trouble for the person he is pretending to be.
The network services defined in /etc/inetd.conf or in /etc/xinetd.d/ run from
the ports specified in the /etc/services file. The /etc/services file defines the
port numbers in three sections: Well Known Port Numbers, Registered Port Numbers, and
Dynamic and/or Private Ports. Currently the Internet Assigned Numbers Authority (IANA)
coordinates port assignments. The assignments used to be maintained in an RFC, but as the
Internet grew, the RFC was replaced by the IANA registry. If you decide to run a
service from inetd or xinetd on a nonstandard port, update your /etc/services file
to include a port number and service name. You might check IANA for the latest port
assignment information. However, if you decide to run a service on a port that is officially
used for another service, you won't experience any problems unless you are also trying to run
the service that is supposed to run on that port. If you should end up having to run a service
on a nonstandard port, the only one who will be confused is the attacker probing your machine.
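For example, if you moved an FTP server to a nonstandard port, you might add a pair of /etc/services entries like the following (the name ftp2 and port 2021 are invented for illustration):

```
ftp2    2021/tcp    # second FTP daemon on a nonstandard port
ftp2    2021/udp
```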
Startup Items
Some services start from /System/Library/StartupItems at boot time. Unlike the types of
services that are controlled by inetd, these services are most efficient if they are running all the time.
These services are root-owned daemon processes that run continuously, listening for connections and
forking off new client-handling processes under restricted privileges. inetd and xinetd are two such
services. Some of these services have additional controls in the /etc/hostconfig file that the
startup scripts check. For example, the SSH server, the AppleShare server, and the mail server are
controlled in such a fashion.
If you enabled those services, but later decide to disable them, set the appropriate /etc/hostconfig
variable to -NO- and kill their current processes. The next time you reboot, the services won't start. To
manually start one of those services, set the appropriate variable in /etc/hostconfig to -YES- and
manually execute the startup script. How this is done depends on the startup script. For some, using
SystemStarter (SystemStarter start <path-to-service-directory>) may be
appropriate. For others, passing the start action to the script (<startup script> start) might
work instead. For others, simply executing the startup script without passing the start action is sufficient.
For example, for the SSH server, you would make the SSHSERVER line in /etc/hostconfig read
SSHSERVER=-YES-. Then you would execute /System/Library/StartupItems/SSH/SSH.
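The disabling step can be sketched as follows. The sketch edits a copy of a hostconfig-style file so it is safe to run anywhere; on a real system you would edit /etc/hostconfig itself as root, and then kill the running daemon's process:

```shell
# Flip SSHSERVER from -YES- to -NO- in a hostconfig-style file.
# A two-line sample stands in for /etc/hostconfig.
printf 'AFPSERVER=-NO-\nSSHSERVER=-YES-\n' > /tmp/hostconfig.sample
sed 's/^SSHSERVER=-YES-$/SSHSERVER=-NO-/' /tmp/hostconfig.sample > /tmp/hostconfig.edited
grep '^SSHSERVER' /tmp/hostconfig.edited   # prints: SSHSERVER=-NO-
```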
To disable a service that doesn't have a control in /etc/hostconfig, simply rename the startup
script and kill its process.
Some third-party packages put startup scripts in /Library/StartupItems. To disable a service
whose startup script is located in this directory, rename the startup script or the directory it is in and kill
its process.
Protecting inetd with TCP Wrappers
A common way to restrict access to some TCP services is to use the TCP Wrappers program. TCP
Wrappers is a package that monitors and filters requests for TCP (Transmission Control Protocol)
services. We don't look at the protocol in any detail here—that's a book subject in itself. Suffice it to say
the protocol has enough control information in it that we can use a package like TCP Wrappers to filter
some of that traffic. TCP Wrappers can be used to restrict certain network services to individual
computers or networks.
To make use of this program on some flavors of Unix, TCP Wrappers must be installed by the system
administrator. This isn't a necessary step in Mac OS X because the TCP Wrappers program comes
preinstalled on the system. The /etc/inetd.conf file in Mac OS X already assumes that you use
TCP Wrappers, as evidenced by a line such as the following:
#ftp	stream	tcp	nowait	root	/usr/libexec/tcpd	ftpd -l
The /usr/libexec/tcpd portion of the preceding line indicates that TCP Wrappers (tcpd) is used to call
the actual service, in this case ftpd.
TCP Wrappers can block access to a service, but it can't refuse the connection.
When you use TCP Wrappers to protect an inetd service, it lets you prevent a remote
machine from connecting to that service, but the connection to inetd still happens. Using
tcpd to protect a service in this fashion can prevent the internal resource depletion effects
of a DoS attack, but inetd itself still must accept the connection and pass it to tcpd.
Because of this limitation, a DoS attack against inetd, even on a completely TCP Wrapped
machine, can impact performance and resources.
TCP Wrappers therefore should be thought of primarily as a way to limit access to services
to hosts that you want accessing them, rather than a way of securing inetd itself.
Wrapping inetd Processes
The particularly difficult part about using TCP Wrappers is configuring it. We will look at two ways you
can configure TCP Wrappers in OS X: the traditional method of using two control files and an alternate
method that uses only one control file.
Traditionally, TCP Wrappers has two control files, /etc/hosts.allow and /etc/hosts.deny.
We will look at the traditional method in greater detail because it is the default setup for a machine when
extended processing options are not enabled. An understanding of the traditional method should carry
over to the alternate method. Be sure to read the hosts_access and hosts_options man pages
for detailed information.
Here is the format of the access control files:
daemon_list : client_list : option : option ...
Through /etc/hosts.allow you can allow specific services for specific hosts.
Through /etc/hosts.deny you can deny services to hosts, and provide global exceptions.
The easiest way to think of and use these configuration files is to think of TCP Wrappers as putting a big
fence up around all the services on your machine.
The specifications in /etc/hosts.deny are most typically used to tell the fence what services are on
the outside of the fence, and therefore for which access is not denied (that is, the specifications provide
exceptions to the deny rule). The fence can appear to be around different sets of services for different
clients. For example, an /etc/hosts.deny file might look like
ALL EXCEPT ftpd : 192.168.1. : banners /usr/libexec/banners
ALL : : banners /usr/libexec/banners
ALL EXCEPT ftpd sshd : ALL : banners /usr/libexec/banners
This file says
For the subdomain 192.168.1., deny all connections except connections to the FTP daemon, ftpd.
For the specific machines and (maybe they're
troublemakers) deny absolutely all connections.
For all other IP addresses, deny everything except connections to ftpd and to the secure-shell
daemon sshd.
The banners /usr/libexec/banners entry is an option that tells tcpd that if it denies a
connection to a service based on this entry, try to find an explanation file in this location. Use this option
if you have a need to provide an explanation as to why the service is not available. The banners option
can be used when tcpd is compiled with the -DPROCESS_OPTIONS option. A makefile that can
build the banners comes with the TCP Wrappers distribution. Make a prototype banner called
prototype, run make to create a banner for the service, and place the banner file in the appropriate
directory; in this case, /usr/libexec/banners. This option can be used primarily for ftpd,
telnetd, and rlogind. For Mac OS X, you would probably have to compile your own tcpd to get
this to work.
The specifications in /etc/hosts.allow make little gates through the fences erected by /etc/
hosts.deny for specific host and service combinations. For example, an /etc/hosts.allow file
might look like the following:
ALL : 192.168.2. 192.168.3.
popper :
This file says
Allow connections to any TCP service from the host and all hosts in the
192.168.2. and 192.168.3. subdomains. (Perhaps the 192.168.2. and 192.168.3.
subdomains are known highly secure networks because they're subnets within our own corporate
LAN, and we really trust because it's so well run.)
Allow connections to the popper (POPMail v3 daemon) service for three specific
machines:,, and If used in
combination with the previous /etc/hosts.deny file, these allowances still stand. They
override the denials in /etc/hosts.deny, so even though the 192.168.1. subdomain is
denied all access except to ftpd by /etc/hosts.deny, the specific machine has its own private gate that allows it access to the popper service as well.
Services with a smile or without? There can be a bit of confusion as to the name of the
service to put in an /etc/hosts.allow or /etc/hosts.deny file. If it's a service out
of inetd.conf, generally the name to use is the service name from the left-most column
of the file. If this doesn't work, try adding a d to the end of the service name (ftp ->
ftpd). If that doesn't work, then try the program name as used in the right-most column of
the inetd.conf file or as started by your rc files.
Other services use names that don't seem to be recorded officially anywhere. These
sometimes require a bit of experimenting on your part, but the service names inserted into
the Services NetInfo map are usually the hint you need.
If all else fails, read the man pages for the service.
Now that you have seen how the traditional method of controlling TCP Wrappers works, let's take a
brief look at an alternate method that uses only the /etc/hosts.allow file. This method can be
used on systems where extended option processing has been enabled. This is indeed the case with OS X.
Nevertheless, either method works in OS X.
In the single file, /etc/hosts.allow, you specify allow and deny rules together. With the /etc/
hosts.allow-only method, tcpd reads the file on a first-match-wins basis. Consequently, it is
important that your allow rules appear before your deny rules.
For example, to restrict access to ftpd to only our host,, we would use these rules:
ftpd: localhost: ALLOW
ftpd: ALL: DENY
In the first line, the host,, is allowed access to ftpd using the various addresses that it
knows for itself. On the second line, access to all other hosts is denied. If you reversed these lines, even
the host that you wanted to allow ftpd access would be denied access.
After you have sufficiently tested that you have properly set up your allow and deny rules, there is
nothing else you need to do to keep TCP Wrappers running. As you are testing your rules, check your
logs carefully to see where, if at all, the behaviors are logged. You will rarely see entries for tcpd itself
in your logs, but you may see additional logging for a wrapped service under that service. The best place
to check is /var/log/system.log.
Increasing Security with xinetd
Starting with the Mac OS X 10.2 distribution, xinetd is used as the default Internet services daemon.
The xinetd package, the extended Internet services daemon, is highly configurable and can provide
access controls for both TCP and UDP. Among the controls for a given service that can be configured
are the number of simultaneous servers, the number of connections permitted from a given source,
access times, allowing and denying access to certain hosts, and limiting the rate of incoming
connections. Some of these controls can help reduce denial of service attacks on your machine. xinetd
can even redirect a service to a port on another machine, or to another interface on the same machine.
This would be particularly useful if xinetd is running on your firewall machine.
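As a preview, a hypothetical /etc/xinetd.d/ftp entry exercising several of these controls might look like the following. The attribute names (instances, per_source, cps, only_from, access_times) are from the xinetd.conf man page; the specific values are invented:

```
service ftp
{
    disable         = no
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/libexec/ftpd
    server_args     = -l
    instances       = 10            # max simultaneous servers
    per_source      = 3             # max connections per source address
    cps             = 25 30         # 25 conn/sec max; pause 30s if exceeded
    only_from       =  # allow only the local subnet
    access_times    = 08:00-18:00   # accept connections only in this window
}
```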
Using xinetd can limit the risk of DoS attacks, but not eliminate it.
Although xinetd can be configured to limit access to services in a number of ways, and is
superior to inetd in its capacity to reduce the resource consumption caused by a DoS
attack, it is not a complete solution. xinetd still must receive and process each network
service request. Even if a particular request is going to be denied by the rules, it still
consumes network bandwidth, and it still consumes processing power to detect that it's a request that should be refused.
The only way to completely eliminate the effect of DoS attacks against your machine is to
get them filtered out before they reach your hardware. This is usually accomplished at the
router that feeds your network. If your service provider can't provide this service to you,
blocking DoS attacks at the packet-filter level (see the discussion on Carrafix and traffic
shaping in Chapter 17, "Blocking Network Access: Firewalls") is most efficient at reducing
the impact on your machine.
Although xinetd is a highly configurable inetd replacement that promotes increased security, it is
not immune to vulnerabilities. The following list describes recent xinetd vulnerabilities.
xinetd Open File Descriptor Denial of Service Vulnerability (CAN-2002-0871, BugTraq ID
5458). Services launched by xinetd inherit file descriptors from the signal pipe. An attacker
could then launch a denial of service attack via the signal pipe. So far there are no known
instances of this vulnerability being exploited. The vulnerability is fixed in xinetd 2.3.7.
Multiple xinetd Vulnerabilities (CAN-2001-1389, BugTraq ID 3257). Buffer overflows and
improper NULL termination exist in versions of xinetd before version 2.3.3. These
vulnerabilities can lead to denial of service attacks or remote root compromise. The
vulnerabilities are fixed in version 2.3.3.
Zero Length Buffer Overflow Vulnerability (CAN-2001-0825, BugTraq ID 2971). Affected versions of
xinetd improperly handle string data in some internal functions. As a result, a
buffer overflow can occur when a length argument with a value less than or equal to zero is
passed to one of these internal functions. This could result either in a root compromise on the
machine, or in denial of service for services started by xinetd, if xinetd crashes. Fixes that
were originally available for this vulnerability may not completely fix the problem, but the
problem is fixed in version 2.3.3.
Installing xinetd
The first vulnerability listed in the previous section, CAN-2002-0871, indicates that a fix is available in
xinetd 2.3.7. Mac OS X 10.2, however, originally came with xinetd 2.3.5. If you check your logs,
and you see an entry like the one below, you have a version of xinetd with the particular denial of
service vulnerability contained in CAN-2002-0871.
Aug 26 18:13:49 Sage-Rays-Computer xinetd[1112]: xinetd Version 2.3.5
started with
libwrap options compiled in.
Make sure you apply the latest updates to replace your version of xinetd. You can always download
the most recent version from As of this writing, the latest version is 2.3.11. If
you upgrade an application yourself before Apple provides an update, save your newer version
somewhere, in case Software Update should overwrite your version with something that might not be as
current. If Software Update overwrites your version with the same or newer, you can probably remove
yours entirely, unless you had customized your version.
xinetd follows this basic format for compilation and installation:
./configure
make
make install
A few compile-time options that you can pass to configure are documented in Table 11.1. The install
step must be done as root. Depending on how you want to maintain your machine, you may just prefer
to copy the binary to /usr/libexec/, or you may want to store your updated version of xinetd in
a completely separate location. To keep your updated version of xinetd as capable as the one Apple
provides, we recommend at least running configure with the --with-libwrap option. The
libwrap path is /usr/lib/libwrap.a. Make sure you keep backup copies of the original
xinetd and its configuration file, /etc/xinetd.conf.
Table 11.1. Compile-Time Options for xinetd
--prefix=PREFIX      Specifies the directory prefix for installing xinetd. The default is
                     /usr/local.
--with-libwrap=PATH  Compiles in support for TCP Wrappers. With this option on, xinetd
                     first looks at the TCP Wrappers controls file(s). If access is granted,
                     xinetd then continues on to its access controls.
--with-loadavg       Compiles in support for the max_load configuration option, which
                     causes a service to stop accepting connections when the specified load
                     has been reached. The option is currently supported only on Linux and
                     FreeBSD.
--with-inet6         Causes services to default to IPv6. However, IPv6 support is now fully
                     integrated into xinetd, rendering this option meaningless.
If you decide to install xinetd on Mac OS X 10.1 or earlier and want to compile in libwrap
support, you need to download some libwrap files first. They are available from Apple's Darwin
open source site if you select the tcp-wrappers download. The source builds its files in /tmp/.
If you want to put the files in /usr/local/, make /usr/local/lib/ and /usr/local/include/
directories, if they do not already exist. If you want to install the files elsewhere, replace the
/usr/local/ references as appropriate. Then at the root of the source directory, do the following:
make RC_ARCHS=ppc install
cp /tmp/tcp_wrappers/Release/usr/local/lib/libwrap.a /usr/local/lib/
ranlib /usr/local/lib/libwrap.a
cp /tmp/tcp_wrappers/Release/usr/local/include/tcpd.h /usr/local/include/
In Mac OS X, xinetd runs by default as xinetd -pidfile /var/run/xinetd.pid.
However, more runtime options are also available for xinetd; they are listed in Table 11.2.
Table 11.2. Runtime Options for xinetd
-d
Enables debug mode.
-syslog <syslog_facility> Enables syslog logging of xinetd-produced messages using
the specified syslog facility. The following syslog facilities
may be used: daemon, auth, user, local[0-7].
Ineffective in debug mode.
-filelog <log_file>
Specifies where to log xinetd-produced messages.
Ineffective in debug mode.
-f <config_file>
Specifies which file to use as the config file. Default is /etc/xinetd.conf.
-pidfile <pid_file>
Writes the process ID to the file specified. Ineffective in debug
mode. Apple starts xinetd with this option:
xinetd -pidfile /var/run/xinetd.pid
-stayalive
Tells xinetd to stay running even if no services are specified.
-limit <proc_limit>
Limits the number of concurrently running processes that can
be started by xinetd.
-logprocs <limit>
Limits the number of concurrently running servers for remote
user ID acquisition.
-cc <interval>
Performs consistency checks on its internal state every
<interval> seconds.
Configuring xinetd
The default /etc/xinetd.conf file that comes with Mac OS X 10.2 is shown in Listing 11.2.
Listing 11.2 The Default /etc/xinetd.conf File
1 # man xinetd.conf for more information
2
3 defaults
4 {
5         instances       = 60
6         log_type        = SYSLOG daemon
7         log_on_success  = HOST PID
8         log_on_failure  = HOST
9         cps             = 25 30
10 }
11
12 includedir /etc/xinetd.d
The /etc/xinetd.conf file looks very different from the /etc/inetd.conf file. This file has
two major sections to it: a defaults section and a services section. The defaults section has controls that
are basic defaults for the services. Each service has further controls and can also override or augment
controls listed in the defaults section. Briefly, the intent of the lines of this file is as follows:
Line 3 labels the defaults section of the file.
Line 4 starts the configuration for the defaults section of the file.
Line 5 sets the first defaults attribute, instances, which specifies the limit of servers for a
given service to 60.
Line 6 sets the log_type attribute to the SYSLOG facility at the daemon level.
Line 7 sets the log_on_success attribute to HOST, which logs the remote host's IP address,
and PID, the process ID of the server.
Line 8 sets the log_on_failure attribute to HOST, which logs the remote host's IP address.
Line 9 sets the cps attribute, the one that limits the connections per second, to 25 connections
per second. When this limit is reached, the service disables itself for the number of seconds
specified in the second argument—30 seconds in this case.
Line 10 ends the defaults configuration section.
Line 12 starts the services section by using the includedir directive to specify that every file
in the /etc/xinetd.d directory, excluding files containing . or ~, is parsed as an xinetd
configuration file. The files are parsed in alphabetical order according to the C locale.
Already you can tell that xinetd has more functionality than the traditional inetd. For instance,
inetd cannot limit the number of connections per second. The items listed in this default /etc/
xinetd.conf file are not the only ones that can be listed in this section, nor are the default values
necessarily the only possible values. Table 11.3 shows a listing of available attributes for xinetd.
Table 11.3. Available Attributes for xinetd
id
Used to uniquely identify a service. Useful for services that can use different
protocols and need to be described with different entries in the configuration
file. Default service ID is the same as the service name.
type
Any combination of the following can be used:
RPC: Specifies service as an RPC service.
INTERNAL: Specifies service as provided by xinetd.
UNLISTED: Specifies that the service is not listed in a standard system file,
such as /etc/services or /etc/rpc.
flags
Any combination of the following can be used:
INTERCEPT: Intercepts packets or accepted connections to verify that they are
coming from acceptable locations. Internal or multithreaded services cannot be
intercepted.
NORETRY: Avoids retry attempts in case of fork failure.
IDONLY: Accepts connections only when the remote end identifies the remote
user. Applies only to connection-based services.
NAMEINARGS: Causes the first argument to server_args to be the name of
the server. Useful for using TCP Wrappers.
NODELAY: For a TCP service, sets the TCP_NODELAY flag on the socket. Has
no effect on other types of services.
DISABLE: Specifies that this service is to be disabled. Overrides the enabled
directive in defaults.
KEEPALIVE: For a TCP service, sets the SO_KEEPALIVE flag on the socket.
Has no effect on other types of services.
NOLIBWRAP: Disables internal calling of the tcpwrap library to determine
access to the service.
SENSOR: Replaces the service with a sensor that detects accesses to the
specified port. Does not detect stealth scans. Should be used only on services
you know you don't need. Whenever a connection is made to the service's port,
adds the IP address to a global no_access list until the deny_time setting
expires.
IPv4: Sets the service to an IPv4 service.
IPv6: Sets the service to an IPv6 service.
disable
Has a value of yes or no. Overrides the enabled directive in defaults.
socket_type
Has a value of stream, dgram, raw, or seqpacket.
protocol
Specifies the protocol used by the service. Protocol must exist in /etc/
protocols. If it is not defined, the default protocol for the service is used.
wait
Specifies whether the service is single-threaded or multithreaded. If yes, it is
single-threaded; xinetd starts the service and stops handling requests for the
service until the server dies. If no, it is multithreaded; xinetd keeps handling
new service requests.
user
Specifies the UID for the server process. Username must exist in /etc/passwd.
group
Specifies the GID for the server process. Group must exist in /etc/group. If
a group is not specified, the group of the user is used.
instances
Determines the number of simultaneous instances of the server. Default is
unlimited. The value can be an integer or UNLIMITED.
nice
Specifies server priority.
server
Specifies the program to execute for this service.
server_args
Specifies arguments to be passed to the server. Server name should not be
included, unless the NAMEINARGS flag has been specified.
only_from
Specifies to which remote hosts the service is available. Can be specified as:
A numeric address in the form %d.%d.%d.%d. 0 is a wildcard. IPv6 hosts may
be specified as abcd:ef01::2345:6789.
A factorized address in the form of %d.%d.%d.{%d,%d,...}. There is no
need for all four components (that is, %d.%d.{%d,%d,...%d} is also okay).
However, the factorized part must be at the end of the address. Does not work
for IPv6.
A network name (from /etc/networks). Does not work for IPv6.
A hostname, or a domain name in the form of .domain.com.
An IP address/netmask range in the form of 1.2.3.4/32. IPv6 address/
netmask ranges in the form of 1234::/46 are also valid.
Specifying this attribute without a value makes the service available to nobody.
no_access
Specifies the remote hosts to which this service is not available. Value can be
specified in the same forms as for only_from. When neither only_from
nor no_access is specified, the service is available to anyone. If both are
listed, the one that is the better match for the host determines availability of the
service to the host. For example, if only_from lists a network and
no_access lists one host on that network, that host does not have access.
access_times
Specifies time intervals when the service is available. An interval has the form
hour:min-hour:min. Hours can range from 0-23; minutes can range from 0-59.
log_type
Specifies where service log output is sent. May either be SYSLOG or FILE, as
follows:
SYSLOG <syslog_facility> [<syslog_level>]
Possible facility names include daemon, auth, authpriv, user,
local[0-7]. Possible level names include emerg, alert, crit, err,
warning, notice, info, debug. If a level is not present, the messages will
be recorded at the info level.
FILE <file> [<soft_limit>] [<hard_limit>]
Log output is appended to <file>, which is created if it does not exist.
log_on_success
Specifies what information is logged when the server is started and exits. Any
combination of the following can be specified:
PID: Logs the server process ID.
HOST: Logs the remote host's address.
USERID: Logs the remote user ID using the RFC 1413 identification protocol.
Only available for multithreaded stream services.
EXIT: Logs the fact that the server exited, along with the exit status or
termination signal.
DURATION: Logs the duration of the server session.
log_on_failure
Specifies what is logged when a server cannot start, either from lack of
resources or access configuration. Any combination of the following can be
specified:
HOST: Logs the remote host's address.
USERID: Logs the remote user ID using the RFC 1413 identification protocol.
Available for multithreaded stream services only.
RECORD: Logs as much information about the remote host as possible.
ATTEMPT: Logs the fact that a failed attempt was made. Implied by use of any
of the other options.
rpc_version
Specifies the RPC version of an RPC service. Can be a single number or a range
in the form of number-number.
rpc_number
Specifies the number for an unlisted RPC service.
env
Value of this attribute is a list of strings of the form <name>=<value>. These
strings are added to the server's environment, giving it xinetd's environment
as well as the environment specified by the env attribute.
passenv
Value of this attribute is a list of environment variables from xinetd's
environment to be passed to the server. An empty list implies passing no
variables to the server except those explicitly defined by the env attribute.
port
Specifies the service port. If this attribute is listed for a service in /etc/
services, it must be the same as the port number listed in that file.
redirect
Allows a TCP service to be redirected to another host. Useful for when your
internal machines are not visible to the outside world. Syntax is
redirect = <IP address or host name> <port>
The server attribute is not required when this attribute is specified. If the
server attribute is specified, this attribute takes priority.
bind
Allows a service to be bound to a specific interface on the machine.
interface
Synonym for bind.
banner
Name of the file to be displayed to the remote host when a connection to that
service is made. The banner is displayed regardless of access control.
banner_success
Name of the file to be displayed to the remote host when a connection to that
service is granted. Banner is displayed as soon as access to the service is
granted.
banner_fail
Name of the file to be displayed to the remote host when a connection to a
service is denied. Banner is printed immediately upon denial of access.
per_source
Specifies the maximum number of connections permitted per server per source
IP address. May be an integer or UNLIMITED.
cps
Limits the rate of incoming connections. Takes two arguments. The first is the
number of connections per second. If the number of connections per second
exceeds this rate, the server is temporarily disabled. The second argument
specifies the number of seconds to wait before reenabling the server.
groups
Takes either yes or no. If yes, the server is executed with access to the groups
to which the server's effective UID has access. If no, the server runs with no
supplementary groups. Must be set to yes for many BSD-flavored versions of
Unix.
umask
Sets the inherited umask for the service. Expects an octal value. May be set in
the defaults section to set a umask for all services. xinetd sets its own
umask to the previous umask ORed with 022. This is the umask inherited by
child processes if the umask attribute is not set.
enabled
Takes a list of service names to enable. Note that the service disable attribute
and the DISABLE flag can prevent a service from being enabled despite its being
listed in this attribute.
include
Takes a filename in the form of include /etc/xinetd/service. The file is
then parsed as a new configuration file. May not be specified from within a
service declaration.
includedir
Takes a directory name in the form of includedir /etc/xinetd.d.
Every file in the directory, excluding files containing . or ending with ~, is
parsed as an xinetd.conf file. Files are parsed in alphabetical order
according to the C locale. May not be specified within a service declaration.
rlimit_cpu
Sets the maximum number of CPU seconds that the service may use. May either
be a positive integer or UNLIMITED.
rlimit_data
Sets the maximum data resource size limit for the service. May either be a
positive integer representing the number of bytes or UNLIMITED.
rlimit_rss
Sets the maximum resident set size limit for the service. Setting this value low
makes the process a likely candidate for swapping out to disk when memory is
low. One parameter is required, which is either a positive integer representing
the number of bytes or UNLIMITED.
rlimit_stack
Sets the maximum stack size limit for the service. One parameter is required,
which is either a positive integer representing the number of bytes or
UNLIMITED.
deny_time
Sets the time span during which access to all xinetd services is denied to an
IP address that sets off the SENSOR. Must be used in conjunction with the
SENSOR flag. Options are
FOREVER: The IP address is not purged until xinetd is restarted.
NEVER: Just logs the offending IP address.
<number>: A numerical value of time in minutes. A typical time would be 60
minutes, to stop most DoS attacks while allowing IP addresses coming from a
pool to be recycled for legitimate purposes.
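To see how the access-control attributes combine in practice, here is a hypothetical service entry (the addresses, times, and limits are illustrative values, not recommendations) that restricts a service to one local network, blocks a single host on it, limits the hours of availability, and throttles connections:

```
service ftp
{
        disable         = no
        only_from       = 192.168.1.0/24
        no_access       = 192.168.1.50
        access_times    = 08:00-18:00
        cps             = 10 60
        per_source      = 2
}
```

With this entry, the host 192.168.1.50 is refused even though it falls inside the only_from network, because no_access is the better (more specific) match.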
OS X has default xinetd configuration files for the following services:
% ls /etc/xinetd.d
chargen-udp echo
As you can see, services that require two lines in /etc/inetd.conf, such as time, require two files
in /etc/xinetd.d. Listing 11.3 includes the default listings for the ftp, time, and time-udp
services.
Listing 11.3 Default xinetd Configuration Files for the ftp, time, and time-udp Services
1 service ftp
2 {
3         disable         = yes
4         socket_type     = stream
5         wait            = no
6         user            = root
7         server          = /usr/libexec/ftpd
8         server_args     = -l
9         groups          = yes
10        flags           = REUSE
11 }

1 service time
2 {
3         disable         = yes
4         type            = INTERNAL
5         id              = time-stream
6         socket_type     = stream
7         protocol        = tcp
8         user            = root
9         wait            = no
10 }

1 service time-udp
2 {
3         disable         = yes
4         type            = INTERNAL
5         id              = time-udp
6         socket_type     = dgram
7         protocol        = udp
8         user            = root
9         wait            = yes
10 }
Because FTP is a service that you might possibly enable, let's take a brief look at the attributes of the
default /etc/xinetd.d/ftp file from Listing 11.3:
Line 3 sets the first attribute, disable, to yes. This means that by default, the FTP service is
disabled. /etc/inetd.conf simply has a # in front of a service to disable it.
Line 4 sets the socket_type attribute to stream. This was the second item in the ftp line
of /etc/inetd.conf.
Line 5 sets the wait attribute to no. This was the third item in the ftp line of /etc/inetd.conf.
Line 6 sets the user attribute to root. This was the fourth item in the ftp line of /etc/inetd.conf.
Line 7 sets the server attribute to /usr/libexec/ftpd. This was the fifth item in the ftp
line of /etc/inetd.conf.
Line 8 sets the server_args attribute to -l. This was the final item in the ftp line of /etc/inetd.conf.
Because this attribute is required for all of your xinetd services, you could also move it to the
defaults section of /etc/xinetd.conf and then remove it from the individual service files.
Line 10 sets the flags attribute to REUSE. This seems to be an undocumented flag, but online
wisdom indicates that it is a good flag to use.
As was the case with the /etc/inetd.conf file, the time service contains the same major
descriptors, but in a different form. Unlike the ftp xinetd configuration file, the time and time-
udp files also include the id attribute to uniquely identify the services.
Perhaps one of the most notable differences between the default /etc/inetd.conf file and the /
etc/xinetd.d/ftp file is that the server is set to /usr/libexec/tcpd in the inetd.conf
file, but in the ftp file, it is set to /usr/libexec/ftpd. Because inetd is not as configurable, it is
important to use TCP Wrappers. However, you can configure host access information directly in
xinetd without having to use TCP Wrappers. We recommend that you make use of that built-in
capability. Additionally, xinetd includes support for displaying banners for services.
If you want to enable any of the default services controlled by xinetd, change the disable entry to
no and restart xinetd by sending it a HUP signal using either of the following methods:
kill -HUP <xinetd_pid>
killall -HUP xinetd
Likewise, if you want to change any of the default configuration files, or add services not included in the
initial set of default configuration files, simply restart xinetd to have them take effect. Listing 11.4
includes recommended xinetd configurations for services that might be of interest to you. If you want
to change any of the defaults that appear in /etc/xinetd.conf for a given service, be sure to
include that updated attribute in the service's file.
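Enabling a service by hand amounts to flipping its disable attribute and then nudging xinetd. The helper below is our own sketch, not an Apple tool; the SERVICE_DIR default and the killall step assume the Mac OS X 10.2 layout:

```shell
#!/bin/sh
# enable_service: change "disable = yes" to "disable = no" in an
# xinetd service file. SERVICE_DIR defaults to the Mac OS X location.
SERVICE_DIR=${SERVICE_DIR:-/etc/xinetd.d}

enable_service() {
    f="$SERVICE_DIR/$1"
    # Rewrite only the disable attribute; leave all other lines intact.
    sed 's/^\([[:space:]]*disable[[:space:]]*=[[:space:]]*\)yes/\1no/' \
        "$f" > "$f.tmp" && mv "$f.tmp" "$f"
}

# After editing, tell xinetd to reread its configuration:
#   killall -HUP xinetd
```

Using sed on the one attribute, rather than rewriting the whole file, preserves any other customizations you have made to the service entry.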
Finally, we also recommend that you reverse the starting order of inetd and xinetd in /System/
Library/StartupItems/IPServices/IPServices. As of this writing, inetd starts before
xinetd, but we recommend that you change the lines to read as follows:
xinetd -pidfile /var/run/xinetd.pid
inetd
This change will ensure that any services you start via the System Preferences are controlled as
they were intended to be.
Listing 11.4 Recommended Basic xinetd Configurations
service ftp
{
        disable         = no
        only_from       = <host list>
        no_access       = <host list>
        access_times    = <time intervals>
}
service imap
service pop3
service swat
(Note that the swat service also needs a corresponding
swat 901/tcp
line in /etc/services)
Wrapping xinetd Processes
Although xinetd already has built-in host access restriction capabilities, if you decide that you would
rather use TCP Wrappers on a service controlled by xinetd, you need to add a flag, NAMEINARGS, to
the service and further expand the server_args line to include the full path to the service. Replace
the original path to the server with the path to tcpd. Here's an example for using TCP Wrappers for
restricting access to the FTP service in xinetd:
service ftp
{
        disable         = no
        socket_type     = stream
        wait            = no
        user            = root
        flags           = REUSE NAMEINARGS
        server          = /usr/libexec/tcpd
        server_args     = /usr/libexec/ftpd -l
        groups          = yes
}
The primary problem with inetd itself is that it can be used in denial of service attacks on your
machine. As the Internet services superserver, it can start many services that are prone to various
vulnerabilities. Therefore, you do not want to turn on any services that you don't need to have running.
To further increase inetd security, you can make use of the TCP Wrappers package, tcpd, that comes
with OS X. With TCP Wrappers, you can allow or deny access to TCP services that are started by
inetd on a per-host or per-network basis.
Likewise, the primary problem with the default xinetd is that, in spite of its built-in host restriction
capabilities, it is also prone to denial of service attacks. Nonetheless, with xinetd you can not only
control "allow" and "deny" access for services, but also many other aspects of a service.
For example, for a given service, you can control the number of servers running, the server priority
level, access times, number of connections per source IP address, the rate of incoming connections, the
maximum number of CPU seconds, and maximum data size. Such controls can reduce the impact of a
denial of service attack on your machine.
Given that the histories of both inetd and xinetd include at least one serious vulnerability
apiece, it is important to keep current on both of these daemons. For inetd, apply any updates
from Apple that include inetd fixes. At a minimum, do the same for xinetd; for additional
protection, install the latest version of xinetd yourself.
Chapter 12. FTP Security
FTP Vulnerabilities
Activating the FTP Server
Configuring the Default lukemftpd FTP Server
Setting Up Anonymous FTP
Replacing the Mac OS X FTP Server
Alternatives to FTP
One of the primary benefits of connecting computers together with a network is being able to use the
network connection to move files between them. The FTP (File Transfer Protocol) service is an early
and quite ubiquitous method designed to support this activity. Unfortunately, the protocol was invented
during the more naive days of computer network development. And so in addition to the predictable
security flaws that are due to bugs, it sports a plethora of intentional features that today we must struggle
with to make secure. As you will see, these range from things that today we see as anachronistically
optimistic design to features that were clearly a bad idea from day one.
Still, FTP, with all its inherent problems, isn't going away anytime soon. To work and play well in the
networked world, FTP support is all but obligatory, and to be supported well, it needs to be secured. To
secure it properly, an understanding of the protocol and its inherent vulnerabilities is required.
FTP Vulnerabilities
As you may already have guessed, FTP's largest vulnerability is essentially the design of the protocol.
Having been created in a day when today's constant network security threats simply didn't exist, it wasn't
designed with the requisite paranoid mindset to be secure. In fact, so trusting was the nature of its
designers that it supports a number of features that allow it to be made explicitly insecure. The security
concerns range from vulnerability to denial-of-service attacks to the ability to configure it to execute
arbitrary system commands for remote users. And these are in addition to the vulnerabilities that are due
to bugs in the code. The primary flaws about which you must concern yourself are as follows:
FTP is a cleartext protocol. User IDs and passwords are carried over a cleartext connection,
enabling nefarious individuals to snoop your users' names and passwords off the wire. Even if
you don't allow remote logins of any form, if your users connect via FTP in an unprotected
manner (and it's very difficult to force them to use a protected method—more on this later in this
chapter), they're giving away the information necessary to log in with their ID at the console.
FTP is a two-channel protocol, with "commands" being carried over one network connection and
"data" being carried over another. Protecting the command channel isn't too difficult, and doing
so protects user IDs and passwords, but because of the design, protecting the data channel is more
difficult. Consequently, the data channel is not protected, and the contents of file transfers are
open to snooping.
The flexibility designed into the protocol is overly optimistic. The protocol's two-channel design
is intended to allow a command channel from machine A to machine B, to invoke a data transfer
from machine B to machine C (or vice versa). While conceptually interesting, and quite useful
for allowing FTP connections to work from behind firewalls, this flexibility also allows a
renegade machine A to access data on machine C from any other machine on the Internet (any
machine running an insecure FTP server, that is), even if machine C has explicitly blocked all
access from machine A.
The FTP server is frequently run from inetd or xinetd (Chapter 11, "Introduction to Mac OS
X Network Services"), requiring a process to be spawned for every incoming connection,
whether it's a valid/successful connection or not. You can throttle connections or limit the
number of connections to your FTP server by using xinetd rather than inetd, but the design
consumes resources for any connection, thereby allowing DoS attacks and inappropriate
consumption of resources.
An additional design feature available in some server implementations enables you to configure
the server to accept new, site-specific commands. This allows FTP site administrators to put the
protocol to flexible uses in very site-specific contexts, but less-than-careful configuration can
allow the remote user to execute arbitrary commands with the permissions of the FTP server.
If the server is configured to allow access by users with accounts on the server machine, the
server must be configured to initially run as root, so that it can switch user IDs to run as the
appropriate valid user after the user is authenticated. Unfortunately, this means that the server
must run as root prior to authentication, and that it therefore can be the source of root
compromises if bugs are discovered that allow attacks prior to authentication or outside of the
authenticated session.
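The bounce problem described above falls out of how PORT encodes its target: per RFC 959, the command carries an arbitrary IP address and port as six comma-separated decimal bytes, with nothing binding that address to the client that sent it. A small sketch of the encoding (our own helper, for illustration):

```shell
# port_arg: encode an IP address and port the way FTP's PORT command
# expects: h1,h2,h3,h4,p1,p2 where the port is p1*256 + p2.
port_arg() {
    ip="$1"; port="$2"
    printf '%s,%d,%d\n' "$(echo "$ip" | tr '.' ',')" \
        $((port / 256)) $((port % 256))
}

# e.g. "PORT $(port_arg 10.0.0.5 1024)" asks the server to open its
# data connection to 10.0.0.5 port 1024 -- whoever that host may be.
```

Because any address at all can be named here, a server that honors the command blindly can be used as a relay against a third machine.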
Each of these areas has been the subject of numerous attacks over the years. CERT's database contains
over a thousand entries regarding FTP, and CVE lists almost 200 specific FTP vulnerabilities and
exposures. For example:
A bug in versions 2.4.1 and earlier of the very popular FTP server wu-ftpd allows the use of a
liberally configured SITE EXEC to spawn shells and other system commands, potentially with
root privileges (CVE-1999-0080). To find out whether your machine is vulnerable, FTP to your
own account, log in, and enter site exec csh -c id. If your machine responds with a line
that starts uid=0(root)..., you've just executed a command as root, without the root
password, and your ftpd is vulnerable.
As a convenience, many FTP servers provide for data-type conversion between what's actually
stored and what the user wishes to retrieve. Most frequently, this allows the retrieval of both
compressed and uncompressed versions of files when only one or the other version is actually
stored. wu-ftpd 2.4.2 to 2.6.0 have a buffer boundary-checking error in the routine that allows
convenience conversions on-server from one data type to another. The overflow allows users to
fool the server into executing arbitrary commands rather than the conversion software. To test
this exploit (CVE-1999-0997), search the Net for "SUID advisory 001".
The "Pizza Thief" exploit (CVE-1999-0351), on the other hand, relies on an inherent feature of
the FTP design. This attack utilizes the dual-channel nature of FTP to allow the attacker access to
the server outside the intended scope of his authenticated FTP session. The attacker can fake a
command channel, or break a data channel, and then use an unauthorized client to access the
server through the open data channel connection.
More recently (wu-ftpd 2.6.1), the autowu rootkit provides the script kiddie with one-stop root
shopping by exploit of the CA-2001-33 (CVE-2001-0550) vulnerabilities, which are a
combination of design and buffer bounds-checking problems. A particularly amusing
examination of the exploit (and the skill set of those most likely to use it) can be found online.
None of these specific problems are immediate concerns for your server because Apple provides an FTP
server with no published vulnerabilities as of November 2002. The design of FTP, however, lends itself
to ongoing problems of a similar nature, as well as to administrator-induced vulnerabilities due to
misconfiguration. The best defense against bugs is to keep up with all software updates: We recommend
(and cover later in this chapter) installing wu-ftpd as a replacement for what Apple provides because
problems with wu-ftpd will be addressed nearly instantly. To protect against design issues, the only
defense is to thoroughly understand the options available. Although it's not always possible to create the
exact configuration desired, carefully studying the options, how they interact, and how to apply them is
the only way to produce an FTP environment with the features that you want without introducing too
many problems that you don't.
Activating the FTP Server
The Mac OS X distribution includes an FTP server called lukemftpd, which is a port of the NetBSD
FTP server. Because Apple is concerned about the security of your machine, this service is not turned on
by default. At this point, you can use FTP only to connect from your Mac OS X machine to other FTP
servers. After you've turned on the FTP service, you can FTP directly to your Mac OS X machine.
To activate the FTP server, check the FTP Access box under the Services tab of the Sharing pane, as
shown in Figure 12.1.
Figure 12.1. The FTP server is activated in the Sharing pane.
What this does behind the scenes is change the disable line in /etc/xinetd.d/ftp to
disable=no and force xinetd to reread its configuration file. If, for whatever reason, you're using
inetd rather than the default xinetd, uncomment the ftp line and then run killall -HUP inetd to
have inetd reread its configuration file.
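A quick way to confirm what the Sharing pane did is to inspect the disable attribute yourself. The helper below is our own sketch; it takes the service file path as an argument so you can point it at /etc/xinetd.d/ftp:

```shell
#!/bin/sh
# ftp_enabled: report whether an xinetd service file has the service
# enabled (disable = no) or disabled.
ftp_enabled() {
    if grep -q '^[[:space:]]*disable[[:space:]]*=[[:space:]]*no' "$1"; then
        echo enabled
    else
        echo disabled
    fi
}

# Usage: ftp_enabled /etc/xinetd.d/ftp
```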
Remember that when you do this, you're opening a service that will accept user IDs and
passwords in clear text over your network connection. If a user with an account on your
machine tries to use a normal FTP client to use the FTP service, she'll be prompted for her
user ID and password. If she provides them, this information will be visible to anyone
watching your network. This is probably not what you want. We'll discuss options to make
this information more secure later in this chapter. Specifically, look to the section on setting
up an FTP server to provide encrypted access to connections tunneled through SSH, or
possibly the option of requiring anonymous, rather than real-user, FTP access. Note that if
you use the anonymous option, there's nothing to prevent your users from trying to use their
own user IDs and passwords instead of "anonymous," so this is no guarantee that your users
will use the service securely!
Configuring the Default lukemftpd FTP Server
The default Mac OS X FTP server allows you to customize its configuration.
lukemftpd FTP Server Options
You've just turned on your FTP server. If you looked at the /etc/xinetd.d/ftp file, you noticed
that the server runs by default with the -l option (server_args = -l), which is the option that
forces the logging of successful and unsuccessful FTP sessions.
If you study your FTP connections to keep track of what sorts of malicious individuals are trying to
crack your security, you might want to consider logging additional information. The lukemftpd server
options (shown in Table 12.1) include a number of settings to control the different types of information
stored and the way that it is stored. To implement any of the options, edit the server_args entry in /
etc/xinetd.d/ftp to reflect the options you want to use. Then run killall -HUP xinetd to
have xinetd reread its configuration. Alternatively, if you are using inetd instead, edit the ftp entry
in /etc/inetd.conf to include the desired server arguments and have inetd reread its
configuration file.
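For example, to log the individual file operations as well as the sessions (the -l option given twice, as described in Table 12.1), the server_args entry in /etc/xinetd.d/ftp would read:

```
server_args     = -l -l
```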
Please note that whenever you turn the FTP service on or off via the System Preferences pane, any other
configuration changes you have made to the service are retained, rather than being reset. Nonetheless, it
is a good idea to keep a copy of the file with your configuration changes, in case this default behavior
ever changes. A number of additional options are worth consideration as well, such as the -V option,
which enables you to force the server to report a different version string than the one with which it was
compiled. Many scripts run by script kiddies can be thrown off if you report a version with no known
vulnerabilities, or confused into beating their heads against a brick wall if you report a different version
with well-known vulnerabilities that don't correspond to problems with the server you're actually
running.
Table 12.1. Run Time Options for the Default ftpd
-a <anondir>
Defines <anondir> as the directory to which a chroot(2) is performed for
anonymous logins. Default is the home directory for the FTP user. This can also
be specified with the ftpd.conf(5) chroot directive.
-c <confdir>
Changes the root directory of the configuration files from /etc to
<confdir>. This changes the directory for the following files: /etc/
ftpchroot, /etc/ftpusers, /etc/ftpwelcome, /etc/motd, and
the file specified by the ftpd.conf(5) limit directive.
-C <user>
Checks whether the user would be granted access under the restrictions given in
ftpusers(5) and exits without attempting a connection. ftpd exits with an
exit code of 0 if access would be granted, or 1 otherwise. This can be useful for
testing configurations.
-d
Writes debugging information to the syslog, using a facility of LOG_FTP.
-e <emailaddr>
Uses <emailaddr> for the %E escape sequence.
-h <hostname>
Explicitly sets the hostname to advertise as <hostname>. Default is the
hostname associated with the IP address on which ftpd is listening. This
capability (with or without -h), in conjunction with -c <confdir>, is useful
when configuring virtual FTP servers to listen on separate addresses as separate names.
-H
Equivalent to -h <hostname>, where <hostname> is the local hostname.
-l
Logs each successful and failed FTP session by using syslog with a facility of
LOG_FTP. If this option is specified more than once, the retrieve (get), store
(put), append, delete, make directory, remove directory, and rename operations
and their filename arguments are also logged.
-P <dataport>
Uses <dataport> as the data port, overriding the default of using the port
one less than the port on which ftpd is listening.
-q
Enables the use of PID files for keeping track of the number of logged-in users
per class. This is the default.
-Q
Disables the use of PID files for keeping track of the number of logged-in users
per class. This might reduce the load on heavily loaded FTP servers.
-r
Permanently drops root privileges after the user is logged in. The use of this
option could result in the server using a port other than the listening port for
PORT-style commands, which is contrary to the RFC 959 specification. In
practice, though, very few clients rely on this behavior.
-s
Requires that a secure authentication mechanism such as Kerberos or S/Key be used.
-u
Logs each concurrent FTP session to /var/run/utmp, making them visible
to commands such as who(1).
-U
Doesn't log each concurrent FTP session to /var/run/utmp. This is the default.
-V <version>
Uses version as the version to advertise in the login banner and in the output
of STAT and SYST, instead of the default version information. If version is - or
empty, doesn't display any version information.
-w
Logs each FTP session to /var/log/wtmp, making them visible to
commands such as last(1). This is the default.
-W
Doesn't log each FTP session to /var/log/wtmp.
-X
Logs wu-ftpd style xferlog entries to the syslog, prefixed with
xferlog:, by using a facility of LOG_FTP. These syslog entries can be
converted to a wu-ftpd style xferlog file suitable for input into a third-party
log analysis tool with a command similar to the following:
grep 'xferlog: ' /var/log/xferlog | \
    sed -e 's/^.*xferlog: //' > <outputfile>
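As a quick illustration of what that pipeline does, the sed expression simply strips everything through the xferlog: prefix, leaving the bare xferlog record. (The log line below is a made-up sample, not real ftpd output.)

```shell
# Hypothetical syslog line of the sort produced by the -X option
line='Sep 13 00:05:12 myhost ftpd[123]: xferlog: Sat Sep 13 00:05:12 2003 1 client.example.com 1024 /pub/file.txt b _ o a ftp ftp 0 * c'

# Remove everything up to and including "xferlog: ", leaving the
# wu-ftpd-style xferlog record for the analysis tool
echo "$line" | sed -e 's/^.*xferlog: //'
```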
Restricting Access
The lukemftpd FTP server uses three main configuration files for restricting access: /etc/
ftpusers, /etc/ftpchroot, and /etc/ftpd.conf. By using these files you can place
restrictions on who can use FTP to access your machine—blocking certain users and allowing others.
You can also configure limitations to the type and frequency of access granted by limiting the number of
connections and setting timeouts and other server-related limits on FTP server availability and capability.
If you want to take advantage of these features, but you're running the ftpd that comes with
Mac OS X 10.2, you should update your version of the default ftpd. The distributed
version contained a number of bugs that prevented access control from working properly.
You can check the version by running strings /usr/libexec/ftpd | grep
"lukemftp". Broken versions report version 1.1. Version 1.2 beta 2 is known to
work. Replacing lukemftpd with the most recent version is covered briefly in the next section.
An /etc/ftpusers file comes by default. This file contains the list of users who aren't allowed FTP
access to the machine. Here's the default file:
% more /etc/ftpusers
# list of users disallowed any ftp access.
# read by ftpd(8).
If you have additional users who shouldn't be granted FTP access, include them in this file. Also include
any system logins that might not be listed by default in this file. Because the syntax for this file can be
more complex, its documentation is included in Table 12.2.
The FTP server also allows for chrooted FTP access, which is a compromise between full access and
anonymous-only access. With this compromise access, a user is granted FTP access to only his home
directory. List any users who should have this type of access in the /etc/ftpchroot file, which does
not exist by default.
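For instance, to confine the hypothetical users alice and bob to their home directories, /etc/ftpchroot could contain nothing more than their usernames, one per line:

```
# /etc/ftpchroot -- users listed here get chrooted FTP access
alice
bob
```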
The last major configuration file for the default ftpd is /etc/ftpd.conf. In this file, you can
define classes and various types of restrictions for a given class. This FTP server is supposed to
understand three classes of user: REAL, CHROOT, and GUEST. A REAL user is a user who has full
access to your machine. A CHROOT user is one who is restricted to his home directory or a directory
otherwise specified in /etc/ftpd.conf. A GUEST user is one who can connect to the machine for
anonymous FTP only.
The basic form of a line in ftpd.conf is
<directive> <class> <argument>
Directives that appear later override directives that appear earlier. This enables you to define defaults by
using wildcards and to provide more specific overrides later in the file. In addition to the defaults you
see listed in the preceding file, other available controls include ones for limiting the upload and
download storage rates, maximum uploadable file size, and port ranges. This last control can be useful
for setting up your FTP server to work while a firewall is also running on your machine. It enables you
to synchronize your FTP server's port usage and firewall port range restrictions. Table 12.3 details all the
available directives for the /etc/ftpd.conf file.
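As a concrete (entirely hypothetical) example, an /etc/ftpd.conf might set loose defaults with the all wildcard and then tighten them for guests; the portrange line is the sort of entry you would use to match a firewall rule. All the directives used here are documented in Table 12.3:

```
# defaults for every class
classtype  chroot  CHROOT
classtype  guest   GUEST
classtype  real    REAL
timeout    all     900          # 15-minute inactivity timeout
umask      all     027

# guests: no uploads or modifications, limited connections
modify     guest   off
upload     guest   off
limit      guest   10

# keep passive data ports inside a firewall-friendly range
portrange  all     49152 49999
```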
Table 12.2. Documentation for /etc/ftpusers and /etc/ftpchroot
ftpusers ftpchroot
ftpd access control files
The /etc/ftpusers file provides user access control for ftpd(8) by
defining which users may log in.
If the /etc/ftpusers file does not exist, all users are denied access.
A \ is the escape character. It can be used to escape the meaning of the comment
character or, if it's the last character on a line, to extend a configuration
directive across multiple lines. A # is the comment character, and all characters
from it to the end of the line are ignored (unless it's escaped with the escape
character).
The syntax of each line is <userglob>[:<groupglob>][@<host>]
[<directive> [<class>]].
These elements are handled as follows:
<userglob> is matched against the username, by using
fnmatch(3) glob matching (for example, f*).
<groupglob> is matched against all the groups that the user is
a member of, by using fnmatch(3) glob matching (for
example, *src).
<host> is either a CIDR address (refer to inet_net_pton
(3)) to match against the remote address (for example,, or an fnmatch(3) glob to match against the remote
hostname (for example, *.netbsd.org).
<directive> allows access to the user if set to allow or
yes. Denies access to the user if set to deny or no, or if the
directive is not present.
<class> defines the class to use in ftpd.conf(5).
If <class> isn't given, it defaults to one of the following:
chroot if there's a match in /etc/ftpchroot for the user.
guest if the username is anonymous or ftp.
real if neither of the preceding conditions is true.
No further comparisons are attempted after the first successful match. If no
match is found, the user is granted access. This syntax is backward compatible
with the old syntax.
If a user requests a guest login, the ftpd(8) server checks to see that both
anonymous and ftp have access. So if you deny all users by default, you
must add both anonymous allow and ftp allow to /etc/ftpusers
to allow guest logins.
/etc/ftpchroot The file /etc/ftpchroot is used to determine which users will have their
session's root directory changed (using chroot(2)), either to the directory
specified in the ftpd.conf(5) chroot directive (if set), or to the home
directory of the user. If the file doesn't exist, the root directory change is not
performed.
The syntax is similar to /etc/ftpusers, except that the class argument is
ignored. If there's a positive match, the session's root directory is changed. No
further comparisons are attempted after the first successful match. This syntax
is backward compatible with the old syntax.
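Putting the Table 12.2 syntax together, a small /etc/ftpusers using the new directive form might look like this (the usernames, group, and network shown are only illustrations). Remember that the first successful match wins, so specific allow lines come before the catch-all deny:

```
# deny root outright, from anywhere
root                 deny
# allow members of group staff connecting from the local network
*:staff@   allow
# allow guest logins and assign them the guest class
anonymous            allow  guest
ftp                  allow  guest
# everyone else is denied
*                    deny
```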
Table 12.3. Documentation for /etc/ftpd.conf
ftpd(8) configuration file
The ftpd.conf file specifies various configuration options for ftpd
(8) that apply after a user has authenticated a connection.
ftpd.conf consists of a series of lines, each of which may contain a
configuration directive, a comment, or a blank line. Directives that appear
later in the file override settings by previous directives. This allows
wildcard entries to define defaults, and then have class-specific overrides.
A directive line has the format:
<command> <class> [<arguments>]
A \is the escape character; it can be used to escape the meaning of the
comment character, or if it is the last character on a line, it extends a
configuration directive across multiple lines. A # is the comment
character, and all characters from it to the end of line are ignored (unless
it is escaped with the escape character).
Each authenticated user is a member of a class, which is determined by
ftpusers(5). <class> is used to determine which ftpd.conf
entries apply to the user. The following special classes exist when parsing
entries in ftpd.conf:
all Matches any class
none Matches no class
Each class has a type, which may be one of the following:
GUEST Guests (as per the anonymous and ftp logins).
A chroot(2) is performed after login.
CHROOT chroot(2)ed users (as per ftpchroot(5)).
A chroot(2) is performed after login.
REAL Normal users.
The ftpd(8) STAT command returns the class settings for the current
user, unless the private directive is set for the class.
advertise <class> <host>
advertize <class> <host>
Sets the address to advertise in the response to the PASV and LPSV
commands to the address for host (which may be either a hostname or IP
address). This may be useful in some firewall configurations, although
many FTP clients may not work if the address being advertised is
different than the address to which they've connected. If <class> is
none or no argument is given, it is disabled.
checkportcmd <class> [off]
Checks the PORT command for validity. The PORT command fails if the
IP address specified does not match the FTP command connection, or if
the remote TCP port number is less than IPPORT_RESERVED. It is
strongly encouraged that this option be used, especially for sites
concerned with potential security problems with FTP bounce attacks. If
<class> is none or off is given, this feature is disabled; otherwise, it is
enabled.
chroot <class> [<pathformat>]
If <pathformat> is not given or <class> is none, uses the default
behavior (see the later discussion). Otherwise, <pathformat> is parsed
to create a directory to chroot(2) into at login.
<pathformat> can contain the following escape strings:
%d Home directory of the user
%u User name
%% A % character
The default root directory is:
CHROOT The user's home directory.
GUEST If -a <anondir> is given, uses <anondir>;
otherwise uses the home directory of the FTP user.
REAL By default no chroot(2) is performed.
classtype <class> <type>
Sets the class type of <class> to <type> (see earlier discussion).
conversion <class> <suffix> [<type> [<disable> [<command>]]]
Defines an automatic inline file conversion. If a file to retrieve ends in
<suffix>, and a real file (without <suffix>) exists, then the output
of <command> is returned rather than the contents of the file.
<suffix> The suffix to initiate the conversion.
<type> A list of valid filetypes for the conversion. Valid types are: f
(file), and d (directory).
<disable> The name of the file that will prevent conversion if it exists.
A filename of . prevents this disabling action (that is, the conversion is
always permitted).
<command> The command to run for the conversion. The first word
should be the command's full pathname as execv(3) is used to execute
the command. All instances of the word %s in the command are replaced
with the requested file (without the suffix). Conversion directives specified
later in the file override earlier ones.
denyquick <class> [off]
Enforces ftpusers(5) rules after the USER command is received,
rather than after the PASS command is received. Although enabling this
feature may allow information leakage about available accounts (for
example, if you allow some users of a REAL or CHROOT class but not
others), it is useful in preventing a denied user (such as root) from
entering a password across an insecure connection. This option is strongly
recommended for servers that run an anonymous-only service. If
<class> is none or off is given, the feature is disabled; otherwise, it is
enabled.
display <class> <file>
If <file> is not given or <class> is none, disables this. Otherwise,
each time the user enters a new directory, checks whether <file> exists,
and if so, displays its contents to the user. Escape sequences are supported.
homedir <class> <pathformat>
If <pathformat> is not given or <class> is none, uses the default
behavior (see later discussion). Otherwise, <pathformat> is parsed to
create a directory to change into upon login, and to use as the home
directory of the user for tilde expansion in pathnames and so on.
<pathformat> is parsed as per the chroot directive. The default home
directory is the home directory of the user for REAL users, and / for
GUEST and CHROOT users.
limit <class>
<count> [<file>]
Limits the maximum number of concurrent connections for <class> to
<count>, with 0 meaning unlimited connections. If the limit is exceeded
and <file> is given, displays its contents to the user. If <class> is
none or <count> is not specified, this feature is disabled. If <file> is
a relative path, it will be searched for in /etc (which can be overridden
with -c <confdir>).
maxfilesize <class> <size>
Sets the maximum size of an uploaded file to size. If <class> is none or
no argument is given, this feature is disabled.
maxtimeout <class> <time>
Sets the maximum timeout period that a client may request, defaulting to
two hours. This cannot be less than 30 seconds, or the value for timeout.
If <class> is none or time is not specified, sets to default of 2 hours.
modify <class> [off]
If <class> is none or off is given, disables the following commands:
CHMOD, DELE, MKD, RMD, RNFR, and UMASK. Otherwise, enables them.
motd <class> <file>
If <file> is not given or <class> is none, this feature is disabled.
Otherwise, uses <file> as the message-of-the-day file to display after
login. Escape sequences are supported. If <file> is a relative path, it
will be searched for in /etc (which can be overridden with -c <confdir>).
notify <class> <fileglob>
If <fileglob> is not given or <class> is none, this feature is
disabled. Otherwise, each time the user enters a new directory, notifies
the user of any files matching <fileglob>.
passive <class> [off]
If <class> is none or off is given, prevents passive (PASV, LPSV, and
EPSV) connections. Otherwise, enables them.
portrange <class>
<min> <max>
Sets the range of port numbers that are used for the passive data port.
<max> must be greater than <min>, and both numbers must be between
IPPORT_RESERVED (1024) and 65535. If <class> is none or no
arguments are given, this feature is disabled.
private <class> [off]
If <class> is none or off is given, does not display class information in
the output of the STAT command. Otherwise, displays the information.
rateget <class> <rate>
Sets the maximum get (RETR) transfer rate throttle for <class> to rate
bytes per second. If rate is 0, the throttle is disabled. If <class> is none
or no arguments are given, disables this. An optional suffix may be
provided, which changes the interpretation of <rate> as follows:
b Causes no modification. (Default; optional)
k Kilo; multiplies the argument by 1024
m Mega; multiplies the argument by 1048576
g Giga; multiplies the argument by 1073741824
t Tera; multiplies the argument by 1099511627776
rateput <class> <rate>
Sets the maximum put (STOR) transfer rate throttle for <class> to
<rate> bytes per second. If <class> is none or no arguments are
given, this feature is disabled.
sanenames <class> [off]
If <class> is none or off is given, allows uploaded file names to
contain any characters valid for a filename. Otherwise, permits only file
names which don't start with a . and are composed of only characters
from the set [-+,._A-Za-z0-9].
template <class> [<refclass>]
Defines <refclass> as the template for <class>; any reference to
<refclass> in following directives will also apply to members of
<class>. It is useful to define a template class so that other classes that
are to share common attributes can be easily defined without unnecessary
duplication. There can be only one template defined at a time. If
<refclass> is not given, disables the template for <class>.
timeout <class> <time>
Sets the inactivity timeout period. This cannot be less than 30 seconds, or
greater than the value for maxtimeout. If <class> is none or time is not
specified, sets to the default of 15 minutes.
umask <class> <umaskval>
Sets the umask to <umaskval>. If <class> is none or <umaskval>
is not specified, sets to the default of 027.
upload <class> [off]
If <class> is none or off is given, disables the following commands:
APPE, STOR, and STOU, as well as the modify commands: CHMOD,
DELE, MKD, RMD, RNFR, and UMASK. Otherwise, enables them.
The following defaults are used:
checkportcmd all
limit        all  -1     # unlimited
maxtimeout   all  7200   # 2 hours
timeout      all  900    # 15 minutes
Updating the Default (lukemftpd) ftpd
As mentioned earlier, if you want to take advantage of the default ftpd's controls, and you're running
the initial release of Mac OS X 10.2, you should update the ftpd. The controls for the ftpd that ships
with this release don't work properly. Fortunately, the update is not difficult to perform. Even if you are
not planning to take advantage of the default ftpd's controls, but are planning to turn on ftpd, it is
always a good idea to run the latest version; later versions usually contain security as well as
functionality updates.
The default ftpd at this time is lukemftpd-1.1. Recently lukemftpd has been renamed
tnftpd. Download the latest version, currently tnftpd-2.0-beta3, from here:
ftpd follows the basic format for compiling and even compiles easily under Mac OS X. Run
./configure and then make. As of version 2.0 beta3, make install doesn't seem to work, but you
can copy the ftpd binary yourself. Make sure that you keep a backup of the default /usr/libexec/
ftpd, just in case you need it. Make sure you keep a copy of the updated ftpd as well, in case you
should ever find that Software Update has replaced your updated version with an older version
again. At the top of the source directory, perform cp src/tnftpd /usr/libexec/ftpd. With
this version of the FTP server, you can take advantage of the access controls, most notably the /etc/
ftpchroot file, and anonymous FTP, if you already had anonymous FTP enabled from a previous
version of Mac OS X.
Setting Up Anonymous FTP
As you have seen, setting up the FTP server to allow real users to have FTP access is not difficult.
Unfortunately, it suffers from the basic design vulnerability of transmitting the user's information in
clear text. In some instances, you can reduce this risk by setting up an anonymous FTP server instead.
Anonymous FTP servers allow users to connect, upload, and (potentially) download files without the use
of a real-user user ID and password. Of course, this brings the risk that you will not know who is
logging in to your system via the anonymous FTP service, and preventing unauthorized users from
accessing the system is difficult if everyone's known only as "anonymous." But if anonymous users can't
do anything damaging, or see any data that's private while so connected, this might be a good tradeoff
for the security of not allowing real user connections and the problems this brings. Anonymous FTP
servers also are useful for enabling users with no account on your machine to acquire or provide
information, such as to download product literature, or upload suggestions or possible modifications to a
project on which you're working.
Remember, even if you set up an anonymous-only FTP server, there's nothing to prevent
your real users from trying to enter their user IDs and passwords at the prompts.
Setting up the FTP server to allow anonymous FTP unfortunately takes some work, and potentially
makes your machine vulnerable to more attacks. We recommend that you do not enable anonymous
without having a good reason. However, we more strongly recommend against enabling unprotected
FTP for real users.
Setting up anonymous FTP involves making an ftp user, whose home directory is where anonymous
FTP users connect. Additionally, you copy the necessary system components to ftp's account so that
users can run ls properly. When a user requests a list of files via the FTP ls command, the command
that is actually executed is a server-side binary program kept in a special directory for the FTP server's
use, the home directory of the ftp user. When the FTP server is chrooted, it can't access /bin/ls;
therefore, placing a copy of ls and any other system components that the FTP server needs in a special
directory is normally a very important step. However, with the Mac OS X 10.2 release, the system
components don't seem to help for running ls. This isn't a problem with the lukemftp-1.2beta2
release, or with the wu-ftpd that is discussed later because both FTP servers can provide an internal
ls. Because it's hard to predict how a new release of either system software or FTP server software will
change things, we include the steps for the system components to install when you don't have to rely on
the FTP server having its own ls. Steps 5–10 listed in the following pages include the instructions for
copying the appropriate system components.
To set up an anonymous FTP site, do the following:
1. Create an ftp user in the NetInfo database. Follow the pattern of one of the generic users, such
as user unknown. You might start by duplicating the unknown user and editing the duplicate
user. Create your ftp user with the basic parameters shown in Table 12.4.
Table 12.4. Basic Parameters for an ftp User
realname  <some generic reference to ftp>
uid       <some unused uid number>
home      <some suitable location>
gid       <some unused gid number>
passwd    *
Figure 12.2 shows the values we used for our ftp user. The asterisk value for the passwd field
is literal—this is a disallowed character in crypted passwords, and prevents logins that use this
password field.
Figure 12.2. Here's how we chose to create our ftp user, as shown in NetInfo Manager.
2. Create an ftp group in the NetInfo database. Make sure that you assign the same gid to the
ftp group that you indicated for the ftp user.
3. Create a home directory for user ftp. Make sure that you create the directory that you specified
in the NetInfo database (/Users/ftp in this example). The directory should be owned by root
and have permissions 555.
4. Create a ~ftp/bin/ directory, owned by root with permissions 555.
5. Copy the system's /bin/ls to ~ftp/bin/.
6. Create ~ftp/usr/lib/. Each of those directories should be owned by root with permissions
555.
7. Copy the system's /usr/lib/dyld to ~ftp/usr/lib/. This is one of the files that helps
ls function properly in this chrooted environment.
8. Copy the system's /usr/lib/libSystem.B.dylib to ~ftp/usr/lib/. This is another
file that helps ls function properly in the chrooted environment.
9. Create ~ftp/System/Library/Frameworks/System.framework/Versions/B/.
Each of the directories in this path should be owned by root with permissions 555.
10. Copy the system's /System/Library/Frameworks/System.framework/Versions/
B/System to ~ftp/System/Library/Frameworks/System.framework/
Versions/B/. This is another file that helps ls function properly in the chrooted
environment.
11. Create a ~ftp/pub/ directory in which files can be stored for download. Recommended
ownership of this directory includes some user and group ftp or user root. Typical
permissions for this directory are 755.
12. If you also want to make a drop location where files can be uploaded, create ~ftp/
incoming/, owned by root. Recommended permissions include 753, 733, 1733, 3773, or
777. You could also create ~ftp/incoming/ with permissions 751 and subdirectories that
are used as the drop locations with any of the recommended drop-off permissions.
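Steps 3 through 12 can be sketched as shell commands. The listing below builds the skeleton under a scratch directory (FTPROOT) so you can try it anywhere; on a real system you would run the equivalent commands as root with FTPROOT set to the ftp user's home directory (for example, /Users/ftp), and you would also copy dyld, libSystem.B.dylib, and the System framework file into place as described in steps 7, 8, and 10:

```shell
FTPROOT=./ftp-skeleton     # stand-in for /Users/ftp

# create the directory tree (steps 3, 4, 6, 9, 11, and 12)
mkdir -p "$FTPROOT/bin" "$FTPROOT/usr/lib" \
         "$FTPROOT/System/Library/Frameworks/System.framework/Versions/B" \
         "$FTPROOT/pub" "$FTPROOT/incoming"

# copy in ls so chrooted clients can list files (step 5)
cp /bin/ls "$FTPROOT/bin/"

# lock down permissions (steps 3, 4, 11, and 12)
chmod 555 "$FTPROOT" "$FTPROOT/bin"
chmod 755 "$FTPROOT/pub"
chmod 733 "$FTPROOT/incoming"   # drop box: writable but not listable
```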
If you decide to allow anonymous FTP, make sure that you regularly check the anonymous FTP area
and your logs for any unusual activity. In addition, regularly check Apple's Web site for any updates for
Mac OS X that include ftp updates. Security holes are regularly found in ftpd and regularly fixed.
For your convenience, here's a listing of our ftp user's home directory:
# ls -lRaF ftp
ftp:
total 0
dr-xr-xr-x  7 root  wheel      238 Sep 13 00:04 ./
dr-xr-xr-x  3 root  wheel      102 Sep 13 00:00 System/
dr-xr-xr-x  3 root  wheel      102 Sep 12 23:57 bin/
drwxr-x-wx  2 root  wheel       68 Sep 13 00:04 incoming/
drwxr-xr-x  2 root  wheel       68 Sep 13 00:04 pub/
dr-xr-xr-x  4 root  wheel      136 Sep 12 23:57 usr/

ftp/System/Library/Frameworks/System.framework/Versions/B:
-r-xr-xr-x  1 root  wheel  1245580 Sep 13 00:03 System*

ftp/bin:
-r-xr-xr-x  1 root  wheel    27668 Sep 12 23:57 ls*

ftp/usr/lib:
-r-xr-xr-x  1 root  wheel  dyld*
-r-xr-xr-x  1 root  wheel  libSystem.B.dylib*
Each of the intermediate directories (System/Library/Frameworks/System.framework/Versions and usr/) is likewise dr-xr-xr-x and owned by root:wheel.
For additional thoughts on anonymous FTP configuration, you might want to check these Web sites:
CERT Coordination Center's Anonymous FTP Configuration Guidelines—
WU-FTPD Resource Center's Related Documents link—
AppleCare Service & Support—
Replacing the Mac OS X FTP Server
If you decide to activate anonymous FTP, especially anonymous FTP with an upload directory, you
should consider replacing the default ftpd with a more modifiable ftpd. A popular, highly
configurable replacement ftpd is wu-ftpd, available at In addition to being
highly configurable, it easily compiles under Mac OS X.
Although popular and highly configurable, wu-ftpd is not exempt from security problems. It's still
important to regularly monitor the anonymous FTP area, if you have one, as well as make sure that you
have the latest version of wu-ftpd, which is version 2.6.2 as of this writing.
Installing wu-ftpd
To replace the default ftpd with wu-ftpd, first download, compile, and install wu-ftpd.
Fortunately, wu-ftpd is one of the packages that follows this basic format for compilation and
installation:
./configure
make
make install
When you download the wu-ftpd source files, also download any patches available for the source. As
of this writing, the current version, 2.6.2, doesn't have any patches. However, because updates to wu-ftpd are frequently available as patches, we include a demonstration of using patch to apply a patch
to the previous version, 2.6.1. After a patch file is copied to the root directory of the source, run patch
as follows:
[localhost:~/wu-ftpd-2.6.1] software% patch -p0 <
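If you haven't used patch before, here's a tiny self-contained demonstration of the -p0 form, with a throwaway file and a hand-written diff (nothing here is specific to wu-ftpd):

```shell
# create a one-line file and a unified diff that rewrites it
printf 'old line\n' > demo.txt
cat > demo.patch <<'EOF'
--- demo.txt
+++ demo.txt
@@ -1 +1 @@
-old line
+new line
EOF

# apply the diff; -p0 means "use the paths in the patch as-is"
patch -p0 < demo.patch
cat demo.txt
```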
The default config.guess and config.sub files that come with the wu-ftpd source don't work
with Mac OS X. Use the files that come with Mac OS X:
[Sage-Rays-Computer:~/src/wu-ftpd-2.6.2] software% cp
/usr/share/automake-1.6/config.guess ./
[Sage-Rays-Computer:~/src/wu-ftpd-2.6.2] software% cp
/usr/share/automake-1.6/config.sub ./
If you haven't already done so, create a bin user. The bin user is needed for wu-ftpd to install
properly. The bin user should have a relatively low uid. Mac OS X already comes with a bin group
with gid 7. In many other Unix variants, the bin user has the same uid and gid. As with the ftp
user, follow the basic parameters of a generic user, such as the unknown user. You might consider
duplicating the unknown user and editing values. Suggested values for the bin user are shown in Table
12.5.
Table 12.5. Suggested Parameters for a bin User
System Tools Owner
Next, you're ready to run ./configure. Being the highly configurable package that it is, you can pass
many parameters to configure, as detailed in Table 12.6. Consider the basic design of FTP as you
look at the available options and you will see that the server built can range from one that is quite tightly
controlled to one that could not be considered secure by even the wildest stretch of the imagination. The
power and customizability available to you in the running server spans a similarly large range,
depending on the options chosen here. Therefore, building a server that allows you the power and
flexibility you need, while not providing too many opportunities for security problems, requires careful
thought.
To have an ls that works properly under version 10.2 for anonymous FTP or guest FTP, you
may need to use the --enable-ls option.
Table 12.6. Configure Options for wu-ftpd
--with-etc-dir=<path>
Path for configuration files, usually /etc.
--with-pid-dir=<path>
Path for run/pid files, usually /var/run.
--with-log-dir=<path>
Path for log files (xferlog), usually /var/log.
Disables support for the upload keyword in the ftpaccess file.
Disables support for the overwrite keyword in the ftpaccess file.
Disables support for the allow and deny keywords in the
ftpaccess file.
Disables logging of failed attempts (wrong password, wrong username,
and so on).
--disable-logtoomany Disables logging of failed attempts that failed because too many users
were already logged in.
Disables support for private files (site group/site gpass FTP
commands).
Disables retrying failed DNS lookups at connection time.
Allows only anonymous FTP connections.
Disables some features that might possibly affect security.
Disables support of disk quotas, even if your operating system is set
for them.
Does not use PAM authentication, even if your operating system
supports it.
Supports S/Key authentication (needs S/Key libraries).
Supports OPIE (One Password In Everything) authentication (needs
OPIE libraries).
Causes cd ~ to not return to the chroot-relative home directory.
Allows FTP users to set SETUID, SETGID, and STICKY bits on files.
Does not do RFC931 (IDENT) lookups (worse logging, but faster).
Does not support running as a normal daemon (as opposed to running
from inetd).
Does not keep track of user's path changes. This leads to worse
symlink handling.
--disable-throughput Does not keep track of user's throughput.
Does not keep track of transferred bytes (for statistics).
Suppresses some extra blank lines.
Does not wait for password entry if someone tries to log in with a
wrong username. Although convenient, it is a security risk in that
crackers can find out names of valid users.
Disables verbose error logging.
NOOP command resets idle time.
Logs the relative path rather than the real path.
Disables support of virtual servers.
--disable-closedvirt Allows guests to log in to virtual servers.
Skips all DNS lookups.
Disallows port-mode connections.
Disallows passive-mode connections.
Disables PID lock sleep messages. Recommended for busy sites.
Does not require the same IP for control and data connection in passive
mode. This is more secure, but can cause trouble with some firewalls.
Allows only real users to connect.
Uses the internal ls command instead of /bin/ls in the chroot
directory. This is experimental and has known problems.
Makes the internal ls display UID and GID instead of user/group
names. This is faster, but the ls output looks worse.
--disable-hidesetuid Causes the internal ls command to not hide setuid/setgid bits
from the user. Default is for the internal ls to hide them as a security precaution.
Disables support of the mail-on-upload feature. The feature allows you
to automatically send an email message to the FTP administrator
whenever an anonymous user uploads a file.
Supports broken clients. See the CHANGES file for details.
Sets the buffer size to x. (You won't usually have to adjust this value.)
Sets the number of incoming processes to backlog in daemon mode to
x. Default is 100.
To distinctly separate the wu-ftpd installation from the default ftpd, you should consider specifying
paths in the various path parameters. In addition, you might consider running ./configure with
--prefix=<some-directory-for-wu-ftpd> so that the wu-ftpd binaries and man pages are
all in one place. You might also find it interesting that you can create either an anonymous-only or a
real-users-only FTP server. Next, run make and make install.
After you have a wu-ftpd binary, you should update the /etc/xinetd.d/ftp file to reflect the
location of the new ftpd, as well as any runtime options that should be used. After you've adjusted
/etc/xinetd.d/ftp, have xinetd reread its configuration file. Runtime options available in
wu-ftpd are detailed in Table 12.7. Like the compile-time options, careless choices can open holes in your
system that may come back to haunt you later. Think through what options you need and test your
configuration carefully. As we've previously suggested, it's best to err on the side of caution in your
configuration choices, and not enable "interesting" features just because they're interesting. Wait until
you actually have a need for a feature before you add that capability to your option collection.
Table 12.7. Runtime Options for wu-ftpd
-d
Logs debugging information to the syslog.
-v
Logs debugging information to the syslog.
-l
Logs each FTP session to the syslog.
-t <timeout>
Sets the inactivity timeout period to <timeout> seconds. Default is 15 minutes.
-T <maxtimeout>
A client may also request a different timeout period. The maximum period
may be set to <maxtimeout> seconds. Default is 2 hours.
-a
Enables the use of the ftpaccess(5) configuration file.
-A
Disables the use of the ftpaccess(5) configuration file. This is the default.
-L
Logs commands sent to the ftpd server to the syslog. Overridden by the
use of the ftpaccess file. With the -L option, logging occurs as soon as
the FTP server is invoked. All USER commands are logged. If a user
accidentally enters a password for a username, the password is logged. Unless
you're actively trying to debug a problem or an attack, enabling this is
probably a bad idea.
-i
Logs files received by the ftpd server to the xferlog(5). Overridden by
the use of the ftpaccess(5) file.
-I
Disables use of RFC931 (AUTH/ident) to attempt to determine the username
on the client.
-o
Logs files transmitted by the ftpd server to the xferlog(5). Overridden
by the use of the ftpaccess(5) file.
-p <ctrlport>
Overrides port numbers used by the daemon. Normally the port number is
determined by the ftp and ftp-services values in services. If there
is no entry for ftp-data and -P is not specified, the daemon uses the port
just prior to the control connection port. The -p option is available only for
the standalone daemon.
-P <dataport>
-q | -Q
Determines whether the daemon uses the PID files, which are required by the
limit directive to determine the number of current users in each access
class. Disabling the use of PID files disables user limits. Default, -q, is to use
PID files. Specify -Q as a normal user testing the server when access
permissions prevent the use of PID files. Large, busy sites that do not want to
impose a limit on the number of concurrent users might consider disabling
PID files.
-r <rootdir>
Instructs the daemon to chroot(2) to <rootdir> immediately upon
loading. This can improve system security by limiting the files that can be
damaged in a break-in. Setup is much like anonymous FTP, with additional
files required.
-s | -S
Sets the daemon to standalone mode. The -S option runs the daemon in the
background and is useful in startup scripts during system initialization (that is,
rc.local). The -s option leaves the daemon in the foreground and is useful
when running from init (that is, /etc/inittab).
-u <umask>
Sets the default umask to <umask>.
-V
Displays the copyright and version information, then terminates.
-w
Records every login and logout. Default.
-W
Does not record user logins in the wtmp file.
-X
Does not save output created by -i or -o to the xferlog file, but saves it
via syslog so that output from several hosts can be collected on one central
host.
Limiting Access
Access to wu-ftpd can be limited through the use of the ftpusers, ftphosts, and ftpaccess
files.
Like the default ftpd, wu-ftpd also uses an ftpusers file as a way of restricting access on a
per-user basis. Copy the default /etc/ftpusers file to the etc directory of your wu-ftpd installation
and add any users who should not be granted FTP access.
The ftphosts file is used to allow or deny access on a user/host basis. The basic syntax of a rule is as
follows:
allow <username> <addrglob> [<addrglob> ...]
deny <username> <addrglob> [<addrglob> ...]
The <addrglob> may be a specific hostname, IP address, or pattern of each. Additionally,
<addrglob> may also be specified as address/cidr or address:netmask. As you may
expect, the order of the allow and deny rules can be important; the rules are processed sequentially on a
first-match-wins basis. When you create these rules, be sure to think about rule order, particularly with
any allow and deny pairs.
For example, to grant user marvin access only from the host marvin.biosci, but not from
any other machines on the network, use this set of rules:
allow marvin
deny marvin *
If the rules were reversed, then wu-ftpd would encounter the deny rule first, which would deny access
to user marvin from all machines, including marvin.biosci.
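Written out in full, with example.com standing in as a placeholder domain for the abbreviated hostname, the pair reads:

```
allow marvin marvin.biosci.example.com
deny marvin *
```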
Although wu-ftpd provides a lot of configuration options with its compile-time and runtime options,
more controls can be set in the ftpaccess file. To enable the use of the ftpaccess file, be sure to
run wu-ftpd with the -a option.
Selected useful controls in ftpaccess are documented in Table 12.8. Be sure to read the
ftpaccess man page thoroughly for information about these and other available controls.
Table 12.8. Selected Controls Available for ftpaccess
loginfails <number>
Logs a "repeated login failures" message
after <number> login failures. Default is 5.
class <class> <typelist> <address>
Sets up classes of users and valid access
addresses. <typelist> is a comma-separated list of any of these keywords:
real, anonymous, or guest. If real
is included, the class can include users
FTPing to real accounts. If anonymous is
included, the class can include anonymous
FTP users. If guest is included, the class
can include members of guest access groups.
guestgroup <groupname> [<groupname> ...]
For guestgroup, if a real user is a
member of any specified <groupname>,
the session is set up exactly as with
anonymous FTP. In other words, a
chroot is done and the user is no longer
permitted to issue the USER and PASS
commands. <groupname> is a valid
group from NetInfo. In other words, a real
user whose group is <groupname> is
treated as a guest FTP user.
A guest user's home directory must be
properly set up, exactly as anonymous FTP
would be.
The group name may be specified by either
name or numeric ID. To use a numeric
group ID, place a % before the number.
Ranges may be given. Use an asterisk to
mean all groups.
guestuser <username> [<username> ...]
guestuser works like guestgroup,
except it uses the username (or numeric UID).
realuser <username> [<username> ...]
realgroup <groupname> [<groupname> ...]
realuser and realgroup have the
same syntax, but reverse the effect of
guestuser and guestgroup. They
allow real user access when the remote
user would otherwise be determined a guest.
limit <class> <number> <times> <message_file>
Limits the number of users belonging to
<class> who can access the server during
the <times> indicated, and posts
<message_file> as the reason for
access denial.
file-limit [<raw>] <in | out | total>
<count> [<class>]
Limits the number of files a user in
<class> may transfer. Limit may be
placed on files in, out, or total. If no class
is specified, the limit is the default for
classes that do not have a limit specified.
<raw> applies the limit to the total traffic
rather than just data files.
data-limit [<raw>] <in | out | total>
<count> [<class>]
Limits the number of data bytes a user in
<class> may transfer.
Limit may be placed on bytes in, out, or
total. If no class is specified, the limit is the
default for classes that do not have a limit
specified. <raw> applies the limit to the
total traffic rather than just data files.
limit-time {* | anonymous | guest} <minutes>
Limits the total time a session can take. By
default, there is no limit. Real users are
never limited.
log commands <typelist>
Logs individual commands issued by users
in <typelist>, where <typelist> is
a comma-separated list of any of the
keywords real, anonymous, or guest.
log transfers <typelist> <directions>
Logs the transfers of users belonging to
<typelist> in the specified
<directions>. <typelist> is a
comma-separated list of any of the
keywords real, anonymous, or guest.
<directions> is a comma-separated
list of the keywords inbound or
outbound, where inbound refers to
transfers to the server and outbound
refers to transfers from the server.
log syslog
Redirects logging messages for incoming
and outgoing transfers to the system log.
Default is xferlog.
log syslog+xferlog
Logs transfer messages to both the system
log and xferlog.
defaultserver deny <username>
defaultserver allow <username>
By default all users are allowed access to
the default, nonvirtual FTP server.
defaultserver deny denies
access to specific users. You could use
defaultserver deny * to deny
access to all users, then
defaultserver allow <username> to allow
specific users.
guestserver [<hostname>]
Controls which hosts may be used for
anonymous or guest access. If used without
<hostname>, denies all guest or
anonymous access to this site. More than
one <hostname> may be specified.
Guest and anonymous access are allowed
only on the named machines. If access is
denied, the user is asked to use the first
<hostname> listed.
passwd-check <level> <enforcement>
Defines the level and enforcement of
password checking done by the server for
anonymous FTP. <level> can be none,
trivial (must contain an @), or
rfc822 (must be an RFC822-compliant
address). <enforcement> can be warn
(warns the user but allows him to log in) or
enforce (warns the user and logs him out).
chmod <yes | no> <typelist>
delete <yes | no> <typelist>
overwrite <yes | no> <typelist>
rename <yes | no> <typelist>
umask <yes | no> <typelist>
Sets permissions for chmod, delete,
overwrite, rename, and umask as
yes or no for users in <typelist>,
where <typelist> is a comma-separated
list of any of the keywords
real, anonymous, or guest.
upload [absolute | relative]
[class=<classname>]... [-] <root-dir>
<dirglob> <yes | no> <owner> <group>
<mode> [dirs | nodirs] [<d_mode>]
Specifies upload directory information.
<root-dir> specifies the FTP root
directory. <dirglob> specifies a
directory under the
<root-dir>. <yes | no> indicates
whether files can be uploaded to the
specified directory. If yes, files will be
uploaded as belonging to <owner> and
<group> in <mode>. [dirs |
nodirs] specifies whether or not new
subdirectories can be created in the upload
directory. If dirs, they are created with
mode <d_mode>, if it is specified.
Otherwise, they are created as defined by
<mode>. If <mode> is not specified, they
are created with mode 777. Upload
restrictions can be specified by class with the class=<classname> option.
path-filter <typelist> <mesg>
<allowed_charset> [<disallowed regexp> ...]
Defines regular expressions that control
what a filename can or cannot be for users
in <typelist>, where <typelist> is
a comma-separated list of any of the
keywords real, anonymous, or guest.
noretrieve [absolute|relative]
[class=<classname>]... [-] <filename>
Always denies the ability to retrieve these
files. If the files are a path specification
(begin with a / character), only those files
are marked irretrievable. Otherwise, all
files matching the filename are refused
transfer. For example:
noretrieve /etc/passwd core
specifies no one can get the file /etc/
passwd, but users will be allowed to
transfer a file called passwd if it is not
in /etc. On the other hand, no one can
get files named core, wherever they are.
Directory specifications mark all files and
subdirectories in the named directory
irretrievable. The <filename> may be
specified as a file glob. For example:
noretrieve /etc /home/*/.htaccess
specifies no files in /etc or
any of its subdirectories may be retrieved.
Also, no files named .htaccess
anywhere under the /home directory may
be retrieved.
The optional first parameter selects
whether names are interpreted as absolute
or relative to the current chrooted
environment. The default is to interpret
names beginning with a slash as absolute.
The noretrieve restrictions may be
placed on members of particular classes. If
any class= is specified, the named files
are not retrievable only if the current user
is a member of any of the given classes.
throughput <root-dir> <subdir-glob>
<file-glob-list> <bytes-per-second>
<bytes-per-second-multiply> <remote-glob-list>
Restricts throughput to <bytes-per-second>
on download of files in the
comma-separated <file-glob-list>
in the subdirectory matched by <subdir-glob>
under <root-dir> when the
remote host or IP address matches the
comma-separated <remote-glob-list>.
anonymous-root <root-dir> [<class>]
Specifies <root-dir> as the chroot
path for anonymous users.
If no anonymous-root is matched, the
old method of parsing the home directory
for the FTP user is used.
guest-root <root-dir> [<uid-range>]
Specifies <root-dir> as the chroot
path for guest users. If no guest-root is
matched, the old method of parsing the
user's home directory is used.
deny-uid <uid-range> [...]
deny-gid <gid-range> [...]
allow-uid <uid-range> [...]
allow-gid <gid-range> [...]
The deny clauses specify UID and GID
ranges that are denied access to the FTP
server. The allow clauses are then used
to allow access to those who would
otherwise be denied access. deny is
checked before allow. Default is to allow
access. Use of these controls can remove
the need for the /etc/ftpusers file.
Wherever uid or gid can be specified in
the ftpaccess file, either names or
numbers may be used. Put % before a
numeric uid or gid.
restricted-uid <uid-range> [...]
restricted-gid <gid-range> [...]
unrestricted-uid <uid-range> [...]
unrestricted-gid <gid-range> [...]
Controls whether or not real or guest users
are allowed access to areas on the FTP
server outside their home directories. Not
intended to replace the use of
guestgroup and guestuser. The
unrestricted clauses may be used to allow
users outside their home directories when
they would otherwise have been restricted.
passive ports <cidr> <min> <max>
Allows control of the TCP port numbers
that may be used for a passive data
connection. If the control connection
matches <cidr>, a port in the <min> to
<max> range is randomly selected for the
daemon to listen on. This control allows
firewalls to limit the ports that remote
clients use for connecting to the protected
network.
<cidr> is shorthand for an IP address in
dotted-quad notation, followed by a slash
and the number of leftmost bits that
represent the network address. For
example, for the reserved class-A network
10, instead of using a netmask of 255.0.0.0, use a CIDR of 8; 10.0.0.0/8 represents the network.
Likewise, for a private class-C home network such as 192.168.1.0, you could use
192.168.1.0/24 to represent your network.
deny <addrglob> <message_file>
Always denies access to host(s) matching
<addrglob> and displays
<message_file> to the host(s).
<addrglob> may be !nameserved to
deny access to sites without a working
nameserver. It may also be the name of a
file, starting with a slash (/), which
contains additional address globs, as well
as in the form <address>:<netmask>
or <address>/<cidr>.
dns refuse_mismatch <filename> [override]
Refuses FTP sessions when the forward
and reverse lookups for the remote site do
not match. Displays <filename> to
warn the user. If override is specified,
allows the connection after complaining.
dns refuse_no_reverse <filename> [override]
Refuses FTP sessions when there is no
reverse DNS entry for the remote site.
Displays <filename> to warn the user. If
override is specified, allows the
connection after complaining.
Understanding Basic ftpaccess Controls
As you saw in Table 12.8, even a selective list of ftpaccess controls is large. Because many controls
are available, let's take a look at some of the basic configuration controls in the ftpaccess file.
Look at this statement:
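A class definition of this shape, using the placeholder domain example.com, might look like:

```
class staff real *.example.com
```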
In this example, a class called staff is defined as being a real user coming from anywhere in the domain.
In the following statement, a class called local is defined as a guest user coming from
anywhere in the domain:
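Such a statement, again with a placeholder domain, might be:

```
class local guest *.example.com
```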
In the following statement, a class called remote is defined as being an anonymous user whose
connection comes from anywhere:
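A statement matching that description would be:

```
class remote anonymous *
```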
You can create as many classes as suit your needs.
In the following statement, there is a limit of five users belonging to class remote who can access the
FTP server on Saturdays and Sundays and on any day between 6:00 p.m. and 6:00 a.m.:
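A limit statement of that form, with a hypothetical message-file path, could look like this:

```
limit remote 5 SaSu|Any1800-0600 /etc/msg.toomany
```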
When the limit is reached, any additional user sees a posting of the message file, msg.toomany, in /
In the following statement, no users belonging to the class staff can access the FTP server at any time:
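Such a statement, again with a hypothetical message-file path, might be:

```
limit staff 0 Any /etc/msg.notallowed
```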
Whenever any user in class staff attempts to log in, she sees a message indicating that she is not allowed
to access the FTP server.
In the following statements, the guest user, bioftp, can upload files to the ~ftp/public directory.
The files will be uploaded with permissions 600, that is, read and write permissions, for guest user
bioftp.
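An upload statement matching that description might look like the following sketch; the group name guest is an assumption:

```
upload /Users/ftp /public yes bioftp guest 0600 nodirs
```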
However, in the following statement, no user can upload to the ~ftp/bin directory:
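A statement to that effect would be:

```
upload /Users/ftp /bin no
```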
Please note that the upload control also has a nodirs option that does not allow directories to be
uploaded. If you decide to run an anonymous FTP server, make sure that you include the nodirs
option to the upload control.
restricted-uid and restricted-gid
Although restricted-uid and restricted-gid are straightforward controls, it is useful to note
that these controls function like the /etc/ftpchroot file for the default ftpd.
A restricted control entry such as this:
restricted-uid marvin
restricts user marvin to his home directory for FTP access. The numeric uid for marvin, preceded
by %, could be used instead, as well as a range of uids.
Controlling Bandwidth and Other Advanced Features
The controls available in ftpaccess range from basic controls to advanced controls. With the
advanced features, you can control many aspects of an FTP session. Some of the interesting controls
include limiting the throughput, limiting the number of bytes that can be transferred, limiting the number
of files that can be transferred, refusing sessions from hosts whose forward and reverse name lookups
don't match or if a DNS lookup can't be done, and specifying a passive port range for a passive data
connection.
The throughput directive is one that you can use to help make your anonymous FTP site less
attractive. Here is a sample throughput directive:
throughput /Users/ftp /pub* *.zip 22000 0.5 *
The example statement limits the throughput of a zip file downloaded from the pub directory to
approximately 22000 bytes/second from any remote host. Furthermore, because a multiply factor of 0.5
is also specified, the second zip file is downloaded at a rate of approximately 11000 bytes/second; the
third, 5500 bytes/second, and so on.
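The decay produced by the multiply factor can be checked with a few lines of arithmetic; the values below mirror the example directive:

```python
# Each successive download is throttled to the previous rate times the
# bytes-per-second-multiply factor from the throughput directive.
base = 22000      # starting bytes/second
multiply = 0.5    # bytes-per-second-multiply factor
rates = [int(base * multiply ** n) for n in range(4)]
print(rates)  # [22000, 11000, 5500, 2750]
```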
The number of files uploaded, downloaded, or transferred in total by a user in a given class can be
restricted with the file-limit directive. For example,
file-limit in 1 remote
limits the number of files uploaded to your site by a user belonging to class remote to just one file.
Use the data-limit directive to limit the number of data bytes that can be uploaded, downloaded, or
transferred in total by a user in a given class. In this statement
data-limit total 5000000 remote
the total number of data bytes that may be transferred by a user in class remote is restricted to
approximately 5000000 bytes.
dns refuse_mismatch
To deny access to a host whose forward and reverse DNS lookups don't match, use the dns
refuse_mismatch directive. In this example
dns refuse_mismatch mismatch-warning override
the file, mismatch-warning, is displayed for the offending host, but with the override option in
place, the host is granted access anyway.
dns refuse_no_reverse
To deny access to a host for which a reverse DNS lookup can't be done, use the dns
refuse_no_reverse directive. In this statement
dns refuse_no_reverse noreverse-warning
the file named noreverse-warning is displayed, and the connection from the offending host is
passive ports
At this time, the passive ports directive may not seem important. However, if you decide to use the
built-in firewall package, ipfw, you may find the passive ports directive useful for allowing passive
connections through your firewall. In this example,
passive ports 140.254.12.0/24 15001 19999
ports in the range of 15001 to 19999 for passive data connections from 140.254.12.* have been
specified. This directive could be used in conjunction with an ipfw rule to permit a passive data
connection through the firewall.
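A companion firewall rule might be sketched as follows; the rule number is arbitrary and the syntax assumes the classic ipfw command:

```
# Allow inbound passive-mode data connections on the configured port range
ipfw add 1100 allow tcp from any to me 15001-19999 in setup
```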
Understanding the xferlog
By default, wu-ftpd logs transfers to a file called xferlog. Each entry in the log has this
format:
<current-time> <transfer-time> <remote-host> <file-size> <filename>
<transfer-type> <special-action-flag> <direction> <access-mode>
<service-name> <authentication-method> <authenticated-user-id>
At a casual glance, that format may seem a bit overwhelming. Let's look at some sample entries to better
understand that format.
Here is an entry resulting from someone contacting the anonymous FTP server:
Fri May 11 13:32:19 2001 1 46 /Users/ftp/incoming/file4 b _ i a [email protected] ftp 0 * c
Immediately apparent are the date and time when the transfer occurred. The next entry, the 1, indicates
that the transfer time was only 1 second. The remote host was
The file size was 46 bytes. The file transferred was file4 in the incoming area of the anonymous FTP
server. The transfer was a binary transfer. No special action, such as compressing or tarring, was done.
From the i, you can see that this was an incoming transfer; that is, an upload. From the a, you can see
that this was an anonymous user. The string identifying the username in this case is [email protected]. That is the
password that the user entered. The ftp indicates that the ftp service was used. The 0 indicates that no
authentication method was used. The * indicates that an authenticated user ID is not available. The c
indicates that the transfer completed.
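Given that field order, a short script can split an entry into named fields. This is a sketch only; the sample line, hostname, and e-mail address are made up, since the real values in the entries above are abbreviated:

```python
# Parse a wu-ftpd xferlog entry into named fields.
# The sample line below is hypothetical.
line = ("Fri May 11 13:32:19 2001 1 client.example.com 46 "
        "/Users/ftp/incoming/file4 b _ i a user@example.com ftp 0 * c")
fields = line.split()
record = {
    "current_time": " ".join(fields[:5]),
    "transfer_time": int(fields[5]),          # seconds
    "remote_host": fields[6],
    "file_size": int(fields[7]),              # bytes
    "filename": fields[8],
    "transfer_type": fields[9],               # a=ascii, b=binary
    "special_action_flag": fields[10],        # _ means none
    "direction": fields[11],                  # i=incoming, o=outgoing
    "access_mode": fields[12],                # a=anonymous, g=guest, r=real
    "username": fields[13],
    "service_name": fields[14],
    "authentication_method": fields[15],
    "authenticated_user_id": fields[16],
    "completion_status": fields[17],          # c=complete
}
print(record["direction"], record["access_mode"], record["file_size"])
```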
Here is an entry resulting from a guest user contacting the FTP server:
Fri May 11 16:32:24 2001 5 5470431 /dotpaper.pdf b _ i g betty ftp 0 * c
It looks much like the anonymous entry. In this entry, the transfer time was 5 seconds. The file transfer
was larger than in the previous example, 5470431 bytes. The i indicates that this transfer was also an
incoming transfer, an upload. The g indicates that the user involved was a guest user. The guest user was
user betty.
Here is an entry resulting from a real user contacting the FTP server:
Fri May 11 15:34:14 2001 1 277838 /Users/marvin/ b _ o r marvin ftp 0 * c
Again, this entry is much like the other two entries you have seen. In this example, you can learn from
the o that the transfer was an outgoing transfer; that is, a download. The r indicates that a real user made
the transfer. In this case, the real user was marvin.
Guest User Accounts
As you've seen, wu-ftpd understands three types of users: real, anonymous, and guest. Real users are
users who have full login access to your machine. You can restrict your real users' FTP access to their
home directories, if you so choose. Whether you choose to do so is up to you. If you trust your users
enough to give them full login access to your machine in the first place, you might also trust them with
full FTP access. Anonymous users are users who have access to only the anonymous area of your
machine, if you chose to create an anonymous FTP area. Guest users are users who have accounts on
your machine, but aren't granted full access to your machine. Guest user accounts might be suitable for
users who have Web sites on your machine and need FTP access only to occasionally update their Web
sites.
A guest user account is a cross between a real user account and an anonymous FTP account. A guest
user has a username and password, but doesn't have shell access to his account. This allows him to use
FTP to access files on the server via a user ID and password, but prevents him from being able to log in
to the machine either through the network or at the console. Guest user accounts are useful if, for
example, you need to set up a place where a group of collaborators can share sensitive information and
data, but where you don't really want members of the group to be full users of your machine. If you set
up a single guest user account for this group of users, they can all access it with a user ID and password,
and people without the user ID and password can't, so their information remains private. Because they
don't have real shells, however, they can't log in to your machine and use any resources other than those
that are available through the FTP server.
Guest user accounts are set up similarly to the anonymous FTP account. The users are restricted to their
home directories only, as is the anonymous FTP account, and their accounts contain the commands that
they might need to run while accessing their accounts via FTP.
If you decide that you need guest user accounts, do the following to implement a guest user:
1. Decide where the guest user's home directory should be. You could put your guest users in the
same location as your regular users. You also could create a directory somewhere for guest users
and place guest user directories in that location.
2. After you've decided where the guest account should reside, make a guest account. You could
create your user in the Accounts pane in System Preferences. Your guest user, however, might
not really have a need for all the directories that are made in a user account created in this way.
You can decide what directories might be necessary. If you anticipate having many guest users,
you could create a guest skeleton user as your basis for guest accounts.
3. The guest user should belong to some sort of guest group. Create a guest group with an unused
GID number. Edit the guest user's account to belong to the guest group. The guest user's shell
should be modified to some nonexistent shell. Make sure that the guest user's home directory and
everything in it are owned by the guest user with the guest group.
4. There are two possible ways to list the guest user's home directory. The traditional way is to
include a . where the FTP server should chroot to as the root FTP directory. For example, you
could create a guest user called betty, with a home directory located in
/Users/guests/betty/. To indicate that the root directory that you want betty to see when
she accesses the FTP server is /Users/guests/betty/, you would edit the home directory to be
/Users/guests/betty/./. If you wanted betty to be able to see a listing of other guest
users' directories before changing to her directory, you could list her home directory as
/Users/guests/./betty. With her home directory listed this way, her guest root directory does not
need to be specifically listed in the ftpaccess file. Figure 12.3 shows how the guest user's
home directory appears in NetInfo Manager when indicated by this method.
Figure 12.3. Here are the parameters used for the guest user betty. Her home directory
is listed in the traditional notation for a guest user, which includes a . to indicate the
root directory that the user sees when she FTPs.
The other way to list a guest user's home directory is to list the home directory as usual in
NetInfo Manager. In the ftpaccess file, list the guest user's root directory with the
guestuser control. The user's directory in the NetInfo database then looks like the notation for
any real user's home directory, as you can see for guest user ralph in Figure 12.4.
Figure 12.4. The home directory for this guest user is indicated in the regular fashion.
The root directory for FTP for this guest user is indicated instead by the use of the
guestuser control in the ftpaccess file.
The entry for the guest user's root directory in ftpaccess looks like this:
guestuser ralph
5. Include the shell that you use for the guest in /etc/shells. You might want the contents of
your fake guest user shell to be something like this:
#! /bin/sh
exit 1
6. Update the ownership information of the guest user's account to include the guest group GID that
is indicated in the NetInfo database.
7. Copy the same system files that are used for the anonymous FTP user to the guest user's account.
Specifically, make sure the system's required support files and commands
are included in the guest user's home directory. In this example, for user ralph, the files would
be placed in /Users/guests/ralph/bin/, /Users/guests/ralph/usr/lib/,
and /Users/guests/ralph/System/Library/Frameworks/System.framework/Versions/B/
with the same permissions and ownerships that are used for an
anonymous FTP account.
If you create a skeleton user account for FTP guests, these are files that would be useful to
include in the skeleton guest user account so that they get installed automatically.
Please note that this step is not necessary if you have used the --enable-ls option.
Alternatives to FTP
As we have mentioned, turning on the FTP server makes your machine more vulnerable to attacks from
the outside. There are other, more secure options you could consider using as alternatives to FTP.
scp and sftp
If you turn on the SSH server, two alternatives become available. You can transfer files either with
secure copy (scp) or secure FTP (sftp). Transfers made using scp or sftp are encrypted, thereby
providing an extra level of security. Specifically, the client creates a tunnel through SSH, using the
standard port 22, and executes an sftp-server process on the server end, which sends data back
through the encrypted channel. The sftp and sftp-server executables are part of the SSH package.
With FTP, however, passwords are transmitted in cleartext, adding yet another vulnerability to FTP
With the SSH server turned on, you can transfer files to other machines running SSH servers. Likewise,
those machines can transfer files to your machine by using scp or sftp. In addition, there exists a
freely available Mac OS client that has built-in scp capabilities. For PCs, there's a client with a
built-in sftp client. Running SSH removes almost any need for an FTP server. We discuss SSH in detail in
Chapter 14, "Remote Access: Secure Shell, VNC, Timbuktu, Apple Remote Desktop."
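Typical transfers, with hypothetical user and host names, look like this:

```
# Copy a local file to a remote home directory over SSH
scp report.pdf alice@server.example.com:/Users/alice/
# Or open an interactive secure-FTP session
sftp alice@server.example.com
```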
As you may recall, wu-ftpd can be built as an anonymous-only FTP server. If your real users are
transferring files via scp or sftp, but you still have a need for an anonymous FTP area, you might
then consider compiling an anonymous-only FTP server and running that alongside your SSH server.
Regularly checking the anonymous FTP area for any irregularities and keeping your wu-ftpd current
are still important activities.
Tunneling FTP over SSH
If, for whatever reason, transferring files with the scp and sftp commands isn't sufficient to meet your
needs, you can tunnel FTP connections through ssh logins (see Chapter 14 for more information). This
enables you to protect the command channel, but can't easily protect the FTP data channel. If you're
administering an FTP server, you can moderately increase your system security by using an FTP
configuration that encourages users to tunnel their FTP connections into your machine.
As was mentioned earlier, if you provide an open FTP port for your users to connect to, they'll be likely
to try it, and likely to enter their user ID and password on the clear-text data channel to attempt login.
You can bias your users against this behavior by exploiting wu-ftpd's capability for configuration and
creating specialized FTP servers to handle real and anonymous users. By creating a real-users-only FTP
server, using the --disable-anonymous compile-time option for wu-ftpd, you can create a
server that allows only real users to log in. To protect this server, you can restrict access to it to only
connections originating from the server machine itself. This way, the data from the connections never
visibly passes over the network, and any connections that come in over the network are rejected,
preventing users from unintentionally disclosing their information. SSH can then be used to create
tunnels between the user's client machines and the server, so that their command channels are carried
encrypted over the network to the server, and unpacked on the server. Because the connection to the
command channel looks (to the FTP server) as if it's coming from the server machine itself (where it's
being unpacked), it is allowed, and because it came to the server over the encrypted SSH tunnel, it is
protected against prying eyes. Here you'll learn how to configure a wu-ftpd server for this use.
Chapter 14 discusses in detail how to set up a client to tunnel an FTP connection to a server configured
like this.
You may need the --disable-pasvip option to get the tunneling to function properly.
To make tunneling work on the server side, you have to wrap the FTP server to accept connections only
from itself. The easiest way to set up the restriction is to make use of the TCP Wrappers program that
comes with the Mac OS X distribution.
In the method that uses only the /etc/hosts.allow file, you would do this with this syntax:
in.ftpd: <machine-IP> localhost: allow
in.ftpd: ALL: deny
If you must also have an anonymous FTP server running, or even if you don't, it's a good idea to run the
FTP server you're trying to make secure on noncanonical ports for FTP (such as 31 for ftp, 30 for
ftp-data). If you're running an anonymous-only server, leave it running on the standard FTP ports
(21 for ftp, 20 for ftp-data).
As you've seen, you don't need to edit anything to run an FTP server on the standard ports. All that's left,
then, is to configure your real-user FTP server and install it on an alternate set of ports. Follow these steps:
1. For ease of administration, it's a good idea to have each FTP server installed in a distinctly
separate location. For example, you could install your anonymous FTP server in /usr/local/
ftp and your real users' FTP server in /usr/local/wuftp.
2. Pick a set of unused port numbers. We like ports close to the standard FTP ports for convenience
—31 and 30 are our favorites.
3. Edit the /etc/services file to include the alternate services. You could call them something
like wuftp and wuftp-data. Whichever port number you assign to the wuftp service is the
one to which the clients wanting to connect need to tunnel.
4. Again for convenience, name the alternative FTP server itself something similar to the service
name, such as wuftpd. It is automatically installed as in.ftpd in whatever location you
specified during the build, but you can rename that file.
5. Finally, wrap the alternative FTP server to allow only connections from itself, but allow the
anonymous FTP server access from all machines.
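Steps 3 and 5 can be sketched concretely. The port numbers and service names below follow the example in the text; the block operates on a scratch copy of /etc/services so that it can be run harmlessly:

```shell
# Work on a scratch copy so we don't touch the real /etc/services.
scratch=/tmp/services.demo
cp /etc/services "$scratch" 2>/dev/null || : > "$scratch"

# Step 3: register the alternate FTP services (ports from the example in the text).
cat >> "$scratch" <<'EOF'
wuftp        31/tcp    # real-users-only wu-ftpd, reached via SSH tunnel
wuftp-data   30/tcp
EOF

# Step 5: the matching /etc/hosts.allow entries would look like this
# (wuftpd is the renamed real-user daemon; in.ftpd is the anonymous server):
#   wuftpd : localhost : allow
#   wuftpd : ALL : deny
#   in.ftpd : ALL : allow

grep '^wuftp' "$scratch"
```

On the real system you would append the two service lines to /etc/services itself (as root) rather than to a scratch copy.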
If you also decide to run Mac OS X's built-in firewall, ipfw, you must add statements to allow ipfw to
grant access to the alternative FTP server. In addition, set the passive ports control in the
ftpaccess file to a range of ports, such as 15001-19999. Then add a statement to the rules for ipfw
to allow access to whatever range of ports you specified with passive ports. You might find that
you have to keep tweaking your ipfw, anonymous, and real FTP configurations until everything works
in harmony. Be sure to check your logs as you're doing this. They're more informative than you might at
first realize.
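The ipfw additions might look something like the following sketch. The rule numbers, the port-31 control channel, and the 15001-19999 passive range simply follow the examples in the text; treat them as assumptions to be adapted, not a drop-in ruleset.

```shell
# Sketch only -- run as root on the Mac OS X server; numbers are illustrative.
# Allow the tunneled real-user control channel (port 31) from localhost only:
ipfw add 02000 allow tcp from to 31 in
ipfw add 02010 deny tcp from any to any 31 in

# Allow the anonymous server's standard FTP ports from anywhere:
ipfw add 02020 allow tcp from any to any 21 in

# Allow the passive-mode data range configured with passive ports in ftpaccess:
ipfw add 02030 allow tcp from any to any 15001-19999 in
```

Run `ipfw list` after adding rules to confirm they landed where you expected relative to your existing ruleset.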
If you decide to run the types of FTP servers suggested in this section, you might find that
guest accounts do not work. This appears to be a version-specific bug, or an unexpected
consequence of some recent change, as we've used all these capabilities simultaneously
before. Also, please note that only the channel that carries the username, password, and
command information can be tunneled. The channel that travels between machines when you
actually transfer a file using FTP can't be protected in this fashion. For many users, though,
this protection is sufficient.
This chapter has taken a look at how to make the optional FTP process more secure. Although OS X
comes with an FTP server provided by Apple, we suggest that if you do want to provide FTP services,
you run the more configurable wu-ftpd. No matter which server you decide to run, restrict access to
the server as much as possible, regularly check your logs, and keep the FTP server up to date. For the
default FTP server, you can do this with the OS X software updates, or by compiling and installing the
more recent versions by hand. For wu-ftpd, you have to update manually.
You also saw alternative suggestions to simply using FTP. Most preferable is using scp or sftp. If
you need an anonymous FTP server, then have the regular users use scp and sftp while you provide
an anonymous FTP server. However, you may also discover a need for having an FTP server available
for your real users. In that case, consider compiling a real-users-only FTP server, wrapping it with TCP
Wrapper, and teaching your users to tunnel connections to it over SSH.
Chapter 13. Mail Server Security
Basic Vulnerabilities
Activating Sendmail on Mac OS X
Protecting Sendmail
Updating Your Sendmail Installation
Postfix as an Alternative
Installing Postfix
Protecting Postfix
Delivering Mail—UW IMAP
MTAs, or Mail Transfer Agents, are among the most active and most frequently misconfigured services on
the Internet. Millions of messages are transferred daily, many carrying virus payloads spread by poorly
secured servers. Mail servers present a serious risk and responsibility for system administrators. Rather
than posing a single security risk, mail servers can open your system to numerous vulnerabilities.
Basic Vulnerabilities
All server processes that communicate with the outside world are open to security exploits because of
programming errors either in the daemon itself or in the support code that the server relies on. For the
system administrator, these types of exploits are unavoidable, but they are usually patched within hours
of being found. Keeping software current and staying abreast of problems, whether they occur on your
network or on the other side of the world, is the responsibility of any good administrator.
Mail servers suffer from the same misconfiguration issues as other servers. Inappropriate permissions
can allow internal (and sometimes external) users to write to your configuration files, access private
data, and compromise your local system security. Maintaining your mail server as a standalone system
with as few entry points as possible helps eliminate some of the risks of local mistakes, but in many
cases is not a justifiable expense.
Open Relays
MTAs are open to one security risk that is unique, and extremely costly in terms of resources, time, and
aspirin: the ability external users have to appropriate MTA services for their own needs. In simpler
terms, SPAM. The process of sending mail from machine to machine is called relaying. Each message,
as it moves over the network, makes its way from one computer to another, until it reaches the recipient.
Most email messages see at least two MTAs along the way: the originating MTA (which most people
know as their "mail server") and the destination MTA (the remote server). It is up to each server along
the way to determine whether it should deliver the message, reject it, or send it farther downstream for
more processing. Relaying itself isn't inherently evil—in fact it is necessary for email to exist as we
know it. The problem, however, occurs when an MTA has been told that it should relay all messages
that it receives—that is, act as an open relay.
Most mail servers are easily configured so that they will relay only messages for users on a certain
network, or those who have an account on the server itself. Open relays, on the other hand, process and
deliver messages for anyone, regardless of their credentials. The bulk of SPAM mail is transmitted
illegitimately through open relays, often without the machine's administrator even knowing it is taking
place. Unfortunately, the consequences are numerous, and range from a loss of system performance to
legal ramifications.
Open relays "open" themselves to being used by external individuals and organizations. These
freeloaders are, in essence, hijacking your system and its resources for their own purposes, without ever
actually "breaking in." Spammers often try to offload their outbound mail in extremely high volumes
and as quickly as possible. For small server setups, the influx of mail can easily overwhelm available
bandwidth and hard disk space as each incoming message is queued for delivery. The result is an
effective denial of service attack that requires no special software beyond the ability to send email.
A more serious consequence lies in where the responsibility falls if your system is used as the launching
point for a SPAM, virus, or other email-borne attack on someone else. Depending on the damage done
and the nature of the person or organization under attack, you may find yourself facing charges of negligence
if your server was used in the attack. Obviously, this is a worst-case scenario, but with recent world
events, computer and network security has moved to the forefront of law enforcement attention. In late
2001, legislation was proposed to classify hackers as terrorists. Although your server may not be the
source of the attack, you are responsible for its use and configuration. A mail server is a loaded weapon,
and you're responsible for conducting the appropriate background checks before unleashing it on the
world.
secure mail server (which is the end goal of this chapter), but be aware that there are
consequences to running a server without properly configuring it. We've seen incidents of
death threats to the President's pets sent through OSU mail servers, only to see the FBI
knocking on the door a day later. In the case of the threats to the pets, legitimate email users
sent the messages. On an open relay, the same thing could happen, be traced to your server,
and your chances of locating the original sender are reduced. Imagine trying to explain this
to men (or women) in dark glasses and trench coats.
Testing for Open Relays
You can easily test a mail server to see whether it is an open relay by telnetting into the SMTP port (port
25) on the remote server, then using the SMTP commands to attempt to send a "false" message. Table
13.1 shows the basic commands needed to communicate with an SMTP server, in the order in which they
are used.
Table 13.1. SMTP Command List
EHLO <machine name>              Identifies your computer to the remote SMTP server.
                                 Some machines use HELO instead, but this older form is
                                 largely outdated.
MAIL FROM: <originating email>   Sets the email address from which the message is being
                                 sent.
RCPT TO: <destination email>     Sets the remote email address to which a message
                                 should be delivered.
DATA                             Used to start the input of the email message to the
                                 server. The message is terminated by typing a period
                                 (.) on an empty line.
QUIT                             Exits the SMTP session; the message itself is sent once
                                 it has been terminated with the period.
For example, the following session creates a connection to a remote server that is misconfigured and set
to be an open relay. The remote server accepts a fake recipient and a fake originating email address and
attempts to deliver the message. This is an example of an open relay.
# telnet 25
Connected to
Escape character is '^]'.
220 ESMTP Sendmail 8.9.3/8.8.7; Sun, 5
May 2002 22:30:15
250 Hello [],
pleased to meet you
MAIL FROM: [email protected]
250 [email protected] Sender ok
RCPT TO: [email protected]
250 [email protected] Recipient ok
354 Enter mail, end with "." on a line by itself
This is a test.
--- John
On the flip side, a properly protected machine will block the message from being sent:
# telnet 25
Connected to
Escape character is '^]'.
250 Hello, pleased to
meet you
MAIL FROM: [email protected]
250 [email protected] sender accepted
RCPT TO: [email protected]
473 [email protected] relaying prohibited.
In this example, the server refuses to deliver the message. The connecting client is not authorized to send
the message. This is the target behavior for a "healthy" mail server.
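If you'd rather not type the SMTP dialogue by hand, the probe can be scripted. This sketch uses nc (netcat); the target hostname and addresses are hypothetical, and the sleep calls crudely pace the dialogue. Only probe servers you are authorized to test.

```shell
# Hypothetical target -- replace with a server you administer.
host=mail.example.com

{ sleep 2
  printf 'EHLO tester.example.com\r\n'; sleep 1
  printf 'MAIL FROM: <[email protected]>\r\n'; sleep 1
  printf 'RCPT TO: <[email protected]>\r\n'; sleep 1
  printf 'QUIT\r\n'
} | nc "$host" 25

# A "Recipient ok" reply for a foreign recipient suggests an open relay;
# a "relaying denied/prohibited" reply is the healthy behavior.
```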
Sendmail evolved from an early mail delivery package named, appropriately, delivermail, created in
1979 by Eric Allman for use over ARPANET—the predecessor of what we now know as the Internet.
At the time, delivermail used the FTP protocol on top of NCP (Network Control Protocol). As
ARPANET grew into the Internet, the need for enhanced mail services that ran over the new TCP/IP
protocol suite became apparent. SMTP—the Simple Mail Transfer Protocol—was developed to fill the
need for a dedicated mail transfer standard. With the addition of the DNS (Domain Name Service) in
1986, worldwide Internet email became a reality, and delivermail made the transition to sendmail.
Initially released on BSD, sendmail finds itself right at home on Mac OS X (a BSD derivative).
Throughout the 15+ years of its existence, sendmail has been continually enhanced, and, despite its
popularity, is one of the most complicated server applications to understand and configure. For more
information, read the "Brief History of Mail" at
Mac OS X originally shipped with sendmail 8.10.2 for both its 10.0 and 10.1 releases. The latest Mac
OS X, 10.2 (Jaguar), includes sendmail 8.12.2, which, while recent, is still not the most current version
of the sendmail software. Since the 8.10.2 release, several important features have been added to the
sendmail system, including improved SMTP AUTH support and security updates. To get an idea of why
you'd want to keep your sendmail current, let's take a look at a few sendmail exploits, starting with the
infamous Internet Worm.
The Morris Internet Worm
Although the version of sendmail that you have on your system is mostly secure, the history of the
server software is far from squeaky clean. In the mid-80s sendmail served as one of the primary
propagation methods for the infamous 80s Internet Worm—or the Morris Worm, named after its
creator, Robert Morris. On November 2, 1988, Robert Morris unwittingly released the most destructive
Internet attack to date. Based on a 99-line program, the Morris Worm managed to do what today we can
only hope is impossible—take down the Internet. Attacking only Sun and VAX BSD-based systems, in
less than a day the Internet Worm attacked popular system services and infested over 5000 of the
best-connected machines in the country.
Sendmail provided one of the potential entry points for the worm. To quote Eric Allman, "The trap door
resulted from two distinct 'features' that, although innocent by themselves, were deadly when combined
(kind of like binary nerve gas)." The remote attacker took advantage of sendmail installations that were
compiled with the DEBUG flag and the ability of sendmail to deliver messages to a system process rather
than a user. Specifically, the attacking program would connect to the server, and provide a null sender
and a subject line that would include the necessary keyboard commands to remove the preceding mail
headers and insert the code to start a command interpreter. The body of the message contained the code
necessary to compile and start the worm process again.
Although apparently not intended to be malicious, the worm itself contained bugs. The intended
operation for the worm was to infect quietly and maintain a single harmless process running on each
infected system. Unfortunately, the worm did not prevent itself from reinfecting an already infected
machine. As a result, each infected machine started more and more worm processes, resulting in a rapid
degradation of system services and eventual failure. For a complete history on the worm, the infestation,
and the resulting network disaster, read Donn Seeley's "A Tour of the Worm."
In reading the chronology of Donn Seeley's account of the worm, take notice of the length of
time it took for sendmail to be patched after the exploit was discovered. Open Source has its
advantages.
Recent Exploits
After weathering the Internet Worm, sendmail remained relatively quiet until the Internet (and thus
email) became popular in the 90s. Over the years, numerous exploits have been found, and all have
quickly been solved with an update. To give you an idea of what you're facing, here are a few of the
bugs that have surfaced over the years.
Input Validation Bugs
Sendmail is often called from the command line to process and send email. To do so, it must be run
SUID root. Unfortunately, numerous implementations of the server have suffered from poor input
bounds checking, allowing local users to execute code as root on a number of operating systems. Several
of the bugs are referenced here:
Linux. (CVE: CVE-2001-0653)
IRIX. (CVE: CAN-2001-0714 and CAN-2001-0715)
The Vacation Exploit
Although not directly related to a problem with sendmail itself, the Unix vacation utility, used almost
exclusively with the sendmail server, enabled an exploit through the MTA. vacation
was used to send an autoreply ("I'm on vacation!") to those who emailed your account. Rather than
providing a sender address to sendmail within the body of its outgoing message, it used the command
line to provide the address. The trouble with this approach is that vacation didn't check to see whether
the command-line address truly was an email address. Malicious users could instead provide
command-line arguments that would force sendmail to read an alternative configuration file and ultimately execute
arbitrary code. A description of this exploit can be found at
vacation_program_hole.html (CVE: CVE-1999-0057).
Unsafe Signal Handling
Used in a theoretical exploit, signal handling in sendmail could potentially lead to a race condition and
effectively allow local users to execute arbitrary code as root, or cause a locally based DoS attack.
Because this is not a remotely exploitable attack (and has never been seen implemented as an actual
exploit), it should be of little concern to those running a dedicated mail server. Additionally, sendmail
8.12 eliminates the possibility of the attack taking place by removing the SUID sendmail requirement.
To learn more about this potential exploit, read the advisory and its CVE entry.
Sendmail Exploit Resources
If you're interested in trying a few exploits on your Linux boxes (Linux distributions ship with a number
of different sendmail versions), take a look at the following:
● A code collection of sendmail exploits, cataloged by version number.
● Dozens of sendmail exploits and exploit descriptions.
● Hacker Internet Security Services—exploits for the most recent versions of sendmail on the
Linux platform.
As you can see, "stuff happens," but there are no reported OS X sendmail exploits at this time. Your
primary concern should be getting sendmail set up safely with no chances of attack due to
misconfiguration. In addition, you should learn how to update sendmail in case vulnerabilities do
become an issue in the future. These topics will be the focus of the next several pages of the chapter.
You must pay close attention to three areas, in particular:
File/Path Ownership and Permissions. Sendmail is picky (for good reason) about where you put
your files and who has access to them.
Relay and Access Control. Who can use your server, and when. Sendmail has a number of options
to determine whether and when mail should be accepted and processed.
Local User Access Control. If you plan to run your server on a computer where users have access
(ssh/ftp/etc.) to their accounts, you should limit their ability to intentionally (or
unintentionally) compromise system security.
Activating Sendmail on Mac OS X
Assuming you want to test the version of sendmail that came with your computer, you're about five
minutes away from having a running sendmail on your system. First, however, you need to make a
decision based on how you're going to be using your computer. Mac OS X is an operating system
designed for Mac users. The admin group is often shared among multiple users, frequently with disregard
for the role that those users play in the actual operation of the machine. Administrative users can write to
the root level (/) of the drive, which is typical of the "Mac" way of doing things.
If you're planning to run sendmail as a full-time MTA, we recommend updating to the latest
release, which you'll learn about later in this chapter.
Breaking the Mac Mold or the Sendmail Security Model
Sendmail, by default, has extremely strict permission requirements for the directory where it is installed:
Files that are group writable or are located in group-writable directories will not be read.
Files cannot be written to directories that are group writable.
.forward files that are links to other files will not be processed.
The root level of your Mac OS X file system is group writable, which immediately breaks the first rule.
To adhere to the sendmail security model, you need to "break" the longstanding Mac tradition of being
able to install applications and folders anywhere you want. The root level of the system must be
off-limits for changes to anyone except the root user.
Most software installation packages (MindVise, InstallerMaker, and so on) automatically
authenticate as root and install software without needing a root login. Admin users, however,
will not be able to manually add items at the root level.
If you're serious about running an MTA on your Mac OS X computer, you should think
about running a dedicated server, not sharing a computer with general users.
Many sendmail attacks have been local exploits, allowing local users to gain root access. To
be honest, if you're running a sendmail server and using it as a general use computer, you're
asking for trouble. This is not a recommended Maximum Mac OS X Security practice.
Testing the Sendmail Path Permissions
To determine the current state of sendmail permissions at any time, open the Terminal and use the
command /usr/sbin/sendmail -v -d44.4 -bv postmaster. This runs a test of all
sendmail file permissions and reports on the results. (Only the relevant output is shown here).
# /usr/sbin/sendmail -v -d44.4 -bv postmaster
safefile(/etc/mail/, uid=0, gid=20, flags=6000, mode=400):
safedirpath(/etc/mail, uid=0, gid=20, flags=6000, level=0, offset=0):
[dir /] mode 41775 WARNING
safedirpath(/private/etc, uid=0, gid=20, flags=6000, level=1,
[dir /private/etc] OK
[dir /etc/mail] OK
[uid 0, nlink 1, stat 100644, mode 400]
safefile(/etc/mail/local-host-names, uid=0, gid=20, flags=6580,
safedirpath(/etc/mail, uid=0, gid=20, flags=6580, level=0, offset=0):
[dir /] mode 41775 FATAL
[dir /etc/mail] Group writable directory
/etc/mail/ line 93: fileclass: cannot open '/etc/mail/
local-host- names':
Group writable directory
safefile(/etc/mail/relay-domains, uid=0, gid=20, flags=6580,
safedirpath(/etc/mail, uid=0, gid=20, flags=6580, level=0, offset=0):
[dir /] mode 41775 FATAL
[dir /etc/mail] Group writable directory
safefile(/etc/mail/service.switch, uid=0, gid=20, flags=6480,
safedirpath(/etc/mail, uid=0, gid=20, flags=6580, level=0, offset=0):
[dir /] mode 41775 FATAL
[dir /etc/mail] Group writable directory
No such file or directory
safefile(/etc/mail/service.switch, uid=0, gid=20, flags=6480,
safedirpath(/etc/mail, uid=0, gid=20, flags=6580, level=0, offset=0):
[dir /] mode 41775 FATAL
[dir /etc/mail] Group writable directory
No such file or directory
safedirpath(/var/spool/mqueue, uid=0, gid=20, flags=4, level=0,
[dir /] mode 41775 WARNING
safedirpath(/private/var, uid=0, gid=20, flags=4, level=1, offset=1):
[dir /private/var] OK
[dir /var/spool/mqueue] OK
The group-writable directories are flagged in the output, and, as you can see, many of these errors are
fatal, meaning sendmail won't even run. This problem can be eliminated by either of two methods:
removing the group write permissions or telling sendmail to ignore them.
Fixing Sendmail Path Permissions
To remove group write permissions from your directory, open a Terminal window and type the
following three commands:
sudo chmod g-w /
sudo chmod g-w /etc
sudo chmod g-w /etc/mail
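The effect of those chmod commands can be demonstrated safely in a scratch directory that mimics the group-writable layout sendmail complains about (the /tmp path is just a stand-in for / and /etc):

```shell
# Build a scratch tree that imitates Mac OS X's group-writable root.
root=/tmp/sm-perm-demo
mkdir -p "$root/etc/mail"
chmod 0775 "$root"          # group writable, like the mode 41775 in the test output
chmod 0775 "$root/etc"

# The fix from the text, applied to the scratch tree instead of /, /etc, /etc/mail:
chmod g-w "$root" "$root/etc" "$root/etc/mail"

# Verify nothing on the path is group writable any longer:
find "$root" -perm -020 -print   # prints nothing if the fix worked
```

Rerunning `sendmail -v -d44.4 -bv postmaster` after the real chmod commands should show the FATAL "Group writable directory" lines replaced with OK.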
Assuming you follow Apple's updates, your system directory permissions are at Apple's whim. Several
sections in this book discuss how to counteract the effects of the Apple updates. It's the price we pay for
automated updates and Apple's watchful eye. Table 13.2 contains the preferred sendmail directory and
file modes.
Table 13.2. Sendmail Directory and File Permissions
Permission Mode
If you'd prefer to "slightly" break the sendmail model, you can limit the extent to which sendmail
enforces its security checks by employing the DontBlameSendmail configuration option. This
configuration directive forces sendmail to drop one or more of its security policies to make for a more
"forgiving" installation.
Open the /etc/mail/ file in your favorite text editor, then search for the line that
#O DontBlameSendmail=safe
Replace the text with
O DontBlameSendmail=GroupWritableDirPathSafe
As the name may lead you to believe, this option makes sendmail consider directories with group write
attributes set as "safe." Your sendmail installation is now operable and ready to be started.
In a recent OS update, a second configuration file was added for a second instance of
Sendmail that handles local message submissions. To use Sendmail locally, you will need to
make the same changes to this file that you've made to
If you're interested in loosening the sendmail file permission reins even more, you can get a
complete list of the DontBlameSendmail options at
DontBlameSendmail.html. Keep in mind that the less strict the file permissions, the more
likely it is that a local exploit could take place.
Removing NetInfo Dependence
By default, Mac OS X's sendmail distribution attempts to read its configuration from NetInfo. This can
lead to hours of frustration as you try to determine why the configuration changes you're making in the
/etc/mail/ file are simply being ignored. To fix the problem, execute these two
commands (as root) to tell sendmail to pay attention to the correct config file:
# niutil -create . /locations/sendmail
# niutil -createprop . /locations/sendmail /etc/mail/
Be absolutely certain that you've completed this step before proceeding; otherwise future examples
within the chapter may not work as documented.
Setting Your Server Name
I'm making the assumption that you already have DNS support for your mail server and a static IP
address for its use. At the very least, you should have an A record registered, and very probably an MX
record. For more information on mail servers and DNS settings, check out http://www.graphicpower.
For whatever names you want your system to receive email, you must edit the file
/etc/mail/local-host-names with the hostnames that identify your mail server. For example, my machine has
both the names and For sendmail to recognize
both of these as valid names, I've added them both to my local-host-names file:
# more /etc/mail/local-host-names
Be sure to create and save this file before continuing with the setup. As with, this
should be owned by root and have the permission mode 644.
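As a sketch, creating the file and setting its mode looks like the following. The hostnames are hypothetical, and the block writes to a scratch directory rather than /etc/mail so it can be run anywhere:

```shell
# Scratch location; on a real server this file is /etc/mail/local-host-names.
dir=/tmp/sm-hosts-demo
mkdir -p "$dir"

# Hypothetical names this server should accept mail for:
printf '%s\n' '' '' > "$dir/local-host-names"

# Owned by root with mode 644 on a real system; here we just set the mode:
chmod 644 "$dir/local-host-names"
cat "$dir/local-host-names"
```

On the real system, run the printf and chmod (via sudo) against /etc/mail/local-host-names instead.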
Activating the Sendmail Daemon
Apple includes a script for starting sendmail in the /System/Library/StartupItems/Sendmail
directory on both the Mac OS X Client and Server operating system distributions. The
startup script looks for a corresponding MAILSERVER line in /etc/hostconfig. You need to be
logged in as root to make changes to this file.
Edit /etc/hostconfig, and look for this line:
MAILSERVER=-NO-
To activate sendmail, change -NO- to -YES-:
Save the file, and you're ready to go.
If you're a Mac OS X Server user, you should make sure that the Apple Mail Service is
disabled, either through the Server Admin utility or by editing the
/private/etc/watchdog.conf file.
Apple's Mail Service is started in watchdog.conf by this line:
mailservice:respawn:/usr/sbin/MailService -n
To run sendmail, be sure to set this line to
mailservice:off:/usr/sbin/MailService -n
Attempting to run both servers at the same time may result in your untimely insanity.
The next time you reboot your computer, sendmail will start. Alternatively, you can start sendmail
immediately with sudo /usr/sbin/sendmail -bd -q1h. The two options used when starting
sendmail, -bd and -q1h, tell the MTA to operate as a background daemon and set the queue
processing interval to 1 hour. Table 13.3 contains a few of the sendmail switches that can be used to
fine-tune the control of the MTA.
Table 13.3. Sendmail Runtime Options
Command-Line Switch    Description
-bd                    Run as a daemon on port 25.
-bD                    Run as a daemon in the foreground. Useful for testing and debugging.
-bp                    Show messages in the sendmail queue.
-d<level>              Set sendmail debugging level to the given number. For more information on
                       debugging levels, see the sendmail documentation.
-C<file>               Use an alternate configuration file (for testing). Sendmail will not run as root
                       when an alternative config file is in use.
-q<interval>           Set the interval at which stored queue items will be processed. Use one or
                       more numbers, followed by s, m, h, d, or w for second, minute, hour, day, or
                       week values.
-qI<substring>         Process only items whose queue ID contains the given substring.
-qR<substring>         Process only items whose recipient line (To:) contains the given substring.
-qS<substring>         Process only items whose sender line (From:) contains the given substring.
-X <logfile>           Log all activity through sendmail to the named log file.
This is only a partial list of switches that may be of use during initial testing and deployment of
sendmail. For a complete list of options, you'll need to look at the Sendmail Installation and Operation
Guide—not even the man pages list all the
possible settings.
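Pulling a few of these switches together, typical invocations look like the following sketch (all require root; the 30-minute interval and the recipient substring are arbitrary examples):

```shell
# Start sendmail as a daemon, processing the queue every 30 minutes:
sudo /usr/sbin/sendmail -bd -q30m

# Inspect the current queue without delivering anything:
sudo /usr/sbin/sendmail -bp

# Force an immediate, verbose pass over queue items addressed to one domain:
sudo /usr/sbin/sendmail -qRexample.com -v
```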
Protecting Sendmail
After sendmail is installed and operational, your next step is customizing the configuration to suit your
needs. The central sendmail config file is located at /etc/mail/ Sendmail is unique
in that although you can edit the main configuration file directly (as we did with the DontBlameSendmail
option earlier), you'll more frequently make your first changes to a macro file (.mc), which is then
run through the m4 macro processor to generate the more complex /etc/mail/ file.
If this seems confusing, don't worry—it is. The file is extremely difficult to configure
by hand. To counteract the complexity, sendmail has built-in macros (called "Features") that can generate
many of the complex settings for you. You edit the macro-based configuration file, process it with m4,
and voila: The final file is generated.
You'll need to be logged in as root or use sudo to make changes and generate the sendmail
configuration file.
Apple has included the basic Darwin (Mac OS X) sendmail macro configuration in the file
/usr/share/sendmail/conf/cf/ You should not make any changes to this
file. Instead, create a copy ( and use it to add your changes.
To begin, let's take a look at how to generate the basic configuration that came with your computer. You
can follow these steps to reset sendmail to its "fresh-install" state:
1. First, copy the Darwin macro file someplace safe for editing: cp /usr/share/sendmail/
conf/cf/<darwin macro file> /tmp/
2. Next, run the m4 macro processor on the file: m4 /usr/share/sendmail/conf/m4/cf.m4 /tmp/<darwin macro file> > /etc/mail/sendmail.cf
3. The sendmail.cf file is regenerated.
If you make any mistakes or find that sendmail is no longer starting, this is a quick way to get the system
back to a state that you can deal with.
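Assuming the stock Darwin macro file is named generic-darwin.mc (an assumption for illustration; check the actual filename in /usr/share/sendmail/conf/cf/ on your system), the reset sequence sketched above looks like this:

```shell
# cp /usr/share/sendmail/conf/cf/generic-darwin.mc /tmp/generic-darwin.mc
# m4 /usr/share/sendmail/conf/m4/cf.m4 /tmp/generic-darwin.mc > /etc/mail/sendmail.cf
```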
Each time you regenerate your sendmail.cf file, whether to add a configuration option or
reset it to the default state, you must edit the file to re-add the DontBlameSendmail option
discussed earlier, unless you chose to modify directory permissions in lieu of relaxing the
sendmail security restrictions.
Now let's look at how you can protect your newly active server. As I mentioned earlier, you are not
susceptible to many of the earlier attacks that plagued sendmail, so your primary focus should be on
protecting the server from unauthorized use, such as relaying or being used to deliver SPAM to your users.
You're in luck—partially. The sendmail distribution that ships with Mac OS X is already configured to
block relay requests. However, you still need to tell the server what it should allow. Setting the domains
allowed for relaying is a matter of creating and editing the file /etc/mail/relay-domains.
The relay-domains file can contain individual hosts or domains for which your server will relay
messages. For example, if I want my server to accept messages from the poisontooth.com domain
and others, I'd add each of these domains to the /etc/mail/relay-domains file:
# more /etc/mail/relay-domains
You can also add individual hostnames or IP addresses to the relay-domains file if you prefer to
restrict access to specific machines.
After making updates to your relay list, you must restart sendmail (by rebooting or sending a HUP signal
with kill) for the changes to take effect.
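As a sketch, a relay-domains file covering the poisontooth.com domain plus a hypothetical client network might look like this (client.example.com and are illustrative entries, not from the text):

```
# /etc/mail/relay-domains -- one domain, hostname, or IP address per line
poisontooth.com
```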
Sendmail Features
For the simple level of relay control we've seen so far, you won't need to generate a new sendmail.
cf file. Many more relay "features" can fine-tune the relay conditions for your server, and these will
require you to edit your .mc file and regenerate sendmail.cf. Table 13.4 contains a list
of available features and their use.
To use any of the following features you should first create a copy of the file in /usr/share/
sendmail/conf/cf/ for editing—such as /tmp/sendmail.mc.
All new features will then be added as new lines to /tmp/sendmail.mc, which will be
used to generate /etc/mail/sendmail.cf when you type m4 /usr/share/
sendmail/conf/m4/cf.m4 /tmp/sendmail.mc > /etc/mail/sendmail.cf.
Sendmail must be restarted after adding features.
Table 13.4. Sendmail Relay Features
FEATURE('relay_hosts_only') Matches only specific hosts in the relay-domains file, rather than domains.
FEATURE('relay_entire_domain') Relays for your entire local domain.
FEATURE('access_db') Uses a hash database /etc/mail/access for controlling relay options. We'll examine this in detail later in this chapter.
FEATURE('blacklist_recipients') Uses the /etc/mail/access database for controlling mail recipients as well as senders.
FEATURE('dnsbl') Enables real-time blacklisting service for incoming messages. We'll also look at this in detail shortly.
FEATURE('accept_unqualified_senders') Loosens the restriction of having to have a domain name as part of a sender address. Allows simple usernames (such as "jray") to be used.
FEATURE('accept_unresolvable_domains') Instructs sendmail to accept messages from domain names that cannot be resolved with DNS lookups. Not a good idea.
FEATURE('relay_based_on_MX') Allows relaying for any domain that has an MX record that points to your mail server.
FEATURE('relay_local_from') Relays messages that "appear" to come from your domain. Because this is easily spoofed, it doesn't provide very good protection.
FEATURE('promiscuous_relay') Relays everything. A very, very bad idea.
FEATURE('smrsh') Limits users' program access from .forward files.
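For example, to allow relaying for every host in your local domain, you might add this line to your .mc file and regenerate sendmail.cf (a sketch; m4 quoting uses a backquote to open and a straight quote to close):

```
FEATURE(`relay_entire_domain')
```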
For more information on sendmail relaying, read the official documentation at http://www.
Sendmail's Access Database
One of the more powerful relaying options is the sendmail access_db feature. This gives you control
over the messages that move through your system, down to the email address level. You can reject
domains, hosts, and email addresses, all from a single location. Enable the access database by adding two
feature lines to your sendmail.mc file, generating the sendmail.cf file, and restarting sendmail.
The first of the two lines enables the access control database, whereas the second (optional) line allows
entries in the database that will block outgoing messages to specific recipients.
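In a sendmail.mc file, these two features are conventionally written as follows (these are the standard sendmail feature names; the second line is the optional one):

```
FEATURE(`access_db')            dnl enable /etc/mail/access lookups
FEATURE(`blacklist_recipients') dnl optional: also match recipient addresses
```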
The access database is built from a simple text file that consists of lines containing a hostname, domain,
IP address, or email address and an action that should be taken when the element is matched. Table 13.5
shows the five available actions.
Table 13.5. Access Database Actions
OK Accept mail regardless of any other rules or access control blocks.
RELAY Accept and relay email to or from the named domain.
REJECT Reject email to or from the named address. An error message is returned to the client.
DISCARD Silently reject email to or from the named address.
ERROR:<### Message> Identical to REJECT, but allows a custom error message to be sent.
For example, consider the following access database file (/etc/mail/access.txt):
[email protected]	ERROR:"500 You won't spam us again."
evilandgood.com	ERROR:"500 Your domain is a known spam source."
Line 1. Mail to or from [email protected] is rejected.
Line 2. Sets up the same rejection behavior for any messages coming from the evilandgood.com domain.
Line 3. Overrides the line 2 rejection for a specific host in that domain.
Line 4. Relays messages to/from a given domain.
Line 5. Relays messages from a given host.
Set up your own rules in a temporary text file, such as /etc/mail/access.txt. This file will then
be processed with the makemap hash command to generate the binary hash file that sendmail uses:
# makemap hash /etc/mail/access < /etc/mail/access.txt
The access database should be owned by root and writable only by root. As with most everything, you
need to restart sendmail after updating the access database.
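A self-contained sketch of an access.txt file, using hypothetical domains (example.com and friend.example.org are illustrative only, not from the text) and showing all five action types:

```
# /etc/mail/access.txt -- pattern on the left, action on the right
spammer@example.com       REJECT
junkmail@example.com      DISCARD
evil.example.com          ERROR:"550 Your domain is a known spam source."
good.evil.example.com     OK
friend.example.org        RELAY
```

Process the file with makemap as shown above, then restart sendmail to put the rules into effect.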
Real-Time Blacklisting
Another useful feature of sendmail is the ability to use real-time DNS blacklisting (RBL) services to
block messages from known spam and open-relay sources. After blacklisting is enabled, each incoming
connection is checked against a dynamic online database to determine whether it is
from a known open relay or spammer. If a match is found, the message is rejected. The list of relays and
spam sources is kept up to date by user submissions, so it is always growing and evolving.
To activate this feature for your server, add the dnsbl feature line
to your .mc file, then use m4 to regenerate the main configuration file, as discussed earlier.
To test for blacklisted addresses, a standard DNS lookup is performed on a specially constructed version
of a hostname. For example, to check whether an IP address is blacklisted, you would look up a special
name formed by reversing the IP address's octets and appending the blacklist's domain to the end. If a lookup on the constructed name fails, the address is not blacklisted:
Default Server:
*** can't find
To see an example of a "successful" (or blacklisted) lookup, use the IP address reserved for
testing; it should return a valid DNS lookup:
Default Server:
For more information on the RBL system, visit the service's website. Unfortunately,
commercial licenses to the service do cost money. Noncommercial use is free, but you
must register your mail server with the service before you can successfully query the DNS.
Other RBL services are available that you can use at no charge if you don't want to use the built-in
settings. To use an alternative RBL server, simply specify a second parameter to the dnsbl feature
giving the alternative server's domain name.
Searching for RBL or RBL DNS is the easiest way to find the "latest and greatest" blacklisting servers
that are currently active.
Restricted Shell
If you're running an "open" system with user shell accounts, you should consider restricting what they
can do with their .forward files, which can be created inside a user's home directory. The .forward
file is typically used to forward messages to another account, or invoke another program, providing
incoming email contents as standard input. These files should nominally be set to mode 600; looser
restrictions cause sendmail to ignore them.
The .forward file consists of a line of destinations, separated by commas. While usually the file
contains a single external email address for forwarding (such as [email protected]), the destinations
can be local accounts, email addresses, files, or piped utilities. An example for my account (jray)
might look like this:
\jray, "| /usr/bin/vacation jray"
In this example, the first destination, \jray, causes a copy of the incoming message to be delivered to
my local account. The \ indicates that this is an end address and no aliasing should take place (to help
eliminate the chance of infinite mail loops). The second destination, "| /usr/bin/vacation
jray", pipes the incoming message to the vacation utility with the command-line parameter jray.
Commands (in this case, vacation) are executed with the user ID of the owner of the .forward file.
The trouble with .forward is that it provides a means for a user to execute a potentially untrusted
program with arbitrary input. Even though the sendmail application may be secure, the called
application may include a vulnerability that could be inadvertently (or intentionally) exploited.
To limit what a user can execute, you should consider adding FEATURE(smrsh) to your sendmail.
mc file and regenerating the .cf file. This feature activates the Sendmail Restricted Shell, which limits
what can be executed through the .forward file or aliases file to the programs contained, or
linked, within the directory /usr/adm/sm.bin. The smrsh utility even rejects potentially dangerous
characters, such as redirects, if they are included in the string to execute.
For example, if you wanted to provide users access to vacation and only vacation, you could do
the following as root:
# mkdir -p /usr/adm/sm.bin
# chmod 755 /usr/adm/sm.bin
# ln -s /usr/bin/vacation /usr/adm/sm.bin/vacation
The first two lines create and set permissions for the /usr/adm/sm.bin directory. (The directory
should be owned by root and writable only by root.) The third line creates a symbolic link from
/usr/bin/vacation to /usr/adm/sm.bin/vacation, allowing it to be executed from .forward
or your aliases file.
You should never link a shell or interpreter such as Perl into the sm.bin directory. This provides an
open (and inviting) door for intrusion.
Even if you aren't using .forward files, you may still want to use the smrsh feature,
because it provides the same restrictions for your aliases database. Although your
aliases file should be writable only by root, if it were compromised, only the
/usr/adm/sm.bin utilities could be invoked through it.
Sendmail Options
Now that you've tasted the feature macros that you can add to your .mc file, let's take a look
at configuration options that can be added directly to /etc/mail/sendmail.cf. Unlike the macro-based
features, you don't need to regenerate the configuration file each time you add an option.
However, keep in mind that if you edit your .mc file later and regenerate, you'll need
to add the options to the file again. Table 13.6 contains a number of options for additional access and
security control. Options are added, one to a line, preceded by O:
O <Option Name>=<Option Value>
Table 13.6. Sendmail Options
AliasFile = <alias path and filename>
Sets a location for the sendmail alias
file. Mac OS X defaults to the
Netinfo /aliases directory, rather
than a local file.
AllowBogusHELO
No value needed. If specified, this
option will allow non-RFC 1123
HELO/EHLO values to be sent.
ConnectionRateThrottle = <incoming
connection limit>
Applies a limit to the number of
incoming connections within a one-second time period.
MaxDaemonChildren = <child limit>
Sets a limit on the number of child
processes that can be running at any one time.
MaxMessageSize = <message size limit>
Sets an upper limit, in bytes, for
incoming messages.
MinFreeBlocks = <free filesystem blocks>
Refuses to queue messages if the file
system free space (in blocks) drops
below this limit. Use df to display
the free space on your mounted volumes.
PrivacyOptions = <option1, option2, ...>
Sets how sendmail adheres to the
SMTP protocol. Options include
public, needmailhelo,
needexpnhelo, noexpn,
needvrfyhelo, novrfy,
restrictqrun, noreceipts,
goaway, and authwarnings, and
will be discussed shortly.
QueueDirectory = <directory>
Directory to use as the mail queue.
QueueLA = <load average>
Sets the load average above which
messages will still be accepted, but
sendmail will no longer try to deliver
them immediately.
RecipientFactor = <factor>
This number is added to the message
priority (higher numbers are lower
priority) for each recipient of the
message. The more recipients, the
lower the priority. Defaults to 30000.
RefuseLA = <load average>
Limits incoming connections based
on the server's load average. Use
uptime to view your computer's
load averages.
RetryFactor = <factor>
Priority factor added to each message
as it is processed and reprocessed.
Defaults to 90000.
SafeFileEnvironment = <directory path>
Limits sendmail's capability to
deliver to anything but standard files
in a specific location. Sendmail
aliases and users with .forward
files are forced to write any files to
this directory.
StatusFile = <file path>
Path and filename of a file that is to
store mail server statistics. This is a
fixed-size log, and can be viewed at
any time with mailstats.
When active, "include" files (as are
often used in the alias file) and
.forward files are considered
"unsafe" and may not be used to write
files or reference other programs.
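As a sketch, a conservative group of these options in /etc/mail/sendmail.cf might look like the following (the values are illustrative, not recommendations from the text):

```
O ConnectionRateThrottle=3
O MaxDaemonChildren=20
O MaxMessageSize=10000000
O MinFreeBlocks=4096
```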
Privacy Options
The PrivacyOptions option controls how strictly your sendmail server follows the SMTP protocol
by enabling or disabling features of the protocol. For example, removing support for some commands
(EXPN, VRFY) is a wise idea: it prevents remote attackers from verifying that an account name exists, or
expanding an alias list to show the destinations. To use these two privacy options in the /etc/mail/sendmail.cf file, one would add the following line:
O PrivacyOptions=noexpn,novrfy
The complete list of privacy options is shown in Table 13.7.
Table 13.7. Available Privacy Options
Privacy Options
public Open access, no restrictions.
needmailhelo Requires that the HELO/EHLO command be sent before MAIL.
needexpnhelo Requires that the HELO/EHLO command be sent before EXPN.
noexpn Disables the EXPN command entirely. This prevents alias lists from being expanded.
needvrfyhelo Requires that the HELO/EHLO command be sent before VRFY.
novrfy Disables the VRFY command, eliminating the opportunity for a remote user to verify that an account exists.
restrictmailq Restricts the mailq queue-listing command to the owner or group of /var/spool/mqueue (root).
restrictqrun Restricts queue runs to the owner of /var/spool/mqueue (root).
noreceipts Disables return receipt acknowledgement.
goaway Activates all privacy options except for restrictmailq and restrictqrun.
authwarnings Adds X-Authentication-Warning headers to messages.
By default, Mac OS X comes with the authwarnings option set. If you are customizing
the base configuration file, you should either comment out the existing PrivacyOptions
line and add your own, or add your options to the existing line.
Safe File Environment
Users and administrators can employ the .forward and aliases files to perform a number of tasks,
including forwarding messages to other accounts, piping them to other utilities, and writing to files—
such as mail archives. By allowing files to be created anywhere on the file system, sendmail opens itself
to overwriting existing files, directories, and /dev device files. The
SafeFileEnvironment option creates a chroot environment where files can be written and does
not allow access outside the directory. By living with the inconvenience of being able to write only to a
specific directory, you can effectively shut sendmail off from being able to write to the rest of your file system.
The SafeFileEnvironment is specified as an option and a directory, such as /archive:
O SafeFileEnvironment = /archive
You must, of course, also create the directory that sendmail is using as its Safe File Environment. You
should also modify .forward files and the aliases database to use the correct path for any files
they may write.
In many cases, you may not need to update your alias/forward file paths (although it's still a
good idea), because sendmail sets the base of any files being written to that of the
SafeFileEnvironment. For an installation with /archive set, the two paths
/archive/mailinglists/logs and /mailinglists/logs will be equivalent;
both will write files to /archive/mailinglists/logs.
Aliases and Common Sense
A topic mentioned repeatedly throughout the sendmail configuration features and options is the aliases
database. An alias is similar to a .forward file in functionality, except it applies to the entire mail
server, not just a single account.
Aliases are more of an operational issue than one of security, so we've saved them until last. By default,
Mac OS X stores sendmail aliases in the Netinfo /aliases directory, accessible through the NetInfo
Manager or the nicl command-line utility. Although convenient from the Mac-user perspective, most
sendmail installations on other systems use the file /etc/mail/aliases to store alias information.
This file is easier to edit and more convenient for maintenance than the NetInfo directory. To activate
the /etc/mail/aliases file on your system, edit /etc/mail/sendmail.cf, looking for the
following line:
#O AliasFile=/etc/mail/aliases
Uncomment the line, so that it reads as follows:
O AliasFile=/etc/mail/aliases
Then save the configuration file. After restarting sendmail, you can start using /etc/mail/aliases
for storing aliases. This file should be owned and writable by root. Opening it to editing by anyone else
means risking your system's security. The aliases file itself contains the raw data defining account
aliases—it is not the file that sendmail uses directly. The actual aliases database is stored in
/etc/mail/aliases.db, and you create it by running the newaliases command as root each time you
make changes to the raw data in /etc/mail/aliases.
So, what does the alias database do, and how does it relate to security? As mentioned earlier, the alias
database is similar to a .forward file. It takes incoming messages and directs them to a specific
account mailbox, email address, utility, or file. As such, it has the same security risks as a .forward
file, but encompasses the entire system, rather than a specific user account. By default, Mac OS X
includes the following aliases in NetInfo:
The alias (administrator, MAILER-AGENT, and so on) is paired with a destination—in the case of the
default aliases, either root or postmaster. Because postmaster itself is aliased to root, the destination of
all the aliases is the root account mailbox.
Within the NetInfo naming conventions, the aliases are the NetInfo directory "names," and the
destinations are the "members," as shown in Figure 13.1.
Figure 13.1. NetInfo stores the sendmail aliases in the /aliases directory.
The aliases file follows a similar structure of alias names and destination members, separated by a
colon (:), one alias definition per line:
<alias name>: <destination>[, <destination>, ...]
The Netinfo aliases can be rewritten to a standard /etc/mail/aliases file like this:
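Based on the defaults described above (administrator and MAILER-AGENT among the alias names, and postmaster aliased to root), a minimal sketch of the resulting file would be:

```
# /etc/mail/aliases -- run newaliases after editing
administrator: root
MAILER-AGENT: postmaster
postmaster: root
```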
Besides just aliasing a single name to an account (or another alias), you can also add multiple
destinations to a given alias name, and these destinations don't have to be accounts. Alias destinations
can be any of the following:
Fully qualified email addresses. Besides just local accounts, you can use full email addresses, such
as [email protected], to forward the mail to another server.
File Names. A file in which to store the contents of the incoming message, useful for archiving
incoming messages.
Included Destinations. For creating a simple mailing list that works by redirecting incoming
messages to a list of recipients, you can add a special "include" destination by prefixing a fully
qualified path to a file of destinations (usually just a list of email addresses) with the text
:include:. For example, to include a list of email addresses in the file /etc/mail/
mylittlelist as destinations, I'd use :include:/etc/mail/mylittlelist as one of
the destinations for my alias.
Programs. Like a .forward file, an alias can also use a program as a destination. This is often
used for "auto-responders" that deliver help or other information when a message is sent to a
given email address.
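A sketch showing each destination type in /etc/mail/aliases form (the addresses, file paths, and auto-responder program here are hypothetical):

```
# forward to a fully qualified address on another server
john: jray@poisontooth.com
# append incoming messages to an archive file
archive: /var/mail-archive/support.mbox
# simple mailing list read from a file of addresses
mylist: :include:/etc/mail/mylittlelist
# pipe to an auto-responder program
help: "|/usr/local/bin/autorespond help.txt"
```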
Because of the capabilities of the aliases file (and Netinfo /aliases), you should always make
sure you know exactly what is contained in your system aliases. Often people inherit mail servers with
extremely complex alias definitions, some of which may be security risks. Keep in mind that the
SafeFileEnvironment option and smrsh feature both affect the aliases database, so activating
these options will help protect your aliases from being used for something nefarious.
Updating Your Sendmail Installation
Mac OS X originally shipped with sendmail 8.10.2. Unfortunately, at the same time, the sendmail
distribution was progressing through the 8.12.x series. Apple rectified this situation with Mac OS X 10.2
(Jaguar), but the fact remains: Apple's distribution has always lagged the official sendmail release. To
keep your system as secure as possible, you should pay close attention to security notices from
sendmail.org and update your installation manually, if necessary.
Keep in mind that Apple's security updates will likely overwrite your customized sendmail
installation, so you should always review the Installer receipts (/Library/Receipts)
and keep a copy of your configuration files backed up. Alternatively, because Apple's
installation uses the standard sendmail file layout, you may wish to create a custom
installation layout to avoid having your files overwritten.
If a new version of sendmail becomes available, you can quickly install it on top of the Apple-supplied
binary by downloading the latest source distribution (8.12.6 at the time of this writing) and following
these instructions.
First unarchive and enter the distribution directory:
# tar zxf sendmail.8.12.6.tar.gz
# cd sendmail-8.12.6/
It is strongly recommended that you verify the PGP signature of the sendmail code, as
described in the relevant CERT advisory, using a tool such as MacGPG or PGP. In October 2002,
a Trojan horse-infected sendmail distribution was widely circulated, forcing users to take additional
steps to verify the integrity of their downloads.
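With GnuPG installed, a verification session might look like this sketch (it assumes you have imported the sendmail signing key and downloaded the detached .sig file alongside the tarball):

```shell
# gpg --verify sendmail.8.12.6.tar.gz.sig sendmail.8.12.6.tar.gz
```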
Next, use sh Build to compile the daemon. Sendmail already "knows" about Darwin/Mac OS X, so
the compile should progress smoothly:
# sh Build
Making all in:
Configuration: pfx=, os=Darwin, rel=6.1, rbase=6, rroot=6.1,
arch=PowerMacintosh, sfx=,
Using M4=/usr/bin/gm4
Creating /Users/jray/Desktop/sendmail-8.12.6/obj.Darwin.6.1.PowerMacintosh/libsm
Making dependencies in sendmail-8.12.6/obj.Darwin.6.1.PowerMacintosh/libsm
cp /dev/null sm_os.h
Finally, type make install to install the binaries:
# make install
install -c -o root -g wheel -m 444 sendmail.0 /usr/share/man/cat8/
install -c -o root -g wheel -m 444 sendmail.8 /usr/share/man/man8/
install -c -o root -g wheel -m 444 aliases.0 /usr/share/man/cat5/
install -c -o root -g wheel -m 444 aliases.5 /usr/share/man/man5/
install -c -o root -g wheel -m 444 mailq.0 /usr/share/man/cat1/mailq.1
install -c -o root -g wheel -m 444 mailq.1 /usr/share/man/man1/mailq.1
install -c -o root -g wheel -m 444 newaliases.0 /usr/share/man/cat1/
install -c -o root -g wheel -m 444 newaliases.1 /usr/share/man/man1/
You can check your new sendmail installation by typing sendmail -d from the command line. You'll
need to type Control-C to break out:
# sendmail -d
Version 8.12.6
Sendmail Resources
Unfortunately, sendmail is a large application that requires years to master. If you need to create
complex configurations, you should invest in a book dedicated to the topic of managing sendmail
servers. Here are a few resources that can help you get started:
Sendmail, by Bryan Costales and Eric Allman, published by O'Reilly. This is the definitive guide
to sendmail. Co-written by Eric Allman, the original sendmail programmer, this book contains every
feature, tip, and trick necessary to tweak sendmail for any configuration.
Sendmail: Theory and Practice, by Paul A. Vixie and Frederick M. Avolio. Covers the history
and architecture of mail servers, as well as sendmail setup. Useful for gaining a perspective on
sendmail and its place as an Internet server.
www.sendmail.org. The home of your sendmail server software. Updates, FAQs, and
documentation can be found here.
Discussion and articles on the implementation and maintenance of sendmail.
Claus Aßmann's email software documentation, tips, and links.
Postfix as an Alternative
Sendmail is a monster of an MTA. Rarely is it necessary for an organization to use sendmail's
advanced (and remarkably obscure) features. More frequently, one would be better off installing an
alternative server, such as Postfix (http://www.postfix.org). To quote the author:
Postfix attempts to be fast, easy to administer, and secure, while at the same time being
sendmail-compatible enough to not upset existing users. Thus, the outside has a sendmail-ish flavor, but the inside is completely different.
Many people are hesitant to move away from mainstream software such as sendmail, but Postfix has
gained a following as one of the easiest and most stable Unix MTAs available. Better yet, it installs as a
drop-in sendmail replacement, meaning that any other software or scripts that rely on sendmail (such as
CGI scripts) will continue to function without additional modifications.
Postfix supports Mac OS X, integrates with the NetInfo /aliases map, and is much easier to
configure than sendmail. If you don't mind a few minutes compiling, you can be rid of sendmail for good.
Of course, Postfix isn't without fault. Before you start the installation, it's a good idea to take a look at
the list (the short list) of exploits for Postfix.
Recent Postfix Exploits
As with sendmail, two kinds of exploits can potentially affect Postfix: local and remote attacks.
As it happens, there are two reported Postfix exploits, one local and one remote. Unlike sendmail's,
the known Postfix exploits are minor in severity and occur only in extreme conditions.
Logfile DOS Attack
Versions of Postfix prior to 0.0.19991231pl11-2 could potentially be targets of a DoS attack aimed at
filling drive space. Early versions of Postfix kept extensive SMTP debugging logs, and attackers could
create and drop connections in an attempt to overflow the log and disrupt server operations. There are no
known occurrences of this attack taking place. For more information, visit
http://online.securityfocus.com/advisories/3722. Because we will be installing a more recent version of
Postfix, this will not be an issue with our installation. (CVE: CVE-2001-0894)
sudo MTA Invocation
On some Linux systems, an error existed in the sudo package that could be exploited by local users.
Attempting to invoke sudo would result in an error message being generated and Postfix being started
SUID root to deliver the message without the proper environment settings. This does not affect Mac OS
X, and in any case it was easily fixed by removing or upgrading sudo. (CVE: CVE-2001-0279)
Although Postfix is obviously a less widely used MTA than sendmail, the lack of serious problems is
still very telling. Over time, as with any software, new exploits will be found, but, if peace of mind is of
any concern to you, I recommend following through with the replacement of sendmail.
Installing Postfix
Postfix installation under Mac OS X is verging on trivial. The Postfix software includes several scripts
for everything from backing up your sendmail installation to adding the necessary users to Netinfo. This
is an excellent example of the open source community's embracing of Mac OS X.
Preparing the System
First, download the latest version of Postfix from the Postfix website (http://www.postfix.org).
Be sure that you find the "correct" latest version; Postfix seems to have changed numbering sequences
recently. At the time of this writing, 1.1.8 was the most recent.
Unarchive the software and cd into the software installation directory:
% tar zxf postfix-1.1.8.tar.gz
% cd postfix-1.1.8/
Next, you'll need to back up your existing sendmail installation in case you want or need to go back to
the original software. The auxiliary/MacOSX directory contains a script called backup-sendmail-binaries, which, as its name suggests, does just that. You'll want to either su to root or
use sudo to execute the rest of the installation:
# cd auxiliary/MacOSX
# ./backup-sendmail-binaries
If you'd like to back up the sendmail binaries by hand, the files you want to copy are /usr/
sbin/sendmail, /usr/bin/newaliases, and /usr/bin/mailq.
Now, it's time to add the users and groups necessary to run Postfix. There are two groups (postfix
and maildrop) and a user account (postfix) that must be created before you install Postfix. The
script niscript (also located in the auxiliary/MacOSX directory) will do it all for you.
# ./niscript
This script massages your netinfo database.
This can severely break
your system. If your netinfo database breaks, you get to keep the pieces.
No Warranty. Really.
This script tries to create two groups (if they do not already exist):
- postfix
- maildrop
and tries to create a user (if it does not already exist)
- postfix
which is member of group postfix.
Will create postfix as gid 88
Will create maildrop as gid 89
Will create postfix as uid 88
The postfix and maildrop UID and GIDs are not hardcoded to 88 and 89 as shown in
the example. The script automatically chooses unused numbers for you.
Finally, cd back into the main source distribution directory and compile the software with a simple make:
# cd ../..
# make
make -f MAKELEVEL= Makefiles
set -e; for i in src/util src/global src/dns src/master src/postfix
src/smtpstone src/
sendmail src/error src/pickup src/cleanup src/smtpd src/local src/
lmtp src/
trivial-rewrite src/qmgr src/smtp src/bounce src/pipe src/showq src/
postalias src/postcat
src/postconf src/postdrop src/postkick src/postlock src/postlog src/
postmap src/postqueue
src/postsuper src/nqmgr src/qmqpd src/spawn src/flush src/virtual;
do \
(set -e; echo "[$i]"; cd $i; rm -f Makefile; \
make -f Makefile MAKELEVEL=) || exit 1; \
cc -g -O -I. -I../../include -DRHAPSODY5 -I.. -c unknown.c
cc -g -O -I. -I../../include -DRHAPSODY5 -I.. -o virtual virtual.o
mailbox.o recipient.o
deliver_attr.o maildir.o unknown. o ../../lib/libmaster.a ../../lib/
libglobal.a ../../lib/
cp virtual ../../libexec
Basic Setup
After Postfix has successfully compiled, the next step is to run the install script. It prompts you for
various settings (for most, the default answer will suffice). Type make install to run the install
script (the output of which is summarized for the sake of brevity):
# make install
Please specify the prefix for installed file names. This is useful
if you are building ready-to-install packages for distribution to
other machines.
install_root: [/]
Please specify a directory for scratch files while installing
Postfix. You must have write permission in this directory.
tempdir: [/Users/jray/Desktop/book code/postfix-1.1.8] /tmp
Please specify the destination directory for installed Postfix
configuration files.
config_directory: [/etc/postfix]
Please specify the destination directory for installed Postfix
daemon programs. This directory should not be in the command search
path of any users.
daemon_directory: [/usr/libexec/postfix]
Please specify the destination directory for installed Postfix
administrative commands. This directory should be in the command
search path of administrative users.
command_directory: [/usr/sbin]
Please specify the destination directory for Postfix queues.
queue_directory: [/var/spool/postfix]
Please specify the full destination pathname for the installed
Postfix sendmail command. This is the Sendmail-compatible mail
posting interface.
sendmail_path: [/usr/sbin/sendmail]
Please specify the full destination pathname for the installed
Postfix newaliases command. This is the Sendmail-compatible command
to build alias databases for the Postfix local delivery agent.
newaliases_path: [/usr/bin/newaliases]
Please specify the full destination pathname for the installed
Postfix mailq command. This is the Sendmail-compatible mail queue
listing command.
mailq_path: [/usr/bin/mailq]
Please specify the owner of the Postfix queue. Specify an account
with numerical user ID and group ID values that are not used by
any other accounts on the system.
mail_owner: [postfix]
Please specify the group for mail submission and for queue management
commands. Specify a group name with a numerical group ID that is
not shared with other accounts, not even with the Postfix mail_owner
account. You can no longer specify "no" here.
setgid_group: [postdrop] maildrop
Please specify the destination directory for the Postfix on-line
manual pages. You can no longer specify "no" here.
manpage_directory: [/usr/local/man]
Please specify the destination directory for the Postfix sample
configuration files.
sample_directory: [/etc/postfix]
Please specify the destination directory for the Postfix README
files. Specify "no" if you do not want to install these files.
readme_directory: [no] /etc/postfix/readme
Warning: you still need to edit myorigin/mydestination/mynetworks
parameter settings in /etc/postfix/main.cf.
See also the included documentation for information about
dialup sites or about sites inside a firewalled network.
BTW: Check your /etc/aliases file and be sure to set up aliases
that send mail for root and postmaster to a real person, then
run /usr/bin/newaliases.
After the install has completed, you should create an archive of the Postfix installation. This will enable
you to swap Postfix and sendmail at will by using the included Postfix scripts. Change back into the
auxiliary/MacOSX directory and run the backup-postfix-binaries script:
# ./backup-postfix-binaries
Finally, activate the Postfix installation by using the activate-postfix script:
# ./activate-postfix
This surprisingly useful script automatically does everything you need to finish setting up the
installation. The /System/Library/StartupItems/Sendmail startup item is automatically
disabled while a /System/Library/StartupItems/Postfix item is created. You can reverse
this process by using the script activate-sendmail.
Basic Host Settings
When you reboot your Mac OS X computer, Postfix starts. (You can also start it at any time by typing
/usr/sbin/postfix start.) Unfortunately, you need to adjust a few more settings before the
software will run successfully.
Almost all the Postfix configuration you'll perform is done in the /etc/postfix/main.cf file. All
options in main.cf consist of lines in the form:
<setting> = <value>
where <setting> is one of the Postfix directives, and <value> is a simple setting (such as a
hostname or timeout value), a path to a hash file, such as hash:/etc/aliases.db, or, in the case
of Mac OS X, a NetInfo path, such as netinfo:/aliases. In some cases, lists of values can be
used, separated by commas.
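As a concrete illustration, a hypothetical configuration fragment might combine all three value styles. (The hostname, timeout, and paths below are examples, not required values.)

```
# A simple hostname value and a simple timeout value
myhostname = mail.example.com
smtpd_timeout = 300s

# A comma-separated list mixing a hash file and a NetInfo path
alias_maps = hash:/etc/aliases, netinfo:/aliases
```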
A hash file is a binary lookup table that holds key and value pairs. To create a hash file, use
either the postmap or postalias command. Alias files, for example, contain <key>
and <value> fields separated by a colon (:) and whitespace, as in this example
/etc/aliases file:
postmaster: root
operator: jray
admin: jray
All other hash files simply contain <key> and <value> fields separated by whitespace.
The postalias command works exclusively on alias files, whereas postmap is used to
generate all other hashes.
To use the Postfix utilities to generate hash files from the corresponding text files, type either
postmap <text file> or postalias <alias text file>. Within a few
seconds, a binary hash is created in the same location as the original file, with the extension .db.
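For instance, after editing the alias file, the binary maps can be regenerated with a single command each. (This sketch assumes the Postfix tools are in your path; the access table given to postmap is a hypothetical example of a non-alias hash source.)

```
# postalias /etc/aliases
# postmap /etc/postfix/access
```

The first command rebuilds /etc/aliases.db; the second builds /etc/postfix/access.db from a whitespace-separated key/value text file of the same name.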
Edit the /etc/postfix/main.cf file now. To get up and running quickly, you need to tell Postfix
what your server's hostname and domain are by using the mydomain and myhostname directives.
Look for the myhostname and mydomain lines, both of which are initially commented out with the #
character. Uncomment both lines and change them to accurately reflect the state of your server
and network. For example, my server is on the domain poisontooth.com. Thus, my file reads:
myhostname =
mydomain =
After assignment, these setting variables (myhostname, mydomain, etc.) can be
referenced with a dollar sign ($) in other configuration directives.
Your Postfix server should now be ready to run. To verify the configuration, run
/usr/sbin/postfix check. This checks for errors in your setup. Start the server itself by rebooting
or typing /usr/sbin/postfix start as root.
# /usr/sbin/postfix start
postfix/postfix-script: starting the Postfix mail system
Verify that Postfix is running by telnetting to port 25 on your server computer. Use the QUIT SMTP
command to exit:
# telnet localhost 25
Connected to
Escape character is '^]'.
220 ESMTP Postfix
Assuming your system responds similarly, everything has gone according to plan and you're ready to
fine-tune the Postfix system. For simple setups, this may be as far as you need to go. Postfix
automatically configures itself to relay only for machines on the same subnet as the one to which
you're connected. All others are denied.
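To broaden or tighten that default behavior explicitly, the relevant main.cf directives can be set by hand. The following is a hedged sketch; both directives are covered in Table 13.8, and the values shown are examples:

```
# Accept mail for the host itself and for the whole domain
mydestination = $myhostname, localhost.$mydomain, $mydomain

# Trust only the local computer for relaying
mynetworks_style = host
```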
Protecting Postfix
When you changed the myhostname and mydomain directives a moment ago, you edited two of the
hundreds of configuration options available for use in the /etc/postfix/main.cf file. Thankfully,
the Postfix installation includes a number of sample configuration files, with documentation inside, in
the /etc/postfix directory. These files are not meant to be used as drop-in replacements for the
standard main.cf file; they simply document options that you can add to main.cf. For example,
one of the sample files contains the instructions you need to add an alias map to Postfix. Table
13.8 contains a number of settings you may find useful.
Table 13.8. Common Postfix Settings

myhostname = <Postfix server name>
    Sets the hostname of the machine running the mail server.

mydomain = <Postfix server domain>
    The domain of the Postfix server.

inet_interfaces = <all | hostname | ip, ...>
    A list of the network interfaces on which Postfix will be active.
    By default it listens on all active interfaces.

mydestination = <domain name, ...>
    A list of domain names and hostnames for which Postfix will accept
    email. By default, Postfix accepts email for $myhostname and
    localhost.$mydomain. If your server accepts email for the entire
    domain, you should add $mydomain to the list.

mynetworks_style = <class|subnet|host>
    Sets how Postfix determines what portion of the local network it
    should trust for relaying. By default, the local subnet is trusted.
    To trust clients in the same network class, use the class setting.
    Finally, to trust only the local computer, use the host setting.

mynetworks = <network/netmask, ...>
    Used in lieu of mynetworks_style, mynetworks sets a list of network
    addresses that should be considered local clients, specified in
    network/netmask format. This can also be set to a hash file, or any
    of the supported Postfix table lookup methods, including a NetInfo
    path.

relay_domains = <host|domain|file>
    A list of domains for which Postfix will relay mail. The list can
    consist of host or domain names, files containing hostnames, or
    lookup tables (such as hash tables or NetInfo paths). These are in
    addition to the mydestination and mynetworks settings.

local_recipient_maps = <user lookup tables>
    A list of lookup tables for usernames that will be accepted as
    local for the mail server. By default, this is set to the local
    user accounts and any alias lookup tables that exist.

alias_maps = <alias lookup tables>
    One or more lookup tables that contain the alias lists for the
    server. You may want to consider using hash:/etc/aliases,
    netinfo:/aliases, which corresponds to the defaults for sendmail.
    Remember, postalias is used to regenerate the alias hash files.

home_mailbox = <mailbox path>
    The path to the local mailbox files. Mac OS X users should use the
    default /var/mail.

smtpd_banner = $myhostname <banner text>
    Sets the banner text to be displayed when a host connects. RFC
    requirements state that the hostname must come at the start of the
    banner ($myhostname).

local_destination_concurrency_limit = <limit integer>
    A limit on the number of simultaneous local deliveries that can be
    made to a single user. The default is 2.

default_destination_concurrency_limit = <limit integer>
    The number of simultaneous connections that Postfix will make to
    deliver mail. The default is 10. Keeping this number low can help
    protect against inappropriate use of your server if it is
    compromised. It is unlikely that your server will ever need to make
    10 simultaneous connections to a single domain at a time.

disable_vrfy_command = <yes|no>
    Disables the VRFY SMTP command, which can be used by spammers to
    verify that an account exists on the server.

smtpd_recipient_limit = <limit integer>
    The maximum number of recipients that will be accepted per message.
    Keeping this limit low makes your server unusable for mass spam.

smtpd_timeout = <timeout s|m|h|d|w>
    The timeout period to wait for a response from an SMTP client (in
    seconds, minutes, hours, days, or weeks).

strict_rfc821_envelopes = <yes|no>
    Sets a requirement for RFC 821-compliant messages. If set to "yes,"
    MAIL FROM and RCPT TO addresses must be specified within <>.

smtpd_helo_required = <yes|no>
    Determines whether Postfix will require the HELO or EHLO SMTP
    greeting at the start of a connection.

smtpd_client_restrictions = <restrictions>
    Used to fine-tune the restrictions on connecting clients; can
    handle everything from real-time blacklisting to access control
    lists.

smtpd_helo_restrictions = <restrictions>
    Used to fine-tune the restrictions on what machines are permitted
    within a HELO or EHLO greeting.

smtpd_sender_restrictions = <restrictions>
    Used to fine-tune the restrictions on what machines are permitted
    within a MAIL FROM address.

smtpd_recipient_restrictions = <restrictions>
    Used to fine-tune the restrictions on what machines are permitted
    within a RCPT TO address.
Using smtpd restrictions
Because the smtpd restriction directives are a bit more complex than what can be described in a
table column, we'll provide more detailed coverage now. If you remember the sendmail configuration,
the FEATURE(access_db) and FEATURE(dnsbl) macros were used to set up a relay control list
and blacklisting. In Postfix, these features (and several others) are activated by smtpd
restrictions, but rather than simply being compared against the mail sender, these access controls
can be applied against clients, HELO/EHLO headers, and MAIL FROM/RCPT TO addresses.
Four different types of restrictions are considered here: client, helo, sender, and recipient (as defined in
Table 13.8). They all share some common restriction options, so rather than list them separately,
Table 13.9 combines them.
Table 13.9. Common Options for Setting the smtpd Restrictions

reject_unknown_client
    Reject the connection if the client hostname is unknown.
    Use in: client, helo, sender, recipient.

reject_invalid_hostname
    Reject the connection if the HELO hostname is malformed.
    Use in: helo, sender, recipient.

reject_unknown_hostname
    Reject the connection if the HELO hostname does not have a matching
    DNS A or MX record.
    Use in: helo, sender, recipient.

reject_unknown_sender_domain
    Reject if the sender's domain does not have a matching DNS A or MX
    record.
    Use in: sender, recipient.

reject_non_fqdn_sender
    Reject sender addresses that are not fully qualified.
    Use in: sender, recipient.

reject_non_fqdn_recipient
    Reject recipient addresses that are not fully qualified.
    Use in: recipient.

check_client_access <lookup table>:<path>
    Restricts based on a lookup table that consists of key and value
    pairs, where the key is a host, domain, or address, and the value
    is REJECT or OK.
    Use in: client, helo, sender, recipient.

check_helo_access <lookup table>:<path>
    Restricts the HELO/EHLO hostname based on a lookup table of the
    same form.
    Use in: helo, recipient.

check_sender_access <lookup table>:<path>
    Restricts the MAIL FROM address based on a lookup table of the same
    form.
    Use in: sender.

check_recipient_access <lookup table>:<path>
    Restricts the RCPT TO address based on a lookup table of the same
    form.
    Use in: recipient.

reject_maps_rbl
    Rejects the connection based on a DNS blacklist.
    Use in: client, helo, sender.
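Put together in main.cf, a hedged sketch of a tightened SMTP listener using several of the Table 13.9 restriction options might look like the following. (The access-table path is a hypothetical example; it would need to be built with postmap before use.)

```
# Require a greeting and disable account probing
smtpd_helo_required = yes
disable_vrfy_command = yes

# Screen envelope senders against an access table, and reject
# senders that are not fully qualified
smtpd_sender_restrictions =
    check_sender_access hash:/etc/postfix/access,
    reject_non_fqdn_sender
```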
You may want to look through the sample configuration files that came with Postfix. Table 13.9 lists the
more common (and useful) restrictions, but several more are documented in the samples. Out of the
box, Postfix is easier to work with than sendmail, but there are still hundreds of potential settings—
beyond what can easily be documented in a single chapter.
Postfix Resources
For more information about Postfix and its operation and configuration, look into these resources:
Postfix, by Richard Blum, Sams Publishing. The only printed reference specifically for Postfix,
this book covers the use and configuration of the Postfix MTA in an easy-to-follow format.
The Postfix home page. Provides links to the latest software release, FAQs, and supporting
documentation.
The Postfix mailing list archive. (For information on subscribing to the list itself, see the Postfix
home page.)
A BSD Today article on Postfix compilation, setup, and configuration.
As the popularity of the MTA increases, additional resources will likely become available, but for now,
the selection is quite limited.
Delivering Mail—UW IMAP
An MTA is only half of a mail server. Unless the server is also the client computer, or you plan to use
local mail clients such as pine or elm, you need to provide delivery services for clients to access the
stored mail spools. To handle delivery for messages received by the SMTP system, you can install the
University of Washington's imapd (UW IMAP) software. This server can deliver to both POP and
IMAP clients, and is simple to compile and install on Mac OS X.
Recent Exploits
Because your system doesn't come with UW IMAP installed, you need to install a version. Obviously
you'll install the latest version, which (equally obviously) won't have any exploits at the time you
download and install it. That, however, doesn't mean that its past has been squeaky clean; in fact, a
number of exploits have affected the software in the past.
AUTHENTICATE buffer overflow
The UW IMAP server has been subject to a number of buffer overflow problems. One of the most
notable was the AUTHENTICATE command overflow. Used during the login process, IMAP's
AUTHENTICATE command is used to specify the authentication method that the IMAP client uses.
When the buffer for AUTHENTICATE suffers a specific overrun, it allows remote attackers to send
arbitrary machine code to the server, which would then be executed. Previously, a similar exploit had
been found that functioned in much the same way, but was accessed through the LOGIN IMAP function.
These exploits are extremely serious because the server process executes with root permissions, and the
"injected" exploit code runs as part of the already active process, making it difficult to detect. One
positive note is that the existing exploit code is (as expected) x86-centric. This isn't saying that the
exploit isn't possible on the PPC architecture, but rather that it would need to be reworked to affect an
early IMAP server running on Mac OS X. For more information, visit
Other Buffer Overflows
The last reported remote root exploit for UW IMAP was in the late 90s. Unfortunately, since that time,
several other buffer overflows have been reported to (or by) the authors, as late as mid-2001. These are
not root exploits, yet still pose a risk of allowing remote users to execute code on the affected server. For
more information, see
IMAP and POP Protocol Exploits
Unfortunately, even with a perfectly healthy UW IMAP, your mail server is far from secure. As you've
learned repeatedly throughout the book, many common TCP/IP protocols are insecure and open for
sniffing. Web, FTP, and Mail protocols make up a large portion of the Internet's traffic. None of these, in
their default states, offer any sort of protection from a network sniffer.
For example, POP-based email is the most popular form of mail delivery online. Yet the exchange of
username and password isn't protected at all:
% telnet 110
Connected to
Escape character is '^]'.
+OK Netscape Messaging Multiplexor ready
USER johnray
+OK password required for user johnray
PASS Gringle121
+OK Maildrop ready
In this sample POP connection, I connect to port 110, provide my username and
password in clear text, and I'm logged in. Similarly, IMAP servers also support cleartext passwords:
% telnet 143
Connected to
Escape character is '^]'.
* OK Netscape Messaging Multiplexor ready
00001 login johnray Gringle121
00001 OK User logged in
Sniffing this information compromises my email. Sniffing a username and password on
your server's email potentially compromises your account—and presumably you have root privileges.
One might think, "Eh, so what? Who is going to sit around all day and watch network traffic waiting to
see a username and a password?" The answer is no one.
Instead, they'll use one of dozens (or perhaps dozens of dozens) of available network sniffers that are
programmed with basic knowledge of common protocols and can simply watch for username/password
pairs and record them to a file. One such software package, available for Mac OS X, is ettercap.
You'll learn more about this in Chapter 7, "Eavesdropping and Snooping for
Information: Sniffers and Scanners." For now, understand that the basic mail protocols are not secure.
Not without some additional help.
To eliminate some of the threat of sniffing account passwords directly from IMAP and POP traffic, you
can create the SSL-wrapped equivalents of these services by using OpenSSL. SSL (Secure Sockets
Layer) protects against sniffing by encrypting all traffic moving across the network. Developed by
Netscape for use on the Web, SSL has been adopted as an industry standard for encrypting a number of
protocols—in this case, IMAP and POP.
Outlook, Entourage, and Mac OS X's Mail application all support SSL-encrypted mail traffic natively—
just select the SSL checkbox. You can find this option by clicking the Options button in the Account
Information tab within account preferences in Mac OS X's Mail, as seen in Figure 13.2.
Figure 13.2. SSL is supported natively in Mac OS X's Mail.
The SSL-enabled versions of IMAP and POP are referred to as IMAPS and POPS, and operate on
TCP ports 993 and 995, respectively. Conveniently, these services (and their nonencrypted
counterparts) are already listed in Mac OS X's /etc/services file, so you're ready to configure and
install UW IMAP.
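The port assignments can be confirmed by looking the service names up in /etc/services. The sketch below embeds sample lines matching the real entries in a shell variable so the lookup can be demonstrated anywhere:

```shell
# Sample /etc/services-style lines for the SSL-wrapped mail protocols.
services='imaps   993/tcp
pop3s   995/tcp'

# Look up the port/protocol assigned to each service name.
printf '%s\n' "$services" | awk '$1 == "imaps" {print $2}'   # → 993/tcp
printf '%s\n' "$services" | awk '$1 == "pop3s" {print $2}'   # → 995/tcp
```

On a live system, the same awk lookups can simply be pointed at /etc/services itself.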
Compiling and Installing
Installation and setup of your IMAPS/POPS server is fast and painless. Fetch the latest version of UW
IMAP from the University of Washington's FTP site.
Uncompress and unarchive the imap.tar.gz file, then cd into the source directory:
$ tar zxf imap-2002.DEV.SNAP-0205.tar.gz
$ cd imap-2002.DEV.SNAP-0205032002
Before compiling, you need to make a few important changes to the source code. Unfortunately, UW
IMAP is not very "user-friendly" to set up, so you need to do all configuration by editing the
src/osdep/unix/env_unix.c file. This isn't difficult, but it is atypical of most modern Unix software.
Open the env_unix.c file in your favorite text editor now.
Look for the line reading
static char *mailsubdir = NIL;    /* mail subdirectory name */
and change it to
static char *mailsubdir = "Library/Mail/Mailboxes";    /* mail subdirectory name */
This sets the IMAP server to use your account's standard Mailboxes directory for storing IMAP
mailboxes.
Now it's compile time. Make sure that you're within the root level of the UW IMAP distribution.
Because you want to compile for Mac OS X with SSL support built in, you need to provide two options
to the make command: osx and SSLTYPE=unix:
% make osx SSLTYPE=unix
Applying an process to sources...
tools/an "ln -s" src/c-client c-client
tools/an "ln -s" src/ansilib c-client
tools/an "ln -s" src/charset c-client
tools/an "ln -s" src/osdep/unix c-client
tools/an "ln -s" src/mtest mtest
tools/an "ln -s" src/ipopd ipopd
tools/an "ln -s" src/imapd imapd
ln -s tools/an .
Following the compile, the server binaries will be stored in imapd/imapd and ipopd/ipop3d. You
should copy these to /usr/local/libexec:
# cp ipopd/ipop3d /usr/local/libexec
# cp imapd/imapd /usr/local/libexec
UW IMAP is now installed, but you still need to make a few more configuration changes before the
services can be used.
Chapter 11 provided detailed information on setting up services to run under inetd and xinetd.
Because an entire chapter is dedicated to this, we won't repeat the complete instructions here.
Table 13.10 contains the parameters you need for setting up the services with either of these Internet
daemons.
Table 13.10. Parameters for Configuring inetd or xinetd
Service  Socket Type  Protocol  wait/nowait  User  Server Process
imaps    stream       tcp      nowait       root  /usr/libexec/tcpd /usr/local/libexec/imapd
pop3s    stream       tcp      nowait       root  /usr/libexec/tcpd /usr/local/libexec/ipop3d
This assumes that you'll also want to use TCP Wrappers with the IMAPS and POPS services,
which isn't necessary under xinetd. Keep in mind that you need to add similar entries for
IMAP/POP if you want to run the cleartext versions of the services.
In addition, there is no need to provide access to both IMAPS and POPS unless required by
your users. If you don't need one or the other, don't add it.
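Under xinetd, the parameters in Table 13.10 translate into a service block like the following sketch for IMAPS, which would be saved as /etc/xinetd.d/imaps (the pop3s entry is analogous; this version invokes the server directly, without TCP Wrappers):

```
service imaps
{
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = root
        server      = /usr/local/libexec/imapd
        disable     = no
}
```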
Creating a Certificate
The IMAPS and POPS services require two certificates to be installed: imapd.pem and ipop3d.pem.
We're going to look at creating self-signed certificates. Production servers should consider purchasing
certificates from a qualified CA.
For the IMAPS server, issue the following command at a shell prompt:
# openssl req -new -x509 -nodes -out imapd.pem -keyout imapd.pem -days 3650
This creates a new self-signed certificate named imapd.pem, with a 10-year expiration, in the
current directory. You should also create an ipop3d.pem certificate if you're adding support for POPS.
Both of these certificates must then be copied or moved to /System/Library/OpenSSL/certs
(or /usr/local/ssl/certs if you're using a fresh install of OpenSSL).
# cp *.pem /System/Library/OpenSSL/certs
If you haven't created a certificate with openssl before, the following is a sample of the questions you
will be asked during setup. For your mail server, the critical piece of information is the Common Name.
This value should be set to the hostname to which users will connect when using their email clients. In
this example, I've chosen to use the server's IP address; if I were using this certificate with real
clients, they would need to connect to the mail server by that IP address.
Generating a 1024 bit RSA private key
writing new private key to 'imapd.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or
a DN.
There are quite a few fields but you can leave some blank.
For some fields there will be a default value.
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]: US
State or Province Name (full name) [Some-State]: OH
Locality Name (eg, city) []: Dublin
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:
Email Address []:
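The interactive questions can also be skipped entirely by supplying the Distinguished Name on the command line with -subj. The following is a sketch: the DN values, including the mail.example.com common name, are placeholders for your own server's details, and the 2048-bit key size is an explicit choice rather than the default.

```shell
# Generate a private key and self-signed certificate (10-year validity),
# answering the DN questions non-interactively via -subj.
openssl req -new -x509 -nodes -newkey rsa:2048 \
    -keyout imapd.key -out imapd.crt -days 3650 \
    -subj "/C=US/ST=OH/L=Dublin/CN=mail.example.com"

# Combine the key and certificate into the single imapd.pem file the
# server expects.
cat imapd.key imapd.crt > imapd.pem

# Confirm the Common Name that clients must use to connect.
openssl x509 -in imapd.pem -noout -subject
```

The final command prints the certificate subject, so you can verify the Common Name matches the hostname (or address) your users will enter in their mail clients.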
After you've created the certificates and configured the Internet daemon entries, you can restart inetd/
xinetd to start using UW IMAP (kill -HUP <process ID>). The server should be ready for use.
Be sure that you configure your email clients to use SSL or they won't be able to connect (unless you've
also added support for straight imap and pop).
This concludes this brief look at mail servers. Although we've tried to pack as much information into a
single chapter as possible, this is not meant to be a replacement for books dedicated to setting up and
configuring sendmail. Security requirements differ from server to server. Some users may need nothing
more than one or two autoresponders; others may run email servers with hundreds of accounts. Who has
access to your machine and what they're using it for makes a big difference in how you approach MTA
security.
No matter how you choose to configure your server, keep in mind that many security holes come from
permission problems. Sendmail, no matter how carefully configured, becomes a security nightmare
without the proper file and directory permissions. (As a rule of thumb, make sure that no MTA
configuration files are group writable.) If you've been meticulous while setting up your server, you're
already well on your way to a secure mail server.
Chapter 14. Remote Access: Secure Shell, VNC, Timbuktu,
Apple Remote Desktop
What Is SSH?
SSH Vulnerabilities
Vulnerabilities in telnet and rlogin
Activating SSH
Advanced SSH Features
GUI Access Methods
Remote access is a way of using your machine without being at the console. Unlike many other flavors
of Unix, OS X ships with the services that allow remote access—such as telnet, rlogin, and ftp—
disabled, and is more secure in its default state as a result. However, as connectivity becomes more
important to you, you may be interested in being able to remotely access your OS X machine. This
chapter examines how to do this as securely as possible.
What Is SSH?
SSH, also known as secure shell, is a protocol for secure remote login, file transfer, and tunneling. It can
be used as a secure replacement for the more familiar telnet and rlogin protocols without any noticeable
difference to the user. For file transfers, SSH can be used as a secure replacement for rcp and ftp.
Finally, SSH can be used to tunnel traffic over an encrypted channel. In other words, SSH can be used to
transport otherwise insecure traffic more securely. For example, it can be used to encrypt the username
and password data transmitted by ftp.
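As a concrete sketch of that last point (the hostname and account below are placeholders), a user could forward a local port through SSH to a remote POP server and point a mail client at the local end:

```
% ssh -N -L 1100:localhost:110 user@mail.example.com
```

The -L option binds local port 1100 and relays connections through the encrypted channel to port 110 on the remote side; -N requests forwarding only, with no remote command. A mail client configured to fetch from localhost port 1100 then reaches the POP server without exposing its password to sniffers.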
SSH is a more secure protocol than the traditional protocols because it encrypts traffic. The other
protocols transmit data in cleartext, which, as you know, can then be captured by packet sniffers.
There are two versions of the SSH protocol: SSH1 and SSH2. As you might have guessed, SSH1 is the
original version, and SSH2 is a more recent development. SSH2 is the version currently being
developed, although fixes are occasionally released for SSH1 because it is still in use. It is a good
idea to keep both around: when critical bugs are discovered in one, a typical fallback is to recommend
switching it off and using only the other until the bugs are repaired.
The SSH protocol was first developed by Tatu Ylonen in 1995. In that same year he also founded SSH
Communications Security, and currently serves as its president and CEO. SSH Communications
Security offers commercial and free versions of their SSH server and client products. The company
originally sold products through another company called Data Fellows, which is now F-Secure. F-Secure
has marketing rights for SSH and also sells SSH servers and clients. Both companies currently work on
further developing SSH2.
There is also an SSH Open Source project called OpenSSH. This is the SSH distribution that Apple
includes with Mac OS X. OpenSSH shares a common history with OpenBSD, which is the BSD variant
most concerned with security above all else. It is also based upon Tatu Ylonen's early SSH code.
OpenSSH provides support for both SSH1 and SSH2 protocols. There is usually little noticeable
difference between using an SSH server from one of the companies and using the OpenSSH package.
Because the OpenSSH package is the package included with Mac OS X, it is the package on which we
will concentrate our discussion. The package supports these encryption algorithms: DES, 3DES,
Blowfish, CAST-128, Arcfour, AES, RSA, and DSA.
If you are interested in the underlying specifics of the SSH protocol, check the Internet drafts
of the Secure Shell (secsh) Working Group of the IETF. A good one to start with is the draft on the
overall architecture of the SSH protocol.
SSH Vulnerabilities
Although more secure than traditional protocols, SSH is not without vulnerabilities. The various SSH
packages have some vulnerabilities in common and some that are unique to the distribution. We will
look at only some of the vulnerabilities that have affected OpenSSH since the introduction of Mac OS
X. For more details on OpenSSH security, see OpenSSH's security page.
Zlib Compression Library Heap Vulnerability (CVE-2002-0059, CA-2002-07, Bugtraq ID 4267)
A bug in the decompression algorithm of the zlib compression library (version 1.1.3 and
earlier) can cause problems with dynamically allocated memory. An attacker can take advantage
of this vulnerability in programs that link to or use the zlib compression library. Potential
impacts include denial of service, information leakage, or execution of arbitrary code with
permissions of the vulnerable program. So far there are no reports of this vulnerability being
This vulnerability is not an OpenSSH-specific vulnerability, but because OpenSSH can be
affected, it is included here. Because this vulnerability appears in programs that link to or use the
zlib compression library, it affects many programs and many operating systems. The solution is
to get a patch from your vendor or download the latest version of zlib from http://www.zlib.
org/. Then, where possible, recompile any programs that use the zlib compression library.
However, depending on how they use the zlib compression library, you might not be able to fix
the problem yourself.
Mac OS X is reportedly not affected by this exploit. However, if you look for libz on a Mac OS
X system, you will see files that contain 1.1.3 in the name. We assume that Apple has taken care
of this problem without changing any filenames. The libz included with Mac OS X 10.2 is
definitely different than that included with Mac OS X 10.1 and earlier. You can try to update
libz yourself. If you succeed, you can then try to update OpenSSH. Other packages mentioned
in this chapter that can be affected by this vulnerability include TightVNC and VNCThing.
Trojan Horse OpenSSH Distributions (CA-2002-24, BugTraq ID 5374, CAN-1999-0661)
Although a Trojan horse distribution of a package is not a vulnerability inherent in a software
package itself, Trojan horse versions of software do exist, even for security software. From July
30–August 1, 2002, Trojan horse versions of OpenSSH 3.2.2p1, 3.4p1, and 3.4 were distributed
on the OpenBSD FTP server and may have propagated to other FTP servers via the mirroring process.
The Trojan horse versions execute code when the software is compiled. The software connects to
a fixed remote server on 6667/tcp, and opens a shell running as the user who compiled OpenSSH.
Challenge Response Handling Vulnerabilities (CAN-2002-0639, CAN-2002-0640, CA-2002-18,
BugTraq ID 5093)
Versions of OpenSSH between 2.9.3 and 3.3 have two vulnerabilities involving challenge-response
authentication. One is an integer overflow in the number of responses received during
the challenge-response authentication. The other vulnerability is a buffer overflow in the
challenge-response authentication. Either vulnerability can be used for a denial of service attack,
or for the execution of arbitrary code with the privileges of OpenSSH. These vulnerabilities are
fixed in OpenSSH 3.4, and the Mac OS X 10.1 July 2002 Security Update includes OpenSSH
3.4. More information is available in the advisories cited above.
Off-by-one Error Allows Execution of Arbitrary Code with the Privileges of OpenSSH (CVE-2002-0083, BugTraq ID 4241)
Versions of OpenSSH between 2.0 and 3.0.2 contain an off-by-one error in the channel code.
Exploiting this vulnerability can result in the execution of arbitrary code with the privileges of
OpenSSH. This vulnerability is fixed in OpenSSH 3.1, and the Mac OS X 10.1 April 2002
Security Update includes OpenSSH 3.1p1. More information is available in the advisories cited above.
UseLogin Allows the Execution of Arbitrary Code with the Privileges of OpenSSH (CVE-2001-0872, VU# 157447, BugTraq ID 3614)
In some versions of OpenSSH, if the user turns on the UseLogin directive, which uses login
to handle interactive sessions, a user can pass environment variables to login. An intruder can
exploit this vulnerability to execute commands with the privileges of OpenSSH, which usually
has root privileges. This vulnerability is fixed in OpenSSH 3.0.2, and the Mac OS X 10.1.3
update includes OpenSSH 3.0.2p1.
Timing Analysis (CAN-2001-1382, VU# 596827)
Monitoring delays between keystrokes during an interactive SSH session can simplify brute-force
attacks against passwords. During the interactive SSH sessions, user keystrokes and system
responses are transmitted as packets with an echo. However, if a user types a password during the
interactive session, the password is transmitted without an echo. An intruder can detect the lack
of echo and analyze delays between the keystrokes to simplify a brute-force attack against the
password. Exploiting the vulnerability does not necessarily result in a compromised password.
OpenSSH 2.5.0 has fixes for this. Mac OS X's first update includes OpenSSH 2.3.0p1, which has
this vulnerability. However, the Mac OS X Web Sharing Update 1.0 addresses this issue by
including OpenSSH 2.9p2.
SSH CRC32 Attack Detection Code Can Lead to Execution of Arbitrary Code with the Privileges
of the SSH Daemon (CVE-2001-0144, VU# 945216, BugTraq ID 2347)
The SSH1 CRC32 attack detection code contains a remote integer buffer overflow that can allow
the execution of arbitrary code with the privileges of the SSH daemon, usually root. OpenSSH
2.3.0 contains a fix for this vulnerability.
The first Mac OS X update includes OpenSSH 2.3.0p1, which is not vulnerable.
Vulnerabilities in telnet and rlogin
SSH can replace a number of protocols. Included here is some information on vulnerabilities for two of
the most popular protocols it is used to replace: telnet and rlogin.
telnet's single largest vulnerability is in its specification itself, but to demonstrate the additional
insecurity that can be found even in an application that never tried to be secure in the first place, we've
included a listing of some recent telnet vulnerabilities, as well as NCSA Telnet vulnerabilities, going back to
1991. These turn out to be typical Unix application vulnerabilities, of kinds common to most Unix network
applications: vulnerabilities that allow arbitrary execution of code as the process owner, and a
vulnerability that can lead to denial of service.
Although these vulnerabilities are serious, the most serious vulnerability that telnet has is its inherent
cleartext nature. The telnet protocol transmits everything, including usernames and passwords, in
cleartext. The best solution to this vulnerability is to not enable telnet, and not use it as a client. If you
are curious about seeing what telnet traffic looks like, install Etherpeek, IPNetMonitor, or Snort, or use
the built-in tcpdump program, or any other similar package on a machine, do a bit of telnetting around,
and see how much you can see with the sniffers. EtherPeek is available at http://www.wildpackets.
com/products/etherpeek_mac, and tcpdump is documented in its man page (man tcpdump). The telnet
protocol itself is defined in RFC 854.
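As a starting point for this experiment, a tcpdump invocation along the following lines will show telnet traffic in the clear. The interface name en0 is an assumption (it is the built-in Ethernet port on most Macs), and the command must be run as root:

```shell
# Sketch: watch telnet traffic with the built-in tcpdump. The -X flag dumps
# packet payloads in ASCII, so anything typed in a telnet session, including
# usernames and passwords, shows up verbatim. en0 is an assumed interface.
CAPTURE="tcpdump -i en0 -X port 23"
echo "Run as root: $CAPTURE"
```

Run the same capture against an SSH session on port 22 and you will see only encrypted payloads, which is the point of the comparison.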
telnet vulnerabilities include the following:
Buffer Overflow in BSD Derived Versions of telnetd (CVE-2001-0554, CA-2001-21)
A buffer overflow in BSD-derived versions of telnetd can result in either crashing the system
or in the arbitrary execution of code with the privileges of telnetd, usually root.
Mac OS X was affected by this vulnerability, but it was fixed with the release of Mac OS X 10.1.
Denial of Service Vulnerability (BugTraq ID 1955)
The TERMCAP telnet variable in client-server negotiations can cause telnetd to search the file
system for files containing termcap entries. Because this is done before authentication, an
attacker can cause many telnetd processes to be started, thereby exhausting system resources.
This was found to be a vulnerability in many versions of FreeBSD, but patches fixed the
problem. However, there were no reports of the vulnerability having been exploited.
Environment Variable Format String Vulnerability (CVE-2000-0733, IN-2000-09)
Because of the way telnetd handles the _RLD environment variable, an intruder can supply data to telnetd
that is then executed with the privileges of telnetd, usually root.
The exploit was used to add accounts with root privileges; install root kits containing
replacements for various commands, including telnetd; install packet sniffers; and/or install irc
proxy programs.
This vulnerability was discovered in various versions of IRIX. Patches were made available to fix
the problem.
Mac/PC NCSA Telnet Vulnerability (CVE-1999-1090, CA-1991-15)
This vulnerability does not exploit anything inherent to telnet itself. The default configuration for
NCSA Telnet for the PC and Macintosh enables FTP with no password setting. This can result in
an intruder gaining read/write access to a system, including its system files.
The temporary solution to this problem was either to specifically disable FTP or to enable
password protection. NCSA later changed that default. In a later release, NCSA changed how the
configuration for the program was set, from having the settings in an external text file to setting them in a graphical interface. Support for NCSA Telnet was discontinued
with version 2.6, which was released in October 1994.
As with telnet, vulnerabilities involving rlogin include typical ones: buffer overflows from which
root access can be gained.
Although such vulnerabilities are serious, the most serious vulnerability with rlogin, like telnet, is that it
transmits everything, including usernames and passwords, in cleartext. As with telnet, the best solution
to this vulnerability is to not enable rlogind.
Believe it or not, rlogin was an attempt at a secure protocol. It was developed at a time when
Unix machines were owned by companies, and personal Unix machines were rare. Security-conscious system administrators did not want to incur security problems on either their own
machines or someone else's. They never created an account for a "bad" user. Therefore,
someone connecting to a system via rlogin was assumed to be a trustworthy user coming
from another Unix system.
The rlogin RFC contains a cautionary tale
on the security problems that spoofing a trusted host might incur. However, the RFC does
not express any concern about rlogin's transmission of data in cleartext. Obviously, the
author never envisioned a world with so many untrustable Unix hosts and users.
Vulnerabilities involving rlogin include:
Buffer Overflow in System V-Derived login (CVE-2001-0797, CA-2001-34)
A buffer overflow occurs in System V–derived versions of login when they handle
environment variables. The vulnerability can be exploited to execute arbitrary code with the
privileges of login. When login is called by applications that use it for authentication, its
privileges are elevated to those of the application. For telnetd, rlogind, and sshd, if it is
so configured, the privileges are that of root. As a result of the login buffer overflow, an
intruder can gain root access to a system.
Vendors have made patches available to fix the problem.
Buffer Overflow Vulnerability (CVE-1999-0046, CA-1997-06, BugTraq ID 242)
The handling of the TERM variable in some implementations of rlogin can cause a buffer
overflow, which can result in the arbitrary execution of code as root.
This vulnerability was discovered in many versions of Unix. Vendors made patches available to
fix the problem.
Activating SSH
If you want to be able to connect to your machine via the command line, and do it securely, using SSH is
the best security precaution available to you today. If you are just interested in connecting from your OS
X machine to another machine running an SSH server, then you do not need to activate the SSH server on
your machine. However, if you want to be able to access your Macintosh remotely, you need to turn on
the SSH server. To activate the SSH server, check the Remote Login box under the Services tab of the
Sharing pane.
The SSH server starts from /System/Library/StartupItems/SSH/SSH and is also controlled by a setting in /etc/hostconfig.
Basic Configuration
There are two basic configuration files for SSH: /etc/sshd_config and /etc/ssh_config. The
first file is the configuration file for the SSH server itself, sshd. The second file is the configuration file
for the client, ssh. You can also use command-line options at startup for configuring sshd. Command-line options override settings in /etc/sshd_config.
The default configuration file for sshd, /etc/sshd_config, is shown here. Because sshd processes
run for each incoming connection, it is easiest to make changes to your sshd from the console. A brief
explanation for the sections is included.
# $OpenBSD: sshd_config,v 1.56 2002/06/20 23:37:12 markus Exp $
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options change a
# default value.
#Port 22
#Protocol 2,1
#ListenAddress ::
# HostKey for protocol version 1
#HostKey /etc/ssh_host_key
# HostKeys for protocol version 2
#HostKey /etc/ssh_host_rsa_key
#HostKey /etc/ssh_host_dsa_key
# Lifetime and size of ephemeral version 1 server key
#KeyRegenerationInterval 3600
#ServerKeyBits 768
This section of the configuration file sets some general configuration settings. By default, sshd runs on
port 22. The protocol option enables you to specify which SSH protocols sshd should support. The
default is 2,1. By default, sshd listens on all local addresses. However, there can be multiple
ListenAddress statements, where you can specify settings for each interface.
# Logging
#obsoletes QuietMode and FascistLogging
#SyslogFacility AUTH
#LogLevel INFO
This section controls the facility code and level of logging that sshd does.
# Authentication:
#LoginGraceTime 600
#PermitRootLogin yes
#StrictModes yes
#RSAAuthentication yes
#PubkeyAuthentication yes
# rhosts authentication should not be used
#RhostsAuthentication no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
#RhostsRSAAuthentication no
# similar for protocol version 2
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# RhostsRSAAuthentication and HostbasedAuthentication
#IgnoreUserKnownHosts no
# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no
# Change to no to disable s/key passwords
#ChallengeResponseAuthentication yes
# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#AFSTokenPassing no
# Kerberos TGT Passing only works with the AFS kaserver
#KerberosTgtPassing no
# Set this to 'yes' to enable PAM keyboard-interactive authentication
# Warning: enabling this may bypass the setting of 'PasswordAuthentication'
#PAMAuthenticationViaKbdInt yes
This section addresses various authentication issues. By default, PermitRootLogin is set to yes. This
is typically a poor choice in networked Unix installations. It's usually better to require administrative
users to log in as themselves and then su to root, rather than allowing direct login as root. This forces
slightly better tracking of who's doing what. Possible values for this directive are yes, without-password, forced-commands-only, or no. The without-password value disables password
authentication for root. The forced-commands-only option permits root to log in with public
key authentication, but only if the command has been specified on a key in the authorized_keys file
with the command=... option. This option can be useful for doing remote backups on a system where
root is not normally permitted to log in. If the only commands that root is allowed to execute are
commands that can't compromise security (be very careful when making this assessment!), then the
without-password option may be acceptable.
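For example, a forced-commands-only setup for remote backups might pair PermitRootLogin forced-commands-only in /etc/sshd_config with an authorized_keys entry along these lines; the backup command and key shown here are hypothetical:

```
command="/usr/bin/tar cf - /Users",no-port-forwarding,no-pty ssh-rsa AAAA...backup-key... backup@adminhost
```

With this entry, a connection authenticated by that key can run only the specified tar command, no matter what the client asks for; the no-port-forwarding and no-pty options further narrow what the session can do.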
This section also provides some settings for a user's session. By default, ~/.rhosts and ~/.shosts
are ignored for RhostsAuthentication, RhostsRSAAuthentication, or
HostbasedAuthentication. The /etc/hosts.equiv and /etc/shosts.equiv files are
still used. The ~/.rhosts and ~/.shosts files allow users to specify trusted hosts. Typically, the /
etc/hosts.equiv and /etc/shosts.equiv files specify systemwide trusted hosts. In Mac OS
X, it might be necessary to create these maps in the NetInfo database instead.
This section also specifies what authentication methods are allowed. The
RhostsRSAAuthentication and RSAAuthentication are protocol 1 directives. Public key
authentication is allowed by default for protocol 2. By default, PasswordAuthentication is set to
yes. If you only want to permit public key authentication, set this option to no.
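Putting these recommendations together, a tightened /etc/sshd_config might uncomment and change the following directives. This is a sketch of one reasonable policy (root logins refused, passwords refused in favor of public keys, protocol 1 disabled), not the shipped defaults; adjust it to your own needs:

```
Protocol 2
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
```

Before applying a change like this to a remote machine, make sure you have a working public key login first; with PasswordAuthentication off, a missing key locks you out.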
#X11Forwarding no
#X11DisplayOffset 10
#X11UseLocalhost yes
#PrintMotd yes
#PrintLastLog yes
#KeepAlive yes
#UseLogin no
#UsePrivilegeSeparation yes
#Compression yes
In this section you can also set whether to allow X11 forwarding, the printing of the message of the day,
when the user last logged in, and whether the server sends TCP keepalive messages. Having the server
send TCP keepalive messages prevents a connection from hanging if the network goes down or the client crashes.
#MaxStartups 10
# no default banner path
#Banner /some/path
#VerifyReverseMapping no
This section includes options for more general settings for sshd. The MaxStartups option enables
you to specify the maximum number of concurrent unauthenticated connections to sshd. When specified
as a set of three colon-separated numbers, this option specifies a random early drop as start:rate:
full. Random early drop starts when the number of unauthenticated connections
reaches start. When the number of unauthenticated connections reaches full, all the connections are
refused. The sshd refuses connections with a probability of rate/100 if the number of connections is
start. The probability increases linearly to 100% as the number of unauthenticated connections reaches
full. The VerifyReverseMapping directive specifies whether sshd should verify the remote
hostname for an IP address by checking that the resolved hostname maps back to the same IP address.
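To make the start:rate:full arithmetic concrete, here is a small shell sketch (using awk) of the drop probability implied by a hypothetical MaxStartups 10:30:60 setting:

```shell
# Sketch: drop probability for MaxStartups 10:30:60. An unauthenticated
# connection is dropped with probability rate% once start connections
# exist, rising linearly to 100% at full.
start=10; rate=30; full=60
for n in 10 35 60; do
    p=$(awk -v n="$n" -v s="$start" -v r="$rate" -v f="$full" \
        'BEGIN { print r + (100 - r) * (n - s) / (f - s) }')
    echo "unauthenticated=$n drop=${p}%"
done
# prints drop=30%, drop=65%, and drop=100% for the three counts
```

So at 10 pending connections sshd drops 30% of new unauthenticated attempts, at 35 it drops 65%, and at 60 it refuses them all.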
Subsystem sftp /usr/libexec/sftp-server
The default configuration file ends with the preceding line, which activates the sftp server. It is on
by default. In earlier versions of Mac OS X, this option was commented out, and therefore off by default.
If you don't think you will have a need for the sftp functionality, you can turn it off here.
Some additional interesting directives are noted in Table 14.1. Be sure to read the man page for more details.
Table 14.1. Select Additional Options for /etc/sshd_config
AllowGroups  Takes a list of group name patterns, separated by spaces. If specified, login is allowed only for users whose primary group or supplementary group list matches one of the patterns. By default, login is allowed for all groups.

AllowUsers  Takes a list of username patterns, separated by spaces. If specified, login is allowed only for usernames that match one of the patterns. By default, login is allowed for all users.

Ciphers  Specifies the ciphers allowed for protocol version 2. Multiple ciphers must be comma separated.

ClientAliveInterval  Sets a timeout interval in seconds, after which if no data has been received from the client, sshd sends a message through the encrypted channel to request a response from the client. The default is 0, indicating that these messages will not be sent to the client. Protocol version 2 option only.

ClientAliveCountMax  Sets the number of client alive queries that may be sent without sshd receiving any messages back from the client before sshd gets suspicious. If this threshold is reached while client alive messages are being sent, sshd disconnects the client, terminating the session. The default value is 3.

DenyGroups  Takes a list of group name patterns, separated by spaces. Login is disallowed for users whose primary group or supplementary group list matches one of the patterns. By default, login is allowed for all groups.

DenyUsers  Takes a list of username patterns, separated by spaces. Login is disallowed for usernames that match one of the patterns. By default, login is allowed for all users.

MACs  Specifies the available MAC (message authentication code) algorithms. The MAC algorithm is used in protocol version 2 for data integrity protection. Multiple algorithms must be comma separated.

PidFile  Specifies the file that contains the process identifier of sshd.

PubkeyAuthentication  Specifies whether public key authentication is allowed. Argument must be yes or no. Default is yes. Protocol version 2 option only.

UsePrivilegeSeparation  Specifies whether sshd separates privileges by creating an unprivileged child process to deal with incoming network traffic. After successful authentication, another process will be created that has the privilege of the authenticated user. The goal of privilege separation is to prevent privilege escalation by containing any corruption within the unprivileged processes. The default is yes.

VerifyReverseMapping  Specifies whether sshd should try to verify the remote hostname by checking that the resolved hostname for the remote IP address maps back to the very same IP address. The default is no.

X11DisplayOffset  Specifies the first display number available for sshd's X11 forwarding. This prevents sshd from interfering with real X11 servers. The default is 10.

X11Forwarding  Specifies whether X11 forwarding is permitted. The default is no. Note that disabling X11 forwarding does not improve security in any way; users can always install their own forwarders. X11 forwarding is automatically disabled if UseLogin is enabled.

X11UseLocalhost  Specifies whether sshd should bind the X11 forwarding server to the loopback address or to the wildcard address. By default, sshd binds the forwarding server to the loopback address and sets the hostname part of the DISPLAY environment variable to localhost. This prevents remote hosts from connecting to the proxy display. However, some older X11 clients may not function with this configuration. X11UseLocalhost may be set to no to specify that the forwarding server should be bound to the wildcard address. The argument must be yes or no. The default is yes.

XAuthLocation  Specifies the full pathname of the xauth program.
sshd Command-Line Options
By default sshd does not start with any command-line options, but you can edit the startup file to control
which options will be used for your installation. Command-line options override settings in /etc/
sshd_config. If you choose to have sshd start with certain command-line options, edit /System/
Library/StartupItems/SSH/SSH accordingly and restart sshd. Table 14.2 provides a listing of
possible runtime options. See the man page for more details.
Table 14.2. Command-Line Options for sshd
-b <bits>
Specifies the number of bits in the ephemeral protocol version 1
server key (default 768).
-d  Debug mode. The server sends verbose debug output to the system
log, and does not put itself in the background. The server also does
not fork and only processes one connection.
-e  Sends output to standard error instead of /var/log/system.log.
-f <configuration_file> Specifies the name of the configuration file. Default is /etc/
sshd_config. sshd refuses to start if there is no configuration file.
-g <login_grace_time>
Gives the grace time for clients to authenticate themselves.
-h <host_key_file>
Specifies a file from which a host key is read. This option must be
given if sshd is not run as root (because the normal host key
files are normally not readable by anyone but root).
-i  Runs sshd from inetd. sshd is normally not run from inetd
because it needs to generate the server key before it can respond to
the client, and this may take tens of seconds. Clients would have to
wait too long if the key was regenerated every time. However, with
small key sizes (for example, 512) using sshd from inetd may
be feasible.
-k <key_gen_time>
Specifies how often the ephemeral protocol version 1 server key is
regenerated. A value of 0 indicates that the key will never be
regenerated. Default is 3600 seconds or 1 hour.
-o <option>
Can be used to give options in the format used in the configuration
file. Useful for specifying options for which there is no separate
command-line flag.
-p <port>
Specifies the port on which the server listens for connections.
Multiple port options are permitted. Ports specified in the
configuration file are ignored when a command-line port is
specified. Default is 22.
-q  Quiet mode. Sends no output to /var/log/system.log.
-t  Test mode. Checks only the validity of the configuration file and
the sanity of the keys. Useful for updating sshd reliably because
configuration options may change.
-u <len>
Specifies the size of the field in the utmp structure that holds the
remote hostname. If the resolved hostname is longer than <len>,
the dotted decimal value is used instead.
-D  sshd does not detach and does not become a daemon. Allows for
easy monitoring of sshd.
-4  Forces sshd to use IPv4 addresses only.
-6  Forces sshd to use IPv6 addresses only.
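The -t flag is worth building into your routine: test a changed configuration before restarting the server, so a typo can't lock you out of a remote machine. A sketch (sshd may refuse the test if it cannot read the host keys, which normally requires root):

```shell
# Sketch: check /etc/sshd_config for errors before restarting sshd.
if command -v sshd >/dev/null 2>&1; then
    if sshd -t -f /etc/sshd_config 2>/dev/null; then
        RESULT="sshd_config OK"
    else
        RESULT="sshd_config has errors (or host keys are unreadable)"
    fi
else
    RESULT="sshd not installed on this machine"
fi
echo "$RESULT"
# After a successful test, restart the server, e.g. on Mac OS X:
#   sudo /System/Library/StartupItems/SSH/SSH restart
```

Only restart sshd once the test passes; an sshd that fails to start leaves no way back in over the network.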
/etc/ssh_config, the default systemwide configuration file for the client, ssh, is shown following.
The configuration file is divided into host sections. Because parameters are determined on a first-match-wins basis, more host-specific values should be given at the beginning of the file, with general values at
the end of the file. Users can also configure the ssh client to suit their needs by creating a ~/.ssh/
config file. Specifying Host as * sets parameters for all hosts.
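For instance, a per-user ~/.ssh/config might look like the following. The hostnames and username are made up; note that the specific Host block comes before the catch-all * block because of the first-match-wins rule:

```
Host work
    HostName work.example.com
    User jdoe
    Protocol 2

Host *
    ForwardX11 no
    StrictHostKeyChecking ask
```

With this file in place, typing ssh work connects to work.example.com as jdoe over protocol 2, while the * block supplies defaults for every other host.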
# $OpenBSD: ssh_config,v 1.15 2002/06/20 20:03:34 stevesk Exp $
# This is the ssh client system-wide configuration file. See
# ssh_config(5) for more information. This file provides defaults for
# users, and the values can be changed in per-user configuration files
# or on the command line.
# Configuration data is parsed as follows:
# 1. command line options
# 2. user-specific file
# 3. system-wide file
# Any configuration value is only changed the first time it is set.
# Thus, host-specific definitions should be at the beginning of the
# configuration file, and defaults at the end.
# Site-wide defaults for various options
# Host *
ForwardAgent no
ForwardX11 no
RhostsAuthentication no
RhostsRSAAuthentication no
RSAAuthentication yes
PasswordAuthentication yes
BatchMode no
CheckHostIP yes
StrictHostKeyChecking ask
IdentityFile ~/.ssh/identity
IdentityFile ~/.ssh/id_rsa
IdentityFile ~/.ssh/id_dsa
Port 22
Protocol 2,1
Cipher 3des
Ciphers aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour,
EscapeChar ~
The default /etc/ssh_config file lists some options that you may want to set. Table 14.3 includes a
description of some of the options shown in this file, along with other selected options. For more details,
be sure to read the man pages for ssh and ssh_config.
Options that you can set in a systemwide /etc/ssh_config include LocalForward
and RemoteForward. We discourage setting up any tunnels in a systemwide configuration.
If an intruder does gain access to your machine, your systemwide forwarding settings make it
that much easier for an intruder to access other machines.
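A safer pattern is to set up tunnels per invocation (or in a per-user file) rather than systemwide. A sketch of a one-off local forward; the port numbers and hostname are assumptions:

```shell
# Sketch: forward local port 11100 to port 110 (POP3) on the SSH server's
# side, so a mail client pointed at localhost:11100 talks to the POP
# server over the encrypted channel. Hostname and username are made up.
TUNNEL="ssh -L 11100:localhost:110 jdoe@mail.example.com"
echo "Example tunnel command: $TUNNEL"
```

Because the forwarding lives only in that one command line, it disappears when the session ends and leaves nothing behind for an intruder to reuse.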
Table 14.3. Select Options for /etc/ssh_config or ~/.ssh/config
Host  Restricts the following declarations (up to the next Host keyword) to be for only those hosts that match one of the patterns given after the keyword. The host is the hostname argument given on the command line (that is, the name is not converted to a canonicalized host name before matching).

BatchMode  If set to yes, disables passphrase/password querying. Useful in scripts and other batch jobs where no user is present to supply the password. The argument must be yes or no. Default is no.

BindAddress  Specifies the interface from which to transmit on machines with multiple interfaces or aliased addresses. Option does not work if UsePrivilegedPort is set to yes.

CheckHostIP  If set to yes, ssh also checks the host IP address in the known_hosts file. This allows ssh to detect whether a host key changed because of DNS spoofing. If set to no, the check is not executed. Default is yes.

Cipher  Specifies the cipher to use for encrypting the session in protocol version 1. blowfish, 3des, and des are supported, although des is supported in the ssh client only for interoperability with legacy protocol 1 implementations that do not support the 3des cipher. Its use is strongly discouraged because of cryptographic weaknesses. Default is 3des.

Ciphers  Specifies the ciphers allowed for protocol version 2 in order of preference. Multiple ciphers must be comma separated.

ClearAllForwardings  Specifies that all local, remote, and dynamic port forwardings specified in the configuration files or on the command line be cleared. Primarily useful when used from the ssh command line to clear port forwardings set in configuration files, and is automatically set by scp and sftp. Argument must be yes or no. Default is no.

ForwardX11  Specifies whether X11 connections will be automatically redirected over the secure channel and DISPLAY set on the remote machine. Argument must be yes or no. Default is no.

GlobalKnownHostsFile  Specifies a file to use for the global host key database instead of /etc/ssh_known_hosts.

HostKeyAlgorithms  Specifies, in order of preference, the protocol version 2 host key algorithms that the client should use.

HostKeyAlias  Specifies an alias that should be used instead of the real hostname when looking up or saving the host key in the host key database files. Useful for tunneling ssh connections or for multiple servers running on a single host.

HostName  Specifies the real hostname to log in to. This can be used to specify nicknames or abbreviations for hosts. Default is the name given on the command line.

IdentityFile  Specifies a file from which the user's RSA or DSA authentication identity is read. Defaults are $HOME/.ssh/identity for protocol version 1, and $HOME/.ssh/id_rsa and $HOME/.ssh/id_dsa for protocol version 2.

LocalForward  Specifies that a TCP/IP port on the local machine be forwarded over the secure channel to the specified host and port from the remote machine. Only the superuser can forward privileged ports.

MACs  Specifies the MAC (message authentication code) algorithms in order of preference. The MAC algorithm is used in protocol version 2 for data integrity protection. Multiple algorithms must be comma separated.

NumberOfPasswordPrompts  Specifies the number of password prompts before giving up. Argument must be an integer. Default is 3.

Port  Specifies the port number to connect to on the remote host. Default is 22.

PreferredAuthentications  Specifies the order in which the client should try protocol 2 authentication methods.

Protocol  Specifies the protocol versions ssh should support, in order of preference. The possible values are 1 and 2. The default is 2,1. In other words, ssh tries version 2 and falls back to version 1 if version 2 is not available.

PubkeyAuthentication  Specifies whether to try public key authentication. Argument must be yes or no. Default is yes. Protocol version 2 option only.

RemoteForward  Specifies that a TCP/IP port on the remote machine be forwarded over the secure channel to the specified host and port from the local machine. Only the superuser can forward privileged ports.

StrictHostKeyChecking  If set to yes, ssh never automatically adds host keys to the $HOME/.ssh/known_hosts file, and refuses to connect to hosts whose host key has changed. This provides maximum protection against Trojan horse attacks, but can be annoying when the /etc/ssh_known_hosts file is poorly maintained, or connections to new hosts are frequently made. Forces the user to manually add all new hosts. If set to no, ssh automatically adds new host keys to the user known hosts files. If set to ask, new host keys are added to the user known hosts files only after the user has confirmed that that is what he really wants to do, and ssh refuses to connect to hosts whose host key has changed. The host keys of known hosts are verified automatically in all cases. Argument must be yes, no, or ask. Default is ask.

UsePrivilegedPort  Specifies whether to use a privileged port for outgoing connections. Argument must be yes or no. Default is no.

User  Specifies the user to log in as. This can be useful when a different user name is used on different machines. This saves the trouble of having to remember to give the username on the command line.

UserKnownHostsFile  Specifies a file to use for the user host key database instead of $HOME/.ssh/known_hosts.
SSH provides for secure encrypted traffic transmission across a network. Most SSH software, including
that provided by Apple, includes both the encrypted transmission facility and rudimentary tools for
making use of that functionality. These tools include the ability to use the encryption to provide secure
terminal services and file transfer support. The user can add other functionality as needed by making use
of just the secure transport portion of the software to encrypt the traffic between otherwise insecure
external software packages.
A common use for the SSH package is for making remote terminal connections. Although you can set a
number of options to ssh in a user configuration file, you will probably find yourself using ssh with
command-line options initially. This is actually the easiest way to start using ssh. After you have been
using ssh with command-line options for a while, you will get a feel for what options, if any, you may
want to specify in either ~/.ssh/config or /etc/ssh_config.
To use the ssh client, you can run either ssh or slogin. If you are accustomed to using rlogin on a
system, then slogin will be the natural choice for you. Otherwise, you probably won't have any preference.
The most commonly used syntax for ssh is
ssh -l <username> <remote_host>
or
ssh <username>@<remote_host>
If you are logging in to a remote host for the first time, you will be asked if you want to accept the host's key:
[localhost:~] joray% slogin -l jray
The authenticity of host ' ('
can't be established.
RSA key fingerprint is b3:60:d8:e3:1d:59:bc:2c:2d:9e:c3:83:9a:84:c3:a1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '
edu,' (RSA)
to the list of known hosts.
[email protected]'s password:
Welcome to Darwin!
[primal:~] jray%
Table 14.4 provides a listing of select command-line options to ssh. Be sure to read the man page for
more details.
Table 14.4. Select Command-Line Options to ssh
-b <bind_address>
Specifies the interface to transmit from on machines with
multiple interfaces or aliased addresses.
-c blowfish|3des|des
Selects the cipher to use for encrypting the session. Default is
3des. des is supported only for compatibility with legacy
protocol 1 servers.
-c <cipher_spec>
Additionally, for SSH2, a comma-separated list of ciphers.
-f  Requests ssh to go to background just before command
execution. Useful if ssh is going to ask for passwords or
passphrases, but the user wants it in the background. Implies -n.
-g  Allows remote hosts to connect to local forwarded ports.
-i <identity_file>
Selects a file from which the identity (private key) for RSA or
DSA authentication is read. Defaults are $HOME/.ssh/
identity for protocol version 1, and $HOME/.ssh/id_rsa
and $HOME/.ssh/id_dsa for protocol version 2. Identity
files may also be specified on a per-host basis in the
configuration file.
-l <login_name>
Specifies the user as which to log in on the remote machine.
This may also be specified on a per-host basis in the
configuration file.
-m <mac_spec>
Specifies a comma-separated list of MAC (message
authentication code) algorithms in order of preference for
protocol version 2.
-n  Redirects stdin from /dev/null (actually, prevents
reading from stdin). This must be used when ssh is run in
the background.
-N  Does not execute a remote command. This is useful for just
forwarding ports (SSH2 only).
-o <option>
Can be used to give options in the format used in the
configuration file. Useful for specifying options for which
there is no separate command-line flag.
-p <port>
Specifies the port to connect to on the remote host. This can
be specified on a per-host basis in the configuration file.
-P
Uses a nonprivileged port for outgoing connections. This can
be used if a firewall does not permit connections from
privileged ports.
-v
Verbose mode. Causes ssh to print debugging messages
about its progress.
-x
Disables X11 forwarding.
-X
Enables X11 forwarding. This can also be specified on a per-host basis in a configuration file.
-F <configfile>
Specifies an alternative per-user configuration file. If a
configuration file is given on the command line, the
systemwide configuration file (/etc/ssh_config) is
ignored. Default per-user configuration file is $HOME/.ssh/config.
-L <port>:<host>:<hostport> Specifies that the given port on the local (client) host is to be
forwarded to the given host and port on the remote side. Port
forwardings can also be specified in the configuration file.
Only root can forward privileged ports.
-R <port>:<host>:<hostport> Specifies that the given port on the remote (server) host is to
be forwarded to the given host and port on the local side. Port
forwardings can also be specified in the configuration file.
Privileged ports can be forwarded only when you are logging
in as root on the remote machine.
-1
Forces SSH1 protocol only.
-2
Forces SSH2 protocol only.
-4
Forces ssh to use IPv4 addresses only.
-6
Forces ssh to use IPv6 addresses only.
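To see how several of these options combine, here is a hypothetical invocation (the host name and username are made up). The -G flag used to check it appears only in OpenSSH releases newer than the version described here; it makes ssh print the settings it would use and exit rather than connect:

```shell
# Hypothetical: log in as jray on port 2222, with compression, IPv4 only.
# -G (newer OpenSSH) resolves and prints the options instead of connecting.
ssh -G -4 -C -p 2222 -l jray primal.example.edu | grep -E '^(user|port) '
```

The filtered output shows the user and port that ssh resolved from the -l and -p flags, without any network traffic.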
From other Unix machines with SSH installed, you should be able to use ssh or slogin to
connect to your Mac OS X machine remotely. But you don't need a Unix machine to connect to your Mac
OS X machine. Windows and traditional Mac OS clients are also available. A brief description of each
client's features is included. At this point, not all the features will necessarily be meaningful, but they will
be by the end of the chapter.
A number of Windows SSH clients are available. Among the available clients are
Tera Term Pro with TTSSH. Tera Term is a free terminal emulation program available at http://hp. A free extension DLL called TTSSH is available
for Tera Term at With the extension, Tera Term can be
used as an SSH client. It supports only the SSH1 protocol. Additionally, it can handle public key
authentication, tunneling, and X11 forwarding.
PuTTY. PuTTY is a free telnet and SSH client available at
~sgtatham/putty/. PuTTY supports both the SSH1 and SSH2 protocols, with SSH1 being the
default protocol. It also supports public key authentication, tunneling, and X11 forwarding.
Additionally, it includes scp (PSCP) and sftp (PSFTP) clients.
F-Secure SSH. F-Secure SSH is a commercial SSH client. It is available for Windows 95/98/ME/
NT 4.0/2000/XP. It supports both the SSH1 and SSH2 protocols. It also supports public key
authentication, tunneling, and X11 forwarding. Additionally, it includes a built-in sftp client
and command-line ssh tools. For more product information, see
SSH Secure Shell. SSH Communications Security has both a commercial and free SSH client for
Windows 95/98/ME/NT 4.0/2000/XP. It supports both the SSH1 and SSH2 protocols. It also
supports public key authentication, tunneling, and X11 forwarding. Additionally, it includes a built-in sftp client. For more product information, see To download the freely
available client, go to and select the latest Windows client.
SecureCRT. SecureCRT is a commercial SSH client available from
products/securecrt/. It supports both the SSH1 and SSH2 protocols. It also supports public key
authentication, tunneling, X11 forwarding, and sftp.
Macintosh 8/9
A few SSH clients are available for the traditional Mac OS. The clients that work in the traditional Mac
OS probably also work in Mac OS X's Classic mode. As a matter of fact, to tunnel connections in Classic
mode, you need one of these clients with tunneling capabilities. Available clients include
NiftyTelnet 1.1 SSH r3. NiftyTelnet 1.1 SSH r3 is a free telnet and SSH client available at http:// It supports only the SSH1 protocol. It also supports
public key authentication and has a built-in scp function.
MacSSH. MacSSH is a free client for SSH, telnet, and various other protocols, available at http://www. For SSH, it supports only the SSH2 protocol. Additionally, it supports public key
authentication, tunneling, and X11 forwarding.
MacSFTP. MacSFTP is a shareware sftp client available at You can
download a 15-day trial. If you decide you like it, the shareware cost is $25. It has an interface
similar to Fetch's.
F-Secure SSH. F-Secure SSH is a commercial SSH client. It supports both the SSH1 and SSH2
protocols. Additionally, it supports public key authentication, tunneling, and X11 forwarding. For
more product information, see
Mac OS X
Mac OS X, of course, has the command-line ssh tools available. However, if you are new to the
command line, you may also be wondering whether any SSH GUI tools are available. You should also
check whether your favorite FTP client includes or will include SFTP support. Available clients include
JellyfiSSH. JellyfiSSH is a freeware product available from
grepsoft/. It provides a GUI login interface and bookmarking capabilities. After you enter your
login information, it brings up a terminal window to the remote host. If you are comfortable with
using slogin or ssh to log in to a remote host, this application may not be useful to you. If you
like the basic GUI login interface of the clients for traditional Mac OS, this application may be
useful to you. If you want to learn how to use the ssh command-line client, this application might
be useful for you because you can see how the command was issued.
Fugu. Fugu is a freeware product available from It is an
ssh tunneling/scp/sftp client.
MacSFTP. MacSFTP also works in OS X. It is a shareware sftp client available at http://www. You can download a 15-day trial. If you decide you like it, the shareware cost is $25.
It has an interface similar to Fetch's.
RBrowser. RBrowser is an application available from It provides a
finder interface to ssh, scp, and sftp, and also supports tunneling. If you do not like the
command line at all, this may be the application for you. The sftp feature works by dragging
files from one "finder" to the other. This is a shareware product with two levels of licensing—a
basic level that covers ftp and sftp features and a professional level that includes ftp, sftp,
Unix, ssh, and ssh tunneling. Demo licenses are also available.
F-Secure SSH. As of this writing, F-Secure is working on a client for OS X. However, it is not yet
available. Because it is supposed to be similar to the F-Secure SSH 2.4 client, it is expected to be
able to do tunneling. Check for more information.
Advanced SSH Features
In addition to allowing you to log in remotely to another machine running an SSH server, SSH can be used for tunneling
connections from one machine to another, securely transferring files, and public key authentication. Table 14.5 lists the
primary tools in the SSH suite and their function.
Table 14.5. Primary Utilities in OpenSSH
sshd
SSH server.
ssh
SSH client.
sftp
An interactive secure file transfer program.
scp
A copy program for copying files between hosts.
sftp-server SFTP server subsystem started automatically by sshd.
ssh-keygen
A utility that generates keys for public key authentication.
ssh-agent
Authentication agent that manages keys for public key authentication users so that they don't have to
enter a passphrase when logging in to another machine. It starts at the beginning of an X11 session or
login session, and windows and programs start as its children.
ssh-add
Utility that adds keys to the ssh-agent.
Tunnel Encryption
As you saw in Chapter 12, "FTP Security," one of the features of the SSH protocol is its ability to tunnel connections. In
other words, you can set up an encrypted channel between machines and use the encrypted channel to send an otherwise
insecure protocol. In Chapter 12, you saw in great detail how to restrict access to the FTP server and to tunnel a connection
to the server. Services that you might also be interested in tunneling include POP, IMAP, and X11. Recall that X11
forwarding is off by default in the /etc/sshd_config file.
To summarize what was discussed in that chapter, first restrict access to the server that is to be tunneled. This is done by
adding a line to the /etc/hosts.allow file that restricts access to the machine with the server. For example, as you
saw earlier for an FTP server, this can be done as
in.ftpd: <machine-IP> localhost: allow
in.ftpd: deny
if you are only using the /etc/hosts.allow file, or as
in.ftpd: <machine-IP> localhost
if you are using the traditional /etc/hosts.allow and /etc/hosts.deny files.
Alternatively, if you are using the access restriction capabilities of xinetd, you could add a line to the /etc/xinetd.
d/ftp file like
only_from = <machine-IP> localhost
Next, use an SSH client to set up a tunnel. For Windows, traditional Mac OS, or Mac OS X Classic mode, this is a matter
of using an SSH client to enter the local port to use on your machine, the name of the server, and the port number to use on
the server. From a Unix machine, such as the native Mac OS X side of your Macintosh, use ssh with a port forwarding
option. The following example sets up a tunnel without requiring that you have a terminal connection:
% slogin -l sage -N -L 2121:
[email protected]'s password:
In the preceding statement, a tunnel is set up between the local machine and the remote host for user sage.
The remote host can be specified as a hostname or an IP address. The tunnel is created between the local host at port 2121
and the remote host at port 21. The –N option enables you to set up a tunnel without initiating a terminal session. Please
note that only root can forward ports under 1024. So if you wanted to use a port number under 1024 on the local
machine, you would have to have root privileges.
After the port forwarding is set up in the SSH client, then use the regular client for the service, providing it with the
appropriate local port to use and localhost or as the host.
An FTP session over this encrypted channel would look like this:
% ftp localhost 2121
Connected to
220 FTP server (lukemftpd 1.1) ready.
Name (localhost:joray): sage
331 Password required for sage.
230- Welcome to Darwin!
230 User sage logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd printing
250 CWD command successful.
ftp> passive
Passive mode on.
ftp> get test-1.tiff
local: test-1.tiff remote: test-1.tiff
227 Entering Passive Mode (192,168,1,17,192,9)
150 Opening BINARY mode data connection for 'test-1.tiff' (431888 bytes).
226 Transfer complete.
431888 bytes received in 0.516 seconds (837284 bytes/s)
ftp> quit
221- Data traffic for this session was 431888 bytes in 1 file.
Total traffic for this session was 432551 bytes in 1 transfer.
221 Thank you for using the FTP service on
In short, the basic procedure to follow for tunneling a given protocol is as follows:
1. Optionally restrict access by setting up the server to accept connections from only the server machine. Depending
on your circumstances, you may have to have this restriction anyway.
2. Set up an SSH client with port and server information.
3. Set up the client for the service being tunneled with the local port number to use and localhost or
as the host.
4. Use the client for the tunneled service as you ordinarily would.
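The steps above can be sketched from the Unix side as follows. The host name and account are hypothetical, and the -G flag (which makes ssh print its resolved options and exit, rather than connect) exists only in OpenSSH releases newer than the version covered here; it is used so that the forwarding request can be checked without a live server:

```shell
# Step 2: request a tunnel from local port 2121 to port 21 on the server.
# With -G, ssh just shows how it parsed the request instead of connecting.
ssh -G -N -L 2121:ftp.example.com:21 sage@ftp.example.com | grep -i 'localforward'

# Steps 3-4 (these require the tunnel to actually be up):
#   ftp localhost 2121
```

In real use you would drop the -G, leave the ssh process running, and point the ftp client at localhost port 2121.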
A notable exception to this basic procedure is the procedure for tunneling X11 connections. The SSH server on the remote
machine whose display you want to have displayed to your local machine should have X11 forwarding enabled. From your
local machine, simply connect with the ssh or slogin command with the –X option, which tunnels an X11 connection.
Because SSH takes care of handling everything else to make it work, you don't have to worry about details such as setting
the DISPLAY environment variable. However, if this doesn't quite work for you, it may also be necessary for the remote
host to add a line to /etc/hosts.allow to enable your IP address to have access to the sshdfwd-X11 service.
Figure 14.1 shows remotely running X11 applications being displayed to an OS X machine via SSH tunneling.
Figure 14.1. Remotely running X11 applications are being displayed on an OS X machine via SSH tunneling.
Secure File Transfer
As you also saw in Chapter 12, the OpenSSH package also includes two utilities for transferring files: scp (secure copy)
and sftp (secure FTP). Table 14.6 provides documentation on the options available to scp. Table 14.7 provides
documentation on options available to the command line in sftp as well as commands available interactively.
The basic syntax for scp is
scp <from> <to>
The <from> or <to> can be specified as a remote host and file, expanding the basic syntax to:
scp [[<username>@]<remote_host>:]<pathtofile> [[<username>@]<remote_host>:]<pathtofile>
The <remote_host> can be a name or IP address. Here is sample output from copying a file on the remote host,
~sage/terminal/term-display-1.tiff, to the current directory on the local machine:
% scp [email protected]:terminal/term-display-1.tiff ./
[email protected]'s password:
term-display-1.tiff 100% |********************************|
900 KB
While the transfer occurs, the percentage and amount transferred increases over time. You cannot use scp to copy files
from your OS X machine unless you have activated the SSH server.
Table 14.6. Options for scp
-c <cipher>
Selects the cipher to use for encrypting the data transfer. Option is passed directly to ssh.
-i <identity_file> Selects the file from which the identity (private key) for RSA authentication is read. Option
is passed directly to ssh.
-p
Preserves modification times, access times, and modes from the original file.
-r
Recursively copies entire directories.
-v
Verbose mode. Causes scp and ssh to print debugging messages about their progress.
-B
Selects batch mode, which prevents passwords or passphrases from being requested.
-q
Disables the progress meter.
-C
Enables compression. Passes the -C flag to ssh to enable compression.
-F <ssh_config>
Specifies an alternative per-user configuration file for ssh. Option is passed directly to ssh.
-P <port>
Specifies the port to connect to on the remote host.
-S <program>
Specifies <program> as the program to use for the encrypted connection. The program
must understand ssh options.
-o <ssh_option>
Passes options to ssh in the format used in the ssh configuration file. This is useful for
specifying options for which there is no separate scp command-line flag. For example,
forcing the use of protocol version 1 is specified using scp -o Protocol=1.
-4
Forces scp to use IPv4 addresses only.
-6
Forces scp to use IPv6 addresses only.
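As a quick, server-free illustration of scp's option handling: when neither argument names a remote host, scp simply performs a local copy, which is enough to see -p and -q at work. The file names here are arbitrary:

```shell
WORK=$(mktemp -d)                         # scratch directory
echo 'sample image data' > "$WORK/term-display-1.tiff"
# -p preserves times and modes; -q suppresses the progress meter
scp -p -q "$WORK/term-display-1.tiff" "$WORK/copy.tiff"
cmp -s "$WORK/term-display-1.tiff" "$WORK/copy.tiff" && echo 'identical copy'
```

Copying to or from a real remote host works the same way, with the `user@host:` prefix added to one of the arguments.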
The sftp command can also be used to securely transfer files. Its basic syntax, shown following, initiates an interactive
session that works much like regular ftp:
sftp [<username>@]<remote_host>
Here is sample output from an interactive sftp session:
% sftp [email protected]
Connecting to
[email protected]'s password:
sftp> get terminal/term-display-2.tiff
Fetching /Users/sage/terminal/term-display-2.tiff to term-display-2.tiff
sftp> quit
In this example, sftp is used to transfer a file on the remote host, ~sage/terminal/term-display-2.tiff, to
the current directory on the local machine. As with scp, you cannot use sftp to transfer files from your OS X machine
unless you have activated the SSH server.
Table 14.7. Documentation for sftp
-b <batchfile>
Batch mode. Reads a series of commands from an input
batchfile instead of stdin.
-o <ssh_option>
Passes options to ssh in the format used in the ssh
configuration file. Useful for specifying options for
which there is no separate sftp command-line flag. For
example, to specify an alternate port, use sftp -oPort=24.
-s <subsystem> | <sftp_server>
Specifies the SSH2 subsystem or the path for an sftp
server on the remote host. A path is useful for using
sftp over protocol version 1, or when the remote
sshd does not have an sftp subsystem configured.
-v
Raises logging level. Option is also passed to ssh.
-B <buffer_size>
Specifies the size of the buffer that sftp uses when
transferring files. Larger buffers require fewer round
trips at the cost of higher memory consumption. Default
is 32768 bytes.
-C
Enables compression (via ssh's -C flag).
-F <ssh_config>
Specifies an alternative per-user configuration file for
ssh. Option is passed directly to ssh.
-P <sftp_server path>
Connects directly to a local sftp-server (rather than
via ssh). May be useful in debugging the client and
server.
-R <num_requests>
Specifies how many requests may be outstanding at any
one time. Increasing this may slightly improve file
transfer speed but increases memory usage. Default is
16 outstanding requests.
-S <program>
Specifies <program> as the program to use for the
encrypted connection. The program must understand
ssh options.
-1
Specifies the use of protocol version 1.
Interactive Commands
bye
Quits sftp.
cd <path>
Changes remote directory to <path>.
lcd <path>
Changes local directory to <path>.
chgrp <grp> <path>
Changes group of file <path> to <grp>.
chmod <mode> <path>
Changes permissions of file <path> to <mode>.
chown <owner> <path>
Changes owner of file <path> to <owner>.
exit
Quits sftp.
get [<flags>] <remote-path> [<local-path>] Retrieves the <remote-path> and stores it on the
local machine. If the local path name is not specified, it
is given the same name it has on the remote machine. If
the -P flag is specified, then the file's full permission
and access time are copied, too.
help
Displays help text.
lls [<ls-options> [<path>]]
Displays local directory listing of either <path> or
current directory if <path> is not specified.
lmkdir <path>
Creates local directory specified by <path>.
ln <oldpath> <newpath>
Creates a symbolic link from <oldpath> to
<newpath>.
lpwd
Prints local working directory.
ls [<path>]
Displays remote directory listing of either <path> or
current directory if <path> is not specified.
lumask <umask>
Sets local umask to <umask>.
mkdir <path>
Creates remote directory specified by <path>.
put [<flags>] <local-path> [<remote-path>] Uploads <local-path> and stores it on the remote
machine. If the remote pathname is not specified, it is
given the same name it has on the local machine. If the -P flag is specified, then the file's full permission and
access time are copied, too.
pwd
Displays remote working directory.
quit
Quits sftp.
rename <oldpath> <newpath>
Renames remote file from <oldpath> to
<newpath>.
rmdir <path>
Removes remote directory specified by <path>.
rm <path>
Deletes remote file specified by <path>.
symlink <oldpath> <newpath>
Creates a symbolic link from <oldpath> to
<newpath>.
! <command>
Executes command in local shell.
!
Escapes to local shell.
?
Synonym for help.
Public Key Authentication
In addition to the standard method of user authentication—a username and password—SSH provides another method:
public key authentication. With the traditional authentication method, the remote host stores a username and password pair
for a user. With public key authentication, the user creates a key pair on a given host. The key pair consists of a public key
and a private key that is protected with a passphrase. Then the user transfers the public key to the remote host to which she
would like to connect. So the remote host stores a set of public keys for machines on which you have generated a key pair
and transferred a copy of your public key. Furthermore, you protect your keys with a passphrase, rather than a password.
The procedure for enabling public key authentication is similar for both SSH1 and SSH2. Table 14.8 provides
documentation on options for ssh-keygen, the utility that generates key pairs. Table 14.9 provides descriptions for files
used in public key authentication. To enable public key authentication, do the following:
1. Generate a key pair on the host from which you want to access another host. (For example, call the host from which
you want to connect the local host, and the host to which you want to connect the remote host.) Use a good
passphrase to protect your key. It is recommended that a good passphrase be 10–30 characters long, and not simple
sentences or otherwise easily guessable. Include a mix of uppercase, lowercase, numeric, and nonalphanumeric characters.
2. Transfer the public key of the key pair generated on the local host to the remote host. The public key is public, so
you can use any method necessary to transfer it to the remote host.
3. Add the public key you just transferred to the file on the remote host that stores public keys.
4. Test logging into the remote host. You should now be prompted for the passphrase that you used to generate your
key pair, because the private key of the local host is paired with its public key that was transferred to the remote host.
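The first three steps can be sketched as shell commands. A scratch directory stands in here for ~/.ssh and for the remote host's file system, and the passphrase is only an example; in real use, step 2 would copy the public key to the other machine with scp or similar:

```shell
WORK=$(mktemp -d)                                  # stand-in for ~/.ssh
# Step 1: generate a protocol 2 RSA key pair, protected by a passphrase
ssh-keygen -t rsa -N 'example passphrase here' -f "$WORK/id_rsa" -q
# Step 2: transfer id_rsa.pub to the remote host (e.g., with scp) -- omitted here
# Step 3: append the public key to the remote authorized_keys file
cat "$WORK/id_rsa.pub" >> "$WORK/authorized_keys"
```

After step 3, logging in from the machine holding the private key should prompt for the passphrase instead of the account password.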
Table 14.8. Options for ssh-keygen
-b <bits>
Specifies the number of bits in the key to create. Minimum is 512 bits. Default is 1024
bits.
-c
Requests the changing of the comment in the private and public key files. This operation is
supported for only RSA1 keys.
-e
Reads a private or public OpenSSH key file and prints the key in a SECSH Public Key File
Format to stdout. This option exports keys that can be used by several commercial SSH
implementations.
-f <filename>
Specifies the filename of the key file.
-i
Reads an unencrypted private (or public) key file in SSH2-compatible format and prints an
OpenSSH-compatible private (or public) key to stdout. ssh-keygen also reads the SECSH Public Key File Format. This option imports keys from several commercial SSH
implementations.
-l
Shows fingerprint of specified public key file. Private RSA1 keys are also supported. For
RSA and DSA keys, ssh-keygen tries to find the matching public key file and prints its
fingerprint.
-p
Requests that the passphrase of a private key file be changed rather than that a new private
key be created.
-q
Quiet mode. Silences ssh-keygen.
-y
Reads a private OpenSSH format file and prints an OpenSSH public key to stdout.
-t <type>
Specifies the type of the key to create. The possible values are rsa1 for protocol version 1
and rsa or dsa for protocol version 2.
-B
Shows the bubblebabble digest of the specified private or public key file.
-C <comment>
Provides the new comment.
-D <reader>
Downloads the RSA public key stored in the smartcard in reader.
-N <new_passphrase> Provides the new passphrase, <new_passphrase>.
-P <passphrase>
Provides the (old) passphrase, <passphrase>.
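A few of these options can be combined non-interactively, which makes it easy to see them at work. This sketch generates a key, changes its passphrase with -p (supplying the old and new ones via -P and -N), and prints its fingerprint with -l; the passphrases and file names are arbitrary:

```shell
WORK=$(mktemp -d)
ssh-keygen -t rsa -N 'first passphrase' -f "$WORK/key" -q
# -p changes the passphrase; -P gives the old one, -N the new one
ssh-keygen -p -P 'first passphrase' -N 'second passphrase' -f "$WORK/key" > /dev/null
# -l prints the key's fingerprint
ssh-keygen -l -f "$WORK/key.pub"
```

The fingerprint printed by -l is what you compare against the one shown the first time you connect to a host.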
Table 14.9. Files Used in Public Key Authentication
$HOME/.ssh/identity
Contains the user's protocol version 1 RSA authentication identity. This file should
not be readable by anyone but the user. It is possible to specify a passphrase when
generating the key; that passphrase will be used to encrypt the private part of this file
with 3DES encryption. File is not automatically accessed by ssh-keygen, but is
offered as the default file for the private key. ssh reads this file when a login
attempt is made.
$HOME/.ssh/identity.pub Contains the protocol version 1 RSA public key for authentication. The contents of
this file should be added to $HOME/.ssh/authorized_keys on all machines
where the user wishes to log in using RSA authentication. There is no need to keep
the contents of this file secret.
$HOME/.ssh/id_dsa
Contains the user's protocol version 2 DSA authentication identity. This file should
not be readable by anyone but the user. It is possible to specify a passphrase when
generating the key; that passphrase will be used to encrypt the private part of this file
with 3DES encryption. This file is not automatically accessed by ssh-keygen, but
it is offered as the default file for the private key. ssh reads this file when a login
attempt is made.
$HOME/.ssh/id_dsa.pub
Contains the protocol version 2 DSA public key for authentication. The contents of
this file should be added to $HOME/.ssh/authorized_keys on all machines
where the user wants to log in using public key authentication. There is no need to
keep the contents of this file secret.
$HOME/.ssh/id_rsa
Contains the user's protocol version 2 RSA authentication identity. This file should
not be readable by anyone but the user. It is possible to specify a passphrase when
generating the key; that passphrase will be used to encrypt the private part of this file
with 3DES encryption. This file is not automatically accessed by ssh-keygen, but
it is offered as the default file for the private key. ssh reads this file when a login
attempt is made.
$HOME/.ssh/id_rsa.pub
Contains the protocol version 2 RSA public key for authentication. The contents of
this file should be added to $HOME/.ssh/authorized_keys on all machines
where the user wants to log in using public key authentication. There is no need to
keep the contents of this file secret.
Not only are there differences in public key authentication between SSH1 and SSH2, but there are differences between
SSH packages as well. The keys for SSH1 and SSH2 generated by OpenSSH differ from the ones made by SSH
Communications Security's SSH servers, which are the other variety you are most likely to encounter. Be sure to
thoroughly read the ssh-keygen, ssh, and sshd man pages for the SSH servers you have to connect to, because the
information you need to know for connecting via public key authentication will most likely be spread among those man
pages. The keys look a bit different for the different protocols, and can look quite different between the SSH packages.
Fortunately, OpenSSH's ssh-keygen can import and export keys.
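The export/import round trip can be sketched as follows; the file names are arbitrary, and the key is a throwaway generated just for the demonstration:

```shell
WORK=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$WORK/key" -q
# -e exports the OpenSSH public key in SECSH Public Key File Format
ssh-keygen -e -f "$WORK/key.pub" > "$WORK/key.secsh"
head -1 "$WORK/key.secsh"
# -i imports a SECSH-format key back into OpenSSH's one-line format
ssh-keygen -i -f "$WORK/key.secsh" > "$WORK/key.openssh"
```

The exported file carries the "---- BEGIN SSH2 PUBLIC KEY ----" header shown in the samples that follow, while the imported file is a single ssh-rsa line suitable for an OpenSSH authorized_keys file.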
To give you an idea of how the various public keys look, some sample public keys are shown here. The samples were
made using the defaults, except for this first key, which does not have a default for the algorithm. This key is a sample
SSH2 public key generated in OpenSSH with the DSA algorithm option (ssh-keygen –t dsa), stored as ~/.ssh/
ssh-dss AAAAB3NzaC1kc3MAAACBALzT9RbceziStHPmMiHmg78hXUgcMP14sJZ/7MH/p2NX
This is a sample SSH1 public key generated in OpenSSH with the RSA algorithm (ssh-keygen –t rsa), stored as ~.
1024 35 1557212985106595841799445393895896855201842965316926480116187178
66438835891174723508102956387 [email protected]
This key is a sample SSH2 public key generated in SSH Communications Security's SSH server with the DSA algorithm
(ssh-keygen2 –t dsa), stored in ~/.ssh2/
---- BEGIN SSH2 PUBLIC KEY ----
Subject: miwa
Comment: "1024-bit dsa, [email protected], Thu May 16 2002 23:33:30 -0500"
---- END SSH2 PUBLIC KEY ----
This is a sample SSH2 public key generated in SSH Communications Security's SSH server with the RSA algorithm (ssh-keygen2 –t rsa), stored in ~/.ssh2/
---- BEGIN SSH2 PUBLIC KEY ----
Subject: miwa
Comment: "1024-bit rsa, [email protected], Sun Sep 08 2002 23:00:14 -0500"
---- END SSH2 PUBLIC KEY ----
This is a sample SSH1 public key generated in SSH Communications Security's SSH server with the RSA algorithm (ssh-keygen1), stored in ~/.ssh/
1024 35 150523262886747450533481402006467053649597280355648477085483985
155150617149168536905710250843163 [email protected]
After you have transferred your public key to the remote host, you have to let the remote host know that you want to allow
public key authentication from your local host. How this is done depends on the SSH server. For OpenSSH, authorized
public keys for SSH1 and SSH2 keys are stored in ~/.ssh/authorized_keys. Each line of a basic
authorized_keys file contains a public key. Blank lines and lines starting with # are ignored. However, limitations
can be further placed on an authorized public key if you use the options listed in Table 14.10. The following is a sample
~/.ssh/authorized_keys file:
1024 35 1557212985106595841799445393895896855201842965316926480116187178
66438835891174723508102956387 [email protected]
ssh-dss AAAAB3NzaC1kc3MAAACBALPMiCqdPDGxcyB1IwPrPXk3oEqvpxR62EsspxGKGGbO
fF7XrXuwNQ== [email protected]
For an SSH Communications Security SSH1 server, the authorized public keys are also stored in ~/.ssh/
authorized_keys. An SSH2 server by SSH Communications Security, however, stores references to files that contain
authorized public keys in ~/.ssh2/authorization. Here is a sample ~/.ssh2/authorization file:
As an example, suppose you want to allow public key authentication from a machine running an SSH Communications
Security SSH2 server. First, generate a key pair on the remote SSH2 machine by using ssh-keygen2.
Then transfer the public key of the key pair to your Mac OS X machine by whatever method you choose.
In this case, because you want to allow public key authentication from a machine running a non-OpenSSH SSH server,
you have to convert the public key file that was transferred to something compatible with your OpenSSH server. The ssh-keygen utility can convert between SSH formats. Run a command of the following form:
ssh-keygen -i -f <transferred_public_key> > <converted_transferred_public_key>
The preceding statement imports the transferred public key file and directs the converted output to a file specified by
<converted_transferred_public_key>. We recommend including the name of the remote host in your
filename to make things easier for you. OpenSSH's ssh-keygen can also export its keys to the IETF SECSH format.
Then add that file to the ~/.ssh/authorized_keys file, the file that contains your public keys from machines
authorized to connect via public key authentication. This can be done in whatever way you feel most comfortable. Issuing
the following statement does this quite neatly:
cat <converted_transferred_public_key> >> .ssh/authorized_keys
Now that the public key from the non-OpenSSH machine has been transferred and converted to a format used by
OpenSSH, you can log in to your Mac OS X machine from the remote host via public key authentication.
Logging in to a machine running a non-OpenSSH SSH server from your OS X machine is similar. First generate the key
pair on your OS X machine by using ssh-keygen. Then convert the public key file to the IETF SECSH format by
running a command of the form:
ssh-keygen –e –f <public_key> > <converted_public_key>
Transfer the converted public key file to the remote host by whatever method you choose. Then add a reference to it, in
the following form, to the remote host's ~/.ssh2/authorization file:
Key <public_key_filename>
Now that the public key generated on your OS X machine has been transferred to the remote host running a non-OpenSSH
SSH server, and a reference to it has been added to the ~/.ssh2/authorization file, you can log in to the remote
host via public key authentication.
The details provided here address logging in via public key authentication between the major different SSH servers using
the SSH2 protocol. Because the SSH1 protocol is not under active development, we are not discussing the details involved
there. However, if you need to connect to an SSH1 server via public key authentication, it is easier than what needs to be
done for the SSH2 protocol. You do not have to convert the key formats. On the non-OpenSSH machine, the file that
contains the public keys is ~/.ssh/authorized_keys, and you add public keys themselves to the file, rather than
references to the public key files.
If you don't like the command line, you might try Gideon Softworks' SSH Helper, available at http://www. It is a freely available package.
Table 14.10. Options for ~/.ssh/authorized_keys
from="pattern-list"
Specifies that in addition to RSA authentication, the canonical name of the remote
host must be present in the comma-separated list of patterns.
command="command"
Specifies that the command is executed whenever this key is used for
authentication. The command supplied by the user (if any) is ignored. This option
might be useful to restrict certain RSA keys to perform just a specific operation,
such as a key that permits remote backups but nothing else.
environment="NAME=value" Specifies that the string is to be added to the environment when logging in using
this key. Environment variables set this way override other default environment
values. Multiple options of this type are permitted. This option is automatically
disabled if UseLogin is enabled.
Forbids TCP/IP forwarding when this key is used for authentication. This might
be used, for example, in connection with the command option.
Forbids X11 forwarding when this key is used for authentication.
Forbids authentication agent forwarding when this key is used for authentication.
Prevents tty allocation.
Limits local ssh -L port forwarding to connect to only the specified host and
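For illustration, here is what a hypothetical entry in ~/.ssh/authorized_keys might look like with several options combined in front of the key. The command path and key material are made-up examples; options are comma-separated, with no spaces between them:

```
command="/usr/local/bin/run-backup",no-port-forwarding,no-pty,from="*" ssh-rsa AAAAB3NzaC1yc2E...= backup key for miwa
```

A key installed this way can run the backup command from hosts at and nothing else, even if the private key is stolen.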
A Butler to Hold Your Wallet: ssh-agent
The SSH suite of applications is wonderful for protecting your communications, but although entering a passphrase instead
of a password for logins through ssh is only a minor inconvenience, repeating it over and over to copy files with scp can
be a real annoyance. Thankfully, the designers thought of this and created an auxiliary application to which you authenticate yourself once; it can then use the stored private keys and the passphrases associated with your SSH identities (SSH key pairs generated by ssh-keygen and authorized on another host) to authenticate to remote hosts for you automatically. Essentially, this software acts as your agent and responds for you whenever a remote host asks for your passphrase. This eliminates any need for you to respond to passphrase queries from remote hosts for which the agent knows a proper response, and can drastically decrease the effort involved in using the SSH applications.
If you're dealing with SSH on a daily basis, using ssh-agent is almost certainly the way you'll want to use the SSH
software; it will make your life much easier. The process for using the agent is simple as well, and can be summarized as follows:
1. Start the ssh-agent.
2. Set up your environment so that SSH applications can find the agent.
3. Add identities to the agent.
4. Use SSH applications (slogin, scp, etc.), and never get asked for your passphrase.
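In terms of commands, the whole workflow looks roughly like the following sketch. The remote hostname and filename are placeholders, and the backquoted eval is what imports the SSH_AUTH_SOCK and SSH_AGENT_PID environment variables the other SSH tools use to find the agent:

```
[Sage-Rays-Computer:~] miwa% eval `ssh-agent`                    (steps 1 and 2)
[Sage-Rays-Computer:~] miwa% ssh-add                             (step 3)
Enter passphrase for /Users/miwa/.ssh/id_rsa:
[Sage-Rays-Computer:~] miwa% scp somefile remote.example.com:.   (step 4, no prompt)
[Sage-Rays-Computer:~] miwa% eval `ssh-agent -k`                 (kill agent when done)
```

Note that you type the passphrase exactly once, at the ssh-add step; every connection after that is answered by the agent.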
However, although the difference in practice is significant, the difference in print is subtle. Earlier in this chapter you learned how to perform all the steps necessary to work with SSH, but for the sake of clarity about what ssh-agent can actually do for you, we'll recap from the position of a user who's never used SSH to authenticate to remote hosts. In the examples that follow, we've left the prompt intact so that you can tell on which machine and in which directory we're working. The input/output fragments that follow were collected as a single stream of actions by our test user miwa, and we've split them up to intersperse comments on what he's doing. If you follow along, by the end of this section you'll have set up a user with two independent SSH identities that can be used to authenticate against both OpenSSH and non-OpenSSH sshd servers.
1. Look first at what files miwa has in his ~/.ssh directory.
[Sage-Rays-Computer:~] miwa% ls -l .ssh
ls: .ssh: No such file or directory
2. We're starting with a clean slate—we've deleted miwa's ~/.ssh directory so that it's as if he's never used the SSH
software before.
[Sage-Rays-Computer:~] miwa% ssh-keygen -t rsa -b 1024
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/miwa//.ssh/id_rsa):
Created directory '/Users/miwa//.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/miwa//.ssh/id_rsa.
Your public key has been saved in /Users/miwa//.ssh/
The key fingerprint is:
f4:ab:15:d4:47:54:5b:4b:d4:79:be:e6:f7:f3:ca:6d miwa@Sage-Rays-Computer.local
3. To use SSH applications, miwa needs keys. Create his default key as an RSA key of 1024 bits. The "//" in the
path to his key file is an artifact of the way the software determines paths, and will collapse safely to a single /.
Enter a passphrase for miwa, but notice that it's not echoed to the screen.
[Sage-Rays-Computer:~] miwa% ls -l .ssh
total 16
-rw------- 1 miwa staff 951 Feb 17 13:43 id_rsa
-rw-r--r-- 1 miwa staff 240 Feb 17 13:43
4. In his ~/.ssh directory, there are now two files, containing the private and public key pair for his default identity.
[Sage-Rays-Computer:~] miwa% slogin
[email protected]'s password:
Last login: Mon Feb 17 2003 13:51:04 -0500 from cvl232015.columb
You have mail.
ryoko miwa 1 >ls -l .ssh2
.ssh2: No such file or directory
ryoko miwa 2 >mkdir .ssh2
ryoko miwa 3 >exit
Connection to closed.
5. miwa logs in to ryoko, using his password for the system. ryoko is a Sun Enterprise Server running a non-OpenSSH version of the SSH software, which doesn't keep its key files in the same place as the OpenSSH version on our Macintosh does. miwa creates the required ~/.ssh2 directory in his home directory on ryoko and then logs off the machine. For the best security, we recommend disabling passworded logins from the network entirely and accepting only public key (passphrase) authentication, but this requires physical access to both machines for at least a little while, or some other way of transferring a public key without being able to log in to the remote machine via the network.
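On an OpenSSH server, disabling passworded network logins comes down to a couple of sshd_config directives. This is a sketch for OpenSSH only; a commercial SSH2 server such as the one on ryoko uses a different configuration file and syntax:

```
# /etc/sshd_config (OpenSSH): accept public keys, refuse passwords
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
```

Remember to install and test a working public key before restarting sshd with these settings, or you can lock yourself out of network logins entirely.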
[Sage-Rays-Computer:~] miwa% cd .ssh
[Sage-Rays-Computer:~/.ssh] miwa% ls -l
total 24
-rw------- 1 miwa staff 951 Feb 17 13:43 id_rsa
-rw-r--r-- 1 miwa staff 240 Feb 17 13:43
-rw-r--r-- 1 miwa staff 633 Feb 17 13:45 known_hosts
[Sage-Rays-Computer:~/.ssh] miwa% ssh-keygen -e -f id_rsa
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "1024-bit RSA, converted from OpenSSH by miwa@Sage-Rays-Computer.local"
---- END SSH2 PUBLIC KEY ----
6. miwa needs to get the public key for the identity he wants to use on ryoko into a form that ryoko's sshd can understand. Pleasantly, ssh-keygen can not only generate keys, but can also translate them into the standard format that ryoko's server wants. A known_hosts file has appeared in miwa's .ssh directory along with his id_rsa identity files; recorded in this file is the public host key for ryoko.
[Sage-Rays-Computer:~/.ssh] miwa% ssh-keygen -e -f id_rsa > home_rsa.ietf
[Sage-Rays-Computer:~/.ssh] miwa% ls -l
total 32
-rw-r--r-- 1 miwa staff 347 Feb 17 13:57 home_rsa.ietf
-rw------- 1 miwa staff 951 Feb 17 13:43 id_rsa
-rw-r--r-- 1 miwa staff 240 Feb 17 13:43
-rw-r--r-- 1 miwa staff 633 Feb 17 13:45 known_hosts
7. Using ssh-keygen, miwa writes out an IETF-formatted version of the public key for his id_rsa key and puts it in his .ssh directory. Because the SSH implementation on Mac OS X won't use this key for anything, he could actually store it just about anywhere, but this seems like as good and safe a place as any.
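The conversion works in both directions: -e exports an OpenSSH public key to the IETF SECSH format, and -i imports a SECSH-format key back into OpenSSH format. A small self-contained demonstration, using a throwaway passphrase-less 2048-bit key in a scratch directory (substitute your real key file in practice):

```shell
# Generate a throwaway key pair with no passphrase in a scratch directory.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$tmp/id_rsa"

# Export the OpenSSH public key to the IETF SECSH format...
ssh-keygen -e -f "$tmp/" > "$tmp/id_rsa.ietf"

# ...and import the SECSH-format key back into OpenSSH format.
ssh-keygen -i -f "$tmp/id_rsa.ietf" > "$tmp/"

head -1 "$tmp/id_rsa.ietf"    # ---- BEGIN SSH2 PUBLIC KEY ----
```

The round trip preserves the key material itself; only the wrapping around it changes.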
[Sage-Rays-Computer:~/.ssh] miwa% scp ./home_rsa.ietf [email protected]:.
[email protected]'s password:
scp: warning: Executing scp1 compatibility.
100% |****************************|
8. miwa copies the key to ryoko by using scp. Because it's a public key, it wouldn't be a problem even if he had to
copy it over a protocol where data is visible. If passworded logins are blocked, this key transfer needs to be done in
some other fashion, such as transporting it on removable media.
[Sage-Rays-Computer:~/.ssh] miwa% slogin
[email protected]'s password:
Last login: Mon Feb 17 2003 13:53:54 -0500 from cvl232015.columb
You have mail.
ryoko miwa 1 >ls -l .ssh2
total 2
-rw-r--r--   1 miwa        347 Feb 17 14:01 home_rsa.ietf
ryoko miwa 2 >cd .ssh2
ryoko .ssh2 3 >cat >> authorization
Key home_rsa.ietf
9. An authorization file must be created on ryoko, listing the key miwa just transferred as valid for logins. miwa simply cats the line in append mode onto his authorization file. The file doesn't exist yet, so it will be created; if it did exist, this Key line would simply be appended as new data at the end of the file. The cat command is terminated with a Control-D on a line by itself.
ryoko .ssh2 4 >ls -l
total 4
-rw-r--r--   1 miwa         18 Feb 17 14:02 authorization
-rw-r--r--   1 miwa        347 Feb 17 14:01 home_rsa.ietf
ryoko .ssh2 5 >chmod 600 authorization home_rsa.ietf
ryoko .ssh2 6 >ls -l
total 4
-rw-------   1 miwa         18 Feb 17 14:02 authorization
-rw-------   1 miwa        347 Feb 17 14:01 home_rsa.ietf
ryoko .ssh2 7 >cat authorization
Key home_rsa.ietf
ryoko .ssh2 8 >exit
Connection to closed.
10. The authorization file now exists, and contains the data expected. Even though it's a public key and theoretically can't be usefully abused, miwa chmods both files in his .ssh2 directory so that only he can read them, just to be safe.
[Sage-Rays-Computer:~/.ssh] miwa% slogin
Enter passphrase for key '/Users/miwa//.ssh/id_rsa':
Last login: Mon Feb 17 2003 14:05:00 -0500 from cvl232015.columb
You have mail.
ryoko miwa 1 >exit
Connection to closed.
11. Back on Sage's computer, miwa can now slogin to ryoko, and be asked for a passphrase rather than his
considerably weaker password.
[Sage-Rays-Computer:~/.ssh] miwa% ssh-keygen -t rsa -b 1024
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/miwa//.ssh/id_rsa): /Users/miwa//.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/miwa//.ssh/internal_rsa.
Your public key has been saved in /Users/miwa//.ssh/
The key fingerprint is:
62:47:b5:71:2b:23:08:ee:87:e2:cc:7d:0b:ce:4d:44 [email protected]
12. For some reason, miwa wants another, separate identity for use on his internal (private) network. Perhaps it's because he's going to allow (against our good advice) other users to log in to his account and use it for connecting to other machines on the internal network. By using a separate identity, and giving out only the passphrase for his internal identity, he can mitigate the danger in this scheme and protect his external identity. Here, miwa has chosen to create it in the nondefault file internal_rsa, again as a 1024-bit RSA key.
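Rather than remembering which key goes with which host, miwa could also point the OpenSSH client at the nondefault identity automatically. A sketch using a Host stanza in ~/.ssh/config (the host pattern is a made-up example), equivalent to passing -i ~/.ssh/internal_rsa on every slogin or scp command line:

```
# ~/.ssh/config -- use the internal identity for internal-network hosts
Host *.internal.example.com creampuf
    IdentityFile ~/.ssh/internal_rsa
```

With this in place, connections to matching hosts offer internal_rsa, and all other hosts continue to get the default id_rsa identity.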
[Sage-Rays-Computer:~/.ssh] miwa% ls -l
total 48
-rw-r--r--  1 miwa  staff  347 Feb 17 13:57 home_rsa.ietf
-rw-------  1 miwa  staff  951 Feb 17 13:43 id_rsa
-rw-r--r--  1 miwa  staff  240 Feb 17 13:43
-rw-------  1 miwa  staff  951 Feb 17 14:11 internal_rsa
-rw-r--r--  1 miwa  staff  240 Feb 17 14:11
-rw-r--r--  1 miwa  staff  633 Feb 17 13:45 known_hosts
13. Now there are files in ~miwa/.ssh/ for both his default id_rsa identity and his internal_rsa identity. miwa needs to transfer the public key from the internal_rsa key pair to any of the private, internal-network hosts he wants to be able to access via passphrase. In this case, the machine otherwise known as creampuf will be used as an example.
[Sage-Rays-Computer:~/.ssh] miwa% slogin 1