Sound for Film and Television
Third Edition
Tomlinson Holman
AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD
PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Focal Press is an imprint of Elsevier
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
The Boulevard, Langford Lane, Kidlington, Oxford, OX5 1GB, UK
© 2010 Tomlinson Holman. Published by Elsevier Inc. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopying, recording, or any information storage and retrieval system, without
permission in writing from the publisher. Details on how to seek permission, further information about the
Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance
Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.
This book and the individual contributions contained in it are protected under copyright by the Publisher (other
than as may be noted herein).
Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our
understanding, changes in research methods, professional practices, or medical treatment may become
necessary.
Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using
any information, methods, compounds, or experiments described herein. In using such information or methods
they should be mindful of their own safety and the safety of others, including parties for whom they have a
professional responsibility.
To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any
liability for any injury and/or damage to persons or property as a matter of products liability, negligence or
otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the
material herein.
Library of Congress Cataloging-in-Publication Data
Holman, Tomlinson.
Sound for film and television / Tomlinson Holman. – 3rd ed.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-240-81330-1 (alk. paper)
1. Sound–Recording and reproducing. 2. Sound motion pictures. 3. Video recording. 4. Motion pictures–
Sound effects. 5. Television broadcasting–Sound effects. I. Title.
TK7881.4.H63 2010
778.5’2344–dc22
2009044200
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
ISBN: 978-0-240-81330-1
For information on all Focal Press publications
visit our website at www.elsevierdirect.com
09 10 11 12 13 5 4 3 2 1
Printed in the United States of America
Contents

Preface to the Third Edition  ix
Introduction  xi

Chapter 1  Objective Sound  1
  An Old Story  1
  Properties of Physical Sound  1
  Propagation  1
  A Medium Is Required  3
  Speed of Sound  3
  Amplitude  4
  Wavelength and Frequency  4
  Importance of Sine Waves  6
  Sympathetic Vibration and Resonance  8
  Phase  8
  Influences on Sound Propagation  9
  Room Acoustics  14
  Sound Fields in Rooms  15
  Sum of Effects  17
  Standing Waves  18
  Noise  19
  Scaling the Dimensions  20

Chapter 2  Psychoacoustics  23
  Introduction  23
  The Physical Ear  23
  Hearing Conservation  24
  Auditory Sensitivity versus Frequency  26
  Threshold Value—the Minimum Audible Field  26
  Equal-Loudness Curves  26
  What's Wrong with the Decibel  27
  Magnitude Scaling  27
  Loudness versus Time  28
  Spectrum of a Sound  28
  Critical Bands of Hearing  28
  Frequency Masking  29
  Temporal Masking  29
  Pitch  30
  Spatial Perception  30
  Transients and the Precedence Effect  30
  Influence of Sight on Sound Localization  30
  Localization in Three Dimensions: Horizontal, Vertical, and Depth  30
  The Cocktail Party Effect (Binaural Discrimination)  32
  Auditory Pattern and Object Perception  32
  Information Used to Separate Auditory Objects  33
  Gestalt Principles  34
  Speech Perception  36
  Speech for Film and Television  36
  Influence of Sight on Speech Intelligibility  37
  The Edge of Intelligibility  37
  Conclusion  37

Chapter 3  Audio Fundamentals  39
  Audio Defined  39
  Tracks and Channels  39
  Signals: Analog and Digital  39
  Paradigms: Linear versus Nonlinear  41
  Level  42
  Microphone Level  42
  Line Level  42
  Speaker Level  43
  Level Comparison  43
  Analog Interconnections  44
  Impedance Bridging versus Matching  45
  Connectors  45
  Quality Issues  47
  Dynamic Range: Headroom and Noise  47
  Linear and Nonlinear Distortion  48
  Wow and Flutter  50
  Digital Audio-Specific Problems  50

Chapter 4  Capturing Sound  55
  Introduction  55
  Microphones in General  55
  Production Sound for Fiction Films  56
  Preproduction—Location Scouting  56
  Microphone Technique—Mono  57
  Distance Effect  57
  Microphone Directionality  58
  Microphone Perspective  58
  The Boom—Why, Isn't That Old Fashioned?  58
  Booms and Fishpoles  59
  Boom and Fishpole Operation  60
  Checklist for Boom/Fishpole Operation  61
  Planted Microphones  61
  Lavaliere Microphones  63
  Using Multiple Microphones  65
  Typical Monaural Recording Situations  66
  Microphone Technique—Stereo  72
  Background  72
  Techniques  73
  Recommendations  75
  Microphone Damage  76
  Worldized and Futzed Recording  76
  Other Telephone Recordings  76

Chapter 5  Microphone Technicalities  79
  Pressure Microphones  79
  Boundary-Layer Microphones  80
  Wind Susceptibility  80
  Pressure-Gradient Microphones  80
  Wind Susceptibility  81
  Combinations of Pressure and Pressure-Gradient Responding Microphones  82
  Super- and Hypercardioids  82
  Subcardioid  83
  Variable-Directivity Microphones  83
  Interference Tube (Shotgun or Rifle Microphone)  83
  Microphone Types by Method of Transduction  84
  Carbon  84
  Ceramic  84
  Electrodynamic (Commonly Called "Dynamic") Microphone  84
  Electrostatic (Also Known as Condenser or Capacitor) Microphone  85
  Microphone Types by Directivity (Polar Pattern)  86
  Microphone Specifications  88
  Sensitivity  88
  Frequency Response  88
  Choice of Microphone Frequency Response  88
  Polar Pattern and Its Uniformity with Frequency  89
  Equivalent Acoustic Noise Level and Signal-to-Noise Ratio  89
  Maximum Undistorted Sound Pressure Level  90
  Dynamic Range  90
  Susceptibility to Wind Noise  90
  Susceptibility to Pop Noise  90
  Susceptibility to Handling Noise  90
  Susceptibility to Magnetic Hum Fields  90
  Impedance  90
  Power Requirements  91
  Microphone Accessories  91
  Pads  91
  High-Pass (Low-Cut) Filters  91
  Shock and Vibration Mounts  91
  Mic Stands  92
  Mic Booms and Fishpoles  92
  Windscreens  92
  Silk Discs  93
  Microphone Cables and Connectors  93

Chapter 6  Handling the Output of Microphones  95
  What Is the Output of a Microphone?  95
  Analog Microphones  95
  Where to Put the Pad/Gain Function  96
  Case History  97
  Quiet Sounds  98
  Impedance  98
  Digital Microphones  99
  Digital Microphone Level  100
  The Radio Part of Radio Mics  100
  Selecting Radio Mics  100
  Radio Mics in Use  102
  Frequency Coordination  102
  Minimize Signal Dropouts and Multipath  103
  Added Gain Staging Complications in Using Radio Mics  104
  Radio Mics Conclusion  104

Chapter 7  Production Sound Mixing  105
  Introduction  105
  Single- versus Double-System Sound  105
  Combined Single and Double System  106
  Next Decision for Single-System Setups: On-Camera or Separate Mix Facilities?  106
  For Double-System Setups: Separate Mixer and Recorder or Combined?  106
  Production Sound Consoles: Processes  107
  Accommodating Microphone Dynamic Range  107
  Other Processes  109
  Production Sound Mixers: Signal Routing  110
  Examples  110
  Small Mixers  110
  Small Mixer/Recorders  110
  A Production Sound Mixer and Separate Recorder  111
  Production Sound Mixer/Recorders  112
  Production Sound Equipment on a Budget  112
  Cueing Systems, IFB, and IEM  114
  Equipment Interactions  114
  Radio Frequency Interactions  114
  Audio Frequency Range Interactions: Inputs  115
  Audio Frequency Range Interactions: Outputs  115
  Initial Setup  115
  Toning Heads of "Reels"  115
  Slating  115
  Mixing  117
  Level Setting  117
  Coverage  117
  Dialog Overlaps  117
  Crowd Scenes  118
  Logging  118
  Shooting to Playback  118
  Other Technical Activities in Production  119
  Set Politics  119

Chapter 8  Sync, Sank, Sunk  121
  Introduction  121
  In Case of Emergency  123
  A Little History  124
  Telecine or Scanner Transfer  125
  The European Alternative  127
  SMPTE Time Code Sync  127
  Types of Time Code  128
  Time Code Slates  131
  Jam Syncing  131
  Syncing Sound on the Telecine  132
  Latent Image Edge Numbers  132
  Synchronizers  132
  Machine Control  132
  Time Code Midnight  133
  Time Code Recording Method  133
  Time Code for Video  133
  Conclusion  134
  Locked versus Unlocked Audio  134
  The 2 Pop  134
  Principle of Traceability  134

Chapter 9  Transfers  137
  Introduction  137
  Digital Audio Transfers  137
  Transfers into Digital Audio Workstations  137
  Types of Transfers  137
  File Transfers  138
  Audio File Formats  139
  Common Problems in Digital Audio File Transfers for Film and Television  140
  Streaming Digital Audio Transfers  141
  Problems Affecting Streaming Transfers  142
  Audio Sample Rate  142
  Revert to Analog  143
  Digital Audio Levels  143
  Analog Transfers  143
  Analog-to-Digital and Digital-to-Analog Systems  144

Chapter 10  Sound Design  145
  Where Does Sound Design Come From?  146
  Sound Styles  147
  Example of Sound Design Evolution  149
  Sound Design Conventions  150
  Observing Sound  151

Chapter 11  Editing  153
  Introduction  153
  Overall Scheme  153
  Computer-Based Digital Audio Editing  155
  Digital Editing Mechanics  155
  Types of Cuts  156
  Fade Files  156
  Cue-Sheet Conventions  157
  Feature Film Production  157
  Syncing Dailies  157
  Dialog-Editing Specialization  157
  Sound-Effects Editing Specialization  161
  Music-Editing Specialization  164
  Scene Changes  165
  Premix Operations for Sound Editors  166
  Television Sitcom  166
  Documentary and Reality Production  167
  Bit Slinging  168
  Back to Our Story  168

Chapter 12  Mixing  171
  Introduction  171
  Sound Source Devices Used in Rerecording  172
  Mixing Consoles  172
  Processes  173
  Level  173
  Multiple Level Controls in Signal Path  174
  Dynamic Range Control  174
  Processes Primarily Affecting Frequency Response  177
  Processes Primarily Affecting the Time Domain  180
  Combination Devices  183
  Configuration  183
  Early Rerecording Consoles  183
  Adding Mix in Context  183
  Busing  183
  Patching  184
  Panning  184
  Auxiliary and Cue Buses  185
  Automation  186
  Punch-In/Punch-Out (Insert) Recording  186

Chapter 13  From Print Masters to Exploitation  189
  Introduction  189
  Print Master Types  189
  Print Masters for Various Digital Formats  190
  Low-Bitrate Audio  190
  Print Masters for Analog Soundtracks  191
  Other Types of Delivered Masters for Film Uses  192
  Digital Cinema  192
  Masters for Video Release  192
  Television Masters  193
  Sound Negatives  193
  Theater and Dubbing Stage Sound Systems  194
  A-Chain and B-Chain Components  194
  Theater Sound Systems  194
  Theater Acoustics  196
  Sound Systems for Video  197
  Home Theater  197
  Desktop Systems  198
  Toward the Future  199

Appendix I  Working with Decibels  201
Appendix II  Filmography  203
Appendix III  The Eleven Commandments of Film Sound  205
Appendix IV  Bibliography  207
Glossary  209
Index  229
About the Author  241
Companion Website Page  243
Instructions for Accompanying DVD  245
Preface to the Third Edition
This book is an introduction to the art and technique of
sound for film and television. The focus in writing the
book has been to span the gulf between typical film and
television production textbooks, with perhaps too little
emphasis on underlying principles, and design engineering
books that have few practical applications, especially to
the film and television world. The guiding principle for
inclusion in the text is the usefulness of the topic to film
or video makers.
The first three chapters provide background principles
of use to anyone dealing with sound, especially sound
accompanying a picture, and, by way of examples, demonstrate the utility of the principles put into practice. The rest
of the book walks through the course of a production,
from the pickup of sound by microphones on the set to
the reproduction of sound in cinemas and homes at the
end of the chain.
For the sake of completeness, some information has
been included that may be tangential to end users. This
information has been made separate from the main text
by being indented and of smaller type.
How to Read This Book

Depending on who you are, there are various approaches you can take to reading this book.

If you have to start on a set tomorrow morning, read Chapter 4, Capturing Sound, and Chapter 8, Sync, Sank, Sunk, tonight. These two chapters contain the most salient features that you have to know to get started. Of these, the sync chapter is the harder one, and you may have to call the postproduction house to know what to do, explaining to them what you are about to embark on—above all, be careful: cameras that say they are 24 P may in fact be 23.976 P. Be sure to have camera, mixer/recorder, and slate model numbers so that the post house can help you. If the production has not determined a postproduction house, your audio rental facility should be helpful.

Then, having mastered the material in these chapters, move on to the other chapters related to recording sound, 5, 6, and 7. In them you will find concepts that the background provided by Chapters 1, 2, and 3 will help to explain. From there, work linearly through the book from Chapter 9 through Chapter 13. Note that the Glossary at the end should help in defining terms.

For a university course in film sound, I start with the first background chapters and proceed forward straight through the book. I do skip some material in a starting course, which is here for completeness but is beyond the scope of early courses. We use the book at multiple levels in our program at the University of Southern California.

Examples

No study of film and television sound would be complete without listening to a lot of film and television shows. This is practical today in classrooms and at home because, with a decent home theater sound system, available for a few thousand dollars, the principles given in the text can be demonstrated. Here are some film examples that are especially useful.

- Citizen Kane: the scene in the living room at Xanadu in which Kane and his love interest interact, photographed with the great depth of field that was the innovation of Gregg Toland for the picture. The sound in this scene can be contrasted with that in the next scene of Kane and his girlfriend in the back seat of a car. In the first scene, the sound is very reverberant, emphasizing the separation of the characters. In the second, the sound is very intimate, emphasizing the change of scene that has taken place. Orson Welles brought the techniques he had learned in radio to films. This is used to illustrate Chapter 1, Objective Sound, and the difference that attention to such factors can make.

- Days of Heaven, reel 1: from the opening to the arrival of the train at its destination. After an opening still photo montage accompanied by music, we come in on an industrial steel mill in which the sound of the machinery is so loud we often cannot hear the characters. A fight ensues between a worker and his boss, and the content of the argument is actually made stronger by the fact of our not being able to discern it. This illustrates frequency masking, a topic in Chapter 2. A train then leaves for the country, accompanied by music, and the question posed is: Do we hear the train or not, and what does it matter if we do or don't? A voice-over narration illustrates the speech mode of perception, when it abruptly enters and demands our attention. The lyrical music accompanying the train motion is a strong contrast with the sound that has come before, and is used in the vaudeville sense of "a little traveling music, please"—making it an effective montage. At the end of the scene there is a cross-fade between the music and the reality sound that puts an end to the montage, punctuating the change in mood.

- Das Boot, reel 1: the entrance of the officers into the submarine compound until the first shot of the submarine on the open ocean illustrates many things. At first the submarine repair station interior is very noisy, and the actors have to raise their voices to be heard. Actually, the scene was almost certainly looped; therefore it was the direction to the actors that caused their voices to be raised, not the conditions under which they were recorded. Next, the officers come upon their boat, and despite the fact that they are still in a space contiguous with the very noisy space, the noisy background gives way to a relatively quiet one, illustrating a subjective rather than a totally objective view. Then the submarine leaves the dock, accompanied by a brass band playing along in sync (an example of prerecorded, or at least postsynced, sound). The interior of the boat is established through the medium of telling a visitor where everything is. Sound is used to help establish each space in the interior: noise for the men's quarters and a little music for the officers', Morse code for the radio room, and mechanical noise for the control room. Next we come upon a door from behind which we hear a loud sound; then, going through the door, we find it is the engine room, with the very noisy engine causing the actors to speak loudly once again. The whole reel, up to the going-to-sea shot, is useful for the ways in which sound is used to tell the story.

- Cabaret is useful for two principal purposes. The first is to show a scene that involved extensive preproduction preparation of a music recording, then filming, then using the prerecorded and possibly postrecorded materials after the picture was edited to synchronize to the perspective of the picture. The scene is of the Nazi boy singing "The sun on the meadow is summery warm . . ." until the main characters leave with the question, "Do you still think you will be able to control them?" What is illustrated here is very well controlled filmmaking, for we always hear what we expect to, that is, sound matched to picture, but over a bed of sound that stays more constant with the picture cuts. The second point of using Cabaret is that filmmaking is a powerful, but neutral, tool that can help move people to heights of folly. Whether the techniques taught here are used for good or ill is in the hands of the filmmaker.

- Platoon demonstrates a number of sound ideas in the final battle scene. Despite the fact that it is difficult to understand the characters while they are under fire, the effect of their utterances is bone chilling nevertheless. The absolutely essential lines needed for exposition are clearly exposed, with practically no competition from sound effects or music. On the other hand, there is one line that can be understood only by lip reading, because it is covered by an explosion. Still, the meaning is clear and the "dialog" can be understood because the words spoken are so right in the context of the scene.
Other films that I have found to be of enduring interest are
listed in the Appendix II Filmography at the end of the book.
ACKNOWLEDGMENTS
Art Baum read the entire book in minute detail and
provided great feedback. Martin Krieger also read the
entire manuscript for clarity. Others read specific areas
of their expertise, which is often beyond my own. Mark
Schubin, of the live Metropolitan Opera broadcasts, and
Paul Chapman, Senior Vice President of Technology at
Fotokem, were particularly helpful on synchronization
issues. The discussion of the Gestalt psychologists and
psychoacoustics in Chapter 2 owes a great deal to Dr.
Brian C. J. Moore’s book An Introduction to the Psychology of Hearing. Other readers included colleagues Tom
Abrams, Midge Costin, and Don Hall. Dr. Dominic Patawaran contributed to my well-being during the writing of
this book.
DEDICATION
This work is dedicated to the hardworking men and
women, often unsung, who perform feats of skill and
amazing perseverance every day in the making of sound
for film and video.
Introduction
SOUND FOR FILM AND TELEVISION
DEFINED
Sound for film and television is an aural experience constructed to support the story of a narrative, documentary,
or commercial film or television program. Sound may tell
the story directly, or it may be used indirectly to enhance
the story. Although there are separate perceptual mechanisms for sound and picture, the sound may be integrated
by the audience along with the picture into a complete
whole, without differentiation. In such a state, the sound
and picture together can become greater than the sum of
the parts.
In most instances, film and television sound for entertainment and documentary programming is constructed in
postproduction by professionals utilizing many pieces of
sound mixed seamlessly together to create a complete
whole. The sources used for the sound include recordings
made during principal photography on sets or on location,
sound effects libraries and customized recordings, and
music, both composed for the film and from preexisting
sources. Sound for film and television is thus a thoroughly
constructed experience, usually meant to integrate many
elements together seamlessly and not draw specific attention to itself.
The relative roles of picture and sound can change with
regard to storytelling from scene to scene and moment to
moment. A straight narrative picture will probably have
dialog accompanying it, whereas a picture montage will
often be accompanied by music, or at least manipulated
sound effects, as the filmmaker varies the method of storytelling from time to time to add interest to the film and
provide a moment for audiences to soak up the action,
make scene transitions, and so forth.
Nearly everyone involved in the production of a film
or television program affects, and is affected by, sound.
Writers use sound elements in their storytelling, with suggestions in the script for what may be heard. Location
scouts should note bad noise conditions at potential shooting sites because, although the camera can “pan off” an
offending sign, there is no such effective way to eliminate
airplanes flying over from the soundtrack—the “edges” of
a sound frame are not hard like those of a picture frame.
Directors need to be keenly aware of the potential for
sound, for what they are getting on location and what
can be substituted in postproduction, as sound is “50 percent of the experience” according to a leading filmmaker.
Cinematographers can plan lighting so that a sound boom
is usable, with the result being potentially far better sound.
Costumers can supply pouches built into clothing that can
conceal microphones and can supply booties so that actors
can wear them for low noise when their feet don’t show.
Grips, gaffers, and set dressers can make the set quiet
and make operable items work silently. Often, the director
need only utter the word to the crew that sound is important to him or her for all this to occur.
ROLES OF SOUND
Many kinds of sound have a direct storytelling role in filmmaking.[1] Dialog and narration tell the story, and narrative sound effects can be used in such a capacity, too,
for example, to draw the attention of the characters to an
off-screen event. Such direct narrative sound effects are
often written into the script, because their use can influence when and where actors have to take some
corresponding action.
Sound also has a subliminal role, working on its audience subconsciously. Whereas all viewers can tell the various objects in a picture apart—an actor, a table, the walls
of a room—listeners barely ever perceive sound so analytically. They tend to take sound in as a whole, despite its
actually being deliberately constructed from many pieces.
Herein lies the key to an important storytelling power of
sound: the inability of listeners to separate sound into
ingredient parts can easily produce “a willing suspension
of disbelief” in the audience, because they cannot separately discern the functions of the various sound elements.
This fact can be manipulated by filmmakers to produce a
route to emotional involvement in the material by the
audience. The most direct example of this effect is often
the film score. Heard in isolation, film scores[2] often do
not make much musical sense; the music is deliberately
written to enhance the mood of a scene and to underscore
the action, not as a foreground activity, but a background
one. The function of the music is to “tell” the audience
1
This term is used instead of the clumsier, but more universal, “program
making.” What is meant here and henceforth when terms such as this are
used is the general range of activities required to make a film, video, or
television program.
2
The actual score played with the film, not the corresponding music-only
CD release.
xi
xii
how to feel, from moment to moment: soaring strings
mean one thing, a single snare drum, another.
Another example of this kind of thing is the emotional
sound equation that says that low frequencies represent a
threat. Possibly this association has deep primordial roots,
but if not, exposure to film sound certainly teaches listeners this lesson quickly. A distant thunderstorm played
underneath an otherwise sunny scene indicates a sense of
foreboding, or doom, as told by this equation. An interesting parallel is that the shark in Jaws is introduced by four
low notes on an otherwise calm ocean, and there are many
other such examples.
Sound plays a grammatical role in the process of
filmmaking too. For instance, if sound remains constant
before and after a picture cut, the indication being made
to the audience is that, although the point of view may
have changed, the scene has not shifted—we are in the
same space as before. So sound provides a form of
continuity or connective tissue for films. In particular,
one type of sound represented several ways plays this part.
Presence and ambience help to “sell” the continuity of a
scene to the audience.
SOUND IS OFTEN “HYPERREAL”
Sound recordings for film and television are often an
exaggeration of reality. One reason for this is that there
is typically so much competing sound at any given
moment that each sound that is recorded and must be
heard has to be rather overemphatically stated, just to
“read” through the clutter. Heard in isolation, the recordings seem silly, overhyped; but heard in context, they
assume a more natural balance. The elements that often
best illustrate this effect are called Foley sound effects.
These are effects recorded while watching a picture, such
as footsteps, and are often exaggerated compared to how
they would be in reality, both in loudness and in intimacy.
Although some of this exaggeration is due to the experience of practitioners finding that average sound playback
systems obscure details, a good deal of the exaggeration
still is desirable under the best playback conditions, simply because of the competition from other kinds of sound.
SOUND AND PICTURE
Sound often has an influence on picture, and vice versa.
For instance, making picture edits along with downbeats
in a musical score often makes the picture cuts seem very
right. In The Wonderful Horrible Life of Leni Riefenstahl,
we see Hitler’s favorite filmmaker teaching us this lesson,
for she cut the waving flags in the Nuremberg Nazi rally
in Triumph of the Will into sync with the music, increasing
the power of the scene to move people.
Scenes are different depending on how sound plays out
in them. For example, “prelapping” a sound edit before a
scene-changing picture edit³ simply feels different from
cutting both sound and picture simultaneously. The sense
is heightened that the outgoing scene is over, and the story
is driven ahead. Such a decision is typically not taken by a sound editor at the end of the process in postproduction, but more often made by the picture editor and director
working together, because it has such a profound impact
on storytelling. Thus involvement with sound is important
not only to those who are labeled with sound-oriented
credits, but also to the entire filmmaking process represented by directing and editing the film.
SOUND PERSONNEL
Sound-specific personnel on a given film or television job
may range from one person, that being the camera person
on a low-budget documentary with little postproduction,
to quite large and differentiated crews as seen in the credits of theatrical motion pictures. In typical feature film
production, a production sound recordist serves as head
of a crew, who may add one or more boom operators
and cable persons as needed to capture all the sound
present. On television programs shot in the multicamera
format, “filmed in Hollywood before a live studio audience,” an even larger crew may be used to control multiple boom microphones, to plant microphones on the set,
and to place radio microphones on actors, and then mix
these sounds to a multitrack tape recorder. Either of these
situations is called production sound recording.
Following in postproduction, picture editors cut the
production soundtrack along with the picture, so that the
story can be told throughout a film. They may add some
additional sound in the way of principal sound effects
and music, producing, often with the help of sound-specific editors, “temp mixes” useful in evaluating the
current state of a film or video in postproduction. Without
such sound, audiences, including even sophisticated professional ones, cannot adequately judge the program content, as they are distracted by such things as cutting to
silence. By stimulating two senses, program material is
subject to a heightened sensation on the part of the
viewer/listener, which would not occur if either the picture
or the sound stood alone. A case in point is one of an
observer looking at an action scene silently, and then with
ever increasing complexity of sound by adding each of the
edited sound sources in turn. The universal perception of
observers under these conditions is that the picture appears
to run faster with more complex sound, despite the fact
3. By cutting to the sound for the incoming scene before the outgoing picture changes.
that precisely the same time elapses for the silent and the
sound presentations: the sound has had a profound influence on the perception of the picture.
When the picture has been edited, sound postproduction begins in earnest. Transfer operators take the
production sound recordings and transfer them to an editable format, such as a digital audio workstation. Sound
editors pick and place sound, drawing on production
sound, sound effects libraries, and specially recorded
effects, which are also all transferred to an editable format. From the edited soundtracks, various mixes are made
by rerecording mixers (called dubbing mixers in England).
Mixing may be accomplished in one or more steps, more
generations becoming necessary as the number of soundtracks cut increases to such an extent that all the tracks
cannot be handled at one time. The last stage of postproduction mixing prepares masters in a format compatible
with the delivery medium, such as optical sound on film,
or videotape.
THE TECHNICAL VERSUS THE AESTHETIC
Although it has a technical side, in the final analysis what
is most important for film and television sound is what the
listener hears, that is, what choices have been made
throughout production and postproduction by the filmmakers. Often, thoughts are heard from producers and
others such as, “Can’t you just improve the sound by
making it all digital?” In fact, this is a naive point of view,
because, for instance, what is more important to production sound is the microphone technique, rather than the
method of tape recording. Unwanted noise on the set is not
reduced by digital recording and often causes problems,
regardless of the method used to record the production sound.
When film sound started in the late 1920s, the processes to produce the soundtrack were very difficult. Camera movement was restricted by large housings holding
both the camera and the cameraman so that noise did not
intrude into the set. Optical soundtracks were recorded
simultaneously with the picture on a separate sound camera
and could not be played back until the film was processed
and printed and the print processed. Microphones were
insensitive, so actors had to speak loudly and clearly.
Silent-movie actors’ careers were on the line, as audiences discovered that many of them had foreign accents or high, squeaky voices.
Today, the technical impediments of early sound recording have been removed. Acting styles are much more
natural, with it more likely that an actor will “underplay” a
scene because of the intimacy of the camera than “overplay”
it. Yet the quality achieved in production sound is still
subject to such issues as whether the set has been made quiet
and whether the actor enunciates or mumbles his or her
lines. Many directors pass all problems in speech intelligibility to the sound “technician,” who, after all, is supposed
to be able to make a high-quality recording even if the
director can’t hear the actor on the set!
A CONFUSION WITH DIRECTING ACTORS
One confusion for actors is that the frame of reference for what
is left and what is right changes between theater and film. Actors
have had the notion of left and right beaten into them: it is
from their vantage point facing the audience, called stage left
and stage right. However, film and television employ the opposite convention. Called camera left and camera right, the point
of view is that of the camera. This confusion has slowed down
more than one production over the course of time, when the
director yells “Go left,” and the actors move camera right.
THE DIMENSIONS OF A SOUNDTRACK
The “dimensions” of a soundtrack may be broken down
for discussion into frequency range, dynamic range, the
spatial dimension, and the temporal dimension. A major
factor in the history of sound accompanying pictures is
the growth in the capabilities associated with these dimensions as time has gone by, and the profound influence this
growth has had on the aesthetics of motion-picture soundtracks. Whereas early sound films had a frequency range
capability (bandwidth) only about that of a telephone,
steady growth in this area has produced modern soundtrack capabilities well matched to the frequency range of
human hearing. Dynamic range capability improvements
have meant that both louder and softer sounds are capable
of being reproduced and heard without audible distortion
or masking. Stereophonic sound literally added new
dimensions to film soundtracks, first rather tentatively in
the 1950s with magnetic sound release prints and then
firmly with optical stereo prints in the 1970s, which have
had continued improvement ever since. Still, even the
monophonic movies of the 1930s benefited from one
spatial dimension: adding reverberation to soundtracks
helped place the actors in a scene and to differentiate
among narration, on-screen dialog, off-screen sound
effects, and music.
Chapter 1
Objective Sound
AN OLD STORY
A tree falls over in a wood. Does it make a sound? From
one point of view, the answer is that it must make a sound,
because the physical requirements for sound to exist
have been met. An alternate view is that without any consciousness to “hear” the sound, there in fact is no sound.
This dichotomy demonstrates the difference between this
chapter and the next. A physicist has a ready answer—of
course, there is a great crashing noise. On the other hand,
a humanist philosopher thinks consciousness may well be
required for there to be a sound.
The dichotomy observed is that between objective
physical sound and subjective psychoacoustics. Any sound
undergoes two principal processes for us to “hear” it. First,
it is generated and the objective world acts on it, and then
that world is represented inside our minds by the processes
of hearing and perception. This chapter concentrates on
the first part of this process—the generation and propagation of physical sound—whereas Chapter 2 discusses how
the physical sound is represented inside our heads.
The reason the distinction between the objective and the
subjective parts of sound perception is so important is that in
finding cause and effect in sound, it is very important to know
the likely source of a problem: Is there a real physical problem
to be solved with a physical solution, or does the problem
require an adjustment to accommodate human listening?
Any given problem can have its roots in either domain and
is often best solved in its own domain. On the other hand,
there are times when we can provide a psychoacoustical solution to what is actually an acoustical problem.
An example of this is that often people think there is an
equipment solution to what is, in fact, an acoustical problem. A high background noise level of a location cannot
be solved with digital recording, for instance, although
some people give so much credit to digital recording that
they wonder whether this might not be true. “It’s digital,
so we won’t need to do any postproduction, right?” has
been asked naively of more than one sound mixer.
PROPERTIES OF PHYSICAL SOUND
There are several distinguishing characteristics of sound, partly arising from the nature of the source of the sound (is it big or small? does it radiate sound equally in all directions, or does it have a directional preference?) and partly from the prevailing conditions between the point of origin and the point of observation (is there any barrier or direct line of sight?). Sound propagates through a medium such as air at a specific speed and is acted on by the physical environment.

Propagation
Sound travels from one observation point to another by a means that is analogous to the way that waves ripple outward from where a stone has been dropped into a pond. Each molecule of the water interacts with the other molecules around it in a particular orderly way. A given volume of water receives energy from the direction of the source of the disturbance and passes it on to other water that is more distant from the source, causing a circular spreading of the wave. Unless the stone is large and the water splashes, the water molecules are disturbed only about their nominal positions, but eventually they occupy about the same position they had before the disturbance.

FIGURE 1.1 The waves resulting from a stone dropping into a pond radiate outward, as do sound waves from a point source in air, only in three dimensions, not two.

FIGURE 1.2 A waiter on the dance floor compresses dancers in front of him and leaves a rarefied space behind him.
Consider sound in air for a moment. It differs from
other air movement, like wind or drafts, by the fact that,
on the whole, the molecules in motion will return to
practically the same position they had before the disturbance. Although sound is molecules in motion, there is
no net motion of the air molecules, just a passing
disturbance.
Another way to look at how sound propagates from
point to point is to visualize it as a disturbance at a dance.
Let us say that we are looking down on a crowded dance
floor. With contemporary dancing, there isn’t much organization to the picture from above—the motion is random.
A waiter, carrying a large tray, enters the dance floor. The
dancers closest to the waiter have to move a lot to get out
of his way, and when they then start to bump into their
neighbors, the neighbors move away, etc. The disturbance
may be very small by the time it reaches the other side of
the dance floor, but the action of the waiter has disturbed
the whole crowd, more or less. If the waiter were to step
in, then out, of the crowd, people would first be compressed together and then be spread apart, perhaps farther
than they ever had been while dancing. The waiter in
effect leaves a vacuum behind, which people rush in to
fill. The two components of the disturbance are called
compression, when the crowd is forced together more
closely than normal, and rarefaction, when the spacing
between the people is more than it is normally.
The tines of a tuning fork work like the waiter on the
dance floor, only the dancers are replaced by the air
molecules around the tuning fork. As the tines move
away from the center of the fork, they compress the outside air molecules, and as they reverse direction and
move toward one another, the air becomes rarefied
(Fig. 1.3). Continuous, cyclical compression and rarefaction form the steady tone that is the recognized sound of
a tuning fork.
FIGURE 1.3 The tines of a tuning fork oscillate back and forth, causing the nearby air to be alternately rarefied and compressed.

Our analogy to water ripples can be carried further. In a large, flat pond, the height of the waves gets smaller as we go farther from the origin, because the same amount of energy is spread out over a larger area. Sound is like this, too, only the process is three-dimensional, so that by spreading out over an expanding surface, like blowing up a balloon, the energy farther from the source is even less. The “law” or rule that describes the amplitude of the sound waves falling off with distance is called the inverse square law. This law states that when the distance to a sound source doubles, the size of the disturbance diminishes to one-quarter of its original size:

    strength of sound at a distant point = original strength / distance²
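The inverse square law can be checked with a short numeric sketch. This example is illustrative only, not from the book, and `relative_level` is a hypothetical helper name:

```python
def relative_level(distance, reference_distance=1.0):
    """Relative strength of sound from an ideal point source,
    per the inverse square law: strength falls off as 1/distance**2."""
    return (reference_distance / distance) ** 2

# Doubling the distance leaves one-quarter of the original strength:
print(relative_level(2.0))            # 0.25
# Ten times the distance leaves about one-hundredth:
print(round(relative_level(10.0), 4))
```

As the text notes, real sources and rooms deviate from this ideal point-source model, so such figures are upper bounds on the fall-off rather than predictions for an actual location.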
Track 4 of the DVD that accompanies this book illustrates the inverse square law effect of level versus
distance.
The inverse square law describes the fall-off of sound
energy from a point source, radiating into free space.
A point source is a source that is infinitesimal and shows
no directional preference.
In actual cases, most sources occupy some area and
may show a directional preference, and most environments contain one or more reflecting surfaces, which conspire to make the real world more complex than this
simple model.
One of the main deviations from this model comes when a
source is large, say, a newspaper press. Then it is difficult to
get far enough away to consider this to be a point source, so
the falloff with distance will typically be less than expected. This
causes problems for narrative filmmakers trying to work in a
press room, because not only is the press noisy, but the falloff
of the noise with distance is small.
Another example is an explosion occurring in a mine shaft.
Within the shaft, sound will not fall off according to the inverse
square law because the walls prevent the sound pressure from
spreading. Therefore, even if the explosion occurs a great distance away, its sound can be nearly as loud as it is near its
source, and quite dangerous to the documentary film crew
members who think that they are sufficiently far away from
the blast to avoid danger.
The water analogy we used earlier falls apart when we
get more specific. Ripples in water are perpendicular to
the direction of propagation—that is, ripples in a pond
are up and down. These are called transverse waves. Sound
waves, on the other hand, are longitudinal; that is, they are
in the direction of travel of the wave. Visualize a balloon blowing up with a constant rate of inflation equal to the speed of sound, while its surface is oscillating in and out, and you have a good view of sound propagation.

FIGURE 1.4 Sound is the organized pressure changes above and below ambient pressure caused by a stimulating vibration, such as a loudspeaker.
A Medium Is Required
Sound requires a medium. There is no medium in the
vacuum of outer space, as Boyle discovered in 1660 by
putting an alarm clock suspended by a string inside a
well-sealed glass jar. When the air was pumped out of
the jar to cause a vacuum and the alarm clock went off,
there was no sound; but when air was let back in, sound
was heard. This makes sense in light of our earlier discussion of propagation: If the waiter doesn’t have anything to disturb on the dance floor, he can hardly
propagate a disturbance.
For physicists, the famous opening scene of Star Wars
makes no sense, with the rumble from its spaceships arriving over one’s head, first of the little ship and then the
massive Star Destroyer. No doubt the rumble is effective,
but it is certain that somewhere a physicist groaned about
how little filmmakers understand about nature. Here is an
example of where the limitations of physics must succumb
to storytelling. Note that although radio signals and light
also use wave principles for propagation, no medium is
required: These electromagnetic waves travel through a
vacuum unimpeded.
Speed of Sound
The speed of sound propagation is quite finite. Although
it is far faster than the speed of waves in water caused by
a stone dropping, it is still incredibly slower than the
speed of light. You can easily observe the difference
between light and sound speed in many daily activities.
Watch someone hammer a nail or kick a soccer ball at
a distance, and you can easily see and hear that the sound
comes later than the light—reality is out of sync! Filmmakers deal with this problem all the time, often forcing
into sync sounds that in reality would be late in time.
This is another example of film reality being different
from actual reality. Perhaps because of all of the training that we have received subliminally by watching
thousands of hours of film and television, reality for
viewers has been modified: Sound should be in hard sync
with the picture, deliberately neglecting the effects of the
speed of sound, unless a story point sets up in advance
the disparity between the arrival times for light and
sound.
The speed of sound is dependent on the medium in which
the sound occurs. Sound travels faster in denser media, and
so it is faster in water than in air and faster in steel than in
water. The black-hatted cowboy puts his ear to the rail to
hear the train coming sooner than he can through the air,
partly because of the faster speed of sound in the material
and partly because the rail “contains” it, with only a little
sound escaping to the air per unit of length.
Sound travels 1130 ft/sec in air at room temperature.
This is equal to about 47 ft of travel per frame of film at
24 frames per second. Unfortunately, viewers are very
good at seeing whether the sound is in sync with the picture. Practically everyone is able to tell if the sync is off
by two frames, and many viewers are able to notice when
the sound is one frame out of sync!
Because sound is so slow relative to light, it is conventional lab
practice to pull up the sound on motion picture analog release
prints one extra frame, printing the sound “early,” and thus producing exact picture–sound sync 47 ft from the screen. (Picture
and sound are also displaced on prints for other reasons; the one
frame is added to the other requirements.) In very large houses,
such as Radio City Music Hall or the Hollywood Bowl, it is common practice to pull up the sound even more, putting it into sync
at the center of the space. Still, the sound is quite noticeably out of
sync in many seats, being too early for the close seats and too late
for the distant ones. Luckily, this problem is mostly noticeable
today only in those few cases in which the auditoriums are much
larger than the average theater. Because of the one-frame pull-up
built into all prints, for a listener to be two frames out of sync, the
listener would have to be three frames away from the screen, or
about 150 ft. The Hollywood Bowl measures 400 ft from the stage
to the back row, so the sync problems there are quite large and are
made tolerable only by the small size of lips when seen from such a
large distance (see Figure 1.5). The speed of sound is fairly strongly
influenced by temperature (see speed of sound in the Glossary), so
calculations of it yield different results in the morning compared
to a warm afternoon, when the speed of sound is faster.
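The pull-up arithmetic above can be sketched as a small calculation. This is an illustrative sketch, not from the book; it assumes only the figures quoted in this chapter (1130 ft/sec in air at room temperature, 24 frames per second, a one-frame pull-up), and `sync_error_frames` is a hypothetical helper name:

```python
SPEED_OF_SOUND_FT_PER_S = 1130  # in air at room temperature
FPS = 24                        # film frame rate
FT_PER_FRAME = SPEED_OF_SOUND_FT_PER_S / FPS  # about 47 ft of travel per frame

def sync_error_frames(distance_ft, pull_up_frames=1):
    """Frames by which sound arrives late (positive) or early (negative)
    at a listener distance_ft from the screen, given a print whose sound
    is pulled up early by pull_up_frames."""
    return distance_ft / FT_PER_FRAME - pull_up_frames

# The one-frame pull-up gives exact sync one frame of travel (about 47 ft) away:
print(round(sync_error_frames(FT_PER_FRAME), 1))   # 0.0
# About 150 ft out (three frames of travel), sound is roughly two frames late:
print(round(sync_error_frames(150), 1))            # 2.2
# At the back row of the Hollywood Bowl, 400 ft from the stage:
print(round(sync_error_frames(400), 1))            # 7.5
```

The 150-ft result matches the text's rule of thumb that a listener three frames of travel from the screen ends up about two frames out of sync once the built-in pull-up is counted.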
Amplitude
The “size” of a sound wave is known by many names: volume, level, amplitude, loudness, sound pressure, velocity,
or intensity. In professional sound circles, this quantity is
usually given the name level. A director says to a mixer
“turn up the level,” not “turn up the volume,” if he or
she wants to be taken seriously.
The size of a sound disturbance can be measured in a
number of ways. In the case of water ripples, we could
put a ruler into the pond, perpendicular to the surface,
and note how large the waves are, from their peak to their
trough, as first one and then the other passes by the ruler.
This measurement is one of amplitude.
When reading the amplitude of a wave, it is customary to call the
measurement just defined “peak-to-peak amplitude,” although
what is meant is the vertical distance from the peak to the trough.
Confusion occurs when trying to decide which dimension to measure. If asked to measure the peak-to-peak amplitude of a wave,
you might think that you should measure from one peak to the
next peak occurring along the length of the wave, but that would
give you the wavelength measurement, which is discussed in the
next section, not the peak-to-peak amplitude.
In sound, because it is more easily measured than
amplitude directly, what is actually measured is sound
pressure level, often abbreviated SPL. Sound pressure is
the relative change above and below atmospheric pressure
caused by the presence of the sound.
Atmospheric pressure is the same as barometric pressure as read on a barometer. It is a measure of the force
exerted on an object in a room by the “weight” of the
atmosphere above it, about 15 lb/in². The atmosphere
exerts a steady force measured in pounds per square inch
on everything. Sound pressure adds to (compression) and
subtracts from (rarefaction) the static atmospheric pressure, moment by moment (see Figure 1.6). The changes
caused by sound pressure during compression and rarefaction are usually quite small compared with barometric
pressure, but they can nonetheless be readily measured
with microphones.
Although measuring sound pressure is by far the most common
method of measurement, alternative techniques that may yield
additional information are available. For our purposes, we can
say that all measures of size of the waveform—including amplitude, sound pressure level, sound velocity, and sound intensity—
are members of the same family, and so we will henceforth
use sound pressure level as the measure because it is the most
commonly used.
Sound intensity, in particular, provides more information
than sound pressure because it is a more complex measure,
containing information about both the amplitude of the wave
and its direction of propagation. Thus, sound intensity measurements are very useful for finding the source of a noise. Sound
intensity measures are rarely used in the film and television
industries, though, because of the complexity and cost of
instrumentation.
FIGURE 1.5 The Hollywood Bowl is over 400 ft long, and sound from the front of the house is significantly out of sync by the time it reaches the back.

Wavelength and Frequency
Wavelength
Another measure of water waves or sound waves we have
yet to discuss is the distance from one peak, past one
trough, to the next peak along the length, called the wavelength. Note that wavelength is perpendicular to the
Compression above
barometric pressure
Amplitude
Barometric pressure
in the absence of
sound
Rarefaction below
barometric pressure
Time
FIGURE 1.6 Sound pressure adds to (compression) and subtracts from
(rarefaction) the static atmospheric pressure.
amplitude dimension, so the two have little or nothing to
do with each other. One can have a small, long wave or
a large, short one (a tsunami!). The range of wavelengths
of audible sound is extremely large, spanning from 56 ft
(17 m) to 3/4 inch (1.9 cm) in air.
Notice how our purist discussion of objective sound has already
been circumscribed by psychoacoustics. We just mentioned the
audible frequency range, but what about the inaudible parts?
Wavelengths longer than about 56 ft or shorter than about 3/4 inch still result in sound, but they are inaudible and will be covered later. The wavelength range for visible light, another wave
phenomenon, covers less than a 2:1 ratio of wavelengths from
the shortest to the longest visible wavelength, representing
the spectrum from blue through red. Compared to this, the audible sound range of 1000:1 is truly impressive.
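The reciprocal relation between wavelength and frequency (f = c/λ) can be sketched numerically. This is an illustrative example, not from the book; it assumes the 1130 ft/sec speed of sound in air quoted in this chapter, and `wavelength_ft` is a hypothetical helper name:

```python
SPEED_OF_SOUND_FT_PER_S = 1130.0  # in air at room temperature

def wavelength_ft(frequency_hz):
    """Wavelength in feet of a tone in air: lambda = c / f."""
    return SPEED_OF_SOUND_FT_PER_S / frequency_hz

# The audible extremes quoted in the text:
print(round(wavelength_ft(20), 1))           # 56.5 -- about 56 ft at 20 Hz
print(round(wavelength_ft(20_000) * 12, 2))  # 0.68 -- roughly 3/4 inch at 20 kHz
```

Note that the speed of sound and the wavelength must be in matching units; here both are in feet, with the final result converted to inches.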
Track 5 of the DVD contains tones at 100 Hz, 1 kHz, and 10 kHz, having wavelengths in air of 11 ft 3 inches (3.4 m), 13 1/2 inches (34.4 cm), and 1 3/8 inches (34.4 mm), respectively.

Frequency
Wavelength is directly related to one of the most important concepts in sound, frequency. Frequency is the number of occurrences of a wave per second. The unit for frequency is hertz (abbreviated Hz, also used with the “k” prefix for “kilo” to indicate thousands of hertz: 20 kHz is shorthand for 20,000 Hz).

Wavelength and frequency are related reciprocally to the speed of sound, such that as the wavelength gets shorter, the frequency gets higher. The frequency is equal to the speed of sound divided by the wavelength:

    f = c / λ

where f is the frequency in Hz (cycles per second), c is the speed of sound in the medium, and λ is the wavelength. Note that the units for the speed of sound and the wavelength may be metric or English, but must match each other.

Thus the frequency range that corresponds to the wavelength range given earlier is from 20 Hz to 20 kHz, which is generally considered to be the audible frequency range. Within this range exists the complete expressive capability that we know as sound. The frequency range in which a sound primarily lies has a strong storytelling impact. For example, in the natural world low frequencies are associated with storms (distant thunder), earthquakes, and other natural catastrophes.

When used in film, low-frequency rumble often denotes that a threat is present. This idea extends from sound effects to music. An example is the theme music for the shark in Jaws. Those four low notes that begin the shark theme indicate that danger lurks on an otherwise pleasant day on the ocean. Alternatively, the quiet, high-frequency sound of a corn field rustling in Field of Dreams lets us know that we can be at peace, that there is no threat, despite its connection to another world.

FIGURE 1.7 Wavelength and frequency of sound in air over the audible frequency range.

Infrasonic Frequencies
The frequency region below about 20 Hz is called the infrasonic (or, more old-fashioned, the subsonic) range, although the lowest note on the largest pipe organs corresponds to a frequency of 16 Hz, and this is still usually