THURSDAY MORNING, 25 OCTOBER 2012
COLONIAL, 8:55 A.M. TO 10:00 A.M.
Session 4aAAa
Architectural Acoustics: The Technical Committee on Architectural Acoustics Vern O. Knudsen
Distinguished Lecture
David Lubman, Cochair
DL Acoustics, 14301 Middletown Ln., Westminster, CA 92683-4514
William J. Cavanaugh, Cochair
Cavanaugh Tocci Assoc Inc, 327F Boston Post Rd., Sudbury, MA 01776
Chair’s Introduction—8:55
Invited Paper
9:00
4aAAa1. The consultant’s risk is an invitation to academia—An exploration of the greatest successes in a design career thus far,
and the research-based foundations that made them possible. Scott D. Pfeiffer (Threshold Acoustics LLC, 53 West Jackson Blvd.,
Suite 815, Chicago, IL 60604, [email protected])
The path of discovery is common to both the academic and the consultant. The point of departure for the consultant is the decision
that must be made on limited information. The resulting void in knowledge often invites research within the narrow scope necessary
for academic certainty. Exploration of some past successes provides a roadmap for a closer relationship between academia and the consulting community. There are seminal papers that are universally used in bracketing design decisions. The strength of these is in their
certainty, and the certainty comes from clear assumptions and limitations to the conditions of the studies. Too often, the consulting world
levies criticism against ivory tower academia for these very limitations, without recognizing and respecting the power in concrete baby
steps forward. Students are likewise ill-equipped to spend their energies designing concert halls, or full projects. It is precisely the accumulated experience of consulting and collaborating with architects, engineers of all kinds, and owners that allows for confidence when
leaping into the gap between judgment and certainty. In honor of Knudsen’s contributions to scientific exploration and education, we
will dedicate ourselves to the betterment of our profession through real connections between academia and the consulting community.
THURSDAY MORNING, 25 OCTOBER 2012
COLONIAL BALLROOM, 10:30 A.M. TO 11:35 A.M.
Session 4aAAb
Architectural Acoustics, Noise, and ASA Committee on Standards: Acoustics and Health
David M. Sykes, Cochair
The Remington Group LP, 23 Buckingham St., Cambridge, MA 02138
Ning Xiang, Cochair
School of Architecture, Rensselaer Polytechnic Institute, Troy, NY 12180
Invited Paper
10:30
4aAAb1. Hospital noise and staff performance. Gabriel Messingher, Erica Ryherd (Mechanical Engineering, Georgia Institute of
Technology, Atlanta, GA 30332-0405, [email protected]), and Jeremy Ackerman (Emergency Medicine, Emory University,
Atlanta, GA)
Hospitals are often noisy and not conducive to staff performance. Indeed, many staff believe that noise negatively affects their professional performance, quality of work, and ability to concentrate and communicate. Research shows that increased stress and annoyance, increased rates of burnout, and reduced occupational health are a few of the possible effects of hospital noise on staff. However,
only a few hospital studies have directly linked noise to job performance. Results show that noise and distractions can potentially
deteriorate mental efficiency and short-term memory and increase errors, but other studies have shown no significant effects. Alarm fatigue is also of concern, as staff may tune out, silence, or disable alarms because they are desensitized or exhausted by them. This paper
will discuss what is currently known about hospital noise and staff performance and what questions remain. Ongoing studies relating
the sound environment to staff performance in medical simulations will also be highlighted.
J. Acoust. Soc. Am., Vol. 132, No. 3, Pt. 2, September 2012
164th Meeting: Acoustical Society of America
Contributed Papers
10:50
4aAAb2. Patient and staff perceptions of hospital noise. Nicola J. Shiers,
Bridget M. Shield (Urban Engineering, London South Bank University,
Borough Road, London SE1 7JQ, United Kingdom, [email protected]),
and Rosemary E. Glanville (Medical Architecture Research Unit, London
South Bank University, London, United Kingdom)
A large scale survey of noise and acoustic conditions in a range of inpatient hospital wards has been undertaken in two major hospitals in the UK.
The survey involved noise and acoustic surveys of occupied hospital wards,
identification of noise sources and questionnaire surveys of nursing staff
and patients. The surveys were carried out across a range of ward types, including surgical and medical wards, and a range of
ward sizes. In total, 25 patient
bays were measured, varying in size from single rooms to large bays containing 12 beds. Questionnaire responses were received from 66 staff and
154 patients in the two hospitals. This paper will present the results of the
questionnaire surveys relating to noise annoyance and disturbance among
staff and patients. Factors which affect perceptions of noise will be examined including personal factors such as age, sex, and length of time working/staying in the hospital. The sources of noise which cause the most
disturbance to staff and patients will also be discussed.
11:05
4aAAb3. A different perspective on the ongoing noise problem in U.S.
hospitals: Lessons learned from existing acute care facilities and their
patients’ quiet-at-night scores. Gary Madaras (Making Hospitals Quiet,
4849 S. Austin Ave., Chicago, IL 60638, [email protected])
Acute care hospitals that care for Medicare patients now participate in
the Hospital Consumer Assessment of Healthcare Providers and Systems
(HCAHPS) quality survey as part of The Hospital Value-Based Purchasing
program implemented by the Centers for Medicare and Medicaid Services
(CMS). One question on the 27-item survey asks inpatients to score “how
often was the area around your room quiet at night” as ‘always’, ‘usually’,
‘sometimes’ or ‘never’. Patients score the quietness question the lowest of
all the quality metrics, responding only 58% of the time that the area around
their room was always quiet at night (as compared to an average score of
72% for all other metrics). Results of the HCAHPS survey will affect market share and financial reimbursements from CMS. Hospitals are scrambling
to reduce noise levels and increase HCAHPS scores. A study was conducted, asking leaders of hospitals to share their noise reduction stories.
Leaders from 241 hospitals contributed their challenges, successes and lessons learned. This presentation will share the findings including an in-depth
look at one of the participating hospitals. Further insight into the ongoing
noise problem in hospitals will be gained via HCAHPS scores analysis and
overnight noise audits recently conducted in existing hospitals.
11:20
4aAAb4. Designing quiet, healthy ductwork. Stephanie Ayers (Evonik
Foams, Inc., Allentown, TX) and Michael Chusid (Chusid Associates,
18623 Ventura Blvd. #212, Tarzana, CA 91356, [email protected])
Acoustical duct liners promote a healthier interior environment by suppressing mechanical noise from heating, ventilating, and air conditioning
(HVAC) systems. However the materials used to reduce or control noise
may, themselves, have health implications. Fibrous acoustical insulation, for
example, can release fibers into the air stream during installation or maintenance and when subjected to high velocity air or vibration. Recent studies
have determined that glass fiber - the most prevalent duct liner material - should not be listed as a carcinogen. However, glass fiber is an acknowledged
irritant. Moreover, long-term effects on sensitive populations - including children and individuals with compromised immune systems - have not been
studied. Fibrous insulation can collect dust, thereby providing a site for
mold and microbial growth. And dislodged particles can disturb sensitive
electronics and clean room conditions. Some owners of facilities such as hospitals, schools, and laboratories have, therefore, prohibited use of fibrous
acoustical liners in ductwork. This paper discusses the application of acoustical duct liners, and the performance and use of alternatives to glass fiber in
situations where non-fibrous liners are required.
THURSDAY MORNING, 25 OCTOBER 2012
JULIA LEE A/B, 8:50 A.M. TO 12:00 NOON
Session 4aABa
Animal Bioacoustics, Acoustical Oceanography, Structural Acoustics and Vibration,
Underwater Acoustics, and ASA Committee on Standards: Underwater Noise from Pile Driving I
Mardi C. Hastings, Cochair
George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0405
Martin Siderius, Cochair
ECE Dept., Portland State Univ., Portland, OR 97201
Chair’s Introduction—8:50
Invited Papers
9:00
4aABa1. Experience measuring underwater sounds from piling activities. James A. Reyff (Illingworth & Rodkin, Inc., 505 Petaluma
Blvd. South, Petaluma, CA 94952, [email protected])
Extensive acoustic monitoring of pile driving activities along the U.S. West Coast has occurred in recent years in response to concerns
regarding the effects to aquatic species. Impact pile driving activities have been found to produce high amplitude sounds that have injured
fish and harassed marine mammals. As a result, measures to reduce sounds, restrict the amount of pile driving, and monitor effects on
the environment have been required. The monitoring requirements vary for each project depending on the strength of the sound sources
and potential presence of sensitive aquatic species. This presentation describes our experiences measuring acoustic signals from pile driving
activities for various construction projects. Some results from testing sound attenuation devices are also presented. The challenges associated with monitoring these sounds are described, which include the complexities of measuring highly dynamic sounds in an environment
with varying background levels. This presentation also describes the analysis methods used to describe pile driving sounds and how they
are used to assess potential impacts to aquatic species. Methods for reporting results on a real-time or daily basis are also described.
9:20
4aABa2. Underwater radiated noise and impact assessment of marine piling operations during offshore windfarm construction.
Paul A. Lepper (School of Electronic, Electrical and Systems Engineering, Loughborough University, Loughborough LE113TU, United
Kingdom, [email protected]), Stephen P. Robinson, and Pete D. Theobald (National Physical Laboratory (NPL), Teddington,
Middlesex, United Kingdom)
In UK waters, numerous large-scale offshore wind farm developments have been constructed, typically using large hollow steel monopile foundations with pile diameters varying from a few meters to greater than 6 m and lengths of 60-80 m. Piles may be driven 20-30 m into the seabed in water depths from a few meters to greater than 30 m. Typically, percussive piling construction operations are used
with many thousands of individual strikes over periods of several hours resulting in repetitive high amplitude impulsive sound within the
water column that has potential for impact on marine life. Data are presented for in-situ measurements made during installation of a range of
monopile diameters used on offshore wind farms. Full piling sequences were recorded at fixed ranges using fixed autonomous data loggers and by sampled range-dependent boat-based measurements. Simultaneous recordings were made at multiple ranges varying from tens of meters to
tens of kilometers. Data are analyzed in terms of received levels and spectral and temporal components. Using range-dependent propagation
loss modeling, equivalent monopole source levels are estimated. Level dependence on range, hammer energy, etc., is discussed. A Monte
Carlo approach is used to obtain total cumulative sound exposure level (SEL) risk for scenarios ranging from a single foundation to whole-windfarm construction.
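Cumulative SEL across many strikes combines energy, not decibels: SEL_cum = 10 log10 Σ 10^(SEL_i/10). A minimal Monte Carlo sketch of this accumulation (the distribution, strike count, and all numbers below are illustrative assumptions, not values from the study):

```python
import math
import random

def cumulative_sel(strike_sels):
    """Energy-sum individual per-strike SELs (dB) into a cumulative SEL."""
    return 10.0 * math.log10(sum(10.0 ** (s / 10.0) for s in strike_sels))

def monte_carlo_sel(n_strikes, sel_sampler, n_trials=1000, seed=0):
    """Monte Carlo distribution of cumulative SEL: each trial draws per-strike
    SELs from sel_sampler, capturing variability in hammer energy, range, etc."""
    rng = random.Random(seed)
    return [cumulative_sel(sel_sampler(rng) for _ in range(n_strikes))
            for _ in range(n_trials)]

# Example: 3000 strikes, per-strike SEL ~ N(160, 3) dB (illustrative only)
trials = monte_carlo_sel(3000, lambda rng: rng.gauss(160.0, 3.0))
median_sel = sorted(trials)[len(trials) // 2]
```

For identical strikes the formula reduces to SEL + 10 log10(N), so 3000 equal strikes add about 34.8 dB to the single-strike SEL.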
9:40
4aABa3. On the Mach wave effect in impact pile driving, its observation, and its influence on transmission loss. Peter H. Dahl
(Applied Physics Laboratory, Mechanical Engineering, University of Washington, 1013 NE 40th St, Seattle, WA 98105, [email protected]
washington.edu) and Per G. Reinhall (Mechanical Engineering, University of Washington, Seattle, WA)
Pile driving in water produces extremely high sound pressure levels in the surrounding underwater environment of order 10 kPa at
ranges of order 10 m from the pile that can result in deleterious effects on both fish and marine mammals. In Reinhall and Dahl [J. Acoust.
Soc. Am. 130, 1209-1216, Sep. 2011] it is shown that the dominant underwater noise from impact driving is from the Mach wave associated with the radial expansion of the pile that propagates down the pile at speeds in excess of Mach 3 with respect to the underwater sound
speed. In this talk we focus on observations of the Mach wave effect made with a 5.6 m-length vertical line array, at ranges 8-15 m in
waters of depth ~12.5 m. The key observation is the dominant vertical arrival angle associated with the Mach wave, ~17 deg., but other
observations include: its frequency dependence, the ratio of purely waterborne energy compared with that which emerges from the sediment, and results of a mode filtering operation which also points to the same dominant angle. Finally, these observations suggest a model
for transmission loss which will also be discussed. [Research supported by the Washington State Department of Transportation.]
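The quoted numbers are mutually consistent: a disturbance traveling down the pile at supersonic speed radiates a wavefront at a grazing angle given by sin(phi) = c_water / c_pile. A quick check, using nominal sound speeds that are assumptions here (the abstract does not state them):

```python
import math

c_water = 1485.0  # m/s, nominal seawater sound speed (assumption)
c_pile = 4900.0   # m/s, axial wave speed in a steel pile (assumption)

# Mach-wave angle from the horizontal and the corresponding Mach number
phi_deg = math.degrees(math.asin(c_water / c_pile))
mach_number = c_pile / c_water
```

This yields an angle of roughly 17-18 degrees and a Mach number above 3, matching the dominant vertical arrival angle and pile wave speed reported in the abstract.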
10:00–10:20 Break
10:20
4aABa4. Attenuation of pile driving noise using a double walled sound shield. Per G. Reinhall (Mechanical Engineering, University
of Washington, MS 352600, Seattle, WA 98125, [email protected]) and Peter H. Dahl (Applied Physics Laboratory, University of Washington, Seattle, WA)
Pile driving in water produces high sound levels in underwater environments. The associated pressures are known to produce deleterious effects on both fish and marine mammals. We present an evaluation of the effectiveness of surrounding the pile with a double
walled sound shield to decrease impact pile driving noise. Four 32 m long, 76 cm diameter piles were driven 14 m into the sediment
with a vibratory hammer. A double walled sound shield was then installed around the pile, and the pile was impact driven another 3 m
while sound measurements were obtained. The last 0.3 m was driven with the sound shield removed, and data were collected for the
untreated pile. The sound field obtained by finite element analysis is shown to agree well with measured data. The effectiveness of the
sound shield is found to be limited by the fact that an upward moving Mach wave is produced in the sediment after the first reflection of
the deformation wave against the bottom end of the pile. The sound reduction obtained through the use of the sound shield, as measured
10 m away from the pile, is shown to be approximately 12 dB.
10:40
4aABa5. Transient analysis of sound radiated from a partially submerged cylindrical pile under impact. Shima Shahab and Mardi
C. Hastings (George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0405,
[email protected])
Underwater noise generated by impact pile driving has potentially harmful effects on aquatic animals and their environment. In an effort
to predict sound radiation from piling activities, a structural acoustics finite-difference, time-domain (FDTD) model has been developed
for transient analysis of a partially submerged cylindrical pile. Three coupled partial differential equations govern vibration of the pile
wall and six partial differential equations govern its boundary conditions. The space-time gridding underlying the numerical computations
controls selection of an appropriate time step while the physical geometry of the pile imposes an upper limit on the frequency bandwidth
of wall oscillations and radiated sound. This bandwidth is inversely proportional to diameter for a cylindrical steel pile. The higher the frequency content in the dynamic response, the smaller the time step required in a transient analysis. So as the diameter of the pile decreases,
smaller time steps are required to capture the total bandwidth observed in field data. Results of correlations between radiated sound predicted by the FDTD model and acoustic field data from piles of different diameter are presented. [Work supported by the Georgia Institute
of Technology and Oregon Department of Transportation through a subcontract from Portland State University.]
11:00
4aABa6. Modeling underwater impact pile driving noise in shallow, inhomogeneous channels. Nathan D. Laws, Lisa Zurk, Scott
Schecklman, and Martin Siderius (Electrical and Computer Engineering, Portland State University, Portland, OR 97210-3038, laws.
[email protected])
The broadband synthesis of a parabolic equation (PE) propagation model for shallow water acoustic propagation in inhomogeneous
channels is presented to account for the noise produced by impact pile driving. The PE model utilizes sediment information obtained
from boring measurements and detailed bathymetry to model range dependent propagation in the Columbia River between Portland, OR
and Vancouver, WA. The impact pile driving source is modeled in two ways: first, as a reverberating impulse source that emits Machwave radiation [Reinhall, Dahl, J. Acoust. Soc. Am. 130, 1209 (2011)]; and second, with a structural acoustic finite-difference time-domain (FDTD) model [Shahab, Woolfe, and Hastings, J. Acoust. Soc. Am. 130, 2558 (2011)]. Model results using both source models
are shown to be in good agreement with acoustic measurements of test pile operations in the Columbia River at multiple locations from
10 to 800 meters from the pile driving source. Implications for noise levels in river systems with varying bottom sediment characteristics
are presented and discussed. [This research is supported with funding from the Oregon Department of Transportation.]
11:20
4aABa7. Underwater sound from pile driving and protected marine species issues. Amy R. Scholik-Schlomer (Office of Protected
Resources, NOAA’s National Marine Fisheries Service, 1315 East-West Hwy, SSMC3, Room 13605, Silver Spring, MD 20910, [email protected]) and Jason Gedamke (Office of Science and Technology, NOAA’s National Marine Fisheries Service, Silver Spring, MD)
With current, wide-spread coastal construction projects and the predicted development of offshore wind energy, there are concerns regarding the potential impacts of underwater sound associated with pile driving activities on protected marine species. The National Marine Fisheries Service (NMFS) works to conserve, protect, and recover a variety of marine species, including marine mammals, marine and
anadromous fishes, and sea turtles, protected under the Marine Mammal Protection Act (MMPA) and/or Endangered Species Act (ESA). In
order to make management decisions for these protected species, we rely on scientific data to inform our policy. However, there are many
challenges, including determining appropriate acoustic criteria and metrics for injury and behavioral harassment for impact and vibratory pile
driving activities; understanding acoustic propagation in complex environments, especially shallow, coastal areas and throughout sediments;
establishing appropriate protocols to mitigate and monitor impacts; and managing uncertainty for the broad number of species under our jurisdiction, who use and depend on sound (pressure and particle motion) in a variety of ways. Thus, we work collaboratively with other federal,
state, and local government agencies, academia, nongovernmental agencies, and industry to best assess and manage risk from these activities.
11:40
4aABa8. Barotrauma effects on fishes in response to impulsive pile driving stimuli. Brandon M. Casper (Department of Biology,
University of Maryland, College Park, MD 20742, [email protected]), Michele B. Halvorsen, Thomas J. Carlson (Marine Sciences Laboratory, Pacific Northwest National Laboratory, Sequim, WA), and Arthur N. Popper (Department of Biology, University of Maryland,
College Park, MD 20742)
We report on new results from controlled exposure studies of impulsive pile driving stimuli using the High Intensity Controlled Impedance Fluid Filled Wave Tube (HICI-FT). Following upon initial investigations focusing on injury thresholds and recovery from injuries in the Chinook salmon, experiments have been expanded to include lake sturgeon, Nile tilapia, hybrid striped bass, and hogchoker.
Several key questions concerning pile driving exposure in fishes have been explored utilizing species with different types of swim bladders as well as a species without a swim bladder. Injury thresholds were evaluated in all species, with recovery from injuries measured
in the hybrid striped bass. Other pile driving variables measured with the hybrid striped bass include difference in response between fish
less than or greater than 2g as well as the minimum number of pile strikes needed for injuries to appear. A study to evaluate potential
damage to inner ear hair cells was also conducted on hybrid striped bass as well as the Nile tilapia. These studies will be utilized to better
understand, and possibly predict, the potential effects of pile driving exposure in fishes.
THURSDAY MORNING, 25 OCTOBER 2012
LESTER YOUNG A, 8:55 A.M. TO 11:40 A.M.
Session 4aABb
Animal Bioacoustics: Terrestrial Passive Acoustic Monitoring I
David Delaney, Chair
U.S. Army CERL, Champaign, IL 61821
Chair’s Introduction—8:55
Invited Papers
9:00
4aABb1. Acoustical monitoring of resource conditions in U.S. National Parks. Kurt M. Fristrup, Emma Lynch, and Damon Joyce
(Natural Sounds and Night Skies Division, National Park Service, 1201 Oakridge Drive, Suite 100, Fort Collins, CO 80525,
[email protected])
Several laws and derived policy direct the National Park Service to conserve and restore acoustic resources unimpaired for the enjoyment of future generations. The Natural Sounds and Night Skies Division has collected acoustical and related meteorological data at
more than 300 sites in over 60 park units spanning the coterminous U. S., with additional sites in Alaska, Hawaii, and American Samoa.
Analyses of these data reveal that background sound levels in many park units approach or fall below the human threshold of hearing,
and that noise intrusions are ubiquitous. An emergent challenge is to develop efficient tools to reprocess these data to document bioacoustical activity. Generic indices of wildlife activity would be useful for examining responses to climate change, other anthropogenic
disturbance, and changes in park unit management. Documentation of individual species occupancy and calling density would inform
identification of habitat characteristics and management of species of special concern.
9:20
4aABb2. Sensor arrays for automated acoustic monitoring of bird behavior and diversity. Charles Taylor (Ecology and Evolutionary Biology, UCLA, Los Angeles, CA 90064, [email protected])
There is growing interest in how to automate analysis of acoustic monitoring of bird vocalizations – especially for monitoring bird
behavior and biodiversity. I will review some of the main approaches to this problem and describe how this is being approached in our
laboratory. We break the problem down into event recognition, classification, localization, and analysis. These are not entirely independent. I will discuss some new approaches to these problems that seem to hold special promise.
9:40
4aABb3. Accurate localization over large areas with minimal arrays. Douglas L. Jones (Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 1308 W. Main St., Urbana, IL 61801, [email protected]), Aaron L. Jones (Sonistic, LLC,
Champaign, IL), and Rama Ratnam (Biology, University of Texas at San Antonio, San Antonio, TX)
Passive terrestrial acoustic monitoring often requires accurate localization of acoustic sources over large areas. At least five microphones are required for unambiguous 3D relative-time-delay-based localization, but within what range can reasonable accuracy practically be obtained? An efficient new method for estimating localization error for any given array geometry and location allows rapid
exploration of the expected accuracy for any given array geometry and region. The accuracy is proportional to the standard deviation of
the relative-time-delay error and an array-geometry-and-source-location-specific term. Box-like arrays show high accuracy within the
box boundaries, but a quasi-linear “discrete helical” array also shows excellent planar localization performance, over large squarish
regions of the extent of the long axis of the array to either side of that axis, independent of the array length. Assuming a 1 kHz bandwidth
acoustic source with an approximate “Rayleigh” correlation error of 1 ms, the analysis shows the planar localization error to be less than
2 m within that region. Field tests with a 60-m five-element discrete helical array and several recorded bird and mammal calls closely
conformed to the analytical estimates and experimentally achieved sub-2-m accuracy within 60 m of the array.
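As a rough illustration of relative-time-delay localization with a five-element array (the geometry, sound speed, and noiseless delays below are assumptions for the sketch, not the authors' field configuration):

```python
import itertools
import math

C = 343.0  # assumed speed of sound in air, m/s

def tdoas(source, mics):
    """Relative time delays of arrival, referenced to the first microphone."""
    d = [math.dist(source, m) for m in mics]
    return [(di - d[0]) / C for di in d]

def locate(measured, mics, lo, hi, step):
    """Brute-force planar localization: return the grid point whose predicted
    relative time delays best match the measured ones (least squares)."""
    xs = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    best, best_err = None, float("inf")
    for x, y in itertools.product(xs, xs):
        pred = tdoas((x, y), mics)
        err = sum((p - m) ** 2 for p, m in zip(pred, measured))
        if err < best_err:
            best, best_err = (x, y), err
    return best

# Hypothetical quasi-linear 60-m array with small cross-axis offsets
mics = [(0.0, 0.0), (15.0, 2.0), (30.0, -2.0), (45.0, 2.0), (60.0, 0.0)]
true_source = (40.0, 25.0)
est = locate(tdoas(true_source, mics), mics, -100.0, 100.0, 1.0)
```

With noiseless delays the true grid point is recovered exactly; adding jitter comparable to the ~1 ms correlation error shows how accuracy degrades with distance from the array axis.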
10:00–10:20 Break
10:20
4aABb4. An auditory approach to understanding the effects of noise on communication in natural environments. Robert Dooling,
Sandra Blumenrath, Ed Smith, Ryan Simmons (Psychology, Univ of Maryland, Baltimore Ave, College Park, MD 20742, [email protected]
umd.edu), and Kurt Fristrup (Natural Sounds Program, National Park Service, Fort Collins, CO)
Animals, like humans, frequently communicate using long-range acoustic signals in networks of several individuals. In socially and
acoustically complex environments, however, communication is characterized by a variety of perceptual challenges that animals strive
to overcome in order to interact successfully with conspecifics. Species differences in auditory sensitivity and the characteristics of the
environment are major factors in predicting whether environmental noise limits communication between animals or interferes with
detection of other biologically important sounds. Working with both birds and humans and using both synthetic and natural noises in
both laboratory and field tests, we have developed a model for predicting the effects of particular masking noises on animal communication. Moreover, by comparing birds listening to bird vocalizations in noise with humans listening to speech in noise, we gain a novel intuitive feel for the challenges facing animals in noisy environments. This approach of considering communication from the standpoint
of the receiver provides a better approach for understanding the effects of anthropogenic noises that exceed ambient levels. For instance,
in determining risk to a particular species, effective communication distances derived from this model might be compared to other
aspects of the species' biology, such as territory size.
10:40
4aABb5. A wireless acoustic sensor network for monitoring wildlife in remote locations. Matthew W. McKown (Ecology and Evolutionary Biology, Center for Ocean Health, UC Santa Cruz, 100 Shaffer Rd., Santa Cruz, CA 95060, [email protected]), Martin
Lukac (Nexleaf Analytics, Los Angeles, CA), Abraham Borker, Bernie Tershy, and Don Croll (Ecology and Evolutionary Biology,
Center for Ocean Health, UC Santa Cruz, CA)
Seabirds are the most threatened marine group with nearly 28% of extant species considered at risk of extinction. Managers and
researchers face considerable financial and logistical challenges when designing programs to monitor the status of any of the 97 species
listed as critically endangered, endangered, or vulnerable by the IUCN. These challenges are exacerbated by the fact that these birds
breed in isolated/inaccessible locations, many have cryptic nest sites, and most return to colonies only at night. Acoustic sensors are an
effective tool for monitoring the presence, distribution, and relative abundance of rare and elusive seabirds. We have developed new,
cellphone-based wireless acoustic sensors that 1) are comparable to state-of-the-art sensors, 2) are affordable (~US$500.00 per hectare),
3) can sample continuously over months, 4) can telemeter data from remote locations via a cellular, microwave, or satellite link, and 5)
can be reprogrammed remotely. To date we have deployed our wireless acoustic sensor networks to monitor seabirds of conservation
concern, including Ashy Storm-petrel, Oceanodroma homochroa, on Southeast Farallon Island (CA), Tristram’s Storm-petrel, O. tristrami, on Tern Island (French Frigate Shoals), as well as Newell’s Shearwater, Puffinus newelli, and Hawaiian Petrel, Pterodroma sandwichensis, at the Upper Limahuli Preserve (Kaua’i, HI).
11:00
4aABb6. A template-based automatic bird phrase classification in noisy environments using limited training data. Kantapon
Kaewtip, Lance Williams, Lee N. Tan (Electrical Engineering, UCLA, Los Angeles, CA), George Kossan (Ecology and Evolutionary
Biology, UCLA, Los Angeles, CA), Abeer Alwan (Electrical Engineering, UCLA, Los Angeles, CA), and Charles Taylor (Ecology and
Evolutionary Biology, UCLA, Los Angeles, CA 90064, [email protected])
Bird songs typically comprise a sequence of smaller units, termed phrases, separated from one another by longer pauses; songs are
thought to assist in mate attraction and territory defense. Studies of bird song would often be helped by automated phrase classification.
Past classification studies usually employed techniques from speech recognition, such as MFCC feature extraction and HMMs. Problems
with these methods include degradation from background noise and the need for large amounts of training data. We present a novel
approach to robust bird phrase classification using template-based techniques. One (or more) template is assigned to each phrase with its
specific information, such as prominent time-frequency components. In our trials with 1022 phrases from Cassin’s Vireo (Vireo cassinii)
that had been hand-identified into 32 distinct classes, far fewer examples per class were required for training: in some cases only 1
to 4 examples sufficed for 84.95%-90.27% accuracy. The choice of distance metric was crucial for such systems. We found that weighted 2D
convolution is a robust distance metric for our task. We also studied phrase patterns using multi-dimensional scaling, a discriminative
feature for phrase patterns that are very similar.
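A minimal sketch of nearest-template classification over spectrogram patches; the weighted squared-distance score only approximates the abstract's weighted 2D convolution metric, and all arrays below are toy data, not Cassin's Vireo recordings:

```python
import numpy as np

def template_distance(spec, template, weights=None):
    """Slide a time-frequency template across a spectrogram and return the
    minimum weighted squared distance over all time offsets (lower = better)."""
    f, t = template.shape
    if weights is None:
        weights = np.ones_like(template)  # could emphasize prominent T-F components
    best = np.inf
    for off in range(spec.shape[1] - t + 1):
        patch = spec[:f, off:off + t]
        best = min(best, float(np.sum(weights * (patch - template) ** 2)))
    return best

def classify(spec, templates):
    """Nearest-template classification: label whose template is closest."""
    return min(templates, key=lambda lbl: template_distance(spec, templates[lbl]))
```

A single clean exemplar per class suffices in this toy setting, which is the appeal of template methods when labeled training data are scarce.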
11:20
4aABb7. Separating anthropogenic from natural sound in a park setting. John Gillette, Jeremy Kemball, and Paul Schomer
(Schomer and Associates, 2117 Robert Drive, Champaign, IL 61821, [email protected])
This paper is a continuation of a study with the National Park Service to detect and separate natural and anthropogenic sound in a
park setting. Last year, an algorithm was written to detect anthropogenic tones, because virtually all anthropogenic sound contains tones
below 1 kHz. By comparing each frequency to the surrounding third-octave band, the algorithm detects almost all anthropogenic
sounds. However, this method does not work for jet aircraft, because jets flying at altitude produce broadband sound rather than tones. This
year, an algorithm was developed to detect jet aircraft by comparing natural and anthropogenic sound over time rather than over frequency. The algorithm finds the average equivalent sound level (LEQ) over a day and then removes anomalous peaks from the original
sound recording. The program then subtracts the LEQ from the entire file, and the remaining sound is marked as probable jet aircraft
sound. If the sound is present for a long enough duration, it is recorded as a jet sound.
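The time-domain idea can be sketched as follows. This is a hypothetical simplification: the decibel margin, minimum duration, and percentile-based peak removal are illustrative choices, not the parameters of the actual algorithm.

```python
import numpy as np

def day_leq(levels_db, peak_percentile=95):
    """Energy-average (LEQ) of one-second levels after discarding
    anomalous peaks above the given percentile."""
    levels = np.asarray(levels_db, float)
    kept = levels[levels <= np.percentile(levels, peak_percentile)]
    return 10 * np.log10(np.mean(10 ** (kept / 10)))

def flag_jet_segments(levels_db, min_duration_s=20, margin_db=3):
    """Mark samples exceeding the day LEQ by `margin_db` for at least
    `min_duration_s` consecutive seconds as probable jet sound."""
    baseline = day_leq(levels_db)
    above = np.asarray(levels_db) > baseline + margin_db
    flags = np.zeros(len(above), bool)
    start = None
    for i, a in enumerate(np.append(above, False)):   # sentinel closes last run
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_duration_s:
                flags[start:i] = True
            start = None
    return flags

# Quiet 40-dB background with a 30-s, 50-dB overflight.
levels = [40.0] * 100 + [50.0] * 30 + [40.0] * 100
flags = flag_jet_segments(levels, min_duration_s=20, margin_db=3)
print(flags.sum())  # -> 30
```

The duration test is what separates a sustained broadband overflight from short transient peaks that the tonal detector would otherwise miss.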
J. Acoust. Soc. Am., Vol. 132, No. 3, Pt. 2, September 2012
164th Meeting: Acoustical Society of America
THURSDAY MORNING, 25 OCTOBER 2012
TRIANON B, 8:00 A.M. TO 11:45 A.M.
Session 4aBA
Biomedical Acoustics and Physical Acoustics: Cavitation in Biomedical and Physical Acoustics
Xinmai Yang, Chair
Mechanical Engineering, Univ. of Kansas, Lawrence, KS 66045
Contributed Papers
8:00
4aBA1. Effects of encapsulation damping on frequency dependent subharmonic threshold for contrast microbubbles. Amit Katiyar (Mechanical Engineering, University of Delaware, Newark, DE) and Kausik Sarkar
(Mechanical and Aerospace Engineering, George Washington University,
801 22nd Street NW, Washington, DC 20052, [email protected])
The frequency of minimum threshold for subharmonic generation from
contrast microbubbles is investigated here. Increased damping—either due
to the small radius or the encapsulation—is shown to shift the minimum
threshold away from twice the resonance frequency. Free bubbles as well as
four different models of the contrast agent encapsulation are investigated,
varying the surface dilatational viscosity. Encapsulation properties are
determined using measured attenuation data for a commercial contrast
agent. For sufficiently small damping, models predict two minima for the
threshold curve—one at twice the resonance frequency being lower than the
other at resonance frequency—in accord with the classical analytical result.
However, increased damping suppresses the bubble response more at twice the
resonance frequency than at resonance, leading to a flattening of the threshold curve
and a gradual shift of the absolute minimum from twice the resonance frequency toward the resonance frequency. The deviation from the classical
result stems from the fact that the perturbation analysis employed to obtain
it assumes small damping, not always applicable for contrast microbubbles
[Supported by NSF CBET-0651912, CBET-1033256, DMR-1005283.]
8:15
4aBA2. Pulse duration dependence of cavitation emissions and loss of
echogenicity from ultrasound contrast agents insonified by Doppler
pulses. Kirthi Radhakrishnan (Biomedical Engineering, University of
Cincinnati, 231 Albert Sabin Way, Cincinnati, OH 45267, [email protected]), Kevin J. Haworth (Internal Medicine, University of Cincinnati,
Cincinnati, OH), Jonathan A. Kopechek (Mechanical Engineering, Boston
University, Boston, MA), Bin Huang (Division of Biostatistics and Epidemiology, Children’s Hospital Medical Center, Cincinnati, OH), Shaoling
Huang, David D. McPherson (Internal Medicine, University of Texas Health
Science Center, Houston, TX), and Christy K. Holland (Internal Medicine,
University of Cincinnati, Cincinnati, OH)
Careful determination of stable and inertial cavitation thresholds of
UCAs exposed to pulsed ultrasound is required for their safe use in diagnosR were
tic and therapeutic applications. Echogenic liposomes and DefinityV
diluted in porcine plasma and pumped through a physiological flow phantom.
UCAs were insonified with pulsed Doppler ultrasound at three pulse durations (3.33 ms, 5.83 ms and 8.33 ms) over a range of peak rarefactional pressure amplitudes (0.06-1.9 MPa). A 10-MHz focused passive cavitation
detector (PCD) was used to record cavitation emissions. PCD signals and
B-mode images of UCAs and degassed water were acquired during insonation. Thresholds of stable and inertial cavitation, and loss of echogenicity
were determined by piecewise linear fits of the cavitation powers and mean
gray scale values, respectively. The stable cavitation thresholds were found
to be lower than the inertial cavitation thresholds at each pulse duration setting. The thresholds of loss of echogenicity and stable and inertial cavitation
were found to be dependent on pulse duration. The relationship between loss
of echogenicity and cavitation emissions will be discussed in the context of
using onscreen echogenicity to indirectly monitor cavitation during ultrasound-mediated therapy with UCAs. [Supported by NIH R01 HL059586.]
8:30
4aBA3. Echogenicity and release characteristics of folate-conjugated
echogenic liposomes for cytosolic delivery of cancer drugs. Shirshendu
Paul (Mechanical Engineering, University of Delaware, Newark, DE),
Rahul Nahire, Sanku Mallik (Pharmaceutical Sciences, North Dakota State
University, Fargo, ND), and Kausik Sarkar (Mechanical and Aerospace Engineering, George Washington University, 801 22nd Street NW, Washington, DC 20052, [email protected])
Echogenic liposomes (ELIPs) are specially prepared liposomes that encapsulate both aqueous and gaseous phases. The presence of gas makes
them echogenic. Since ELIPs retain all the favorable properties of normal
liposomes, they can be used for simultaneous ultrasonic imaging and drug
delivery applications. These liposomes are polymerized on the external leaflet using a disulphide linker. Disulphide bonds are reversibly broken in the presence of thiols above a critical concentration. Therefore, the liposomes are
stable in plasma (thiol concentration 10 µM) but release their contents
inside the cell (thiol concentration 10 mM). The liposomes also express a folate group on their surface, which allows entry into cancer cells. The
release can be controlled by diagnostic-frequency ultrasound. Therefore,
these ELIPs hold promise for ultrasound image-guided cytosolic delivery
of cancer drugs. We will report on their acoustic properties and ultrasound-mediated release characteristics. The implications for the design and development of these novel contrast agents will be discussed. [Supported by NSF
CBET-0651912, CBET-1033256, DMR-1005283.]
8:45
4aBA4. High-frequency harmonic imaging with coded excitation: Implications for the assessment of coronary atherosclerosis. Himanshu Shekhar and
Marvin M. Doyley (Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY 14611, [email protected])
The adventitial vasa vasorum grows abnormally in life-threatening atherosclerotic plaques. Harmonic intravascular ultrasound (H-IVUS) could help
assess the vasa vasorum by nonlinear imaging of microbubble contrast
agents. However, the harmonics generated in tissue at high acoustic pressures
compromise the specificity of H-IVUS, a trait that has hampered its clinical
use. Therefore, H-IVUS should be conducted at low pressure amplitudes, but
the resulting decrease in signal-to-noise ratio (SNR) could limit the sensitive
detection of the vasa vasorum. In this study, we investigated the feasibility
of improving the SNR of H-IVUS imaging with chirp-coded excitation. Numerical simulations and experiments were conducted to assess the harmonic
response of the commercial contrast agent Targestar-P™ to sine-burst and
chirp-coded excitation (center frequencies 10 and 13 MHz, peak-pressures
100 to 300 kPa). We employed 1) a single-element transducer pair, and 2) a
dual-peak frequency transducer for our studies. Our experimental results
demonstrated that exciting the agent with chirp-coded pulses can improve
the harmonic SNR by 7 to 14 dB. Further, the axial resolution obtained with
chirp-coded excitation was within 10% of that expected for sine-burst excitation. Therefore, we envisage that chirp-coded excitation may be a viable
strategy to visualize the vasa vasorum with H-IVUS imaging.
9:00
4aBA5. Effect of inter-element apodization on passive cavitation
images. Kevin J. Haworth (Internal Medicine, University of Cincinnati, 231
Albert Sabin Way, CVC3940, Cincinnati, OH 45209, [email protected]), T. D. Mast, Kirthi Radhakrishnan (Biomedical Engineering Program,
University of Cincinnati, Cincinnati, OH), and Christy K. Holland (Internal
Medicine, University of Cincinnati, Cincinnati, OH)
Acoustic cavitation has been correlated with a variety of ultrasound-mediated bioeffects. Recently developed passive cavitation imaging methods provide spatially resolved maps of cavitation activity with good azimuthal
resolution but poor axial resolution. Here, inter-element apodization is investigated as a means of improving image quality. Cavitation was induced from
echogenic liposomes in a flow phantom exposed to 6 MHz Doppler ultrasound
(Philips HDI-5000). The resulting acoustic emissions were passively recorded
on 64 elements of a linear array (L8-3 transducer, Zonare z.one ultra scanner).
Amplitude scaling of each waveform by its root-mean-square value improved
axial resolution at the expense of creating an ‘X-shaped’ artifact. Cosine amplitude apodization of the received waveforms across the array and centered
about the azimuthal location of the beamformed image pixel was found to
reduce grating lobe artifacts. Numerical time reversal of the received waveforms, using the Fresnel approximation for the acoustic field of each array element, resulted in an effective apodization due to element directivity and also
reduced grating lobe artifacts. Applying apodization may be an effective
means of increasing passive image quality for certain cavitation distributions,
which will be discussed. [Supported in part by NIH grants F32HL104916,
R01HL074002, R21EB008483, R01HL059586, and R01NS047603.]
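The role of receive apodization in passive beamforming can be illustrated with a schematic narrowband delay-and-sum sketch. The array geometry, frequency, and medium parameters below are arbitrary placeholders, and the pixel-centered cosine window is a simplified stand-in for the apodization described above, not the authors' beamformer.

```python
import numpy as np

C = 1500.0          # sound speed in water (m/s)
F0 = 2.0e6          # emission frequency to map (Hz), arbitrary
N_ELEM = 64
PITCH = 0.3e-3      # element spacing (m), arbitrary

x_elem = (np.arange(N_ELEM) - (N_ELEM - 1) / 2) * PITCH

def cosine_apodization(pixel_x, width=N_ELEM * PITCH):
    """Cosine-squared window across the aperture, centered at the
    azimuthal position of the image pixel."""
    u = np.clip((x_elem - pixel_x) / width, -0.5, 0.5)
    return np.cos(np.pi * u) ** 2

def pixel_power(signals_f, pixel, apodize=True):
    """Passive delay-and-sum power at one pixel for narrowband
    element spectra `signals_f` (one complex value per element)."""
    px, pz = pixel
    d = np.hypot(x_elem - px, pz)              # element-to-pixel distances
    phases = np.exp(2j * np.pi * F0 * d / C)   # back-propagation phases
    w = cosine_apodization(px) if apodize else np.ones(N_ELEM)
    return np.abs(np.sum(w * signals_f * phases)) ** 2 / np.sum(w) ** 2

# Synthesize a monochromatic cavitation source at (0, 20 mm) and image it.
src = (0.0, 0.02)
d_src = np.hypot(x_elem - src[0], src[1])
signals = np.exp(-2j * np.pi * F0 * d_src / C)   # received element phases

on_src = pixel_power(signals, src)
off_src = pixel_power(signals, (0.005, 0.02))    # 5 mm off in azimuth
print(on_src > off_src)  # -> True
```

Re-centering the window for every pixel tapers the elements farthest from the pixel's azimuth, which is what suppresses the grating lobe contributions in the abstract's cosine apodization.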
9:15
4aBA6. Acoustic emissions associated with ultrasound-induced rupture
of ex vivo blood vessels. Cameron L. Hoerig (Electrical Engineering Program, University of Cincinnati, Cincinnati, OH), Joseph C. Serrone (Department of Neurosurgery, University of Cincinnati, Cincinnati, OH), Mark T.
Burgess (Biomedical Engineering Program, University of Cincinnati,
Cincinnati, OH), Mario Zuccarello (Department of Neurosurgery, University
of Cincinnati, Cincinnati, OH), and T. Douglas Mast (Biomedical Engineering
Program, University of Cincinnati, 3938 Cardiovascular Research Center, 231
Albert Sabin Way, Cincinnati, OH 45267-0586, [email protected])
Occlusion of blood vessels using high-intensity focused ultrasound (HIFU)
is a potential treatment for arteriovenous malformations and other neurovascular disorders. However, HIFU-induced vessel occlusion can cause vessel rupture resulting in hemorrhage. Possible rupture mechanisms include mechanical
effects of acoustic cavitation and hyperthermia of the vessel wall. To investigate the mechanism of vessel rupture and assess the possibility of rupture prediction from acoustic emissions, HIFU exposures were performed on 18 ex
vivo porcine femoral arteries with simultaneous passive cavitation detection.
Vessels were insonified by a 3.3 MHz focused source with spatial-peak, temporal-peak focal intensity 1728-2791 W/cm² and a 50% duty cycle for durations up to 5 minutes. Time-dependent acoustic emissions were recorded by an
unfocused passive cavitation detector and quantified within low-frequency
(10-30 kHz), broadband (0.3-1.1 MHz), and subharmonic (1.65 MHz) bands.
Vessel rupture was detected by inline metering of saline flow, recorded
throughout each treatment. Rupture prediction tests, using receiver operating
characteristic curve analysis, found subharmonic emissions to be most predictive. These results suggest that acoustic cavitation plays an important role in
HIFU-induced vessel rupture. In HIFU treatments for vessel occlusion, passive
monitoring of acoustic emissions may be useful in avoiding hemorrhage.
9:30
4aBA7. Cavitation mechanisms in ultrasound-enhanced permeability of
ex vivo porcine skin. Kyle T. Rich (Biomedical Engineering Program, University of Cincinnati, Cincinnati, OH), Cameron L. Hoerig (Electrical Engineering Program, University of Cincinnati, Cincinnati, OH), and T. Douglas
Mast (Biomedical Engineering Program, University of Cincinnati, 3938
Cardiovascular Research Center, 231 Albert Sabin Way, Cincinnati, OH
45267-0586, [email protected])
Ultrasound-induced cavitation is known to enhance transdermal transport of drugs for local and systemic delivery. However, the specific cavitation mechanisms responsible are not well understood, and the physical
location of permeability-enhancing cavitation is also unknown. The experiments reported here investigated the role of stable and inertial cavitation,
both within the skin and at the dorsal skin surface, in ultrasound enhancement of skin permeability. Full-thickness porcine skin was hydrated with either air-saturated phosphate buffered saline (PBS) or vacuum-degassed PBS
to localize cavitation activity within or outside the skin, respectively. Skin
samples were sonicated for 30 minutes over a range of frequencies (0.41
and 2.0 MHz) and peak rarefactional pressure amplitudes (0-750 kPa) with
a 20% duty cycle (1 s on, 4 s off). Cavitation activity was monitored using a
1.0 MHz unfocused, wideband passive cavitation detector (PCD). Changes
in skin permeability were quantified by measuring the electrical resistance
of skin every 10 seconds during insonation. Subharmonic acoustic emissions
revealed a strong correlation with decreasing electrical resistance of skin
when cavitation was isolated within the tissue, suggesting that stable cavitation within the skin plays a primary role in ultrasound-enhanced permeability over the frequencies investigated.
9:45
4aBA8. Laser-induced-cavitation enhanced ultrasound thrombolysis.
Huizhong Cui and Xinmai Yang (Mechanical Engineering, University of
Kansas, 5109 Learned Hall, Lawrence, KS 66045, [email protected])
Application of ultrasound (US) is considered an effective way to dissolve thrombi, and cavitation has been demonstrated to significantly
enhance thrombolytic efficacy. In this study, to improve the efficacy of this
thrombolytic therapy, 764-nm laser light was used to induce cavitation during
US thrombolysis. Porcine clots were cut into small pieces, inserted
into small tubes, and placed in the focal zone of a 1-MHz high-intensity
focused ultrasound (HIFU) transducer in a water tank. At the same time, a
10-Hz laser system, confocal with the HIFU transducer, was used
to illuminate the focal area of the model during thrombolysis. After
thrombolysis, the debris of the clots was weighed to calculate the weight loss.
US thrombolysis both with and without laser illumination was performed
in the experiment, with different combinations of peak-to-peak ultrasound pressure amplitude, duty cycle, and duration. The clot
mass loss increased significantly when laser illumination was present during the US thrombolysis process. These preliminary experimental results indicate that laser-induced cavitation may play an important role in the
enhancement of US thrombolysis.
10:00–10:30 Break
10:30
4aBA9. Ethanol injection induced cavitation and heating in tissue
exposed to high intensity focused ultrasound. Chong Chen (Department
of Biomedical Engineering, Tulane University, New Orleans, LA), Yunbo
Liu, Subha Maruvada, Matthew Myers (Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD), and
Damir Khismatullin (Department of Biomedical Engineering, Tulane University, 500 Lindy Boggs Center, New Orleans, LA 70118, [email protected])
High Intensity Focused Ultrasound (HIFU) can ablate tumors located
deep in the body through highly localized energy deposition and tissue heating at the target location. The volume of a HIFU-induced thermal lesion can
be increased in the presence of cavitation. This study explores the effect of
ethanol injection on cavitation and heating in tissue-mimicking phantoms
and bovine liver tissues exposed to HIFU. The HIFU transducer
(0.825 MHz) operated at seven acoustic power levels ranging from 1.3 W to
26.8 W. The cavitation events were quantified by B-mode ultrasound imaging, needle hydrophone measurements, and passive cavitation detection
(PCD). Temperature in or near the focal zone was measured by thermocouples embedded in the samples. The onset of inertial cavitation in ethanol-treated phantoms and bovine liver tissues occurred at a lower power level
than in the untreated samples (control). The cavitation occurrence in turn
resulted in a sudden rise of temperature in ethanol-treated samples at a lower
acoustic power than in the control samples. The results of this work indicate that the
use of percutaneous ethanol injection prior to HIFU exposure may improve
the HIFU therapeutic efficiency.
10:45
4aBA10. Scattering by bubbles at frequencies well below resonance.
R. L. Culver, Robert W. Smith, and Dale I. McElhone (ARL, Penn State
University, PO Box 30, State College, PA 16804, [email protected])
We are interested in acoustic scattering by bubble clouds in water at frequencies and densities such that the acoustic wavelength is large relative to
the average distance between bubbles and large relative to the wavelength corresponding to the bubble resonance frequency. At high frequency and moderate
bubble density, bubble-scattered intensity is proportional to N (the number
density of the bubbles, m⁻³), which corresponds to incoherent scattering.
Effective medium theory has been shown to predict predominantly incoherent scattering at high frequencies, but coherent scattering (scattered intensity
proportional to N²) at lower frequencies. An incoherent scattering assumption at low frequencies can substantially underpredict the intensity of the
scattered signal. Coherent (low frequency) scattering from bubble assemblages has also been explained in terms of collective shape, but this approach
does not provide a means of predicting the temporal extent of the scattered
signal in low-frequency regimes. The literature apparently does not provide
precise guidance as to when and how bubble scattering transitions from
incoherent to coherent in response to increasing wavelength, or on
the relationship between the acoustic wavelength and average bubble separation. Modeling and a tank experiment are underway that we hope will provide answers to these questions.
11:00
4aBA11. Low-frequency measurement of encapsulated bubble compressibility. Scott J. Schoen, Yurii A. Ilinskii, Evgenia A. Zabolotskaya, and
Mark F. Hamilton (Applied Research Laboratories, The University of Texas
at Austin, 204 E. Dean Keeton Street, Austin, TX 78712-1591, [email protected])
Interest in measuring underground water flow has motivated synthesis of
encapsulated microbubbles for use as contrast agents. The large acoustic
attenuation in earth prohibits use of the high frequencies required to exploit
resonant scattering. Instead, contrast enhancement must rely on the reduction of acoustic impedance due to higher compressibility of the microbubbles. Bubble compressibility is measured at the kilohertz frequencies of
interest using a resonance tube filled with water and observing the change in
tube resonance frequency due to the presence of bubbles for different void
fractions [Wilson and Dunton, J. Acoust. Soc. Am. 125, 1951 (2009)].
Buoyancy makes it difficult to maintain a uniform distribution of bubbles
throughout the tube in order to relate sound speed to resonance frequency.
Therefore, the bubbles were restrained with acoustically transparent barriers
to form discrete layers within the water column. A model was developed to
investigate the effect on the tube resonance frequency due to different spatial distributions of the bubble layers, and the predictions were compared
with measurements. Good agreement with the known compressibility of air
was obtained experimentally with only three or four layers. [Work supported by Advanced Energy Consortium.]
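The low-frequency link between void fraction and mixture sound speed that underlies such resonance-tube measurements is commonly described by Wood's equation. A minimal sketch follows, using nominal fluid properties rather than the experimental values:

```python
import numpy as np

def wood_sound_speed(phi, rho_l=1000.0, c_l=1480.0, rho_g=1.2, c_g=340.0):
    """Effective low-frequency sound speed of a bubbly liquid with void
    fraction `phi` (Wood's equation): the mixture density pairs with a
    volume-weighted mixture compressibility."""
    rho_mix = (1 - phi) * rho_l + phi * rho_g
    # compressibility kappa = 1 / (rho * c^2), volume-weighted
    kappa_mix = (1 - phi) / (rho_l * c_l**2) + phi / (rho_g * c_g**2)
    return 1.0 / np.sqrt(rho_mix * kappa_mix)

print(round(wood_sound_speed(0.0)))    # -> 1480  (pure water)
print(round(wood_sound_speed(1.0)))    # -> 340   (pure gas)
print(wood_sound_speed(1e-3) < 1000)   # -> True  (tiny void fraction, large drop)
```

Even a 0.1% void fraction drops the mixture sound speed well below that of either constituent, which is why small known bubble layers measurably shift the tube resonance frequency.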
11:15
4aBA12. Measurements of resonance frequencies and damping of large
encapsulated bubbles in a closed, water-filled tank. Kevin M. Lee,
Andrew R. McNeese, Laura M. Tseng, Mark S. Wochner (Applied Research
Laboratories, The University of Texas at Austin, 10000 Burnet Road, Austin, TX 78758, [email protected]), and Preston S. Wilson (Mechanical
Engineering Department and Applied Research Laboratories, The University of Texas at Austin, Austin, TX)
The ultimate goal of this work is to accurately predict the resonance
frequencies of large (on the order of 10 cm radius) tethered encapsulated
bubbles used in an underwater noise abatement system, and also to investigate ways to enhance the system’s efficacy over the use of air-filled bubbles alone. Toward that end, a closed water-filled tank was developed for
the purpose of measuring the resonance frequency and damping of single
large tethered encapsulated bubbles. The tank was designed to be operated
in the long-wavelength limit for frequencies below the lowest tank resonance, which was chosen to be 500 Hz, using the method described by
Leighton et al. [J. Acoust. Soc. Am. 112, 1366–1376 (2002)]. Individual
bubble resonance frequencies and Q-factors were measured for encapsulated bubbles of various sizes. The effects of the encapsulating material
and wall thickness were investigated, along with the effects of alternative
fill gases and internal heat transfer materials. Experimental results are
compared with an existing predictive model [J. Acoust. Soc. Am. 97,
1510–1521 (1995)] of bubble resonance and damping. [Work supported by
Shell Global Solutions.]
11:30
4aBA13. Application of inversion techniques for bubble size spectra
from attenuation measurements in lab-generated bubble distributions.
Dale I. McElhone, Robert W. Smith, and R. Lee Culver (Appl. Res. Lab.,
Penn State Univ., State College, PA 16804, [email protected])
The size distribution of a bubble population can be estimated from
measurements of the frequency-dependent attenuation through the bubble
cloud. These attenuation values are the inputs to an inversion method that
makes use of a resonant bubble approximation wherein it is assumed that
only resonant bubbles contribute to the attenuation at a given frequency,
e.g., Caruthers et al. [JASA (1999)]. H. Czerski has shown that power-law
bubble distributions proportional to (radius)^x, where x ≤ −2, have few enough
large bubbles for resonant bubble inversion methods to yield accurate
results. In this paper, the Caruthers and Czerski inversion methods are both
verified with synthetic data and applied to acoustic measurements in a fresh
water tank using lab-generated bubble distributions. [Work sponsored by the
Office of Naval Research, Code 321US.]
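The resonant bubble approximation can be illustrated schematically: each measurement frequency is mapped to a radius via the Minnaert resonance, and the attenuation at that frequency is attributed entirely to that radius bin. The extinction kernel below is a deliberately simplified stand-in, not the Caruthers or Czerski formulation:

```python
import numpy as np

GAMMA = 1.4       # polytropic exponent of air
P0 = 101.3e3      # ambient pressure (Pa)
RHO = 1000.0      # water density (kg/m^3)

def minnaert_radius(f_hz):
    """Radius (m) of the bubble resonant at frequency f (Minnaert)."""
    return np.sqrt(3 * GAMMA * P0 / RHO) / (2 * np.pi * f_hz)

def invert_resonant(freqs, alpha_db_per_m, delta=0.1):
    """Resonant-bubble-approximation inversion: attribute the attenuation
    at each frequency entirely to bubbles resonant there. The per-bubble
    extinction cross-section and damping `delta` are illustrative."""
    a = minnaert_radius(np.asarray(freqs))
    sigma_e = 4 * np.pi * a**2 / delta          # simplified resonant extinction
    alpha_np = np.asarray(alpha_db_per_m) / 8.686   # dB/m -> Np/m
    return a, 2 * alpha_np / sigma_e            # assumes alpha = n * sigma_e / 2

# Round-trip check with a synthetic population (same simplified kernel).
freqs = np.array([10e3, 20e3, 40e3])
a_true = minnaert_radius(freqs)
n_true = np.array([1e4, 5e4, 2e5])              # bubbles per m^3 per bin
alpha_synth = 8.686 * n_true * (4 * np.pi * a_true**2 / 0.1) / 2
a_est, n_est = invert_resonant(freqs, alpha_synth)
print(np.allclose(n_est, n_true))  # -> True
```

The round trip only demonstrates the bookkeeping; the accuracy question studied in the paper is whether off-resonant (especially large) bubbles contaminate each frequency's attribution, which this one-to-one mapping ignores.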
THURSDAY MORNING, 25 OCTOBER 2012
LIDO, 8:20 A.M. TO 11:15 A.M.
Session 4aEA
Engineering Acoustics: Layered Media
Andrew J. Hull, Chair
Naval Undersea Warfare Center, Newport, RI 02841
Invited Papers
8:20
4aEA1. Acoustic radiation from a point excited multi-layered finite plate. Sabih I. Hayek (Engineering Science and Mechanics, Penn
State University, State College, PA 16802, [email protected]) and Jeffrey E. Boisvert (NAVSEA, Division Newport, Newport, RI)
The acoustic radiation from a finite rectangular composite plate is evaluated using eigenfunctions obtained through the use of three-dimensional equations of elasticity. The composite plate is made of perfectly bonded finite plates of identical lateral dimensions and of
different thicknesses. The plate is free of shear stresses and is pinned on the in-plane displacements on all its boundaries and is baffled
by an infinite rigid plane. The multi-layered plate is in contact with a different fluid medium on each of its two surfaces. The solution for
the vibration response due to normal and shear surface forces is found in terms of the composite plate eigenfunctions that include heavy
acoustic loading. The displacement vector field throughout the thickness of the plate is computed, as well as the resultant near- and far-field radiated acoustic pressures, for various ratios of thickness to plate dimensions over a broad frequency range. Initial results focus on
a bilaminar plate. [Work supported by the ASEE Summer Faculty Research Program.]
8:40
4aEA2. Investigating the fidelity of a pseudo-analytical solution of a rib-stiffened, layered plate structure subjected to high frequency acoustic loading. Kirubel Teferra and Jeffrey Cipolla (Applied Science Division, Weidlinger Associates, 375 Hudson St., New
York, NY 10014, [email protected])
There is a need for a fast and reliable tool to assist in the analysis, design, and optimization of submarine and UUV coatings due to
high frequency incident acoustic pressure loading. An existing pseudo-analytical, frequency domain solution for wave propagation in
coated, ribbed, three-dimensional elastic layered plates excited by acoustic plane waves provides fast solutions for high frequency excitations. Weidlinger Associates, Inc. (WAI) is developing an analysis software tool which integrates this solution methodology while adding
some technical improvements to the formulation. The solution methodology, which is found to be numerically unstable under certain conditions, contains a fundamental ansatz regarding the set of excited wave forms expressed through a particular wave number expansion in
the direction of periodicity. Evidence is presented to show that the numerical instability is due to the specific choice of the wave number
basis used in the solution. In order to provide a remedy while retaining the positive aspects of the solution methodology, WAI is implementing a pre-processing step to determine the optimal wave number basis: the set of admissible propagating (and attenuating) waves is
predetermined via an eigenvalue analysis and then substituted into the wave number basis in computing the pseudo-analytical solution.
9:00
4aEA3. Elasto-acoustic response of a rib-stiffened multi-layer Hull system. Irena Lucifredi (SOFAR Acoustics, 44 Garfield Ave. #2,
Woburn, MA 01801, [email protected]), Raymond J. Nagem (Boston University, Boston, MA), and Federico Lucifredi (SOFAR
Acoustics, Woburn, MA)
The analysis of hull vibrations has been a long-standing topic of interest in the US Navy for both surface and underwater vehicles.
Understanding of the physics controlling acoustic scattering and radiation from coated, fluid-loaded structures is important as it can
provide the required knowledge of the self-noise modeling of hull arrays and of the acoustic target strength of the submersibles. Currently, models are typically limited to the low-frequency regime of operation and cannot cover the broad mid-to-high frequency range,
commonly rich in physical phenomena that characterize sound fields in underwater vehicle environments. The goal of this effort is to
provide a robust, innovative, and computationally efficient tool for analytical modeling of a fluid-loaded acoustic coating affixed to a
rib-stiffened backing plate, capable of representing high frequency acoustic environments not suitable for conventional finite element
approaches. The approach taken in this effort is based on A. J. Hull's derivation of the elastic response of a layered sonar system on a
rib-stiffened plate, and it is centered on the reformulation of the layered-system response problem using displacement and stress variables. The new approach produces a significant improvement in the stability, efficiency, and accuracy of the computational method.
9:20
4aEA4. Vibro-acoustic response of an infinite, rib-stiffened, thick-plate assembly using finite-element analysis. Marcel C. Remillieux (Mechanical Engineering, Virginia Tech, 149 Durham Hall, Blacksburg, VA 24061, [email protected])
The vibration of and sound radiation from an infinite, fluid-loaded, thick-plate assembly stiffened periodically with ribs are investigated numerically using finite element (FE) analysis. The analysis is conducted in two-dimensions using plane-strain deformation to
model the dynamics of the structure. Advantage is taken of the periodicity of the system to deal with the infinite dimensions of the model
through the use of periodic boundary conditions. Firstly, numerical simulations are used to validate the analytical solutions derived
recently for this particular problem by Hull and Welch [Elastic response of an acoustic coating on a rib-stiffened plate, Journal of Sound
and Vibration 329 (2010) 4192-4211]. Numerical and analytical solutions are in excellent agreement, provided that the number of modes
in the analytical model is chosen correctly. Through this validation effort it is also demonstrated that the analytical model is sensitive to
the number of modes used to formulate the solution, which may result in some instabilities related to mode count. Subsequently, the numerical model is used to study the effect of repeated and equally spaced void inclusions on the vibro-acoustic response of the system.
Contributed Papers
4aEA5. Radiation loading on multi-panel plates. Chiruvai P. Vendhan,
Poosarla V. Suresh, and Subrata K. Bhattacharyya (Ocean Engineering
Department, Indian Institute of Technology Madras, Chennai, Tamilnadu
600036, India, [email protected])
Fluid-structure interaction problems involving the harmonic vibration of
plates may be analyzed by employing an assumed modes approach. The associated hydrodynamic problem may be solved employing boundary element or
finite element (FE) methods. An infinitely long multi-panel plate, having uniform spans and vibrating in contact with a fluid is considered here. A typical
single span panel of the multi-panel system is set in a rigid baffle and the
semi-infinite fluid domain over it is truncated. A FE model for the Helmholtz
equation is employed over this domain, and suitable dampers are used on the
truncation boundary to impose the radiation boundary condition. The FE solution is used to set up an eigenfunction expansion of the acoustic field outside
the FE domain. Such an approach has originally been developed for exterior
acoustic problems [C.P. Vendhan and C. Prabavathi, J. Vib. and Acoust., 118,
1996, 575-582]. The pressure field on the single panel and the infinite baffle
is used to obtain the modal radiation loading in the form of added mass and
radiation damping matrices of the multi-panel system, employing reciprocity
and linear superposition. The method has been validated for an infinite plate
example and illustrated using two and three panel systems.
9:55–10:15 Break
10:15
4aEA6. Free-wave propagation relationships of second-order and
fourth-order periodic systems. Andrew J. Hull (Naval Undersea Warfare
Center, 1176 Howell St, Newport, RI 02841, [email protected])
This talk develops analytical expressions for the conditions under which the determinants of two diagonally indexed, full matrices are zero. These matrices originate from second- and fourth-order periodic system theory. The partial differential equations of these systems are solved using a series solution and are
converted into closed-form analytical expressions. The denominators of these
expressions are zero when free-wave propagation is present, and these
denominators are equated to the determinants of the system matrices derived
from a second analytical method. This process develops a relationship
between frequency and wavenumber that is explicit for free-wave propagation
in these systems. Two examples are included to illustrate this relationship.
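As a hedged illustration of the kind of explicit frequency-wavenumber relationship described above (a textbook stand-in, not the author's derivation), consider the uniform Euler-Bernoulli beam, a fourth-order system whose free flexural waves satisfy EI k^4 = m omega^2; all numerical values below are assumed for illustration:

```python
import math

def flexural_wavenumber(omega, EI, mass_per_length):
    """Free-wave wavenumber k for a uniform Euler-Bernoulli beam.

    Fourth-order system: EI * k**4 = m * omega**2, so
    k = (m * omega**2 / EI) ** (1/4), an explicit
    frequency-wavenumber relationship for free-wave propagation.
    """
    return (mass_per_length * omega ** 2 / EI) ** 0.25

# Illustrative (assumed) beam properties, not taken from the abstract:
EI = 2.0e4   # bending stiffness, N*m^2
m = 7.8      # mass per unit length, kg/m
f = 100.0    # frequency, Hz

k = flexural_wavenumber(2 * math.pi * f, EI, m)
wavelength = 2 * math.pi / k   # spatial period of the free wave, m
```

The same zero-determinant logic applies to the periodic systems in the talk, except that there the dispersion relation emerges from equating series-solution denominators to system-matrix determinants rather than from a single closed-form root.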
10:30
4aEA7. Damping of flexural vibrations in glass fiber composite plates
and honeycomb sandwich panels containing indentations of power-law
profile. Elizabeth P. Bowyer, Peter Nash, and Victor V. Krylov (Aeronautical and Automotive Engineering, Loughborough University, Loughborough,
Leicestershire LE11 3TU, United Kingdom, [email protected])
In this paper, the results of the experimental investigation into the addition of indentations of power-law profile into composite plates and panels
and their subsequent inclusion into composite honeycomb sandwich panels
are reported. The composite plates in question are sheets of composite with
visible indentations of power-law profile. A panel is a sheet of composite
with the indentations encased within the sample. This makes a panel similar
in surface texture to an un-machined composite sheet (reference plate) or
conventional honeycomb sandwich panel. In the case of quadratic or higher-order profiles, the above-mentioned indentations act as two-dimensional
acoustic black holes for flexural waves that can absorb a large proportion of
J. Acoust. Soc. Am., Vol. 132, No. 3, Pt. 2, September 2012
the incident wave energy. For all the composite samples tested in this investigation, the addition of two-dimensional acoustic black holes resulted in
a further increase in damping of resonant vibrations, in addition to the already
substantial inherent damping due to large values of the loss factor for composites. Due to large values of the loss factor for composite materials used,
no increase in damping was seen with the addition of a small amount of
absorbing material to the indentations, as expected.
10:45
4aEA8. Sound radiation of rectangular plates containing tapered indentations of power-law profile. Elizabeth P. Bowyer and Victor V. Krylov
(Aeronautical and Automotive Engineering, Loughborough University,
Loughborough, Leicestershire LE11 3TU, United Kingdom, [email protected])
In this paper, the results of the experimental investigations into the
sound radiation of rectangular plates containing tapered indentations of power-law
profile are reported. Such tapered indentations materialise two-dimensional
acoustic black holes for flexural waves that result in absorption of a large
proportion of the incident wave energy. A multi-indentation plate was compared to a plain reference plate of the same dimensions, and the radiated
sound power was determined (ISO 3744). It was demonstrated that not only
do such multiple indentations provide a substantial increase in the damping
of flexural vibrations within the plates, but also cause a substantial reduction
in the radiated sound power. As the amplitudes of the flexural vibrations of
a plate are directly linked to the amplitude of radiated sound from the same
plate, this paper also considers the effect of distribution of the amplitude of
the plate’s response on the amplitudes of the radiated sound. This investigation concludes that, despite an increase in the amplitude of the displacement
at the indentation tip, the overall reduction in the constant thickness of the
plate is large enough to result in substantial reductions in the overall vibration response and in the resulting sound radiation of plates containing indentations of power-law profile.
11:00
4aEA9. Damping of flexural vibrations in plates containing ensembles
of tapered indentations of power-law profile. Elizabeth P. Bowyer, Daniel
O’Boy, and Victor V. Krylov (Aeronautical and Automotive Engineering,
Loughborough University, Loughborough, Leicestershire LE11 3TU, United
Kingdom, [email protected])
In this work, we report experimental results on damping flexural vibrations in rectangular plates containing tapered indentations (pits) of power-law profile, the centres of which are covered by a small amount of absorbing
material. In the case of quadratic or higher-order profiles, such indentations
materialise two-dimensional acoustic ‘black holes’ for flexural waves. Initially, the effects of single pits have been investigated. It has been found
that, in order to increase the damping efficiency of power-law profiled
indentations, their absorption cross-sections should be enlarged by drilling a
central hole of sufficiently large size (14 mm), while keeping the edges
sharp. Such pits, being in fact curved power-law wedges, result in substantially increased damping. The next and the major part of this investigation
involved using multiple indentations in the same rectangular plates to
increase damping. Plates with combinations from two to six equal indentations have been investigated. The results show that, when multiple indentations are used, the associated damping increases substantially with the
number of indentations. For the plate with six indentations, the
resulting damping becomes comparable to, if not greater than, that achieved by
a wedge of power-law profile.
164th Meeting: Acoustical Society of America
Downloaded 18 Oct 2012 to 192.87.79.51. Redistribution subject to ASA license or copyright; see http://asadl.org/terms
THURSDAY MORNING, 25 OCTOBER 2012
ANDY KIRK A/B, 8:25 A.M. TO 11:20 A.M.
Session 4aMUa
Musical Acoustics and Speech Communication: The Acoustics of Rhythm
James P. Cottingham, Chair
Physics, Coe College, Cedar Rapids, IA 52402
Chair’s Introduction—8:25
Invited Papers
8:30
4aMUa1. The nature and perception of human musical rhythms. Holger Hennig (Jefferson Lab, Dept. of Physics, Harvard University, Cambridge, MA 02138, [email protected]), Ragnar Fleischmann (Nonlinear Dynamics, Max Planck Institute for
Dynamics and Self-Organization, Goettingen, Nds, Germany), Anneke Fredebohm (Institute of Psychology, University of Goettingen,
Goettingen, Nds, Germany), York Hagmayer (King’s College, Goettingen, Nds, Germany), Jan Nagler, Annette Witt (Nonlinear
Dynamics, Max Planck Institute for Dynamics and Self-Organization, Goettingen, Nds, Germany), Fabian J. Theis (Institute for Bioinformatics and Systems Biology, Helmholtz Centre, Munich, BAV, Germany), and Theo Geisel (Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Goettingen, Nds, Germany)
Although human musical performances represent one of the most valuable achievements of mankind, the best musicians perform
imperfectly. Musical rhythms are not entirely accurate and thus inevitably deviate from the ideal beat pattern. Nevertheless, computer
generated perfect beat patterns are frequently devalued by listeners due to a perceived lack of human touch. Professional audio editing
software therefore offers a humanizing feature which artificially generates rhythmic fluctuations. However, the built-in humanizing units
are essentially random number generators producing only simple uncorrelated fluctuations. Here, for the first time, we establish long-range fluctuations as an inevitable natural companion of both simple and complex human rhythmic performances [1,2]. Moreover, we
demonstrate that listeners strongly prefer long-range correlated fluctuations in musical rhythms. Thus, the favorable fluctuation type for
humanizing interbeat intervals coincides with the one generically inherent in human musical performances. [1] H. Hennig et al., PLoS ONE 6, e26457 (2011). [2] Physics Today, invited article, submitted (2012).
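The contrast between uncorrelated and long-range correlated jitter can be sketched in a few lines (a hedged illustration of the idea, not the authors' method or code; the Voss-McCartney algorithm used here is one standard way to produce approximately 1/f fluctuations, and all parameter values are assumed):

```python
import random

def voss_pink_noise(n, num_sources=8, rng=None):
    """Approximate 1/f ("pink") noise via the Voss-McCartney algorithm.

    Source j is refreshed only every 2**j steps, so slow sources impose
    long-range correlations that a plain random-number generator lacks.
    """
    rng = rng or random.Random(0)
    sources = [rng.uniform(-1, 1) for _ in range(num_sources)]
    out = []
    for i in range(n):
        for j in range(num_sources):
            if i % (2 ** j) == 0:          # refresh source j every 2^j steps
                sources[j] = rng.uniform(-1, 1)
        out.append(sum(sources) / num_sources)
    return out

def humanize(interbeat_ms, n, depth_ms=8.0):
    """Nominal interbeat interval plus long-range correlated deviations,
    instead of the uncorrelated jitter of typical 'humanizing' plug-ins."""
    return [interbeat_ms + depth_ms * d for d in voss_pink_noise(n)]

intervals = humanize(500.0, 64)   # 120 bpm quarter notes, humanized
```

Successive deviations produced this way drift slowly rather than resetting every beat, which is the qualitative property listeners preferred in the study.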
8:50
4aMUa2. Human body rhythms motion analogy in music sound. Alexander Ekimov (National Center for Physical Acoustics, The
University of Mississippi, 1 Coliseum Drive, University, MS 38677, [email protected])
The universal algorithm developed for searching for periodic and quasiperiodic rhythms in different types of signals [JASA 129(3)] was applied to process several musical sound files; the results were reported at ASA and other conferences and published in POMA 14 (2011). This algorithm was originally developed to find human body motion rhythms in signals from various security systems. A preliminary conclusion from applying the algorithm to a few music sound files was that the rhythms found in the music corresponded to the rhythms of regular human body motion. It appears that the motions of a musician's body parts while playing can be found in the rhythms of the resulting music, which creates an impression for the audience. The rhythms in the analyzed music sound files also correspond to mechanical human body movements such as walking or running. Analyses of additional music files (including vocals) with this rhythm algorithm, and the results corresponding to rhythms of human body motion, are presented. This work was supported by the National Center for Physical Acoustics (NCPA) of the University of Mississippi.
9:10
4aMUa3. Heartbeat and ornaments: More technical secrets of swing rhythm. Ken Lindsay (Software Engineering, Tinmap, 180
Ohio St, Ashland, OR 97520, [email protected]) and Pete Nordquist (Computer Science, Southern Oregon University, Ashland, OR)
We previously demonstrated technically precise methods characterizing various types of Swing style in music. Our primary tool is
“diffdot” notation showing, in graphical form, the exact timing relationships between various musical notes that create a specific musical
phrase. We have shown several common and obvious details of Swing, all based on time variations from a uniform square grid (Classical
Mozart/Bach). Micro-timing variations are generally recognized as being essential to Swing. There may be other elements which define
Swing feeling but we have focused on micro-timing details. These are a fruitful source for technical analysis of Swing styles. Triplet subdivision is often associated with Swing (Jazz, Blues), but triplets are neither necessary nor sufficient to distinguish a performance as
“Swing” versus “Straight” time. One seemingly universal detail of Swing is an asymmetrical “pulse” or basic beat, e.g. on the downbeat
of every measure, or the one and three beat in a 4/4 piece. The time between the two heartbeat notes as they repeat their cycle is not
equal. This gives rise to unmistakable Swing. Other non-uniform but precisely repeated timing patterns characterize Swing at hierarchical
levels different from pulse. These we call “ornaments” in keeping with common musical jargon.
9:30
4aMUa4. Identifying highly rhythmic stretches of speech with envelope-driven resonance analysis. Sam Tilsen (Linguistics,
Cornell University, 203 Morrill Hall, Ithaca, NY 14853, [email protected])
This paper proposes envelope-driven resonance analysis as a technique for characterizing rhythmicity in speech, emphasizing the
degree to which a brief stretch of speech creates a rhythmic expectancy. Most approaches to characterizing the rhythm of speech have
utilized measurements derived from the durations of linguistically relevant units such as feet, syllables, or vocalic/consonantal intervals.
Recently, alternative approaches have been developed which are based upon the amplitude envelope of the speech waveform after the
waveform has been filtered to emphasize low-frequency oscillations associated with alternations between vowels and consonants. These
approaches include spectral analysis and empirical mode decomposition of the envelope. The method explored here is resonance analysis, which utilizes a bank of resonators that differ in their characteristic resonant frequencies. The resonators are 2nd order dynamical
systems analogous to driven, damped springs. The powers of the resonator amplitudes are analyzed during and subsequent to excitation
by a speech amplitude envelope. The power and frequency distribution of the resonant response is used to identify highly rhythmic
stretches of speech and characterize their spectral properties.
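The resonator-bank idea can be sketched as follows (a minimal stand-in, not the author's implementation: the "envelope" here is an artificial impulse train at an assumed 4 Hz syllable rate rather than a real speech envelope, and the damping ratio and sampling rate are assumptions):

```python
import math

def resonator_power(envelope, fs, f0, zeta=0.05):
    """Mean-square response of one 2nd-order resonator (driven, damped
    spring): x'' + 2*zeta*w0*x' + w0**2 * x = drive(t),
    integrated with semi-implicit Euler at sample rate fs."""
    w0 = 2 * math.pi * f0
    dt = 1.0 / fs
    x = v = power = 0.0
    for e in envelope:
        v += (e - 2 * zeta * w0 * v - w0 * w0 * x) * dt
        x += v * dt
        power += x * x
    return power / len(envelope)

fs = 200                      # envelope sample rate, Hz (assumed)
duration = 4.0                # seconds of "speech"
# Artificial amplitude envelope: impulses at a 4 Hz syllable rate.
env = [1.0 if n % (fs // 4) == 0 else 0.0 for n in range(int(fs * duration))]

# Bank of resonators differing only in characteristic frequency:
bank = {f0: resonator_power(env, fs, f0) for f0 in (2.0, 3.0, 4.0, 5.0, 6.0)}
best = max(bank, key=bank.get)   # the resonator matched to the rhythm wins
```

A highly rhythmic stretch concentrates response power in the resonator whose characteristic frequency matches the speech rate, which is exactly the diagnostic the analysis exploits.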
9:50
4aMUa5. Using resonance to study the deterioration of the pulse percept in jittered sequences. Marc J. Velasco and Edward W.
Large (Center for Complex Systems and Brain Sciences, Florida Atlantic University, 777 Glades Rd, Boca Raton, FL 33432, [email protected])
Studies of pulse perception in rhythms often ask what periodicity describes the pulse, e.g., tempo identification. In studies of pulse
attribution, irregular rhythmic sequences are rated for the degree to which a pulse percept is elicited, if at all. Here, we investigate how a
resonance approach to pulse perception may explain the reduction in pulse attribution ratings for jittered sequences while also predicting
perceived tempo. We use a signal processing approach to predict perceptual ratings and behavioral performance measures (i.e., tapping
data). Measures of resonance are evaluated using both FFT and a network of neural oscillators. The stimuli were isochronous sequences
modified with varying levels of pseudorandom Kolakoski jitter. In separate blocks, participants were asked to provide pulse attribution
judgments and to tap at the pulse rate. As levels of jitter increased, pulse attribution ratings decreased and participants tapped periodically at the mean sequence rate. At certain high levels of jitter, pulse attribution ratings increased and participants entrained at a new tapping rate. Resonance measures account for both mean tapping rate and pulse attribution ratings, suggesting that these two behavioral
measures may be different aspects of the same resonant phenomenon.
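One way such stimuli can be built is sketched below (a hedged guess at the construction, not the authors' code: the Kolakoski sequence is self-describing and aperiodic, and its 1s and 2s can serve as deterministic jitter signs; interval and jitter magnitudes are assumed):

```python
def kolakoski(n):
    """First n terms of the Kolakoski sequence 1,2,2,1,1,2,1,2,2,...,
    whose run lengths reproduce the sequence itself."""
    s = [1, 2, 2]
    value, k = 1, 2            # next value to append; index of next run length
    while len(s) < n:
        s.extend([value] * s[k])
        value = 3 - value       # alternate 1 <-> 2
        k += 1
    return s[:n]

def jittered_onsets(n, ioi_ms=500.0, jitter_ms=40.0):
    """Onset times of an isochronous sequence with Kolakoski-signed jitter:
    a '1' shifts an onset early, a '2' shifts it late."""
    signs = [-1.0 if v == 1 else 1.0 for v in kolakoski(n)]
    return [i * ioi_ms + jitter_ms * sgn for i, sgn in enumerate(signs)]

onsets = jittered_onsets(16)   # 16 onsets, nominal 500 ms spacing
```

Because the jitter pattern is aperiodic but fully determined, increasing jitter_ms degrades the pulse percept in a controlled, reproducible way across participants.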
10:10–10:25 Break
10:25
4aMUa6. Rhythm and meter in 21st century music theory. Justin London (Music, Carleton College, One North College St., Northfield, MN 55057, [email protected])
Theories of rhythm in western music go back to Aristoxenus (335 BC) and have continued unabated to the present day. Yet while
music theoretic discussions of melody and harmony since Pythagoras have often looked to mathematics and acoustics, only recently has
music theory availed itself of research in acoustics, psychoacoustics, and auditory psychology. The central question for a theory of musical rhythm is “what makes something regular enough to be considered a rhythm?” Answering this question requires not only knowledge of music in a range of musical styles and cultures, but also an understanding of our basic psychological capacities for temporal
perception and discrimination, as well as our perceptual biases and habits. A brief outline of recent theories of rhythm and meter that
draw upon these domains will be presented, with an emphasis on musical meter as a kind of entrainment, that is, a synchronization of our
attending and/or sensorimotor behaviors to external periodicities in a particular temporal range.
10:45
4aMUa7. Cross-cultural concepts and approaches in musical rhythm. Rohan Krishnamurthy (Musicology, Eastman School of
Music, Rochester, NY 14604, [email protected])
Western (written, notated) and Indian (oral, unnotated) systems of musical rhythm will be analyzed from theoretical and performance perspectives. Rhythmic concepts and parameters such as meter and tala (rhythmic cycles with constant duration and tempo), tuplet
and nadai subdivisions of an underlying pulse, and accelerando (gradual accelerations in musical tempo) and decelerando (gradual decelerations in musical tempo) will be defined in a cross-cultural context. These systems of understanding temporal flow have wide-ranging
implications on musical form, style and aesthetics, and artistic freedom. The corporeal or physical dimension of musical rhythm, resulting from instrumental techniques, vocalizations of rhythms, and physical systems of constructing and maintaining temporal flow such as
tala visualizations and ensemble conducting, will also be considered. The presentation will include live, interactive musical demonstrations and will be followed by a performance on the mridangam, the primary percussion instrument from South India.
Contributed Paper
11:05
4aMUa8. Analysis of rhythm performance strategies on the Indian tabla as a function of tempo. Punita G. Singh (Sound Sense, 16 Gauri Apartments, 3 Rajesh Pilot Lane, New Delhi 110011, India, [email protected])
In north Indian classical music, the range of tempi can extend from the ultra-slow ‘vilambit’ at less than a beat every 5 seconds to the super-fast ‘drut’ at over 10 beats per second. To hold a rhythm at these speeds and generate a perceptible metrical structure, performers routinely alter playing strategies that derive from neurophysiological and psychoacoustical considerations. At slow speeds, theoretically silent intervals are in practice punctuated by filler sounds to maintain perceptual connectivity. At high speeds, an interesting phenomenon is observed as compound sounds or ‘bols’ segregate into their simpler components, forming auditory streams of acoustically similar sounds. Compound bols such as ‘dha’ break up into the tonal ‘ta’ and the noisy ‘ghe’, with the sequence of rapidly recurring ‘ghe’ sounds forming a noise band that could potentially mask tonal accent markers. To avoid this, performers routinely drop the ‘ghe’ sounds at high speeds at metrically unimportant points in the sequence, while retaining them at points that would mark accents. These playing strategies are useful in providing mental and physical relief to performers in maintaining a steady rhythm across such a vast range of tempi, while also preserving the rhythmic integrity of the music for listeners.
THURSDAY MORNING, 25 OCTOBER 2012
ANDY KIRK A/B, 11:30 A.M. TO 12:00 NOON
Session 4aMUb
Musical Acoustics: Demonstration Performance on the Mridangam by Rohan Krishnamurthy
James P. Cottingham, Chair
Physics, Coe College, Cedar Rapids, IA 52402
Rohan Krishnamurthy will present a percussion solo on the ancient South Indian pitched drum, the mridangam. The performance will
showcase the lively and complex rhythmic nuances of Indian classical music and involve interactive audience participation.
THURSDAY MORNING, 25 OCTOBER 2012
TRIANON C/D, 8:20 A.M. TO 11:25 A.M.
Session 4aNS
Noise, Architectural Acoustics, and ASA Committee on Standards: Ongoing Developments in Classroom
Acoustics—Theory and Practice in 2012, and Field Reports of Efforts to Implement Good Classroom
Acoustics I
David Lubman, Cochair
DL Acoustics, 14301 Middletown Ln., Westminster, CA 92683-4514
Louis C. Sutherland, Cochair
lcs-acoustics, 5701 Crestridge Rd., Rancho Palos Verdes, CA 90275
Chair’s Introduction—8:20
Invited Papers
8:30
4aNS1. Classroom acoustics 2012: Recent developments, current issues, and field reports. David Lubman (DL-Acoustics, Westminster, CA) and Louis C. Sutherland (LCS-Acoustics, 5701 Crestridge Rd, Apt 243, Rancho Palos Verdes, CA 90275, [email protected])
This introductory paper provides an overview of the papers in this session. It showcases important findings of the UK’s Essex Study
by David Canning & Adrian James (2012), which confirms large listening benefits of reducing reverberation times (RT) to 0.4 sec or
less. The Essex study also found a marked drop in LA90 for occupied classrooms when RT was halved. This introductory paper also
briefly reviews the Acoustical Society of America’s initial actions leading to development of the influential ANSI standards on
classroom acoustics (S12.60-2010, Parts 1 and 2), and subsequent outreach actions, including publication of Classroom Acoustics booklets. (Two new booklets, one aimed at Educators and the other aimed at Architects, are being prepared for publication.) Also reviewed is
the ongoing struggle to incorporate meaningful noise and reverberation time criteria into design guidelines for the California Collaborative for High Performance Schools, Los Angeles Unified School District, and LEED for Schools. Finally, it shows that noise transients
occurring during classroom noise measurements can make quiet classrooms seem misleadingly noisy.
8:50
4aNS2. Essex experimental study: The impact of reverberation time on working classrooms. David Canning (UCL, Gower Street,
London WC1E 6BT, United Kingdom, [email protected]), Adrian James (Adrian James Acoustics, Norwich, United Kingdom), and
Bridget M. Shields (Urban Engineering, London South Bank University, London, United Kingdom)
There has been considerable debate regarding the value of adding acoustic treatment to refurbished classrooms. Is there any demonstrable benefit in reducing reverberation time in secondary schools to below 0.8s? This study aimed to examine the impact of reverberation time on working classrooms. Four similar classrooms with RT > 0.9s were selected for the study. Three rooms were treated with
visually similar acoustically absorbent materials to reduce the RT to between 0.3 and 0.8s, the fourth room being left as a control. Over
a period of six months the treatments were changed so that all class/teacher combinations experienced the different acoustic environments, while remaining blind to the condition. Ten teachers and 400 children including 17 hearing impaired children were involved in
the study. Extensive objective and qualitative (interview and questionnaire) data were collected throughout the project. Analysis of the
impact of room acoustics on classroom noise was conducted blind to the acoustic condition. Results demonstrate that RT has a significant
impact on classroom noise levels and occupant behaviour. Reduction of reverberation time from 0.8 to 0.4 s brought a reduction of 9 dB
in LA90, as against the expected 3 dB reduction. Qualitative data support the beneficial impact on the classroom experience.
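The "expected 3 dB" benchmark follows from diffuse-field theory, where steady-state reverberant energy is proportional to reverberation time. A quick check of that expectation (an illustrative calculation, not taken from the paper):

```python
import math

def level_change_dB(rt_after, rt_before):
    """Expected change in reverberant sound level when RT changes,
    assuming a diffuse field where reverberant energy scales with RT:
    delta_L = 10 * log10(RT_after / RT_before)."""
    return 10.0 * math.log10(rt_after / rt_before)

# Halving RT from 0.8 s to 0.4 s:
delta = level_change_dB(0.4, 0.8)   # about -3 dB expected
```

The measured 9 dB drop in LA90 therefore exceeds the purely acoustical prediction threefold, consistent with the study's finding that occupants also behave more quietly in the treated rooms.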
9:10
4aNS3. Impact and revision of UK legislation on school acoustics. Bridget M. Shield and Robert Conetta (Urban Engineering,
London South Bank University, Borough Road, London SE1 7JQ, United Kingdom, [email protected])
Since 2003 new school buildings in England and Wales have been subject to Building Regulations which impose a legal requirement
for spaces in schools to meet acoustic performance criteria for ambient noise levels, reverberation times and sound insulation. The criteria are specified in the Department of Education publication ‘Building Bulletin 93’ (BB93). In 2008 it was agreed that BB93 would be
updated. The Labour government endorsed the need for good acoustic design of schools and agreed to a minor revision of the legislation.
However, the new government elected in 2010 recommended the removal of legislation on school acoustics, in order to reduce the cost
of new school buildings. The acoustics community in the UK successfully lobbied the government to keep the legislation and it has been
agreed that the acoustic regulations relating to the performance of a building in use will be retained. BB93 is currently (June 2012) being
redrafted and the acoustic performance specifications revised. This paper will use the results of a recent large scale survey of the acoustics of secondary schools in the UK to examine the impact of BB93 on school design over the past 10 years, and will discuss the current
revision of the legislation.
9:30
4aNS4. Effects of noise in high schools on pupils’ perceptions and performance. Julie E. Dockrell, Daniel Connolly (Psychology
and Special Needs, Institute of Education, London University, London, United Kingdom), Charles Mydlarz (School of Computing, Science and Engineering, University of Salford, Manchester, United Kingdom), Robert Conetta, Bridget M. Shield (Urban Engineering,
London South Bank University, Borough Road, London SE1 7JQ, United Kingdom, [email protected]), and Trevor J. Cox (School
of Computing, Science and Engineering, University of Salford, Manchester, United Kingdom)
A recent project has investigated acoustical conditions in secondary (high) schools, and examined the effects of a poor acoustic environment on teaching and learning of 11- to 16-year-olds. Around 2600 pupils from suburban secondary schools in England responded to
an online questionnaire concerning the acoustic environment in their schools. The questionnaire data highlighted the differential effects
of noise reported by more vulnerable learners. A repeated measures experimental study involving 572 pupils examined reading performance under two different classroom noise simulations. Results revealed a complex pattern reflecting noise levels, time of testing and
measure of reading performance used. Reading text while exposed to classroom noise of 70 dB resulted in quicker reading but less accuracy in measures of reading comprehension compared with performance in 50 dB. The data further suggested that the pupils were not
processing the text as deeply, as was evident from their reduced lexical learning. There were also interactions with time of testing, highlighting the importance of examining the effects of chronic exposure in addition to single-session experimental testing. The test results
show that capturing the effects of noise on pupils’ learning in realistic classroom environments raises a number of methodological and
analytical problems.
9:50
4aNS5. Classroom acoustics and beyond: Soundscapes of school days. Jeff Crukley (London Hearing Centre, 1843 Bayswater Crescent, London, ON N6G 5N1, Canada, [email protected])
Moving beyond traditional measures of classroom acoustics, in this presentation I propose a novel approach that addresses the
dynamic nature of the school-day soundscape. In addition to noise floor and reverberation measures, I suggest that the use of dosimetry
and observation of children’s acoustic environments and situations can provide a more realistic representation of children’s listening
needs and the contexts of potential challenges. Cohorts of daycare, elementary, and high school students were shadowed by a researcher
wearing a dosimeter and recording observational data. Detailed tracings of the sound levels, the types and sources of sounds, and classifications of the acoustic situations will be presented. Results demonstrated a wide range of listening environments, goals, and situations
across all three cohorts of students. Sample recordings from the school day soundscapes will be presented. The implications of these
results for how we think about and study classroom acoustics will be discussed.
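Dosimeter samples of the kind described here are typically summarized energetically rather than arithmetically. A minimal sketch (illustrative values, not the study's data):

```python
import math

def leq(levels_dB):
    """Equivalent continuous level Leq: the energy average of short-term
    dB samples, 10 * log10(mean(10 ** (L / 10)))."""
    mean_energy = sum(10 ** (L / 10.0) for L in levels_dB) / len(levels_dB)
    return 10.0 * math.log10(mean_energy)

# A quiet lesson with one brief loud event: the event dominates the average.
samples = [50.0, 50.0, 50.0, 70.0]
day_level = leq(samples)   # well above the 50 dB that prevailed most of the time
```

This sensitivity of energy-averaged metrics to brief loud events is one reason dosimetry plus observational context gives a more realistic picture of children's listening environments than a single noise-floor number.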
10:10–10:30 Break
10:30
4aNS6. Ongoing developments in classroom acoustic theory and practice in 2012, and reports on efforts to implement good classroom acoustics. Pamela Brown and Mary Crouse (David H. Sutherland & Co., Inc., 2803 NE 40th, Tigard, Portland, OR 97220,
[email protected])
We live in a time of increasingly loud competing sounds, and hearing loss is the number one disability in the world. Diverse populations of school children are especially vulnerable. The result is a degradation of the child’s academic achievement. New classrooms,
built every day, often incorporate acoustical barriers which limit students’ achievements. Overcoming these barriers involves funding
constraints, construction timelines and lack of support which requires advocacy from parents, school boards, and design teams. This advocacy should include the ANSI Classroom Acoustics standards and an acoustical assessment of existing classrooms. Complex classroom acoustics challenges may include reduction of noise radiated by HVAC systems, improved acoustic treatment of external walls to
minimize exterior noise and acoustic design of walls between adjacent noisy classrooms. Next steps for schools should be to retain an
architect and/or an acoustical engineer for remodels and new school construction who are well versed in acoustics for educational settings and noise control. A booklet covering these issues, and designed as a practical guide for educators not versed in acoustics, is in
preparation by the Acoustical Society of America.
10:50
4aNS7. Creation of an architects’ companion booklet for ANSI 12.60 American National Classroom Acoustics Standard. David
S. Woolworth (Oxford Acoustics, 356 CR 102, Oxford, MS 38655, [email protected]) and Peter Phinney (Bryant Palmer
Soto, Inc., Torrance, CA)
This paper outlines the process of collaboration between an architect and acoustician to produce a document that translates the fundamental objectives and ideas of ANSI 12.60 from the semi-archaic language of an acoustics standard to a simple, useful reference for all stripes of
architects. Included will be the paradigm of the approach, definition of scope presented, order of presentation, and methods of presentation.
Contributed Paper
story design forms the basis of design. First tests were done at an existing elementary school with the same design. Acoustical recommendations for
wall designs, room finishes and HVAC design were incorporated into the
design and construction of the new school. The school was not near significant transportation noise sources. After construction was mostly complete,
tests were done to learn the sound transmission loss of walls and floor/ceiling systems. Reverberation time tests and background sound levels were
measured after construction was complete. Background sound met design
goals in all but one space except for the sound generated by a wind turbine
mounted on one end of the buildings. This was added by the schools Principal during the latter part of construction without consulting everyone. This
proved to be a significant source that had to be removed.
11:10
4aNS8. Acoustic design of a new elementary school to meet high performance prerequisites using a school districts base design: Predictions
and results from commissioning. Steve Pettyjohn (The Acoustics & Vibration Group, Inc., 5700 Broadway, Sacramento, CA 95820, [email protected]
acousticsandvibration.com)
An architectural firm was selected to design a new elementary school
using the school district’s standard building, but with modifications to meet
the prerequisites of the Collaborative for Performance Schools (CHPS).
Two acoustic prerequisites are a part of the CHPS programs including a
background limit of 45 dB(A) and a reverberation time of 0.6 seconds. A 2-
THURSDAY MORNING, 25 OCTOBER 2012
TRIANON A, 8:00 A.M. TO 11:15 A.M.
Session 4aPA
Physical Acoustics and Noise: Infrasound I
Roger M. Waxler, Chair
NCPA, University of Mississippi, University, MS 38677
Invited Papers
8:00
4aPA1. Coherent ambient infrasound recorded by the International Monitoring System. Robin S. Matoza (Institute of Geophysics
and Planetary Physics, Scripps Institution of Oceanography, UC San Diego, IGPP 0225, La Jolla, CA 92093-0225, [email protected]),
Matthieu Landes, Alexis Le Pichon (DAM/DIF, CEA, Arpajon, France), Lars Ceranna (BGR, Hannover, Germany), and David Brown
(IDC, Comprehensive Nuclear Test-Ban Treaty Organization (CTBTO), Vienna, Austria)
Ambient noise recorded by the International Monitoring System (IMS) infrasound network includes incoherent wind noise and
coherent infrasonic signals; both affect detection of signals of interest. We present summary statistics of coherent infrasound recorded
by the IMS network. We have performed systematic broadband (0.01-5 Hz) array processing of the IMS historical dataset (39 stations
2046
J. Acoust. Soc. Am., Vol. 132, No. 3, Pt. 2, September 2012
164th Meeting: Acoustical Society of America
Downloaded 18 Oct 2012 to 192.87.79.51. Redistribution subject to ASA license or copyright; see http://asadl.org/terms
2046
from 2005 to 2010) using an implementation of the Progressive Multi-Channel Correlation (PMCC) algorithm in log-frequency space.
From these results, we estimate multi-year 10th, 50th, and 90th percentiles of the rms pressure of coherent signals in 15 frequency bands
for each station. We compare the resulting coherent noise models in the 15 frequency bands with raw power spectral density noise models, which inherently include both incoherent and coherent noise. We show that IMS arrays consistently record coherent ambient infrasound across the broad frequency range from 0.01 to 5 Hz when wind-noise levels permit. Multi-year averaging of PMCC detection
bulletins emphasizes continuous signals such as oceanic microbaroms, as well as persistent transient signals such as repetitive volcanic,
surf, or anthropogenic activity (e.g., mining or industrial activity).
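The per-band percentile statistics described above can be sketched in a few lines. This is illustrative only: the 15-band count over 0.01-5 Hz is taken from the abstract, but the function name and the synthetic lognormal data are stand-ins, not the IMS dataset.

```python
import numpy as np

# Sketch (not the authors' code): build per-band percentile noise models
# from rms pressures of coherent detections in 15 log-spaced bands.
def coherent_noise_model(rms_pressure, percentiles=(10, 50, 90)):
    """rms_pressure: (n_detections, n_bands) rms pressures in Pa.
    Returns (len(percentiles), n_bands)."""
    return np.percentile(rms_pressure, percentiles, axis=0)

band_edges = np.logspace(np.log10(0.01), np.log10(5.0), 16)  # 15 bands, log-frequency
rng = np.random.default_rng(0)
rms = rng.lognormal(mean=-2.0, sigma=1.0, size=(1000, 15))   # synthetic detections
model = coherent_noise_model(rms)                            # rows: 10th, 50th, 90th
```

Comparing `model` against raw PSD-based noise curves would then separate the coherent contribution, as the abstract describes.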
8:20
4aPA2. Modeling the generation of infrasound from earthquakes. Stephen Arrowsmith (Geophysics Group, Los Alamos National
Laboratory, 1711 Second Street, Santa Fe, NM 87505, [email protected]), Relu Burlacu, Kristine Pankow (Seismograph Stations, University of Utah, Salt Lake City, UT), Brian Stump (Huffington Department of Earth Sciences, Southern Methodist University,
Dallas, TX), Richard Stead, Rod Whitaker (Geophysics Group, Los Alamos National Laboratory, Los Alamos, NM), and Chris Hayward
(Huffington Department of Earth Sciences, Southern Methodist University, Dallas, TX)
Earthquakes can generate complex seismo-acoustic wavefields, consisting of seismic waves, epicenter-coupled infrasound, and secondary infrasound. We report on the development of a numerical seismo-acoustic model for the generation of infrasound from earthquakes. We model the generation of seismic waves using a 3D finite difference algorithm that accounts for the earthquake moment
tensor, source time function, depth, and local geology. The resultant acceleration-time histories (on a 2D grid at the surface) provide the
initial conditions for modeling the near-field infrasonic pressure wave using the Rayleigh integral. Finally, we propagate the near-field
source pressure through the Ground-to-Space atmospheric model using a time-domain parabolic equation technique. The modeling is
applied to an earthquake of MW 4.6 that occurred on January 3, 2011, in Circleville, Utah; the ensuing predictions are in good agreement with observations made at the Utah network of infrasonic arrays, which indicate that the signals recorded at six arrays originate from the epicentral region. These results suggest that measured infrasound from the Circleville earthquake is consistent with the generation of infrasound from body waves in the epicentral region.
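The Rayleigh-integral step above can be illustrated with a minimal discretized sum. This is a sketch under simplifying assumptions (rigid-baffle half-space, retarded time handled by sample shifting), not the authors' 3D code; all names and values are illustrative.

```python
import numpy as np

# Minimal discretized Rayleigh integral: near-field pressure above a vibrating
# surface from acceleration-time histories a(x, y, t) on a 2D grid:
#   p(r, t) = rho/(2*pi) * sum over patches of a(r_s, t - R/c) * dS / R
def rayleigh_pressure(accel, grid_xyz, obs, dt, dS, rho=1.2, c=340.0):
    """accel: (n_points, n_t) accelerations; grid_xyz: (n_points, 3) patch
    centers; obs: (3,) observer position. Returns pressure series (n_t,)."""
    n_pts, n_t = accel.shape
    p = np.zeros(n_t)
    for i in range(n_pts):
        R = np.linalg.norm(obs - grid_xyz[i])
        delay = int(round(R / c / dt))           # retarded-time sample shift
        contrib = rho / (2 * np.pi) * accel[i] * dS / R
        if delay < n_t:
            p[delay:] += contrib[:n_t - delay]
    return p
```

In the workflow described above, the output of the finite-difference solver would supply `accel`, and `p` would then seed the parabolic-equation propagation step.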
8:40
4aPA3. The variability in infrasound observations from stratospheric returns. Läslo Evers and Pieter Smets (KNMI, PO Box 201,
De Bilt 3730 AE, Netherlands, [email protected])
Long range infrasound propagation depends on the wind and temperature around the stratopause (alt. 50 km). There is a seasonal
change in the wind direction around the equinoxes. In summer, the wind and temperature structure of the stratosphere is stable. In winter,
however, planetary waves in the troposphere can travel into the stratosphere and disturb the mean flow. This mean flow is most pronounced in the stratospheric surf zone from 20°N (20°S) to 60°N (60°S). One of the most dramatic events in the stratosphere is a Sudden Stratospheric Warming (SSW) during the winter. These occur every winter in the Northern Hemisphere as minor warmings, with a major SSW every other year. SSWs have a strong influence on infrasound propagation due to the large change in temperature and possible reversal of the wind. Therefore, SSWs are important to consider in relation to, e.g., regional and global monitoring with infrasound for verification purposes or other strategic deployments. In this presentation, the detectability of infrasound will be considered as a function of the state of the stratosphere. Variations in the strength of the circumpolar vortex (around the stratopause) and temperature changes will give rise to specific propagation conditions which often cannot be foreseen.
9:00
4aPA4. Anomalous transmission of infrasound through air-water and air-ground interfaces. Oleg A. Godin (CIRES, Univ. of Colorado and NOAA Earth System Research Laboratory, Physical Sciences Div., Mail Code R/PSD99, 325 Broadway, Boulder, CO 80305-3328, [email protected])
Sound speed and especially mass density exhibit large relative changes at gas-liquid and gas-solid interfaces. Sound transmission
through an interface with a strong impedance contrast is normally very weak. However, diffraction effects can lead to the phenomenon
of anomalous transparency of gas-liquid or gas-solid interfaces, where most of the acoustic power generated by a compact, low-frequency source located within the liquid or within the solid is radiated into the gas. Contrary to the conventional wisdom based on ray-theoretical predictions and observations at higher frequencies, infrasonic energy from compact waterborne and underground sources can
be effectively transmitted into air. This paper reviews the theory and emerging experimental evidence of the anomalous transparency.
Physical mechanisms responsible for enhanced sound transmission at low frequencies are discussed. The phenomenon of anomalous
transparency can have significant implications, in particular, for localization of buried objects and for acoustic monitoring, detection,
and classification of powerful underwater and underground explosions for the purposes of the Comprehensive Nuclear-Test-Ban Treaty.
9:20
4aPA5. Observation of the Young-Bedard Effect during the 2010 and 2011 Atlantic Hurricane Seasons. Philip Blom, Roger Waxler, William Garth Frazier, and Carrick Talmadge (National Center for Physical Acoustics, University of Mississippi, 1 Coliseum Dr,
University, MS 38677, [email protected])
Infrasonic acoustic energy is known to be generated during the collision of counter-propagating ocean surface waves of like periods.
The acoustic signals produced by such collisions are known as microbaroms. One significant source of microbarom radiation is the interaction of waves produced by large maritime storms with the background ocean swell. The region in which the microbaroms associated
with a large storm are produced tends to be hundreds of kilometers from the eye of the storm. It was suggested by Young and Bedard that,
when observed along propagation paths that pass through the storm, the microbarom signal can be severely refracted by the storm itself.
Such refraction has been observed in data from the 2010 and 2011 Atlantic hurricane seasons. A data processing algorithm has been
developed and implemented using the Multiple Signal Classification (MUSIC) spatial spectra and Akaike Information Criterion. The
results of this analysis will be presented and compared with predictions of the refraction using a geometric acoustics propagation model.
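A bare-bones version of the MUSIC spatial spectrum named above might look as follows. This sketch assumes a narrowband plane-wave model and a known source count (the abstract's AIC-based model-order selection is not reproduced); the array geometry and frequency in any use of it would be hypothetical.

```python
import numpy as np

# Sketch of the MUSIC pseudospectrum for azimuth estimation with a small
# infrasound array (illustrative only, not the authors' processing chain).
def music_spectrum(R, positions, f, n_src, c=340.0, n_az=360):
    """R: (M, M) Hermitian sensor covariance; positions: (M, 2) in meters;
    f: frequency in Hz. Returns azimuths (deg) and MUSIC pseudospectrum."""
    _, V = np.linalg.eigh(R)                  # eigenvalues ascending
    En = V[:, : R.shape[0] - n_src]           # noise subspace
    az = np.deg2rad(np.arange(n_az))
    spec = np.empty(n_az)
    for k, th in enumerate(az):
        slow = np.array([np.cos(th), np.sin(th)]) / c    # slowness vector
        a = np.exp(-2j * np.pi * f * (positions @ slow))  # steering vector
        proj = En.conj().T @ a                 # projection onto noise subspace
        spec[k] = 1.0 / np.real(np.vdot(proj, proj))
    return np.rad2deg(az), spec
```

Peaks of `spec` give candidate back-azimuths; refraction by a storm, as described above, would appear as a bias of the peak away from the true source bearing.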
9:40
4aPA6. Exploiting correlation in wind noise for enhanced detection of transient acoustic signals. William G. Frazier (NCPA, University of Mississippi, 1 Coliseum Dr, University, MS 38677, [email protected])
Wind noise presents significant difficulties when trying to detect transient acoustic signals. The most common approach to enhancing
signal detection when the signal-to-noise ratio is low due to wind noise is to utilize mechanical windscreens, large numbers of widely spaced microphones, or a combination of both. Results from recent experimental investigations and algorithm developments are presented that demonstrate an alternative method for improving detection of transients that utilizes only a few closely spaced microphones and a unique processing technique that explicitly exploits the correlation among wind-noise-induced pressure fluctuations.
10:00–10:10 Break
10:10
4aPA7. Validating infrasound sensor performance: Requirements, specifications, and calibration. Darren M. Hart (Ground Based
Nuclear Explosion Monitoring R & D, Sandia National Lab, PO Box 5800, Mail Stop 0404, Albuquerque, NM 87109, [email protected]), Rod Whitaker (Earth and Environmental Science, Los Alamos National Lab, Los Alamos, NM), and Harold Parks (Primary Standards Laboratory, Sandia National Lab, Albuquerque, NM)
The Ground-Based Nuclear Explosion Monitoring Research and Development (GNEM R&D) program at Sandia National Laboratories (SNL) is regarded as a primary center for unbiased expertise in testing and evaluation of geophysical sensors and instrumentation
for nuclear explosion monitoring. In the area of infrasound sensor evaluation, Sandia relies on the “comparison calibration” technique to
derive the characteristics of a new sensor under evaluation relative to a standard reference infrasound sensor. The traceability of our
technique to a primary standard is partially dependent on the infrasound calibration chamber operated by a similar program group at Los
Alamos National Laboratory (LANL). Previous work by LANL and the SNL Primary Standards Laboratory was able to determine the
LANL chamber pistonphone output pressure level to within 5% uncertainty including dimensional measurements and careful analysis of
the error budget. Over the past several years, the staff at LANL and the SNL Facility for Acceptance, Calibration and Testing (FACT) site have been developing a methodology for the systematic evaluation of infrasound sensors. That evaluation involves making a set of measurements that follow a prescribed set of procedures, allowing traceability to a primary standard for amplitude. Examples of evaluation tests will be shown for monitoring-quality infrasound sensors.
Contributed Papers
10:30
4aPA8. Noise reduction optimization of wind fences. JohnPaul R. Abbott,
Richard Raspet, and Jeremy Webster (Natl. Center for Physical Acoustics–
Dept. of Phys. and Astr., The University of Mississippi, 1 Coliseum Dr,
University, MS 38677, [email protected])
An earlier paper [J. Acoust. Soc. Am. 129, 2445 (2011)] described an
investigation on the optimization of a large wind screen for infrasonic noise
reduction. This wind screen is a circular enclosure of variable porosity, 3 m high and 5 m in diameter, consisting of ten chain-link fence panels about 3 m high and 1.5 m wide, with removable vinyl privacy slats, an open top, and a 0.1
m bottom gap. That paper reported on the noise reduction for a microphone
set at the center of the enclosure relative to another set outside the enclosure
as the screen’s porosity was varied. Both microphones were mounted under
porous foam flush to the ground. It was shown that the best reductions
occurred at intermediate porosities, with reductions of 6 dB or greater between 0.6 and 10 Hz and maximum reductions of about 13-15 dB. The current paper
will report on the effect of further optimization techniques—sealing off the
bottom gap, adding a roof, and placing a small porous dome over the
enclosed field microphone. Of these techniques, the addition of the dome was most effective, with noise reductions of 6 dB or greater between 0.3 and 10 Hz and maximum reductions of about 20-23 dB.
10:45
4aPA9. Uncertainty associated with in-situ frequency-response estimation by reference-sensor comparison at infrasound monitoring sites.
Thomas B. Gabrielson (Applied Research Laboratory, Penn State University, PO Box 30, State College, PA 16804, [email protected])
In-situ measurement of the frequency-response of infrasound array elements has proven to be a useful tool in the assessment of element performance. In order to transition to a true calibration process, the uncertainties
inherent in the method must be determined. It is critically important to distinguish between bias errors and random errors and to recognize that the ambient pressure fluctuations are typically not stationary in a statistical sense.
The time evolution of the cross-spectrum is particularly useful for identifying non-stationary behavior and for isolating high-quality data intervals.
Three important cases are tractable: high coherence between the reference
sensor and the infrasound element; low-to-moderate coherence resulting
from uncorrelated noise in one channel; and moderate coherence resulting
from uncorrelated noise in both channels. For a fixed number of averages,
the confidence limits for the frequency-response estimate are often considerably tighter than the corresponding limits for the estimated spectral
densities.
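The reference-sensor comparison described above amounts to a cross-spectral transfer-function estimate, H(f) = Sxy(f)/Sxx(f). A minimal sketch follows (function and parameter names are illustrative, and none of the abstract's uncertainty or stationarity analysis is reproduced):

```python
import numpy as np

# Sketch of in-situ frequency-response estimation by reference-sensor
# comparison: average the cross-spectrum of reference (x) and element under
# test (y) over windowed segments, then form H(f) = <X*Y> / <X*X>.
def relative_response(ref, test, fs, nseg=8):
    """ref, test: time series of equal length; fs: sample rate in Hz.
    Returns (frequencies, complex response estimate)."""
    n = len(ref) // nseg
    win = np.hanning(n)
    Sxx = np.zeros(n // 2 + 1)
    Sxy = np.zeros(n // 2 + 1, dtype=complex)
    for k in range(nseg):
        x = np.fft.rfft(win * ref[k * n:(k + 1) * n])
        y = np.fft.rfft(win * test[k * n:(k + 1) * n])
        Sxx += (x.conj() * x).real
        Sxy += x.conj() * y
    return np.fft.rfftfreq(n, d=1.0 / fs), Sxy / Sxx
```

As the abstract notes, the confidence limits on such an H(f) estimate depend on the coherence between the two channels, which this sketch does not compute.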
11:00
4aPA10. Direct measurement of the acoustical impedance of windnoise-reduction pipe systems. Thomas B. Gabrielson and Matthew Poese
(Applied Research Laboratory, Penn State University, PO Box 30, State
College, PA 16804, [email protected])
Wind-noise-reduction systems for infrasound monitoring stations often
take the form of networks of pipes and cavities. The acoustical response of
these wind-noise-reduction systems can be determined using ambient noise
and comparison to a reference sensor. Faults in these systems can sometimes
be detected by such response measurements; however, identification and
localization of a fault is more challenging. Another approach for performance assessment is to measure the acoustical impedance at accessible points
in the pipe network. This approach has the potential for high signal-to-noise
ratio, less dependence on atmospheric conditions, and the ability to isolate
sub-sections of the network. A portable apparatus has been designed for
field measurement of acoustical impedance. The impedance apparatus generates a controlled volume velocity and measures acoustic pressure at the
driving point.
THURSDAY MORNING, 25 OCTOBER 2012
TRUMAN A/B, 8:30 A.M. TO 11:30 A.M.
Session 4aPP
Psychological and Physiological Acoustics: Physiology and Perception (Poster Session)
Gabriel A. Bargen, Chair
Communication Sciences and Disorders, Idaho State University, Meridian, ID 83642
Contributed Papers
All posters will be on display from 8:30 a.m. to 11:30 a.m. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 8:30 a.m. to 10:00 a.m. and contributors of even-numbered papers will be at their
posters from 10:00 a.m. to 11:30 a.m.
4aPP1. Auditory brainstem responses evoked by chirp and click stimuli in children. Gabriel A. Bargen (Meridian-Health Science Center, Idaho State University, 1311 E. Central Drive, Meridian, ID 83642, [email protected])
The chirp-evoked auditory brainstem response (ABR) has been found to
be a more synchronous response in adults than the click-evoked ABR with
more areas of the cochlea contributing to the compound ABR. ABRs evoked
using delayed-model chirp stimuli have been shown to compensate for the temporal dispersion of the cochlea and result in larger wave V amplitudes and better overall morphology when compared to click-evoked ABRs. To date,
published research has only included adult subjects with the majority of
studies completed on subjects with normal hearing. This study compares the
chirp-evoked ABR to the click-evoked ABR in children to determine whether the chirp stimulus is more efficient than the click stimulus, which is currently used in most newborn hearing screening protocols and pediatric diagnostic ABR evaluations. Subjects from birth to eight years of age, with
normal and abnormal hearing, participated in this study. This presentation
will include preliminary study findings.
4aPP2. Effectiveness of steady versus varied-color/varied-pattern visual
tasks during acquisition of late auditory evoked potentials. Charles G.
Marx and Edward L. Goshorn (Speech and Hearing Sciences, University of
Southern Mississippi, Psychoacoustics Research Laboratory, Hattiesburg,
MS 39401, [email protected])
Instructions for late auditory evoked potential (LAEP) testing include a
need for the subject to remain alert (not go to sleep). Previous studies show an
inverse relationship between alertness level and waveform morphology. Thus,
a need exists to maintain alertness during LAEP testing. If not maintained, a
wide range of alertness, and thus waveform morphology, may exist from one
run to the next. Therefore, if alertness level is not controlled, any variations in
waveform morphology may be due to variations in alertness rather than auditory system integrity. Previous investigators have implemented visual tasks
consisting of still or action images in an attempt to maintain alertness. In these
visual tasks, a subject is typically instructed to attend to a video screen during
LAEP testing. This project investigated the effectiveness of two visual task
screens: unvaried blue with no pattern, versus varied colors-patterns occurring
in 1-3 second random intervals. LAEPs were gathered on twenty-five young
adult subjects who were instructed to attend to a video display of one of the
screens during LAEP testing. Six replicates were obtained for each screen in
counter-balanced order. Results showed no significant (p>.05) differences in
mean P1 or P2 latency or amplitude for the two screens.
4aPP3. Links between mismatch negativity responses and speech intelligibility in noise. Tess K. Koerner, Yang Zhang, and Peggy Nelson (Department of Speech-Language-Hearing Sciences, University of Minnesota,
Minneapolis, MN 55408, [email protected])
Research has shown that the amplitude and latency of neural responses to
passive mismatch negativity (MMN) tasks are affected by noise (Billings et al.,
2010). Further studies have revealed that informational masking noise results in
decreased P3 amplitude and increased P3 latency, which correlates with
decreased discrimination abilities and reaction time (Bennett et al., 2012). This
study aims to further investigate neural processing of speech in differing types
of noise by attempting to correlate MMN neural responses to consonant and
vowel stimuli with results from behavioral sentence recognition tasks. Preliminary behavioral data indicate that noise conditions significantly compromise the
perception of consonant change in an oddball discrimination task. Noise
appears to have less of an effect on the perception of vowel change. The MMN
data are being collected for the detection of consonant change and vowel
change in different noise conditions. The results will be examined to address
how well the pre-attentive MMN measures at the phonemic level can predict
speech intelligibility at the sentence level using the same noise conditions.
4aPP4. Effect of broadband contralateral noise on distortion product
otoacoustic emissions and psychophysical tuning curves. Andrzej Wicher
(Institute of Acoustics AMU, Umultowska 85, Poznan 61-614, Poland,
[email protected])
The main purpose of this work was to describe the influence of contralateral stimulation (CS) on distortion product otoacoustic emissions (DPOAEs) and psychophysical tuning curves (PTCs). The fast method for determining PTCs was used in the study. DPOAEs and PTCs were measured in two modes: in the presence or absence of CS. The CS was a broadband noise at a level of 50 dB SPL. The primary tones with frequencies f1 and f2 (f2/f1 = 1.21)
were presented at levels of L1 = 60 dB SPL, and L2 = 50 dB SPL. A pulsed
sinusoidal signal at a sensation level (SL) of 10 dB was used in the measurements of the PTC. The signal frequency was 1 or 2 kHz. Ten normal-hearing
subjects participated in this study. The CS caused a decrease in the level of
the DPOAEs (suppression effect) in 90% of cases, in the whole frequency
range of f2 (i.e. from 845 to 6200 Hz). The maximum suppression of the
DPOAE level occurs for the f2 frequency from 1 to 2 kHz. For both signal
frequencies the CS significantly reduces the sharpness of the PTCs. The CS
has a significant effect on decreasing the quality factor (Q10) of PTCs.
4aPP5. Improving the discriminability of simultaneous auditory alarms
using principles of voice-leading found in music. Matthew J. Davis and
Nial Klyn (Speech and Hearing Science, The Ohio State University, Columbus, OH 43210, [email protected])
Predicting the ability of listeners to discriminate between simultaneous
auditory streams is a longstanding challenge in the design of auditory displays. The creation of an efficacious artificial auditory scene can require an
immense amount of knowledge about how sounds are heard and interpreted, commonly called auditory scene analysis. Fortunately, musicians
have been constructing novel auditory scenes with multiple simultaneous
streams for many centuries, and the rules governing the composition of
Western polyphonic music have even been explicitly codified in a range of
techniques referred to as “voice-leading”. These relatively simple but effective rules have the potential to help guide designers of auditory displays by
maximizing the distinctions between concurrent signals. An experiment was
conducted to measure the discriminability of alarms designed with musical
“voice-leading” features as compared with existing alarms from Learjet 31a
and Learjet 35 aircraft. Signals designed with the auditory scene synthesis
techniques embedded in musical “voice-leading” were found to significantly
improve discriminability for up to five simultaneous alarms. By applying
these principles to warning signals, this study has sought to implement a system for creating new auditory warnings that contain more efficient differentiating properties and furthermore conform to a more unified stylistic identity.
4aPP6. Effects of listener bias on auditory acuity for aircraft in real-world ambient environments. Matthew J. Davis, Lawrence L. Feth (Speech
and Hearing Science, The Ohio State University, Columbus, OH 43210, davis.
[email protected]), Michael Spottswood, and John Hall (711 Human Performance
Wing, Air Force Research Laboratory, Wright Patterson Air Force Base, OH)
Hoglund et al. (2010) investigated the ability of listeners to detect the
presence of aircraft masked by ongoing ambient sounds using a two-interval forced-choice (2IFC) procedure. They found that the signal-to-noise ratio
required for target detection varied across the different types of ambient
environments. Recordings of helicopters in flight were used as target signals
and maskers were recorded in rural, suburban and urban locations. Their
goal was to better approximate real-world conditions. The goal of the current study is to extend those results to include factors that may bias the listener under more realistic conditions. The 2IFC procedure is designed to
minimize listener bias; however, real-world listening conditions are more
typically one interval situations. The frequency of occurrence of aircraft
over-flights and the costs of errors and rewards for correct responses may
substantially affect some estimates of listener sensitivity. Work reported here
investigated the influence of a priori probability of target occurrence and manipulation of the pay-off matrix on the acuity measures reported by Hoglund et al., using the same target sounds and environmental maskers.
Psychometric functions shifted by ~18 dB as frequency of targets varied from
20% to 80%. ROC curves display the influence of pay-off manipulations.
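The bias manipulation described above has a standard signal-detection reading: an ideal observer's likelihood-ratio criterion depends on the prior odds of a target and on the pay-off matrix. A sketch with illustrative payoff values (not the study's):

```python
# Signal-detection sketch (values are illustrative, not from the study):
# the optimal likelihood-ratio criterion beta for a yes/no task is
#   beta = [P(noise)/P(signal)] * [(V_correct_rejection + C_false_alarm) /
#                                  (V_hit + C_miss)]
def optimal_beta(p_target, v_hit=1.0, c_fa=1.0, v_cr=1.0, c_miss=1.0):
    prior_odds = (1.0 - p_target) / p_target
    payoff = (v_cr + c_fa) / (v_hit + c_miss)
    return prior_odds * payoff

beta_rare = optimal_beta(0.2)      # rare targets -> conservative criterion
beta_frequent = optimal_beta(0.8)  # frequent targets -> liberal criterion
```

A criterion shift of this kind, rather than a change in sensitivity, is one way to read the ~18 dB shift in the psychometric functions reported above.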
4aPP7. Modulation difference limen for spectral center-of-gravity signals. Amy E. Stewart, Evelyn M. Hoglund, Yonghee Oh, and Lawrence L.
Feth (Speech and Hearing Science, Ohio State University, 110 Pressey Hall,
1070 Carmack Road, Columbus, OH 43210, [email protected])
Auditory processing of the dynamic spectral center-of-gravity (COG) of
a pair of amplitude modulated (AM) tones was investigated by comparing
the modulation difference limen (DL) for a COG signal to that for a sinusoidally frequency modulated (FM) tone. The center-of-gravity effect refers to
the listener’s ability to track an amplitude-weighted instantaneous frequency
between two tones differing in frequency. To create a dynamic COG, two
tones separated in frequency by four ERB were amplitude modulated at the
same modulation rate and modulation depth. AM modulators differed only in
relative phase. For five normal-hearing listeners, a 2IFC discrimination task
was used to determine the DL for frequency deviation across a range of center frequencies, modulation frequencies, and frequency deviations for both
FM and COG signals. COG signals were matched to FM signals (same center
frequency, modulation frequency, and frequency deviation). Frequency deviation was determined by equating the maximum instantaneous spectral centroid for each signal type. COG DLs were approximately three times larger
than the corresponding FM DLs; however, variation with modulation frequency and frequency deviation was similar for the two types of signals.
Results indicate comparable auditory processing for the two types of signals.
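The COG stimulus construction described above can be sketched directly: two tones modulated at the same rate and depth but in antiphase, so the amplitude-weighted instantaneous frequency sweeps between them. All parameter values here are illustrative, not those of the study.

```python
import numpy as np

# Sketch of a spectral center-of-gravity (COG) stimulus: two AM tones with
# antiphase modulators; the amplitude-weighted instantaneous frequency (cog)
# moves between the carriers. Parameters are illustrative.
fs = 16000
t = np.arange(int(fs * 0.5)) / fs
f1, f2 = 1000.0, 1600.0        # carriers, roughly 4 ERB apart near 1-2 kHz
fm, m = 5.0, 0.8               # modulation rate (Hz) and depth
a1 = 1 + m * np.cos(2 * np.pi * fm * t)            # modulator, phase 0
a2 = 1 + m * np.cos(2 * np.pi * fm * t + np.pi)    # modulator, antiphase
signal = a1 * np.sin(2 * np.pi * f1 * t) + a2 * np.sin(2 * np.pi * f2 * t)
cog = (a1 * f1 + a2 * f2) / (a1 + a2)   # amplitude-weighted instantaneous frequency
```

Matching the excursion of `cog` to the frequency deviation of an FM tone is what allows the DL comparison described in the abstract.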
4aPP8. Temporal weighting for interaural time differences in low-frequency pure tones. Anna C. Diedesch, Jacqueline M. Bibee, and G. Christopher Stecker (Speech & Hearing Sciences, University of Washington,
Seattle, WA 98110, [email protected])
In contrast to envelope-based interaural time differences (ITD) at high frequencies, where sound onsets play a dominant role, the reliability and salience of fine-structure ITD at low frequencies (<1500 Hz) suggest uniform sensitivity to information across periods of an ongoing stimulus waveform. Several past
studies, however, have demonstrated that low-frequency ITD thresholds improve suboptimally with increasing sound duration [e.g., Houtgast & Plomp 1968, JASA 44:807-12], suggesting that the initial periods of a brief tone play a
greater role in ITD processing than do later periods. Here, we measured the
temporal profile of ITD sensitivity in pure tones ranging from 250-1000 Hz.
Sounds were presented with ITD that either remained fixed over the sound duration (condition RR) or progressed linearly to eliminate the ITD cue from either the beginning (condition 0R) or end (R0) of the sound. Durations varied
from 40-640 ms, including 20 ms ramps applied diotically to minimize envelope cues. ITD detection thresholds demonstrated (a) suboptimal improvement
with duration and (b) greater sensitivity to ITD available early (R0) rather than
late (0R) in the stimulus, a pattern nearly identical to that observed for high-frequency envelope ITD. [Supported by NIH R01 DC011548.]
4aPP9. Novelty detection of covariance among stimulus attributes in auditory perception. Christian Stilp (Department of Psychological and Brain
Sciences, University of Louisville, Louisville, KY 40292, [email protected]) and Keith Kluender (Speech, Language, and Hearing Sciences,
Purdue University, West Lafayette, IN)
Novelty detection is characterized by enhanced response to a stimulus
with some property changed relative to expected input. Many reports examine
sensitivity to deviations in physical acoustic dimensions, patterns, or simple
rules, but fail to consider information in higher-order statistical relationships
between dimensions. Here we report novelty detection that depends upon encoding of experienced covariance between complex acoustic dimensions (attack/decay, spectral shape). Novelty is defined as violation of experienced covariance between otherwise independent acoustic attributes. Listeners primarily discriminated sound pairs in which attributes supported robust
covariance (15 pairs, Consistent condition) and rarely discriminated sounds
that violated this redundancy (1 pair, Orthogonal condition) in randomized
AXB trials without feedback. Probability of occurrence for Orthogonal trials
was minimized by withholding them until the final testing block. Discrimination accuracy for Orthogonal sounds exceeded that for Consistent sounds as
well as that for control stimuli lacking experienced redundancy between attributes. Increasing Orthogonal trial probability reduces this enhancement, as
does acoustic similarity between Consistent and withheld Orthogonal sound
pairs. Results parallel novelty detection as measured by stimulus-specific adaptation and mismatch negativity. Implications for high-level auditory perception and organization will be discussed. [Supported by NIDCD.]
4aPP10. Using channel-specific models to detect and remove reverberation in cochlear implants. Jill M. Desmond, Chandra S. Throckmorton, and
Leslie M. Collins (Department of Electrical and Computer Engineering, Duke
University, Durham, NC 27713, [email protected])
Reverberation results in the smearing of both harmonic and temporal elements of speech through self-masking (masking within an individual phoneme)
and overlap-masking (masking of one phoneme by a preceding phoneme).
Self-masking is responsible for flattening formant transitions, while overlap-masking results in the masking of low-energy consonants by higher-energy
vowels. Reverberation effects, especially the flattening of formant transitions,
are especially detrimental to cochlear implant listeners because they already
have access to only limited spectral and temporal information (Kokkinakis and
Loizou, 2011). Efforts to model and correct for reverberation in acoustic listening scenarios can be quite complex, requiring estimation of the room transfer
function and localization of the source and receiver. However, due to the limited resolution associated with cochlear implant stimulation, simpler processing
for reverberation detection and mitigation may be possible. This study models
speech stimuli in a cochlear implant on a per-channel basis both in quiet and in
reverberation, where reverberation is characterized by different reverberation
times, room dimensions, and source locations. The efficacy of these models for
detecting the presence of reverberation and subsequently removing its effects
from speech stimuli is assessed. [This work was funded by the National Institutes of Health (NIDCD), R01-DC-007994-04.]
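As a toy illustration of per-channel reverberation effects of the kind described above (this is not the authors' model; the exponential-tail smearing and all parameter values are assumptions):

```python
import numpy as np

# Illustrative sketch: smear a per-channel envelope with an exponential
# reverberant tail whose decay is set by RT60, mimicking overlap-masking of a
# weak "consonant" burst by the tail of a preceding "vowel" burst.
def reverberate_envelope(env, fs, rt60):
    """env: channel envelope (1D); fs: frame rate in Hz; rt60: seconds."""
    n = int(fs * rt60)
    # 60 dB of decay over rt60 seconds -> amplitude factor 10**(-3*t/rt60)
    tail = 10.0 ** (-3.0 * np.arange(n) / (fs * rt60))
    out = np.convolve(env, tail)[: len(env)]
    return out / out.max()

fs = 100.0                    # envelope frame rate (illustrative)
env = np.zeros(100)
env[10:30] = 1.0              # strong "vowel" burst
env[50:55] = 0.2              # weaker "consonant" burst
rev = reverberate_envelope(env, fs, rt60=0.5)
```

In the silent gap between the two bursts, `rev` stays above zero where `env` is zero; a per-channel classifier of the kind the abstract describes could exploit exactly such filled gaps to detect reverberation.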
4aPP11. The effect of visual information on speech perception in noise
by electroacoustic hearing. Qudsia Tahmina, Moulesh Bhandary, Behnam
Azimi, Yi Hu (Electrical Engineering & Computer Science, University of
Wisconsin-Milwaukee, 3200 N Cramer St, Milwaukee, WI 53211, [email protected]), Rene L. Utianski, and Julie Liss (Speech & Hearing Science,
Arizona State University, Tempe, AZ)
The addition of amplified low frequency hearing to cochlear implants
has been shown to provide substantial performance benefits for cochlear
implant (CI) users, particularly in noise. In the current study, we examined
listeners maintain a rather detailed representation of distributional information that is continuously updated during the task. This interpretation is in
line with the assumptions underlying many current models of perceptual
(statistical) learning in speech perception. However, it is possible to get
optimal-like behavior by maintaining a general distributional representation
or by using simpler “local” strategies based on only a few of the most
recently experienced exemplars. The results will be presented with multiple
categorization models, which testify to the difficulty of interpreting claims
of distributional learning in categorization. [Work supported by NIHNIDCD.]
the extent to which the presence of visual information (facial movement
during speech) augments perception for CI listeners with electroacoustic
stimulation (EAS). Two experiments were conducted. In the first one, participants transcribed semantically anomalous phrases in quiet and noise.
Intelligibility results showed modest improvements in intelligibility for low
and high levels of noise, and dramatic gains (30+ percentage points) in midlevel noise. Error analyses conducted on the transcripts further suggest that
the perceptual benefits extended beyond articulatory place information to
that of facilitating lexical segmentation. In the second experiment, participants were tested on their recognition of words in sentences corrupted by
noise. Results showed significant benefit of hearing aids in EAS patients.
However, the benefit of acoustic hearing was not apparent when visual information was available. Our results will provide guidance for auditory
rehabilitation strategies in this population.
4aPP13. Ictal and interictal changes in auditory processing. David M.
Daly (Hugin, Inc, Box 210855, Dallas, TX 75211, [email protected]
stanford.edu)
Altered neuronal functioning manifest in seizures can also cause interictal misperceptions. The present case experienced nausea, head-turning,
and automatism with loss of consciousness; following this, he could see and
hear, but could not speak for up to 30 min. Left hemisphere initiated speech;
seizures involved right frontal and anterior temporal areas. He underwent
anterior temporal lobe resection, and for the next year, seizures were medically controlled. Then seizures recurred; although he remained conscious,
he was often amnestic instead, and, again, post-ictally mute. He underwent
resection of right frontal lobe; he recovered over the next year with only
prophylactic medication. Patient was tested using pre-recorded sets of GY,
BDG, and iIe delivered through headphones [J Neurophysiol. 44:1, 200-22
(1980)]. In the year after first surgery, he classified vowels appropriately,
but GY as ‘not /ye/’ and /ye/, and BDG as /be/ and /de/; right ear and binaural performances were statistically less anomalous than left ear. Following
second surgery, he classified GY and BDG appropriately. Left ear performance varied by at most chance from standard; right ear was indistinguishable
from the standard (p < 0.0001).
4aPP12. Optimal categorization of sounds varying on a single dimension. Megan Kittleson (Speech, Language, and Hearing Sciences, University
of Arizona, 1131 E. 2nd St., Tucson, AZ 85721, [email protected]
edu), Randy L. Diehl (Psychology, University of Texas at Austin, Austin,
TX), and Andrew J. Lotto (Speech, Language, and Hearing Sciences, University of Arizona, Tucson, AZ)
Listeners were randomly presented narrow-band filtered noise bursts
that varied in filter center frequency from two overlapping, Gaussian-like
distributions. Participants mapped these distributions of sounds onto creatures in a video game where they received visual and auditory feedback
about their accuracy. Categorization boundaries for each participant were
estimated using logistic regression and compared with the optimal boundary
from an ideal observer model. The participants appeared to be able to establish near optimal boundaries rapidly and had a remarkable ability to shift
these boundaries when the underlying distributions changed - even when
these changes were not explicitly signaled. These results suggest that
THURSDAY MORNING, 25 OCTOBER 2012
BASIE A, 8:00 A.M. TO 12:00 NOON
Session 4aSC
Speech Communication: The Nature of Lexical Representations in the Perception and Production of Speech
Allard Jongman, Cochair
Univ. of Kansas, 1541 Lilac Ln., Lawrence, KS 66045
Joan A. Sereno, Cochair
Linguistics, University of Kansas, Lawrence, KS 66049
Chair’s Introduction—8:00
Invited Papers
8:05
4aSC1. The role of phonological alternation in speech production: Evidence from Mandarin tone Sandhi. Stephen Politzer-Ahles
and Jie Zhang (Linguistics, University of Kansas, Lawrence, KS 66046, [email protected])
An open question in psycholinguistics is the nature of the phonological representations used during speech production and the processes that are applied to them, particularly between lexical access and articulatory implementation. While phonological theory posits that
speakers’ grammar includes mechanisms for transforming from input to output forms, whether such mechanisms also are used by the
parser during online speech production is unclear. We examined the role of phonological alternations in Mandarin Chinese real and novel
compounds using the implicit priming paradigm, which can reveal forms being used prior to articulation. We compared modulations of
the implicit priming effect in sets of words that are heterogeneous at the lexical level (where one word has a different lexical tone than the
rest) to those in sets that are heterogeneous at the derived level (where a word has the same underlying lexical tone, but that tone surfaces
as a different tone because of tone sandhi). Both lexical and derived heterogeneous sets reduced the priming effect, suggesting that phonological alternation was computed abstractly before articulation was initiated. We argue that the progression from underlying phonological
representations to articulatory execution may be mediated online by a level at which abstract phonological alternations are processed.
J. Acoust. Soc. Am., Vol. 132, No. 3, Pt. 2, September 2012
8:25
4aSC2. Discreteness and asymmetry in phonological representations of words. Aditi Lahiri (Centre for Linguistics and Philology,
University of Oxford, Walton Street, Oxford OX1 2HG, United Kingdom, [email protected])
Lexical phonological contrasts are generally binary and abound in asymmetries. For example, vowels can contrast in nasality (oral
vs. nasal), but the presence of contrastive nasal vowels implies the presence of oral vowels, and not vice versa. The occurrence of geminates in a language implies the presence of single consonants and therefore, a contrast in consonantal length. Here we address the question of how these asymmetries constrain phonological representations of WORDS in the mental lexicon, and how these constraints
affect language processing and change. Various phonological contrasts will be discussed including features, length, and tone, claiming
that representations are discrete and asymmetric which in turn lead to asymmetry in processing. Experimental evidence will be presented
from behavioural as well as brain imaging studies in Bengali, English, and German.
8:45
4aSC3. From speech signal to phonological features—A long way (60 years and counting). Henning Reetz (Dpt. of Empirical Linguistics, Goethe-University Frankfurt, Georg-Voigt-Str. 6/II, Frankfurt 60325, Germany, [email protected])
When Jakobson, Fant and Halle proposed their feature system for describing the representation of speech in 1952, they wrote: “In decoding a message received (A), the listener operates with the perceptual data (B) which are obtained from the ear responses (C) […] The systematic exploration of the first two of these levels belongs to the future and is an urgent duty.” In the last three decades, this approach has been substituted by stochastic modeling to map the speech signal to lexical (word) entries in automatic speech recognition. Although this has led to working ASR applications, the process of speech understanding by humans is still an ‘urgent duty’. The FUL (featural underspecified lexicon) system is one model for this process, and this talk will present its methods for mapping the signal onto phonological features, which removes acoustic detail that we assume is irrelevant for (human) speech understanding. The analysis is performed with a high temporal resolution to model the ‘online’ processing of the human brain and provide redundancy for noisy signals. The ultimate goal is to match the acoustic signal to feature sets that activate possible and suppress improbable word candidates. These feature sets themselves are defined by the phonological structure of a language rather than by extensive training with speech material. The presentation includes an online demonstration of the system.
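The matching logic of FUL-style models can be summarized compactly. The toy sketch below implements only the three-way evaluation (match / no-mismatch / mismatch) commonly attributed to such models, not the FUL system itself; the feature names and the two lexical entries are invented for illustration.

```python
# A signal feature MATCHES an entry that specifies the same value,
# MISMATCHES an entry that specifies a conflicting value, and is a
# NO-MISMATCH when the entry leaves the feature unspecified
# (underspecification tolerates the signal rather than conflicting with it).

def evaluate(signal_features, lexical_entry):
    """Return (matches, no_mismatches, mismatches) for one candidate word."""
    match = no_mismatch = mismatch = 0
    for feature, value in signal_features.items():
        if feature not in lexical_entry:      # underspecified feature
            no_mismatch += 1
        elif lexical_entry[feature] == value:
            match += 1
        else:
            mismatch += 1
    return match, no_mismatch, mismatch

# Invented entries: "ban" specifies labial place; "dan" leaves place
# unspecified, as coronal place often is in underspecification accounts.
lexicon = {
    "ban": {"place": "labial", "voice": True},
    "dan": {"voice": True},
}
signal = {"place": "coronal", "voice": True}  # a coronal onset was heard

scores = {word: evaluate(signal, entry) for word, entry in lexicon.items()}
print(scores["ban"][2], scores["dan"][2])     # mismatch counts
```

Here the coronal signal conflicts with "ban" (one mismatch) but is tolerated by the underspecified "dan" (no-mismatch), so only "dan" would stay activated as a candidate.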
9:05
4aSC4. The exemplar-based lexicon. Keith Johnson (Linguistics, UC Berkeley, 1203 Dwinelle Hall, Berkeley, CA 94720,
[email protected])
Exemplar-based models of memory have been successful in accounting for a variety of recall and recognition data in general cognitive psychology, and provide an interesting counter-point to other more “standard” models of the mental lexicon. This talk will discuss
the ways that the exemplar-based lexicon deals with spoken language variability in auditory word recognition: with emphasis on talker
normalization, cross-dialect speech perception, and the recognition of highly variable conversational speech. I will also discuss the use
of exemplar-based models in the linguistic theory of sound change, and the relationship between exemplar-based models and neurophonetics. Although the specific modeling strategy employed in exemplar-based modeling is likely over-simplified and wrong in some
ways, the success of this type of model indicates that something true is being captured. I will suggest that what makes exemplar-based models useful is that they provide a way for the theorist to include a role for fine phonetic detail in phonological representations. The ultimate argument is that phonetic memory is gradient as well as categorical, and should be modeled as such.
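For readers unfamiliar with exemplar models, the following is a minimal sketch of the summed-similarity classification rule used in GCM-style exemplar models (not the specific model discussed in the talk); the 2-D "phonetic" coordinates and exemplar clouds are invented.

```python
import math

def similarity(x, exemplar, c=1.0):
    """Exponentially decaying similarity to one stored exemplar."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, exemplar)))
    return math.exp(-c * dist)

def classify(x, memory):
    """Sum similarity to every stored exemplar of each category and
    return the category with the greatest summed similarity."""
    scores = {
        category: sum(similarity(x, ex) for ex in exemplars)
        for category, exemplars in memory.items()
    }
    return max(scores, key=scores.get)

# Invented 2-D exemplar clouds (think of them as normalized F1/F2 values).
memory = {
    "i": [(2.0, 8.0), (2.2, 7.8), (1.9, 8.3)],
    "a": [(7.0, 3.0), (6.8, 3.3), (7.3, 2.9)],
}
print(classify((2.1, 8.1), memory))
```

Because every stored token contributes, fine phonetic detail (talker, dialect, speaking style) is retained in memory and automatically shapes categorization, which is the property the talk highlights.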
9:25
4aSC5. Processing pronunciation variants: Rules and representations. Cynthia M. Connine and Stanislav Sajin (Psychology,
Binghamton University, PO Box 6000, Binghamton, NY 13902, [email protected])
Both representational and inference rule mechanisms have been proposed for recognizing pronunciation variants. In our work, we have
advanced a view for recognizing pronunciation variants in which multiple forms are represented in the lexicon, with non-canonical forms
represented based on their frequency of occurrence and canonical forms represented in a privileged (immune to frequency of occurrence) status due to their congruence with orthography. These investigations have focused on variants in which the relevant alternation was word internal (e.g. schwa vowel deletion, flapping and nasal flaps). Other classes of pronunciation variants are formed due to interactions with
segmental properties of surrounding words (e.g. place assimilation, fricative assimilation); the processing explanation advanced for such variants has focused on phonological inference rules that recover underlying representations. The current project investigated the relative role of
inferential processes and representation in processing variants formed due to interaction at word boundaries (e.g. fricative assimilation).
9:45
4aSC6. The consequences of lexical sensitivity to fine grained detail: Solving the problems of integrating cues, and processing
speech in time. Bob McMurray (Psychology, University of Iowa, E11 SSH, Iowa City, IA 52242, [email protected]) and
Joseph C. Toscano (Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana, IL)
Work on language comprehension is classically divided into two fields. Speech perception asks how listeners cope with variability
from factors like talker and coarticulation to compute some phoneme-like unit; and word recognition assumes these units, asking how listeners cope with time and match the input to the lexicon. Evidence that within-category detail affects lexical activation (Andruski, et al.,
1994; McMurray, et al., 2002) challenges this view: variability in the input is not “handled” by lower-level processes and instead survives
until late in processing. However, the consequences of this have not been fleshed out. This talk begins to explore them using evidence
from the eye-tracking paradigms. First, I show how lexical activation/competition processes can help cope with perceptual problems, by
integrating acoustic cues that are strung out over time. Next, I examine a fundamental issue in word recognition, temporal order (e.g., distinguishing cat and tack). I present evidence that listeners represent words with little inherent order information, and raise the possibility
that fine-grained acoustic detail may serve as a proxy for this. Together these findings suggest that real-time lexical processes may help
cope with perceptual ambiguity, and that fine-grained perceptual detail may help listeners cope with the problem of time.
10:05–10:20 Break
10:20
4aSC7. The structure of the lexical network influences lexical processing. Michael S. Vitevitch and Rutherford Goldstein (Psychology, University of Kansas, 1415 Jayhawk Blvd., Lawrence, KS 66045, [email protected])
Network science is an emerging field that uses computational tools from physics, mathematics, computer science, and other fields to
examine the structure of complex systems, and explore how that structure might influence processing. In this approach, words in the
mental lexicon can be represented as nodes in a network with links connecting words that are phonologically related to each other. Analyses using the mathematical tools of network science suggest that phonological networks from a variety of languages exhibit the characteristics of small-world networks, and share several other structural features. Studies of small-world networks in other domains have
demonstrated that such networks are robust to damage, and can be searched very efficiently. Using conventional psycholinguistic tasks,
we examined how certain structural characteristics influence the process of spoken word recognition. The findings from these experiments suggest that the lexicon is structured in a non-arbitrary manner, and that this structure influences lexical processing.
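As a concrete illustration of the network construction described (not the authors' code or data), the sketch below links words that are one edit apart (here, one letter; real studies use phonemic transcriptions) and reads off node degree, a simple structural characteristic; the toy word list is invented.

```python
from itertools import combinations

def one_edit_apart(w1, w2):
    """True if w1 and w2 differ by one substitution, insertion, or
    deletion: a common operationalization of phonological neighbors."""
    if abs(len(w1) - len(w2)) > 1:
        return False
    if len(w1) == len(w2):
        return sum(a != b for a, b in zip(w1, w2)) == 1
    short, long_ = sorted((w1, w2), key=len)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

words = ["cat", "bat", "cot", "cast", "at", "dog"]
edges = [(a, b) for a, b in combinations(words, 2) if one_edit_apart(a, b)]
degree = {w: sum(w in e for e in edges) for w in words}
print(degree["cat"], degree["dog"])
```

"cat" is a hub (many phonological neighbors) while "dog" is an isolate; network analyses of whole lexicons compute such statistics, plus clustering and path lengths, at scale.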
Contributed Papers
10:40
4aSC8. How talker-adaptation helps listeners recognize reduced word-forms. Katja Poellmann (International Max Planck Research School for
Language Sciences, P.O. Box 310, Nijmegen 6500 AH, Netherlands, [email protected]), James M. McQueen (Behavioural Science Institute
and Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, Gelderland, Netherlands), and Holger Mitterer (Max Planck
Institute for Psycholinguistics, Nijmegen, Gelderland, Netherlands)
Two eye-tracking experiments tested whether native listeners can adapt
to reductions in casual Dutch speech. Listeners were exposed to segmental
([b] > [m]), syllabic (full-vowel-deletion), or no reductions. In a subsequent
test phase, all three listener groups were tested on how efficiently they could
recognize both types of reduced words. In the first experiment’s exposure
phase, the (un)reduced target words were predictable. The segmental reductions were completely consistent (i.e., involved the same input sequences).
Learning about them was found to be pattern-specific and generalized in the
test phase to new reduced /b/-words. The syllabic reductions were not consistent (i.e., involved variable input sequences). Learning about them was
weak and not pattern-specific. Experiment 2 examined effects of word repetition and predictability. The (un-)reduced test words appeared in the exposure phase and were not predictable. There was no evidence of learning for
the segmental reductions, probably because they were not predictable during
exposure. But there was word-specific learning for the vowel-deleted words.
The results suggest that learning about reductions is pattern-specific and
generalizes to new words if the input is consistent and predictable. With
variable input, there is more likely to be adaptation to a general speaking
style and word-specific learning.
10:55
4aSC9. Lexically guided category retuning affects low-level acoustic
processing. Eva Reinisch and Lori L. Holt (Psychology, Carnegie Mellon
University, 5000 Forbes Avenue, Pittsburgh, PA 15213, [email protected])
Listeners adapt to non-canonically produced speech by using lexical
knowledge to retune phoneme categories. It is unclear, however, whether
these retuned categories affect perception at the category level or the signal-to-representation mapping. This was addressed by exploring conditions of
cross-speaker generalization of retuned fricatives. During a lexical-decision
task, American listeners heard a female Dutch learner of English whose
word-final /f/ or /s/ was replaced by an ambiguous sound. At test listeners
categorized minimal pairs ending in sounds along [f]-[s] continua spoken by
the same female speaker and a new male speaker. Listeners’ [f]-[s] categorization for the previously heard speaker shifted as a function of exposure.
Generalization to the new speaker was not found when continua between his
natural [f]-[s] endpoints were presented. However, listeners did generalize
to this voice when presented with only a subset of the male’s most [f]-like
continuum steps, adjusting the fricative range to match the exposure speaker’s, and eliminating a bias toward /s/-responses in the male continua. Listeners thus use short-term acquired knowledge about acoustic properties of
phonemes even to interpret upcoming phonemes from previously unheard
speakers. Acoustic match, not speaker identity, predicted the results, supporting accounts of the effect originating in the early signal-to-representation
mapping.
11:10
4aSC10. Lexical effects on the perception of /l/ allophones in English.
D. H. Whalen (Speech-Language-Hearing Sciences, City University of New
York, 360 Fifth Ave., New York, NY 10016, [email protected]), Ylana
Beller-Marino, Stephanie Kakadelis (Dept. of Linguistics, City University
of New York, New York, NY), Katherine M. Dawson (Speech-LanguageHearing Sciences, City University of New York, New York, NY), Catherine
T. Best (MARCS Institute, University of Western Sydney, Sydney, NSW,
Australia), and Julia R. Irwin (Dept. of Psychology, Southern Connecticut
State University, New Haven, CT)
Previous work has shown that perception of allophones of /p/ in English
utterances was influenced by lexical status. In nonwords, the aspirated allophone was preferred whether appropriate or not; in words, the appropriate
allophone was preferred [Whalen, Best, & Irwin (1997), J. Phonetics, 25,
501-528]. Here, we examined dark and light [l] in English words and nonwords. Dark [l] occurs in syllable codas whereas light [l] occurs in onsets.
Items were selected in pairs to balance syllable position in monosyllabic English words and pseudowords, such as “gel”/“ledge”, “teal”/“leat”, and “beel”/
“leeb.” Frequency of occurrence for words was also manipulated to explore
compatibility with versions of exemplar theory. A phonetician produced two
versions of each item, one with a contextually appropriate allophone and one
with the inappropriate. Listeners were asked to rate where each acoustically
presented item fell on a Likert scale (1-7) between “ideal (native) pronunciation” and “bad (nonnative) pronunciation.” Results will be discussed
in terms of the underlying representation needed to account for lexical effects
in perception. The relationship to phonotactic rules will also be discussed.
11:25
4aSC11. Lexical representation of perceptually difficult second-language words. Mirjam Broersma (Max Planck Institute for Psycholinguistics, PO Box 310, Nijmegen 6500 AH, Netherlands, [email protected]
mpi.nl)
This study investigates the lexical representation of second-language
words that contain difficult-to-distinguish phonemes. Dutch and English listeners’ perception of partially onset-overlapping word pairs like DAFFOdil-DEFIcit and minimal pairs like flash-flesh was assessed with two cross-modal priming experiments, examining two stages of lexical processing:
activation of intended and mismatching lexical representations (Exp.1) and
competition between those lexical representations (Exp.2). Exp.1 shows that
truncated primes like daffo- and defi- activated lexical representations of
mismatching words (either deficit or daffodil) more for L2 than L1 listeners.
Exp.2 shows that for minimal pairs, matching primes (prime: flash, target:
FLASH) facilitated recognition of visual targets for L1 and L2 listeners
alike, whereas mismatching primes (flesh, FLASH) inhibited recognition
consistently for L1 listeners but only in a minority of cases for L2 listeners;
in most cases, for them, primes facilitated recognition of both words equally
strongly. Importantly, all listeners experienced a combination of facilitation
and inhibition (and all items sometimes caused facilitation and sometimes
inhibition). These results suggest that for all participants, some of the minimal pairs were represented with separate, native-like lexical representations,
whereas other pairs were stored as homophones. The nature of the L2 lexical
representations thus varied strongly even within listeners.
11:40–12:00 Panel Discussion
THURSDAY MORNING, 25 OCTOBER 2012
MARY LOU WILLIAMS A/B, 9:00 A.M. TO 11:45 A.M.
Session 4aSP
Signal Processing in Acoustics and Underwater Acoustics: Localizing, Tracking,
and Classifying Acoustic Sources
Altan Turgut, Chair
Naval Research Lab, Washington, DC 20375
Contributed Papers
9:00
4aSP1. Passive sonar target tracking with a vertical hydrophone array
in a deep ocean environment. Sheida Danesh and Henrik Schmidt (Massachusetts Institute of Technology, Cambridge, MA 02139, [email protected])
When operating in a deep ocean environment, limited power availability
makes it imperative to conserve energy. This is achieved through computationally efficient processing, as well as a passive sonar configuration that eliminates the need for a sonar source. Mallat and Zhang’s Matching Pursuits
algorithm with a Kalman filter is implemented for use in passive target
tracking. This makes it possible to determine the range of a moving target
through the use of dot products and other simple calculations. The model
setup used to test this approach includes a vertical hydrophone array at a
depth of 4-5 km and a near-surface target between 10 and 45 km away.
Simulated results using ray tracing (BELLHOP) and wavenumber integration (OASES) were used in developing this method. Preliminary results
indicate this to be an effective means of target tracking. Possible future
improvements include determining the bearing as well as the range of the
target.
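The dot-product machinery behind Matching Pursuits is easy to sketch. The toy example below runs greedy matching pursuit over an orthonormal dictionary; in the application described, the atoms would be delayed arrival replicas rather than the random, purely illustrative dictionary used here.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=3):
    """Greedy matching pursuit (Mallat & Zhang): at each step pick the
    unit-norm atom with the largest dot product against the residual,
    subtract its projection, and record (atom index, coefficient)."""
    residual = signal.astype(float).copy()
    selections = []
    for _ in range(n_iter):
        coeffs = dictionary @ residual          # all dot products at once
        k = int(np.argmax(np.abs(coeffs)))
        selections.append((k, coeffs[k]))
        residual -= coeffs[k] * dictionary[k]
    return selections, residual

# Orthonormal toy dictionary: the rows of a random orthogonal matrix.
rng = np.random.default_rng(1)
atoms = np.linalg.qr(rng.standard_normal((8, 8)))[0]

# A "signal" built from two atoms; MP should recover both components.
signal = 2.0 * atoms[3] + 0.5 * atoms[5]
picks, residual = matching_pursuit(signal, atoms, n_iter=2)
print(sorted(k for k, _ in picks))
```

Each iteration costs only dot products and a subtraction, which is the kind of cheap arithmetic the abstract's power budget argument relies on.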
9:15
4aSP2. Autonomous underwater vehicle localization using the acoustic
tracking system. Nicos Pelavas, Garry J. Heard, and Carmen E. Lucas
(DRDC Atlantic, 9 Grove St., Dartmouth, NS B3A 3C5, Canada, [email protected])
Operator peace of mind during Autonomous Underwater Vehicle
(AUV) missions is dependent on the ability to localize the vehicle. During
launch and recovery phases this capability is particularly important. Defence
R&D Canada (DRDC) Atlantic has designed and built a long-range tracking
system for the International Submarine Engineering Explorer class AUVs.
The acoustic tracking system (ATS) enables an operator on a loud icebreaker platform to determine the position of the AUVs at ranges up to 30
km. An acoustic projector, mounted on the AUV, emits a hyperbolic frequency modulated (HFM) chirp at a preset time interval. A small, directional, acoustic receiving array mounted near the stern of the icebreaker,
accurately synchronized with the remote projector, receives signals from the
distant AUV. Matched filter processing is used to determine the time of
flight of the transmitted chirp. A beamforming algorithm applied to the data
provides bearing and elevation angle estimates for the received signals. A
ray tracing algorithm then uses this information, along with the sound velocity profile, to determine the position of the AUV. Moreover, ATS uses different HFM chirps to provide a basic one-way AUV state messaging
capability. We conclude with a brief discussion of ATS data collected during in-water trials.
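The matched-filter time-of-flight step can be illustrated in a few lines. The sketch below uses a linear chirp rather than the HFM chirp the ATS actually emits, and the delay, noise level, and nominal 1500 m/s sound speed are all illustrative.

```python
import numpy as np

fs = 8000.0
t = np.arange(0, 0.25, 1 / fs)
chirp = np.sin(2 * np.pi * (400 * t + 600 * t ** 2))   # stand-in chirp

# Build a noisy received signal containing one delayed copy of the chirp.
true_delay_s = 0.73
received = np.zeros(int(1.5 * fs))
start = int(true_delay_s * fs)
received[start : start + chirp.size] += chirp
received += 0.2 * np.random.default_rng(0).standard_normal(received.size)

# Matched filter = cross-correlation with the known replica; because the
# clocks are synchronized, the peak lag is the one-way time of flight.
corr = np.correlate(received, chirp, mode="valid")
tof = np.argmax(np.abs(corr)) / fs
range_m = tof * 1500.0          # nominal sound speed; the ATS ray-traces
print(tof)
```

The real system refines this slant range using beamformed bearing/elevation and a measured sound velocity profile, as the abstract describes.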
9:30
4aSP3. Passive localization of surface vessels in shallow water using
broadband, unintentionally radiated noise. Alexander W. Sell and R. Lee
Culver (Acoustics, Penn State University, State College, PA 16801,
[email protected])
The waveguide invariant relates ocean waveguide propagation conditions to the spectral interference patterns (or striations) in range-frequency
plots. The striations are the result of interaction between propagating modes.
A method of source localization, using a horizontal line array (HLA), that
exploits this relationship will be presented. Source azimuth is estimated
using conventional Bartlett beamforming, after which source range is estimated from spectral interference observed along the HLA as well as knowledge of the waveguide invariant. Automation of this process makes use of a
spectral characterization method for striation slope estimation, which works
well in some but not all cases. The use of a physics-based, range-dependent
waveguide invariant model to improve the range estimates will also be discussed. This method has been applied to acoustical data recorded in 2007 at
the Acoustical Observatory off the coast of Port Everglades, Florida.
Localization results compare favorably with radar-based Automatic Identification System (AIS) records. [Work supported by ONR Undersea Signal
Processing.]
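The range relation underlying this approach is compact: the waveguide invariant β ties the striation slope in the range-frequency plane to source range via df/dr ≈ βf/r, so a measured slope at a known frequency yields a range estimate. A toy numeric check, with illustrative values and the ideal shallow-water β = 1:

```python
# Waveguide-invariant range estimate from a measured striation slope.
# df/dr = beta * f / r  =>  r = beta * f / (df/dr).  Values are invented.

def range_from_striation(freq_hz, slope_hz_per_m, beta=1.0):
    """Source range implied by a striation slope at frequency freq_hz."""
    return beta * freq_hz / slope_hz_per_m

# Simulate the slope a 5 km source would produce, then invert it.
true_range_m = 5000.0
f0 = 400.0
slope = 1.0 * f0 / true_range_m        # beta = 1 ideal case
estimate = range_from_striation(f0, slope)
print(estimate)
```

In practice β must come from a model or calibration (the abstract notes a range-dependent β model improves the estimates), and the slope comes from automated striation characterization.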
9:45
4aSP4. Depth discrimination using waveguide invariance. Altan Turgut
and Laurie T. Fialkowski (Naval Research Lab, Acoustics Div., Washington, DC 20375, [email protected])
Waveguide invariant theory is used to analyze the acoustic striation patterns generated by a moving surface vessel and a towed broadband (350-600
Hz) source during two field experiments (TAVEX08, AWIEX09) conducted
in the East China Sea and New Jersey Shelf. Results from the East China
Sea site indicated that slopes of striation patterns are different when the
source is below the thermocline and receivers are below and above the thermocline. However, slopes are the same when the source (surface vessel) is
above the thermocline and receivers are below and above the thermocline.
In addition, results from the New Jersey Shelf site indicated that slopes of
striation patterns are different when two co-located sources (tow-ship and
towed source) are placed below and above the thermocline, and received on
a single hydrophone below the thermocline. Results are explained by the
dominance of reflecting and refracting modes for sources being above or
below the thermocline during summer profile conditions. [Work supported
by the Office of Naval Research.]
localization performance analysis to inhomogeneous media are discussed.
[Work supported by ONR.]
11:00
10:00
We address application of a passive, model-based depth discriminator to
data from the REP11 experiment. The method is based on a mode subspace
approach (Premus, 2007) which uses environmental information along with
a normal mode based acoustic simulation to predict the propagating mode
structure. This mode space can be divided into subspaces representing the
lower and higher order modes. Sufficient aperture yields orthogonal and linearly independent subspaces and a linear algebraic process yields orthogonalized subspaces with reduced aperture. Received data is then projected
onto these subspaces and a discrimination statistic is formed. This work
examines the application of this process to data from the REP11 experiment
in terms of performance of the discriminator over different sets of data and
levels of environmental knowledge. Work sponsored by ONR Undersea Signal Processing.
10:15–10:30 Break
10:30
4aSP6. Sound speed estimation and source localization with particle filtering and a linearization approach. Tao Lin and Zoi-Heleni Michalopoulou (Department of Mathematical Sciences, New Jersey Institute of
Technology, 323 ML King Blvd, Newark, NJ 07102, [email protected])
In previous work, a particle filtering method was developed that provided estimates of multipath arrival times from short-range data and, subsequently, employed them in geometry, bathymetry, and sound speed
inversion. The particle filter provided probability density functions of arrival
times, that were then “propagated” backwards through a sound propagation
model for inversion. That implies that every particle from the probability
density is employed in the inversion scheme, creating a potentially computationally cumbersome process. In this work, we develop a new method for
such parameter estimation which relies on linearization. The novel aspect is
that the Jacobian matrix now includes derivatives with respect to Empirical
Orthogonal Function coefficients. The approach, requiring only a few iterations to converge, is particularly efficient. Results from the application of
this technique to synthetic and real (SW06) data are presented and compared
to full-field inversion estimates. [Work supported by ONR and the NSF
CSUMS program.]
10:45
4aSP7. Bayesian localization of acoustic sources with information-theoretic analysis of localization performance. Thomas J. Hayward (Naval
Research Laboratory, 4555 Overlook Ave SW, Washington, DC 20375,
[email protected])
Approaches investigated to date for localizing acoustic sources include
conventional beamforming, matched field processing, and Bayesian methods [e.g., Pitre and Davis, J. Acoust. Soc. Am., 97, 1995], with recent
research revisiting Bayesian methods with focalization and marginalization
approaches [Dosso and Wilmut, J. Acoust. Soc. Am., 129, 2011]. Information-theoretic bounds on source localization performance were investigated
by Meng and Buck [IEEE Trans. Sig. Proc., 58, 2010] extending earlier
work of Buck. The present work investigates direct application of Bayes’
Rule to source localization and information-theoretic quantification and
analysis of localization performance, taking as an example the localization
of a time-harmonic source in a range-independent shallow-water acoustic
waveguide. Signal propagation is represented by normal modes, and additive Gaussian ambient noise is represented by a Kuperman-Ingenito model.
The localization performance is quantified by the entropy of the Bayesian
posterior pdf of the source location, and an information-theoretic interpretation of this performance measure is presented. Comparisons with
matched-field localization performance and extensions of the modeling and
2055
J. Acoust. Soc. Am., Vol. 132, No. 3, Pt. 2, September 2012
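The direct application of Bayes' rule to source localization described in 4aSP7 can be sketched numerically. The following is a minimal illustration, not the author's implementation: a toy modal replica stands in for the normal-mode propagation model, and the grid, array size, wavenumbers, and noise level are all hypothetical. The Shannon entropy of the posterior computed at the end is the performance measure the abstract proposes.

```python
# Illustrative sketch (hypothetical model): Bayes' rule applied to source
# localization on a discrete range-depth grid, with localization performance
# quantified by the entropy of the posterior pdf of source location.
import numpy as np

rng = np.random.default_rng(0)

ranges = np.linspace(1.0, 10.0, 40)   # candidate source ranges, km (hypothetical)
depths = np.linspace(5.0, 95.0, 30)   # candidate source depths, m (hypothetical)
k = np.array([1.0, 2.3, 3.1, 4.7])    # toy modal wavenumbers

def replica(r, z):
    """Toy 4-mode complex field (stand-in for a normal-mode model):
    mode shapes set the depth dependence, phases set the range dependence."""
    m = np.arange(1, 5)
    return np.sin(m * np.pi * z / 100.0) * np.exp(1j * k * r) / np.sqrt(r)

# True source and noisy measurement (additive complex Gaussian noise)
r_true, z_true = ranges[25], depths[10]
sigma = 0.02
d = replica(r_true, z_true) + sigma * (rng.standard_normal(4) + 1j * rng.standard_normal(4))

# Gaussian likelihood on the grid, uniform prior, Bayes' rule
logL = np.array([[-np.sum(np.abs(d - replica(r, z)) ** 2) / (2 * sigma ** 2)
                  for z in depths] for r in ranges])
post = np.exp(logL - logL.max())
post /= post.sum()

# Localization performance: entropy of the posterior (bits). A sharp
# posterior has low entropy; a diffuse one approaches log2(grid size).
H = -np.sum(post[post > 0] * np.log2(post[post > 0]))
i_hat, j_hat = np.unravel_index(post.argmax(), post.shape)
print("MAP estimate:", ranges[i_hat], depths[j_hat], "entropy (bits):", H)
```

With a sharp likelihood the entropy stays far below the uniform-prior value of log2(1200) bits, which is the information-theoretic interpretation the abstract refers to.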
11:00
4aSP8. Acoustic cavitation localization in reverberant environments.
Samuel J. Anderson (The Graduate Program in Acoustics, The Pennsylvania
State University, State College, PA 16801, [email protected]), Daniel A.
Perlitz (Engineering Sciences, The Pennsylvania State University, State
College, PA), William K. Bonness, and Dean E. Capone (Noise Control
and Hydroacoustics, Applied Research Laboratory - PSU, State College,
PA)
Cavitation detection and localization techniques generally require visual access to the fluid field, multiple high-speed cameras, and appropriate illumination. This can be costly and is not suitable for all test environments, particularly when the bubble diameter is small or its duration is short. Acoustic detection and localization of cavitation can be more robust and more easily implemented, without requiring visual access to the site in question. This research utilizes the distinct acoustic signature of cavitation events to both detect and localize cavitation during experimental water-tunnel testing. Using 22 hydrophones and two processing techniques, plane-wave beamforming and matched-field processing (MFP), cavitation is localized accurately and quickly during testing in a 12-in.-diameter water tunnel. Cavitation is induced using a Nd:YAG laser for precise
control of bubble location and repeatability. Accounting for and overcoming
the effects of reflections on acoustic localization in acoustically small environments is paramount in water tunnels, and the techniques employed to
minimize error will be discussed.
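The matched-field processing step mentioned above can be illustrated with a simple Bartlett processor. This is a generic sketch, not the authors' water-tunnel code: free-space replicas stand in for a full reverberant-environment model, and the array geometry, frequency, source position, and noise level are all hypothetical.

```python
# Illustrative Bartlett matched-field processor: a measured hydrophone
# snapshot is correlated against modeled replica vectors on a grid of
# candidate source positions; the peak of the ambiguity surface is the
# localization estimate. All geometry below is hypothetical.
import numpy as np

c, freq = 1480.0, 5000.0          # sound speed (m/s), frequency (Hz)
k = 2 * np.pi * freq / c          # wavenumber

# Hypothetical hydrophone positions (m) in a plane
phones = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3], [0.3, 0.3], [0.15, 0.6]])

def replica(src):
    """Free-space Green's-function replica vector, normalized to unit length."""
    r = np.linalg.norm(phones - src, axis=1)
    v = np.exp(-1j * k * r) / r
    return v / np.linalg.norm(v)

# Synthetic snapshot from a "cavitation event" at src_true, plus noise
src_true = np.array([1.0, 0.8])
rng = np.random.default_rng(1)
d = replica(src_true) * 5.0
d = d + 0.05 * (rng.standard_normal(5) + 1j * rng.standard_normal(5))

# Bartlett ambiguity surface B(x) = |w(x)^H d|^2 over a search grid
xs = np.linspace(0.5, 1.5, 51)
ys = np.linspace(0.3, 1.3, 51)
B = np.array([[np.abs(np.vdot(replica(np.array([x, y])), d)) ** 2
               for y in ys] for x in xs])
ix, iy = np.unravel_index(B.argmax(), B.shape)
print("estimate:", xs[ix], ys[iy])
```

In a reverberant tunnel the free-space replicas would be swapped for replicas that include the boundary reflections, which is precisely the modeling burden the abstract highlights.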
11:15
4aSP9. Doppler-based motion compensation algorithm for focusing the
signature of a rotorcraft. Geoffrey H. Goldman (U.S. Army Research Laboratory, 2800 Powder Mill Road, Adelphi, MD 20783-1197, geoffrey.h.
[email protected])
A computationally efficient algorithm was developed and tested to compensate for the effects of motion on the acoustic signature of a rotorcraft.
For target signatures with large spectral peaks that vary slowly in amplitude
and have near constant frequency, the time-varying Doppler shift can be
tracked and then removed from the data. The algorithm can be used to preprocess data for classification, tracking, and nulling algorithms. The algorithm was tested on rotorcraft data. The average instantaneous frequency of
the first harmonic of a rotorcraft was tracked with a fixed-lag smoother.
Then, state space estimates of the frequency were used to calculate a time
warping that removed the effect of the Doppler shift from the data. The
algorithm was evaluated by analyzing the increase in the amplitude of the
harmonics in the spectrum of a rotorcraft. The results depended upon the
frequency of the harmonics, processing interval duration, target dynamics,
and atmospheric conditions. Under good conditions, the results for the fundamental frequency of the target (~11 Hz) almost achieved the predicted upper bound. The results for higher-frequency harmonics showed larger increases in peak amplitude, but fell significantly short of the predicted upper bounds.
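The track-then-warp idea in this abstract can be sketched on synthetic data. This is not ARL's algorithm: a smoothed version of the true frequency track stands in for the spectrogram tracker and fixed-lag smoother, and the 11-Hz fundamental, sample rate, and motion model are hypothetical.

```python
# Illustrative Doppler compensation sketch: track the instantaneous
# frequency of a strong harmonic, integrate it into a time warping, and
# resample so the harmonic sits at a constant frequency.
import numpy as np

fs = 1000.0
t = np.arange(0, 10.0, 1 / fs)

# Synthetic rotor harmonic at ~11 Hz with a slow sinusoidal Doppler shift
f0 = 11.0
doppler = 1.0 + 0.02 * np.sin(2 * np.pi * 0.2 * t)   # +/-2% frequency modulation
phase = 2 * np.pi * f0 * np.cumsum(doppler) / fs
x = np.cos(phase)

# Step 1: instantaneous-frequency track (here the known smooth track,
# standing in for a spectrogram-peak tracker plus fixed-lag smoother)
f_track = f0 * doppler

# Step 2: warped time axis tau(t) such that the tracked harmonic has
# constant frequency f0 in warped time; resample by interpolation
tau = np.cumsum(f_track) / fs / f0
x_comp = np.interp(t, tau, x)

# Peak sharpening: spectral peak amplitude before vs. after compensation
def peak_amp(sig):
    return np.abs(np.fft.rfft(sig * np.hanning(sig.size))).max()

print(peak_amp(x), peak_amp(x_comp))
```

After compensation the energy smeared across neighboring FFT bins by the Doppler wander collapses into a single bin, which is the amplitude increase the abstract uses as its evaluation metric.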
11:30
4aSP10. Automated entropy-based bird phrase segmentation on sparse
representation classifier. Ni-Chun Wang, Lee Ngee Tan, Ralph E. Hudson
(Electrical Engineering, University of California at Los Angeles, Westwood
Plaza, Los Angeles, CA 90095, [email protected]), George Kossan (Ecology
and Evolutionary Biology, University of California at Los Angeles, Los
Angeles, CA), Abeer Alwan, Kung Yao (Electrical Engineering, University
of California at Los Angeles, Los Angeles, CA), and Charles E. Taylor
(Ecology and Evolutionary Biology, University of California at Los
Angeles, Los Angeles, CA)
An automated system capable of reliably segmenting and classifying
bird phrases would help analyze field recordings. Here we describe a phrase
segmentation method using entropy-based change-point detection. Spectrograms of bird calls are often very sparse, while the background noise is relatively white. Therefore, when the entropy of a sliding time-frequency window on the spectrogram is tracked, the entropy dips when a signal appears and
rises back up when the signal ends. Rather than applying a simple threshold to the entropy to determine the beginning and end of a signal, a Bayesian recursion-based change-point detection (CPD) method is used to detect sudden changes in the entropy sequence. Because CPD reacts only to these statistical changes, it generates more accurate time labels and a lower false-alarm rate than conventional energy-detection methods. The segmented phrases are then used for training and testing a sparse representation (SR) classifier, which performs phrase classification by a sparse linear combination of feature vectors in the training set. With only 7 training tokens for each phrase, the SR classifier achieved 84.17% accuracy on a database containing 852 Cassin's Vireo (Vireo cassinii) phrases that were hand-classified into 32 types. [This work was supported by NSF.]
4aSP5. Application of a model-based depth discriminator to data from the REP11 experiment. Brett E. Bissinger and R. Lee Culver (Graduate Program in Acoustics, The Pennsylvania State University, PO Box 30, State College, PA 16804, [email protected])
164th Meeting: Acoustical Society of America
Downloaded 18 Oct 2012 to 192.87.79.51. Redistribution subject to ASA license or copyright; see http://asadl.org/terms
THURSDAY MORNING, 25 OCTOBER 2012
BENNIE MOTEN A/B, 8:30 A.M. TO 11:30 A.M.
Session 4aUW
Underwater Acoustics and Acoustical Oceanography: Sources, Noise, Transducers, and Calibration
Ching-Sang Chiu, Chair
Department of Oceanography, Naval Postgraduate School, Monterey, CA 93943-5193
Contributed Papers
8:30
4aUW1. The measured 3-D primary acoustic field of a seismic airgun
array. Arslan M. Tashmukhambetov, George E. Ioup, Juliette W. Ioup
(Department of Physics, University of New Orleans, New Orleans, LA
70148, [email protected]), Natalia A. Sidorovskaia (Physics Department,
University of Louisiana at Lafayette, Lafayette, LA), Joal J. Newcomb (Naval Oceanographic Office, Stennis Space Center, MS), James M. Stephens,
Grayson H. Rayborn (Department of Physics and Astronomy, University of
Southern Mississippi, Hattiesburg, MS), and Phil Summerfield (Geodetics &
Cartography, ExxonMobil Corporation, Houston, TX)
The Littoral Acoustic Demonstration Center has conducted an experiment
to measure the 3-D acoustic field of a seismic airgun array in the Gulf of Mexico. A seismic source vessel shot specified lines to give solid angle and range
information. Hydrophone positions were measured by an ultra-short baseline
(USBL) acoustic system while the source ship was turning between lines. An
acoustic Doppler current profiler measured currents so the positions could be
modeled between USBL measurements. The position locations were refined
by using information from the acoustic arrival times on the hydrophones.
Peak pressures, sound exposure levels, total shot energy spectra, one-third
octave band analyses, and source directivity studies are used to characterize
the field. One-third-octave-band analysis shows received levels up to 180 dB re 1 µPa for emission angles from 0 degrees (vertically down) up to 45 degrees, for horizontal ranges up to 200 m at endfire, between 10 Hz and 200 Hz. The levels decrease with increasing frequency above 200 Hz, with increasing horizontal range, and for emission angles above 45 degrees. The levels are lower at broadside than at endfire. [Research supported by the Joint Industry Programme through the International Association of Oil and Gas Producers.]
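The level metrics named in this abstract (peak pressure level, sound exposure level, one-third-octave band levels) can be sketched for an arbitrary pressure time series. The pulse below is synthetic and every parameter is hypothetical; the 1 µPa reference follows standard underwater practice.

```python
# Illustrative level metrics for an airgun-like pulse (synthetic signal).
import numpy as np

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)

# Toy airgun-like pulse: decaying 60-Hz oscillation, pressure in Pa (hypothetical)
p = 2.0e3 * np.exp(-8 * t) * np.sin(2 * np.pi * 60 * t)

p_ref = 1e-6                                                   # 1 uPa in Pa
peak_level = 20 * np.log10(np.max(np.abs(p)) / p_ref)          # peak level, dB re 1 uPa
sel = 10 * np.log10(np.sum(p ** 2) / fs / (p_ref ** 2 * 1.0))  # SEL, dB re 1 uPa^2 s

# One-third-octave band edges (base-2 convention) from 10 Hz to ~200 Hz
centers = 10 * 2 ** (np.arange(14) / 3.0)
lower, upper = centers / 2 ** (1 / 6), centers * 2 ** (1 / 6)

# Approximate band levels from the FFT power spectrum (illustrative normalization)
X = np.abs(np.fft.rfft(p)) ** 2 / (fs * p.size)                # crude PSD, Pa^2/Hz
f = np.fft.rfftfreq(p.size, 1 / fs)
df = f[1] - f[0]
band_levels = [10 * np.log10(np.sum(X[(f >= lo) & (f < hi)]) * df / p_ref ** 2 + 1e-300)
               for lo, hi in zip(lower, upper)]
print(round(peak_level, 1), round(sel, 1))
```

SEL integrates p² over the pulse, so it sits well below the peak pressure level for a short transient, which is why the two metrics are reported separately.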
8:45
4aUW2. Investigation of a tunable combustive sound source. Andrew R.
McNeese, Thomas G. Muir (Applied Res. Labs., The University of Texas at
Austin, 10000 Burnet Rd, Austin, TX 78757, [email protected]),
and Preston S. Wilson (Mech. Eng. Dept. and Applied Res. Labs., The University of Texas at Austin, Austin, TX)
The Combustive Sound Source (CSS) is a versatile underwater sound
source used in underwater acoustics experiments. The source comprises a submersible combustion chamber filled with a combustive gas mixture that is ignited via spark. Upon ignition, the mixture is converted into high-temperature combustion byproducts, which expand and ultimately collapse back to a smaller volume than before ignition. Acoustic pulses are radiated
by the bubble activity. The CSS can be used as a source for calibration, TL
measurements, and bottom characterizations, and when deployed on the bottom
can create seismic interface waves. Current environmental regulations and
varying experimental needs require a tunable source that allows users to easily
alter the source level, bandwidth, and signal duration. Current efforts have
focused on altering the bubble growth and collapse in an attempt to tune the radiated signals to meet various needs. Scale models have been constructed and
tested in in-house tank experiments. Discussion will focus on the results of the
study along with future plans for development and modeling.
9:00
4aUW3. Mitigation of underwater piling noise by air filled balloons and PE-foam elements as hydro sound dampers. Karl-Heinz Elmer (OffNoise-Solutions GmbH, Leinstr. 36, Neustadt a. Rbge. 31535, Germany, [email protected]), Jörg Gattermann, Christian Kuhn, and Benedikt Bruns (Inst. Soil Mechanics and Found. Engineering, Techn. Universität Braunschweig, Braunschweig, Nds, Germany)
The founding of offshore wind turbines by pile driving induces considerable underwater sound emissions that are potentially harmful to marine life. In Germany, the Federal Maritime and Hydrographic Agency (BSH) has set a standard level of 160 dB (SEL) at a distance of 750 m from pile driving. Effective noise-reducing methods are necessary to meet this standard level. The new method of hydro sound dampers (HSD) uses curtains of robust, air-filled elastic balloons showing high resonant effects, similar to air bubbles, but also balloons with additional dissipative effects from material damping, as well as special dissipative PE-foam elements, to reduce impact noise. The resonance frequency of the elements, the optimum damping rate for impact noise, the distribution, and the effective frequency range can be fully controlled if the HSD elements are fixed to fishing nets surrounding the pile. HSD systems are independent of compressed air, are not influenced by tidal currents, and are easily adaptable to different applications. The theoretical background, numerical simulations, laboratory tests, and offshore tests of HSD systems result in noise mitigations between 17 and 35 dB (SEL). The work is supported by the German Federal Environmental Ministry (BMU).
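The resonance behavior of an air-filled element can be estimated, to first order, with the classical Minnaert formula for a gas bubble in water; the elastic shell and the added damping of real HSD elements are not modeled here, and the radii and depth below are hypothetical.

```python
# First-order Minnaert resonance estimate for air-filled elements in water
# (sketch only; real HSD balloons have elastic shells and extra damping).
import math

def minnaert_f0(radius_m, depth_m=5.0, gamma=1.4, rho=1025.0):
    """Resonance frequency (Hz) of a gas bubble of given radius at depth:
    f0 = sqrt(3*gamma*p0/rho) / (2*pi*a), with p0 the static pressure."""
    p0 = 101325.0 + rho * 9.81 * depth_m
    return math.sqrt(3 * gamma * p0 / rho) / (2 * math.pi * radius_m)

for r in (0.01, 0.05, 0.10):   # 1, 5, and 10 cm radii (hypothetical)
    print(f"radius {r * 100:.0f} cm -> f0 = {minnaert_f0(r):.1f} Hz")
```

The formula shows why decimeter-scale elements are needed to reach the low tens of hertz where pile-driving energy is concentrated: resonance frequency scales inversely with radius.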
9:15
4aUW4. Mitigation of underwater radiated noise from a vibrating work
barge using a stand-off curtain of large tethered encapsulated bubbles.
Kevin M. Lee, Mark S. Wochner (Applied Research Laboratories, The University of Texas at Austin, 10000 Burnet Road, Austin, TX 78758, [email protected]
arlut.utexas.edu), and Preston S. Wilson (Mechanical Engineering Department and Applied Research Laboratories, The University of Texas at Austin, Austin, TX)
A stand-off curtain of encapsulated bubbles with resonance frequencies of
approximately 50 Hz was used to attenuate radiated noise from a work barge
vibrated by onboard rotating machinery in a lake experiment. The purpose of
this experiment was to provide a scale-model demonstration of how a noise-reduction system
of tethered encapsulated bubbles would be deployed to mitigate noise from a
shallow water drilling ship. The work reported here is an extension of previous tests which used an array of encapsulated bubbles attached directly to the
bottom of the work barge to reduce the radiated sound levels [J. Acoust. Soc.
Am. 131, 3506 (2012)]. The design of the new stand-off encapsulated bubble
curtain is described, including the finite-element model that was developed to
aid in the design. The deployment and acoustic testing of the curtain are also described. Results from the tests demonstrate that the system is both practical to deploy and effective in reducing the underwater noise radiated into the
lake from the work barge. [Work supported by Shell Global Solutions.]
9:30
4aUW5. Shipping source level estimation for ambient noise forecasting. Jeffrey S. Rogers, Steven L. Means, and Stephen C. Wales (Naval Research Lab, 4555 Overlook Ave SW, Washington, DC 20375, [email protected])
The ability to accurately estimate shipping source levels from ambient noise data is an essential step toward creating a forecast model of the ocean soundscape. Source level estimates can be obtained by solving the system of linear equations, governed by the sonar equation, that relates source level to transmission loss (TL) and beamformer response. In this formulation, the beamformer response is known, and TL can be modeled from ship positions that are determined by a fusion of automatic identification system (AIS) reports and local radar data. Different levels of environmental realism are taken into account in the TL model by considering two ocean-bottom profiles: a layered sand-limestone bottom and a karst sand-limestone bottom are compared for both 2D and NX2D TL runs. Source levels must be constrained to be positive and are therefore solved for with a non-negative least-squares (NNLS) algorithm. Source level estimates from data collected during the 2007 Shallow Water Array Performance (SWAP) experiment will be presented. Simulated ambient noise forecasts for the different sediment profiles will then be compared to real data from the SWAP experiment. [This work was supported by ONR.]
9:45
4aUW6. Prediction of noise levels on accelerometers buried in deep sediments. William Sanders and Leonard D. Bibee (Seafloor Sciences, Naval Research Laboratory, Stennis Space Center, MS 39529, [email protected])
The noise field below 100 Hz for three-axis accelerometers buried in sediments is due primarily to shipping and, to a lesser extent, wind. Both are generated near the surface. Hence, a buried sensor observes noise from an area of the sea surface around it extending, theoretically, across the entire ocean. In practice, however, more distant noise sources diminish with range (even though the area increases with the square of the distance) so as to limit the "listening area." Sensors buried in sediments are cut off from horizontally propagating noise and hence are relatively more sensitive to locally generated noise. An elastic parabolic equation model is used to model the responses of three-axis accelerometers buried in sediments within a complex geologic environment. Shear waves in surrounding structures are shown to significantly affect the noise field. Noise from distant sources received by buried sensors is shown to be as much as 20 dB lower than that on sensors in the water column.
10:00–10:15 Break
10:15
4aUW7. Low-frequency ambient noise characteristics and budget in the South China Sea basin. Ching-Sang Chiu, Christopher W. Miller, and John E. Joseph (Department of Oceanography, Naval Postgraduate School, 833 Dyer Road, Room 328, Monterey, CA 93943-5193, [email protected])
A sound record measured by a moored hydrophone in the South China Sea basin was analyzed. Sampled at a rate of 1.6 kHz and with a duty cycle of approximately 1 min on and 14 min off, the measured time series captures the spectral characteristics and variability of the ambient noise in the less-than-800-Hz band over an annual cycle. Using a combination of automated and manual screening methods, the dominant regular and transient noise sources were identified and categorized, including shipping, wind waves, seismic air-gun surveys, shots/explosives, and sonar. Intermittent self noise (squeaking sounds) that prevailed at times during the passage of the very large-amplitude internal waves was also identified. In addition to the noise budget, the variability in the daily and monthly means and variances of the measured noise spectrum and band levels was examined. In order to gain insight into the predictability of the ambient noise field in this marginal sea, the interpretation of the data was facilitated by temperature records measured with moored instruments, wind and precipitation time series from the US Naval Operational Global Atmospheric Prediction System (NOGAPS), and vessel motion simulation based on historical shipping density and lane structure. [Research sponsored by the Office of Naval Research.]
10:30
4aUW8. Using hydroacoustic stations as water column seismometers. Selda Yildiz (Marine Physical Laboratory, Scripps Institution of Oceanography/UCSD, 9500 Gilman Dr, La Jolla, CA 92093-0238, [email protected]), Karim Sabra (School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA), W. A. Kuperman, and LeRoy M. Dorman (Marine Physical Laboratory, Scripps Institution of Oceanography/UCSD, La Jolla, CA)
The Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) maintains hydrophones that have been used to study icebergs and T-wave propagation. These stations consist of three hydrophones at about the depth of the sound channel, arranged in a horizontal triangular array with 2-km sides. We have used data from these stations in the regime of a few tenths of a hertz and below to study whether the stations can be used effectively as water column seismometers. Among the processing performed were methods to effectively transform the hydrophone configurations into vector sensors. An assortment of signal processing applied to hydroacoustic data from the December 26, 2004 Great Sumatra Earthquake has been compared to seismograph data of the same event, indicating that the hydrophone stations can indeed be used as surrogate seismometers.
10:45
4aUW9. Hydrophone calibration using ambient noise. Kristy Castillo Moore (Sensors and SONAR Systems, Naval Undersea Warfare Center Division Newport, 27744 Bugg Spring Rd, Okahumpka, FL 34762, [email protected]) and Steven E. Crocker (Sensors and SONAR Systems, Naval Undersea Warfare Center Division Newport, Newport, RI)
Hydrophone calibration typically requires a good signal-to-noise ratio (SNR) in order to calculate the free-field voltage sensitivity (FFVS). However, the SNR requirements can limit the calibration of hydrophones with low sensitivity, particularly in the low-frequency range. Calibration methods using ambient noise in lieu of a generated signal will be explored at the Underwater Sound Reference Division (USRD) Leesburg Facility in Okahumpka, FL. The USRD Leesburg Facility is at a natural spring in rural central Florida and is one of the Navy's quietest open-water facilities, with no boating noise, limited biological noise, an isolated location, low reverberation, and an isothermal water temperature profile below 5 meters. Comparison calibrations will be made with two similar hydrophones using the ambient noise in the natural spring, and the results will be compared to calibrations made with the same hydrophones using a generated signal.
11:00
4aUW10. Transducer models for simulating detection processes for underwater mining. Kyounghun Been, Hongmin Ahn (Mechanical Engineering, POSTECH, Pohang-si, Gyeongbuk, Republic of Korea), Hunki Lee, Eunghwy Noh, Won-Suk Ohm (Mechanical Engineering, Yonsei University, Seoul, Republic of Korea), and Wonkyu Moon (Mechanical Engineering, POSTECH, Pohang Univ. of Science, Hyoja-dong, Nam-gu, Pohang, Gyeongbuk 790-784, Republic of Korea, [email protected])
Numerical simulations of the propagation and scattering of sound waves in water and sediment may be useful for designing a detection system for underwater mining. Here, a transducer model is developed to implement the radiating and receiving processes of transducers in numerical calculations. Since the Rayleigh integral approach is adopted for acoustic radiation, the velocity profiles over the radiating surfaces of a transducer array should be estimated accurately, considering the mechano-acoustic interactions, including the dynamics of the unit drivers and the acoustic radiation loadings on the radiating surfaces. We adopted the approach in which the surface velocity is calculated using the transducer model with the acoustic loading, while the loading effects are estimated by calculating the radiation impedance of the transducer array using Rayleigh integrals. The estimated velocity profile of the transducer surface is then used to calculate accurate sound fields generated by the transducer array. A similar approach will be adopted for estimating receiving characteristics. [The authors gratefully acknowledge support from UTRC (Unmanned Technology Research Center) at KAIST (Korea Advanced Institute of Science and Technology), originally funded by DAPA, ADD in the Republic of Korea.]
11:15
4aUW11. Acoustic insertion loss measurement using time reversal focusing. Jianlong Li and Zhiguang He (Department of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, Zhejiang, China, [email protected])
Accurate measurement of acoustic insertion loss has important applications in evaluating the performance of acoustic filtering materials. In a typical procedure for insertion loss measurement, two pulses are recorded: one without and one with the specimen inserted between the transmitters and receivers. The amplitude spectra of the two pulses are then used to determine the insertion loss, which is a function of frequency. Measurement at low frequencies is quite difficult because of reverberation interference induced by the sides of the vessel, where the absorption materials cannot work well and fail to produce an acoustically free-field environment. This presentation describes a method that uses a time-reversal (TR) focusing technique to measure the insertion loss of acoustic filtering materials. Experimental results in a waveguide water tank show that the approach can achieve a high signal-to-reverberation ratio in the measurement. In addition, TR focusing provides high resolution at the location of the specimen, which relaxes the requirement on specimen size. [Work supported by the National Natural Science Foundation of China under grant No. 61171147.]
THURSDAY AFTERNOON, 25 OCTOBER 2012
BASIE A1, 1:30 P.M. TO 6:00 P.M.
Session 4pAA
Architectural Acoustics, Noise, and Signal Processing in Acoustics: Alternative Approaches
to Room Acoustic Analysis
Timothy E. Gulsrud, Cochair
Kirkegaard Associates, 954 Pearl St., Boulder, CO 80302
David S. Woolworth, Cochair
Oxford Acoustics, 356 CR102, Oxford, MS 38655
Invited Papers
1:30
4pAA1. Using spherical microphone array beamforming and Bayesian inference to evaluate room acoustics. Samuel Clapp, Jonathan Botts (Graduate Program in Architectural Acoustics, Rensselaer Polytechnic Institute, 110 8th Street, Greene Building, Troy, NY
12180, [email protected]), Anne Guthrie, Ning Xiang (Arup Acoustics, New York, NY), and Jonas Braasch (Graduate Program in Architectural Acoustics, Rensselaer Polytechnic Institute, Troy, NY)
The most well-known acoustical parameters - including Reverberation Time, Early Decay Time, Clarity, and Lateral Fraction - are
measured using data obtained from omnidirectional or figure-of-eight microphones, as specified in ISO 3382. Employing a multi-channel receiver in place of these conventional receivers can yield new spatial information about the acoustical qualities of rooms, such as
the arrival directions of individual reflections and the spatial homogeneity. In this research, a spherical microphone array was used to
measure the room impulse responses of a number of different concert and recital halls. The data were analyzed using spherical-harmonic beamforming techniques together with Bayesian inference to determine the number of simultaneous reflections along with their
directions and magnitudes. The results were compared to geometrical acoustic simulations and used to differentiate between listener
positions which exhibited similar values for the standard parameters.
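The directional resolution available from a spherical-harmonic beamformer can be sketched with a minimal example. This is not the authors' processing chain: it assumes an ideal order-limited plane-wave decomposition (no mode-strength compensation, microphone noise, or regularization), so the beam pattern steered at direction v for a wave arriving from direction s reduces, via the addition theorem, to the Legendre kernel sum_n (2n+1)/(4*pi) P_n(v.s). The array order and arrival direction are hypothetical.

```python
# Minimal sketch of spherical-harmonic-domain beamforming: scan a
# direction grid with the order-N plane-wave-decomposition beam pattern
# and recover the arrival direction of a single reflection.
import numpy as np
from numpy.polynomial import legendre as L

N = 4                                              # array order (hypothetical)
coef = (2 * np.arange(N + 1) + 1) / (4 * np.pi)    # (2n+1)/(4*pi) weights

def beam(cos_theta):
    """Order-N plane-wave-decomposition beam pattern vs. cos(steer angle)."""
    return L.legval(cos_theta, coef)

def unit(az, el):
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

s = unit(0.7, 0.3)   # true reflection direction, radians (hypothetical)

# Scan an azimuth/elevation grid for the beamformer maximum
az_grid = np.linspace(-np.pi, np.pi, 181)
el_grid = np.linspace(-np.pi / 2, np.pi / 2, 91)
out = np.array([[beam(np.dot(unit(a, e), s)) for e in el_grid] for a in az_grid])
ia, ie = np.unravel_index(out.argmax(), out.shape)
print("estimated az/el:", az_grid[ia], el_grid[ie])
```

The main-lobe width shrinks as the order N grows, which is why higher-order arrays can separate individual early reflections that a first-order receiver cannot.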
1:50
4pAA2. Two home-brewed microphone assemblies for performing arts spaces. David Conant (McKay Conant Hoover Inc, 5655
Lindero Canyon Rd, Suite 325, Westlake Village, CA 91362, [email protected])
Two decades ago, MCH pressed into service a binaural head (rather identical to one of its principals) comprised of a human skull,
paraffin wax and anatomically-correct pinnae. This has been found useful in our concert hall tuning exercises. Separately, during tuning
exercises, acousticians on our team reported possible percussion echoes during amplified events at Los Angeles’ new 1700-seat Valley
Performing Arts Center. Anticipating a deeper forensic exercise rapidly looming, a highly directional parabolic microphone system was
cobbled from ad hoc parts to quickly confirm (or not) the reports, identify problem surfaces, and find potential solutions, if required. The curious-appearing but effective devices are described and their use is discussed.
2:10
4pAA3. Analysis of concert hall acoustics using time-frequency and time-spatial responses. Jukka Pätynen, Sakari Tervo, and Tapio
Lokki (Department of Media Technology, Aalto University School of Science, Konemiehentie 2, Espoo FI02150, Finland, jukka.
[email protected])
A set of objective parameters (ISO3382-1:2009) is widely used for describing acoustic conditions in performance spaces. With few
exceptions, they are based on integrating sound energy within moderate time intervals. In practice, different acoustic conditions can
yield similar values for objective measures. A presented method of analyzing concert hall acoustics with respect to the time-frequency
features aims to overcome the deficiencies of the objective parameters by conveying considerably more information in an uncomplicated
form. This is achieved by visualizing the contribution of short time frames in the impulse response to the cumulative sound energy as a
function of frequency. In particular, the early part of the room impulse response, including the influence of the seat-dip effect, is efficiently
visualized. The method is applied to acoustic measurements conducted at five corresponding positions in six concert halls. It is shown
that in addition to communicating standard monaural objective parameters, the visualizations from the method are connected with several features regarding the subjective impression of the acoustics. The time-frequency analysis is further extended into utilizing a recent
sound direction estimation technique. Resulting time-directional visualization enables the accurate analysis of early reflections and their
contribution to spatial sound.
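The cumulative time-frequency idea described above can be sketched for an arbitrary impulse response. This is a generic illustration, not the authors' method: the room impulse response below is synthetic, and frame length, sample rate, and reflection timing are hypothetical.

```python
# Sketch of cumulative time-frequency analysis of a room impulse response:
# short frames are accumulated so the rise of energy toward its final value
# can be visualized as a function of frequency and time.
import numpy as np

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(2)

# Toy impulse response: direct sound + one reflection + decaying noise tail
rir = np.zeros(t.size)
rir[0] = 1.0
rir[int(0.02 * fs)] = 0.5                      # hypothetical reflection at 20 ms
rir += 0.05 * rng.standard_normal(t.size) * np.exp(-t / 0.15)

frame = 256                                    # 32-ms frames (hop = frame length)
n_frames = rir.size // frame
spec = np.abs(np.fft.rfft(rir[:n_frames * frame].reshape(n_frames, frame), axis=1)) ** 2

# Cumulative energy per frequency bin after each frame, normalized so the
# final value is 1; plotting cum[i] across frequency shows how much of the
# final energy has arrived by frame i (a slow low-frequency rise is the
# kind of signature associated with the seat-dip effect).
cum = np.cumsum(spec, axis=0)
cum /= cum[-1]
print(cum.shape, float(cum[0].mean()))
```

Each row of `cum` is one of the visualization curves the abstract describes: the fraction of the cumulative sound energy contributed up to that time frame, per frequency.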
2:30
4pAA4. The importance and feasibility of “Reflectivity” as an index applicable for architectural acoustic design. Sooch SanSouci
(Acoustic Design International LLC, Bryn Mawr, PA) and Felicia Doggett (Metropolitan Acoustics, LLC, 40 W. Evergreen Ave., Suite
108, Philadelphia, PA 19118, [email protected])
A major part of room acoustics concerns sound control within a space. This traditionally involves reverberation control, music envelopment, optimization of speech comprehension and privacy, noise control, spatial enhancement, modal and reflection control for rooms
used in recording, mixing, editing, mastering, and measuring sounds and hearing. In all of these examples, room acoustics is influenced
by early reflections from the interior surfaces, objects, and geometry. The impedance of a surface generally varies with the incidence angle of sound waves; therefore, knowing the true reflectivity of surfaces would be complementary to sound decay times and absorption coefficients. This presentation reviews current trends in the measurement of sound reflectivity as well as many of the challenges involved in developing an approved laboratory measurement methodology. A formally adopted "Reflectivity Index" would be not only a potentially important tool for acousticians, architects, and designers, but would also aid product design, research, and education in room acoustics. Examples of comparative measurements of materials ranging from porous media to continuous or perforated surface assemblies are examined, along with how the range of results might be unified into a single metric.
2:50
4pAA5. STI measurements in real time in occupied venues. Wolfgang Ahnert and Stefan Feistel (Ahnert Feistel Media Group, Arkonastr. 45-49, Berlin D-13189, Germany, [email protected])
Measuring impulse responses in rooms and free fields is a daily job for acousticians, but this is mainly done in the unoccupied case. Results for the occupied case are derived by use of simulation software or by estimates based on experience. Using a newly developed multithreaded algorithm, speech, music, or any other signals from a microphone input or from a mixing console can be utilized to obtain impulse response data for further evaluation. In a soccer stadium, in the presence of more than 50,000 visitors, the measurements have been made even as a function of the degree of occupancy. The method will be explained in detail and the necessary conditions described. All factors of influence on deriving impulse responses and calculating STI values in post-processing are discussed.
3:10–3:25 Break
3:25
4pAA6. Non-traditional in-room measurement and non-measurement approaches to existing room evaluations. Dawn Schuette,
Carl Giegold, and Molly Norris (Threshold Acoustics LLC, 53 W Jackson Blvd, Suite 815, Chicago, IL, [email protected]
com)
Threshold Acoustics has experimented with a number of non-traditional approaches to the evaluation of existing interior spaces in
recent years. These methods have varied widely in response to the unique circumstances of each project. The challenges and advantages
of each room must be approached with an open mind, in terms of both measured and qualitative evaluation, to reach a firm understanding of the acoustic character of a space. This paper will discuss studies that have utilized techniques as diverse as using theatrical lighting and gels to fine-tune ceiling reflectors, and working directly with musicians to determine how rooms as a whole or, in some cases, individual room elements respond to the frequency content and directionality of specific instruments. We will also discuss the method
involved in a recent study of the interaction between a room and its reverberation chambers that was performed to gain a more complete
understanding of the complex ways they influence one another.
3:45
4pAA7. Using musical instruments for narrow band impulse excitation of halls to aid in acoustical analysis. David S. Woolworth
(Oxford Acoustics, 356 CR 102, Oxford, MS 38655, [email protected])
This paper suggests that narrow-band information about a room's acoustic behavior, which would otherwise be buried in the response to a broadband impulse excitation or other excitation sequence, can be revealed through the use of musical instruments as impulse sources. An advantage of this approach is the discovery of acoustic anomalies before data analysis, as well as subjective judgments of room response and stage acoustics to complement standardized and more advanced technical methods. A violin, snare drum, and double bass are examined in terms of directionality and frequency content, and an example hall is analyzed.
2059
J. Acoust. Soc. Am., Vol. 132, No. 3, Pt. 2, September 2012
164th Meeting: Acoustical Society of America
Downloaded 18 Oct 2012 to 192.87.79.51. Redistribution subject to ASA license or copyright; see http://asadl.org/terms
4:05
4pAA8. Recent experience with low frequency room acoustics measurements. Timothy E. Gulsrud (Kirkegaard Associates, 954
Pearl Street, Boulder, CO 80302, [email protected])
Despite the fact that both orchestral and popular music contain important low frequency (i.e., below 100 Hz) sound energy, room acoustics measurements and parameters for concert halls do not typically consider the frequency range below the 125 Hz octave band. This has resulted in inadequate objective descriptors of bass response and, in some cases, misguided acoustic designs intended to obtain good bass response. In this paper we present low frequency data measured in several concert halls around the world and discuss various methods for acquiring and analyzing the data, with the aim of encouraging further research in this area.
4:25
4pAA9. Evaluation of room-acoustic modal characteristics from single-point measurements using Bayesian analysis. Wesley Henderson, Ning Xiang, and Jonathan Botts (Architectural Acoustics, Rensselaer Polytechnic Institute, 110 8th Street, Greene Building,
Troy, NY 12180, [email protected])
Room mode analysis is an important element of architectural acoustic design. In small rooms especially, well-separated low frequency room modes can cause unpleasant aural effects, such as undesirable resonances and flutter echoes. Traditional room mode analysis is generally done using the discrete Fourier transform. This work proposes a time-domain modal analysis method by which the amplitudes, frequencies, and damping constants of the modes of the room under test can be directly determined at low modal frequencies by using a
Bayesian inference algorithm on a single room impulse response (RIR). The method’s time-domain, model-based approach allows the
number of modes present in the RIR, as well as the amplitudes, frequencies, and damping constants for each mode, to be determined
using Bayesian model selection and parameter estimation, respectively. The method uses Skilling’s nested sampling algorithm to infer
parameter values. Results indicate that the method is especially useful in rooms with closely-spaced modes at low frequencies.
Contributed Papers
4:45
4pAA10. Methods of discovering problem room modes. Ben Bridgewater
(University of Kansas, Lawrence, KS 66045, [email protected])
Determining the best course of action to eliminate problem room modes
in the Kansas Public Radio Live Performance studio required a way of
measuring room modes and identifying the problem dimensions. This case
study focuses on how the room modes were measured and how the problem modes in the KPR studio were determined.
5:00
4pAA11. A comparison of source types and their impacts on acoustical
metrics. Keely Siebein (University of Florida, P.O. Box 115702, Gainesville, FL 32611, [email protected])
The purpose of this study is to compare acoustical measurements made
with different source types in a relatively reverberant room to determine if
ISO 3382 monaural acoustic parameters such as Reverberation Time (RT),
Early Decay Time (EDT), and Clarity Index (C80) yield different results
for natural acoustic source stimuli. The source stimuli used in the study
included Maximum Length Sequences (MLS), a running train of speech, a
running piece of music and a balloon pop. The scientifically calibrated
method is then compared to acoustical measurements obtained from natural
acoustic sources, which include anechoic recordings of voice and music
played through a directional speaker in the front of the room to simulate
activities that would normally take place in the room, such as a person
speaking and music being played during a worship service. This analysis is
performed to determine if there are differences in acoustic room parameters
using natural acoustic sources. This study essentially compares the effects
of different source stimuli on measured acoustic parameters. It was found
that different source signals and receiver locations significantly affect the
acoustic metrics derived from the acoustical measurements due to variations
in frequency, level and directionality.
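The parameters compared above (RT, EDT) are conventionally derived from the backward-integrated energy decay of a measured impulse response. As a hedged illustration (not the study's code), the following sketch computes a Schroeder decay curve and a T30 estimate from a synthetic exponential decay:

```python
import math

def schroeder_decay_db(h):
    """Backward-integrate the squared impulse response (Schroeder integration)
    and return the energy decay curve in dB re its initial value."""
    energy = [x * x for x in h]
    cumulative = [0.0] * len(energy)
    tail = 0.0
    for i in range(len(energy) - 1, -1, -1):
        tail += energy[i]
        cumulative[i] = tail
    total = cumulative[0]
    return [10.0 * math.log10(c / total) for c in cumulative]

def t30_seconds(decay_db, fs):
    """Estimate reverberation time from the -5 to -35 dB span of the decay
    curve, extrapolated to a 60 dB decay, as in ISO 3382."""
    t5 = next(i for i, level in enumerate(decay_db) if level <= -5.0) / fs
    t35 = next(i for i, level in enumerate(decay_db) if level <= -35.0) / fs
    return 2.0 * (t35 - t5)

# Ideal exponential decay with a 1 s reverberation time at fs = 1000 Hz.
fs = 1000.0
h = [math.exp(-6.9078 * i / fs) for i in range(2000)]
rt = t30_seconds(schroeder_decay_db(h), fs)  # close to 1.0 s
```

With real measurements the noise floor must be handled (e.g., by truncating the integration at the noise level) before the slope fit; differences between source types largely enter through the signal-to-noise ratio and directivity of the excitation.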
5:15
4pAA12. Detecting the angle of arrival of discrete echoes by measuring
polar energy time curves in a contemporary church. Ted Pyper and Ray
Rayburn (K2 Audio, 4900 Pearl East Cir., Suite 201E, Boulder, CO 80301,
[email protected])
In order to deal with speech intelligibility concerns from the main
sound system at a contemporary church, the authors executed measurements in the main assembly hall to detect late arriving reflections at
various positions in the audience and on the stage. By implementing Polar
Energy Time Curves (PETC) at each measurement location, discrete
reflections were identified by time arrival and also, more critically, by
angle of arrival. The advantage of calculating the PETCs on site during the
measurement session was that the authors could physically pinpoint
the sources of discrete echoes by using a laser positioned at the center of
the microphone armature used to take the measurements. This gave the
authors immediate feedback to diagnose reflections and help select additional measurement positions within the room. With this information, the
authors were able to identify room surfaces that contributed significantly
to the late arriving echoes and specify appropriate sound absorptive treatments for these surfaces.
5:30
4pAA13. Measurements of the just noticeable difference for reverberation time using a transformed up–down adaptive method. Adam Buck,
Matthew G. Blevins, Lily M. Wang, and Zhao Peng (Durham School of Architectural Engineering and Construction, University of Nebraska-Lincoln,
Omaha, NE 68182, [email protected])
This investigation sought to measure the just noticeable difference
(JND) for reverberation time (T30) using a rigorous psychophysical
method, namely the transformed up-down adaptive method. In ISO 3382-1:2009, the JND for reverberation metrics is taken to be 5%, based on
work by Seraphim (1958); however, others have suggested that the actual
JND is higher. In this project, sound samples with varying T30 were
auralized from impulse responses simulated in a realistically modeled
performance space using ODEON. The model’s absorption coefficients
were uniformly varied across all surfaces and frequencies to achieve the
desired T30s. Three reference reverberation times were utilized (one,
two, and three seconds), and eight T30 cases spaced at 4% intervals both
above and below each of the three reference T30s were created. Auralizations using a 500 ms white noise burst were presented in a computer-based testing program running a three-interval one-up two-down forced
choice method, presented in a sound booth over headphones with flat frequency response. The program randomly interleaved six staircase
sequences, three of which ascended and three of which descended
towards each reference T30. Results averaged across 30 participants will
be presented. [Work supported by a UNL UCARE Grant and the ASA
Robert W. Young Award.]
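For readers unfamiliar with the transformed up-down procedure: a one-up two-down track converges on the stimulus level answered correctly about 70.7% of the time. The following simulation is a generic sketch with a crude hypothetical listener model (it omits the 1/3 chance floor of a three-interval forced choice), not the study's software:

```python
import math
import random

def one_up_two_down(p_correct_at, start=16.0, step=2.0, n_trials=300, seed=7):
    """Simulate a transformed one-up two-down staircase.  After two consecutive
    correct responses the stimulus difference shrinks; after one incorrect
    response it grows.  The track converges toward the ~70.7%-correct level,
    estimated here as the mean of the last eight reversal levels."""
    rng = random.Random(seed)
    level, streak, last_dir = start, 0, 0
    reversals = []
    for _ in range(n_trials):
        if rng.random() < p_correct_at(level):
            streak += 1
            if streak == 2:              # two correct in a row: harder
                streak = 0
                if last_dir == +1:
                    reversals.append(level)
                last_dir = -1
                level = max(level - step, step)
        else:                            # one miss: easier
            streak = 0
            if last_dir == -1:
                reversals.append(level)
            last_dir = +1
            level += step
    tail = reversals[-8:]
    return sum(tail) / len(tail)

# Hypothetical listener whose 70.7%-correct point sits near a 9% change in T30.
psychometric = lambda pct: 1.0 / (1.0 + math.exp(-0.5 * (pct - 7.0)))
jnd_estimate = one_up_two_down(psychometric)
```

Interleaving several ascending and descending tracks, as in the study, guards against response bias and hysteresis in any single staircase.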
5:45
4pAA14. The role of acoustical characteristics for enhancement of
acoustic comfort in high-speed train passenger cars. Hyung Suk Jang,
Jooyoung Hong, and Jin Yong Jeon (Architectural Engineering, Hanyang
University, Seoul, Seongdong-gu 133791, Republic of Korea, [email protected]
hanyang.ac.kr)
The room acoustic environments in high-speed trains have been investigated to identify design elements of the passenger cars that improve acoustic comfort. Both room acoustical and psychoacoustical parameters affected by the absorption coefficients of interior finish materials were measured at the height of passengers' ears. A room acoustical simulation model was constructed based on the measurements to investigate the effect of design elements on acoustic quality in the carriage. Through computer simulation of the models with changes in acoustical properties such as absorption/diffusion coefficients of the interior surfaces, the effects of interior design components were investigated and classified to improve speech privacy.
THURSDAY AFTERNOON, 25 OCTOBER 2012
JULIA LEE A/B, 2:00 P.M. TO 3:15 P.M.
Session 4pABa
Animal Bioacoustics, Acoustical Oceanography, Structural Acoustics and Vibration, Underwater Acoustics,
and ASA Committee on Standards: Underwater Noise from Pile Driving II
Mardi C. Hastings, Cochair
George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0405
Martin Siderius, Cochair
ECE Dept., Portland State Univ., Portland, OR 97201
Contributed Papers
2:00
4pABa1. Results of a scaled physical model to simulate impact pile driving. Katherine F. Woolfe and Mardi C. Hastings (George W. Woodruff
School of Mechanical Engineering, Georgia Institute of Technology,
Atlanta, GA 30332-0405, [email protected])
To achieve a more complete understanding of the parameters involved
in the structural acoustics of impact pile driving, a scaled physical model
was developed and tested. While the design of the scaled model has been
presented previously (Woolfe et al., JASA 130: 2558, 2011), this presentation focuses on analysis of wall velocity data and intensity data obtained
from experimental evaluation of the model. The energy contained in a control volume surrounding the pile and the energy exchanged across the surface of the control volume were estimated from near field intensity
measurements. The amount of energy transferred to the fluid from the cylindrical shell structure during impact and the amount of energy transferred to
the structure from the fluid immediately following impact were determined.
Results indicate that the highly damped pressure waveform as observed in
the water column of the scaled physical model as well as in field data is due
primarily to the transfer of energy from the surrounding water back into the
structure. [Work supported by the Georgia Institute of Technology and the
Oregon Department of Transportation through a subcontract from Portland
State University.]
2:15
4pABa2. Modeling and visualization of the underwater sound field associated with underwater pile driving. Dara M. Farrell and Peter H. Dahl
(Department of Mechanical Engineering and Applied Physics Laboratory,
University of Washington, Seattle, WA 98105, [email protected])
As communities seek to expand and upgrade marine and transportation
infrastructure, underwater noise from pile driving associated with marine
construction is a significant environmental regulatory challenge. This work
explores results of different transmission loss models for a site in Puget
Sound and the effect of improved understanding of modeling on the extents
of zones of influence. It has been observed that most of the energy associated with impact pile driving lies below about 1000 Hz. Here, analysis of the spectral content of pile driving noise is undertaken to ascertain the optimal surrogate frequency to model the broadband nature of the noise.
Included is a comparison of a normal mode model, which is motivated by
work presented by Reinhall and Dahl [JASA 130, 1209 (2011)], with other
methods. A GIS (Geographic Information System) tool, ArcMap, is used to
map the sound level over the bathymetry, which has proved to be a useful
way of visualizing the impact of the noise. [Work supported by Washington
Sea Grant.]
2:30
4pABa3. A model for passive underwater noise suppression by bubble
curtains surrounding point or line sources in shallow water. Todd A.
Hay, Yurii A. Ilinskii, Evgenia A. Zabolotskaya, and Mark F. Hamilton
(Applied Research Laboratories, The University of Texas at Austin, P.O.
Box 8029, Austin, TX 78713-8029, [email protected])
Underwater noise generated by pile driving, rotating machinery, or towers supporting offshore wind turbines may disturb marine life and inhibit
detection of coastal activities via passive sonar and seismic sensors. Noise
abatement techniques have therefore been proposed to limit the propagation
of such noise into the far field, and many of these employ a curtain of
freely-rising bubbles or tethered encapsulated bubbles to surround the towers [Lee et al., J. Acoust. Soc. Am. 131, 3507(A) (2012)]. An analytic
model, based on a Green’s function approach, is presented for the passive
noise suppression provided by a discrete number of bubbles surrounding
submerged point sources or pulsating cylindrical towers above horizontally stratified layers of sediment. The sediment layers are modeled as viscoelastic media and the Green’s function is derived via angular spectrum
decomposition [Hay et al., J. Acoust. Soc. Am. 129, 2477(A) (2011)]. Simulations in which the bubbles are assumed to react independently to the incident field will be compared to those in which bubble-bubble interaction is
taken into account. The effects of bubble size distributions and void fractions on noise suppression will be investigated for different source configurations. [This work was supported by the Department of Energy under Grant
DE-EE0005380.]
2:45
4pABa4. Reduction of underwater sound from continuous and impulsive noise sources using tethered encapsulated bubbles. Kevin M. Lee, Andrew R. McNeese, Mark S. Wochner (Applied Research Laboratories, The University of Texas at Austin, 10000 Burnet Road, Austin, TX 78758, [email protected]), and Preston S. Wilson (Mechanical Engineering Department and Applied Research Laboratories, The University of Texas at Austin, Austin, TX)
Arrays of encapsulated bubbles have been shown to be very effective in reducing underwater sound radiated from various sources [J. Acoust. Soc. Am. 131, 3356 (2012); J. Acoust. Soc. Am. 131, 3506 (2012)]. These arrays have been used both to treat sources of noise and to protect a receiving area from external noise. The system provides noise reduction through the combined effects of bubble resonance attenuation and acoustic impedance mismatching. Results are reviewed from experiments in which tethered encapsulated bubble arrays were used to reduce underwater sound levels from both continuous and impulsive sound sources. In one set of experiments, the encapsulated bubble array attenuated continuous wave sound from a compact electromechanical source. In the other set of experiments, the continuous source was replaced by a combustive sound source [IEEE J. Oceanic Eng. 20, 311–320 (1995)], which was intended to simulate real-world impulsive noise sources such as impact pile driving or airguns used in seismic surveys. For both the continuous and impulsive sources, the encapsulated bubbles provided as much as 45 dB of reduction in the 10 Hz to 600 Hz frequency band. [Work supported by Shell Global Solutions and the ARL IR&D program.]
3:00
4pABa5. Dynamic response of a fish swim bladder to transient sound. Shima Shahab and Mardi C. Hastings (George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0405, [email protected])
High-level underwater sound exposures from impact pile driving activities and seismic air guns can cause physical damage to fishes. Damage to biological tissues depends on the strain rate induced by the initial rapid rise and fall times of the received sound pulse. A time-domain analysis is therefore needed to understand mechanisms of tissue damage resulting from exposure to sound from transient acoustic sources. To address this issue, the frequency-domain mathematical model originally developed by Finneran and Hastings (JASA 108: 1308-1321, 2000) was modified to predict the response of a fish swim bladder and surrounding tissues to transient signals. Each swim bladder chamber was modeled as an elastic prolate spheroidal shell filled with gas and connected to other parts of the anatomy. Results of three case studies are presented showing correlation between the strain rate of the swim bladder wall and tissue damage reported in five different species of fish.
THURSDAY AFTERNOON, 25 OCTOBER 2012
LESTER YOUNG A, 1:00 P.M. TO 5:50 P.M.
Session 4pABb
Animal Bioacoustics: Terrestrial Passive Acoustic Monitoring II
Ann E. Bowles, Chair
Hubbs-SeaWorld Research Institute, San Diego, CA 92109
Chair’s Introduction—1:00
Invited Papers
1:05
4pABb1. Using acoustical monitoring to assess mule deer behavior in response to natural gas development and noise disturbance
in the Piceance Basin, Colorado. Emma Lynch (Biology, Colorado State University, 1201 Oakridge Drive, Suite 100, Fort Collins, CO
80526, [email protected])
Passive recording units are valuable tools for assessing wildlife behavior, and can be used to address questions on multiple scales,
from individual to landscape. We used this versatile technique to explore one of the most pressing wildlife management issues in the
intermountain west: broad scale energy development. Much of this development occurs on critical wildlife habitat, and has been shown
to alter the physical and acoustical properties of landscapes at a rapid rate. We designed and packaged an inexpensive, collar-mounted
recording device for monitoring mule deer (Odocoileus hemionus) behavior with respect to natural gas development in Northwestern
Colorado. This presentation will provide a summary of collar design, data analysis, and preliminary results.
1:25
4pABb2. Peeling the onion: What alarm calls can reveal about mammalian populations. Stacie Hooper (Evolution and Ecology,
University of California at Davis, One Shields Avenue, Davis, CA 95616, [email protected]), Brenda McCowan (Population Health
and Reproduction, University of California at Davis, Davis, CA), Toni Lyn Morelli, and Christina Kastely (Environmental Science, University of California at Berkeley, Berkeley, CA)
While individually distinctive vocalizations have been used as a tool for the conservation and management of bird populations, few
studies have investigated the potential of a bioacoustic tool for use with terrestrial mammals. Even relatively simple signals, such as
alarm calls, have been shown to contain different types of information, and can even be individually distinctive in their structure. Similarities in vocal structure among individuals may also reflect close kin relationships, as suggested by previous work. We explored the
feasibility of a bioacoustic tool for monitoring mammalian populations by testing for the presence of individual, age-class, and sex-related information within the alarm calls of Belding’s ground squirrels from several geographically distinct populations. A neural network was used to successfully classify alarm calls to individual, age class and sex, demonstrating that this species’ alarm calls contain
information regarding at least three different caller characteristics. We also found that acoustic similarity, as measured by an index of
acoustic distance, was significantly correlated with genetic relatedness between individuals. This work indicates that vocalizations have
the potential to provide more information to wildlife managers about terrestrial mammal populations than just species presence, and
may even provide insight about the level of inbreeding.
1:45
4pABb3. Response of nesting northern goshawks to logging truck noise in Kaibab National Forest, Arizona. Teryl Grubb (Rocky
Mountain Research Station, U.S. Forest Service, 2500 S. Pine Knoll Dr., Flagstaff, AZ 86001, [email protected]), Larry Pater (Engineer
Research Development Center, (Retired), Champaign, IL), Angela Gatto (Kaibab National Forest, U.S. Forest Service, Fredonia, AZ),
and David Delaney (Engineer Research Development Center, Construction Engineering Research Laboratory, Champaign, IL)
We recorded 94 sound/response events at northern goshawk (Accipiter gentilis) nests 167, 143, and 78 m from the nearest road in
Jun 2010: 60 experimentally controlled logging trucks, plus 30 passing light aircraft, 3 cars, and 1 all-terrain vehicle (ATV). Logging
truck noise levels varied among nest sites and with distance from roads (F = 36.753, P < 0.001, df = 59). Aircraft noise levels for each
day of testing ranged from 45.6 to 67.9 dB, and varied little among test sites, 60.1-65.6 dB (F = 2.008, P = 0.154, df = 29). Our test logging truck (61.9 dB adjusted CLEQ) was no louder than passing aircraft (62.3 dB adjusted CLEQ; t-test, P = 0.191), which goshawks generally ignored. The logging truck resulted in 27% no response and 73% alert response; passing aircraft resulted in 90% no response and only 10% alert response; and 3 cars and 1 ATV, combined, resulted in 50% each for no response and alert response (χ² = 82.365, P < 0.005). Goshawk alert response rates were inversely proportional to nest distance from the nearest road (χ² = 29.861, P < 0.005).
Logging truck noise had no detrimental effects on nesting northern goshawks on the Kaibab Plateau, Arizona.
2:05
4pABb4. Use of acoustics to quantify and characterize bullet overshot into sensitive wildlife areas. David Delaney (U.S. Army,
ERDC/CERL, 2902 Newmark Drive, Champaign, IL 61821, [email protected]) and Tim Marston (U.S. Army, Fort
Benning, GA)
Live-fire training exercises on military installations are known to impact tree health and can potentially affect nesting/foraging habitat and behavior of terrestrial animals, though few studies have attempted to quantify and characterize bullet overshot from military
training operations downrange into sensitive wildlife areas. There is concern about the potential impact that downrange military munitions might have on the federally endangered Red-cockaded Woodpecker and its foraging and nesting habitat. Various anecdotal methods have been used in an attempt to document bullet overshot, but none of these methods have been shown to be effective. The
objective of this project is to demonstrate that acoustical techniques can accurately and effectively record and characterize live-fire bullet
overshot into sensitive wildlife areas downrange of active military ranges. This research is part of a long-term study on Fort Benning,
GA to investigate how munitions fire affects Red-cockaded Woodpecker nesting/foraging habitat and nesting behavior, while also investigating the effectiveness of earthen berms at stopping bullets from entering downrange areas. Preliminary results and field protocols
will be presented and discussed.
2:25
4pABb5. Determinants of accuracy in a terrestrial microphone array. Alan H. Krakauer (Department of Evolution and Ecology,
University of California at Davis, 2320 Storer Hall, One Shields Ave, Davis, CA 95616, [email protected]), John Burt (Department of Psychology, University of Washington, Seattle, WA), Neil Willits (Department of Statistics, University of California at Davis,
Davis, CA), and Gail L. Patricelli (Department of Evolution and Ecology, University of California at Davis, 2320 Storer Hall, One
Shields Ave, Davis, CA 95616)
Acoustic sensor arrays can allow researchers to localize the position of vocalizing animals. During the course of research on a threatened bird species, the greater sage-grouse, we developed a 24-channel wired array to non-invasively monitor male courtship displays at
traditional display grounds (i.e. leks). Here we describe a study in which we localized repeated playbacks of four local species while
varying speaker position, the number and arrangement of microphones, and accuracy of speed of sound and sensor location estimates.
As expected, localization accuracy was lowest when the speaker was outside the array and when using a linear microphone arrangement. We found no overall effect of species identity in spite of strong differences in time and frequency structure of the playbacks,
although we did find significant interactions of species with other factors in our analysis. Simulated errors in speed-of-sound-in-air and
estimation of sensor position revealed that while localization was most accurate when these errors were small, localization was still possible even with relatively large errors in these two factors. While we hope these results will help researchers to design effective sensor
arrays, specific outcomes will depend on study-specific factors as well as the specific sound processing and localization algorithms
employed.
2:45
4pABb6. Passive acoustic monitoring of bullfrog choruses: Spontaneous and evoked changes in group calling activity. Andrea M.
Simmons (Cognitive, Linguistic & Psychological Sciences, Brown University, Box 1621, Providence, RI 02912, [email protected]
brown.edu), Jeffrey M. Knowles (Neuroscience, Brown University, Providence, RI), Eva Jacobs (Cognitive, Linguistic & Psychological
Sciences, Brown University, Providence, RI), and James A. Simmons (Neuroscience, Brown University, Providence, RI)
We developed a multiple-microphone array method for recording temporal and spatial interactions of groups of vocalizing male bullfrogs, and for analyzing how this chorusing behavior is perturbed by playbacks of modified frog calls. Chorusing bullfrogs were
recorded over 3 nights (90 min sessions) using an array of ten MEMS microphones distributed along a 20-m sector beside a natural
pond. Vocal responses were digitized at 50 kHz using Measurement Computing A-to-D boards and customized software on a Lenovo
Thinkpad. Individual frogs were located by time-difference-of-arrival measurements at the array. Baseline chorus activity was recorded
for 10-20 min before and after playbacks. Playbacks consisted of digitized exemplars of two natural 5-croak advertisement calls, which
were manipulated by adding or subtracting spectral components or by introducing masking noise. Baseline chorus activity featured both
alternation of calls, mostly between far neighbors, and overlapping of calls, mostly by near neighbors. Bullfrog evoked vocal responses
were modified by playbacks of stimuli with altered spectral components, suggesting that the animals perceived these spectral modifications. The array technique indicated that responses of far neighbors were often more strongly impacted by playbacks than those of near
neighbors.
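Localization by time-difference-of-arrival, as used above, rests on estimating the relative delay of the same call at different microphones, typically via cross-correlation. A minimal self-contained sketch (illustrative only; the study's actual processing pipeline is not shown):

```python
def xcorr_lag(a, b):
    """Brute-force cross-correlation: return the lag (in samples) at which
    sequence b best matches sequence a.  A positive lag means the signal
    arrives later in b."""
    n = len(a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += a[i] * b[j]
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

# A single click arriving 25 samples later at the second microphone.
fs = 50000.0                     # 50 kHz, matching the digitization rate above
mic1 = [0.0] * 200; mic1[50] = 1.0
mic2 = [0.0] * 200; mic2[75] = 1.0
lag = xcorr_lag(mic1, mic2)      # 25 samples
tdoa = lag / fs                  # 0.5 ms
range_difference = tdoa * 343.0  # metres, assuming ~343 m/s sound speed in air
```

With three or more microphones at known positions, the set of pairwise TDOAs constrains the caller's location (hyperbolic localization), which is how individual frogs along the 20-m sector can be separated.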
3:05–3:20 Break
3:20
4pABb7. Environmental determinants of acoustic activity in Iberian anurans. Rafael Márquez, Diego Llusia (Dept. de Biodiversidad y Biología Evolutiva, Fonoteca Zoológica, Museo Nacional de Ciencias Naturales CSIC, José Gutiérrez Abascal 2, Madrid 28006, Spain, [email protected]), and Juan Francisco Beltrán (Dept. of Zoology, University of Seville, Sevilla, Spain)
We monitored acoustic activity of populations of anurans (genera Hyla and Alytes) in the Iberian Peninsula (Spain and Portugal) in
localities at thermal extremes of their distribution. Logistic and linear regression models revealed that the major social and environmental determinants of calling behavior (chorus recruitment and chorus duration) were similar over most populations. Chorus recruitment
was less dependent on environmental factors than chorus duration, which was also influenced by chorus size. Seasonal variation of night
temperatures in populations of Hyla showed wide overall ranges (above 11 °C) and gradual increases of the nightly mean (3-12 °C), which was positively associated with the day number in the breeding season. Within days, temperatures were typically close to their daily maximum at sunset, the initiation of calling activity. We compared ranges of calling temperatures among species, populations, and seasons over three years. We showed that calling temperature changed when anuran populations were subjected to different thermal environments. Species had wide calling temperature ranges across their distribution. Interannual comparisons showed that both terrestrial and aquatic breeding anurans were active during extremely hot breeding seasons. Lower thermal thresholds for the onset of calling
were different between conspecific populations, suggesting that other factors are needed to trigger reproduction.
3:40
4pABb8. Passive acoustic monitoring of fish in shallow water estuaries. Mark W. Sprague (Dept. of Physics, East Carolina University, Mail Stop 563, Greenville, NC 27858, [email protected]), Cecilia S. Krahforst (Coastal Resources Management Doctoral Program, East Carolina University, Greenville, NC), and Joseph J. Luczkovich (Inst. for Coastal Science and Policy and Dept. of Biology,
East Carolina University, Greenville, NC)
Passive acoustic monitoring is a useful tool for studying soniferous fishes in shallow water estuaries. We have used a variety of techniques for monitoring the acoustic environment in the coastal waters of North Carolina (USA) to study fishes in the Family Sciaenidae
(drums and croakers), which produce sounds with frequencies below 1000 Hz. We will present data recorded with hydrophones
deployed from a small boat, a hydrophone array towed behind a boat, and remote data loggers. We have used passive acoustic recordings
to study the distributions (large- and small-scale) and seasonality of acoustically active courtship and spawning behavior, acoustic interactions between predators and prey, the effects of noise from tugs and small boats on fish sound production, and relationships between
fish sound production and environmental parameters such as temperature and salinity. One limitation on shallow-water acoustic monitoring is the sound propagation cutoff frequency, which depends on the water depth. All frequency components below the cutoff frequency decay exponentially with propagation distance. This limit on shallow-water sound propagation must be considered when selecting locations for acoustic monitoring and comparing recordings made in waters of different depths. We will explore the implications of the cutoff frequency for acoustic monitoring.
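The depth dependence of this cutoff can be made concrete with a common textbook approximation for an ideal shallow-water channel over a faster fluid bottom (a sketch under idealized assumptions; the sound speeds below are hypothetical, and real sediment properties vary):

```python
import math

def cutoff_frequency_hz(depth_m, c_water=1500.0, c_bottom=1700.0):
    """First-mode cutoff frequency for an ideal shallow-water waveguide over a
    faster fluid bottom: f_c = c_w / (4 * H * sqrt(1 - (c_w / c_b)**2)).
    Sound below f_c is not trapped as a propagating mode and decays with range."""
    return c_water / (4.0 * depth_m * math.sqrt(1.0 - (c_water / c_bottom) ** 2))

# In 2 m of water over a sand-like bottom, f_c is roughly 400 Hz, so much of
# the sub-1000 Hz sciaenid sound energy propagates poorly at this depth.
f_c = cutoff_frequency_hz(2.0)   # ~398 Hz with these assumed sound speeds
```

Halving the depth doubles the cutoff, which is why recordings from sites of different depths are not directly comparable at low frequencies.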
4:00
4pABb9. Reproductive success of Mexican spotted owls (Strix occidentalis lucida) in relation to common environmental noise: biotic, non-military aircraft, and weather-related. Ann E. Bowles (Bioacoustics Laboratory, Hubbs-SeaWorld Research Institute,
2595 Ingraham Street, San Diego, CA 92109, [email protected]), Samuel L. Denes (Graduate Program in Acoustics, Pennsylvania
State University, State College, PA), Chris Hobbs, and Kenneth J. Plotkin (Wyle, Arlington, VA)
From 2000 to 2005, noise in Mexican spotted owl habitat in the Gila National Forest, NM, was monitored using an array of Larson-Davis (LD) sound level meters (SLMs). Thirty-nine SLMs were deployed across a 20 km × 24 km area, collecting 2-s time interval data mid-April to July, resulting in over 350,000 hr of data. Time-history profiles could be used to attribute many events to sources reliably
when SNR exceeded the background by 5-10 dB. The events were categorized as biotic (insects and chorusing birds), thunder, regional
commercial jet aircraft, and local air traffic (recreational and firefighting). Measured by the proportion of 2-s samples with LAeq > 60
dB, biotic sources and thunder were the most important. Regional commercial jet traffic was the most significant anthropogenic source,
accounting for 2% of the total. Based on cumulative sound exposure, thunder was the greatest contributor. Regression techniques were
used to relate owl reproductive success to noise metrics by source. Biotic noise was the only significant correlate, highly and positively
related to owl reproductive success. The most reasonable interpretation was a strong relationship between biotic noise and the owl prey base.
[Work supported by U.S. Air Force ACC/CEVP.]
4:20
4pABb10. Assessing the effects of sound on a forest-nesting seabird, the threatened marbled murrelet (Brachyramphus marmoratus). Emily J. Teachout (Washington Fish and Wildlife Office, U.S. Fish and Wildlife Service, 510 Desmond Drive SE, Suite 102,
Lacey, WA 98503, [email protected])
The marbled murrelet is a forest-nesting seabird that is federally listed under the Endangered Species Act. This species is unusual in
that it flies up to 70 miles inland to nest in mature trees rather than on shorelines near its foraging habitat. The first nest was not discovered until 1974, and much remains to be learned about this difficult-to-study species, including its basic hearing sensitivities and
response to auditory stimuli. In evaluating the effects of federal actions on this species, we must assess the potential effects of anthropogenic sound from a variety of sources, both at sea and in the forested environment. Over the past ten years, we have developed an
approach for analyzing the effect of anthropogenic sound by conducting literature reviews, convening expert panels, and drawing from
2064
J. Acoust. Soc. Am., Vol. 132, No. 3, Pt. 2, September 2012
164th Meeting: Acoustical Society of America
Downloaded 18 Oct 2012 to 192.87.79.51. Redistribution subject to ASA license or copyright; see http://asadl.org/terms
2064
information on other species. We address both impulsive and continuous sounds from sources including impact pile driving, blasting,
heavy equipment noise, and sonar. Interim thresholds for anticipating injurious effects from some of these sources are in use, and refinements to our analysis of forest-management activities were recently applied to landscape-scale consultations. The bioacoustic research
needs of this unique species continue to emerge as we apply these approaches in both the aquatic and terrestrial environments where
murrelets occur.
4:40
4pABb11. The new American National Standards Institute/Acoustical Society of America (draft as of 5 May 2012) standard method
to define and measure the background sound in quiet areas. Paul Schomer (Schomer and Associates Inc., 2117 Robert Drive, Champaign, IL 61821, [email protected]) and Kurt Fristrup (Natural Sounds and Night Skies Division, National Park
Service, Ft. Collins, CO)
This draft standard is a joint work effort of S3/SC1, the animal bioacoustics committee, and S12, the noise committee. The draft standard includes three major, distinct components: (1) a method to measure the background using unattended instruments, based on L-levels with L-90 as the default; (2) a definition for the ANS-weighted sound pressure level, which is simply the A-weighted sound level after deleting all of the sound energy in the 2-kHz octave band and above; and (3) requirements for monitoring the sound in parks and wilderness
areas. The background measurement procedure is intended mainly for low-noise residential areas and is relevant to the siting of such installations as wind farms and power plants. The ANS weighting is applicable both to measurement of the background and to monitoring in parks. The requirements for monitoring in parks and wilderness areas ensure that measurements adequately capture the range of natural ambient conditions.
The draft standard provides for two grades of measurement/monitoring: engineering or survey. In addition, this is the first standard to
clearly and unambiguously define and establish the requirements for the measurement of percentile levels (exceedance values).
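As a rough illustration of the ANS-weighting definition, A-weighted band energies at and above the 2-kHz octave band are simply discarded before energy summation. The band levels below are hypothetical, not values from the draft standard:

```python
import math

# Hypothetical A-weighted octave-band levels (dB), keyed by center frequency (Hz);
# illustrative numbers only, not taken from the draft standard.
band_levels_dba = {63: 40.0, 125: 42.0, 250: 45.0, 500: 44.0,
                   1000: 41.0, 2000: 38.0, 4000: 35.0, 8000: 30.0}

def ans_weighted_level(levels):
    """Energy-sum the A-weighted octave bands below the 2-kHz band,
    i.e., delete all sound energy in the 2-kHz octave band and above."""
    kept = [lvl for fc, lvl in levels.items() if fc < 2000]
    return 10.0 * math.log10(sum(10.0 ** (lvl / 10.0) for lvl in kept))

print(round(ans_weighted_level(band_levels_dba), 1))  # 49.8
```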
5:00–5:50 Panel Discussion
THURSDAY AFTERNOON, 25 OCTOBER 2012
TRIANON B, 2:00 P.M. TO 4:00 P.M.
Session 4pBA
Biomedical Acoustics: Therapeutic and Diagnostic Ultrasound
Robert McGough, Chair
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824
Contributed Papers
2:00
4pBA1. Effect of skull anatomy on intracranial acoustic fields for ultrasound-enhanced thrombolysis. Joseph J. Korfhagen (Neuroscience Graduate Program, University of Cincinnati, 231 Albert Sabin Way CVC 3948,
Cincinnati, OH 45267-0586, [email protected]), Jason L. Raymond
(Biomedical Engineering Program, College of Engineering and Applied Science, University of Cincinnati, Cincinnati, OH), Christy K. Holland (Internal Medicine, Division of Cardiovascular Diseases, University of
Cincinnati, Cincinnati, OH), and George J. Shaw (Emergency Medicine,
University of Cincinnati, Cincinnati, OH)
Transcranial ultrasound improves thrombolytic drug efficacy in ischemic
stroke therapy. The goal of this study was to determine the ideal ultrasound
parameters for obtaining peak rarefactional pressures exceeding the stable
cavitation threshold at the left anterior clinoid process (lACP) of the skull.
This location is near the origin of the middle cerebral artery, a common site
for ischemic stroke. For 0.5-, 1.1-, and 2.0-MHz ultrasound transducers, pulse
repetition frequencies (PRFs) ranging from 5.4 to 8.0 kHz were studied at a
50% duty cycle. Attenuation and ultrasound beam distortion were measured
from a cadaveric human skull. Each transducer was placed near the left temporal bone such that the unaberrated maximum acoustic pressure would be
located at the lACP. A hydrophone measured the acoustic field around the
lACP. Free-field measurements were taken in the same locations to determine attenuation and beam focus distortion. For 5 skulls, the average pressure attenuation at the lACP was 68±19, 91±5.1, and 94±4.7% for 0.5,
1.1, and 2.0 MHz, respectively. The degree of displacement of the beam
focus depended on the skull properties, but not on the center frequency or
PRF. In conclusion, lower frequencies exhibited lower attenuation and
improved penetration at the lACP. This work was supported by NIH-3P50NS044283-06S1.
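Pressure attenuation quoted as a percentage can be restated in decibels (a conversion implied but not given in the abstract): a 68% pressure reduction leaves 32% of the free-field amplitude, i.e., about −10 dB:

```python
import math

def attenuation_db(percent_reduction):
    """Convert a percentage pressure reduction to decibels re free field."""
    remaining = 1.0 - percent_reduction / 100.0
    return 20.0 * math.log10(remaining)

# The mean reductions reported for 0.5, 1.1, and 2.0 MHz:
for pct in (68.0, 91.0, 94.0):
    print(round(attenuation_db(pct), 1))  # -9.9, -20.9, -24.4
```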
2:15
4pBA2. Fiber-optic probe hydrophone measurement of lithotripter
shock waves under in vitro conditions that mimic the environment of
the renal collecting system. Guangyan Li, James McAteer, James Williams
(Department of Anatomy and Cell Biology, Indiana University School of
Medicine, 635 Barnhill Dr., Indianapolis, IN 46202, [email protected]), and
Michael Bailey (Applied Physics Lab, University of Washington, Seattle,
WA)
The fiber-optic probe hydrophone (FOPH) is the accepted standard for
characterization of lithotripter shock waves, and measurements are typically
conducted in water in the unconstrained free-field. To sort out potential factors that may affect in vivo measurements within the collecting system of
the kidney, we assessed the effect of contact between the fiber tip and
tissue, as might occur when working “blind” within the urinary tract, and
contamination of the fluid medium (isotonic saline) by blood as often occurs
during endourological procedures. Studies were performed using a Dornier
Compact-S electromagnetic lithotripter. Contact of the optical fiber with ex
vivo kidney tissue lowered pressure readings. The effect was greatest (~45%
reduction) when the fiber was oriented normal to tissue, but pressures were
also reduced when a length of fiber rested parallel against tissue (~5-10%
reduction). Placing the fiber tip near, but not touching tissue, increased variability in peak negative pressure (P-). Adding porcine blood to the medium
(up to 10% V/V) had no effect on readings. These findings suggest that position/orientation of the FOPH relative to surrounding tissue is critical and
must be controlled, but that micro-hematuria will not be a confounding factor for in vivo measurements. (NIH-DK43881)
2:30
4pBA3. Speckle generation and analysis of speckle tracking performance in a multi-scatter pressure field. Ayse Kalkan-Savoy (Biomedical
Engineering, UMass-Lowell, 1 University Ave, Lowell, MA 01854, ayse.k.
[email protected]) and Charles Thompson (Electrical and Computer Engineering, UMass-Lowell, Lowell, MA)
Speckle tracking imaging is used as a method to estimate heart strain.
An analysis of the accuracy of speckle tracking and its potential use in quantifying myocardial stress through estimation of heart motion is examined. Multiple scattering effects are modeled using the Kirchhoff integral formulation for the pressure field. The method of Padé approximants is used to accelerate convergence and to obtain the temporally varying characteristics of the scattered field. Phantoms having varied acoustical contrast media and speckle density are used in this study. The effectiveness of inter-frame image correlation methods for estimating speckle motion in high-contrast media is considered. (NSF Grant 0841392)
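The inter-frame correlation idea can be sketched in one dimension (the synthetic data and window sizes below are invented for illustration): the displacement estimate is the lag that maximizes the normalized cross-correlation between a reference window and the next frame:

```python
import math
import random

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def track_shift(frame0, frame1, win, max_lag):
    """Estimate speckle displacement as the lag maximizing the correlation."""
    ref = frame0[:win]
    return max(range(max_lag + 1), key=lambda lag: ncc(ref, frame1[lag:lag + win]))

# Synthetic speckle line shifted by 3 samples between frames.
random.seed(0)
line = [random.random() for _ in range(64)]
shifted = [0.0, 0.0, 0.0] + line
print(track_shift(line, shifted, win=32, max_lag=8))  # 3
```

Real 2-D speckle tracking correlates image patches rather than lines, but the estimator structure is the same.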
2:45
4pBA4. Analytical and numerical approximations for the lossy on-axis
impulse response of a circular piston. Robert McGough (Department of
Electrical and Computer Engineering, Michigan State University, 2120 Engineering Building, East Lansing, MI 48824, [email protected])
In biological tissues, the frequency dependence of attenuation and speed
of sound for ultrasound propagation is described by the power law wave equation, which is a partial differential equation with fractional time derivatives.
As demonstrated previously, the time domain Green’s function for the power
law wave equation combined with the Rayleigh-Sommerfeld integral is an
effective reference for calculations of the lossy on-axis impulse response of a
circular piston. Using the result obtained from this reference, two different
approximations to the lossy on-axis impulse response are evaluated. The first
approximation is an analytical expression that is proportional to the difference
between two cumulative distribution functions for maximally skewed stable
probability densities. The second approximation numerically convolves the
lossless impulse response with a maximally skewed stable probability density
function. The results show that both approximations achieve relatively small
errors. Furthermore, the analytical approximation provides an excellent estimate for the arrival time of the lossy impulse response, whereas the departure
time of the lossy impulse response is more difficult to characterize due to the
heavy tail of the maximally skewed stable probability density function. Both
approximations are rapidly calculated with the STABLE toolbox. [Supported
in part by NIH Grant R01 EB012079.]
3:00
4pBA5. The lossy farfield pressure impulse response for a rectangular
piston. Robert McGough (Department of Electrical and Computer Engineering, Michigan State University, 2120 Engineering Building, East Lansing, MI 48824, [email protected])
The impulse response of the velocity potential is useful for computing
transient pressures in lossless media, especially for calculations in the nearfield region. Closed form expressions for the lossless impulse response of
the velocity potential in the nearfield are available for circular and rectangular transducers and for several other geometries. A closed form lossless farfield expression is also available for rectangular transducers. Typically,
when the effects of attenuation are introduced, the numerical calculation is
performed in the frequency domain, and the time response is obtained with
an inverse fast Fourier transform. To derive an equivalent analytical result
directly in the time domain, all path lengths that appear in the denominator
scaling term of the lossy diffraction integral are treated as constants, and a
binomial expansion is applied to the path length that appears in the time
delay term. The resulting analytical expression, which describes the lossy
farfield pressure impulse response, is directly expressed in terms of maximally skewed stable probability density functions and cumulative distribution functions. Results are compared with the Rayleigh-Sommerfeld
integral, and excellent agreement is achieved in the farfield region. [Supported in part by NIH Grant R01 EB012079.]
3:15
4pBA6. Histological analysis of biological tissues using high-frequency
ultrasound. Kristina M. Sorensen (Department of Mathematics and
Statistics, Utah State University, 698 E 700 N, Logan, UT 84321, Kristina.
[email protected]), Timothy E. Doyle, Brett D. Borget, Monica
Cervantes, J. A. Chappell, Bradley J. Curtis, Matthew A. Grover, Joseph E.
Roring, Janeese E. Stiles, and Laurel A. Thompson (Department of Physics,
Utah Valley University, Orem, UT)
High-frequency (20-80 MHz) ultrasonic measurements have the potential to detect cancer and other pathologies within breast tissues in real time,
and thus may assist surgeons in obtaining negative, or cancer-free, margins
during lumpectomy. To study this approach, ultrasonic tests were performed
on 34 lumpectomy margins and other breast tissue specimens from 17
patients to provide pulse-echo and through-transmission waveforms. Time-domain waveform analysis yielded ultrasonic attenuation, while fast Fourier
transforms of the waveforms produced first- and second-order ultrasonic
spectra. A multivariate analysis of the parameters derived from these data
permitted differentiation of normal, adipose, benign, and malignant breast
pathologies. The results provide a strong correlation between tissue microstructure and ultrasonic parameters relative to the morphology and stiffness
of microscopic features such as ductules, lobules, and fibrous structures. Ultrasonic testing of bovine heart, liver, and kidney tissues supports this correlation, showing that tissues having stiff fiber-like or filled-duct structures,
such as myocardium or ductal carcinomas, display greater peak densities in
the ultrasonic spectra than tissues with soft, open duct-like structures, such
as kidney tissue or normal breast glands. The sensitivity of high-frequency
ultrasound to histopathology may assist in eliminating invasive re-excision
for lumpectomy patients. [Work supported by NIH R21CA131798.]
3:30
4pBA7. The design and fabrication of a linear array for three-dimensional intravascular ultrasound. Erwin J. Alles, Gerrit J. van Dijk (Laboratory of Acoustical Wavefield Imaging, Delft University of Technology, Delft,
Zuid-Holland, Netherlands), Antonius van der Steen (Biomedical Engineering,
Thorax Centrum, Erasmus MC, Rotterdam, Zuid-Holland, Netherlands),
Andries Gisolf, and Koen van Dongen (Laboratory of Acoustical Wavefield
Imaging, Delft University of Technology, Lorentzweg 1, Room D212, Delft,
Zuid-Holland, Netherlands, [email protected])
Current intravascular ultrasound catheters generate high-resolution cross-sectional images of arterial walls. However, the elevational resolution, in the
direction of the catheter, is limited, introducing image distortion. To overcome this limitation, we designed and fabricated a linear array which can be
rotated to image a three-dimensional volume at each pullback position. The
array consists of eight rectangular piezo-electric elements of 350 μm by 100
μm operating at a center frequency of 21 MHz with a fractional bandwidth of
80%, separated by a kerf of 100 μm. The array has been tested on both an ex
vivo bovine artery and phantoms and, using the real aperture of the array, axially densely sampled images of the artery are obtained in every position. The
array consistently yields significantly higher resolution in longitudinal images
and more detail in radial images compared to a conventional catheter.
3:45
4pBA8. Parametric imaging of three-dimensional engineered tissue constructs using high-frequency ultrasound. Karla P. Mercado (Department
of Biomedical Engineering, University of Rochester, Rochester, NY 14627,
[email protected]), María Helguera (Center for Imaging
Sciences, Rochester Institute of Technology, Rochester, NY), Denise C.
Hocking (Department of Pharmacology and Physiology, University of
Rochester, Rochester, NY), and Diane Dalecki (Department of Biomedical
Engineering, University of Rochester, Rochester, NY)
The goal of this study was to use high-frequency ultrasound to nondestructively characterize three-dimensional engineered tissues. We hypothesized that backscatter spectral parameters, such as the integrated backscatter
coefficient (IBC), can be used to quantify differences in cell concentration
in engineered tissues. We chose the IBC parameter since it estimates the
backscattering efficiency of scatterers per unit volume. In this study, acoustic fields were generated using single-element, focused transducers (center
frequencies of 30 and 40 MHz) operating over a frequency range of 13 to 47
MHz. Three-dimensional engineered tissue constructs were fabricated with
mouse embryonic fibroblasts homogeneously embedded within agarose. Constructs with cell concentrations ranging from 1 × 10^4 to 1 × 10^6 cells/mL were
investigated. The IBC was computed from the backscatter spectra, and parametric images of spatial variations in the IBC were generated. Results
showed that the IBC increased linearly with cell concentration. Further, we
demonstrated that parametric images detected spatial variations in cell concentration within engineered tissue constructs. Thus, this technique can be
used to quantify changes in cell concentration within engineered tissues and
may be considered as an alternative to histology. Furthermore, because this
technique is nondestructive, it can be employed for repeated monitoring of
engineered tissues throughout the duration of fabrication.
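In the usual spectral-parameter definition (the abstract does not spell out the formula), the IBC is the backscatter coefficient averaged over the analysis band; a minimal sketch under that assumption:

```python
def integrated_backscatter(freqs_mhz, bsc):
    """IBC = (1 / (f2 - f1)) * integral of BSC(f) df over the analysis band,
    approximated here with the trapezoidal rule."""
    area = sum(0.5 * (bsc[i] + bsc[i + 1]) * (freqs_mhz[i + 1] - freqs_mhz[i])
               for i in range(len(freqs_mhz) - 1))
    return area / (freqs_mhz[-1] - freqs_mhz[0])

# A flat (hypothetical) backscatter coefficient of 2.0 over 13-47 MHz
# averages to exactly 2.0, a quick sanity check on the normalization.
freqs = [13.0, 20.0, 30.0, 40.0, 47.0]
print(integrated_backscatter(freqs, [2.0] * len(freqs)))  # 2.0
```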
THURSDAY AFTERNOON, 25 OCTOBER 2012
LIDO, 1:30 P.M. TO 3:15 P.M.
Session 4pEA
Engineering Acoustics: Electromechanical Considerations in Transducer Design
R. Daniel Costley, Cochair
Geotechnical and Structures Lab., U.S. Army Engineer R&D Center, Vicksburg, MS 39180
Robert M. Koch, Cochair
Chief Technology Office, Naval Undersea Warfare Center, Newport, RI 02841-1708
Contributed Papers
1:30
4pEA1. Analysis of electromechanical parameters of thick rings under
radial, axial, and circumferential modes of polarization. Sairajan Sarangapani (Department of Electrical Engineering, University of Massachusetts,
Dartmouth, Fall River, MA 02747, [email protected]), Corey L. Bachand, Boris Aronov (BTech Acoustics LLC, Fall River, MA), and David A.
Brown (Department of Electrical Engineering, University of Massachusetts,
Dartmouth, Fall River, MA)
Piezoceramic short hollow cylinders and annular disks operating in the
extensional mode of vibration are typically used where the thickness is
small compared to the lateral dimensions. For a thin ring, the strain is along the
circumferential direction and the mechanical system is considered one
dimensional. But it is not clear to what extent a ring can be considered
thin. This study calculates the electromechanical parameters of thick rings
under different modes of polarization using the energy method. An analytical formulation is presented, and expressions for the internal energies and
the electromechanical parameters are derived by analyzing the mechanical
system of vibration under different polarizations. The resonance frequencies,
effective coupling coefficient, and correction factors for the various electromechanical parameters under different modes of polarization, as a function
of thickness along the radial direction, are presented.
1:45
4pEA2. Free in-plane vibrations of annular sector plates with elastic
boundary supports. Xianjie Shi (College of Mechanical and Electrical Engineering, Harbin Engineering University, No. 145 Nantong Street, Nangang District, Harbin, Heilongjiang 150001, China, [email protected]), Wen L. Li
(Department of Mechanical Engineering, Wayne State University, Detroit, MI,
USA), and Dongyan Shi (College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang, China)
In this investigation, a generalized Fourier series method is proposed for
the in-plane vibration analysis of annular sector plates with elastic restraints
along each of its edges. The in-plane displacement fields are universally
expressed as a new form of trigonometric series expansions with a drastically
improved convergence as compared with the conventional Fourier series.
The expansion coefficients are considered as the generalized coordinates,
and determined using the Rayleigh-Ritz technique. Several examples are presented to demonstrate the effectiveness and reliability of the current method
for predicting the modal characteristics of annular sector plates with various
cutout ratios and sector angles under different boundary conditions. It is also
shown that annular and circular plates can be readily included as special
cases of the annular sector plates when the sector angle is set equal to 360°.
2:00
4pEA3. Comparison of the electromechanical properties of bars vibrating in flexure under transverse, longitudinal, and tangential polarization. Sairajan Sarangapani (Department of Electrical Engineering,
University of Massachusetts, Dartmouth, Fall River, MA 02747, [email protected]), Boris Aronov (BTech Acoustics LLC, Fall River, MA), and
David A. Brown (Department of Electrical Engineering, University of Massachusetts, Dartmouth, Fall River, MA)
The calculation of the electromechanical properties of a stripe-electroded bar
vibrating in flexure is complicated, as it involves nonuniform electric
field distributions in the poling and operational modes and nonuniform mechanical strain distributions. This study is an extension of the previous work [J.
Acoust. Soc. Am. 130, 2394 (2011)] and involves the calculation of the electromechanical parameters of stripe-electroded bars vibrating in flexure. The
contributions due to the prominent longitudinal 33 mode, transverse 31 mode
and the shear 15 mode are taken into account and the corresponding expressions for the internal energy (electrical energy, electromechanical energy and
mechanical energy) are derived under the assumption that the piezoelement is
fully polarized. Results of calculations are presented for the effective coupling
coefficient as a function of various distances between the electrodes and are
compared with a single-sided stripe-electroded bar design and the traditional
bimorph designs using the transverse and longitudinal piezoelectric effects.
2:15
4pEA4. Capacitive micromachined ultrasound Doppler velocity sensor
using a nickel on glass process. Minchul Shin, Zhengxin Zhao (Mechanical
Engineering, Medford, MA), Paul DeBitetto (Draper Labs, Cambridge,
MA), and Robert D. White (Mechanical Engineering, Tufts University, 200
College Ave, Medford, MA 02155, [email protected])
The design, fabrication, modeling, and characterization of a small (1-cm² transducer chip) acoustic Doppler velocity measurement system using a
capacitive micromachined nickel-on-glass ultrasound transducer array technology is described. The acoustic measurement system operates in both
transmit and receive modes. The device consists of 168 nickel diaphragms, each 0.6 mm in
diameter, and operates at approximately 180 kHz. Computational
predictions suggest that in transmit mode the system will deliver an ultrasound beam with an 11-degree −3-dB beamwidth. Characterization of the cMUT sensor
with a variety of testing procedures, including acoustic testing, laser Doppler
vibrometry (LDV), beampattern tests, reflection tests, and velocity testing, will
be shown. LDV measurements demonstrate that the membrane displacement
at the center point is 0.1 nm/V² at 180 kHz. During beampattern testing, the
measured response was 0.1 mVrms at the main lobe with a 90-kHz drive at 20
Vpp (frequency doubling causes the acoustics to be at 180 kHz). The maximum range of the sensor is 1.7 m. Finally, a velocity sled was constructed
and used to demonstrate measurable Doppler shifts at velocities from 0.2 m/s to 0.8 m/s. Doppler shifts are clearly seen as the velocity changes.
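At the sled speeds and carrier frequency quoted above, the expected two-way Doppler shift follows from Δf ≈ 2 v f0 / c; the sound speed in air (343 m/s) is an assumption not stated in the abstract:

```python
def doppler_shift_hz(v_mps, f0_hz, c_mps=343.0):
    """Two-way Doppler shift for motion toward a fixed reflector (v << c)."""
    return 2.0 * v_mps * f0_hz / c_mps

# 0.2-0.8 m/s at 180 kHz gives shifts on the order of a few hundred Hz.
print(round(doppler_shift_hz(0.2, 180e3)))  # 210
print(round(doppler_shift_hz(0.8, 180e3)))  # 840
```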
2:30
4pEA5. An electro-mechanical model of a carbon nanotube with application to spectrum sensing. Kavitha Chandra, Armand Chery, Fouad
Attioui, and Charles Thompson (University of Massachusetts Lowell, 1 University Ave, Lowell, MA 01854, [email protected])
The transduction of resonant mechanical vibrations induced in a single
walled carbon nanotube (CNT), with application to frequency and signal detection, is investigated in this work. The CNT, clamped between metallic
source and drain junctions and suspended in a trench above a gate electrode,
behaves as a conducting channel controlled by appropriate gate and drain voltages. The Euler-Bernoulli model of an elastic beam is applied to model transverse vibrations of the CNT, considering the nonlinear stretching effect of the
beam.
beam. The solution approach utilizes a Chebyshev-Galerkin spectral expansion
and captures the influence of the fundamental and higher harmonic spatial
modes when subject to time-harmonic and stochastic loads distributed across
the beam. The transverse vibrations of the CNT generate a time-varying capacitance between the CNT and the gate that leads to non-equilibrium potential
and charge induced on the tube. A self-consistent approach using a ballistic
semi-classical transport model is applied iteratively to compute the charge and
current variations at the drain terminal. Application of the system as a signal
demodulator is analyzed by mixing the drain current with a matched high-frequency carrier signal that drives the gate terminal.
2:45
4pEA6. Highly efficient piezoelectric micromachined ultrasonic transducer for the application of parametric array in air. Yub Je, Wonkyu
Moon (Mechanical Engineering, Pohang University of Science and Technology, San 31, Hyojadong Namgu, Pohang, Kyungbuk 790-784, Republic
of Korea, [email protected]), and Haksue Lee (Agency for Defense
Development, Changwon, Kyungnam, Republic of Korea)
A highly efficient piezoelectric micromachined ultrasonic transducer
was developed for the application of parametric arrays in air. It is not easy to
generate the high-intensity sound required for nonlinear interaction in air due to
the huge impedance mismatch between the air and the transducer. A thin-film
transducer such as micromachined ultrasonic transducer can achieve high
mechanoacoustic efficiency by reducing the mechanical characteristic impedance of the radiating plate. By theoretical analysis, the power efficiency
of the micromachined ultrasonic transducer and its maximum achievable
value were considered. Also, based on the theoretical model, the design
issues which reduce the power efficiency and radiation intensity were listed
and discussed. The effects of leakage current through the parasitic impedances, the resistance of the electrode pad, and unit-to-unit variation of the
MUT array on transducer efficiency were identified as the main problems. By
adopting proper design approaches, a highly efficient pMUT array was
designed, fabricated, and tested as a source transducer for parametric array
in air. The pMUT array may promote the practical uses of a parametric array
source in air.
3:00
4pEA7. Acoustic filter design for precise measurement of a parametric
array in the near-field region. Yonghwan Hwang, Yub Je (Mechanical Engineering, POSTECH, Pohang, Kyungsangbuk-do, Republic of Korea), Wonho
Kim, Heesun Seo (ADD, Changwon, Kyungsangnam-do, Republic of
Korea), and Wonkyu Moon (Mechanical Engineering, POSTECH, Hyoja-dong,
Nam-gu, Pohang, Kyungsangbuk-do KS010, Republic of Korea,
[email protected])
A parametric array is a nonlinear conversion process that generates a
narrow beam of low-frequency sound using a small aperture and that is
used in active underwater SONAR and communication systems. A parametric array generates a highly directional difference frequency wave
(DFW) by nonlinear interaction of bi-frequency primary waves at higher
frequencies. However, it is difficult to measure the parametric array effect
at near distances from the transducer, because the high pressure level of
the primary waves may cause the receiving transducer to produce pseudo-sound signals at the difference frequency through its own nonlinearity. The
pseudo-sound signals make it difficult to measure the real DFW due to the
parametric array process. In this study, we confirmed the existence of
pseudo-sound and its effects on measurement of the near-distance DFW by
the parametric array using numerical simulations based on the KZK equation. Also, a newly designed acoustic filter was proposed to eliminate
pseudo-sound signals. This new acoustic filter uses the resonance of a
thickness extensional mode. With the acoustic filter, low frequencies
(DFW) were passed and high frequencies (primary frequencies) were effectively reduced. Using this acoustic filter, the precise characteristics of the
difference frequencies could be measured. [Work supported by
ADD(UD100002KD).]
THURSDAY AFTERNOON, 25 OCTOBER 2012
ANDY KIRK A/B, 1:55 P.M. TO 5:45 P.M.
Session 4pMU
Musical Acoustics and Animal Bioacoustics: Sound Production in Organs, Wind Instruments, Birds, and
Animals—Special Session in Honor of Neville Fletcher
Thomas D. Rossing, Cochair
Stanford University, Los Altos Hills, CA 94022
Joe Wolfe, Cochair
University of New South Wales, Sydney, NSW 2052, Australia
Chair’s Introduction—1:55
Invited Papers
2:00
4pMU1. Neville Fletcher: Scientist, teacher, scholar, author, musician, friend. Thomas D. Rossing (Music, Stanford University,
Stanford, CA 94305, [email protected])
Neville has distinguished himself in many ways. He has received many awards from scientific societies in Australia, America,
Europe, and Asia. His hundreds of publications, including seven books, deal with physics, meteorology, biology, acoustics, and even poetry. Today we focus on his contributions to acoustics and especially to the Acoustical Society of America and the Australian Acoustical
Society, where he is especially remembered for his work in musical acoustics.
2:20
4pMU2. Employing numerical techniques in the analysis and design of musical instruments. Katherine A. Legge and Joe Petrolito
(Civil Engineering and Physical Sciences, La Trobe University, PO Box 199, Bendigo, VIC 3552, Australia, [email protected])
In the simplest of terms, a musical instrument consists of a source of oscillation coupled to a resonating body. The exception to this
is an idiophone such as a triangle or gong, where the vibrating source acts as its own radiator of sound. Whatever the configuration,
the radiating structure is generally not a simple shape easily represented by a mathematical formulation, and analytical solutions to the
governing equations of even a simplified model are often not obtainable. Working with Neville Fletcher in the 1980s, we employed a personal computer to run a time-stepping routine through the equations of a simplified model of a kinked metal bar, depicting the
nonlinear coupling of its modes of vibration. A similar analysis of a gong modelled by a spherical shell with a kinked edge was well
beyond the available computing power. In this paper we illustrate how the development of computers and numerical techniques over the
intervening thirty years means that we are now able to describe and analyse complex shapes and are encroaching on the domain of the
instrument maker through the use of numerical optimisation for instrument design.
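The kind of time-stepping calculation described above can be sketched in a few lines. The fragment below (Python with NumPy, illustrative only and not the authors' kinked-bar model) steps two quadratically coupled oscillators, with mode 2 at twice the frequency of mode 1 so that the nonlinearity pumps energy into the initially silent mode; all parameter values are invented for the demonstration.

```python
import numpy as np

def step_modes(n_steps, dt=1e-4, w1=2 * np.pi * 100.0, eps=5e4):
    """Semi-implicit Euler time stepping of two quadratically coupled
    oscillator modes.  Mode 2 sits at twice the frequency of mode 1, so
    the x1**2 coupling term drives it resonantly; returns the energy
    history of mode 2."""
    w2 = 2.0 * w1
    x1, v1 = 1e-3, 0.0          # mode 1 starts excited
    x2, v2 = 0.0, 0.0           # mode 2 starts silent
    e2 = np.empty(n_steps)
    for i in range(n_steps):
        a1 = -w1 ** 2 * x1 - eps * x1 * x2        # back-reaction term
        a2 = -w2 ** 2 * x2 - 0.5 * eps * x1 ** 2  # resonant drive at 2*w1
        v1 += dt * a1
        v2 += dt * a2
        x1 += dt * v1           # positions updated with the new velocities
        x2 += dt * v2
        e2[i] = 0.5 * v2 ** 2 + 0.5 * w2 ** 2 * x2 ** 2
    return e2

e2 = step_modes(20000)          # 2 s of simulated time; e2 grows from ~0
```

The semi-implicit (symplectic) update keeps the linear oscillations from artificially gaining energy, which a naive forward Euler scheme would.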
2:40
4pMU3. The physics and sound design of flue organ pipes. Judit Angster (Acoustics, Fraunhofer IBP, Nobelstr 12, Stuttgart 70569, Germany, [email protected]), Andras Miklós (Applied Acoustics, Steinbeis Transfer Center, Stuttgart, Baden Wuerttemberg, Germany), Peter Rucz, and Fülöp Augusztinovicz (Dept. of Telecommunications, Budapest University of Technology and Economics, Budapest, Hungary)
Thanks to the high performance of modern computers and the advanced state of scientific research on flue organ pipe acoustics, sound design methods for flue organ pipes can now be developed for practical application in organ building. The research of Neville Fletcher and the team around him has contributed greatly to the scientific understanding of the behaviour of flue organ pipes. By extending this knowledge, sound design methods and dimensioning software for various special flue organ pipes have been developed within the framework of European research projects. The following topics will be discussed as examples: the development of optimal scaling and software for designing the depth and width of wooden organ pipes without changing the sound character; and the development of optimal scaling, design, and software for chimney flutes by means of appropriate laboratory experiments and computer simulations.
3:00
4pMU4. Coupling of violin bridge modes to corpus modes: An analytical model. Graham Caldersmith (Caldersmith Luthiers, 12 Main
Street, Comboyne, NSW 2429, Australia, [email protected])
The transmission of string vibration forces to the violin belly by the bridge is modulated by two principal bridge resonances around 3 kHz and 6 kHz, frequency bands critical to tone perception by the human hearing system. Much music acoustics research has dealt with these bridge modes (and those of the 'cello and bass) and their influence on the excitation of the corpus modes at the bridge feet. This analytical treatment of the string-to-corpus transmission by the bridge is necessarily complex, but it reveals factors in the process which explain
J. Acoust. Soc. Am., Vol. 132, No. 3, Pt. 2, September 2012
164th Meeting: Acoustical Society of America
the action of the different strings through the bridge and the levels of corpus mode excitation amplified by the bridge modes. The theoretical
predictions are tested against experimental responses with normal bridges and bridges blocked to eliminate the two important modes.
3:20–3:30 Break
3:30
4pMU5. The didjeridu: Relating acoustical properties to players’ reports of performance qualities. John Smith, Guillaume Rey,
and Joe Wolfe (Physics, University of New South Wales, Sydney, NSW 2052, Australia, [email protected])
Relating objective acoustical measurements of an instrument, without a player, to the qualities reported by players is often a difficult
goal in music acoustics. The didjeridu offers advantages in such a study because it is inherently ‘blind’—neither player nor researcher
knows what is inside—and because there are wide variations in objective parameters. Here, seven experienced players reported several
qualities and overall quality of 38 traditionally made didjeridus whose acoustic impedance spectra and overall geometry were measured.
The rankings for ‘overtones’, ‘vocals’, ‘resonance’, ‘loudness’ and overall quality were all negatively correlated with the characteristic impedance of the instrument, defined as the geometric mean of the first impedance maximum and minimum. ‘Speed’ was correlated positively
with the frequency of the lowest frequency impedance peak, near which the instrument plays. Assessments of geometrically simple PVC
pipes yielded similar results. The overall ranking was highest for instruments with a low magnitude impedance, particularly in the 1-2 kHz
range. This is the range in which players produce a strong formant in the radiated sound by varying vocal tract resonances with comparable
ranges of impedance. This study and the researchers were inspired by the pioneering research in music acoustics by Neville Fletcher.
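The characteristic impedance defined in this abstract, the geometric mean of the first impedance maximum and minimum, is straightforward to extract from a measured impedance magnitude curve. A minimal Python sketch with a synthetic |Z(f)| curve and a naive strict-neighbor peak finder (both assumptions, not the study's measurement pipeline):

```python
import numpy as np

def characteristic_impedance(z_mag):
    """Geometric mean of the first local maximum and first local minimum
    of |Z(f)|, per the abstract's definition.  The strict-neighbor peak
    finder assumes a smooth, noise-free curve."""
    z = np.asarray(z_mag, dtype=float)
    i = np.arange(1, len(z) - 1)
    maxima = i[(z[i] > z[i - 1]) & (z[i] > z[i + 1])]
    minima = i[(z[i] < z[i - 1]) & (z[i] < z[i + 1])]
    return np.sqrt(z[maxima[0]] * z[minima[0]])

# synthetic |Z(f)| curve: a resonance peak near 60 Hz, a dip near 120 Hz
f = np.linspace(10.0, 200.0, 500)
zmag = 1e6 * (1.0 + 5.0 * np.exp(-((f - 60.0) / 8.0) ** 2)
              - 0.9 * np.exp(-((f - 120.0) / 8.0) ** 2))
zc = characteristic_impedance(zmag)   # lies between the dip and the peak
```

On real impedance data a smoothing step or a dedicated peak-picking routine would replace the strict-neighbor test.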
3:50
4pMU6. Bilateral coordination and the motor basis of female preference for sexual signals in canary song. Roderick A. Suthers
(Medical Sciences, Indiana University, Jordan Hall, Bloomington, IN 47405, [email protected]), Eric Vallet, and Michel Kreutzer
(Laboratoire d’Ethologie et Cognition Comparees, Universite Paris Ouest, Nanterre, France)
The preference of female songbirds for particular traits in songs of courting males has received considerable attention, but the relationship
of preferred traits to male quality is poorly understood. There is evidence that some aspects of birdsong are limited by physical or physiological constraints on vocal performance. Female domestic canaries (Serinus canaria) preferentially solicit copulation with males that sing special
high repetition rate, wide-band, multi-note syllables, called ‘sexy’ or A-syllables. Syllables are separated by minibreaths but each note is produced by pulsatile expiration, allowing high repetition rates and long duration phrases. The wide bandwidth is achieved by including two notes
produced sequentially on opposite sides of a syrinx, in which the left and right sides are specialized for low or high frequencies, respectively.
The temporal offset between notes prevents cheating by unilaterally singing a note on the left side with a low fundamental frequency and
prominent higher harmonics. The syringeal and respiratory motor patterns by which sexy syllables are produced support the hypothesis that
these syllables provide a sensitive vocal-auditory indicator of a male’s performance limit for the rapid, precisely coordinated inter-hemispheric
switching, which is essential for many sensory and motor processes involving specialized contributions from each cerebral hemisphere.
4:10
4pMU7. Acoustical aspects of the flute. William Strong (Physics & Astronomy, Brigham Young Univ., C126 ESC, Provo, UT 84602,
[email protected])
Neville Fletcher’s wide-ranging interests in excitation mechanisms and sound production in musical instruments and other sound sources have resulted in many related reports and publications. The presenter had the good fortune to work with Fletcher while holding a Senior
Fulbright Fellowship at the University of New England. The joint research on acoustical aspects of the flute resulted in two papers:
“Acoustical characterization of flute head joints” (Fletcher, Strong, and Silk, JASA 71, 1255-1260) and “Numerical calculation of flute
impedances and standing waves” (Strong, Fletcher, and Silk, JASA 77, 2166-2172). These two papers will be reviewed in the presentation.
4:30
4pMU8. Free reeds as pressure-controlled valves. James P. Cottingham (Physics, Coe College, 1220 First Avenue, Cedar Rapids, IA
52402, [email protected])
The analysis of excitation mechanisms in pressure-controlled wind instruments by Neville Fletcher in a 1979 paper [Acustica 43, 63-72 (1979)] and later papers has been applied in understanding the operation of free reed instruments. This analysis enables calculation of
impedance curves for reed generators and yields the well-known result that an inward striking reed coupled to a pipe operates at a frequency below the reed resonance and below but close to the frequency of an impedance maximum of the pipe, while an outward striking
reed has operating frequency above both the reed resonance and the frequency of the pipe impedance maximum. This is useful, since
free reeds can function as either inward striking or outward striking reeds, depending on details of the reed design and possible coupling
with resonators. Fletcher’s analysis was first applied to free reeds in the harmonica by Johnston [Acoustics Australia, 16, 69-75 (1987)]
and has subsequently been applied to free reeds and free reed instruments in a variety of ways, which are summarized in this paper.
These include modeling the change in sounding frequency with blowing pressure as well as determining the sounding frequency of both
inward and outward striking reed-resonator combinations.
4:50
4pMU9. K-12 acoustics experiments. Uwe J. Hansen (Indiana State University, Terre Haute, IN 47803, [email protected])
Neville Fletcher has been active in many areas of acoustics, including acoustics education. He chaired a committee charged with
revamping the science education curriculum in Australia’s elementary and secondary education programs. Neville believes, as do I, in
the old maxim: “You hear - you forget; you see - you remember; you do - you understand.” In keeping with that, I will discuss a number of basic acoustics experiments adaptable to the elementary and secondary classroom or hands-on laboratory, including the speed of sound,
musical intervals, the Doppler effect, beats, resonance, spectral analysis and synthesis, musical instruments (violin, clarinet, etc.), the
human voice, and sound level.
5:10
4pMU10. The evolution of musical instruments. Neville H. Fletcher (Research School of Physics and Engineering, Australian
National University, Canberra, ACT 0200, Australia, [email protected])
The first musical instruments probably arose by chance because of the observed acoustic properties of materials, tools and weapons.
Subsequent human generations have then refined their design to produce the broad array of instruments we know today. Percussion instruments based upon simple wooden slats evolved into marimbas, metal shields evolved into gongs and bells, the taut string of a military bow
became a guitar and so on, while in the wind-instrument domain the uniform tubes of bamboo or animal leg bones became flutes, organs,
or reed instruments. Hydraulic mechanisms, and later electrical motors, were developed to power instruments such as pipe organs, while the feedback oscillations of electric amplifiers gave rise to electronic instruments. It is not possible to cover all aspects of this evolution in a short presentation, but some interesting examples, particularly for wind instruments, will be examined in more detail. It will be seen that this evolution is not a finished process but is still continuing today under influences very much like those of Darwinian natural selection.
5:30–5:45 Panel Discussion
THURSDAY AFTERNOON, 25 OCTOBER 2012
TRIANON C/D, 2:00 P.M. TO 5:05 P.M.
Session 4pNS
Noise, Architectural Acoustics, and ASA Committee on Standards: Ongoing Developments in Classroom
Acoustics—Theory and Practice in 2012, and Field Reports of Efforts to Implement Good Classroom
Acoustics II
David Lubman, Chair
DL Acoustics, 14301 Middletown Ln., Westminster, CA 92683-4514
Louis C. Sutherland, Chair
lcs-acoustics, 5701 Crestridge Ridge, Rancho Palos Verdes, CA 90275
Chair’s Introduction—2:00
Invited Papers
2:05
4pNS1. Renovation of an elementary school to meet high performance acoustic prerequisites for a special high school: Design
predictions and commissioning results. Steve Pettyjohn (The Acoustics & Vibration Group, Inc., 5700 Broadway, Sacramento, CA
95820, [email protected])
The renovation of an elementary school to house the MET high school program was done with a goal of meeting High Performance/
Green Building (HP/GB) requirements. Two acoustic prerequisites are part of the HP requirements. If these two requirements are not met, the project cannot meet HP criteria regardless of the number of points achieved in other areas. Acoustical consultants were brought in at the 90 percent construction document phase, with the mechanical system design, wall design, and room finishes largely complete. An evaluation of the mechanical system, reverberation time, and sound transmission loss showed that modifications would be required. The HVAC systems had been ordered and their placement was fixed. A room mock-up was done in the school to show that mechanical noise would exceed the 45 dB(A) limit as designed and installed. Early morning tests on a modified system showed that meeting the prerequisites for background sound would be very difficult without additional modifications. Wall, door, and window modifications were implemented, as was some of the acoustical treatment. Final sound commissioning tests proved that all of the prerequisites had been met, and the low background sound from all sources was a pleasant surprise.
2:25
4pNS2. A soundscape study in an open plan classroom in a primary school. Sang Bong Shin and Gary W. Siebein (School of Architecture, University of Florida, PO Box 115702, Gainesville, FL 32611-5702, [email protected])
Advantages and disadvantages of open plan classrooms have been reported around the world since this type of classroom was first constructed in the 1970s. A new type of open plan classroom, combined with small classrooms that accommodate specific learning activities, has been developed as a 21st century learning environment to foster creative, interactive learning. In this study, a new school
building which has this type of open plan classroom is examined with soundscape approaches in order to investigate the acoustical environments in the building. Acoustical events occurring in the open plan classroom in a primary school were analyzed and categorized.
The activities that created specific acoustical events were observed and categorized and then acoustical taxonomies were established for
each space. Acoustical measurements were conducted to examine the acoustical characteristics of each acoustic event and the combinations of acoustical events in the learning spaces. In addition, the results of the observations and measurements were compared to determine the differences in the effects of room conditions on the acoustical events in both traditional and open plan classrooms in the same
school. The study found that there are differences in the characteristics of the acoustical events as measured in traditional and open plan classrooms.
2:45
4pNS3. Room acoustic effects on speech comprehension by English-as-a-second-language versus native English-speaking listeners. Zhao Peng, Lily M. Wang, and Siu-Kit K. Lau (Durham School of Architectural Engineering and Construction, University of
Nebraska-Lincoln, 1110 S. 67th Street, Omaha, NE 68182-0816, [email protected])
English-as-a-second-language (ESL) listeners are generally more impaired than native English-speaking listeners on speech intelligibility tasks under reverberation and noise; however, more study is needed to ascertain these groups’ performances on speech comprehension tasks, rather than intelligibility. A recent study (Valente et al., 2012) showed that speech comprehension by both native English-speaking adults and children was more negatively affected by adverse acoustic environments than sentence recognition. The current project investigates the speech comprehension of both ESL and native English-speaking adult listeners under combinations of reverberation and noise. Sets of 15-minute-long listening comprehension tests based on the format of the Test of English for International Communication (TOEIC) were developed, anechoically recorded, convolved with binaural room impulse responses (BRIRs) to produce five mid-frequency reverberation times (0.4 to 1.1 seconds), and presented over loudspeakers in the Nebraska acoustics chamber. Background noise
was also varied at three levels (RC-30, 40 and 50) through an Armstrong i-Ceiling system. In total then, subjects were individually
exposed to 15 acoustic combinations of reverberation and background noise. Preliminary results will be presented and compared
between ESL and native English listeners. [Work supported by a UNL Durham School Seed Grant and the Paul S. Veneklasen Research
Foundation.]
3:05–3:20 Break
3:20
4pNS4. The link between cooling mechanical system type and student achievement in elementary schools. Ana M. Jaramillo and
Michael Ermann (School of Architecture + Design, Virginia Tech, Blacksburg, VA 24060, [email protected])
In aggregate, air-conditioning systems are the largest contributors to classroom noise, and some types of air-conditioning systems
are noisier than other types. The impact of noise on human performance has been widely studied and found to be stronger in children. A
study of 73 elementary schools in a single Orlando, Florida school district sought to relate mechanical cooling system type with student
achievement. It found that for schools populated with students of similar socio-economic background, schools cooling with the noisiest
types of mechanical system underperformed on student achievement tests relative to those with quieter types of systems. It also found
that schools with the poorest students were more likely to cool their classrooms with noisier systems.
3:40
4pNS5. Acoustic contributions and characteristics of floor treatments for elementary school classroom background noise levels.
Robert Celmer, Clothilde Giacomoni, Alex Hornecker, Ari M. Lesser, Adam P. Wells, and Michelle C. Vigeant (Acoustics Prog. & Lab,
University of Hartford, 200 Bloomfield Avenue, West Hartford, CT 06117, [email protected])
A two-part investigation involving the effect of floor treatments on classroom background noise levels will be presented. Phase 1
determined the effects of hard versus soft flooring on overall speech and activity noise levels using long-term calibrated sound recordings in elementary classrooms. Two similar-sized classrooms were used: one with vinyl composition tile (VCT) flooring, and one with
short-pile commercial carpeting. After parsing the recordings into separate segments of (a) teacher/student speech (alone), and (b) classroom activity noise, including footfalls, chair scrapes, and impacts (no speech), a significant decrease in overall levels was found in the
carpeted rooms. Phase 2 determined the acoustical properties of nine different flooring materials ranging from resilient athletic floors to
VCT to commercial carpeting. Sound absorption was measured following ISO 10534-2, while ISO 3741 sound power measurements
were made while either (a) using a standard tapping machine, or (b) scraping a classroom chair back/forth over the floor surface in a
reciprocating manner. In general, both carpet samples resulted in the lowest sound levels and the highest absorption. Relative performances of each material will be presented along with additional classroom usability factors, such as maintenance, cost and durability. [Work
supported by Paul S. Veneklasen Research Foundation.]
Contributed Paper
4:00
4pNS6. Comparing performance of classrooms with similar reverberation times but varying absorptive material configurations. James R. Cottrell and Lily M. Wang (Durham School of Architectural Engr. and Constr., Univ. of Nebraska - Lincoln, Lincoln, NE 68182, [email protected])
The ASA/ANSI Standard on Classroom Acoustics S12.60-2010 suggests
that the mid-frequency reverberation times of classrooms be less than 0.6
seconds; however, no details are provided on how to place absorptive materials optimally within the classroom. In this investigation, impulse responses
from a database measured at Armstrong World Industries have been analyzed to determine the effect of varying absorptive material configurations
on resulting acoustic parameters, including reverberation time, clarity index
(C50), and speech transmission index (STI). Two different material configurations were analyzed for each of three mid-frequency-averaged reverberation times: 0.6 sec, 0.8 sec, and 0.9 sec. Results show that metrics
significantly differ when the material placement varies dramatically, and
that configurations with more evenly distributed absorption produce better
conditions for speech intelligibility throughout the classroom. [Work supported by a UNL Undergraduate Creative Activities and Research Experience Grant.]
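The reverberation time and clarity index analyzed here can be computed from an impulse response with Schroeder backward integration and an early/late energy split, the standard textbook procedures. The sketch below runs on a synthetic exponential decay; it is not the processing chain used on the Armstrong database.

```python
import numpy as np

def schroeder_rt(ir, fs, db_start=-5.0, db_end=-25.0):
    """Reverberation time from Schroeder backward integration of the
    squared impulse response (a T20 fit extrapolated to 60 dB)."""
    edc = np.cumsum(ir[::-1] ** 2)[::-1]           # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])
    t = np.arange(len(ir)) / fs
    span = (edc_db <= db_start) & (edc_db >= db_end)
    slope, _ = np.polyfit(t[span], edc_db[span], 1)  # dB per second
    return -60.0 / slope

def clarity_c50(ir, fs):
    """C50: early-to-late energy ratio in dB with a 50 ms split point."""
    n50 = int(0.050 * fs)
    return 10.0 * np.log10(np.sum(ir[:n50] ** 2) / np.sum(ir[n50:] ** 2))

# synthetic impulse response whose energy decays 60 dB in 0.6 s
fs = 16000
t = np.arange(int(1.2 * fs)) / fs
ir = np.exp(-6.91 * t / 0.6) * np.cos(2 * np.pi * 1000.0 * t)
rt = schroeder_rt(ir, fs)       # recovers roughly 0.6 s
c50 = clarity_c50(ir, fs)
```

With measured classroom impulse responses, octave-band filtering would precede both calculations.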
4:15–5:05 Panel Discussion
THURSDAY AFTERNOON, 25 OCTOBER 2012
TRIANON A, 2:00 P.M. TO 4:00 P.M.
Session 4pPA
Physical Acoustics and Noise: Infrasound II
Roger M. Waxler, Chair
NCPA, University of Mississippi, University, MS 38677
2:00
4pPA1. Infrasound propagation in the atmosphere in the presence of a fine-scale wind velocity structure. Igor Chunchuzov, Sergey Kulichkov,
Oleg Popov, Vitaly Perepelkin (Obukhov Institute of Atmospheric Physics,
3 Pyzhevskii Per., Moscow 119017, Russian Federation, [email protected]), Roger Waxler, and Jelle Assink (National Center for Physical
Acoustics, University, MS)
The results of modeling infrasound propagation in the atmosphere from a 100-t surface explosion in Israel and volcano eruptions in Ecuador and Kamchatka are presented. The signal as a function of range from the source is calculated by the parabolic equation method for the vertical profile of wind velocity and
temperature obtained from the Ground-to-Space atmospheric model. The
effects of fine-scale layered structure of wind velocity and temperature fields on
infrasound propagation in the atmosphere have also been taken into account.
The one-dimensional (vertical and horizontal) wavenumber spectra of the wind
and temperature fluctuations associated with the fine-scale structure are close to
the observed spectra in the middle and upper atmosphere. The calculated wave
forms, amplitudes and durations of the stratospheric and thermospheric arrivals
are compared with those observed in the experiments. It is shown that the scattering of infrasonic signals by anisotropic fluctuations leads to a significant
increase in the duration of the stratospheric and thermospheric arrivals in comparison with the case when these fluctuations are absent. The acoustic field scattered from anisotropic inhomogeneities may be responsible for the arrivals
of the acoustic pulses observed in the acoustic shadow zones.
2:15
4pPA2. An infrasound calibration system for characterizing the environmental effect on sensor sensitivity and noise floor. Carrick L. Talmadge (NCPA, University of Mississippi, 1 Coliseum Drive, University,
MS 38655, [email protected])
An infrasound calibration system has been developed at the National
Center for Physical Acoustics. The calibration tank is composed of a 1" cylindrical shell, 40" in diameter and 40" long, with 40" diameter hemispherical
end caps. The interior volume of the tank is approximately 1.8 cubic meters.
Each hemisphere has a 10" punch-out with sealing gasket that allows either
a speaker assembly or an “end cap” to be attached to each end. Normal access is through an end cap attached to one end, with a speaker assembly
attached to the other. The end cap allows rapid switching out of sensors.
The speaker assembly consists of a 10" subwoofer with a sealable back volume designed to equalize the static pressure in the interior of the tank to that
of the speaker back volume. The subwoofer is able to generate pressures up to 10 Pa in the interior of the chamber, with excellent isolation from external sound. Using two subwoofers, playing a different frequency tone
through each speaker and measuring their associated intermodulation distortion in the transduced signal, the linearity of the sensor can be accurately
assessed. The measured leak time constant of the tank is longer than one week, permitting the characterization of the sensor response to ambient pressure. The large interior volume of the calibration chamber allows the interior to be heated or cooled and returned slowly to ambient conditions, so that the effects of changing temperature on the sensor can be assessed.
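The two-tone intermodulation check described above can be simulated in a few lines. This Python sketch passes two infrasonic tones through a hypothetical mildly nonlinear sensor (the 1% quadratic term is an invented stand-in) and reads the distortion product at f2 - f1 off the spectrum:

```python
import numpy as np

def tone_level(sig, fs, f):
    """Amplitude of the spectral component nearest frequency f (the test
    tones below fall exactly on FFT bins, so there is no leakage)."""
    spec = np.abs(np.fft.rfft(sig)) * 2.0 / len(sig)
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

fs, dur = 1000.0, 8.0
t = np.arange(int(fs * dur)) / fs
f1, f2 = 5.0, 7.0                        # two infrasonic test tones
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.01 * x ** 2                    # hypothetical 1% quadratic distortion

# a quadratic nonlinearity puts intermodulation products at f2 - f1 and f2 + f1
imd_ratio = tone_level(y, fs, f2 - f1) / tone_level(y, fs, f1)
```

Because the intermodulation products fall away from both test tones, this ratio isolates the sensor's nonlinearity even when the excitation itself is imperfect.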
2:30
4pPA3. Detection of infrasonic energy from tornado-producing storms.
Carrick L. Talmadge (NCPA, University of Mississippi, 1 Coliseum Drive,
University, MS 38655, [email protected]), William G. Fraizer, Roger Waxler
(NCPA, University of Mississippi, University, MS), Joseph C. Park (Center
for Operational Oceanographic Products and Services, National Oceanic
and Atmospheric Administration, Silver Spring, MD), Daniel E. Kleinert
(NCPA, University of Mississippi, University, MS), Geoffery E. Carter,
Gerald Godbold, David Harris, Chad Williams (Hyperion Technology
Group, Inc, Tupelo, MS), and Hank R. Buchanan (NCPA, University of
Mississippi, University, MS)
There are numerous reports in the literature on the observation of infrasound emitted from tornadic thunderstorms. Most of these observations
have been made from sensors that are several hundreds of kilometers from
the location of the storm, and “ground truth” about the tornadic activity is
not well established. We report here on a campaign carried out during the
Contributed Papers
summer of 2011 in which 50 infrasound microphones were deployed, as
part of an ongoing multi-university program on hazard detection and alert
funded by the National Oceanic and Atmospheric Administration. Sensors
were placed along the paths of developing tornadic storms. We focus here
on a severe weather outbreak that took place near Oklahoma City on May
24, 2011, in which a total of 7 tornados including one F5 and two F2 tornados were produced. Three sensors were located between the paths of an F4
and an F5 tornado, and 11 additional sensors were located northeast of an
F4 tornado that generated a 75-km track. Substantial meteorological information, including ground truth about tornados (intensity and size as a function of time), and the relatively close proximity of the sensors to the storms, provide us with a level of detail not available for previous storms. We will
report on our infrasound measurements and analysis from this outbreak as
well as discuss data from two other interceptions of tornadic storms, which
occurred on May 30 and June 19, 2011.
2:45
4pPA4. Geomagnetic auroral infrasound wave characteristics and generation. Justin J. Oldham, Charles R. Wilson, John V. Olson, Hans Nielsen,
and Curt Szuberla (Physics, University of Alaska Fairbanks Geophysical
Institute, PO Box 750972, Fairbanks, AK 99775, [email protected])
Periods of persistent, high-trace velocity infrasound activity have
been routinely observed in the data from the CTBT/IMS I53US infrasound station in Fairbanks, AK. Previous studies of magnetic field disturbances and displays of aurora borealis suggested that these infrasound
signals were generated as bow waves by moving auroral electrojets.
Recent analysis of the data obtained from the Geophysical Institute Magnetometer Array, the Poker Digital All-Sky Camera, and historic data
from the Poker Flat Imaging Riometer have demonstrated the presence of
extended periods of infrasound signals during times of enhanced geomagnetic activity along with verification that the observed infrasound is being
generated in the lower ionosphere. Further examination of these data sets
and the I53US infrasound data provide a basis for comparison with idealized magneto-hydrodynamic models of geomagnetic auroral infrasound
wave generation.
3:15
4pPA6. Pneumatic infrasound sources. Thomas Muir, Justin Gorhum,
Charles Slack, Martin Barlett, and Timothy Hawkins (Applied Research
Laboratories, University of Texas at Austin, P.O. Box 8029, Austin, TX
78713, [email protected])
The generation of infrasound from the pulsation of compressed air is
examined analytically and experimentally to explore the aerodynamic
physics as well as engineering implementations. Several model experiments
were developed and utilized to explore the problems associated with this
approach. Applications to long range propagation in the atmosphere, including calibration and testing of infrasonic sensor systems are addressed.
[Work supported by ARL:UT Austin.]
3:30
4pPA7. Application of a blind source separation algorithm for the
detection and tracking of tornado-generated infrasound emissions during the severe weather outbreak of 27 April 2011. Hank S. Rinehart (Miltec Systems, Ducommun Miltec, 678 Discovery Dr NW, Huntsville, AL 35806,
[email protected]), Chris Clark, Matt Gray, and Kevin Dillion
(Miltec Research and Technology, Ducommun Miltec, Oxford, MS)
April 25-28, 2011 has been identified by many as the most significant
and severe single-system outbreak of tornadoes in recorded history. One day
in particular, the 27th of April, has been classified by the National Oceanic
and Atmospheric Administration (NOAA) as the fourth deadliest tornado
outbreak in US history. Severe tornadic activity on this day inflicted catastrophic damage to life and property across areas of Mississippi, Alabama, Georgia, and Tennessee. During this outbreak, multiple Ducommun Miltec-developed infrasound sensors collecting continuous, high-resolution data were deployed in two-dimensional array configurations in Northern Alabama. Prior research on the collection and analysis of infrasonic emissions from severe weather phenomena has provided much insight into the nature
of tornado-generated infrasound. Our effort focuses on the application of
novel bearing estimation algorithms using closely spaced (4-6 m) array elements. Direction of Arrival (DOA) estimates, derived from Blind Source
Separation (BSS) techniques, will be presented for at least two significant
tornadoes: the long-track EF5 that impacted Hackleburg and Phil Campbell,
AL and the large multi-vortex EF4 that struck Cullman, AL. Correlation of
infrasound detection and bearing estimate initiation and termination with
NOAA Storm Prediction Center (SPC) Storm Reports will also be reviewed.
3:00
4pPA5. Non-linear infrasound signal distortion and yield estimation
from stratospheric and thermospheric arrivals. Joel B. Lonzaga, Roger Waxler, and Jelle Assink (National Center for Physical Acoustics, University of Mississippi, 1 Coliseum Dr., University, MS 38677, [email protected])
The propagation of sound through gases and fluids is intrinsically
non-linear. The degree of non-linearity increases as the density of the
propagation medium decreases. As a consequence, signals traveling
through the upper atmosphere undergo severe non-linear distortion. This
distortion takes two forms: waveform steepening and pulse stretching.
These nonlinear effects are numerically investigated using non-linear ray
theory. On one hand, waveform steepening is generally associated with
stratospheric arrivals with sufficiently large amplitude for which the
decreased density of the atmosphere causes shock fronts to form. On the
other hand, pulse stretching is generally associated with thermospheric
arrivals where severe attenuation prevents significant shock formation but
severe non-linearity causes significant pulse stretching. Since non-linear
effects increase with increasing signal amplitude, it is possible to use
non-linear distortion to estimate signal source strength. Uncertainties in
propagation path and in thermospheric attenuation will limit the accuracy
of this approach. Using the non-linear propagation model, the fundamental limits of this approach are also investigated. Comparisons with available data will be made.
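The waveform steepening described above can be illustrated with the lossless plane-wave (Poisson) solution, in which each point of the waveform travels at c0 + βu. The amplitude, frequency, and β below are assumed for illustration, and the sketch is valid only short of shock formation:

```python
import math

BETA = 1.2   # coefficient of nonlinearity (value for air, assumed here)
C0 = 343.0   # small-signal sound speed (m/s)

def steepened(u0, times, x):
    """Lossless plane-wave (Poisson) solution: each point of the source
    waveform travels at c0 + BETA*u, so in retarded time its arrival
    shifts by -BETA*u*x/C0**2.  Valid only short of shock formation."""
    return [(t - BETA * u * x / C0 ** 2, u) for t, u in zip(times, u0)]

def max_slope(pairs):
    """Largest |du/dt| of a (time, value) waveform."""
    pairs = sorted(pairs)
    return max(abs((u2 - u1) / (t2 - t1))
               for (t1, u1), (t2, u2) in zip(pairs, pairs[1:]) if t2 > t1)

# one period of an invented 10 Hz wave with unit velocity amplitude
times = [i / 2000.0 for i in range(201)]
u0 = [math.sin(2 * math.pi * 10.0 * t) for t in times]
s0 = max_slope(steepened(u0, times, 0.0))     # slope at the source
s1 = max_slope(steepened(u0, times, 1000.0))  # slope after 1 km
```

After 1 km the maximum slope has grown severalfold, which is the steepening that eventually produces a shock front.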
2074
J. Acoust. Soc. Am., Vol. 132, No. 3, Pt. 2, September 2012
3:45
4pPA8. Infrasound as a remote sensing technique for the upper atmosphere. Jelle D. Assink, Roger Waxler, Joel Lonzaga, and Garth Frazier (NCPA/UM, 1 Coliseum Dr., University, MS 38677, [email protected])
Understanding and specification of the higher altitudes of the atmosphere with global coverage over all local times is hampered by the challenges
of obtaining direct measurements in the upper atmosphere. Methods to measure the properties of the atmosphere above the stratopause are an active area
of scientific research. In this presentation, we revisit the use of infrasound as
a passive remote sensing technique for the upper atmosphere. In the past,
various studies focused on the sensitivity of infrasound to various upper
atmospheric processes. It has been shown that the current state-of-the-art
climatologies for the middle and upper atmosphere are not always in agreement with the acoustic data, suggesting a use of infrasound as a complementary remote sensing technique. Previously, we reported on the error in
thermospheric celerities which was found to be in accord with the typical
uncertainty in upper atmospheric winds and temperature. In this presentation, we report on the expected variation of the various infrasound observables from a forward modeling perspective. This information, in
combination with the experimental measurement error provides constraints
on the expected resolution from the inverse problem. With this information,
we minimize misfits in travel time and source location using a Levenberg-Marquardt search algorithm in combination with ray theory.
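As a toy version of the travel-time minimization described above (not the authors' implementation, which couples the search to ray theory), a Levenberg-Marquardt fit of source position and origin time to straight-ray travel times at constant celerity might look like this; the station layout and celerity value are invented:

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def locate(stations, t_obs, c, guess, lam=1e-3, iters=50):
    """Levenberg-Marquardt fit of source (x, y) and origin time t0 to
    travel times modeled as straight rays at a constant celerity c."""
    x, y, t0 = guess
    for _ in range(iters):
        J, r = [], []
        for (sx, sy), t in zip(stations, t_obs):
            d = math.hypot(x - sx, y - sy) or 1e-9
            r.append(d / c + t0 - t)                      # travel-time misfit
            J.append([(x - sx) / (c * d), (y - sy) / (c * d), 1.0])
        # damped normal equations: (J^T J + lam*I) delta = -J^T r
        JtJ = [[sum(row[i] * row[j] for row in J) + (lam if i == j else 0.0)
                for j in range(3)] for i in range(3)]
        Jtr = [sum(row[i] * ri for row, ri in zip(J, r)) for i in range(3)]
        dx, dy, dt0 = solve3(JtJ, [-g for g in Jtr])
        x, y, t0 = x + dx, y + dy, t0 + dt0
    return x, y, t0

# synthetic example: stations in km, infrasound celerity 0.34 km/s (assumed)
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (80.0, 90.0)]
true_src, true_t0, c = (40.0, 25.0), 5.0, 0.34
t_obs = [math.hypot(true_src[0] - sx, true_src[1] - sy) / c + true_t0
         for sx, sy in stations]
x, y, t0 = locate(stations, t_obs, c, guess=(50.0, 50.0, 0.0))
```

With noise-free synthetic times the fit recovers the source location and origin time essentially exactly; real data would add the propagation-path and attenuation uncertainties the abstract discusses.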
164th Meeting: Acoustical Society of America
Downloaded 18 Oct 2012 to 192.87.79.51. Redistribution subject to ASA license or copyright; see http://asadl.org/terms
THURSDAY AFTERNOON, 25 OCTOBER 2012
LIDO, 3:30 P.M. TO 4:30 P.M.
Session 4pSA
Structural Acoustics and Vibration: Applications in Structural Acoustics and Vibration
Robert M. Koch, Cochair
Chief Technology Office, Naval Undersea Warfare Center, Newport, RI 02841-1708
R. Daniel Costley, Cochair
Geotechnical and Structures Lab., U.S. Army Engineer R&D Center, Vicksburg, MS 39180
Contributed Papers
3:30

4pSA1. Effects of acousto-optic diffraction in the acoustic frequency range on the laser Doppler vibrometry method in air. Hubert S. Hall, Joseph F. Vignola, John A. Judge, Aldo A. Glean, and Teresa J. Ryan (Mechanical Engineering, The Catholic University of America, 620 Michigan Ave NE, Washington, DC 20064, [email protected])

The effect of acousto-optic diffraction on the transmitted laser signal light path in the laser Doppler vibrometry (LDV) method has been a known concern for measurement accuracy. To further understand and quantify the implications for LDV accuracy in the in-air case, an experimental study was performed in a high-intensity sound field within the acoustic frequency range (less than 20 kHz). Results of the study showed that acousto-optic diffraction has minimal impact on the accuracy of LDV measurements for the in-air case in sound fields below 20 kHz. Follow-on work investigating the effect of measurements in water is proposed. It is hypothesized that the higher refractive index of water will exacerbate the impact of the acousto-optic effect on the accuracy of LDV measurements. Previous work in the field in the megahertz frequency region has shown this. However, within the acoustic frequency range, the accuracy implications remain unknown.

3:45

4pSA2. References per coherence length: A figure of merit for multireference acoustical holography. Alan T. Wall, Michael D. Gardner, Kent L. Gee, and Tracianne B. Neilsen (Dept. of Physics and Astronomy, Brigham Young University, Provo, UT 84602, [email protected])

Multireference partial field decomposition (PFD) can be used to generate coherent holograms for scan-based near-field acoustical holography measurements. PFD is successful when the reference array completely senses all independent subsources, but meeting this requirement is not straightforward when the number of subsources and their locations are ambiguous (such as in aeroacoustic sources). A figure of merit based on spatial coherence lengths, called references per coherence length (RPLc), is a useful metric to guide inter-reference spacing in the array design so that the source is spanned. Coherence length is defined as the axial distance over which the ordinary coherence drops from unity to some desired value. Numerical experiments involving an extended, partially correlated source show that sufficiency of the reference array for different source conditions may be simply expressed in terms of RPLc. For sources of varying spatial coherence and over a large range of frequencies, one reference per coherence length is equivalent to sensing all independent subsources.

4:00

4pSA3. Physical quantities measurement by using the fiber optic sensor in the pendulum ball collision. Jongkil Lee (Mechanical Engineering Education, Andong National University, 388 Sonchun-dong, Andong, Kyungbuk 760-749, Republic of Korea, [email protected]), Alex Vakakis (Mechanical Engineering, University of Illinois at Urbana-Champaign, Urbana, IL), and Larry Bergman (Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, IL)

It is of interest to measure the impact force when a moving pendulum ball collides with a fixed body. In this paper, physical quantities were measured using a fiber optic sensor when a pendulum ball collides with a fixed ball on the wall. Both steel balls are 1 inch in diameter. The fiber optic Sagnac interferometer is well established as a sensor for acoustics and vibration. A 1 mm hole was made through the fixed ball, and the optical fiber in the Sagnac loop was passed through the hole. The ball was welded to the wall as the fixed ball. When an external force is applied to the fixed ball, the optical fiber in the Sagnac loop detects the impact force. The output signal is proportional to the output voltage on the oscilloscope. Based on the results, the suggested fiber optic sensor can measure the impact intensity, and this technique can be extended to moving bodies.

4:15

4pSA4. Experimental investigation on reconstruction of sound field based on spherical near-field acoustic holography. Xinguo Qiu, Minzong Li, Huancai Lu, and Wei Jiang (Key Laboratory of E&M, Ministry of Education & Zhejiang Province, College of Mechanical Engineering, Zhejiang University of Technology, PO Box 21, 18 ChaoWang Road, Hangzhou, Zhejiang Province 310014, P.R.C., [email protected])

This paper presents the results of an experimental study on the methodology of spherical near-field acoustic holography (spherical NAH) for reconstructing an interior sound field. The experiment was carried out in a full anechoic chamber, in which the sound field was generated with different combinations of speakers at different positions; a rigid spherical array was used to collect the field acoustic pressures as input to the reconstruction calculation. Three cases were investigated. In Case 1, a source was set near the microphone array. In Case 2, two sources were set eccentrically, opposite each other around the microphone array. In Case 3, two sources were placed on one side of the microphone array on the same orbit, positioned a small angle apart. The accuracy of the sound-field reconstruction was examined and analyzed against benchmarks and the results of numerical simulations. The reconstructed results show that spherical NAH is capable of locating sources and reconstructing the sound field within a certain accuracy.
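The RPLc figure of merit of 4pSA2 reduces to two small computations: find the separation at which ordinary coherence drops from unity to the chosen value, then divide by the inter-reference spacing. A sketch with an invented Gaussian coherence fall-off (length scale and spacing assumed for illustration):

```python
import math

def coherence_length(seps, gamma2, threshold=0.5):
    """First axial separation at which the ordinary coherence, starting
    at unity, falls to `threshold` (linear interpolation between samples)."""
    for i in range(1, len(seps)):
        g1, g2 = gamma2[i - 1], gamma2[i]
        if g1 >= threshold > g2:
            frac = (g1 - threshold) / (g1 - g2)
            return seps[i - 1] + frac * (seps[i] - seps[i - 1])
    return seps[-1]  # coherence never fell below the threshold

def references_per_coherence_length(ref_spacing, seps, gamma2, threshold=0.5):
    """RPLc: number of coherence lengths spanned per inter-reference spacing."""
    return coherence_length(seps, gamma2, threshold) / ref_spacing

# invented Gaussian coherence fall-off with a 0.4 m length scale
seps = [0.05 * i for i in range(40)]
g2 = [math.exp(-(s / 0.4) ** 2) for s in seps]
Lc = coherence_length(seps, g2)                        # ~0.33 m here
rplc = references_per_coherence_length(0.3, seps, g2)  # ~1.1 with 0.3 m spacing
```

By the abstract's guideline, an array whose RPLc is at least one reference per coherence length should span the partially coherent source.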
THURSDAY AFTERNOON, 25 OCTOBER 2012
TRUMAN A/B, 1:00 P.M. TO 5:00 P.M.
Session 4pSC
Speech Communication: Speech Perception II: Intelligibility, Learning, and Audio-Visual Perception
(Poster Session)
Michael S. Vitevitch, Chair
Psychology, University of Kansas, Lawrence, KS 66045
Contributed Papers
All posters will be on display from 1:00 p.m. to 5:00 p.m. To allow contributors an opportunity to see other posters, contributors of
odd-numbered papers will be at their posters from 1:00 p.m. to 3:00 p.m. and contributors of even-numbered papers will be at their
posters from 3:00 p.m. to 5:00 p.m.
4pSC1. Confusability of Bengali consonants and its relation to phonological dissimilarity. Sameer ud Dowla Khan (Linguistics, Reed College,
Portland, OR, [email protected])
Language-specific consonant similarity can be measured indirectly by
looking at the phoneme inventory, the lexicon (e.g., cooccurrence restrictions), or the phonology (e.g., processes that take the notion of similarity or
dissimilarity into account). A more direct approach involves the use of the
confusion matrix. For Bengali, thus far, consonant similarity has only been
measured indirectly, through the lexicon and phonology. Previous studies
(Khan 2006, 2012) claim that Bengali speakers judge the similarity of consonants in echo reduplication (similar to English doctor-schmoctor), where
the initial consonant of the base is systematically replaced with a phonologically dissimilar consonant in the reduplicant. This measurement of similarity assumes a set of features assigned language-specific weights; for
example, [voice] is weighted more heavily than [spread glottis], to explain
why speakers treat the pair [t, th] as more similar than the pair [t, d]. But
does the measurement of similarity inherent in the echo reduplicative construction correspond to the relative perceptibility of different consonant contrasts? The current study compares the relative confusability of Bengali
consonants produced in noise with the claims of phonological notions of
similarity associated with echo reduplication.
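A confusion matrix yields a direct similarity measure by row-normalizing the off-diagonal confusion rates and symmetrizing. The counts below are invented, arranged so that the [t, th] pair comes out more confusable than [t, d], as the feature weighting described above predicts:

```python
def confusability(counts, labels):
    """Symmetric pairwise confusability from a stimulus-by-response count
    matrix: mean of the two row-normalized off-diagonal confusion rates."""
    totals = [sum(row) for row in counts]
    p = {(labels[i], labels[j]): counts[i][j] / totals[i]
         for i in range(len(labels)) for j in range(len(labels))}
    return {(a, b): (p[(a, b)] + p[(b, a)]) / 2.0
            for i, a in enumerate(labels) for b in labels[i + 1:]}

# invented response counts: rows = consonant spoken, columns = consonant heard
labels = ["t", "th", "d"]
counts = [[80, 15, 5],
          [18, 78, 4],
          [6, 4, 90]]
sim = confusability(counts, labels)
# sim[("t", "th")] is larger than sim[("t", "d")] for these counts
```

Comparing such perceptual similarities against the weights implied by echo reduplication is exactly the kind of cross-check the abstract proposes.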
4pSC2. Effects of age, hearing loss, and phonological neighborhood density on children’s perceptual confusions. Mark Vandam (Boys Town
National Research Hospital, 555 North 30th Street, Omaha, NE 68131, [email protected]), Noah H. Silbert (Center for Advanced Study of Language, University of Maryland, College Park, MD), and Mary Pat Moeller
(Boys Town National Research Hospital, Omaha, NE)
Age, hearing loss, and phonological neighborhood density have been
shown to substantially affect the accuracy of productions in a word imitation
task [VanDam, et al., 161st ASA Meeting]. Older children (7 years of age)
are more accurate than younger children (4 years of age), normal hearing
children are more accurate than children with mild- to severe hearing loss,
and words from sparse phonological neighborhoods are produced more
accurately than are words from dense neighborhoods. In an ongoing series
of analyses, we extend these findings by analyzing how patterns of perceptual confusion vary as a function of age, hearing status (normal hearing versus hearing loss), and phonological neighborhood structure. Multilevel
cognitive models fit to confusion data provide detailed quantitative descriptions of perceptual space and response bias and enable analysis of between- and within-group variability. Results shed light on the organization of the
lexicon in young children with both normal hearing and hearing loss, and
add to our understanding of the relationship between speech production and
speech perception in children.
4pSC3. Phonological neighborhood clustering coefficient influences
word learning. Rutherford Goldstein and Michael S. Vitevitch (Psychology, University of Kansas, 1415 Jayhawk Blvd., Lawrence, KS 66045,
[email protected])
Network science is one approach used to analyze complex systems, and
has been applied to a complex cognitive system, namely the phonological
lexicon (Vitevitch, 2008). One of the measures provided by network science, termed the clustering coefficient or C, influences lexical processes
such as speech production (Chan & Vitevitch, 2010) and speech perception
(Chan & Vitevitch, 2009). The current study presents evidence of C influencing the process of learning new words. Participants were trained and
tested on nonword-nonobject pairs over three lab sessions at one-week intervals. Testing occurred immediately after training and after a one-week interval. Participants were tested on a picture naming task, a two-alternative forced-choice task, and a lexical decision task. Results show an advantage
for learning new words with a high clustering coefficient. A spreading activation account is used to explain the findings.
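The clustering coefficient C invoked above is the density of links among a word's phonological neighbors (words one segment away). A minimal sketch over a toy lexicon in rough phonemic spelling (invented, not the study's materials):

```python
from itertools import combinations

def one_step(a, b):
    """True if two words differ by one segment substitution, insertion, or deletion."""
    if a == b:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) != 1:
        return False
    short, long_ = sorted((a, b), key=len)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def clustering_coefficient(word, lexicon):
    """Local clustering coefficient C: the proportion of pairs of a word's
    phonological neighbors that are themselves neighbors of each other."""
    nbrs = [w for w in lexicon if one_step(word, w)]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(one_step(u, v) for u, v in combinations(nbrs, 2))
    return 2.0 * links / (k * (k - 1))

# toy lexicon: "kat" has four neighbors, and 3 of their 6 pairs are linked
lexicon = ["kat", "bat", "hat", "kot", "mat"]
C = clustering_coefficient("kat", lexicon)
```

A high-C word like this one sits in a tightly interconnected neighborhood, the condition the abstract reports as advantageous for word learning.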
4pSC4. Phonological neighborhood density and vowel production in
children and adults. Benjamin Munson (Speech-Language-Hearing Sciences, University of Minnesota, 115 Shevlin Hall, 164 Pillsbury Drive SE,
Minneapolis, MN 55455, [email protected]), Mary E. Beckman (Linguistics, Ohio State University, Columbus, OH), and Jan Edwards
(Communicative Disorders, University of Wisconsin, Madison, WI)
Previous studies have shown that vowels in words with high phonological neighborhood densities (ND) are produced closer to the periphery of the
vowel space than are vowels in low-ND words (Munson & Solomon, 2004;
Scarborough, 2010; Wright, 2003). Different explanations for this phenomenon have been proposed. One hypothesis is that they reflect a speaker’s
attempt to maintain acoustic distinctiveness among similar-sounding words.
If this were true, then we might expect that the effect of ND on vowel production would be smaller in children than in adults, given that children have
overall smaller-sized lexicons than adults. To evaluate this, we examined
the effect of ND on vowel production in children and adults. The productions were taken from the paidologos corpus (Edwards & Beckman, 2008).
Preliminary analyses of the productions of 8 high-ND and 8 low-ND words
by 10 2-year-olds, 10 5-year-olds and 20 adults have been completed. There
are no effects of ND on vowel-space size or vowel duration for either group of children. There were strong, statistically significant effects for the group of adults. These were in the direction opposite to that predicted: low-ND words were produced with more-expanded vowel spaces than high-ND words. Analysis of a larger group of participants is ongoing. [Support: NIDCD 02932.]
4pSC5. Optimized speech sound category training bootstraps foreign word learning. Han-Gyol Yi, Bharath Chandrasekaran (Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX 78712, [email protected]), and W. Todd Maddox (Psychology, The University of Texas at Austin, Austin, TX)

The COmpetition between Verbal and Implicit Systems (COVIS) model posits multiple cognitive learning systems that are functionally distinct and compete with each other throughout learning. COVIS indicates two cognitive systems: a hypothesis-testing system mediated predominantly by the frontal cortex and a procedural-based system mediated by the striatum. Initial learning is dominated by the hypothesis-testing system, but with increased practice, control is passed to the procedural system. Importantly, each learning system can be optimized differentially to maximize performance. In this study, the COVIS model was applied to optimize Mandarin tone learning by adult native English speakers. The optimized category training (OCT) was designed to boost the hypothesis-testing system initially, then the procedural-based system subsequently. We then examined the extent to which OCT enhanced performance in a word learning task which required the participants to use their attained categories for Mandarin tones to disambiguate words. OCT was found to significantly enhance word learning relative to a control condition in which the learning systems were not optimized. These results demonstrate that a multiple systems approach can be used to develop optimized training protocols to maximally boost category learning.

4pSC6. The young and the meaningless: Novel-word learning without meaning or sleep. Efthymia C. Kapnoula, Stephanie Packard, Keith S. Apfelbaum, Bob McMurray, and Prahlad Gupta (Psychology, The University of Iowa, Iowa City, IA 52242-1409, [email protected])

Existing work suggests that sleep-based consolidation (Gaskell & Dumay, 2003) is required for newly learned words to interact with other words and phonology. Some studies report that meaning may also be needed (Leach and Samuel, 2007), making it unclear whether meaningful representations are required for such interactions. We addressed these issues by examining lexical competition between novel and known words during online word recognition. After a brief training on novel word-forms (without referents), we evaluated whether the newly learned items could compete with known words. During testing, participants heard word stimuli that were made by cross-splicing novel with known word-forms (NEP+NET=NEpT) and the activation of the target-word was quantified using the visual world paradigm. Results showed that the freshly learned word-forms engaged in competition with known words with only 15 minutes of training. These results are important for two reasons: First, lexical integration is initiated very early in learning and does not require associations with semantic representations or sleep-based consolidation. Second, given studies showing that lexical competition plays a critical role in resolving acoustic ambiguity (McMurray, Tanenhaus & Aslin, 2008; McMurray et al., 2009), our results imply that this competition does not have to be between semantically integrated lexical units.

4pSC7. Recognition memory in noise for speech of varying intelligibility. Rachael C. Gilbert (Linguistics, The University of Texas at Austin, Austin, TX 78751, [email protected]), Bharath Chandrasekaran (Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX), and Rajka Smiljanic (Linguistics, The University of Texas at Austin, Austin, TX)

Speech intelligibility is greatly affected by the presence of noise. However, there has been little research investigating the effects of noise on recognition memory. Per the effortfulness hypothesis (McCoy et al., 2005), we expect that processing speech in challenging listening environments requires additional processing resources that might otherwise be available for encoding speech in memory. This resource reallocation may be offset by speaker adaptations to the environment and to the listener. Here we compare recognition memory for conversational and clear speech sentences recorded in quiet (QS) and for sentences produced in response to the actual environment noise, i.e., noise-adapted speech (NAS). Listeners heard 40 unique conversational and clear QS or NAS sentences mixed with 6-talker babble at SNRs of 0 or +3 dB. Following the exposure, listeners identified 80 sentences in quiet as old or new. Results showed that 1) increased intelligibility through conversational-to-clear speech modifications leads to improved recognition memory and 2) NAS presents a more naturalistic speech adaptation than QS, leading to better sentence recall for listeners. This experiment suggests that acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.

4pSC8. The intelligibility of clear and conversational allophones of coda consonants. Matthew J. Makashay, Nancy P. Solomon, and Van Summers (Audiology and Speech Center, Walter Reed National Military Medical Center, 8901 Wisconsin Ave., Bethesda, MD 20889-5600, [email protected])

For many hearing-impaired (HI) listeners, hearing-aid amplification provides near-normal speech recognition in quiet. Nonetheless, many of these same listeners show large speech deficits, relative to normal-hearing (NH) listeners, that are not effectively addressed via amplification in noisy listening conditions. One compensating strategy HI listeners use is to ask talkers to speak clearly. However, as one of the features of clear speech is a shift to higher frequencies, HI listeners may not benefit as much as NH listeners if the new frequencies are outside their audible range. This study examined the intelligibility of conversationally- and clearly-spoken coda consonants in nonsense syllables. These free-variant allophones of 21 American English consonants were produced in three phonological environments: syllable (utterance) final; syllable final followed by schwa; and syllable final followed by palatal approximant and schwa. The stimuli were presented in broadband noise and in quiet to NH and HI listeners. Consonant confusions were investigated to determine whether NH and HI listeners receive similar clear-speech advantages. [The views expressed in this abstract are those of the authors and do not necessarily reflect the official policy or position of the Departments of the Navy, Army, or Air Force, the Department of Defense, or the US Government.]

4pSC9. Variability in speech understanding in noise by listeners with hearing loss. Peggy B. Nelson, Yingjiu Nie, Adam Svec, Tess Koerner, Bhagyashree Katare, and Melanie Gregan (University of Minnesota, 164 Pillsbury Dr SE, Minneapolis, MN 55455, [email protected])

Listeners with sensorineural hearing loss (SNHL) report significant difficulties when listening to speech in the presence of background noise and are highly variable in their tolerance to such noise. In our studies of speech perception, audibility predicts understanding of speech in quiet for most young listeners with SNHL. In background noise, however, the speech recognition performance of some young listeners with SNHL deviates significantly from audibility predictions. We hypothesize that vulnerability to background noise may be related to listeners’ broader auditory filters, to a loss of discrimination ability for rapid spectral changes, or to a disruption of the speech temporal envelopes by the addition of noise. Measures of spectral resolution, spectral change detection, and envelope confusion will be presented for listeners with SNHL. Relationships between those estimates and speech recognition in noise will be described. Results may suggest a range of custom strategies for improving tolerance for background noise. [Work supported by NIDCD R018306 to the first author.]

4pSC10. Relationships among word familiarity, volume unit level, root-mean-square power, and word difficulty. Edward L. Goshorn (Speech and Hearing Sciences, Psychoacoustics Research Laboratory, University of Southern Mississippi, 118 College Dr. #5092, Hattiesburg, MS 39401, [email protected])

Due to time constraints, 10-25 item NU-6 word lists that are rank-ordered by word difficulty (W-D) are often used for audiological speech intelligibility testing rather than full 50-item lists (Hurley and Sell, 2003). The factors contributing to W-D are not well delineated. Although word familiarity is well established as an important contributor to W-D (Savin, 1963), the contributions of acoustical factors such as peak VU level (VU) and root-mean-square (RMS) power are less known. A better understanding of relationships among factors associated with W-D may prove useful in compiling word lists. This study investigated the relationships among word familiarity, VU, RMS power, and W-D for four 50-item NU-6 word lists. VU and RMS measures for each word were obtained with SoundForge. The standard frequency index (SFI) provided a measure of word familiarity (Carroll et al., 1971). An unexpected significant (p<.05) positive correlation was found between W-D and VU level for one list, and weak positive correlations for three lists. Positive correlations are in the direction opposite to that hypothesized for this variable. No significant correlations were found between W-D and RMS power. Significant negative correlations were found between W-D and SFI for three NU-6 lists. Expected and unexpected findings will be addressed.
4pSC11. Modeling talker intelligibility variation in a dialect-controlled
corpus. Daniel McCloy, Richard Wright, and August McGrath (Linguistics,
University of Washington, Box 354340, Seattle, WA 98115-4340,
[email protected])
In a newly created corpus of 3600 read sentences (20 talkers x 180 sentences), considerable variability in talker intelligibility has been found. This
variability occurs despite rigorous attempts to ensure uniformity, including
strict dialectal criteria in subject selection, speech style guidance with feedback during recording, and head-mounted microphones to ensure consistent
signal-to-noise ratio. Nonetheless, we observe dramatic differences in talker
intelligibility when the sentences are presented to dialect-matched listeners
in noise. We fit a series of linear mixed-effects models using several
acoustic characteristics as fixed-effect predictors, with random effects terms
controlling for both talker & listener variability. Results indicate that
between-talker variability is captured by speech rate, vowel space expansion, and phonemic crowding. These three dimensions account for virtually
all of the talker-related variance, obviating the need for a random effect for
talker in the model. Vowel space expansion is found to be best captured by
polygonal area (contra Bradlow et al. 1996), and phonemic overlap is best captured by repulsive force (cf. Liljencrants & Lindblom 1972, Wright
2004). Results are discussed in relation to prior studies of intelligibility.
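The two vowel-space measures named above can each be computed in a few lines: polygonal area via the shoelace formula, and a Liljencrants & Lindblom-style repulsive force as the sum of inverse squared distances over vowel pairs. The formant values below are invented corner-vowel means, not the corpus data:

```python
def polygon_area(points):
    """Shoelace area of a polygon whose vertices are listed in order."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1] -
            points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

def repulsive_force(points):
    """Liljencrants & Lindblom-style crowding measure: the sum of inverse
    squared distances over all vowel pairs (larger = more crowded)."""
    return sum(1.0 / ((ax - bx) ** 2 + (ay - by) ** 2)
               for i, (ax, ay) in enumerate(points)
               for bx, by in points[i + 1:])

# invented mean (F1, F2) values in Hz, in hull order: /i/, /ae/, /a/, /u/
vowels = [(300.0, 2300.0), (650.0, 1700.0), (700.0, 1200.0), (300.0, 800.0)]
area = polygon_area(vowels)        # vowel-space expansion, in Hz^2
crowding = repulsive_force(vowels)  # phonemic-crowding proxy
```

Per-talker values of these two quantities (plus speech rate) are the fixed-effect predictors the abstract reports as capturing the talker-related variance.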
4pSC12. The effects of prior access to talker information on vowel identification in single- and mixed-talker contexts. John R. Morton (Psychology, Washington University, Campus Box 1125, 1 Brookings Drive, Saint
Louis, MO 63130, [email protected]), Steven M. Lulich (Speech and
Hearing Sciences, Indiana University, Saint Louis, MO), and Mitchell
Sommers (Psychology, Washington University, Campus Box 1125, 1
Brookings Drive, Saint Louis, MO 63130)
Speech intelligibility is significantly impaired when words are spoken by multiple talkers compared with a single talker. In the present study, we examined
whether providing listeners with information about the vocal tract characteristics of the upcoming speaker would reduce the difference between single- and
mixed-talker conditions. All participants were initially trained to identify 6
talkers (3 male and 3 female) from isolated vowels. Participants then completed closed-set vowel identification tests, including a blocked and a mixed
condition, and one of two mixed-talker precursor conditions. In one precursor
condition, participants saw one of the six talkers’ names immediately prior to
hearing the target vowel. In the other precursor condition, participants heard a
non-target vowel (/i/) spoken immediately before the target stimulus. For both
precursor conditions, the name (or vowel) precursor matched the target-vowel
speaker on half of the trials. For the other half, the precursor was of the same
gender as the target-vowel speaker. Only when a sample vowel precursor was
spoken by the same talker as the subsequent target was there a significant
improvement in scores relative to the mixed-talker condition. The results suggest that exposure to isolated vowels can provide enough information about
that talker’s vocal tract to improve perceptual normalization.
4pSC13. The relationship between first language and second language
intelligibility in Mandarin-English bilinguals. Jenna S. Luque, Michael
Blasingame, L. A. Burchfield, Julie Matsubara, and Ann R. Bradlow (Linguistics, Northwestern University, 2016 Sheridan Rd., Evanston, IL 60208,
[email protected])
Previous research has shown that L1 speech intelligibility, as judged by
native listeners, varies due to speaker-specific characteristics. Similarly, L2
speech intelligibility as judged by native listeners also varies across speakers. Given variability in L1 and L2 intelligibility, we hypothesize that,
within bilinguals, some speaker-specific characteristics that contribute to
variability in L1 intelligibility (e.g., long-term average spectrum, speech
rate, and articulatory precision) are language-independent and therefore also
contribute to variability in L2 intelligibility. This leads to the expectation
that within a group of bilingual speakers, relative L1 intelligibility is a significant predictor of relative L2 intelligibility. In the current study, 14
Mandarin-English bilinguals produced 112 short meaningful sentences in
their L1 (Mandarin) and L2 (English). Independent groups of Mandarin and
English listeners then repeated back the sentences (native-accented Mandarin productions for Mandarin listeners at -4 dB SNR, Mandarin-accented
English productions for English listeners at 0 dB SNR). Intelligibility was
calculated as proportion of words correctly repeated. L1 intelligibility was a
significant predictor of L2 intelligibility (b=.58, S.E.b=.21, p<.05) within
these Mandarin-English bilinguals, supporting the hypothesis that languageindependent speaker-specific characteristics contribute to both L1 and L2
intelligibility in bilingual speakers.
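The reported regression of L2 on L1 intelligibility is an ordinary least-squares fit of per-talker scores; a sketch with invented proportions (not the study's data, which yielded b=.58):

```python
def ols_slope(x, y):
    """Ordinary least-squares slope and intercept of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

# invented per-talker intelligibility (proportion of words correctly repeated)
l1 = [0.55, 0.60, 0.70, 0.75, 0.80, 0.85]  # native-language sentences
l2 = [0.40, 0.48, 0.50, 0.58, 0.60, 0.66]  # second-language sentences
slope, intercept = ols_slope(l1, l2)
```

A positive slope is what the hypothesis predicts: talkers who are relatively intelligible in their L1 are also relatively intelligible in their L2.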
4pSC14. The role of first-language production accuracy and talker-listener alignment in second-language speech intelligibility. Kyounghee
Lee and Ann R. Bradlow (Department of Linguistics, Northwestern University, 2016 Sheridan Road, Evanston, IL 60208-4090, [email protected])
This study investigated variability in L2 speech intelligibility as a function of L1 speech intelligibility and of talker-listener L1 match. Non-native
Korean talkers varying in their L2 proficiency were recorded reading simple
English (L2) and Korean (L1) sentences. The intelligibility of these sentences was then assessed by Korean listeners (both Korean and English productions) and English listeners (English productions only) in a sentence
recognition task. The results revealed that for these Korean-English bilingual talkers, L1 intelligibility was significantly correlated with L2 intelligibility for both Korean and English listeners, suggesting that variability in L1
speech intelligibility can serve as a predictor of variability in L2 production
accuracy. We also examined the interlanguage speech intelligibility benefit
for non-native listeners (ISIB-L) (e.g., Bent & Bradlow, 2003; Hayes-Harb
et al., 2008). Korean listeners performed better at identifying the sentences
produced by Korean talkers with relatively low L2 intelligibility, implying
that the benefit of a shared native language between talkers and listeners
may be larger when non-native listeners process speech from a talker with
low L2 intelligibility. Overall, these findings indicate that variability in L2
speech intelligibility is related to language-general talker characteristics as
well as to the talker-listener language alignment.
4pSC15. The effect of age on phonetic imitation in children. Kuniko
Nielsen (Linguistics, Oakland University, 320 O’Dowd Hall, Rochester, MI
48309-4401, [email protected])
This study aims to examine the effect of age on phonetic imitation in
children. Previous studies have shown that adult speakers implicitly imitate
the phonetic properties of recently heard speech (e.g. Goldinger, 1998).
Recently, Nielsen (2011) reported that third-graders show similar patterns
of phonetic imitation, including word-specific patterns of imitation. The current study extends these findings and investigates the effect of age on phonetic imitation, by comparing the pattern of imitation between third-graders
and preschoolers. According to Piaget (1962), development of imitation is a
manifestation of the increasing distinctiveness between assimilation and
accommodation in early childhood, predicting greater imitation for older
children. At the same time, findings on motor imitation by newborns (e.g.,
Meltzoff & Moore, 1997) suggest that the intermodal mapping necessary for
imitation is at least partly innate. The experiment employed a picture-naming task: participants’ speech production was compared before and after
they were exposed to model speech with extended VOT on the target phoneme /p/. A preliminary data analysis revealed a greater degree of imitation
for older children, while both groups showed significant imitation. These
findings are in agreement with the effects of age and developmental level on
motor imitation observed in Fouts and Liikanen (1975).
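Imitation in a study of this design is typically quantified as the post-exposure minus pre-exposure shift in voice onset time (VOT). A sketch with invented VOT values (not the study's data), showing the reported pattern of a larger shift for the older group:

```python
import numpy as np

# Hypothetical VOT (ms) for /p/-initial words before and after exposure to a
# model talker with extended VOT; all numbers invented for illustration.
pre_vot_old  = np.array([62, 58, 65, 60, 57, 63])   # third-graders, baseline
post_vot_old = np.array([74, 69, 78, 71, 66, 75])   # third-graders, post-exposure

pre_vot_young  = np.array([55, 60, 52, 58, 57, 54])  # preschoolers, baseline
post_vot_young = np.array([61, 66, 57, 64, 62, 59])  # preschoolers, post-exposure

# Imitation measured as the mean post-minus-pre VOT shift per group.
shift_old = float(np.mean(post_vot_old - pre_vot_old))
shift_young = float(np.mean(post_vot_young - pre_vot_young))
```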
4pSC16. Speech recognition informed by distinctive feature theory: The featurally underspecified lexicon model and its implications. Philip J. Roberts (Faculty of Linguistics, University of Oxford, Clarendon Institute, Walton Street, Oxford OX1 2HG, United Kingdom, [email protected]) and Henning Reetz (Institut für Phonetik, Universität Frankfurt, Frankfurt, Hesse, Germany)

We present a speech recognition engine that implements the Featurally Underspecified Lexicon (FUL) calculus. The FUL model defines an inventory of privative phonological features that is necessary and sufficient to describe contrasts between phonemes in any language in the world. The model also defines conditions for comparing feature bundles recovered from the signal with segments defined in the lexicon: a feature may MATCH (the feature is present in both the signal and the lexicon), it may MISMATCH (the feature in the signal is impossible in tokens of the segment in the lexicon, e.g., when a stop burst, which indicates the [PLOSIVE] feature, is compared with a segment carrying the [CONTINUANT] feature in the lexicon), or it may provoke a NOMISMATCH response when the feature in the signal is not part of the segment in the lexicon. This matching mechanism also accounts for asymmetries observed in natural speech: [CORONAL] assimilates to [LABIAL], but not the other way around. The engine computes distances to neighboring words according to a coherence measure to simulate co-activation in the lexicon. We will demonstrate the operation of this engine online in English and German.

J. Acoust. Soc. Am., Vol. 132, No. 3, Pt. 2, September 2012
164th Meeting: Acoustical Society of America
Downloaded 18 Oct 2012 to 192.87.79.51. Redistribution subject to ASA license or copyright; see http://asadl.org/terms
2078

4pSC17. Evaluating automatic speech-to-speech interpreting. Jared Bernstein (Linguistics, Stanford University, Palo Alto, CA 94301, [email protected]) and Elizabeth Rosenfeld (Tasso Partners, Palo Alto, CA)

A naïve speech-to-speech interpreter can be implemented as three component processes in series: speech recognition, machine translation, and speech synthesis. However, evaluating the performance of a speech-to-speech interpreting system may be either simpler or more complicated than merely calculating the product of the accuracies of those three component processes. This is because human users are sensitive to the rate at which an interpreter operates, and because a system's communication success rate is properly measured by a listener's correct understanding of the speaker's intention in a particular context rather than by the system's word-for-word accuracy and intelligibility. We are evaluating two currently available systems, each operating in both directions for two language pairs: English/Spanish and English/Chinese. For each system, each language pair, and each direction, we compare word-for-word spoken interpretation accuracy with the successful communication of speaker intent-in-context. Results for the Spanish/English pair suggest that word-for-word accuracy is high (about 75-90% correct) for both systems, and that a lower-information-rate measure like communication of intent in context reduces the error rate substantially. Finally, suggestions for improved system design are presented.

4pSC18. Parametric forms of spectral amplitude nonlinearities for use in automatic speech recognition. Stephen Zahorian (Electrical and Computer Engineering, State University of New York at Binghamton, PO Box 6000, Binghamton, NY 13902, [email protected])

This work is a continuation and extension of work presented at the fall 2011 meeting of the Acoustical Society of America (Wong and Zahorian). In that work, and also in work done at Carnegie Mellon University, auditory-model-derived spectral amplitude nonlinearities, with symmetric additional compression (after log amplitude scaling), were found to improve automatic speech recognition performance when training and test data are mismatched with respect to noise. In this new work, a parametric nonlinearity, controlled by three parameters, was formed to allow non-symmetric compression with respect to high and low amplitudes. Several variations of this basic nonlinearity were evaluated for both matched and mismatched training and test conditions with respect to noise. For the mismatched cases, the most effective nonlinearity was found to be compressive for low amplitudes but expansive at high amplitudes (thus emphasizing spectral peaks). However, for matched conditions, none of the spectral amplitude nonlinearities improved automatic recognition accuracy. The additional nonlinearity can be combined with log compression to create a single unified amplitude nonlinearity for speech processing.

4pSC19. Acoustical features in Mandarin emotional speech by native speakers of English. Hua-Li Jian (Fac. Tech. Des. and Art, Oslo and Akershus University College of Applied Sciences, Postbox 4, St. Olavs Plass, Oslo NO-0130, Norway, [email protected]) and Jessica Hung (Foreign Languages and Literature, National Cheng Kung University, Tainan, Tainan City, Taiwan)

This study examines (1) whether native speakers of English (NS-E) can express emotions successfully in Mandarin speech, and (2) how their emotional expressions differ from those of native speakers of Mandarin (NS-C) when the emotional portrayals are recognizable. The acoustic features analyzed included F0, duration, and intensity. A scenario approach was adopted to elicit the emotions joy, anger, sadness, and fear, with neutral as a control. The data gathered (sixteen NS-E and NS-C) were rated. F0 range at the sentential level, mean F0 of each syllable, sentential and syllabic duration, and the intensity signal of each segment were contrasted across groups within each emotional expression. The findings indicated that emotions by NS-C were recognized well, but joy, anger, and fear by NS-E had low recognition rates because of accents, vocal cues, and culture-specific components. Both groups adopted a similar F0 range at the sentence level, but joy in both groups and fear by NS-C showed a small range. Only NS-C showed a fast speech rate in anger. Emotions with high activation by NS-E were shorter. Anger and joy showed high intensity, while sadness and fear showed low intensity in both groups. NS-C tended to use different intensity ranges to indicate different emotions, while NS-E used a similar range for all emotions.

4pSC20. Cross-linguistic emotion recognition: Dutch, Korean, and American English. Jiyoun Choi, Mirjam Broersma (Max Planck Institute for Psycholinguistics, PO Box 310, Nijmegen 6500 AH, Netherlands, [email protected]), and Martijn Goudbeek (Tilburg University, Tilburg, n/a, Netherlands)

This study investigates the occurrence of asymmetries in cross-linguistic recognition of emotion in speech. Theories of emotion recognition do not address asymmetries in the cross-linguistic recognition of emotion. To study perceptual asymmetries, a fully crossed design was used, with speakers and listeners from two typologically unrelated languages, Dutch and Korean. Additionally, listeners of American English, typologically close to Dutch but not to Korean, were tested. Eight emotions, balanced in valence (positive-negative), arousal (active-passive), and basic vs. non-basic emotions (properties that are known to affect emotion recognition), were recorded by eight Dutch and eight Korean professional actors, in a nonsense phrase that was phonologically legal in both languages (and in English). Stimuli were selected on the basis of prior validation studies with Dutch and Korean listeners. Twenty-eight Dutch, 24 Korean, and 26 American participants were presented with all 256 Dutch and Korean stimuli, blocked by language. Participants indicated for each stimulus which emotion it expressed by clicking on one of the eight emotions or "neutral". Results showed strong asymmetries across languages and listener groups that cannot be explained along previously described dimensions (valence, arousal, basic/non-basic). The present results call for the extension of theories of cross-linguistic emotion recognition to incorporate asymmetrical perception patterns.

4pSC21. Cues for the perception of expressive speech. Daniel J. Hubbard and Peter F. Assmann (School of Behavioral and Brain Sciences, GR4.1, University of Texas at Dallas, P.O. Box 830688, Richardson, TX 75083, [email protected])

In a previous study, the contribution of fundamental frequency (F0) to the perception of expressive speech was examined using a selective adaptation technique. Listeners heard either F0-present or F0-removed adaptors (vocoder-processed VCV syllables) from one of two expressive categories: angry or happy. In a two-alternative listening task, contrastive aftereffects (characterized by a tendency to label test stimuli as originating from the non-adapted category) were documented only in the F0-present condition. This suggests that F0 is important for the perception of emotional expressions. However, listeners were still able to identify F0-removed syllables as angry or happy at a rate significantly better than chance (58%). Subsequent analyses revealed systematic differences in formant frequencies as potential cues for categorization (higher F1 and F2 frequencies for happy compared to angry tokens). A discriminant analysis was performed using formant measurements (F1-F3) and measures related to the voicing source. Removing the voice source measures (including mean F0) produced a decrease in classification accuracy that closely matched listener response patterns for the F0-removed stimuli. The results suggest that in the absence of F0, formant frequencies may be used for the perception of angry and happy speech.

4pSC22. Perception of speaker sex and vowel recognition at very short durations. David R. Smith (Psychology, University of Hull, Cottingham Road, Hull HU6 7RX, United Kingdom, [email protected])

A man and a woman saying the same vowel do so with very different voices. The auditory system solves the problem of extracting what the man or woman said despite substantial differences in the acoustic properties of the carrier voice. Much of this acoustic variation is due to differences in the underlying anatomical mechanisms for producing speech. If the auditory system knew the sex of the speaker, then it could correct for speaker-sex-related acoustic differences, thus facilitating vowel recognition. We measured the minimal stimulus duration necessary to accurately discriminate whether a brief vowel segment was spoken by a man or a woman, and to accurately recognize what vowel was spoken. Results show that reliable vowel recognition precedes reliable speaker-sex discrimination. Furthermore, the pattern of performance across experiments in which voice pitch and resonance information were systematically varied is markedly different depending on whether the task is speaker-sex discrimination or vowel recognition. These findings suggest that knowledge of speaker sex has little impact upon vowel recognition at very short stimulus durations.
4pSC23. Partial effects of perceptual compensation need not be auditorily driven. Gregory Finley (Linguistics, University of California, Berkeley,
CA 94720, [email protected])
An experiment was devised to test whether compensation for coarticulation could be motivated by nonspeech for which gestural recovery is impossible. Subjects were presented with CV stimuli formed by concatenating an /s ~ ʃ/ continuum fricative with an /i/ or /o/ from three types of synthesized vocalic nuclei: full spectral vowels (Set A), vowels with F2 but no other formants (B), and pure sine tones at F2 (C). F2 was chosen as the common parameter between sets because extremely high or low F2 is enough information for English speakers to judge vowel roundedness (and rounding lowers centroid frequency, a key difference between /s/ and /ʃ/). Comparing fricative identification boundaries between the vowels within each set, compensation occurred reliably in Sets A (t = 4.6, p < 0.01) and B (t = 3.7, p < 0.01) but not C. Comparing conditions against each other, Set A showed a significantly larger vowel-triggered boundary shift than B (F = 8.18, p < 0.05). The results support the conclusion that relevant acoustic cues might not trigger compensation if they cannot be associated with speech; additionally, sounds reminiscent of speech do not generate as strong an effect as sounds firmly identifiable as speech.
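The boundary comparisons in this and the following abstracts rest on estimating, per vowel context, the continuum step at which identification crosses 50%. A minimal sketch with invented response proportions (the function and the step values are illustrative, not the paper's):

```python
import numpy as np

def boundary_location(steps, p_s):
    """Estimate the fricative category boundary as the continuum step where
    the proportion of "s" responses crosses 0.5 (linear interpolation)."""
    p_s = np.asarray(p_s, dtype=float)
    for i in range(len(p_s) - 1):
        lo, hi = p_s[i], p_s[i + 1]
        if (lo - 0.5) * (hi - 0.5) <= 0:          # this interval brackets 0.5
            frac = (0.5 - lo) / (hi - lo)
            return steps[i] + frac * (steps[i + 1] - steps[i])
    raise ValueError("response curve never crosses 0.5")

steps = np.arange(1, 8)                            # 7-step continuum (illustrative)
p_s_round   = [0.98, 0.95, 0.85, 0.62, 0.38, 0.12, 0.03]  # e.g., round-vowel context
p_s_unround = [0.97, 0.90, 0.70, 0.45, 0.20, 0.08, 0.02]  # e.g., unround context

# Compensation for coarticulation appears as a boundary shift between contexts.
shift = boundary_location(steps, p_s_round) - boundary_location(steps, p_s_unround)
```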
4pSC24. Language-specific compensation for coarticulation. Keith Johnson, Shinae Kang, Greg Finley, Carson Miller Rigoli (Linguistics, UC Berkeley, 1203 Dwinelle Hall, Berkeley, CA 94720, [email protected]), and Elsa Spinelli (Psychology and Neurocognition, Université Pierre Mendès-France, Grenoble, Cedex 9, France)

This paper reports an experiment testing whether compensation for coarticulation in speech perception is mediated by linguistic experience. The stimuli are a set of fricative-vowel syllables on continua from [s] to [ʃ] with the vowels [a], [u], and [y]. Responses from native speakers of English and French (20 in each group) were compared. Native speakers of French are familiar with the production of the rounded vowel [y], while this vowel was unfamiliar to the native English speakers. Both groups showed compensation for coarticulation (both t > 5, p < 0.01) with the vowel [u] (more "s" responses, indicating that in the context of a round vowel, fricatives with a lower spectral center of gravity were labeled "s"). The French group also showed a compensation effect in the [y] environment (t[20] = 3.48, p < 0.01). English listeners also showed a tendency for more subject-to-subject variation in the [y] boundary locations than did the French listeners (Levene's test of equality of variance, p < 0.1). The results thus indicate that compensation for coarticulation is a language-specific effect, tied to the listener's experience with the conditioning phonetic environment.
4pSC25. Audio/visual compensation for coarticulation. Shinae Kang, Greg Finley, Keith Johnson, and Carson Miller Rigoli (Linguistics, UC Berkeley, Berkeley, CA 94720-2650, [email protected])

This study investigates how visual phonetic information affects compensation for coarticulation in speech perception. A series of CV syllables with a fricative continuum from [s] to [ʃ] before [a], [u], and [y] was overlaid with a video of a face saying [s]V, [ʃ]V, or a visual blend of the two fricatives. We made separate movies for each vowel environment. We collected [s]/[ʃ] boundary locations from 24 native English speakers. In a test of audio-visual integration, [ʃ] videos showed significantly lower boundary locations (more [ʃ] responses) than [s] videos (t[23] = 2.9, p < 0.01) in the [a] vowel environment. Regardless of visual fricative condition, the participants showed a compensation effect with [u] (t[23] > 3, p < 0.01), but not with the unfamiliar vowel [y]. This pattern of results was similar to our findings from an audio-only version of the experiment, implying that the compensation effect was not strengthened by seeing the lip rounding of [y].
4pSC26. The influence of visual information on the perception of Japanese-accented speech. Saya Kawase, Beverly Hannah, and Yue Wang
(Department of Linguistics, Simon Fraser University, Burnaby, BC V5A
1S5, Canada, [email protected])
This study examines how visual information in nonnative speech affects native listener judgments of second language (L2) speech production. Native Canadian English listeners perceived three English phonemic contrasts (/b-v, h-s, l-ɹ/) produced by native Japanese speakers, as well as by native Canadian English speakers as controls. Among the stimuli, /v, h, l, ɹ/ do not exist in the Japanese consonant inventory. These stimuli were presented under audio-visual (AV), audio-only (AO), and visual-only (VO) conditions. The results showed that while the nonnative phonemes (/v, h, l, ɹ/) were judged significantly less intelligible overall than the native phonemes (/b, s/), the English listeners perceived the Japanese productions of the phonemes /v, h, b, s/ as significantly more intelligible when presented in the AV condition compared to the AO condition. However, the Japanese production of /ɹ/ was perceived as less intelligible in the AV compared to the AO condition. Further analysis revealed that a significant number of Japanese productions of /ɹ/ lacked lip-rounding, indicating that nonnative speakers' incorrect articulatory configurations may decrease intelligibility. These results suggest that visual cues in L2 speech production may be either facilitative or inhibitory in native perception of L2 accented speech. [Research supported by SFU and SSHRC.]
4pSC27. Effects of visual cue enhancement on speech intelligibility for clear and conversational speech in noise. Jasmine Beitz, Kristin Van Engen (Communication Sciences and Disorders, University of Texas at Austin, 1 University Station, Austin, TX 78712, [email protected]), Rajka Smiljanic (Linguistics, University of Texas at Austin, Austin, TX), and Bharath Chandrasekaran (Communication Sciences and Disorders, University of Texas at Austin, 1 University Station A1100, College of Communication, Austin, TX 78712)

Visual presentation of a speaker enhances the auditory perception of speech information in noisy conditions (e.g., Helfer and Freyman, 2005; Helfer, 1997). Intelligibility is also improved when a speaker adopts a clear speaking style (Smiljanic and Bradlow, 2009). The present study investigates the contributions of these two intelligibility-enhancing factors in the presence of several types of noise, which vary with respect to the degree of informational and energetic masking they impose on target speech. Specifically, it measures sentence intelligibility in the presence of one competing talker, 2-talker babble, 4-talker babble, and speech-shaped noise. Meaningful sentences (in clear and conversational styles) were presented to participants in each modality (audio-only; audio-visual) and in all noise conditions. Participants reported all intelligible words. Our data show overall better intelligibility for clear speech and for AV speech relative to conversational speech and the audio-only condition. However, the visual benefit associated with conversational speech is significantly greater than the visual benefit associated with the clear speaking style. The relative contributions of visual influences and speech clarity to intelligibility enhancement will be discussed.
4pSC28. Sumby and Pollack revisited: The influence of live presentation on audiovisual speech perception. Justin M. Deonarine, Emily J. Dawber, and Kevin G. Munhall (Psychology, Queen's University, Humphrey Hall, Queen's University, Kingston, ON K7L 3N6, Canada, [email protected])

In their classic paper, Sumby and Pollack (1954) demonstrated that the sight of a talker's face enhanced the identification of auditory speech in noise. Recently, there has been interest in the influence of some of their methodological choices on audiovisual speech perception. Here, we examine the effects of presenting the audiovisual stimuli live, as Sumby and Pollack did. Live presentation yields 3D visual stimuli, higher-resolution images, and social conditions not present in modern replications with recorded stimuli and display monitors. Subjects were tested in pairs and alternated in the same session between a live talker (Live Condition) and a live feed of the talker to a television screen (Screen Condition). Order of presentation mode and word lists (monosyllabic English words) was counterbalanced across subjects. Subjects wore sound-isolating headphones, and signal intensity was controlled across conditions. Word lists were counterbalanced for spoken word frequency and initial consonant structure. Stimuli were presented at 7 signal-to-noise ratios (pink noise). Accuracy of identification was higher in the Live Condition than in the Screen Condition. Possible causes of this effect are explored through manipulations of monocular and binocular depth cues and through testing with modern 3D display technology.

THURSDAY AFTERNOON, 25 OCTOBER 2012
MARY LOU WILLIAMS A/B, 2:00 P.M. TO 5:00 P.M.
Session 4pUW
Underwater Acoustics and Signal Processing in Acoustics: Array Signal Processing and Source
Localizations
Jeffrey A. Ballard, Chair
Applied Research Laboratories, The University of Texas, Austin, TX 78713-8029
Contributed Papers
2:00

4pUW1. Performance of a large-aperture array in a complex littoral environment. Steven L. Means, Stephen C. Wales, and Jeffrey S. Rogers (Naval Research Laboratory, 4555 Overlook Ave SW, Washington, DC 20375, [email protected])

Over 850 hours of ambient acoustic measurements, spanning an approximately two-month period, were taken on a long (917 m), 500-phone linear array 12 km off the coast of Fort Lauderdale, Florida, capturing both day and night, commercial and recreational, shipping-generated noise. Marine-band radar and AIS data were collected concurrently so that ship locations could be tracked in a large area surrounding the array. Array performance is investigated by beamforming in a number of frequency bands and apertures to determine median beam noise, noise gain reduction, and noise window statistics as a function of bearing. Additionally, the mean-square coherence is computed as a function of normalized distance (range/wavelength). The results are compared for a variety of time periods and environmental conditions. Comparisons of measurements with a computational noise model using known ship locations will be made. [Work supported by ONR base funding at NRL.]

2:15

4pUW2. Depth-based suppression of moving interference with vertical line arrays in the deep ocean. Reid K. McCargar and Lisa M. Zurk (Electrical and Computer Engineering, Northwest Electromagnetics and Acoustics Research Lab, Portland State University, PO Box 751, Attn: NEAR Lab, Portland, OR 97207, [email protected])

Vertical line arrays (VLAs) deployed below the critical depth in deep-ocean environments can exploit the reliable acoustic path (RAP), which exhibits low transmission loss (TL) at moderate ranges and increased TL for distant interference. However, nearby surface-ship interference presents a major challenge for an array whose lack of horizontal aperture provides no bearing discrimination. The motion of the interference degrades covariance estimation and limits observation intervals, thus limiting adaptive rejection capabilities. An image-method interpretation of the propagation physics reveals a depth-dependent modulation feature that enables separation of signals originating from near-surface sources from those of sub-surface passive acoustic sources. This discrimination can be achieved in the data through an integral transform. The feature is robust to environmental variability and allows for rejection of near-surface interference and depth classification. The transform-based filter is derived in closed form and demonstrated with simulation results for a deep-ocean environment with multiple moving surface interferers.

2:30

4pUW3. Imaging objects with variable offset in an evanescent wave field using circular synthetic aperture sonar and spectroscopy. Daniel Plotnick and Philip L. Marston (Dept. of Physics and Astronomy, Washington State University, Pullman, WA 99163, [email protected])

Imaging properties of objects suspended in an acoustic evanescent wave field are examined. Evanescent waves are generated using a tank containing immiscible liquids and an appropriately directed acoustic beam [C. F. Osterhoudt et al., IEEE J. Oceanic Eng. 33, 397-404 (2008)]. The source and receiver transducers are in the liquid having the higher sound velocity. Object(s) are spun about a vertical axis while scattering is measured. The object(s)' offset into the wave field is then varied and the experiment repeated. In this work, small spheres and other objects are used to gain insight into imaging properties as a function of the object(s)' position in the evanescent field. Data are examined using circular synthetic aperture techniques. Additionally, a comparison is made between spectral data and a heuristic model in the case of two spheres; the spectral evolution in that case is affected by the interference from the two scatterers. Cases where the source and receiver are collocated (monostatic) and those where the source and receiver are separated (bistatic) are compared. [Work supported by ONR.]

2:45

4pUW4. Comparison of compressive sampling beamforming to adaptive and conventional beamforming. Jeffrey A. Ballard (Applied Research Laboratories, The University of Texas, P.O. Box 8029, Austin, TX 78713-8029, [email protected])

Interest in the application of compressive sampling to beamforming has increased in recent years [G. F. Edelmann and C. F. Gaumond, J. Acoust. Soc. Am. 130(4), EL232-EL237 (2011)]. This work compares the performance of a compressive sampling beamformer to conventional beamforming and to an adaptive beamforming algorithm that utilizes dominant mode rejection. The algorithms are applied to measured data collected on a horizontal line array off the southeast coast of Florida in 2008 [K. D. Heaney and J. J. Murray, J. Acoust. Soc. Am. 125(3), 1394-1402 (2009)]. Several factors affecting performance are considered: high and low SNR signals, amount of clutter (number of other targets), and varying degrees of array health (force-failing 50%, 75%, and 90% of the array). To assess the ability of each beamformer to differentiate signals from noise, known signals are injected into the data at decreasing SNR. The analysis compares the frequency-azimuth data and bearing-time records that each beamformer outputs. [Work supported by ARL:UT IRD.]
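The image-method idea behind the depth-based filtering in 4pUW2 can be illustrated with a toy calculation: a source and its surface-reflected, phase-inverted image interfere, so the received amplitude scales roughly as |2 sin(kz·z)| with source depth z and vertical wavenumber kz. All numbers below are illustrative assumptions, not values from the paper:

```python
import numpy as np

c = 1500.0                      # sound speed, m/s (illustrative)
f = 50.0                        # frequency, Hz (illustrative)
theta = np.deg2rad(15.0)        # arrival elevation angle at the array (assumed)
kz = 2 * np.pi * f / c * np.sin(theta)   # vertical wavenumber of the arrival

def surface_image_factor(depth_m):
    """Magnitude of the source-plus-surface-image interference factor."""
    return float(np.abs(2.0 * np.sin(kz * depth_m)))

shallow = surface_image_factor(5.0)     # near-surface interferer (e.g., ship noise)
deep = surface_image_factor(150.0)      # submerged source

# The near-surface source is attenuated relative to the deep one, and its
# modulation versus frequency/angle is slow: the handle for depth filtering.
```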
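Compressive-sampling beamforming of the kind compared in 4pUW4 can be sketched as sparse recovery over a plane-wave dictionary. The sketch below uses a generic l1 solver (ISTA) and invented array parameters; it is not the authors' processing chain or the Edelmann-Gaumond implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
c = 1500.0                               # sound speed, m/s
n_el, d, f = 16, 1.875, 400.0            # elements, spacing (m), frequency (Hz): assumed
k = 2 * np.pi * f / c
pos = np.arange(n_el) * d
angles = np.deg2rad(np.linspace(-90, 90, 181))   # candidate bearings, 1 deg grid

# Plane-wave dictionary: one steering vector per candidate bearing.
A = np.exp(1j * k * np.outer(pos, np.sin(angles)))

# Two sources (-20 and 35 deg) plus a little sensor noise.
x_true = np.zeros(len(angles), complex)
x_true[np.argmin(np.abs(angles - np.deg2rad(-20.0)))] = 1.0
x_true[np.argmin(np.abs(angles - np.deg2rad(35.0)))] = 0.8
y = A @ x_true + 0.01 * (rng.standard_normal(n_el) + 1j * rng.standard_normal(n_el))

# ISTA iterations for the l1-regularized problem min ||Ax - y||^2 + lam*||x||_1.
lam = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(len(angles), complex)
for _ in range(1000):
    g = x - step * (A.conj().T @ (A @ x - y))          # gradient step
    mag = np.abs(g)
    x = np.where(mag > 1e-12, g / np.maximum(mag, 1e-12), 0) \
        * np.maximum(mag - step * lam, 0)              # complex soft threshold

# The two dominant entries (with neighbor suppression) give the bearings.
mag_x = np.abs(x)
peaks = []
for _ in range(2):
    i = int(np.argmax(mag_x))
    peaks.append(i)
    mag_x[max(0, i - 5):i + 6] = 0.0                   # suppress the mainlobe
est_deg = sorted(float(np.rad2deg(angles[p])) for p in peaks)
```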
3:00
4pUW5. Frequency-difference beamforming with sparse arrays. Shima
H. Abadi (Mechanical Engineering, University of Michigan, 2010 W.E.Lay
Automotive Laboratory 1231 Beal Ave., Ann Arbor, MI 48109, [email protected]
edu), Heechun Song (Marine Physical Laboratory, Scripps Institution of Oceanography, University of California at San Diego, La Jolla, CA), and David R.
Dowling (Mechanical Engineering, University of Michigan, Ann Arbor, MI)
When an acoustic signal is transmitted to a remote receiving array with
sufficient aperture and transducer density, the arrival direction(s) of the ray
paths linking the source and the array may be determined by beamforming
the transducer recordings. However, when the receiving array is sparse, i.e.
there are many signal wavelengths between transducers, the utility of conventional beamforming is degraded because of spatial aliasing. Yet, when
the signal has sufficient bandwidth, such aliasing may be mitigated or eliminated through use of an unconventional nonlinear beamforming technique
that manufactures a desired frequency difference from the recorded signals.
When averaged through the signal’s frequency band, the output of frequency-difference beamforming is similar to that of conventional beamforming evaluated at the desired difference frequency. Results and
comparisons from simple propagation simulations and FAF06 experimental
measurements are shown for broadband signal pulses (11-19 kHz) that propagate 2.2 km underwater to a vertical 16-element receiving array having a
3.75-m-spacing between elements (almost 40 signal-center-frequency wavelengths). Here, conventional delay-and-sum beamforming results in the signal’s frequency band are featureless, but received ray-path directions are
successfully determined using frequency differences that are well below the
broadcast signal’s frequency band. [Sponsored by ONR.]
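The frequency-difference idea can be sketched as follows: the per-element product X(f+Δf)·X*(f) behaves like array data at the low difference frequency Δf, at which the sparse array is no longer spatially undersampled, so conventional steering applied to that product recovers the arrival angle. The simulation below is a minimal illustration of that principle; only the element count, spacing, and band echo the abstract, and the single plane-wave arrival and noise level are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
c = 1500.0
n_el, d = 16, 3.75                       # sparse at 11-19 kHz: many wavelengths apart
pos = np.arange(n_el) * d
theta_true = np.deg2rad(25.0)            # assumed single plane-wave arrival
delays = pos * np.sin(theta_true) / c

freqs = np.linspace(11e3, 19e3, 81)      # in-band frequency samples (100 Hz bins)
X = np.exp(-2j * np.pi * np.outer(freqs, delays))          # ideal element spectra
X += 0.05 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

df = 200.0                               # difference frequency: d = lambda/2 there
shift = int(round(df / (freqs[1] - freqs[0])))
prod = X[shift:, :] * np.conj(X[:-shift, :])               # acts like data at df

look = np.deg2rad(np.linspace(-90, 90, 181))
B = np.empty(len(look))
for i, th in enumerate(look):
    w = np.exp(2j * np.pi * df * pos * np.sin(th) / c)     # steer at df, not f
    B[i] = np.mean(np.abs(prod @ w) ** 2)                  # average across the band

est_deg = float(np.rad2deg(look[np.argmax(B)]))            # unaliased bearing estimate
```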
3:15–3:30 Break
3:30
4pUW6. Blind source localization and separation in three-dimensional
space. Na Zhu (Department of Engineering Technology, Austin Peay State
University, P.O. Box 4455, Clarksville, TN 47044, [email protected]) and
Sean Wu (Department of Mechanical Engineering, Wayne State University,
Detroit, MI)
A new methodology called blind source localization and separation (BSLS) is developed to locate sound sources in three-dimensional space and extract their corresponding sound signals from the directly measured data. The technology comprises two steps. First, the sound sources are located by applying signal pre-processing and triangulation algorithms to the signals measured at the microphones. Second, taking the source locations from the first step and the measured microphone signals as input, the point-source separation method is used to extract the source signals and reconstruct the sound sources at their locations. The impact of various factors, such as the types and characteristics of the sources, microphone configurations, signal-to-noise ratios, number of microphones, and errors in source localization, on the quality of source separation will be examined, and the results compared with those obtained by the conventional blind source separation method.
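The triangulation step of the first stage can be sketched as time-difference-of-arrival (TDOA) localization solved by Gauss-Newton iteration; the microphone geometry and source position below are invented for illustration:

```python
import numpy as np

c = 343.0                                  # speed of sound in air, m/s
mics = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])   # assumed 5-mic layout (m)
src = np.array([0.7, 1.9, 0.4])            # assumed true source position (m)

ranges = np.linalg.norm(mics - src, axis=1)
tdoa = (ranges[1:] - ranges[0]) / c        # "measured" delays relative to mic 0

def residual(x):
    """TDOA residuals (seconds) for a candidate source position x."""
    r = np.linalg.norm(mics - x, axis=1)
    return (r[1:] - r[0]) / c - tdoa

x = np.array([0.5, 0.5, 0.5])              # initial guess inside the array
for _ in range(50):
    # numerical Jacobian of the residuals, then a Gauss-Newton step
    J = np.empty((len(mics) - 1, 3))
    for k in range(3):
        e = np.zeros(3); e[k] = 1e-6
        J[:, k] = (residual(x + e) - residual(x)) / 1e-6
    step, *_ = np.linalg.lstsq(J, -residual(x), rcond=None)
    x = x + step
# x now estimates the source position from the delay differences alone
```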
3:45
4pUW7. Broadband sparse-array blind deconvolution using unconventional beamforming. Shima H. Abadi (Mechanical Engineering, University
of Michigan, 2010 W.E.Lay Automotive Laboratory 1231 Beal Ave., Ann
Arbor, MI 48109, [email protected]), Heechun Song (Marine Physical
Laboratory, Scripps Institution of Oceanography, University of California at
San Diego, La Jolla, CA), and David R. Dowling (Mechanical Engineering,
University of Michigan, Ann Arbor, MI)
Sound from a remote underwater source is commonly distorted by multipath propagation. Blind deconvolution is the task of estimating the unknown
waveforms for the original source signal, and the source-to-receiver impulse
response(s), from signal(s) recorded in an unknown acoustic environment.
Synthetic time reversal (STR) is a technique for blind deconvolution of underwater receiving-array recordings that relies on generic features of the propagating modes or ray paths that lead to multipath sound propagation. In prior
studies the pivotal ingredient for STR, an estimate of the source-signal’s
phase (as a function of frequency), was generated from conventional beamforming of the recorded signals. However, through the use of unconventional
nonlinear frequency-difference beamforming, STR can be extended to sparse
array recordings where the receiving-array elements are many wavelengths
apart and conventional beamforming is inadequate. This extension of STR
was tested with simple propagation simulations and FAF06 experimental
measurements involving broadband signal pulses (11-19 kHz) that propagate
2.2 km in the shallow ocean to a vertical 16-element receiving array with a
3.75-m spacing between elements (almost 40 signal-center-frequency wavelengths). The cross-correlation coefficients between the source-broadcast and
STR-reconstructed-signal waveforms are 98% for the simulations and 91-92%
for the experiments. [Sponsored by ONR.]
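The 98% and 91-92% figures quoted above are peak normalized cross-correlation coefficients between broadcast and reconstructed waveforms. A minimal sketch of that metric (the signals here are hypothetical stand-ins, not the FAF06 data):

```python
import numpy as np

def max_xcorr_coef(x, y):
    """Peak normalized cross-correlation between two waveforms, searched
    over all relative lags (1.0 means a perfect match up to a delay)."""
    x = x - np.mean(x)
    y = y - np.mean(y)
    xc = np.correlate(x, y, mode="full")
    return np.max(xc) / (np.linalg.norm(x) * np.linalg.norm(y))

fs = 4000
t = np.arange(0, 0.1, 1 / fs)
source = np.sin(2 * np.pi * 440 * t)                    # 44 full cycles
reconstructed = np.concatenate([np.zeros(25), source])  # delayed, undistorted copy
coef = max_xcorr_coef(source, reconstructed)            # close to 1.0
```

Multipath distortion or noise in the reconstruction lowers the peak below 1.0, which is what the quoted percentages quantify.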
4:00
4pUW8. Beam steering response matrix inversion method: Accurate
localization of simultaneous high and low frequency sources using a
small line array. Jon W. Mooney (Acoustics & Vibration, KJWW Engineering Consultants, 623 26th Avenue, Rock Island, IL 61201, [email protected])
The Beam Steering Response (BSR) Matrix Inversion Method is a technique in which a small line array is mathematically steered to populate
a BSR matrix, allowing the locations of multiple sources to be resolved simultaneously.
Using this technique, high directivity over a wide frequency range may be
achieved with a small array. Simultaneous tracking of multiple low and high
frequency sources is demonstrated using the BSR Matrix Inversion Method
with an acoustic video camera having a 49-inch, 8-element line array. This
paper addresses 1) the derivation of the Beam Steering Response matrix,
2) line array design considerations, 3) processing limitations, and 4) a practical demonstration of the BSR Matrix Inversion Method.
4:15
4pUW9. An experimental study of drive-by single channel time reversal
using scaled targets and clutter. Ahmad T. Abawi (HLS Research, 3366
North Torrey Pines Court, Suite 310, La Jolla, CA 92037, [email protected]), Ivars Kirsteins (NUWC, Newport, RI), and Daniel Plotnick
(Physics, Washington State University, Pullman, WA)
Previous results suggest that iterative single-channel time reversal (TR) is
a promising, simple, and inexpensive technique for enhancing the backscatter signature of elastic objects while simultaneously focusing on their dominant resonance
frequency in the presence of noise and clutter (Pautet et al., 2005; Waters et
al., 2009). However, to the best of our knowledge, these and other studies have
only considered the case when the sonar system, target, and clutter are fixed or
stationary, i.e., not moving. This is an unrealistic assumption since the sonar
platform is typically moving. In fact, motion and environmentally-induced
clutter scintillations may actually be beneficial for TR when the dominant target resonance varies slowly with aspect in clutter-limited noise environments.
Theoretical arguments in (Kirsteins et al. 2009) suggest TR provides additional
signal-to-noise ratio (SNR) enhancements against clutter under these circumstances. To confirm this hypothesis, a drive-by TR experiment was conducted
in a tank using scaled elastic targets/clutter sources with the platform motion
simulated by rotating the target/clutter during each TR iteration. Quantitative
analysis with regard to detecting the dominant resonance clearly shows that TR
in drive-by mode provides SNR enhancements against clutter.
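The resonance-focusing behavior of iterative TR can be caricatured in the frequency domain: each echo/retransmit cycle reweights the spectrum by the target response, so the dominant resonance survives while everything else decays. A toy sketch (the two-resonance target response is hypothetical, not a measured one):

```python
import numpy as np

def iterative_tr_spectrum(h_mag, n_iter=20):
    """Toy frequency-domain model of iterative single-channel TR: each
    echo/retransmit cycle multiplies the signal's magnitude spectrum by
    the target response |H(f)| and renormalizes the transmit level, so
    energy concentrates at the dominant resonance."""
    s = np.ones_like(h_mag)            # start from a flat broadband ping
    for _ in range(n_iter):
        s = s * h_mag                  # one scatter/retransmit cycle
        s = s / s.max()                # fixed transmit power per iteration
    return s

# hypothetical target with resonances at 0.3 (dominant) and 0.7
f = np.linspace(0.0, 1.0, 512)         # normalized frequency axis
H = np.exp(-((f - 0.3) / 0.02) ** 2) + 0.8 * np.exp(-((f - 0.7) / 0.02) ** 2)
spec = iterative_tr_spectrum(H)        # sharply peaked near f = 0.3
```

This omits propagation, noise, and the aspect dependence that the drive-by experiment probes, but it shows why the iterations act as an automatic resonance selector.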
4:30
4pUW10. Estimation of motion parameters and trajectory of a low-flying aircraft using acoustic sensors. A. Saravanakumar (MIT
Campus, Anna University, Chrompet, Chennai, Tamil Nadu 600044, India,
[email protected])
An aircraft generates an acoustic impulse that propagates outwards from
the source. The position of the source and hence the trajectory can be estimated by measuring the relative time of arrival of the impulse at a number
of spatially distributed sensors. The time difference for the acoustic wave
front to arrive at two spatially separated sensors is estimated by cross correlating the digitized outputs of the sensors. The time delay estimate is used to
calculate the source bearing, and the position of the source is found by triangulation using the bearings from two widely separated receiving nodes. The flight parameters of the aircraft are obtained by an autocorrelation
method using acoustic multipath delays. The signal emitted by a UAV
arrives at a stationary sensor located above a flat ground via a direct path
and a ground-reflected path. The difference in the times of arrival of the
direct-path and ground-reflected-path signal components is known as the
multipath delay. A model is developed to predict its variation with time,
and an autocorrelation method is formulated to estimate motion parameters
such as the speed and altitude of the aircraft.
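The cross-correlation time-delay estimate and the resulting bearing described above can be sketched as follows (the sensor separation, sample rate, and transient signal are illustrative assumptions, not values from the study):

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Delay of y relative to x, in seconds, taken from the lag of the
    peak of their cross-correlation."""
    xc = np.correlate(y, x, mode="full")
    return (np.argmax(xc) - (len(x) - 1)) / fs

c = 343.0      # speed of sound, m/s
d = 50.0       # sensor separation, m (hypothetical)
fs = 8000      # sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)
impulse = np.exp(-40 * t) * np.sin(2 * np.pi * 150 * t)  # decaying transient
delay = 37                                               # true delay, samples
x = impulse
y = np.concatenate([np.zeros(delay), impulse[:-delay]])  # delayed sensor copy
tau = estimate_delay(x, y, fs)                           # estimated delay, s
bearing = np.arcsin(c * tau / d)   # bearing from broadside (far-field), rad
```

With bearings from two widely separated nodes computed this way, the source position follows by intersecting the two bearing lines, as the abstract describes.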
4:45
4pUW11. Accuracy of passive source localization approaches with a single towed horizontal line-array in an ocean waveguide. Zheng Gong
(Mechanical Engineering, Massachusetts Institute of Technology, 5-435, 77
Massachusetts Ave, Cambridge, MA 02139, [email protected]) and Purnima
Ratilal (Electrical and Computer Engineering, Northeastern University,
Boston, MA)
Instantaneous passive source localization methods, applying (1) synthetic
aperture tracking, (2) the array invariant, (3) bearings-only target motion
analysis in modified polar coordinates via the extended Kalman filter, and
(4) bearings-migration minimum mean-square error techniques to measurements made on a single towed horizontal receiver array in a random range-dependent ocean waveguide, are examined. These methods are employed to
localize and track a vertical source array deployed in the far-field of a towed
horizontal receiver array during the Gulf of Maine 2006 Experiment. The
source transmitted intermittent broadband pulses in the 300-1200 Hz frequency range. All four methods are found to be comparable, with averaged
errors between 9% and 13% in estimating the mean source position in a
wide variety of source-receiver geometries. In the case of a relatively stationary source, the synthetic aperture tracking outperforms the other three
methods by a factor of two with only 4% error. For a moving source, the
Kalman filter method yields the best performance with 8% error. The array
invariant is the best approach for localizing sources within the endfire beam
of the receiver array with less than 10% error.
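Bearings from two well-separated receiver positions determine a source location by intersecting the two bearing lines; this is the geometric core of the bearings-based methods above. A minimal sketch (the positions and bearings are hypothetical):

```python
import numpy as np

def triangulate(p1, b1, p2, b2):
    """Intersect two bearing lines from receiver positions p1 and p2
    (bearings in radians, measured from the +x axis)."""
    d1 = np.array([np.cos(b1), np.sin(b1)])
    d2 = np.array([np.cos(b2), np.sin(b2)])
    # solve p1 + t1*d1 = p2 + t2*d2 for the ranges t1, t2
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

p1, p2 = [0.0, 0.0], [10.0, 0.0]   # two receiver positions (arbitrary units)
src = np.array([4.0, 3.0])         # hypothetical source
b1 = np.arctan2(src[1] - p1[1], src[0] - p1[0])
b2 = np.arctan2(src[1] - p2[1], src[0] - p2[0])
est = triangulate(p1, b1, p2, b2)  # recovers src exactly for noiseless bearings
```

Bearing noise degrades the fix most when the two lines are nearly parallel, which is consistent with the accuracy varying across source-receiver geometries as reported above.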
THURSDAY EVENING, 25 OCTOBER 2012
7:30 P.M. TO 9:30 P.M.
OPEN MEETINGS OF TECHNICAL COMMITTEES
The Technical Committees of the Acoustical Society of America will hold open meetings on Tuesday, Wednesday, and Thursday
evenings beginning at 7:30 p.m.
These are working, collegial meetings. Much of the work of the Society is accomplished by actions that originate and are taken in these
meetings including proposals for special sessions, workshops, and technical initiatives. All meeting participants are cordially invited to
attend these meetings and to participate actively in the discussion.
Committees meeting on Thursday are as follows:
Animal Bioacoustics: Julia Lee A/B
Noise: Trianon C/D
Speech Communication: Mary Lou Williams
Underwater Acoustics: Bennie Moten
J. Acoust. Soc. Am., Vol. 132, No. 3, Pt. 2, September 2012
164th Meeting: Acoustical Society of America