BSA BASIC AUDITORY SCIENCE MEETING
4th – 5th September 2017
University of Nottingham
Local Organising Committee
Alan Palmer1
Angie Killoran1
Johanna Barry1
Christian Füllgrabe1
Peyman Adjamian1
Ian Wiggins2
1 Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, Nottingham, NG7 2RD
2 NIHR Nottingham Biomedical Research Centre, Ropewalk House, Nottingham, NG1 5DU
Meeting venue, meeting information and accommodation addresses
Lectures: Sir Clive Granger Building, Lecture Theatre room A48, University of Nottingham, University
Park, NG7 2RD (no. 16 on the campus map, page 5)
Posters/catering: Sir Clive Granger Building, room/foyer A42, University of Nottingham, University
Park, NG7 2RD (no. 16 on the campus map, page 5)
BSA BAS Dinner: The Senate Chamber, Trent Building, University of Nottingham, University Park,
NG7 2RD (no. 11 on the campus map, page 5). A cash bar will be available on the night.
Lenton and Wortley Hall accommodation address for 3rd – 4th September 2017: Lenton and Wortley
Hall, University of Nottingham, University Park, NG7 2RD (see the residence marked on the campus map,
near Woodside Road, page 5). Breakfast will be served between 07.30 and 08.30 am in the Hall.
Ancaster Hall accommodation address for 4th September 2017: Ancaster Hall, University of
Nottingham, University Park, NG7 2RD (see the residence marked on the campus map, near the A52 Derby
Road, page 5). Breakfast will be served between 07.30 and 08.30 am in the Hall.
Talks: Each talk will last 20 minutes including 5 minutes for questions.
Poster board size: A0 portrait (841 × 1189 mm / 33.1 × 46.8 in)
Car parking: Parking is available for delegates next to the Sir Clive Granger Building (see the campus map,
page 5), and there is limited parking available at both Ancaster Hall and Lenton and
Wortley Hall (see the orange zones on the map, page 5). A visitor's parking permit will be emailed to
you prior to the meeting. Delegates will need to print this out, write their registration details on it and then
display it clearly on their windscreen (parking permits/vouchers or pay-and-display tickets are required to
be displayed on vehicles on University campuses between 9.15am and 4.00pm on weekdays). Anyone who
forgets their permit will need to purchase a ticket at a pay-and-display machine. Anyone who does not
display the visitor permit clearly while on site may be issued with a parking fine.
Other: A conference name badge, bag and a printed programme and abstracts booklet will be provided
during registration.
Getting to the University from Nottingham and Beeston
Nottingham is right in the centre of the country and is very well connected to all other major cities by the
train network.
From Nottingham and Beeston
A number of public transport services run close to or through our campuses.
By bus
There are a number of bus services running from Nottingham to University Park Campus. Please see the
bus services page for further information.
By tram
If you are coming to University Park Campus from Nottingham Train Station, you can now hop on a tram
which is accessible by the walkway leading from the station. Tickets must be purchased before boarding.
The Toton line takes you directly to the University and visitors should disembark at the University of
Nottingham stop.
By taxi
There are taxi ranks throughout the city and immediately adjacent to the main railway and bus stations.
The journey to the campus takes approximately 15 minutes.
By train
The nearest train stations are located in Nottingham City Centre or Beeston. Taxis and buses are available
at both stations.
From East Midlands Airport
From East Midlands Airport you can take the Trent Barton Indigo service directly to the campus or the
Skylink bus to Nottingham. Buses leave from outside the Airport Arrivals hall. See bus services for
further information.
You can also walk to the taxi rank on the terminal forecourt and take a direct taxi to the University. The
cost of a single/one way journey is approximately £20.
From M1 motorway
Leave the M1 motorway at Junction 25 to join the A52 to Nottingham. Follow the A52 for approximately
4 miles; at the Toby Carvery roundabout turn right onto the A6464, then turn left at the next roundabout to
enter the University's West Entrance.
Cycling to University Park
Facilities
Visit the cycling facilities page for interactive maps for each of our campuses.
Find out more about cycling facilities available across the University on our Sustainability pages.
Cycle route maps
Find the best cycle route to University Park from across Nottingham and other campuses with our route
maps:
• University Park to King's Meadow and Jubilee Campus
• Beeston and West Bridgford to University campuses
To view a larger map/directions visit:
https://www.nottingham.ac.uk/about/visitorinformation/mapsanddirections/mapsanddirections.aspx
DAY ONE
BSA Basic Auditory Science 2017
Monday September 4th 2017
08.30-09.10  Registration in the Sir Clive Granger Building
09.10-09.15  Welcome: Alan Palmer

Session 1: Oral communications in Lecture Theatre A48, Sir Clive Granger Building
Chair: John Culling

09.15-09.35  Ulrik Beierholm: “Modeling human auditory stream segregation through non-parametric Bayesian inference”
09.35-09.55  Dan Goodman: “On the use of hypothesis-driven reduced models in auditory neuroscience”
09.55-10.15  Lore Thaler: “Mouth-Clicks used by Blind Expert Human Echolocators – Signal Description and Model Based Signal Synthesis”
10.15-10.35  Luboš Hládek: “Effect of width of acoustic beam in eye-controlled beamforming in a dynamic ‘cocktail party’”
10.35-11.15  Tea and Coffee

Session 2: Oral communications in Lecture Theatre A48, Sir Clive Granger Building
Chair: Michael Akeroyd

11.15-11.35  Carlos Jurado: “Brain's frequency-following responses to low frequency and infrasound”
11.35-11.55  Mark Fletcher: “Effects of very high-frequency sound and ultrasound on humans”
11.55-12.15  Stephen Rowland: “Brain activity during effortful listening in naturalistic scenes”
12.15-12.35  Antonio Forte: “Selective auditory attention modulates the human brainstem's response to running speech”
12.35-12.55  Maria Chait: “Measuring auditory salience and distraction”
12.55-14.00  Lunch break
Session 3: Poster communications in the Lecture Theatre Atrium, Sir Clive Granger Building

14.00-17.10  Poster presentations
17.15-18.00  Business Meeting, main auditorium
19.30-21.30  Dinner in the Senate Chamber
DAY TWO
BSA Basic Auditory Science 2017
Tuesday September 5th 2017
Session 4: Poster communications in the Lecture Theatre Atrium, Sir Clive Granger Building

09.30-12.00  Poster presentations
12.00-13.15  Lunch break

Session 5: Oral communications in Lecture Theatre A48, Sir Clive Granger Building
Chair: Ian Winter

13.15-13.35  Axelle Calcus: “Developmental effects of mild to moderate sensorineural hearing loss on the mismatch negativity”
13.35-13.55  Ian Forsythe: “The constraints of metabolic substrate and presynaptic ATP generation for fast synaptic transmission at the calyx of Held”
13.55-14.15  Bas Olthof-Bakker: “Nitric oxide synthase expression and function in the central nucleus of the inferior colliculus”
14.15-14.35  Neil Ingham: “Inner hair cell auditory neuropathy in Klhl18 mutant mice”
14.35-15.10  Tea and Coffee

Session 6: Oral communications in Lecture Theatre A48, Sir Clive Granger Building
Chair: Deborah Vickers

15.10-15.30  Deniz Başkent: “Voice cues and speech perception in cochlear-implant users”
15.30-15.50  Robert Carlyon: “Effects of a potassium channel modulator and chronic stimulation on temporal processing by cochlear implant listeners”
15.50-16.10  Anne Schilder: “Early phase trials of novel hearing therapies; challenges and opportunities”
16.10-16.30  Sarah Michiels: “Prognostic indicators for decrease in tinnitus severity after cervical physical therapy in patients with somatic tinnitus”
16.30  End of Conference
Oral Presentations
O1. Modeling human auditory stream segregation through non-parametric Bayesian inference
U.R. Beierholm1,2, N. Larigaldie1 and T. Yates2
1 Department of Psychology, Durham University, Durham, DH1 3LE, UK
2 School of Psychology, University of Birmingham, B15 2TT, UK
When hearing a sequence of auditory tones (e.g. L-H-L-H-L-H-L-H), human subjects can either perceive
these as fused into a single ‘stream’ or segregated into two separate ‘streams’, depending on the specifics
of the stimuli (Bregman, 1994; Van Noorden, 1977). Such perceptual grouping of sequential auditory
cues is traditionally modelled using a mechanistic approach (e.g. Beauvois & Meddis, 1991). The
problem however is essentially one of source inference – a problem that has recently been tackled using
statistical Bayesian models in visual and auditory-visual modalities (Shams & Beierholm, 2010). Usually
the models are restricted to performing inference over just one or two possible sources, but human
perceptual systems have to deal with much more complex scenarios. To characterize human perception
we have developed a Bayesian inference model that allows an unlimited number of signal sources to be
considered: it is general enough to allow any discrete sequential cues, from any modality. The model uses
a non-parametric process prior, hence increased complexity of the signal does not necessitate more
parameters. The model not only determines the most likely number of sources, but also specifies the
source that each signal is associated with. The model gives an excellent fit to data from an auditory stream
segregation experiment in which the pitch and presentation rate of pure tones determined the perceived
number of sources. Likewise, it is able to explain several known results in the literature (e.g. Bregman,
1978), as well as make novel testable predictions. The approach is very general – it can be applied to any
set of discrete sequential cues involving multiple sources – and it gives a simple, principled way to
incorporate natural signal constraints into the generative model.
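
For illustration, the following minimal Python sketch shows the kind of inference the abstract describes, assuming a Chinese-restaurant-process prior (one standard non-parametric choice) and a Gaussian likelihood on log-pitch; the greedy assignment, function names and parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np

def assign_sources(pitches, alpha=1.0, sigma=0.1):
    """Greedy sequential source assignment for a tone sequence.

    Each tone joins the existing source that best explains its (log-)pitch,
    or opens a new source, under a Chinese-restaurant-process prior with
    concentration alpha and a Gaussian pitch likelihood with sd sigma.
    Illustrative only: the published model integrates over assignments.
    """
    assignments, sources = [], []
    for t, p in enumerate(pitches):
        scores = []
        for s in sources:  # posterior score for joining each existing source
            prior = len(s) / (t + alpha)               # CRP "rich get richer" term
            lik = np.exp(-0.5 * ((p - np.mean(s)) / sigma) ** 2)
            scores.append(prior * lik)
        # score for opening a brand-new source (flat-ish new-source likelihood)
        scores.append(alpha / (t + alpha) * np.exp(-0.5))
        k = int(np.argmax(scores))
        if k == len(sources):
            sources.append([p])
        else:
            sources[k].append(p)
        assignments.append(k)
    return assignments

# An L-H-L-H... sequence: a large pitch separation yields two streams.
tones = [500, 1000, 500, 1000, 500, 1000]        # Hz
print(assign_sources(np.log(tones)))             # -> [0, 1, 0, 1, 0, 1]
```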
References
Beauvois, M.W. & Meddis, R. 1991. A computer model of auditory stream segregation. Q J Exp Psychol,
43(3), 517-541.
Bregman A.S. 1978. Auditory streaming: Competition among alternative organizations. Percept
Psychophys, 23, 391-398.
Bregman, A.S. 1994. Auditory Scene Analysis: The Perceptual Organization of Sound (Bradford Book).
MIT Press.
Shams, L. & Beierholm, U.R. 2010. Causal inference in perception. Trends Cogn Sci, 14(9), 1-8.
Van Noorden, L.P.A.S. 1977. Minimum differences of level and frequency for perceptual fission of tone
sequences ABAB. J Acoust Soc Am, 61, 1041-1045.
O2. On the use of hypothesis-driven reduced models in auditory neuroscience
D. Goodman
Department of Electrical Engineering, Imperial College London, London, SW7 2AZ, UK
There are a number of detailed models of auditory neurons that are able to reproduce a wide range of
phenomena. However, using these models to test hypotheses can be challenging, as they have many
parameters and complex interacting subsystems. This makes it difficult to investigate the function of a
mechanism by varying just one parameter in isolation, or to assess the robustness of a model by
systematically varying many parameters. In some cases, by limiting the scope of a model to testing a
specific hypothesis using a particular set of stimuli, it is possible to create a reduced mathematical model
with relatively few, independent parameters. This has considerable advantages with respect to the
problems above. In particular, if a certain behaviour is robust and doesn’t depend on finely tuned
parameters, then different implementations are more likely to produce the same results – a key property
for reproducible research. In addition, the code for these models is typically simpler and therefore more
readable, and can often run faster, enabling us to carry out systematic parameter exploration. I will
illustrate these points with a reduced model of chopper cells in the ventral cochlear nucleus.
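
As a generic, hedged illustration of what such a reduction can look like (an assumed toy example, not the chopper model presented in this talk), a leaky integrate-and-fire unit with three parameters already produces the regular “chopping” discharge characteristic of these cells:

```python
import numpy as np

def lif_chopper(i_input, tau_m=5e-3, v_thresh=1.0, dt=1e-5, t_max=0.05):
    """Leaky integrate-and-fire unit: a three-parameter 'reduced' neuron.

    Driven by a steady suprathreshold input it fires at regular intervals,
    the hallmark of chopper responses in the ventral cochlear nucleus.
    """
    v, spikes = 0.0, []
    for step in range(int(t_max / dt)):
        v += dt / tau_m * (i_input - v)    # leaky integration of the input
        if v >= v_thresh:                  # threshold crossing -> spike
            spikes.append(step * dt)
            v = 0.0                        # reset after each spike
    return spikes

isis = np.diff(lif_chopper(i_input=1.5))
print(f"chopping rate ~ {1.0 / isis.mean():.0f} Hz, ISI sd = {isis.std():.2g} s")
```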
O3. Mouth-clicks used by blind expert human echolocators – Signal description and model based
signal synthesis
L. Thaler1, G.M. Reich2, X. Zhang3, D. Wang4, G.E. Smith5, Z. Tao3, R. Abdullah6, M. Cherniakov2, C.J.
Baker2, D. Kish7 and M. Antoniou2
1 Department of Psychology, Durham University, UK
2 Department of Electronic, Electrical and Systems Engineering, School of Engineering, University of Birmingham, UK
3 School of Information and Electronics, Beijing Institute of Technology, China
4 College of Electronic Science and Engineering, National University of Defense Technology, China
5 Department of Electrical & Computer Engineering, The Ohio State University, USA
6 Department of Computer and Communication Systems Engineering, Universiti Putra Malaysia, Malaysia
7 World Access for the Blind, California, USA
Echolocation is the ability to use sound-echoes to infer spatial information about the environment. Some
blind people have developed extraordinary proficiency in echolocation using mouth-clicks. The first step
of human biosonar is the transmission (mouth click) and subsequent reception of the resultant sound
through the ear. Existing head-related transfer function (HRTF) databases provide descriptions of the
reception of the resultant sound.
For the current report, we collected a large database of click emissions with three blind people
expertly trained in echolocation, which allowed us to perform unprecedented analyses. Specifically, the
current report provides the first ever description of the spatial distribution (i.e. beam pattern) of human
expert echolocation transmissions, as well as spectro-temporal descriptions at a level of detail not
available before.
Our data show that transmission levels are fairly constant within a 60° cone emanating from
the mouth, but levels drop gradually at further angles, more than for speech. In terms of spectro-temporal
features, our data show that emissions are consistently very brief (~3 ms duration) with peak frequencies
2-4 kHz, but with energy also at 10 kHz. This differs from previous reports of durations 3-15 ms and
peak frequencies 2-8 kHz, which were based on less detailed measurements.
Based on our measurements we propose to model transmissions as a sum of monotones modulated by a
decaying exponential, with angular attenuation by a modified cardioid. We provide model parameters for
each echolocator.
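
In one possible notation (the symbols below are assumptions, since the abstract does not fix them), this model family can be written as

```latex
% s(t, theta): sound pressure at time t and azimuth theta from the mouth axis
% a_k, f_k   : amplitude and frequency of the k-th monotone component
% tau        : decay time constant; rho: shape parameter of the modified cardioid
\[
  s(t,\theta) \;=\; D(\theta)\, e^{-t/\tau} \sum_{k=1}^{K} a_k \sin(2\pi f_k t),
  \qquad
  D(\theta) \;=\; \rho + (1-\rho)\cos\theta .
\]
```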
These results are a step towards developing computational models of human biosonar. For
example, in bats, spatial and spectro-temporal features of emissions have been used to derive and test
model-based hypotheses about behaviour. The data we present here suggest similar research opportunities
within the context of human echolocation. Relatedly, the data are a basis for developing synthetic models of
human echolocation that could be virtual (i.e., simulated) or real (i.e., loudspeakers and microphones), and
which will help in understanding the link between physical principles and human behaviour.
Acknowledgements
Supported by the British Council and the Department for Business, Innovation and Skills in the UK
(award SC037733) to the GII Seeing with Sound Consortium. This work was partially supported by a
BBSRC grant to LT (BB/M007847/1).
O4. Effect of width of acoustic beam in eye-controlled beamforming in a dynamic ‘cocktail party’
L. Hládek1, B. Porr2 and W.O. Brimijoin1
1 Medical Research Council/Chief Scientist Office Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, Glasgow, G31 2ER, UK
2 School of Engineering, Glasgow University, Glasgow, G12 8QQ, UK
Hearing impaired people often point their eyes at a speaker in a conversation in order to benefit
from visual cues. However, this strategy may not be compatible with the typical microphone
directionality of a hearing aid, because it is fixed with respect to the head, not the eyes. Equipping hearing
aids with eye-controlled beamforming may help alleviate this problem (Kidd et al., 2013), because
hearing-aid users may benefit from better amplification of the initial portions of the speech of the speaker
to whom they switch their gaze. However, the actual benefit in terms of speech intelligibility remains
unknown. In the current experiment we compared head-controlled and eye-controlled beamforming with
omnidirectional listening in a ‘dynamic cocktail party’ scenario. We expect that the intelligibility of
speech immediately after a change in target location will be the most enhanced under the eye-controlled
condition. The task of the normal hearing and hearing impaired participants was to listen to a sequence
of numbers (presented among speech-shaped noise distractors) and report them back to the experimenter.
The numbers could originate either from the left or right, such that the switch between the sides occurred
randomly after a few presentations. The beamforming technology was simulated in a loudspeaker ring
using head tracking and eye tracking. In the analysis we focus on the overall performance over the course
of a block of measurements as a function of the width of the acoustic beam, and the performance during
the switching periods. Preliminary data suggest that the signal processing strategy and the width of the
acoustic beam influence the outcome measures. This work is ongoing and we expect to arrive at
conclusive results once we record data from more listeners.
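
For illustration, a minimal hedged sketch of how an acoustic beam of adjustable width might be simulated over a loudspeaker ring: each loudspeaker is weighted by a raised-cosine gain that falls off with angular distance from the steering direction supplied by the eye or head tracker. The gain law and parameter names are assumptions, not the signal processing used in this experiment.

```python
import numpy as np

def beam_gain(speaker_az, steer_az, beam_width):
    """Per-loudspeaker gain for a simulated beamformer.

    speaker_az : array of loudspeaker azimuths (degrees)
    steer_az   : steering direction from eye/head tracking (degrees)
    beam_width : full width (degrees) over which the raised-cosine gain
                 falls from 1 to 0; larger values give a wider beam
    """
    d = np.abs((speaker_az - steer_az + 180) % 360 - 180)   # wrapped angular distance
    return 0.5 * (1 + np.cos(np.pi * np.clip(d / (beam_width / 2), 0, 1)))

ring = np.arange(0, 360, 15)                                # a 24-loudspeaker ring
print(beam_gain(ring, steer_az=30, beam_width=60).round(2)) # narrow beam at 30 deg
```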
Acknowledgements
This work was supported by the Medical Research Council (grant number U135097131) and the Oticon
Foundation.
References
Kidd, G., Favrot, S., Desloge, J.G., Streeter, T.M. & Mason, C.R. 2013. Design and preliminary
testing of a visually guided hearing aid. J Acoust Soc Am, 133(3), EL202-EL207.
O5. Brain's frequency-following responses to low frequency and infrasound
C. Jurado and T. Marquardt
UCL Ear Institute, University College London, 332 Grays Inn Road, London, WC1X 8EE, UK
Sources of airborne infrasound are common in our everyday environment. While there is clear evidence
that infrasound can be perceived and may affect individuals, the processing of infrasound by the auditory
system and its possible differences to that of sounds in the conventional hearing range has received little
study. In this work, brain’s frequency following responses (FFR) to monaurally applied steady sinusoidal
sounds of 11 Hz and 38 Hz have been compared for a group of 11 subjects, using EEG. An especially
designed infrasound sound source with a 9m long feeding tube was used, in a configuration that allowed
its placement outside the measurement sound booth, to avoid electrical artefacts. For both frequencies,
the growth of the FFR was measured at increasing loudness levels. The averaged results show that the
FFR strength grows more steeply in response to 11 Hz than to 38 Hz. Further, in contrast to frequencies
in the conventional hearing range, measurable brain responses could often be recorded with sound
pressure levels down the 11 Hz pure-tone threshold. We assume that the latter is possible due to the
extremely long periodicity of infrasound, which allows neurons along the entire auditory pathway to
synchronously phase-lock to the stimulus, giving a strongly modulated population response to the
infrasound tone.
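
For illustration, one standard way to quantify FFR strength is to average the time-locked EEG epochs and read off the spectral amplitude at the stimulus frequency; the hedged sketch below assumes this approach, with invented variable names rather than the authors' pipeline.

```python
import numpy as np

def ffr_strength(epochs, fs, f_stim):
    """Amplitude of the averaged EEG response at the stimulus frequency.

    epochs : array, shape (n_trials, n_samples), time-locked EEG epochs
    fs     : sampling rate in Hz
    f_stim : stimulus frequency in Hz (e.g. 11 or 38)
    """
    avg = epochs.mean(axis=0)                    # averaging cancels non-phase-locked noise
    spec = np.abs(np.fft.rfft(avg)) / avg.size
    freqs = np.fft.rfftfreq(avg.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - f_stim))]

# toy example: a weak 11 Hz response buried in noise across 200 trials
fs, t = 1000, np.arange(0, 1, 1 / 1000)
epochs = 0.1 * np.sin(2 * np.pi * 11 * t) + np.random.randn(200, t.size)
print(ffr_strength(epochs, fs, 11))              # well above the noise floor
```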
O6. Effects of very high-frequency sound and ultrasound on humans
M. Fletcher1, S. Lloyd Jones1,2, C. Dolder1, P. White1, T. Leighton1 and B. Lineton1
1 Faculty of Engineering and the Environment, University of Southampton, University Road, Southampton, SO17 1BJ, UK
2 Department of Audiology and Hearing Therapy, Royal South Hants Hospital, Brinton’s Terrace, Southampton, SO14 0YG, UK
For many years workers have reported adverse symptoms resulting from exposure to very high-frequency
(VHF) sound and ultrasound (US), including annoyance, difficulty concentrating, tinnitus, and
headaches. Recent work showing the presence of a new generation of VHF/US sources in public places
has reopened the debate about whether there are adverse effects of VHF/US, and has identified
weaknesses in standards and exposure guidelines. Our field measurements of VHF/US sources in public
places have identified devices producing up to 100 dB SPL at 20 kHz. Nearly all of the sources measured,
including those in places occupied by tens of millions of people each year, are likely to be clearly audible
to many young people.
To establish whether VHF/US exposure produces the symptoms reported, we conducted two
experiments. In each, participants were separated into two groups: “symptomatics”, who reported
previously experiencing symptoms, and “asymptomatics”, who did not. In the first experiment, claims
that symptoms occur even when the source is inaudible were tested. A 20 kHz stimulus below the
participant’s detection threshold (~88 dB SPL) was compared to a sham exposure under double-blind
conditions. In two further conditions, participants were told truthfully that US was not present and
untruthfully that US was present, to test for nocebo effects. No evidence of a US effect, but evidence of
a small nocebo effect for some symptoms, was found. It is possible that the substantial effects reported
for inaudible US exposure were not reproduced because of ethical restrictions on stimulus level and
duration, or that a stronger nocebo stimulus was required.
In the second experiment, participants were exposed to either a VHF stimulus (>14 kHz) or a
1 kHz reference stimulus, both set to the same sensation level (25 dB SL). For each condition, participants
were exposed to the stimuli four times consecutively for 3 minutes. In both symptomatics and
asymptomatics, VHF sound exposure led to greater overall discomfort ratings than the reference stimulus.
In the symptomatic group, discomfort appeared mainly to be associated with increased difficulty
concentrating and annoyance. A clear effect of the VHF stimulus on galvanic skin response compared to
the reference was also observed. Hypersensitivity only to VHFs may explain why some hyperacusis
patients have normal discomfort thresholds in clinical tests, which typically do not measure above 8 kHz.
Acknowledgements
Supported by the Colt Foundation (ref: CF/03/15). Part of Tim Leighton’s time was funded by the EU-funded
project EMPIR 15HLT03.
O7. Brain activity during effortful listening in naturalistic scenes
S.C. Rowland1, D.E.H. Hartley1,2,3,4 and I.M. Wiggins1,2
1 Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, NG7 2UH, UK
2 National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Ropewalk House, 113 The Ropewalk, Nottingham, NG1 5DU, UK
3 Medical Research Council (MRC) Institute of Hearing Research, School of Medicine, University of Nottingham, University Park, Nottingham, NG7 2RD, UK
4 Nottingham University Hospitals NHS Trust, Queens Medical Centre, Derby Road, Nottingham, NG7 2UH, UK
Evidence from non-invasive human brain imaging shows that listening to speech in adverse conditions
draws on a wide network of cortical resources extending beyond the auditory cortex. However, little is
known about the brain activity that underlies effortful listening in the complex environments of daily life.
We used the emerging technique of functional near-infrared spectroscopy (fNIRS) to examine brain activity
during listening to speech in naturalistic scenes.
Thirty normally-hearing participants listened to speech narratives of 4–5 minutes duration in
which the background auditory scene changed every 8–27 seconds. The scenes were generated from real-world binaural recordings, reproduced over earphones using virtual acoustics techniques. Participants
were instructed to listen attentively to the target speech while we non-invasively monitored their fronto-temporal brain activity using fNIRS. In a separate run, participants listened to another set of closely
matched narratives and rated perceived effort and intelligibility for each scene using visual analogue
scales.
Subjective ratings of effort and intelligibility were consistent across individuals (mean
pairwise r ≈ 0.8). Perceived effort was negatively correlated with perceived intelligibility (r ≈ –0.8),
although there were some scenes in which listening was reported to be moderately effortful despite
perceived intelligibility being near-perfect. After controlling for better-ear signal-to-noise ratio,
perceived effort was greater in scenes that contained competing speech than in those that did not,
potentially reflecting an additional cognitive cost of overcoming informational masking.
We analysed the fNIRS recordings using inter-subject correlation (ISC) analysis, a data-driven approach well-suited to analysing data collected under naturalistic conditions (Hasson et al., 2004). Statistically
significant ISC was observed not only in bilateral auditory cortices, but also in several prefrontal regions
involved in higher-order language processing, working memory and executive function. Our results
confirm that higher-order cognitive processes are engaged during attentive listening to speech in complex
real-world environments. However, further work is needed to elucidate the relationship between
perceived effort and patterns of brain activity in these extended cortical networks.
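
ISC itself is straightforward to compute; the hedged sketch below assumes the common leave-one-out form, in which each subject's channel time course is correlated with the mean time course of all remaining subjects (cf. Hasson et al., 2004), with invented array names.

```python
import numpy as np

def inter_subject_correlation(data):
    """Leave-one-out inter-subject correlation per fNIRS channel.

    data : array, shape (n_subjects, n_channels, n_samples)
    Returns (n_subjects, n_channels): each subject's channel time course
    correlated with the mean time course of all other subjects.
    """
    n_subj, n_chan, _ = data.shape
    isc = np.zeros((n_subj, n_chan))
    for s in range(n_subj):
        others = data[np.arange(n_subj) != s].mean(axis=0)   # mean of remaining subjects
        for ch in range(n_chan):
            isc[s, ch] = np.corrcoef(data[s, ch], others[ch])[0, 1]
    return isc

# toy check: 30 subjects, 4 channels; a shared signal in channel 0 only
rng = np.random.default_rng(0)
x = rng.standard_normal((30, 4, 500))
x[:, 0] += np.sin(np.linspace(0, 40, 500))          # common stimulus-driven component
print(inter_subject_correlation(x).mean(axis=0))    # channel 0 stands out
```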
Acknowledgements
Supported by the NIHR.
References
Hasson, U., Nir, Y., Levy, I., Fuhrmann, G. & Malach, R. 2004. Intersubject synchronization of cortical
activity during natural vision. Science, 303, 1634–1640.
O8. Selective auditory attention modulates the human brainstem's response to running speech
A.E. Forte, O. Etard and T. Reichenbach
Department of Bioengineering, Imperial College, London, SW7 2AZ, UK
It is known that the encoding of speech in the auditory cortex is modulated by selective attention.
However, it remains debated whether such modulation occurs already in subcortical auditory structures.
Measuring the auditory brainstem response normally requires a large number of repetitions of the same
short sound stimuli, which may lead to a loss of attention as well as to neural adaptation and can hence
hinder an assessment of attentional modulation.
We therefore sought to develop a method to measure the brainstem's response to natural
continuous speech that does not repeat. The brainstem indeed responds at the fundamental frequency of
voiced speech, which can be measured from the response to short speech segments or to monotone speech
(Skoe & Kraus, 2010; Reichenbach et al., 2016). Natural speech, however, has a fundamental frequency
that varies in time, which complicates a readout of the brainstem's response.
We employed empirical mode decomposition (EMD) of speech stimuli of several minutes in
duration to identify an empirical mode that oscillates at the fundamental frequency of the speech signal;
we refer to this mode as the 'fundamental mode' of the speech stimulus (Forte et al., 2017). EMD can
indeed extract non-linear, irregular oscillations and has recently been used to determine the time-varying
pitch of speech (Huang & Pan, 2006). We show that the correlation of this fundamental waveform with
the brainstem response yields a significant response at a latency of around 9 ms.
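
For illustration, a hedged sketch of this pipeline, assuming the PyEMD package (pip install EMD-signal) for the decomposition and a naive peak-frequency criterion for selecting the fundamental mode; the selection criterion and names are assumptions, not the authors' code.

```python
import numpy as np
from PyEMD import EMD  # assumed dependency: pip install EMD-signal

def fundamental_mode(speech, fs, f0_range=(80, 300)):
    """Return the intrinsic mode function whose peak frequency is F0-like."""
    for imf in EMD().emd(speech):
        spec = np.abs(np.fft.rfft(imf))
        f_peak = np.fft.rfftfreq(imf.size, 1 / fs)[spec.argmax()]
        if f0_range[0] <= f_peak <= f0_range[1]:
            return imf
    raise ValueError("no IMF with a peak in the F0 range")

def response_at_lags(fund, eeg, fs, max_lag_ms=20):
    """Correlate the fundamental waveform with the EEG at candidate latencies."""
    lags = range(int(max_lag_ms * fs / 1000))
    return [np.corrcoef(fund[: fund.size - lag], eeg[lag:])[0, 1] for lag in lags]

# usage sketch (speech and eeg: equal-length arrays sampled at fs):
#   fund = fundamental_mode(speech, fs)
#   r = response_at_lags(fund, eeg, fs)   # peak expected near ~9 ms
```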
We then employed this method to assess the brainstem's activity when a subject listens to one
of two competing speakers. Our results show that the human auditory brainstem response to continuous
speech is larger when attending than when ignoring a speech signal. The attentional modulation is
consistent across different subjects and speakers and evidences a role of the brainstem in selective
attention for analysing complex acoustic scenes.
Acknowledgements
Supported by EPSRC grant EP/M026728/1 to TR.
References
Forte, A.E., Etard, O. & Reichenbach, T. 2017. The human auditory brainstem response to running speech
reveals a subcortical mechanism for selective attention. eLife (in revision).
Huang, H. & Pan, J. 2006. Speech pitch determination based on Hilbert-Huang transform. Signal Process,
86(4), 792-803.
Reichenbach, C.S., Braiman, C., Schiff, N.D., Hudspeth, A.J. & Reichenbach, T. 2016. The auditory-brainstem response to continuous, non-repetitive speech is modulated by the speech envelope
and reflects speech processing. Front Comput Neurosci, 10, 47.
Skoe, E. & Kraus, N. 2010. Hearing it again and again: on-line subcortical plasticity in humans. PLoS
One, 5(10), e13645.
O9. Measuring auditory salience and distraction
S. Zhao1, M. Yoneya2, E. Benhamou1, L. Benjamin1, S. Furukawa2 and M. Chait1
1 Ear Institute, University College London, London, UK
2 NTT Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa, Japan
A key question in our pursuit to understand sensory processing within complex environments is how
perceptual encoding is affected by interference from distracting events. Whereas attention works to limit
our perceptual processing and behavioural responses to a subset of the available sensory input, distraction
designates the complementary ability to monitor the background for potentially relevant events outside
of the current focus of attention - a mechanism that may be useful if it allows critical events to penetrate
perception, and deleterious if it draws essential resources away from a task. Understanding this process
is a key step towards revealing the fundamental architecture of perception, and it also has immediate
applications in preventing distraction in mission- or life-critical situations, while maintaining the ability
to react to alarms.
I will present a series of recent and ongoing experiments from my laboratory through which
we seek to quantify distraction by brief acoustic signals and understand the conditions that support
successful resistance of distraction. The approach is based on a combination of behavioural methods (lab-based
as well as mass-participant online experiments) and physiological measures (changes in pupil
diameter and micro-saccades).
Acknowledgements
This work is supported by an EC Horizon 2020 grant (MC) and a UCL-NTT extended collaboration.
O10. Developmental effects of mild to moderate sensorineural hearing loss on the mismatch
negativity
A. Calcus, A. Campos, O. Tuomainen, S. Rosen, X. Wang and L. Halliday
Department of Speech, Hearing and Phonetic Sciences, University College London, London WC1N 1PF,
UK
This study examined the developmental effects of childhood mild to moderate sensorineural hearing loss
(MMHL) on auditory discrimination of speech, “speech-like” and “nonspeech” stimuli, using the
Mismatch Negativity (MMN). In a previous study, we tested 46 children with MMHL and 44 normally-hearing chronological age-matched controls (CA). Children were divided into two age groups: “younger”
(8-11 years; n = 24 MMHL, 24 CA) and “older” (12-16 years; n = 22 MMHL, 20 CA). All children were
monolingual British English native speakers and had normal nonverbal IQ. Children were presented with
a passive two-deviant oddball paradigm, using three different types of stimuli. In the nonspeech
condition, standards were 1-kHz pure tones, and deviants were pure tones frequency-modulated at a rate
of 40 Hz that varied in modulation depth. In the speech-like condition, standards were complex periodic
sounds and deviants were modulated in f0 at a rate of 4 Hz around a centre frequency of 100 Hz. In the
speech condition, standards were consonant-vowel /ba/ syllables, and deviants were taken from a /ba/-/da/ continuum. Large deviants elicited a larger MMN in controls than in children with MMHL. However,
there were no significant differences in MMN latency or amplitude between the younger MMHL group
and CA controls. Importantly, while present in younger children with MMHL, there was no significant
MMN in older MMHL children, whatever the condition. In an attempt to replicate this finding, fourteen
children from the initial younger group participated again in a longitudinal follow-up study 6 years later
(age range: 14-17 years). Although this group had a significant MMN when they were aged 8-11 years,
this was only the case for speech when they reached 14-17 years. Our findings are consistent with those
from profoundly deaf children with cochlear implants and animal studies, and suggest that even mild or
moderate levels of hearing loss during childhood may entail a persistent immaturity of auditory cortical
functioning.
Acknowledgments
This research was supported by an Economic and Social Research Council (ESRC) First Grants Award
(RES-061-25-0440) to LH and a European Union ITN grant (FP7-607139).
O11. The constraints of metabolic substrate and presynaptic ATP generation for fast synaptic
transmission at the calyx of Held
S.J. Lucas1, C.B. Michel2, M.H. Hennig3, B.P. Graham2 and I.D. Forsythe1
1 Department of Neuroscience, Psychology & Behaviour, University of Leicester, LE1 9HN, UK
2 Computing Science & Mathematics, Faculty of Natural Sciences, University of Stirling, Stirling, FK9 4LA, UK
3 Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, EH8 9AB, UK
The brain has high activity-dependent energy demands (Harris et al., 2012), and there is an established
relationship between increased brain activity, blood flow and cognition in which energy provision for
synaptic transmission is crucial. The auditory pathway has a particularly high metabolic demand
(Kennedy et al., 1978) suggesting local activity-dependent constraints on physiological transmission,
which could underlie impairment of auditory processing, pathology and/or neurodegeneration.
We have used high frequency synaptic stimulation at the mouse calyx of Held in an in vitro
brain slice preparation to monitor postsynaptic EPSCs as an index of transmission (Kopp-Scheinpflug et
al., 2011). Changes in energy substrates were achieved under saturated oxygen conditions and without
pharmacological block of respiration by washout of extracellular metabolic substrates (glucose and/or
lactate). Presynaptic and postsynaptic patch clamp recording showed that glucose depletion enhanced
EPSC depression during high frequency stimulation and slowed recovery of EPSC response amplitude.
Depletion of presynaptic ATP (achieved by diffusion from the patch pipette) also impaired transmitter
release in an analogous manner. Computational modelling of these experimental data demonstrated that
the impaired transmission was caused by a reduction in the number of functional release sites and slowed
vesicle pool replenishment, rather than a sustained change in release probability. Glucose was
demonstrated to be the primary presynaptic energy substrate; although lactate could be utilised (Magistretti
& Allaman, 2015), normal transmission did not require the astrocyte-neuron lactate shuttle (ANLS). Only
with extreme glucose deprivation was breakdown of glycogen to produce lactate found to contribute to
maintenance of synaptic transmission. We conclude that the ANLS made little contribution to synaptic
transmission when glucose was available. Presynaptic metabolism is a physiological constraint on
synaptic transmission, so that exploring these physiological limits will give insights into metabolic and
activity-dependent changes in auditory processing and contributions to central mechanisms of hearing
loss associated with disease and ageing.
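
This modelling conclusion can be illustrated with a textbook depletion model, in which EPSC amplitude is proportional to the number of functional release sites times release probability times pool occupancy, with first-order replenishment between pulses. The hedged sketch below uses illustrative parameter values, not the fitted ones.

```python
import numpy as np

def epsc_train(n_sites=600, p_rel=0.3, tau_rec=0.2, rate=100.0, n_pulses=100):
    """EPSC amplitudes during a high-frequency train (arbitrary units).

    Each pulse releases a fraction p_rel of the occupied pool across n_sites;
    occupancy recovers towards 1 with time constant tau_rec (s) during each
    interpulse interval of 1/rate seconds.
    """
    dt, occ, amps = 1.0 / rate, 1.0, []
    for _ in range(n_pulses):
        amps.append(n_sites * p_rel * occ)            # EPSC ~ N * p * occupancy
        occ *= 1 - p_rel                              # depletion by release
        occ = 1 - (1 - occ) * np.exp(-dt / tau_rec)   # first-order replenishment
    return np.array(amps)

control = epsc_train()
# glucose depletion modelled as fewer functional sites and slower replenishment:
depleted = epsc_train(n_sites=400, tau_rec=0.6)
print(control[-1] / control[0], depleted[-1] / depleted[0])  # deeper steady-state depression
```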
Acknowledgements
Supported by the BBSRC and the Rosetrees Trust.
References
Harris, J.J., Jolivet, R. & Attwell, D. 2012. Synaptic energy use and supply. Neuron, 75, 762-777.
Kennedy, C., Sakurada, O., Shinohara, M., Jehle, J. & Sokoloff, L. 1978. Local cerebral glucose
utilization in the normal conscious macaque monkey. Ann Neurol, 4, 293-301.
Kopp-Scheinpflug, C., Steinert, J.R. & Forsythe, I.D. 2011. Modulation & control of synaptic
transmission across the MNTB. Hear Res, 279, 22-31.
Magistretti, P.J. & Allaman, I. 2015. A cellular perspective on brain energy metabolism and functional
imaging. Neuron, 86, 883-901.
O12. Nitric oxide synthase expression and function in the central nucleus of the inferior colliculus
B.M.J. Olthof-Bakker, S.E. Gartside and A. Rees
Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
Neuronal nitric oxide synthase (nNOS) catalyses the synthesis of the neurotransmitter nitric oxide (NO).
NO modulates neural responses principally via soluble guanylate cyclase (sGC) (Garthwaite, 2008). The
expression of nNOS is well documented in neurons in the dorsal and lateral cortices of the inferior
colliculus (IC) (Coote & Rees, 2008). In these cells, nNOS is expressed diffusely throughout the cytoplasm.
However, we recently demonstrated the existence of a punctate expression of nNOS in neurons in the IC,
including in the central nucleus. We have also reported that neuronal responses to NMDA in the IC are
dependent on nNOS activity (Olthof et al., 2015). Here we used fluorescent immunohistochemistry to
determine the underlying protein interactions that explain the observed NO dependency of the NMDA
receptor (NMDA-R).
For anatomical studies, pigmented guinea pigs were given an overdose of pentobarbital and
transcardially perfused with 0.1 M PBS followed by 4% PFA. The brains were harvested, cryoprotected
in 30% sucrose and cut into 40 µm coronal sections in preparation for multi-label fluorescent
immunohistochemistry using antibodies against NMDA-R1, nNOS, sGC(α2) and PSD95.
For in vivo electrophysiology studies, pigmented guinea pigs were anaesthetized with urethane
(1 g/kg), fentanyl (0.3 mg/kg) and midazolam (5 mg/kg). Drugs targeting NO signalling were applied by
reverse dialysis via a microdialysis probe inserted in the IC. Neurons across the frequency laminae of the
central nucleus were recorded using a 32-channel single-shank electrode (Neuronexus) inserted rostral to
the dialysis probe. Sound stimuli were delivered using closed-field presentation.
High power confocal imaging following antibody labelling revealed that the nNOS puncta
were abundant in neurons throughout the IC including in the central nucleus and occurred in neurons
which did not display diffuse nNOS labelling. These nNOS puncta always co-labelled for PSD95,
NMDA-R1 and sGC(α2).
The direct perfusion of NMDA in the central nucleus caused a dose-dependent increase in
pure tone-evoked and spontaneous neuronal firing. The increased activity in response to NMDA was
largely abolished in the presence of the nNOS inhibitor L-mono-methyl arginine acetate, or 1H-[1,2,4]oxadiazolo[4,3-a]quinoxalin-1-one (ODQ), a drug that blocks the binding of NO to sGC.
Our results demonstrate that NO is an important modulator of glutamatergic NMDA-R
activity in the IC, and that this effect is mediated through the activation of sGC and subsequent elevation
of cGMP. Our anatomical data suggest that the functional dependence of NMDA-R activity on NO in the
central nucleus is brought about via a multi-protein signalling complex consisting of nNOS, NMDA-R
and sGC(α2) brought together by their interaction with PSD95.
Acknowledgements
Mark Wallace generously provided the tissue for the anatomical studies. Supported by the BBSRC.
References
Coote, E.J. & Rees, A. 2008. The distribution of nitric oxide synthase in the inferior colliculus of guinea
pig. Neurosci, 154, 218-225.
Garthwaite, J. 2008. Concepts of neural nitric oxide-mediated transmission. Eur J Neurosci, 27, 2783-2802.
Olthof, B.M.J., Gartside, S.E. & Rees, A. 2015. Presented at the BSA Basic Auditory Science Meeting,
Cardiff.
O13. Inner hair cell auditory neuropathy in Klhl18 mutant mice
N.J. Ingham1,2, N. Banafshe1, J. Crunden1, K. Boustani1, S. Pearson2, M. Lewis1,2, J.K. White2 and K.P.
Steel1,2
1 Wolfson Centre for Age-Related Diseases, King’s College London, London, SE1 1UL, UK
2 Wellcome Trust Sanger Institute, Hinxton, CB10 1SA, UK
Following a large-scale screen of new mouse mutants using Auditory Brainstem Response recording
(Sanger Institute Mouse Genetics Project), we discovered two mutant alleles of the Klhl18 gene,
Klhl18tm1a(KOMP)Wtsi and Klhl18lowf, with progressive hearing loss predominantly affecting low frequencies.
The first allele was a targeted mutation leading to knockdown of Klhl18 expression, and the second allele
was a spontaneous missense mutation of Klhl18 (9:110455454 C>A; V55F), predicted to have a
damaging effect on protein structure. Compound heterozygotes for the two alleles also showed low
frequency hearing impairment.
Mutants had normal middle and external ears and the gross structure of the inner ear was also
normal. Mutants had normal Auditory Brainstem Response (ABR) thresholds at 3 weeks old but showed
a progressive increase in ABR thresholds from 4 weeks onwards, affecting low frequencies first.
Distortion Product Otoacoustic Emission measurements up to 14 weeks old revealed normal
thresholds. These findings suggest that Klhl18 is not required for development of cochlear responses and
that outer hair cells function normally. The defect is likely to involve the inner hair cells or their synapses
with afferent spiral ganglion neurons.
Frequency Tuning Curves derived from ABR wave 1, using probe tones of 12 kHz and 24
kHz, suggest that tuning sharpness is more affected at lower frequencies than at high frequencies in the
mutant mice.
We have counted inner hair cell synapses using GluR2 and Ribeye immuno-labelling to label
postsynaptic densities and presynaptic ribbons. Klhl18 mutant mice appear to show fewer synapses, but
this difference was not statistically significant.
Preliminary observations from scanning electron micrographs of the organ of Corti suggest
that whilst outer hair cells have normal stereociliary bundles, inner hair cells in the apical turn showed
elongated and tapering stereocilia.
Our observations suggest that the progressive low frequency hearing impairment in Klhl18
mutant mice is an auditory neuropathy affecting the inner hair cells.
Acknowledgements
Supported by Wellcome Trust and MRC grants to KPS.
O14. Voice cues and speech perception in cochlear-implant users
Deniz Başkent1,2
1 University of Groningen, University Medical Center Groningen, Dept Otorhinolaryngology/Head and Neck Surgery, The Netherlands
2 University of Groningen, Research School of Behavioral and Cognitive Neurosciences, The Netherlands
Two main components of speech are voice (who said it) and linguistic content (what they said). Voice
information can directly inform about the speaker, such as their sex, age, socio-economic and
geographical background. Voice cues can also significantly contribute to communication, in conveying
vocal emotions and enhancing speech segregation and comprehension in cocktail-party listening. These
are situations where cochlear-implant (CI) listeners have most difficulties, yet most clinical tools used in
patient care are based on assessment of linguistic content with simple speech materials, such as words
and sentences, usually produced by a single talker, where voice plays no or little role.
The two voice cues, voice pitch (related to glottal pulse rate of the speaker and fundamental
frequency, F0) and vocal-tract length (VTL; related to the height of the speaker and formant frequencies),
are particularly effective for voice discrimination. Further, they seem to be related to voice advantages
observed for speech-on-speech perception. The reduced sound quality due to impoverished spectro-temporal details of the CI speech signal would also affect perception of the voice cues. Confirming this,
a significant amount of past research on voice perception by CI users has shown perception of F0 to be
poor. In contrast, minimal research has been done on perception of VTL.
Recently, we manipulated F0 and VTL separately on the word recordings from the same talker
and assessed gender categorization of the manipulated voices by CI users (Fuller et al., 2014). This way,
we were able to assess perception of F0 and VTL per se and in a systematic manner. Since the
manipulation was applied on the recordings from the same speaker, listeners could not rely on other
speaker-related characteristics, such as accent, pace, intonation, that may additionally help in identifying
speakers and their genders. Only with such a systematic approach, we observed that, while perception of
F0 was poor, as was reported before, this was still the main voice cue that CI users relied on. They were
not able to make use of VTL cues, however, which led to abnormal patterns in recognizing a speaker’s
gender, compared to normal hearing where both F0 and VTL are used to achieve this task. This finding
implied that voice perception in CI users is more complex than what was previously shown.
In our follow-up research, we now focus on perception of VTL to explore whether the deficit
comes from VTL-related cues not being delivered by the CI, or if these are delivered in a distorted way
and therefore ignored by the CI user (Gaudrain & Başkent, 2015; 2017). If it is the latter, CI users may
be able to learn to make use of these distorted cues with focused training. We further explore perception
of these cues and their connection to speech perception in various populations, such as persons implanted
early as children or later after a long duration of auditory deprivation. Since VTL perception relies on
spectral resolution, we explore new signal-processing techniques that may help improve perception of
this voice cue. With such a comprehensive approach, we aim to fully explore the limitations in voice
perception in CI users, and what potential solutions we can offer to overcome these limitations.
Acknowledgements
Rosalind Franklin Fellowship (University Medical Center Groningen, University of Groningen), and
VIDI and VICI Grants (ZonMw, Netherlands Organization for Scientific Research (NWO)).
References
Fuller, C.D., Gaudrain, E., Clarke, J.N., Galvin, J.J., Fu, Q.J., Free, R.H. & Başkent, D. 2014. Gender
categorization is abnormal in cochlear implant users. J Assoc Res Otolaryngol, 15(6), 1037-1048.
Gaudrain, E. & Başkent, D. 2015. Factors limiting vocal-tract length discrimination in cochlear implant
simulations. J Acoust Soc Am, 137(3), 1298-1308.
Gaudrain, E. & Başkent, D. 2017. Voice pitch and vocal tract-length discrimination in cochlear implant
users. Ear Hear, (In press).
O15. Effects of a potassium channel modulator and chronic stimulation on temporal processing by
cochlear implant listeners
R.P. Carlyon1, J.M. Deeks1, F. Guerit1, W. Lamping1, C. Large2 and P. Harris2
1 MRC Cognition & Brain Sciences Unit, University of Cambridge, CB2 7EF, UK
2 Autifony Therapeutics, Stevenage, SG1 2FX, UK
Temporal processing by cochlear implant (CI) listeners is degraded. When the pulse rate applied to a
single electrode is increased, pitch increases only up to an upper limit of about 300-500 pps, and, even at
lower pulse rates, the smallest detectable change is much larger than in normal hearing. Inferior
Colliculus (IC) recordings from animals show that a physiological correlate of the upper limit is reduced
by deprivation and increased by subsequent chronic stimulation. We describe the effects of two
interventions in human CI listeners. (i) The fast-acting Kv3.1 potassium channel is important for
sustained temporally accurate firing and is susceptible to deprivation, the effects of which can be partially
restored in animals by the molecule AUT00063. As part of a larger randomised placebo-controlled
double-blind study, we investigated the effects of AUT00063 on the upper limit of rate discrimination,
gap detection, and discrimination of low rates (centred on 120 pps) for monopolar pulse trains presented
to an apical electrode. The upper limit was measured using a pitch-ranking procedure; thresholds were
obtained for the other two measures using an adaptive procedure. Twelve CI users (MedEl and Cochlear)
were tested before and after two 28-day periods of AUT00063 or placebo in a within-subject crossover
study. All three measures showed high test-retest reliability, measured using the two baseline conditions,
with no significant difference between post-drug and post-placebo conditions. Observations from our
experience with this first-ever clinical trial of a drug aimed at improving hearing in CI users will be
presented. (ii) We tested the upper limit and low-rate discrimination for pulse trains applied to electrode
16 of the Cochlear device, in nine patients on the day of implant activation and two, six, and nine months
later. All stimuli were presented at Most Comfortable Level (MCL). Significant improvements were
observed between the switch-on and two-month sessions for both measures; the effect size was
significantly larger for the upper limit task. Part of this improvement may have been due to the increase
in MCL over time, but this could not account for all aspects of the results.
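
The adaptive procedure is not specified in the abstract; a common choice for such threshold measurements is a 2-down 1-up staircase, which converges near the 70.7%-correct point. The sketch below is a generic hedged illustration, not the study's code.

```python
import random

def staircase_2down1up(respond, start, step, n_reversals=8):
    """Generic 2-down 1-up adaptive staircase.

    respond(level) -> bool : runs one trial at 'level', True if correct.
    Returns the reversal levels; their mean estimates the threshold.
    """
    level, streak, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(level):
            streak += 1
            if streak == 2:                 # two correct in a row -> make task harder
                streak = 0
                if direction == +1:
                    reversals.append(level) # direction flipped: record a reversal
                direction = -1
                level -= step
        else:                               # one error -> make task easier
            streak = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return reversals

# simulated listener whose percent correct grows with the rate difference 'd'
revs = staircase_2down1up(lambda d: random.random() < min(0.99, 0.5 + d / 40),
                          start=20, step=2)
print(sum(revs) / len(revs))                # threshold estimate
```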
Acknowledgements
Supported by MRC and Autifony Therapeutics.
O16. Early phase trials of novel hearing therapies; challenges and opportunities
A.G.M. Schilder1, S. Saeed2, J. Palmer1, H. Blackshaw1, M. Topping1, T. Bibas3, D. Kikidis3 and S.
Wolpert4, on behalf of the REGAIN consortium
1 evidENT, Ear Institute, University College London, London, WC1X 8EE, UK
2 The Royal National Throat, Nose and Ear Hospital, London, WC1X 8DA, UK
3 Department of Otolaryngology, National and Kapodistrian University of Athens, Athens 11527, Greece
4 Hearing Research Center, University of Tübingen, D-72076 Tübingen, Germany
With biotech companies moving rapidly from discovery to development of drug, gene and cell therapies
for sensorineural hearing loss, and their investors keen to see a return on their investments, clinical trials
have become the decisive moment for success. This creates both opportunities and challenges for
clinicians and researchers in the hearing field, many of whom are new to this type of research. In this
presentation, I will share some of the dilemmas our international team faced while preparing for the
REGAIN trials.
Gamma-secretase inhibitors (GSIs) have been shown to induce transdifferentiation of
supporting cells into hair cells in acoustically deafened mice and restore hearing (Mizutari et al., 2013;
Bramhall et al., 2014). The REGAIN trials will translate these findings to humans and test the safety and
efficacy of a selected GSI in patients with adult-onset mild to moderate sensorineural hearing loss.
Preclinical work (candidate selection, GMP production, formulation, pharmacokinetics, safety and
toxicology) and regulatory submissions were completed in summer 2017. Following this, the phase 1
open label multi-ascending dose safety study was initiated in the UK; the phase 2 open-label efficacy
study is planned to be underway by the end of 2017 in the UK, Germany and Greece.
Novel therapies targeting the underlying biological causes of hearing loss in a safe way may
meet a need for millions of patients. Challenges in designing and delivering proof of concept include
selection of patients and outcome measures, governance and regulatory requirements and costs. With
patient safety and benefit at the heart of these new developments, it is vital that translational expertise is
sought and shared and essential quality criteria are agreed upon by hearing researchers.
Acknowledgements
This project receives funding from the EU Horizon 2020 Research and Innovation Programme (No
634893). The REGAIN consortium includes 7 dedicated partners, coordinated by Audion Therapeutics
and involving evidENT at the UCL Ear Institute, the Eberhard Karls University of Tübingen, National
and Kapodistrian University of Athens, Eli Lilly, Nordic Biosciences and ttopstart.
References
Bramhall, N.F., Shi, F., Arnold, K., Hochedlinger, K. & Edge, A.S. 2014. Lgr5-positive supporting cells
generate new hair cells in the postnatal cochlea. Stem Cell Reports, 2, 311-322.
Mizutari, K., Fujioka, M., Hosoya, M., Bramhall, N., Okano, H.J., Okano, H. & Edge, A.S.B. 2013.
Notch inhibition induces cochlear hair cell regeneration and recovery of hearing after acoustic
trauma. Neuron, 77, 58-69.
O17. Prognostic indicators for decrease in tinnitus severity after cervical physical therapy in
patients with somatic tinnitus
S. Michiels, P. Van de Heyning, E. Nieste and W. De Hertogh
Rehabilitation Sciences and Physiotherapy, ENT, Head and Neck Surgery and Communication Disorders,
Faculty of Medicine and Health Sciences, University of Antwerp, Campus Drie Eiken, Lokaal D.S.022, Universiteitsplein 1, 2610 Wilrijk, Belgium
Background/Aim: Tinnitus can be related to many different etiologies such as hearing loss or a noise
trauma, but it can also be related to the somatosensory system, in which case it is called somatic or
somatosensory tinnitus (ST). Recently, a positive effect of multi-modal cervical physical therapy on
tinnitus severity in patients with ST was demonstrated (Michiels et al., 2016). To date however, the
outcome of the intervention cannot be predicted. Therefore, this study aimed to identify prognostic
indicators for decrease in tinnitus severity after cervical physical therapy in patients with ST.
Methods: Patients with subjective tinnitus and neck complaints were recruited in a tertiary tinnitus clinic.
All patients received multimodal cervical physical therapy for 6 weeks. Tinnitus Functional Index (TFI)
and Neck Bournemouth Questionnaire (NBQ) scores were documented at baseline, after treatment and
after 6-weeks follow-up. Tinnitus analysis and impairments in cervical spine function were identified at
baseline and after follow-up. The relationship between TFI decrease after treatment and potential
prognostic indicators was evaluated and a multivariate model for the prediction of TFI decrease was
created.
Results: All patients (n=38) suffered from moderate to severe tinnitus at baseline, with an average TFI score of 49 (SD: 21) and an NBQ score of 33 points (SD: 12).
Co-variation between TFI and NBQ scores, meaning that tinnitus and neck complaints decrease or
increase together, was noted in 49% of patients. The presence of this co-variation, and a combination of
low-pitched (< 1000 Hz) tinnitus and increasing tinnitus during inadequate cervical spine postures, are
prognostic indicators for a decrease in TFI scores after cervical physical therapy (adjusted R² = 0.357).
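
For illustration, one standard way to build such a multivariate predictor is ordinary least squares over the two indicators named above; the hedged sketch below uses invented, hypothetical data and predictor names (the abstract reports only the adjusted R² of 0.357).

```python
import numpy as np
import statsmodels.api as sm  # assumed tooling, not the authors' software

# Hypothetical 0/1 predictors for 38 patients: co-variation of TFI and NBQ
# scores, and low-pitched tinnitus combined with posture-dependent increase.
rng = np.random.default_rng(1)
covariation = rng.integers(0, 2, 38)
low_pitch_posture = rng.integers(0, 2, 38)
tfi_decrease = 5 + 10 * covariation + 8 * low_pitch_posture + rng.normal(0, 6, 38)

X = sm.add_constant(np.column_stack([covariation, low_pitch_posture]))
fit = sm.OLS(tfi_decrease, X).fit()
print(fit.rsquared_adj)       # compare with the reported adjusted R^2 = 0.357
```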
Discussion: All patients meeting these criteria experienced substantial improvement of their tinnitus.
Although these conclusions are based on small numbers, the co-variation and low-pitched tinnitus with
an ‘increase of tinnitus during inadequate postures while resting, walking, working or sleeping’ seem to
be suitable criteria for referring patients for cervical spine treatment. Larger RCTs are, however, needed.
Conclusion: Patients who will experience a decrease in tinnitus annoyance from cervical physical
therapy are those with co-varying tinnitus and neck complaints, and those with a combination of low-pitched tinnitus and increasing tinnitus during inadequate cervical spine postures.
The results of this study have been accepted for publication in the journal Musculoskeletal Science and Practice.
Poster Presentations
P1. Generation of mouse models for human hearing loss using CRISPR/Cas9 gene editing
M.S. Hammett1, S. Newton1, A. Richardson1, S.W. Robinson2, D. Linley1 and I.D. Forsythe1,2
1 Department of Neuroscience, Psychology & Behaviour, University of Leicester, UK
2 MRC Toxicology Unit, Leicester, LE1 7RH, UK
We are investigating the molecular mechanisms underlying sensorineural hearing loss in the auditory
brainstem. Over 400 genetic syndromes are linked to hearing loss, but gene mutations may be pleiotropic,
with secondary deficits in hearing going unrecognised (White et al., 2013), since the genes involved may be
more or less expressed across large parts of the brain. For example, there are over 80 genes for potassium
channel subunits (Coetzee et al., 1999) with Kv1, Kv2 and Kv3 families having distinct roles in auditory
processing (Johnston et al., 2010).
Kv3 channels are widely expressed, mediating action potential repolarization in the superior
olivary complex (SOC) and are crucial for timing accuracy in binaural processing. Quantitative rtPCR
and immuno-labelling reveal that only Kv3.1 and Kv3.3 subunits are present, but knockout mice exhibit
little or no change in auditory brainstem responses up to one month old. Point mutations of Kv3.3 cause
Spinocerebellar Ataxia type-13 (SCA13) and compromised sound localization in humans (Middlebrooks
et al., 2013), so Kv3.3 is crucial for auditory processing. We have employed CRISPR gene editing to
generate a mouse possessing the R420H mutation. Cas9 protein and a suitable guide RNA, homologous
to Kv3.3 and containing the edited codon, were injected into 168 fertilized embryos, which were implanted
and yielded 24 live births. Gene sequencing revealed that 11 mice had InDels and 7 possessed the
recombination and gene edit. Comparison of Kv subunit immuno-localization in each of the auditory
brainstem nuclei for WT mice, the knockouts and the R420H edited strain will test the hypothesis that
Kv3.3 is required for fast binaural processing in the auditory brainstem.
Acknowledgements
Funded by the MRC, BBSRC and Rosetrees Trust.
References
Coetzee, W.A., Amarillo, Y., Chiu, J., Chow, A., Lau, D., McCormack, T., Moreno, H., Nadal, M.S.,
Ozaita, A., Pountney, D., Saganich, M., Vega-Saenz de Miera, E. & Rudy, B. 1999. Molecular
diversity of K+ channels. Ann N Y Acad Sci, 868, 233-285.
Johnston, J., Forsythe, I.D. & Kopp-Scheinpflug, C. 2010. Going native: voltage-gated potassium
channels controlling neuronal excitability. J Physiol, 588, 3187-3200.
Middlebrooks, J.C., Nick, H.S., Subramony, S.H., Advincula, J., Rosales, R.L., Lee, L.V., Ashizawa, T.
& Waters, M.F. 2013. Mutation in the Kv3.3 voltage-gated potassium channel causing
spinocerebellar ataxia 13 disrupts sound-localization mechanisms. PLoS One, 8, e76749.
White, J.K., Gerdin, A.K., Karp, N.A., Ryder, E., Buljan, M., Bussell, J.N., Salisbury, J., Clare, S.,
Ingham, N.J., Podrini, C. & Houghton, R. 2013. Genome-wide generation and systematic
phenotyping of knockout mice reveals new roles for many genes. Cell, 154, 452-464.
P2. Gene expression changes in the mouse spiral ganglion following noise induced hearing loss
S. Newton1, M. Mulheran2, B.D. Grubb1,3 and I.D. Forsythe1
1 Dept. Neuroscience, Psychology and Behaviour, University of Leicester, LE1 7RH, UK
2 Dept. Medicine & Social Care Education, University of Leicester, LE1 7RH, UK
3 Current address: Faculty of Health and Life Sciences, University of Liverpool, Liverpool, UK
Noise induced hearing loss (NIHL) is classically divided into permanent or temporary forms. Following
noise exposure, individuals will experience temporary threshold shifts (TTS) and elevated hearing
thresholds, which resolve over several days or weeks. Evidence suggests that long-term changes include
“silent damage”: a type of neuropathic damage with reduced numbers of hair cell synapses onto spiral
ganglion neuron (SGN) processes; additionally, slowly developing SGN death exacerbates
presbyacusis (Kujawa & Liberman, 2009; Jensen, et al., 2015). Previous studies of gene expression
changes following noise insult have used whole cochlea preparations that do not differentiate between
the changes in the different cochlear structures. We have used micro-dissection of the modiolus to focus
on the SGNs.
Female CBA/Ca mice aged P40 were exposed to 105 dB SPL broadband noise for 1.5hrs
under anaesthesia (age-matched controls were also anaesthetised and exposed to silence). Auditory
brainstem recordings (ABRs) taken before noise exposure and at the time of tissue collection, showed
the noise exposure produced an immediate threshold shift of 36±3 dB SPL (click) which recovered to
10±3 dB SPL 28 days later. Mice were divided into three groups and allowed to recover for 24hrs, 7d or
28d following exposure. After recovery, the modiolus was micro-dissected from both cochleae of each
animal. RNA-Sequencing was performed on the Illumina NextSeq 500 at the Deep Seq facility at the
University of Nottingham.
Initial analysis of the RNA-Sequencing data has revealed 421 differentially expressed genes
over this 28d post-exposure time period. Changes in 57 neuron-specific genes included upregulation of
P2X3, synaptoporin, peripherin, 5HTR3a and synaptotagmin I, and suppression of message for
synaptotagmin II and synaptic vesicle glycoprotein 2c. Gene Ontology analysis shows that the
acute phase inflammatory response persists over the 28d period, with chronic upregulation of
apolipoproteins (A1, C1 & H), serum amyloids (A1/A2), serum albumin, and complement C3.
Acknowledgements
Supported by The Physiological Society and Rosetrees Trust
References
Jensen, J.B., Lysaght, A.C., Liberman, M.C., Qvortrup, K. & Stankovic, K.M. 2015. Immediate and
delayed cochlear neuropathy after noise exposure in pubescent mice. PLoS One, 10(5),
e0125160.
Kujawa, S.G. & Liberman, M.C. 2009. Adding insult to injury: cochlear nerve degeneration after
“temporary” noise-induced hearing loss. J Neurosci, 29, 14077-14085.
P3. Cochlear outer hair cells regulate their volume using prestin/SLC26A5 as a water pump
L. Kukovska, N.Q. Chui and J.F. Ashmore
UCL Ear Institute, University College London, London, WC1X 8EE, UK
There are now many lines of evidence that implicate outer hair cells as responsible for sound
amplification in the inner ear. The cells do so by using a protein, ‘prestin’, to generate forces that control
cochlear mechanics (reviewed in Ashmore, 2008). The puzzle has been that prestin, described as a ‘motor
protein’, is a member of a family, SLC26, of chloride-bicarbonate transporters. Although chloride plays
a significant role in the generation of OHC forces, it is not known in detail which conformational changes
in prestin protein determine the forces produced by the cells. Electron microscopy (Kalinec et al., 1992)
and single molecule photo-bleaching studies (Hallworth & Nichols, 2012) show that mammalian prestin
inserts itself into the cell membrane as a tetramer. Such data may explain how prestin can act as a sugar
transporter (Chambard & Ashmore, 2003), but open the possibility that the structure is also capable of
regulating water flux in and out of the cell, albeit on a time scale slow compared to electromotility.
To investigate this possibility we expressed prestin in cultured CHO cells and superfused the
cells with a step to low (2 mM) external Cl- and 23 mM HCO3-, conditions favouring electrogenic antiport
exchange by prestin (Mistrik et al., 2012). The diameter of the cells was determined by co-expressing
GFP and recording the fluorescent images. Under these conditions the cell volume swells by a maximum
of 6%.
To determine whether such behaviour also occurs in murine OHCs, we measured cell
diameters at the level of the cell nucleus. Mouse temporal bones were isolated according to UK Home
Office regulations. Using confocal microscopy, cell boundaries were identified using the fluorescent dye
FM1-43 in the bath. Cells swelled in low external Cl- in the presence of external HEPES buffer, an effect reduced
by using physiological levels of HCO3-. A significant shrinkage was observed when the exchange
experiment was carried out in the presence of 10 mM salicylate. Combined with nonlinear capacitance
data, the results are consistent with transport of in excess of 1000 water molecules per prestin molecule during
each cycle of chloride-bicarbonate exchange.
References
Ashmore, J.F. 2008. Outer hair cell motility. Physiol Revs, 88, 173-210.
Chambard, J. & Ashmore, J.F. 2003. Sugar transport by mammalian members of the SLC26 superfamily
of anion-bicarbonate exchangers. J Physiol, 550(3), 667-677.
Hallworth R. & Nichols M.G. 2012. Prestin in HEK cells is an obligate tetramer. J Neurophysiol, 107(1),
5-11.
Kalinec, F., Holley, M.C., Iwasa, K.H., Lim, D.J. & Kachar B. 1992. A membrane-based force generation
mechanism in auditory sensory cells. Proc Natl Acad Sci USA, 89(18), 8671-8675.
Mistrik, P., Daudet, N., Morandell, K. & Ashmore, J.F. 2012. Mammalian prestin is a weak Cl-/HCO3- electrogenic antiporter. J Physiol, 590, 5597-5610.
P4. Investigating the specificity of a KV2 potassium channel antagonist in the auditory pathway
M.S.Y. Leung, B.M. Pigott and I.D. Forsythe
Department of Neuroscience, Psychology & Behaviour, University of Leicester, LE1 7RH, UK
The KV2 potassium channel family, consisting of KV2.1 and KV2.2 subunits, is widely expressed in the
mammalian brain and are important regulators of neuronal excitability. They play a role in a number of
physiological processes along the auditory pathway, including sound localisation by the auditory
brainstem and regulating neuronal excitability in the neocortex. Lack of a specific blocker has hindered
research on these channels, but the recently identified antagonist RY785 could provide a new tool for
their study (Herrington et al., 2011).
The aim of this project was to characterise the action of RY785 on potassium currents in two
auditory areas known to contain KV2: KV2.2 in the medial nucleus of the trapezoid body (MNTB), and
KV2.1 in the auditory neocortex (Johnston et al., 2008; Guan et al., 2007). These experiments were
conducted using in vitro brain slices from humanely killed CBA/Ca mice.
Whole-cell patch clamp recordings were made from the MNTB and from neocortical layer
II/III pyramidal neurons. Under voltage-clamp recording conditions, RY785 significantly reduced
currents in the MNTB, and inhibited a similar current in the neocortex to that blocked by KV2 gating
modifiers. Under current clamp, RY785 inhibited sustained action potential firing in the neocortex in
response to large current injections and reduced the afterhyperpolarisation. This is consistent with a role
for KV2 in maintaining repetitive firing by assisting voltage-gated sodium channel recovery from
inactivation.
These experiments show that RY785 blocked currents with properties consistent with KV2
recorded from neocortical neurons. This is a first step in validating RY785 as a specific antagonist of
these channels, and as a tool to test the role of KV2 channels in auditory processing.
Acknowledgements
This project was funded by Leicester Medical School.
References
Guan, D., Tkatch, T., Surmeier, D.J., Armstrong, W.E. & Foehring, R.C. 2007. KV2 subunits underlie
slowly inactivating potassium current in rat neocortical pyramidal neurons. J Physiol, 581,
941-960.
Herrington, J., Solly, K., Ratliff, K.S., Li, N., Zhou, Y.P., Howard, A., Kiss, L., Garcia, M.L., McManus,
O.B., Deng, Q., Desai, R., Xiong, Y. & Kaczorowski, G.J. 2011. Identification of novel and
selective KV2 channel inhibitors. Mol Pharmacol, 80, 959-964.
Johnston, J., Griffin, S.J., Baker, C., Skrzypiec, A., Chernova, T. & Forsythe, I.D. 2008. Initial segment
KV2.2 channels mediate a slow delayed rectifier and maintain high frequency action potential
firing in medial nucleus of the trapezoid body neurons. J Physiol, 586, 3493-3509.
P5. Effects of a cannabinoid agonist in an awake animal model of tinnitus
J.I. Berger1, B. Coomber1, S. Hill1, A. Hockley1, W. Owen1, S.P.H. Alexander2, A.R. Palmer1 and M.N.
Wallace1
1 MRC Institute of Hearing Research, School of Medicine, University of Nottingham, UK
2 School of Life Sciences, University of Nottingham, UK
Animal models of tinnitus have revealed long-term hyperexcitability and altered neural synchrony,
thought to arise from pathology affecting the balance between excitation and inhibition in the auditory
system. This balance is regulated by neuromodulators, such as endogenous cannabinoids
(endocannabinoids). Cannabinoid drugs are potent anti-nociceptive agents in models of chronic
neuropathic pain (Paszcuk et al., 2011), a condition that shares substantial parallels with tinnitus, i.e.
phantom sensory percept in the absence of sensory input, initiated peripherally through deafferentation
and subsequently involving central mechanisms. We therefore sought to determine whether the highly selective CB1 agonist arachidonyl-2’-chloroethylamide (ACEA) could abolish putative mechanisms of
tinnitus.
In the first experiment, guinea pigs (GPs) were implanted with electrocorticography (ECoG)
multi-electrode assemblies. Following baseline data collection, GPs were given intraperitoneal injections
of either (1) sodium salicylate in order to induce tinnitus (350 mg/kg; n = 8), (2) salicylate co-administered with ACEA (1 mg/kg; n = 5), or (3) ACEA alone (1 mg/kg; n = 4). Resting-state and
auditory-evoked neural activity recorded in awake GPs was compared between groups. Hearing status
was assessed using the auditory brainstem response (ABR).
Cluster-based permutation analysis indicated that salicylate altered resting-state activity,
specifically by reducing alpha band activity (6-10 Hz) in cortical oscillations. Auditory-evoked responses
were also enhanced (by 79-145%), indicative of an oversensitivity to sound (hyperacusis), whilst
wave I ABR amplitudes were significantly decreased at 20 kHz. Co-administration of ACEA still resulted
in slight reductions in ABR amplitudes, but these were no longer significant. Decreases in oscillatory
activity at 6-10 Hz were no longer evident, although enhanced cortical potentials were still present
(by 61-159%). Administration of ACEA alone did not significantly affect oscillatory activity or
wave I amplitudes, but did partially increase cortical evoked potentials.
In a second experiment, we co-administered salicylate with ACEA as before (1 mg/kg; n =
3), but using non-implanted GPs, and tested for behavioural evidence of tinnitus using the gap detection
test (Berger et al., 2013). A significant deficit in behaviour was evident, consistent with the presence of
tinnitus, thereby demonstrating that ACEA failed to prevent the onset of tinnitus. These data suggest that
while ACEA may be potentially otoprotective, cannabinoid agonists are not effective in diminishing
evidence of tinnitus or hyperacusis.
Acknowledgements
BC and WO were supported in part by Action on Hearing Loss (International Project Grant G62).
References
Berger, J.I., Coomber, B., Shackleton, T.M., Palmer, A.R. & Wallace, M.N. 2013. A novel behavioural
approach to detecting tinnitus in the guinea pig. J Neurosci Methods, 213, 188-195.
Paszcuk, A.F., Dutra, R.C., da Silva, K.A.B.S., Quintao, N.L.M., Campos, M.M. & Calixto, J.B. 2011.
Cannabinoid agonists inhibit neuropathic pain induced by brachial plexus avulsion in mice by
affecting glial cells and MAP kinases. PLoS One, 6, e24034.
P6. Evidence that the tinnitus-inducing agent salicylate has a direct effect on neural activity in the
inferior colliculus
B.M.J. Olthof-Bakker, D. Lyzwa, S.E. Gartside and A. Rees
Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
High doses of sodium salicylate induce tinnitus in humans and animals. Salicylate reportedly increases
spontaneous firing rates in neurons in the inferior colliculus (IC), although it is unclear whether these
effects are mediated up-stream of the IC, or directly within the IC. We addressed this question by
comparing the effects of systemic and locally applied salicylate on IC neuronal activity.
Pigmented guinea pigs (350-650 g) were anaesthetised (urethane 1 g/kg i.p., fentanyl 0.3
mg/kg i.p. and midazolam 5 mg/kg i.m.), and a microdialysis probe and a single-shank 32-channel recording
electrode were implanted in the right IC. Spontaneous and sound-evoked multiunit activity was recorded
at sites through the IC at baseline and after salicylate administration. In 7 animals the microdialysis probe
was perfused with artificial cerebrospinal fluid throughout and salicylate was administered systemically
(200 mg/kg i.p.). In the other 3 animals the microdialysis probe was perfused with artificial cerebrospinal
fluid followed by salicylate at increasing concentrations (0.1 mM (2 h), 1 mM (2 h) and 10 mM (1 h)).
Following systemic injection of salicylate, spontaneous activity remained unchanged for the
first three hours, but then increased rapidly to more than double baseline by six hours after salicylate administration.
In contrast, local delivery of salicylate into the IC reduced spontaneous activity in a time- and concentration-dependent manner.
Frequency response areas (FRAs) recorded following systemic salicylate showed a reduction
in driven multi-unit activity on the low frequency side of the CF during the first two hours. Later, an
increase in driven activity was observed across the entire FRA. These changes resulted in an increase of
the compound CF. Local administration of salicylate also increased driven activity across the FRA. Again
this resulted in an increase in the measured CF. It was notable that neither mode of application of
salicylate altered the minimum threshold for driven responses.
These results confirm previous findings of increased spontaneous rate and sound-driven
activity in response to systemic salicylate. Since salicylate applied locally in the IC decreased rather than
increased spontaneous activity, we conclude that the elevation of spontaneous firing by salicylate is
mediated outside the IC. Similarly, the initial effect of systemic salicylate in reducing sound-driven
activity in a frequency-selective manner was not mirrored by locally applied salicylate and hence likely
originates elsewhere. However, the finding that both locally applied and systemic salicylate raised the
CF and increased sound driven activity, suggests that these effects are, at least in part, mediated directly
within the IC.
Acknowledgements
Supported by Action on Hearing Loss.
P7. Serotonin transporter-positive release sites in the rat inferior colliculus: evidence for triad
synapses
S.E. Gartside, B.M.J. Olthof-Bakker and A. Lister
Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
The inferior colliculus (IC) is an integrative auditory centre which receives dense serotonergic (5-HT)
innervation. Serotonin modifies the response of IC neurones to auditory stimuli and may play a role in
hearing loss and tinnitus.
Serotonin has been shown to modulate both inhibitory and excitatory synaptic transmission
in the IC (Zhou & Hablitz, 1999) and can act as a synaptic as well as an extra-synaptic signaller. Serotonin
enhances GABA IPSCs (Obara et al., 2014) and it seems likely that it may also modulate glutamatergic
transmission. Recently, in other brain areas, serotonergic terminals have been shown to form triads with
excitatory and inhibitory synapses providing a mechanism by which serotonin could modulate
neurotransmission. Here we used fluorescence immunohistochemistry to determine whether serotonin
boutons form synaptic triads in the IC.
Male Lister hooded rats, (250-350 g) were overdosed with pentobarbital and transcardially
perfused with heparinized 0.1M PBS and 4% paraformaldehyde. Brains were harvested, cryoprotected
and cut in 40 µm coronal sections. Immunohistochemical labelling was performed using rabbit anti-5-HTT (Calbiochem), mouse anti-synaptophysin (Sigma) and goat anti-PSD-95 (Abcam) primary
antibodies.
Using a Nikon A1+ point scanning confocal microscope equipped with four solid state lasers
(405, 488, 561 and 647 nm), we acquired two z-stacks (0.3 µm step size, 2.4 µs/pixel) per section from
the dorsal cortex, lateral cortex and central nucleus.
Using Imaris™ software, the stacks were deconvolved and 5-HTT labelled fibres
reconstructed and masked. The masked fibres were considered to be serotonergic fibres. Distances
between PSD95 and synaptophysin puncta were determined. A distance of 0.6 µm between
synaptophysin and PSD95 puncta was considered to be indicative of a glutamatergic synapse.
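As an illustration of this distance criterion, a minimal sketch follows. This is not the authors' pipeline (the study used Imaris); the punctum coordinates below are invented, and the nearest-neighbour search is one straightforward way to apply the 0.6 µm rule.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical punctum centroids in µm (x, y, z); in the study these
# would come from the deconvolved, masked confocal stacks.
psd95 = np.array([[1.2, 3.4, 0.9], [5.0, 2.2, 1.5], [7.8, 6.1, 2.0]])
synaptophysin = np.array([[1.5, 3.6, 1.0], [9.0, 9.0, 3.0]])

# Distance from each PSD95 punctum to its nearest synaptophysin punctum.
dist, idx = cKDTree(synaptophysin).query(psd95)

# Pairs within 0.6 µm are taken as putative glutamatergic synapses.
is_synapse = dist <= 0.6
print(list(zip(dist.round(2), is_synapse)))
```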
5-HTT immunoreactivity was present throughout the IC in the form of beaded fibres, with
highest density in the dorsal and lateral cortices. Synaptophysin labelled puncta were abundant in the IC
and were located both within and outside the serotonin fibres; PSD-95 labelled puncta were less abundant
and always outside the serotonin fibres.
PSD95 labelled puncta were found in close apposition to serotonin fibres in three distinct
arrangements: (i) with no associated synaptophysin; (ii) closely apposed to a synaptophysin labelled
punctum within the serotonin fibres; (iii) closely apposed to synaptophysin labelled puncta both within
and outside of the serotonin fibres.
References
Obara, N., Kamiya, H. & Fukuda, S. 2014. Serotonergic modulation of inhibitory synaptic transmission
in mouse inferior colliculus. Biomed Res (Japan), 35(1), 81-84.
Zhou, F.M. & Hablitz, J.J. 1999. Activation of serotonin receptors modulates synaptic transmission in rat
cerebral cortex. J Neurophysiol, 82(6), 2989-2999.
P8. Nitrergic modulation in the ventral cochlear nucleus and its changing role in tinnitus
A. Hockley1,2, J.I. Berger1, S.M.D. Hill1, P.A. Smith2, A.R. Palmer1 and M.N. Wallace1
1 MRC Institute of Hearing Research, University of Nottingham, Nottingham, UK
2 School of Life Sciences, University of Nottingham, Nottingham, UK
Tinnitus chronically affects an estimated 10-15% of adults and is characterised by the perception of sound
independent of external stimuli. Nitric oxide synthase (NOS) expression has been studied in guinea pig
ventral cochlear nucleus (VCN) where it is located in a sub-population of each cell type. Following
unilateral acoustic over-exposure, a within-animal asymmetry of NOS expression was found exclusively
in animals that developed tinnitus (Coomber et al., 2015). The decrease in NOS expression in the
contralateral VCN was observed as soon as 1 day after acoustic over-exposure, and the asymmetry in
NOS expression was strongest at eight weeks after noise exposure. This provided evidence for a role of
nitric oxide (NO) in tinnitus, and not simply as a biomarker for hearing loss.
Here, we describe the use of iontophoresis to apply the NOS inhibitor L-NG-Nitroarginine
methyl ester (L-NAME) and the NO donor 3-Morpholinosydnonimine hydrochloride (SIN-1) to units
within the VCN of the anaesthetised guinea pig. Upon isolation and characterisation of a single unit,
hour-long, pure tone pulse-trains were presented at the characteristic frequency (200 ms tone pip, 800 ms
silence, 3600 repeats). Spontaneous and auditory-driven spike rates were recorded over the hour while
drugs were applied iontophoretically.
Noise exposure resulted in a greater proportion of neurons changing their spontaneous firing rate
following application of the NO donor SIN-1 (an increase from 4% to 37%). This suggests that noise
exposure may increase the sensitivity of units in the VCN to NO. When blocking NO production with L-NAME, the proportion of neurons decreasing their auditory-driven firing rate increased (from 2% to
37%) solely in the animals that developed behavioural evidence of tinnitus. Therefore, in tinnitus animals
endogenous NO may be increasing excitability in a larger proportion of neurons, potentially producing
an increase in transmission through the auditory system with potential to contribute to the ‘increased
central gain’ thought to be present in tinnitus.
References
Coomber, B., Kowalkowski, V.L., Berger, J.I., Palmer, A.R. & Wallace, M.N. 2015. Modulating central
gain in tinnitus: Changes in nitric oxide synthase in the ventral cochlear nucleus. Front Neurol,
6, 1-12.
P9. Scaffolding protein PSD95 co-localises with neuronal nitric oxide synthase, NMDA receptors
and soluble guanylyl cyclase in the rat inferior colliculus
H.V. Maxwell1, B.M.J. Olthof1, C.H. Large2, S.E. Gartside1 and A. Rees1
1 Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
2 Autifony Therapeutics Ltd, Stevenage Bioscience Catalyst, Stevenage, SG1 2FX, UK
Glutamatergic activation of NMDA receptors (NMDA-Rs) induces an influx of calcium and triggers a
post-synaptic signalling cascade that begins with the activation of neuronal nitric oxide synthase (nNOS) and
the production of nitric oxide (NO). Downstream NO signalling results in a multitude of effects, such as
altered synaptic plasticity and intrinsic neuronal excitability. In the hippocampus, cerebellum and
cerebral cortex, PSD95 - a post synaptic density protein - provides a scaffold for nNOS and NMDA-R
enabling the proteins to stay in close enough proximity for calcium signalling to be effective. PSD95
contains three binding domains: NMDA-R and nNOS have preferences for the first and second domains
respectively, whereas soluble guanylyl cyclase (sGC), the enzyme responsible for catalysing the
formation of cGMP and the primary target of nitric oxide (NO), preferentially binds to the third domain
(Kornau et al., 1995; Brenman et al., 1996; Russwurm et al., 2001). To determine the involvement of
PSD95 in supporting the NMDA-R post-synaptic signalling pathway in the inferior colliculus of rat, we
investigated whether these signalling cascade proteins are co-localised.
Long Evans rats were deeply anaesthetised with sodium pentobarbital and perfused with 4%
paraformaldehyde. The brains were cryoprotected with 30% sucrose before cutting 30 µm coronal
sections using a microtome. Primary antibodies targeting PSD95, nNOS, sGCα2 and GluN1 were applied
in triplets to free floating sections. Primary antibodies were detected using secondary antibodies with
fluorophores attached. Fluorescent labelling was visualised with a Nikon A1 confocal microscope.
Cells with diffuse nNOS labelling, visible under low power, were most apparent in the cortices of the IC.
However, under high power, cells expressing nNOS in punctate form were found throughout the central
nucleus. Cells labelling with GluN1, sGC and PSD95 were present throughout the IC.
The punctate labelling for nNOS observed in the central nucleus was co-localised with
PSD95, GluN1 and sGC. Such puncta were distributed predominantly over the soma. In some cortical
neurons, nNOS and sGC expression was distributed more widely within the cells although puncta with
GluN1 and PSD95 were also present. In contrast, other neurons with diffuse nNOS labelling in the
cortices did not express PSD95 or GluN1.
These results indicate that the PSD95 scaffolding construct is present in the IC, although
nNOS and sGC can also be found in the absence of PSD95 and GluN1.
Acknowledgements
H.M. is supported by a CASE Studentship from the MRC and Autifony Therapeutics Ltd.
References
Brenman, J.E., Chao, D.S., Gee, S.H., McGee, A.W., Craven, S.E., Santillano, D.R., Wu, Z., Huang, F.,
Xia, H., Peters, M.F., Froehner, S.C. & Bredt, D.S. 1996. Interaction of nitric oxide synthase
with the postsynaptic density protein psd-95 and α1-syntrophin mediated by pdz domains.
Cell, 84, 757-767.
Kornau, H.C., Schenker, L.T., Kennedy, M.B. & Seeburg, P.H. 1995. Domain interaction between
NMDA receptor subunits and the postsynaptic density protein PSD-95. Science, 269, 1737-1740.
Russwurm, M., Wittau, N. & Koesling, D. 2001. Guanylyl Cyclase/PSD-95 Interaction: Targeting of the
nitric oxide-sensitive α2β1 guanylyl cyclase to synaptic membranes. J Biol Chem, 276, 44647-44652.
P10. Measurement of acoustic and electric biasing of electrophonic response in the guinea-pig
inferior colliculus
A. Fráter1, S.K. Riis2, P. Maas2 and T. Marquardt1
1 Ear Institute, University College London, London, WC1X 8EE, UK
2 Oticon Medical, Kongebakken 9, 2765 Smørum, Denmark
In recent years great effort has been made to improve surgical techniques of cochlear implantation to preserve
the residual hearing of patients. Combined electric and acoustic hearing (EAS) is known to enhance the
listening experience of cochlear implant (CI) users by providing acoustic low-frequency information in
addition to the electric stimulation. However, besides directly stimulating auditory nerve fibres, electrical
currents also evoke an electro-motile response in the functioning cochlea (the electrophonic effect). This
mechanical excitation results in an acoustic-like response (Nuttall & Ren, 1995) that travels to the low-frequency region of the cochlea and might interfere with the residual acoustic hearing (Sato et al., 2016).
We are investigating the effect of an additional acoustic tone or electric DC current on the electrophonic
response using a guinea pig model.
Neural responses are recorded by a 32-channel electrode array along the tonotopic axis of the
inferior colliculus and acoustic responses in the ear canal (EEOAE) are monitored by an otoacoustic
emission probe. Short electrical sinusoid stimulation of 8 ms duration is presented via a CI inserted into
the scala tympani. In one experiment, 100 ms long acoustic biasing tones with various levels and
frequencies corresponding to the characteristic frequency of the direct electric stimulation place are
presented simultaneously with the electric sinusoids. In a second experiment, 100 ms long biasing DC
current signals with positive and negative polarities and various levels are delivered to the cochlea
additionally to the electric sinusoids.
First results using a 1600-Hz electric sinusoid show no alteration of the electrophonic response
by either the biasing acoustic tones or the DC currents. On the other hand, enhanced neural activity
is observed along the IC’s tonotopy at loci corresponding to approximately 3200 Hz and 6400 Hz.
EEOAE signals are also detected and are currently being analysed.
EEOAE might prove useful as a non-invasive diagnostic tool for characterising remaining
electro-motility and electric-acoustic interaction in CI patients. Future efforts will attempt to find
electrical pulse shapes that minimise the disruptive effects of electro-motility on the low-frequency
acoustical signal in combined hearing.
Acknowledgements
A.F. is supported by a UCL Grand Challenge studentship, supplemented by Oticon Medical.
References
Nuttall, A. & Ren, T. 1995. Electromotile hearing: evidence from basilar membrane motion and
otoacoustic emissions. Hear Res, 92, 170-177.
Sato, M., Baumhoff, P. & Kral, A. 2016. Cochlear implant stimulation of a hearing ear generates separate
electrophonic and electroneural responses. J Neurosci, 36(1), 54-64.
P11. Selective attention in a spectro-temporally complex auditory scene: a ferret cocktail party
J. Sollini and J. Bizley
UCL Ear Institute, University College London, London, WC1X 8EE, UK
A key function of the auditory system is the segregation of competing sounds. An example of this is the
Cocktail Party phenomenon, where a listener is able to attend to one talker in the presence of one or more
competing talkers. Animal behavioural models have been employed to investigate the cocktail party
phenomenon, though these models generally focus on conspecific vocalizations rather than human
speech. Here we tested the feasibility of using an animal model in a selective attention task using human
speech, while making neural recordings from auditory cortex.
Ferrets (n = 4) were trained to respond to a target word embedded within a stream of distractor
words; all performed significantly above chance (p<0.05, bootstrap, chance = ~26%, average = ~63%
correct). We are currently testing whether target word detection generalises across talkers, to ensure
word detection is pitch invariant. Animals (n=2, thus far) were trained to selectively attend to one of two
competing streams and again respond to the targets embedded within the attended stream. Chronic
recording electrodes were implanted bilaterally in auditory cortex and neural responses were measured
during behavioural testing. As would be expected, the stream-onset responses were strongest in the
contralateral hemisphere and were larger in situations where there was no simultaneous distractor stream.
Preliminary results demonstrate the feasibility of this behavioural model potentially allowing closer
inspection of the neural mechanisms that underpin the cocktail party phenomenon.
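A minimal sketch of the kind of bootstrap test reported above follows; the trial outcomes and chance level are placeholders, not the ferrets' actual data, and this is just one common way to set up such a test.

```python
import numpy as np

# Simulated trial outcomes for one animal: 1 = correct response.
rng = np.random.default_rng(1)
trials = rng.binomial(1, 0.63, size=200)  # placeholder, ~63% correct
chance = 0.26                             # approximate chance level

# Bootstrap the proportion correct by resampling trials with replacement.
boot = np.array([rng.choice(trials, size=trials.size, replace=True).mean()
                 for _ in range(10000)])

# One-sided p-value: how often does resampled performance fall at or
# below the chance level?
p = np.mean(boot <= chance)
print(f"proportion correct = {trials.mean():.2f}, bootstrap p = {p:.4f}")
```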
Acknowledgements
Supported by BBSRC grant awarded to JB.
P12. Does adenosine monophosphate-activated protein kinase (AMPK) modulate neuronal excitability in the
auditory brainstem?
G.Yan1,2, S.J. Lucas2 and I.D. Forsythe2
1 Leicester Medical School, Leicester, LE1 7HA, UK
2 Department of Neuroscience, Psychology and Behaviour, University of Leicester, LE1 7RH, UK
The enzyme 5’-adenosine monophosphate-activated protein kinase (AMPK) is ubiquitously expressed
and functions to monitor cellular energy status. Activated by AMP, which rises as ATP falls, AMPK
regulates multiple biochemical pathways to maintain energy homeostasis. AMPK is activated in cerebral
ischaemia and promotes neuronal survival in vitro (Culmsee et al., 2001); thus, it is a potential therapeutic
target in stroke. In addition, four of the five brain regions with the highest metabolic rates are auditory;
hence, AMPK might be involved in regulating auditory transmission. The effects of AMPK activation in
the brainstem are unknown. Given that action potential (AP) firing and synaptic activity account for a
significant proportion of a neuron’s energy expenditure (Khatri & Man, 2013), this project tested the
hypothesis that AMPK modulates neuronal excitability to reduce ATP consumption.
Whole-cell patch clamp experiments were performed from principal neurons in the medial
nucleus of the trapezoid body (MNTB). These neurons have a high metabolic demand due to their high-frequency AP firing properties, and their synapse, the calyx of Held, is one of the best defined in
electrophysiology. Pharmacological manipulation of AMPK was employed to identify potential roles of
activated AMPK in neuronal excitability.
This project found that activation of AMPK produced a small potentiation of voltage-gated K+ current in response to high voltage commands, but no consequential effect on single AP firing
was observed. Activated AMPK did not affect voltage-gated Na+ current. Changes in ATP, independent
of AMPK, did not affect voltage-gated K+ or Na+ currents. Further work will be conducted by the lab to
identify potential effects of AMPK on multiple AP-firing and synaptic transmission.
Acknowledgements
This project was funded by Leicester Medical School. GY also received a bursary stipend from the
Wolfson Foundation (Intercalated Award, administered by the Royal College of Physicians).
References
Culmsee, C., Monnig, J., Kemp, B.E. & Mattson, M.P. 2001. AMP-activated protein kinase is highly
expressed in neurons in the developing rat brain and promotes neuronal survival following
glucose deprivation. J Mol Neurosci, 17, 45-58.
Khatri, N. & Man, H. 2013. Synaptic activity and bioenergy homeostasis: Implications in brain trauma
and neurodegenerative diseases. Front Neurol, 4, 199.
P13. Optimizing auditory brainstem response (ABR) recording for frequency-specific
measurement of the wave I/V ratio
A. Hardy, J. DeBoer and K. Krumbholz
Medical Research Council Institute of Hearing Research, School of Medicine, The University of
Nottingham, University Park, Nottingham, NG7 2RD, UK
The homeostatic plasticity model posits that tinnitus is triggered by an increase in neural gain due to a
reduced input from a damaged periphery. Evidence for this mechanism in humans has come from
auditory brainstem responses (ABRs). ABRs recorded from tinnitus patients were found to show a
reduced wave I amplitude but a normal wave V (Schaette & McAlpine, 2011). This was interpreted as
evidence of an increase in neural gain between the auditory nerve (wave I) and the upper auditory
brainstem (wave V). However, a confound in this interpretation is that wave I is more dependent on
contributions from high-frequency cochlear regions than wave V. High-frequency cochlear regions are
also more affected by hearing loss. As a result, hearing loss may be expected to affect the amplitude of
wave I more than wave V, and this could be driving the increased wave I/V ratio. To address this
confound, this study aimed to develop optimized methods for measuring this ratio within specific
restricted frequency regions.
Frequency-specific ABRs can be measured by masking the eliciting stimulus with high-pass
(HP) noises of variable cut-off frequencies. This so-called “derived-band” method has been applied
successfully to the larger wave-V response (Dau et al., 2000). Wave I, however, is considerably smaller
and typically requires a very high stimulus level to be elicited reliably. This means that the HP noise
would have to be presented at uncomfortably loud levels to provide effective masking.
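The derived-band arithmetic itself is simple: with a HP-noise cut-off at fc, only cochlear regions below fc contribute, so subtracting the response for a lower cut-off from the response for a higher cut-off isolates the band between them. A minimal sketch, with hypothetical averaged waveforms standing in for real recordings:

```python
import numpy as np

# Hypothetical averaged ABR waveforms (µV), keyed by the high-pass
# masker cut-off in kHz; with a cut-off at fc, only cochlear regions
# below fc contribute to the masked response.
abr = {8.0: np.zeros(512), 4.0: np.zeros(512), 2.0: np.zeros(512)}  # placeholders

def derived_band(abr, hi_cut, lo_cut):
    """Response attributable to the cochlear region between lo_cut and
    hi_cut: the response with the higher cut-off minus the response
    with the lower cut-off."""
    return abr[hi_cut] - abr[lo_cut]

band_4_to_8 = derived_band(abr, 8.0, 4.0)  # 4-8 kHz derived band
band_2_to_4 = derived_band(abr, 4.0, 2.0)  # 2-4 kHz derived band
```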
The aim of the current study was to develop an optimised protocol for recording reliable wave
I responses at stimulus levels low enough to enable derived-band masking. We tested three different
approaches: 1) using a specialised chirp rather than the conventional click as the evoking stimulus; this
should optimise the response synchrony (Dau et al., 2000); 2) increasing the low-frequency content of
the stimulus; we hoped that this would increase the low-frequency contributions to wave I; 3) using in-ear electrodes (“tiptrodes”) rather than standard mastoid electrodes; this was expected to enhance the
wave I amplitude due to the tiptrodes’ closer proximity to the wave I generators.
Our results show that using chirps enhances not only the wave V, but also the wave I
amplitude. The tiptrodes enhanced the wave I amplitude for some conditions but not others, and
consistently reduced the wave V amplitude. Against our expectation, increasing the low-frequency
content of the stimulus reduced the wave I amplitude, even in conditions that included low-frequency
contributions.
References
Dau, T., Wegner, O., Mellert, V. & Kollmeier, B. 2000. Auditory brainstem responses with optimized
chirp signals compensating basilar-membrane dispersion. J Acoust Soc Am, 107, 1530-1540.
Eggermont, J.J. & Don, M. 1980. Analysis of the click-evoked brainstem potentials in humans using
high-pass noise masking. II. Effect of click intensity. J Acoust Soc Am, 68, 1671-1675.
Schaette, R. & McAlpine, D. 2011. Tinnitus with a normal audiogram: Physiological evidence for hidden
hearing loss and computational model. J Neurosci, 31(38), 13452-13457.
P14. Crossmodal cortical reorganisation following profound deafness, measured using 3D
MPRAGE magnetic resonance imaging
J. Britton1, R.S. Dewey2,3,4 and D.A. Hall3,4
1 School of Medicine, University of Nottingham, Nottingham, NG7 2UH, UK
2 Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, NG7 2RD, UK
3 National Institute for Health Research (NIHR), Nottingham Biomedical Research Centre, Nottingham, NG1 5DU, UK
4 Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, NG7 2RD, UK
Introduction: Neuroimaging studies suggest that a lack of sensory stimulation results in crossmodal
cortical reorganisation. Crossmodal cortical reorganisation has been studied using different neuroimaging
strategies such as functional MRI and anatomical MRI analysed using voxel-based morphometry.
Although there is evidence for crossmodal cortical reorganisation in the deprived sensory cortex
following sensory loss (Bavelier et al., 2006, Ding et al., 2015), there is disagreement about how these
changes manifest throughout the reorganised cortex. The main objective of this exploratory study was to
examine signs of regional cortical atrophy and hypertrophy associated with profound deafness.
Methods: The present study employs 3D anatomical imaging and FreeSurfer to quantify morphometric
differences in profoundly-deaf individuals, compared with normally hearing controls. FreeSurfer is an
open-source software package for the analysis of structural MRI data. Specific research questions were
motivated by previous studies hypothesising that differences would manifest in profoundly-deaf
individuals thus: (1) hypertrophy in visual cortices (Lomber et al., 2010); (2) no net morphometric
difference in primary auditory cortex (Wong et al., 2014); and (3) hypertrophy of association cortices
(for a discussion of this in the visual network, see Carriere et al., 2007). Evidence for cortical atrophy or
hypertrophy was determined at a significance level of P = 0.001.
Results: Hypertrophy was found in cortical regions of the visual processing pathway and auditory
association cortex, suggesting a role for visual areas in the process of crossmodal compensation and
associated changes in areas of multisensory integration. Meanwhile, no significant differences were
observed in the primary auditory cortex.
Conclusion: These findings, placed in the context of behavioural outcomes and neurorehabilitation,
further delineate the bearing of profound deafness on the cortical architecture.
References
Bavelier, D., Dye, M.W. & Hauser, P.C. 2006. Do deaf individuals see better? Trends Cogn Sci, 10, 512-518.
Carriere, B.N., Royal, D.W., Perrault, T.J., Morrison, S.P., Vaughan, J.W., Stein, B.E. & Wallace, M.T.
2007. Visual deprivation alters the development of cortical multisensory integration. J
Neurophysiol, 98, 2858-2867.
Ding, H., Qin, W., Liang, M., Ming, D., Wan, B., Li, Q. & Yu, C. 2015. Cross-modal activation of
auditory regions during visuo-spatial working memory in early deafness. Brain, 138, 2750-2765.
Lomber, S.G., Meredith, M.A. & Kral, A. 2010. Cross-modal plasticity in specific auditory cortices
underlies visual compensations in the deaf. Nat Neurosci, 13, 1421-1427.
Wong, C., Chabot, N., Kok, M.A. & Lomber, S.G. 2014. Modified areal cartography in auditory cortex
following early- and late-onset deafness. Cereb Cortex, 24, 1778-1792.
P15. Human hearing: How it works
D. Hewitt
Audiology, Portsmouth Hospitals NHS Trust, Queen Alexandra Hospital, Portsmouth, PO6 3LY, UK
If we search the scientific textbooks it seems that, compared to the Visual System, much less is known
about the Auditory System. The ‘Human Hearing: How it Works’ book has several incomplete and
missing chapters. This missing knowledge has proved a significant barrier to the advancement of effective
habilitation and rehabilitation for those with:
• difficulties hearing speech in noisy environments
• auditory processing disorder (APD)
• tinnitus symptoms
• hyperacusis symptoms
Pure-tone threshold audiometry (PTA) remains the only hearing related test routinely used in Audiology
clinics. This test was developed over 100 years ago. While useful for predicting the extent of hearing
difficulty in quiet environments, it is of limited utility for all other types of hearing problems. Whilst
speech audiometry tests are available they are infrequently used in clinics. This is not unrelated to the
fact that most, as with PTA, involve single-ear testing using headphones. This significantly limits the use
of their results and similarly limits the conclusions reached by the research studies that have used such
speech audiometry tests.
Over the past two decades Auditory Neuroimaging has begun to provide some of the missing
knowledge needed for a more complete understanding of our sense of Hearing. This has been made
possible by advanced neuroimaging technologies such as functional magnetic resonance imaging (fMRI)
and functional near-infrared spectroscopy (fNIRS).
This presentation is an attempt at ‘data mining’ this missing knowledge from the many
Neuroscience (and Hearing) journals. It describes the key components of hearing and how hearing takes
many years to fully mature. Connections are made between the common hearing complaints (such as
difficulty hearing in noisy places and tinnitus symptoms) and these key components.
The presentation is an Opinion. It is a Viewpoint and not a Review. The aim is to stimulate
debate and new research; to make a start at writing the missing chapters of the 'Human Hearing: How it
Works' book; and to begin to move on from the significant limitations of 100-year-old Pure-tone Threshold
Audiometry.
P16. A systematic review of the impact of adjusting input dynamic range (IDR), electrical threshold
T level and rate of stimulation on speech perception ability in cochlear implant users
T.B. Nunn, D. Vickers and T. Green
UCL Ear Institute, UCL Speech, Hearing and Phonetic Sciences, London WC1N 1PF, UK
There are many programmable parameters available within a Cochlear Implant (CI); these are used to
configure a listener’s program or map and define how the CI will stimulate spiral ganglion cells. Research
has to date evaluated the effect of adjusting programmable parameters within the CI with the goal of
improving speech perception. However, there is an absence of systematic evaluation of the effect of
adjusting parameters either individually or in combination using standardised speech perception
measures as an outcome. Accordingly no clinical consensus or corresponding practice guidelines exist
(Vaerenberg et al., 2014). A systematic review was conducted to explore adjustment to the following
programmable parameters: (i) the input dynamic range (IDR): the range of acoustic input sound levels
that are mapped into the electrical dynamic range (EDR), or electrical hearing range, of the CI recipient;
(ii) the electrical stimulation threshold level (T level): for each electrode, the level of electrical
stimulation mapped to the lowest level of the IDR; (iii) the rate of electrical stimulation, per electrode, in
pulses per second (pps).
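For illustration only, a toy version of the IDR-to-EDR mapping that the first two parameters control is sketched below. This is no manufacturer's actual compression law, and all values are invented.

```python
def acoustic_to_electric(level_db, idr_top=65.0, idr_width=60.0,
                         t_level=100.0, c_level=200.0):
    """Toy IDR-to-EDR mapping: acoustic levels within the IDR are mapped
    linearly onto the electrical range between the T and C levels (in
    arbitrary clinical current units); inputs outside the IDR are
    clipped. Raising the T level raises the electrical level assigned
    to the quietest mapped sounds."""
    idr_bottom = idr_top - idr_width
    frac = (level_db - idr_bottom) / idr_width  # position within the IDR
    frac = min(max(frac, 0.0), 1.0)             # clip below/above the IDR
    return t_level + frac * (c_level - t_level)

print(acoustic_to_electric(5.0))   # below the IDR -> T level
print(acoustic_to_electric(65.0))  # top of the IDR -> C level
```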
Data sources include MEDLINE and EMBASE with no language restrictions.
Study selection - A database search using the terms “Cochlear Implant” and “Speech perception”,
combined with each programmable parameter (“T level”, “IDR” and electrical “stimulation rate”),
found 242 articles, of which 32 met the inclusion criteria.
Data extraction - Study quality assessment included whether ethical approval was gained, a power
calculation was performed, eligibility criteria were clearly specified, appropriate controls and outcome
measures were used, confounding factors were reported and controlled for, missing data were accounted
for and an appropriate analysis of the data was performed.
Data synthesis - Due to the heterogeneity of outcomes, CI devices and study designs,
comparisons were made by structured review.
Conclusion: The quality of studies was found to be moderate to poor. Increasing T levels above
behavioural threshold, or as a proportion of the EDR, has been demonstrated to improve monosyllable, speech-in-quiet and speech-in-noise perception. Specific IIDR and IDR settings may improve monosyllable and
speech-in-noise perception. No recommendation could be determined for rate of stimulation, as speech
perception varied significantly across the stimulation rates examined. To optimise speech perception, a
bespoke approach to parameter settings providing a personalised CI fitting is recommended;
however, specific detail of how to optimise settings and the interactions between parameters is as yet
unknown.
Acknowledgements
Supported by an Action on Hearing Loss Grant to TBN.
Reference
Vaerenberg, B., Smits, C., De Ceulaer, G., Zir, E., Harman, S., Jaspers, N., Tam, Y., Dillon, M., Wesarg,
T., Martin-Bonniot, D., Gartner, L., Cozma, S., Kosaner, J., Prentiss, S., Ssidharan, P., Briaire,
J.J., Bradley, J., Debruyne, J., Hollow, R., Patadia, R., Mens, L., Veekmans, K., Greisiger, R.,
Harbourn-Cohen, E., Borel, S., Tavora-Vieira, D., Mancini, P., Cullington, H., Ng, A.H.,
Walkowiak, A., Shapiro, W.H., Govaerts, P.J. 2014. Cochlear implant programming: a global
survey on the state of the art. The Scientific World Journal, 2014, 501738.
P17. Could spectral centroid play a role in voice pitch and vocal-tract length perception in acoustic
and electric hearing?
E. Gaudrain1,2,3
1 CNRS UMR 5292, Lyon Neuroscience Research Center, University of Lyon, Lyon, France
2 University of Groningen, University Medical Center Groningen, Dept of Otorhinolaryngology/Head and Neck Surgery, Groningen, The Netherlands
3 University of Groningen, Research School of Behavioral and Cognitive Neurosciences, Groningen, The Netherlands
To segregate voices in cocktail party situations, normal-hearing (NH) listeners rely on two principal voice
characteristics: voice pitch (F0) and vocal-tract length (VTL). Cochlear implant (CI) listeners have been
shown to have much larger F0 and VTL discrimination thresholds than NH listeners. However, the
mechanisms underlying perception of these cues in CI users, but also in NH listeners, remain largely
unknown.
In implants, while many studies argue that F0 can be perceived via the temporal coding
mechanisms, other recent studies have suggested that spectral centroid (SC) could be used instead. When
the F0 changes, the level of excitation of the lower frequency channels of the implant changes, thus
shifting the SC. Similarly, while some researchers argue that VTL is perceived through its effect on
individual formants, others have argued that, similar to musical timbre, VTL perception might also rely
on SC.
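One common operationalisation of SC, and the one assumed in the sketch below (with invented channel data), is the energy-weighted mean frequency across analysis channels:

```python
import numpy as np

def spectral_centroid(channel_energy, channel_freq):
    """Energy-weighted mean frequency across channels (Hz). For a CI,
    channel_energy could be the per-electrode excitation; for acoustic
    hearing, the output of an auditory filterbank."""
    e = np.asarray(channel_energy, dtype=float)
    f = np.asarray(channel_freq, dtype=float)
    return float(np.sum(f * e) / np.sum(e))

# Invented example: changing the excitation of the lower channels
# shifts the centroid, as described above for F0 changes.
freqs = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])
print(spectral_centroid([1, 2, 4, 3, 1], freqs))
print(spectral_centroid([3, 4, 4, 3, 1], freqs))
```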
However, both these assumptions result from the observation of steady-state stimuli. In
natural speech, the formant trajectories create a tremendous variability in SC, which may blur small F0
and/or VTL differences. Using basic auditory models, the variability of perceptual SC in natural speech
was evaluated and compared to the effects of F0 and VTL variations. The results indicate that SC is
unlikely to play a role in F0 and VTL perception in normal acoustic hearing. However, while likely
insufficient to totally explain sensitivity performance in CI listeners, SC could play a role in F0 and VTL
perception in electric hearing.
P18. Evaluation of a programmable hearing aid
A. Awawdeh and S. Bleeck
Institute of Sound and Vibration Research, University of Southampton, Southampton, SO17 1BJ, UK
We are working towards a prototype of a hearing aid that is fully programmable and that can be controlled
via a smartphone. This aid is primarily aimed at research purposes; it should be able to run software
algorithms for take-home use, giving the wearer days or weeks of exposure to explore new algorithms.
As a prototype towards this goal, we present a hardware solution with the following
characteristics: a programmable BTE with 12-channel wide dynamic range compression, adaptive
feedback cancellation and noise reduction. The aid is built on the Ezairo 7150 hearing aid chip, and the
software is provided by the company Two-Pi. The 4 programmes of the aid can be controlled via a
smartphone.
We are currently evaluating the aid in a student project with 8-10 participants who are
experienced hearing aid users. We are using questionnaires and interviews to establish measures of
sound quality, loudness perception, presence of distortions or echo, quality of the user's own voice and the
ability to control the hearing aid using a smartphone, and will compare them with their current aid.
At the conference we will demonstrate the hardware and show results of the evaluation.
P19. Using psychophysics to better fit hearing-aid gain
B. Caswell-Midwinter1,2, W.M. Whitmer1 and I.R.C. Swan1,2
1 MRC/CSO Institute of Hearing Research - Scottish Section, Glasgow, UK
2 School of Medicine, Dentistry and Nursing, College of Medical, Veterinary and Life Sciences, University of Glasgow, UK
Gain is defined as the level of amplification applied at particular frequencies by a hearing aid. It is the
fundamental parameter which makes sounds audible for the listener. Adjusting gain is central to real-ear
measurements (REMs), which verify whether the appropriate gain is being delivered to the eardrum, and
also fine-tuning, which allows for personalisation. However, there is no agreement on the scale of
adjustments that should be made by audiologists, and clinical practice varies widely. Fine-tuning gain
can be unreliable and inefficient if patients cannot detect gain adjustments. Furthermore, while widely
used troubleshooting guides exist for adjusting gain based on descriptors (Jenstad et al., 2003), these have
not been verified by patients, nor operationalised to detail level and frequency of adjustments.
To investigate the detectability of gain adjustments, we measured the just-noticeable
differences (JNDs) for increments away from a prescription reference in speech-shaped noise and single-syllable, male-talker words. We used fixed-level, same-different procedures. JNDs were estimated from
the increment corresponding to a d’ of 1. For speech-shaped noise, we measured a median JND of 3.0 dB
at five octave-bands (0.5-4 and 6 kHz) and 4.8 dB at 0.25 kHz. We measured a JND of 1.7 dB for overall-level increments. The procedure led to less missing data and fewer floor effects when compared to a three-alternative forced-choice adaptive procedure, and the JNDs were lower (better) than those previously
reported. These JNDs suggest values for REM tolerances, and also suggest that starting gain adjustments
of 1 dB are unreliable. For words, we measured initial median JNDs of 4.6 dB and 4.1 dB for single and
multiple octave-band increments respectively, and 2.0 dB for overall-level increments. These JNDs
suggest further caution when using small adjustments to fine-tune gain with live speech.
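A minimal sketch of a d'-based JND estimate of this kind follows. The counts are hypothetical, and the simple z(hit rate) - z(false-alarm rate) form is used; a full independent-observations same-different model would apply a further correction to d'.

```python
import numpy as np
from scipy.stats import norm

def dprime(hits, false_alarms, n_different, n_same):
    """d' from hit and false-alarm counts, using a log-linear
    correction to avoid rates of exactly 0 or 1."""
    h = (hits + 0.5) / (n_different + 1)
    f = (false_alarms + 0.5) / (n_same + 1)
    return norm.ppf(h) - norm.ppf(f)

# Hypothetical data: d' at each gain increment, then linear
# interpolation to the increment corresponding to d' = 1.
increments = np.array([1.0, 2.0, 4.0, 6.0])      # dB
dprimes = np.array([dprime(22, 15, 40, 40),      # invented counts
                    dprime(28, 12, 40, 40),
                    dprime(34, 8, 40, 40),
                    dprime(38, 5, 40, 40)])
jnd = np.interp(1.0, dprimes, increments)
print(f"estimated JND = {jnd:.1f} dB")
```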
We also propose a two-part experiment for applying JNDs in troubleshooting practices.
Listeners will initially be tasked with mapping words processed with gain adjustments to a closed-set of
descriptors. Adjustments will be formed of fixed-level increments away from a prescription reference.
Subsequently, listeners will adjust their own mapped problem setups to preferred setups using a paired-comparison approach. The results could be used to examine the validity of troubleshooting guides and suggest the level
and frequency of adjustments required to troubleshoot descriptors.
Examining the detectability of gain adjustments and their relevance to problem descriptors
will help develop a fitting protocol that fosters reliable listener feedback, informing suitable starting
levels and tolerances for adjusting gain by audiologists and hearing-aid users themselves.
Acknowledgements
This work was supported by the Medical Research Council (grant number U135097131) and by the Chief
Scientist Office of the Scottish Government.
References
Jenstad, L.M., Van Tasell, D.J. & Ewert, C. 2003. Hearing aid troubleshooting based on patients’
descriptions. J Am Acad Audiol, 14(7), 347-360.
P20. Audio mixing for hearing-impaired ears: A literature review
T.R. Agus
School of Arts, English and Languages, Queen's University Belfast, Belfast, BT7 1NN, UK
Television audio is usually mixed by normal-hearing sound engineers, and on average, hearing-impaired
listeners prefer different balances from normal-hearing viewers (Shirley et al., 2017). Relatively clean tracks of speech, music
and sound effects are typically available at the later stages of production, and may even be delivered to
consumers in the form of object-based audio (Herre et al., 2015). Although object-based audio was
motivated by the goal of greater flexibility in spatial reproduction, it also paves the way for personalised audio
mixes, raising questions about strategies to mix for hearing-impaired ears.
Off-line remixing lacks the flexibility and real-time reactivity of hearing aids, but where it is
feasible, it offers considerable advantages in terms of compute power: the processing need not be low-latency or even real-time and, with a continuous power supply, more power-intensive
algorithms and higher-quality audio could be used. An offline processor would have the advantage of being able to plan
ahead for the full dynamic range of the track. Furthermore, where there is access to clean tracks, they
could be compressed independently as appropriate while avoiding comodulations between tracks. Signal-to-noise ratios could be improved for hearing-impaired ears, potentially affecting different frequency
bands independently, to some extent.
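As a toy illustration of independent per-track compression (not a production algorithm; the tracks and parameters below are invented), each clean track is compressed on its own before mixing, so the gain applied to one track is never modulated by another:

```python
import numpy as np

def compress(x, thresh_db=-20.0, ratio=3.0):
    """Toy static, sample-by-sample compressor. In offline use the whole
    track is available, so gain could instead be planned over the
    track's full dynamic range (no look-ahead limits)."""
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(x) + eps)
    over = np.maximum(level_db - thresh_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)  # attenuate only above threshold
    return x * 10.0 ** (gain_db / 20.0)

# Invented clean tracks: compress independently, then mix.
fs = 48000
t = np.arange(fs) / fs
speech = 0.5 * np.sin(2 * np.pi * 220.0 * t)  # stand-in for a speech track
effects = 0.2 * np.random.default_rng(0).normal(size=fs)
mix = compress(speech) + compress(effects, thresh_db=-30.0)
```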
The development of strategies for mixing audio is limited by the extent to which the goals of
mixing are understood and can be modelled.
Here, I review the psychoacoustical models used by those engaged in automated mixing, and
the tests of these models (e.g., Ma et al., 2015). I also review sound-engineering books, both for explicit links to the psychoacoustical literature and for implicit knowledge of psychoacoustical principles demonstrated through practical advice.
References
Herre, J., Hilpert, J., Kuntz, A. & Plogsties, J. 2015. MPEG-H 3D audio: The new standard for coding of
immersive spatial audio. IEEE J Sel Top Signal Process, 9(5), 770-779.
Ma, Z., Man, B.D.E., Pestana, P.D.L., Black, D. & Reiss, J.D. 2015. Intelligent multitrack dynamic range
compression. J Audio Eng Soc, 63(6), 412-426.
Shirley, B., Meadows, M., Malak, F., Woodcock, J. & Tidball, A. 2017. Personalized object-based audio
for hearing impaired TV viewers. AES J Audio Eng Soc, 65(4), 293-303.
P21. Benefits of contralateral routing of signals for listeners with single-sided deafness in diffuse
noise: modelling and simulation
J.F. Culling and M. Weiss
School of Psychology, Cardiff University, Cardiff, CF10 3AT, UK
In single-sided deafness, contralateral routing of signals (CROS) can add sound from the deaf side to the
sound at the hearing ear. Similarly, bone-conduction hearing aids conduct sound contralaterally to the
active cochlea. These techniques improve awareness of sound on the patient’s deaf side, but may not
provide any benefit to speech perception in noise. Peters et al. (2015) concluded that there was no good
evidence for speech-in-noise benefit with either method. Experiments generally demonstrate an
improvement in speech reception when the target speech is on the deaf side, countered by a worsening
when the interfering noise is on the deaf side. However, Taal et al. (2016) pointed out that many real
listening situations involve diffuse noise from many directions. Under these conditions, they
demonstrated improvement in speech reception threshold with CROS (CROS benefit) averaging 1.75 dB
across different target-speech positions (-90°, 0°, 90°) in five unilateral cochlear-implant users. A model predicted these data and further indicated that the average benefit across all target positions would be 1.25 dB.
We confirmed these findings in normally hearing listeners by simulating the same listening
situation over headphones using both tone-vocoded and unprocessed speech-in-noise stimuli. Mean
CROS benefits of 1.5 dB were found in each case. CROS was modelled using the Jelfs et al. (2011) model
by first summing the left- and right-ear head-related impulse responses. The model predicted the variation
in SRTs accurately (r = 0.97), but the mean predicted CROS benefit for the three target positions (or averaging over 360°) was only 0.5 dB. The modelling also indicated that when CROS is implemented with minimal delay (e.g. through a wired connection), it produces a narrowly focussed (±5°) beamforming benefit of 2.6 dB for frontally located target voices. However, when modelling a real restaurant, this beamforming effect was no more beneficial than turning the hearing ear towards the target speech.
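For illustration, a minimal sketch (in Python, with hypothetical variable names) of how a zero-delay CROS condition can be represented before applying a binaural intelligibility model: the deaf-side and hearing-side head-related impulse responses are summed, as described above.

import numpy as np

def simulate_cros(signal, hrir_deaf_side, hrir_hearing_side):
    """Render the signal at the hearing ear with and without CROS.

    hrir_deaf_side, hrir_hearing_side: 1-D head-related impulse
    responses for the source, measured at the deaf and hearing ears.
    """
    without_cros = np.convolve(signal, hrir_hearing_side)
    # With zero-delay CROS, the deaf-side microphone signal is added at
    # the hearing ear, which is equivalent to summing the two HRIRs.
    with_cros = np.convolve(signal, hrir_deaf_side + hrir_hearing_side)
    return without_cros, with_cros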
References
Jelfs, S., Culling, J.F. & Lavandier, M. 2011. Revision and validation of a binaural model for speech
intelligibility in noise. Hear Res, 275, 96-104.
Peters, J.P.M., Smit, A.L., Stegeman, I. & Grolman, W. 2015. Review: Bone conduction devices and
contralateral routing of sound systems in single-sided deafness. Laryngoscope, 125, 218-226.
Taal, C.H., van Barneveld, D.C.P.B.M., Soede, W., Briaire, J.J. & Frijns, J.H.M. 2016. Benefit of contralateral routing of signals for unilateral cochlear implant users. J Acoust Soc Am, 140, 393-401.
P22. Effect of chronic stimulation and level on temporal pitch perception by cochlear-implant
listeners
J.M. Deeks1, F.M. Guerit1, A.J. Billig1, Y.C. Tam2, F. Harris2 and R.P. Carlyon1
1 MRC Cognition and Brain Sciences Unit, University of Cambridge, UK
2 The Emmeline Centre, Addenbrooke’s Hospital, Cambridge, UK
Presenting a pulse train to one cochlear-implant (CI) electrode produces a pitch that usually increases with pulse rate only up to an “upper limit” of about 300-500 pps (task dependent), which varies
markedly across subjects and electrodes. Animal experiments suggest that the limitation arises at or prior
to the inferior colliculus, where neurons entrain to pulse trains up to a certain rate, above which they
show only an onset response. This physiological upper limit is sensitive to auditory deprivation, and can
be restored (increased) by periods of chronic stimulation (Middlebrooks, CIAP 2013). We investigated
whether the psychophysical upper limit increased in adult-deafened human CI patients following
activation of their implant. We therefore tested 9 users of the Cochlear device on the day of activation
(“switch on”) and two months later, using 400-ms pulse trains presented in monopolar mode to electrode
16. We used an optimally-efficient pitch-ranking procedure and eight pulse rates logarithmically spaced
between 120-981 pps to measure the upper limit of pitch. Two measures controlled for potential practice
effects: (i) we included a task – rate discrimination measured using an adaptive procedure and with
standard and signal rates centred on 120 pps – that we expected to be less sensitive to chronic stimulation,
(ii) each task was tested twice during each session; the reasoning was that practice effects would be at least as large within sessions as between them. At the start of each session the levels of each pulse train were
set to the listener’s Most Comfortable Level (MCL), and these levels were used for the rate discrimination
and pitch-ranking tasks. The upper limit increased significantly between but not within sessions.
Performance on the low-rate discrimination task improved significantly between sessions, but the effect
size was significantly less than for the upper limit. Both of these findings are consistent with an effect of
neural plasticity. However, subjects set the pulse trains to a higher MCL in the second session, which
might explain the increase in upper limit. Indeed, we found that, for a separate set of subjects,
performance on both tasks improved with increasing level. Two findings provide tentative arguments
against the level-difference explanation: (i) the increase in upper limit did not correlate, across subjects,
with the change in MCL between sessions, (ii) a group of seven listeners re-tested after 6 months showed
further increases in MCL from 2-6 months that were not consistently accompanied by changes in the
upper limit. In addition, we re-tested six subjects after 9 months, using levels similar to those at switchon. Overall, the upper limit did not differ significantly from switch-on, but two listeners still showed
improved discrimination despite the softer loudness of the stimuli.
Acknowledgements
Supported by the MRC.
P23. Development of the auditory change complex after cochlear implant switch-on
R.G. Mathew1, J. Undurraga1,2, P. Boyle3, A. Shaida4, D. Selvadurai5, D. Jiang6 and D. Vickers1
1 UCL Ear Institute, London, UK
2 Department of Linguistics, Macquarie University, Australia
3 Advanced Bionics GmbH, Hannover, Germany
4 University College London Hospital, London, UK
5 St George’s Hospital NHS Trust, London, UK
6 Guy’s and St Thomas’ NHS Trust, London, UK
Background: Electrode discrimination is related to speech perception in cochlear-implant (CI) users. We have previously shown that electrode discrimination can be measured objectively with the spatial auditory change complex (ACC), which is a cortical response elicited by a change in place of stimulation.
If the spatial ACC is to be used clinically then it is important to understand how this response develops
in relation to behavioural discrimination after switch-on. The aim of this study was to determine A) how
the spatial ACC develops in the first 6 months after switch-on and B) whether changes in the ACC could
be explained by the development of loudness tolerance.
Method and Results: A) Ten adult CI users (age 27-80 years), including 2 pre-lingually deafened adults, took part in the study. Apical electrode pairs were tested at 1 week, 3 months and 6 months after switch-on. For 2 participants, data were also collected at 12 months. EEG was recorded with the Biosemi Active
2 system and behavioural electrode discrimination was measured with a 3-interval 2-alternative forced
choice task. Open-set sentence and closed-set vowel perception were measured at each visit. There was
a significant increase in spatial ACC amplitude as well as behavioural discrimination scores over time.
Of note, in certain individuals the spatial ACC preceded accurate behavioural discrimination. Mixed
model analysis showed that ‘time after switch-on’ and the ‘number of discriminable apical electrodes’
(measured with the ACC) were significant predictors of speech perception.
B) Changes in cortical responses that occur over time could be confounded by changes in loudness
perception and stimulation levels. We therefore conducted an experiment on the effect of stimulus
intensity on the spatial ACC in 9 adult CI users (age 42-68). Apical electrode pairs were loudness
balanced and stimulated at 40, 50, 60, 70, and 80% of the dynamic range. EEG and behavioural
performance were measured as in part A. We found that, in general, higher stimulus intensity resulted in better behavioural discrimination scores as well as larger ACC amplitudes. This effect, however, was highly variable and was predominantly seen in poor discriminators.
Discussion: There is a strong relationship between the spatial ACC and behavioural discrimination. These measures show that electrode discrimination can improve over long periods after switch-on. In addition, our data show that changes over time can only partially be explained by the development of loudness tolerance. We suggest that the spatial ACC could be used to aid rehabilitation of CI users.
Acknowledgements: This work is supported by the UCL Graduate Research Scholarship Programme and Advanced Bionics.
P24. Steady streaming as a method for drug delivery to the inner ear
L. Sumner and T. Reichenbach
Bioengineering Department, Imperial College London, London, SW7 2AZ, UK
The human ear converts pressure waves into electrical signals which are then relayed to the brain. This mechanotransduction occurs in the organ of Corti, situated on the basilar membrane inside the spiral-shaped cochlea. The motion of the basilar membrane due to sound causes hair cells inside the organ of Corti to open ion channels and trigger action potentials in attached auditory-nerve fibers. However, hair cells can be damaged by noise exposure or ageing and, in mammals, cannot be regenerated, leading to sensorineural hearing loss.
Sensorineural hearing loss can potentially be prevented or treated with drugs, but delivering the drugs to the hair cells remains a major challenge. The inner ear is encased in the body’s hardest bone, and drugs can only be injected through the round window or oval window at its base (Salt & Plontke, 2009).
Here we seek to investigate whether steady streaming may be employed to distribute drugs from the base of the cochlea across its longitudinal extent. Steady streaming arises in fluid systems with a fluctuating flow and results in a non-zero mean flow (Riley, 2001). In the cochlea, the fluctuating flow results from the motion of the basilar membrane. This membrane has a spatially varying impedance which allows it to segregate frequencies spatially (Reichenbach & Hudspeth, 2014): the basilar membrane responds maximally to a given frequency at a frequency-specific location. Distributing drugs through steady streaming may thus employ a series of sounds of different frequencies to transport drugs from the high- to the low-frequency region.
We model the basilar-membrane motion at a particular frequency through the WKB approximation. We then use this motion as input to a computational fluid-dynamics (CFD) simulation that we implement using OpenFOAM. We show that the simulation predicts considerable steady streaming, and we plan to use it to investigate different types of sound stimulation for efficient drug transport.
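For illustration, a minimal sketch of a WKB-style travelling-wave calculation in a one-dimensional box model of the cochlea. All parameter values and the impedance form are illustrative assumptions, not those of the study.

import numpy as np

rho, height = 1000.0, 1e-3         # fluid density (kg/m^3), duct height (m)
x = np.linspace(0.0, 35e-3, 2000)  # position along the cochlea (m)

def bm_impedance(omega):
    """Local basilar-membrane impedance with exponentially graded stiffness."""
    stiffness = 1e10 * np.exp(-300.0 * x)   # decreases from base to apex
    mass, damping = 0.5, 100.0
    return 1j * omega * mass + damping + stiffness / (1j * omega)

def wkb_bm_velocity(freq):
    """WKB solution p(x) ~ k^(-1/2) exp(-i integral k dx) of the long-wave
    equation p'' + k^2 p = 0, with k^2 = -2 i omega rho / (height * Z)."""
    omega = 2.0 * np.pi * freq
    z = bm_impedance(omega)
    k = np.sqrt(-2j * omega * rho / (height * z))
    phase = np.cumsum(k) * (x[1] - x[0])    # running integral of k
    pressure = k ** -0.5 * np.exp(-1j * phase)
    return pressure / z                      # BM velocity = p / Z

velocity = wkb_bm_velocity(1000.0)  # envelope peaks near the 1-kHz place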
Acknowledgements
Supported by EPSRC through the CDT Fluid Dynamics Across Scales.
References
Salt A.N. & Plontke S.K. 2009. Principles of local drug delivery to the inner ear. Audiol Neurotol, 14,
350.
Riley, N. 2001. Steady streaming. Ann Rev Fluid Mech, 33, 43.
Reichenbach, T. & Hudspeth, A.J. 2014. The physics of hearing: fluid mechanics and the active process
of the inner ear. Rep Progr Phys, 77, 7.
P25. EEG-measured correlates of intelligibility in speech-in-noise listening
O. Etard and T. Reichenbach
Bioengineering Department, Imperial College London, London, SW7 2AZ, UK
Humans excel at analysing complex acoustic scenes. Independent sources can be detected and segregated
to extract perceptually relevant and distinct auditory objects, even when they are acoustically
overlapping. This process is especially relevant in the context of the cocktail party problem where
effective segregation of different competing speech signals is necessary for communication.
Brain activity in the delta and theta frequency bands has recently been found to track the speech envelope. This neural correlate of speech processing is enhanced for an attended speaker in a setup with competing speakers (attentional modulation). Moreover, variations of the neural response in the delta and theta frequency bands with speech intelligibility (acoustic degradations) have been established. However, such intelligibility variations are obtained through acoustic manipulations of the stimuli, making it arduous to tease apart the effect of the intelligibility of a stimulus from that of the acoustic modifications.
We employed electroencephalography (EEG) with 64 scalp electrodes to record the neural responses of native English speakers listening to naturalistic continuous speech in varying conditions of noise and intelligibility (competing speakers and speech-in-babble-noise). We obtained EEG recordings in response to English stimuli as well as in response to a foreign, unknown language (Dutch). We used
regularised linear spatio-temporal models to correlate the speech envelopes to the EEG waveforms. We
show that we can accurately reconstruct the stimulus envelopes, and demonstrate modulations of the
amplitude and latency of the neural response in different frequency bands (delta, theta, alpha) with
increasing noise levels. The comparison of the results obtained for the intelligible English stimuli to the
unintelligible Dutch stimuli allows us to identify neural correlates of speech intelligibility.
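For illustration, a minimal sketch of a regularised (ridge) linear spatio-temporal decoder of the kind described above, mapping time-lagged EEG to the speech envelope. Array shapes, lag range and the regularisation constant are illustrative assumptions.

import numpy as np

def lagged(eeg, max_lag):
    """Stack time-lagged copies of every channel: (samples, channels*lags)."""
    n, c = eeg.shape
    X = np.zeros((n, c * max_lag))
    for lag in range(max_lag):
        X[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag, :]
    return X

def fit_decoder(eeg, envelope, max_lag=32, lam=1e3):
    """Ridge regression: weights mapping lagged EEG to the envelope."""
    X = lagged(eeg, max_lag)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def reconstruction_accuracy(eeg, envelope, weights, max_lag=32):
    """Pearson correlation between reconstructed and actual envelopes."""
    estimate = lagged(eeg, max_lag) @ weights
    return np.corrcoef(estimate, envelope)[0, 1]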
Acknowledgements
We are grateful to the Royal British Legion Centre for Blast Injury Study at Imperial College London for
their support.
P26. Electrophysiological correlates of word sequence features in natural spoken language
H. Weissbart, K.D. Kandylaki and T. Reichenbach
Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
Research on electrophysiological correlates of language processing often employs simplified stimuli such
as single words or short sentences (Ding et al., 2016). However, such an approach cannot assess neural
responses to statistical features of word sequences in longer narratives. To overcome this limitation, we
measured cortical responses to natural spoken language. Here we aimed to quantify neural responses to linguistic features that encode the sequentiality of words.
We employed electroencephalography (EEG) to measure neural responses of native English
speakers to continuous English speech. Linguistic features encoding for statistics of language were
extracted from the text that corresponded to the speech signals and were aligned to the acoustic signal
through forced alignment. Firstly, the frequency of each word in a large corpus was obtained using Google Ngrams. Then we employed recurrent neural networks for language modelling to obtain word-level statistics that account for context. We obtained the probability for each word in a sequence given the
previous words in the sequence. The negative logarithm of that probability is the surprisal of that word.
To control for neural responses to the acoustic properties of speech, we determined the onset of each
word and used that feature as a control variable. We then employed linear regression with regularization
to correlate the EEG responses to the linguistic and acoustic features. As an additional control, we
performed the same analysis for EEG responses to a foreign language, namely Dutch, which has similar
acoustic properties as English but was incomprehensible to the participants.
We found that both the acoustic feature as well as the linguistic features elicited distinct neural
responses. Neural responses to surprisal could not be explained by the acoustic properties or by word
frequency alone. The neural responses to linguistic features were absent when participants listened to the foreign language.
Our study reveals electrophysiological correlates of statistical features of word sequences that
emerged when analyzing neural responses to long sequences of continuous spoken language. The cortical
response to surprisal, which represents an encoding of the predictability of word sequences, supports the
predictive coding hypothesis as a sequence of words leads to the prediction of the next word.
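For illustration, a toy computation of word surprisal, the negative log probability of a word given its context; here a simple bigram estimate stands in for the recurrent-network language model used in the study.

import math
from collections import Counter

def bigram_surprisal(words):
    """Surprisal of each word, -log2 p(w_t | w_{t-1}), from bigram counts."""
    unigrams = Counter(words)
    bigrams = Counter(zip(words[:-1], words[1:]))
    return [-math.log2(bigrams[(prev, cur)] / unigrams[prev])
            for prev, cur in zip(words[:-1], words[1:])]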
Acknowledgements
This work was supported by the EPSRC Centre for Doctoral Training in Neurotechnology for Life and
Health (grant number EP/L016737/1).
References
Ding, N., Melloni, L., Zhang, H., Tian, X. & Poeppel, D. 2016. Cortical tracking of hierarchical linguistic structures in connected speech. Nature Neurosci, 19, 158-164.
P27. Effects of frequency region and number of formants in an interferer on the informational
masking of speech
B. Roberts and R.J. Summers
Psychology, School of Life and Health Sciences, Aston University, Birmingham B4 7ET, UK
The impact on intelligibility of a single extraneous formant presented in the opposite ear to a target
sentence is strongly influenced by the depth of variation in the formant-frequency contour of the interferer
(Roberts & Summers, 2015; see also Roberts et al., 2014). This study explored whether the extent of
informational masking also depends on the frequency region and number of formants in an interferer.
Three-formant analogues of natural sentences were generated using a monotonous buzz
source (F0 = 140 Hz) and second-order resonators. Target formants (F1+F2+F3) were presented
monaurally; the target ear was assigned randomly on each trial to ensure conditions of spatial uncertainty.
Interferers were presented in the contralateral ear. In experiment 1, single-formant interferers were
created using the time-reversed F2 frequency contour and a constant amplitude, RMS-matched to F2.
Interferer centre frequency was matched to that of F1, F2, or F3, while maintaining the extent of formant-frequency variation (depth) on a log scale. In experiment 2, the interferer comprised either one formant
(derived from F1) or three (derived from F1+F2+F3). Each interfering formant was created using the
time-reversed frequency contour of the corresponding target formant, and had an RMS-matched constant
amplitude. Owing to the spectral tilt of natural speech, the inclusion of the higher formants had little
effect on overall interferer intensity (<1 dB). Interferer formant-frequency variation was scaled to 0%,
50%, or 100% of the original depth.
Adding a single-formant interferer typically lowered intelligibility but its centre frequency
had relatively little effect. Although increasing the depth of formant-frequency variation in the interferer
had the greatest impact on intelligibility, the number of interfering formants was also important; these
factors had independent and additive effects. The results suggest that the impact on intelligibility depends
primarily on the overall extent of frequency variation in the interferer and the number of extraneous
formants, not on any particular frequency region.
Acknowledgements
Supported by ESRC (Grant number ES/N014383/1).
References
Roberts, B. & Summers, R.J. 2015. Informational masking of monaural target speech by a single
contralateral formant. J Acoust Soc Am, 137, 2726-2736.
Roberts, B., Summers, R.J. & Bailey, P.J. 2014. Formant-frequency variation and informational masking
of speech by extraneous formants: Evidence against dynamic and speech-specific acoustical
constraints. J Exp Psychol Hum Percept Perform, 40, 1507-1525.
P28. The effect of lexicality on the audio-visual integration of speech
P.C. Stacey1, D. Scott1, C.J. Sumner3, P.T. Kitterick2 and K.L. Roberts1
1 Department of Psychology, Nottingham Trent University, Nottingham, NG1 4BU, UK
2 NIHR Nottingham Biomedical Research Centre, Ropewalk House, Nottingham, NG1 5DU, UK
3 MRC Institute of Hearing Research, Nottingham University, NG7 2RD, UK
Integrating information from audition and vision is of great benefit to speech perception, especially when
people listen in noisy environments. The present study investigated the role of lexicality in audio-visual
integration, asking whether combining information from audition and vision is purely a bottom-up
process, or whether top-down lexical information also plays a role. Lexical information is known to be
important in the perception of speech under audio-only conditions; the word superiority effect (Cutler et
al., 1987) shows that phonemes are recognised more quickly and accurately if they appear in words than
if they appear in non-words. Research into whether the audio-visual integration of speech is also lexically
influenced has revealed mixed results. Fort et al. (2010) found evidence for the word superiority effect
for audio-visual conditions, while Sams et al. (1998) found no difference in the strength of the McGurk
effect for words compared to non-words.
The present study extends previous research by explicitly requiring participants to match
auditory and visual information, and investigating whether lexical information affected responses.
Twenty-five normal hearing participants were presented with either words or non-words in which either
the audio and visual information matched or did not. In order to increase task difficulty, stimuli were
degraded by adding white noise to the auditory signal and by blurring the visual signal. The results
showed no overall differences in accuracy or response sensitivity (d’) between word and non-word trials. However, responses to words were significantly faster than responses to non-words. As expected, responses were slower and less accurate when the stimuli were degraded. The results support the idea that audio-visual integration is informed not only by the physical properties of stimuli, but also by information from the lexicon.
References
Cutler, A., Mehler, J., Morris, D. & Segui, J. 1987. Phoneme identification and the lexicon. Cogn Psychol,
19, 141-177.
Fort, M., Spinelli, E., Savariaux, C. & Kandel, S. 2010. The word superiority effect in audiovisual speech
perception. Speech Commun, 52, 525-532.
Sams, M., Manninen, P., Surakka, V., Helin, P. & Katto, R. 1998. McGurk Effect in Finnish syllables,
isolated words, and words in sentences: Effect of word meaning and sentence context. Speech
Commun, 26, 75-87.
P29. Discrimination and identification of lexical tones and consonants in Mandarin-speaking
children using cochlear implants
L. Cabrera1 and F-M. Tsao2
1 UCL, Speech Hearing and Phonetic Sciences, UK
2 National Taiwan University, Taiwan
Although children using cochlear implants (CIs) generally show poorer language outcomes compared to
normal-hearing children, hearing-impaired children learning English or French are able to discriminate
phonetic features or identify words above chance level. However, children learning a lexical-tone
language, such as Mandarin, show specific difficulties in lexical-tone production and also perceptual
confusions for some lexical tones compared to normal-hearing children. Indeed, CI processors do not convey important acoustic information for pitch perception, such as the voice-pitch-related information conveyed by the fine spectro-temporal structure. Nevertheless, no study has directly measured whether
consonants are easier to perceive than lexical tones for the same children using CIs.
The present study assessed speech perception abilities of 4-to-7-year-old children using CIs
and speaking Mandarin in Taiwan. The perception of consonants and lexical tones was compared in a
group of 16 children. Lexical tones were expected to be overall more difficult to perceive than consonants with CIs. Speech perception abilities were measured using a syllable discrimination task and a word identification task. For consonant perception, manner-of-articulation contrasts ([ʦ] vs. [s]; [ʂ] vs. [tʂ]; [x] vs. [k]) were expected to be easier to perceive than place-of-articulation contrasts ([p] vs. [t]; [ʦ] vs. [tʂ]; [ʂ] vs. [x]), as observed for adults. For lexical-tone perception, the contrast between tone 1 (high level tone) and tone 3 (low dipping tone), which have different pitch contours, was expected to be easier than the contrast between tone 1 and tone 2 (rising tone), which have similar pitch heights.
Results of the word identification task showed that CI children experience more difficulty using lexical tones than consonants to identify words. More precisely, performance varied according to the tone contrast: as expected, tone 1 vs. tone 2 was the most difficult. However, the phonetic discrimination task showed a different pattern of results: CI children were significantly better at discriminating lexical-tone contrasts than consonant contrasts.
These results suggest that children using CIs are able to discriminate lexical tones on the basis
of reduced spectro-temporal information. However, word identification, and thus word learning, may be challenging for these same speech contrasts. It is possible that the speech rehabilitation programmes in Taiwan
accentuate the discrimination of lexical tones compared to consonants, as lexical-tone perception is
known to be difficult with CIs. It will be important to explore further this difference between phonetic
discrimination and word identification performance in children using CIs.
P30. Effect of the number of amplitude-compression channels on the intelligibility of speech
in noise
M. Salorio-Corbetto1, T. Baer1, M.A. Stone2 and B.C.J. Moore1
1 Department of Experimental Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
2 Manchester Centre for Audiology and Deafness, University of Manchester, UK
Multichannel compression amplification is widely used in hearing aids. Since it was demonstrated that
two-channel amplitude-compression amplification led to better outcomes than linear amplification
(Laurence et al., 1983), the number of channels in commercial hearing aids has dramatically increased.
This increase has potential advantages, such as the ability to compensate for variations in loudness
recruitment across frequency and to provide appropriate frequency-response shaping. However,
multichannel hearing aids also have potential disadvantages. A large number of channels may decrease
sound quality and speech intelligibility, due to reduction of spectral contrast (Plomp, 1988) and temporal
and spectral distortion (Kates, 2010). The objective of this study was to assess the effect of varying the
number of compression channels on speech intelligibility when the channels were used solely to
implement amplitude compression, and not for frequency-response shaping.
Computer-simulated hearing aids were used, where the frequency-dependent insertion gain
recommended by the CAM2B procedure (Moore et al., 2010; Moore & Sek, 2016) for speech with a level
of 65 dB SPL was applied using a single filter before the signal was filtered into compression channels.
This allowed the desired frequency response to be obtained accurately even with a small number of
compression channels. Compression using 3, 6, 12, and 22 channels was applied subsequently. The
compression speed was either fast (attack 10 ms, release 100 ms) or slow (attack 50 ms, release 3000
ms). Stimuli were IEEE sentences spoken by a male talker, presented in backgrounds that varied in temporal envelope (2- and 8-talker babble) and in signal-to-noise ratio (-3, 0, and +3 dB).
The results for the 12 subjects tested so far indicate that the number of channels has little or no effect on speech intelligibility, regardless of the speed of compression, type of noise, or signal-to-noise ratio used.
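For illustration, a minimal sketch of multichannel compression in which, as in the design above, the channels implement only amplitude compression and no frequency-response shaping. Filter orders, crossover frequencies and compression parameters are illustrative assumptions, not the settings used in the study.

import numpy as np
from scipy.signal import butter, sosfilt

def compress_band(band, fs, ratio=2.0, attack=0.01, release=0.1):
    """One-pole envelope follower with separate attack/release constants,
    followed by a static downward-compression gain."""
    a_att = np.exp(-1.0 / (attack * fs))
    a_rel = np.exp(-1.0 / (release * fs))
    env, out = 1e-6, np.empty_like(band)
    for i, mag in enumerate(np.abs(band)):
        a = a_att if mag > env else a_rel
        env = a * env + (1.0 - a) * mag
        out[i] = band[i] * env ** (1.0 / ratio - 1.0)
    return out

def multichannel_compress(x, fs, crossovers, **kw):
    """Split at the crossover frequencies, compress each band, and sum."""
    bands, lo = [], 50.0
    for hi in list(crossovers) + [0.45 * fs]:
        sos = butter(2, [lo, hi], btype='band', fs=fs, output='sos')
        bands.append(compress_band(sosfilt(sos, x), fs, **kw))
        lo = hi
    return np.sum(bands, axis=0)

# e.g. y = multichannel_compress(x, 44100, crossovers=[500.0, 2000.0])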
Acknowledgements
Supported by the H.B. Allen Charitable Trust.
References
Kates J.M. 2010. Understanding compression: modeling the effects of dynamic-range compression in
hearing aids. Int J Audiol, 49, 395-409.
Laurence R.F., Moore B.C.J. & Glasberg B.R. 1983. A comparison of behind-the-ear high-fidelity linear
aids and two-channel compression hearing aids in the laboratory and in everyday life. Br J
Audiol, 17, 31-48.
Moore B.C.J., Glasberg B.R. & Stone M.A. 2010. Development of a new method for deriving initial fittings for hearing aids with multi-channel compression: CAMEQ2-HF. Int J Audiol, 49, 216-227.
Moore B.C.J. & Sek A. 2016. Comparison of the CAM2A and NAL-NL2 hearing-aid fitting methods for
participants with a wide range of hearing losses. Int J Audiol, 55, 93-100.
Plomp R. 1988. The negative effect of amplitude compression in multichannel hearing aids in the light
of the modulation-transfer function. J Acoust Soc Am, 83, 2322-2327.
P31. Development of an Arabic speech-in-noise test for children
R. Alkahtani1 and D. Rowan2
1 Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
2 Institute of Sound and Vibration Research, University of Southampton, Southampton, UK
Background: Hearing loss (HL) is the most prevalent sensory disorder in children in Saudi Arabia (SA) (Al-Abduljawad & Zakzouk, 2003). It is widely accepted that untreated HL has long-term effects on children’s wellbeing (Hall & Elliman, 2003). Despite this, the age at which HL is identified in children in SA is high compared to other countries. To overcome this issue, it is crucial to implement a child hearing-screening programme in SA, which could lead to early identification and allow early intervention for HL in children (Madell & Flexer, 2008).
Objectives: To develop an Arabic version of the McCormick Toy Test (ATT) in noise, to be used for both hearing screening and clinical diagnosis of HL in Arabic-speaking children. A speech test was chosen over pure-tone audiometry, the gold-standard hearing test, since speech tests describe auditory functioning rather than merely measuring an individual’s audibility (Madell & Flexer, 2008).
Method: Word selection: Seven pairs of monosyllabic words that are age-appropriate for children as young as 3 years old were selected. The words in each pair were chosen to be acoustically similar (sharing the same vowel), as in (Shoe - Spoon) and (Duck - Cup). Audiologists from different Arabic countries reviewed the words for their appropriateness and applicability to children in their own countries. The familiarity of the words was then checked with a focus group of 30 Arabic children aged 3-6 years, after which a final list was selected. Word recording: Recordings took place in an anechoic chamber using a sound level meter, sound card and Adobe Audition 3 software. The speaker was a native Arabic female from the central region of Saudi Arabia. After the recording process, MATLAB code was used to generate noise matching the long-term average speech spectrum of the recorded words and to build an interface for the speech test. Intelligibility equalization of the words: In an attempt to equalize the psychometric functions (PFs) of the words, 30 normal-hearing adults were tested over two sessions. During the first session, the PF of each word was measured. Based on the results, adjustments were made to the presentation levels of some words so that their PFs matched the overall PFs of the other words. During the second session, the PFs of the adjusted words will be measured.
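For illustration, a minimal sketch of one common way to generate noise matching the long-term average speech spectrum: keep the magnitude spectrum of the concatenated words and randomise the phases. This is an assumption about the approach, not the authors’ actual MATLAB code.

import numpy as np

def ltass_noise(speech, n_samples=None):
    """Noise with the same long-term magnitude spectrum as the input."""
    n = n_samples or len(speech)
    magnitude = np.abs(np.fft.rfft(speech, n))
    random_phase = np.exp(2j * np.pi * np.random.rand(len(magnitude)))
    noise = np.fft.irfft(magnitude * random_phase, n)
    return noise / np.max(np.abs(noise))   # normalise peak amplitude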
Results: It is expected that all words will have equal PFs. If so, the next step will be the implementation of the test in an adaptive procedure.
Conclusion: Research is in progress.
References:
Al-Abduljawad, K.A. & Zakzouk, S.M. 2003. The prevalence of sensorineural hearing loss among Saudi children. International Congress Series, Elsevier, 199-204.
Hall, D. & Elliman, D. 2003. Health for All Children, Fourth ed. New York: Oxford University Press Inc.
Madell, J.R. & Flexer, C. 2008. Pediatric Audiology: Diagnosis, Technology and Management. New
York: Thieme Medical Publishers, Inc.
P32. Using the STRIPES test as a measure of spectro-temporal sensitivity and of the effect of
stimulation mode on speech perception
A.W. Archer-Boyd1, W.S. Parkinson2, H.A. Kreft3, J.M. Deeks1, R.E. Turner4, A.J. Oxenham3, J.A.
Bierer2 and R.P. Carlyon1
1 MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, UK
2 University of Washington, Seattle, WA, USA
3 University of Minnesota, Minneapolis, MN, USA
4 Department of Engineering, University of Cambridge, Cambridge, UK
A number of methods for improving performance by cochlear-implant (CI) users, e.g. novel speech-processing algorithms, have been proposed. However, it has not always proved possible to demonstrate the benefits of these approaches. This may be due to the absence of a genuine benefit, or to limitations of the tests used. Listeners have learnt the relationship between their regular speech-processing strategy and speech segments, making it difficult to tell from a speech test whether a new strategy is effective, which could result in an underestimation of the benefits of a new method. This obstacle can be overcome by using psychophysical tests; however, these typically require either spectral or temporal processing, but not both.
The STRIPES (Spectro-Temporal Ripple for Investigating Processor Effectiveness) test
requires, like speech, both spectral and temporal processing to perform well. The test requires listeners to discriminate between stimuli comprising temporally overlapping exponential sine sweeps (the “stripes”) that go up or down in frequency over time. Task difficulty is increased by increasing the sweep density (the number of sweeps present at the same time).
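For illustration, a minimal sketch of generating temporally overlapping exponential sine sweeps of the kind used in STRIPES. Frequencies, durations and the staggering scheme are illustrative assumptions, not the exact test parameters.

import numpy as np

def exp_sweep(f0, f1, dur, fs, onset=0.0):
    """Exponential sine sweep from f0 to f1 Hz over dur seconds, delayed
    by `onset` seconds; frequency rises at a constant log-frequency rate."""
    t = np.arange(int(dur * fs)) / fs
    k = np.log(f1 / f0)
    phase = 2 * np.pi * f0 * dur / k * (np.exp(k * t / dur) - 1.0)
    return np.concatenate([np.zeros(int(onset * fs)), np.sin(phase)])

def stripes_like(f0=250.0, f1=4000.0, dur=1.0, fs=16000, density=3):
    """Overlap `density` staggered sweeps so several 'stripes' are present
    at the same time (higher density = harder discrimination)."""
    onsets = np.linspace(0.0, dur, density, endpoint=False)
    parts = [exp_sweep(f0, f1, dur, fs, onset=o) for o in onsets]
    n = max(len(p) for p in parts)
    return sum(np.pad(p, (0, n - len(p))) for p in parts)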
Results from eight Advanced Bionics (AB) CI users show good performance with a 12-channel map using logarithmically spaced filters and poorer performance with a 12-channel map using
wider, overlapping logarithmically spaced filters, modelling the effect of increased current spread in the
cochlea. All listeners produced monotonic psychometric functions, and convergent thresholds using an
adaptive staircase method. STRIPES thresholds obtained from eight AB CI users using apical (electrodes
1-6) or more basal (electrodes 7-12) maps with identical input filters (250 Hz to 4 kHz) showed a pattern
of results similar to percentage correct scores for speech in quiet.
We also determined whether STRIPES can consistently capture subject-by-subject
differences in the effect of different stimulation methods. This is important because novel methods
sometimes improve speech perception in some listeners and degrade it in others. Eighteen AB CI users
were tested on STRIPES and on another spectro-temporal test (SMRT; Aronoff & Landsberger, 2013),
using three 14-channel maps in which stimulation was in monopolar (MP), partial tripolar (pTP), or
dynamic tripolar (DT; Bierer et al., 2016) mode.
Results show an effect of stimulation mode for vowels (DT best). There is no evidence that either STRIPES or SMRT is sensitive to modest effects of stimulation mode on speech perception, as
studied by Bierer et al. (2016). This may be because (i) the stimulation mode effects studied were small,
as across-subject results from the two speech scores did not correlate with each other, and (ii) the speech
scores were obtained acutely, and results may correlate with STRIPES or SMRT after take-home
exposure.
Acknowledgements
Author AAB was funded by a grant from Advanced Bionics. This collaboration was made possible in
part by a Visiting Scientist Award granted to author RPC from the Virginia Merrill Bloedel Hearing
Research Center.
References
Aronoff, J.M. & Landsberger, D.M. 2013. The development of a modified spectral ripple test. J Acoust
Soc Am, 134(2), EL217-EL222.
Bierer, J.A., Parkinson, W.S., Kreft, H.A., Oxenham, A.J., Chen, C. & Litvak, L. 2016. Testing a new
cochlear implant stimulation strategy with dynamic focusing. ARO Midwinter Meeting.
P33. The role of offset sensitivity in consonant discrimination in noise
F. Ali1, D.E. Bamiou1,2, S. Rosen3 and J.F. Linden1,4
1 UCL Ear Institute, 332 Grays Inn Rd, London, WC1X 8EE, UK
2 The Royal National Throat, Nose and Ear Hospital, London, WC1X 8DA, UK
3 Department of Speech, Hearing & Phonetic Sciences, UCL, London, WC1N 1PF, UK
4 Department of Neuroscience, Physiology & Pharmacology, UCL, London, WC1E 6BT, UK
Sound offsets are important cues for recognising, distinguishing and grouping sounds, but the neural
mechanisms and perceptual roles of sound-offset sensitivity remain poorly understood. In particular,
while it is known that troughs in amplitude modulation are essential to consonant perception, there is a
gap in the literature relating physiological studies of sound-offset responses in the auditory brain to the
psychophysics of speech perception.
Recent studies in a mouse model of a developmental disorder have reported the discovery of a central auditory deficit specific to the processing of sound offsets (Anderson & Linden, 2016). This
finding raises the possibility that deficits in sound-offset sensitivity might contribute to listening
difficulties associated with developmental disorders. Difficulty perceiving speech in noise is the
characteristic feature of central auditory processing disorder, and is also associated with other
developmental or language disorders (Ferguson et al., 2011). Here, we used mathematical modelling and
auditory psychophysics in human subjects to ask how sound-offset sensitivity relates to discrimination
of vowel-consonant-vowel (VCV) stimuli in multi-talker babble noise.
For mathematical modelling, we used a phenomenological model introduced by Anderson
and Linden (2016), based on the assumption that auditory brain activity arises from a sum of inputs from
independently weighted onset-sensitive and offset-sensitive channels. By reducing the weighting of the
offset-sensitive channel, we simulated reduced offset sensitivity and assessed its influence on the discriminability of model outputs for 48 nonsense vowel-consonant-vowel (VCV) speech stimuli in varying levels of multi-talker babble noise (-12, -6, 0, 6, 12 dB SNR). We show that offset salience in
noise can be used to categorise phonetic consonants, and we identify particular consonants for which
discrimination in noise is more strongly or more weakly affected by offset sensitivity. We also report the
results of a preliminary psychophysical study of offset sensitivity and VCV perception in normal healthy
subjects aged 18-60, comparing ratios of sound-onset to sound-offset reaction times with thresholds for
gap-in-noise detection and VCV discrimination in noise.
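For illustration, a minimal sketch of the weighted onset/offset channel idea described above; implementing the two channels as half-wave rectified envelope changes is an assumption for clarity, not the exact model of Anderson and Linden (2016).

import numpy as np

def onset_offset_response(envelope, w_on=1.0, w_off=1.0):
    """Response as a weighted sum of onset- and offset-sensitive channels.
    Reducing w_off simulates a sound-offset processing deficit."""
    change = np.diff(envelope, prepend=envelope[0])
    onset_channel = np.maximum(change, 0.0)    # rectified envelope rises
    offset_channel = np.maximum(-change, 0.0)  # rectified envelope falls
    return w_on * onset_channel + w_off * offset_channel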
Acknowledgements
Supported by EPSRC and Action on Hearing Loss.
References
Anderson, L.A. & Linden, J.F. 2016. Mind the gap: two dissociable mechanisms of temporal processing
in the auditory system. J Neurosci, 36(6), 1977-1995.
Ferguson, M.A., Hall, R.L., Riley, A. & Moore, D.R. 2011. Communication, listening, cognitive and
speech perception skills in children with auditory processing disorder (APD) or specific
language impairment (SLI). J Speech Lang Hear Res, 54, 211-227.
P34. The contribution of cognition and hearing loss to individual differences in speech intelligibility
in a variety of speech-perception-in-noise tests in younger and older adult listeners
A. Dryden1,2, H. Allen2, H. Henshaw3 & A. Heinrich1
1 MRC Institute of Hearing Research, School of Medicine, The University of Nottingham, NG7 2RD, UK
2 School of Psychology, The University of Nottingham, NG7 2RD, UK
3 NIHR Nottingham Biomedical Research Centre, School of Medicine, The University of Nottingham, NG1 5DU, UK
Successful Speech-in-Noise (SiN) perception can be difficult, particularly for older listeners. A loss of
hearing sensitivity accounts for some of the difficulty, but cannot explain all individual differences.
Cognition, and in particular working memory, has emerged as another key factor. However, our
understanding of the role of cognition for speech perception and its potential interaction with hearing loss
has often been limited by a lack of systematicity and theoretical rigor in the selection of speech stimuli
and cognitive tests. The current project aimed to increase the systematicity of tested SiN situations and
the theoretical rigor of selected cognitive tests. It also aimed to assess the potentially changing role of
cognition for SiN perception with different degrees of hearing loss by testing younger adults (N=50, age
range 18-30 years) and older adult listeners (N=50, age range 60-85 years) who can be expected to differ
in their extent of hearing loss. Hearing sensitivity, measured using pure-tone audiometry, was normal in
all the younger listeners (<20 dB HL PTA 0.25-4 kHz) while the older group showed a range of hearing
loss from normal to mild (<40 dB HL PTA 0.25-4 kHz).
SiN perception was measured for all listeners in a total of six listening situations that
systematically varied in target sound (from high-predictability context sentences to low-predictability
sentences to single words) and background noise (speech-modulated noise or 3-talker babble). This was
done because we hypothesised that the nature of the fore- and/or background signal might affect the type
and extent of cognitive involvement in SiN perception and its interaction with hearing loss. Speech
stimuli were presented via headphones at fixed signal-to-noise ratios that produced approximately 50% intelligibility for a particular condition and age group.
Tests to assess cognitive abilities were selected based on Baddeley’s model of working memory, and each verbal component of the theory, that is, the phonological loop, episodic buffer, and central executive, was assessed by multiple tests. This allowed us to compute latent-variable scores for each theoretical component, making our results more generalisable to the theory than single surface tests would have allowed.
Behavioural performance on the speech perception tests (6 tests), cognitive abilities (4 components) and hearing sensitivity were analysed in two separate linear mixed models, one for each age group. Preliminary results show that individual differences in the phonological loop component
account for differences in intelligibility between the two background noises but not between the three
foreground signals, in the older listener group only. Moreover, different components of the working
memory network appear to be engaged in SiN perception in young and old listeners. Finally, besides a
general role of hearing loss for SiN perception, particularly in older adults, hearing loss also interacted
with the episodic buffer component and masking condition.
P35. What we talk about when we talk about speech intelligibility
W.M. Whitmer and D. McShefferty
MRC/CSO Institute of Hearing Research – Scottish Section, Glasgow, G31 2ER, UK
When discussing speech intelligibility benefits, it is common to refer to the signal-to-noise ratio (SNR) at which a listener can repeat the signal correctly 50% of the time (SNR50). If performance has been
measured robustly, there should be objective equivalence in difficulty across any signal and noise pairs
presented at a listener’s SNR50. It is reasonable to assume that the listener’s perception of difficulty will
also be equivalent across stimuli presented at their respective SNR50s. We found this assumption of
subjective equivalence to be false.
Twenty adult (median age of 67 years) listeners (nine female) of varying hearing ability
(median better-ear average 29 dB HL) participated. In different blocks of trials, listeners were first tasked with repeating back IEEE sentences in same-spectrum noise or two-talker babble at various SNRs.
Individual SNR50s were then estimated from the psychometric functions. Thereafter, in a modification
to our previous method to measure the SNR JND (McShefferty et al., 2015), listeners heard on a given
trial two intervals: a sentence presented in babble and the same sentence presented in same-spectrum
noise. One interval was presented at its SNR50 and the other at its SNR50 plus an increment varying from 0 to 8 dB in 2-dB steps. Listeners were asked to choose which sentence was clearer. All stimulus
combinations and orders were counter-balanced and repeated 12 times.
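For illustration, a minimal sketch of estimating an individual SNR50 by fitting a logistic psychometric function to per-SNR scores; the function form and starting values are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, snr50, slope):
    """Proportion of sentences repeated correctly as a function of SNR."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - snr50)))

def estimate_snr50(snrs, proportion_correct):
    """Return the SNR at which the fitted function crosses 50% correct."""
    (snr50, _), _ = curve_fit(logistic, snrs, proportion_correct,
                              p0=[np.median(snrs), 1.0])
    return snr50

# e.g. estimate_snr50([-9, -6, -3, 0, 3], [0.05, 0.20, 0.55, 0.80, 0.95])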
The result of note was for the 0-dB increment (i.e., when both stimuli were presented at their SNR50s). It was initially expected that listeners would choose each stimulus 50% of the time. On average, however, listeners chose sentences in babble as clearer 64% of the time vs. 36% for sentences in same-spectrum noise [t(19) = 7.81; p < 0.0001]. In the rest of the conditions, this preference or “clarity gap” persisted. There was no correlation between the clarity gap and individual SNR50s, nor with individual differences in SNR50s. This particular result indicates a difference between objective, perceptual benefits
and subjective, perceived benefits (cf. Saunders et al., 2004). If equivalent performance is not perceived
as being equivalent in clarity across stimuli, perhaps an altogether different measure, such as effort, could
yield subjective equivalence.
Acknowledgements
Work supported by the Medical Research Council (grant number U135097131) and the Chief Scientist
Office of the Scottish Government.
References
McShefferty, D., Whitmer, W.M. & Akeroyd, M.A. 2015. The just-noticeable difference in speech-to-noise ratio. Trends Hear, 19, 1-9.
Saunders, G.H., Forsline, A. & Fausti, S.A. 2004. The performance-perceptual test and its relationship to
unaided reported handicap. Ear Hear, 25, 117-126.
P36. Using automatic speech recognition for the prediction of impaired speech perception
L. Fontan1 and C. Füllgrabe2
1 Archean Technologies, Montauban, France
2 Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, UK
Given that the main complaint of people with age-related hearing loss (ARHL) is difficulty communicating with other people in noisy environments, the assessment of unaided and aided speech-identification performance (clinically referred to as speech audiometry) provides important information for the audiologist. However, most standard speech-intelligibility tests can be fairly lengthy and are prone to effects of familiarity with the speech materials, which may limit the tests’ usefulness and applicability. For example,
to achieve a stable score, several lists (of words or sentences) need to be administered for each
combination of hearing-aid settings. Hence, when a large number of such combinations needs to be tested,
the patient may experience increased levels of fatigue resulting in a drop in and/or larger variability of
performance. In addition, in principle, the test material needs to be refreshed for each test condition and
thus the limited number of test items might restrict the number of test conditions the audiologist wishes
to assess.
The use of automatic speech recognition (ASR) could overcome these issues, and provide a fast and
objective means for conducting speech audiometry (Aumont & Wilhem-Jaureguiberry, 2009). Here, we
report the results from two proof-of-concept experiments using an ASR system (i.e., a speech recognizer
composed of a front-end extracting Mel-frequency cepstral coefficients from the speech signal, and a
back-end using acoustic models based on hidden Markov models, a language model and a lexicon) to
recognize speech signals that were processed to mimic some of the perceptual consequences of ARHL at
different levels of severity. In Experiment 1, machine scores were compared to human speech
intelligibility and comprehension scores obtained by young normal-hearing listeners presented with the
same processed speech tokens as those fed to the ASR system (Fontan et al., 2017b). Correlational
analyses revealed significant and consistently strong positive associations between human and ASR
scores (with r generally exceeding .90). In Experiment 2, speech intelligibility for three types of speech
material (i.e., logatoms, words and sentences) was assessed in older listeners with ARHL (Fontan et al.,
2017a). Again, significant positive correlations were found between human and ASR scores but their
strength varied from strong to moderate. Subsequent multiple linear regression analyses revealed that the
prediction of human scores could be improved by taking into consideration not only the ASR scores but
also the general cognitive functioning of the listeners.
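For illustration, a minimal sketch of the MFCC front-end stage described above, using the librosa library; the file path, sampling rate and number of coefficients are illustrative assumptions, and the HMM back-end is not sketched.

import librosa

def mfcc_frontend(wav_path, n_mfcc=13):
    """Extract Mel-frequency cepstral coefficients, frame by frame.
    The back-end (HMM acoustic models, language model and lexicon)
    would consume these features."""
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)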
Acknowledgements
The MRC Institute of Hearing Research is supported by the Medical Research Council (grant number
U135097130).
References
Aumont, X. & Wilhem-Jaureguiberry, A. (2009). European Patent No. 2136359 — Method and Device
for Measuring the Intelligibility of a Sound Distribution System. France: Institut National de
la Propriété Industrielle.
Fontan, L., Cretin-Maitenaz, T. & Füllgrabe, C. (2017a). Automatic speech recognition predicts speech perception in older hearing-impaired listeners. (In preparation).
Fontan, L., Ferrané, I., Farinas, J., Pinquier, J., Magnen, C., Tardieu, J., Gaillard, P., Aumont, X. &
Füllgrabe, C. (2017b). Automatic speech recognition predicts speech intelligibility and
comprehension for listeners with simulated age-related hearing loss. J Speech Lang Hear Res,
(In press).
P37. Individual differences in the amount of benefit obtained from visual speech information when
listening in noise
C.L. Blackburn1, P. Kitterick2, G. Jones1 and P.C. Stacey1
1 Department of Psychology, Nottingham Trent University, NG1 4BU, UK
2 NIHR Nottingham Biomedical Research Centre, Nottingham, NG1 5DU, UK
Being able to see a talker’s face while they are speaking is of great benefit in challenging listening
situations. However, people vary greatly in the extent to which they are able to use this ‘visual speech’
information when listening in background noise (Stacey et al., 2016). Twenty-five participants completed
a battery of tests to examine what factors predict differences in the amount of benefit received from visual
speech. Speech perception was assessed using audio-only and audio-visual stimuli. Factors assessed were
the ability to detect temporal fine structure information, aspects of attention measured using an auditory
elevator and visual map search task from the Test of Everyday Attention, the ability to detect audio-visual
(AV) synchrony, verbal working memory capacity, and tendency towards greater autistic traits measured
using the Autism Spectrum Quotient (AQ). Visual speech benefit was measured by comparing audio-only with audio-visual performance on an open-set speech perception task. Participants identified words in sentences when listening to clear or sine-wave vocoded speech in 16-talker background noise. For
clear speech, three of the predictors explained 59.8% of the variability in the amount of visual speech
benefit gained (F(7,17)=6.1, p=.001). First, general performance was a significant predictor; participants
who performed better on the audio-only and audio-visual tasks received more benefit from visual speech
(β=-.54, p=.003). Second, participants who were more able to detect AV asynchrony obtained more
benefit (β=.45, p=.006). Third, participants who scored more highly on the AQ gained less benefit from visual speech information (β=-.39, p=.013). No significant predictors were identified when speech was
vocoded. Taken together, the results suggest that levels of visual speech benefit received in clear speech
are dependent on the ability to detect speech synchrony in addition to general speech perception abilities, and that the speech-specific deficits in multi-sensory processing shown by those diagnosed with autism may extend to non-clinical populations who score more highly on the AQ.
References
Stacey, P.C., Kitterick, P.T., Morris, S.D. & Sumner, C.J. 2016. The contribution of visual information
to the perception of speech in noise with and without informative temporal fine structure.
Hear Res, 336, 17-28.
P38. Testing a loudness model for binaural time-varying sounds using vocoded speech
J. Schlittenlacher and B.C.J. Moore
Department of Experimental Psychology, University of Cambridge, CB2 3EB, UK
The loudness model of Moore et al. (2016) was developed from a model for time-varying sounds (Glasberg & Moore, 2002) but incorporates the concept of binaural inhibition: a sound at one ear can reduce the internal response to a sound at the other ear (Moore & Glasberg, 2007). It is important to consider the stage of processing at which binaural inhibition takes place. The 2016 model implements
consider the stage of processing at which binaural inhibition takes place. The 2016 model implements
binaural inhibition after calculating short-term specific loudness and implements binaural summation
after calculating long-term loudness. It correctly predicts that an amplitude-modulated pure tone with an
interaural modulation phase difference (IMPD) is perceived as somewhat louder than a tone with no
IMPD (Moore et al., 2016).
A more realistic case is that of two talkers, one on each side of the head, which would lead to
short-term interaural differences in spectral shape and level. To assess the effect of these differences on
loudness while ensuring that participants judged a single percept, we used 4-channel vocoded speech of
one speaker and mixed the channels across recordings and ears. For test sound 1, channels 1 and 3 of
segment A were combined with channels 2 and 4 of segment B for presentation to the left ear, and
channels 2 and 4 of A were combined with channels 1 and 3 of B for presentation to the right ear. For
test sound 2, channels 1 and 2 of one segment were combined with channels 3 and 4 of the other segment.
The comparison stimulus was a diotic sound obtained by summation of the two vocoded segments.
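For illustration, a minimal sketch of the cross-ear channel mixing described above, assuming the four vocoder channel waveforms of each segment (A and B) are available as equal-length arrays.

import numpy as np

def mix_across_ears(chans_a, chans_b, a_to_left):
    """Interleave 4-channel vocoder outputs of segments A and B across ears.
    a_to_left lists the (0-based) channels of A sent to the left ear; the
    complementary channels of B complete that ear, and vice versa."""
    a_to_right = [i for i in range(4) if i not in a_to_left]
    left = sum(chans_a[i] for i in a_to_left) + sum(chans_b[i] for i in a_to_right)
    right = sum(chans_b[i] for i in a_to_left) + sum(chans_a[i] for i in a_to_right)
    return left, right

# Test sound 1: channels 1 and 3 of A (indices 0, 2) to the left ear.
# left, right = mix_across_ears(chans_a, chans_b, a_to_left=[0, 2])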
The duration of each stimulus was 2 sec. A one-up/one-down 2-AFC task was used to estimate
the level differences required for equal loudness of the test and comparison sounds. For each pair of
stimuli, four interleaved tracks were presented in one block with either the test or the comparison stimulus
fixed in level and with the level of the variable stimulus starting 10 dB below or above the equal-level
point.
Individual differences between the subjects tested so far were very small, suggesting that
mixing the channels resulted in the desired single percept. The initial results agree with the prediction of
the 2016 model that, at equal level, the dichotic test stimuli were louder than the diotic comparison
stimulus.
Acknowledgements
JS was supported by the EPSRC (Grant number RG78536).
References
Glasberg, B.R. & Moore, B.C.J. 2002. A model of loudness applicable to time-varying sounds. J Audio
Eng Soc, 50, 331-342.
Moore, B.C.J. & Glasberg, B.R. 2007. Modeling binaural loudness. J Acoust Soc Am, 121, 1604-1612.
Moore, B.C.J., Glasberg, B.R., Varathanathan, A. & Schlittenlacher, J. 2016. A loudness model for time-varying sounds incorporating binaural inhibition. Trends Hear, 20, 1-16.
P39. Amplitude modulation frequency and history contribute to subsequent sound detection in a
modulated scene
K.C. Poole, J. Sollini and J. Bizley
UCL Ear Institute, University College London, London, WC1X 8EE, UK
Our perception of a sound is strongly influenced by the sounds that preceded it. For example, a preceding
(pre-cursor) tone will reduce the detectability of another tone when they are similar in frequency (forward
masking). In the same way an amplitude modulated (AM) pre-cursor masks a modulated signal the more
closely matched their modulation frequencies are. Recent physiological work has hinted at a cortical role
in the suppression of neural responses to prolonged modulated sounds (Malone et al., 2015, Sollini &
Chadderton, 2016), though in different contexts (the former investigated modulation masking the latter
comodulation masking release). Models of these two phenomena have included a modulation filter bank
suggesting the same mechanism could influence both.
In this study, we investigated the effect of modulation masking on the detection of pure tone
signals (3-AFC, adaptive staircase). A brief modulated pre-cursor (600 ms) preceded a tone and simultaneous modulated masker (a 200-ms tone temporally centred on a 400-ms masker). The AM frequency of the pre-cursor (5-50 Hz) was then varied to test the effect of matching or mismatching it to the AM frequency of the simultaneous masker (20 or 30 Hz) on tone detection. This was performed in both a narrowband-carrier configuration (1 kHz, n = 15) and a broadband comodulated configuration (0.25, 0.5, 2 and 4 kHz, n = 12 so far).
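For illustration only, the trial structure described above might be sketched in Python as follows (carrier, modulation rates and the tone level are placeholders; in the experiment the tone level was set adaptively):

    import numpy as np

    fs = 44100
    def am_tone(f_carrier, f_mod, dur):
        # sinusoidally amplitude-modulated tone, 100% modulation depth
        t = np.arange(int(dur * fs)) / fs
        return (1 + np.sin(2 * np.pi * f_mod * t)) * np.sin(2 * np.pi * f_carrier * t)

    precursor = am_tone(1000, 20, 0.6)       # 600-ms pre-cursor (AM 5-50 Hz)
    masker = am_tone(1000, 20, 0.4)          # 400-ms masker (AM 20 or 30 Hz)
    tone = np.sin(2 * np.pi * 1000 * np.arange(int(0.2 * fs)) / fs)
    start = int(0.1 * fs)                    # centre the 200-ms tone in the masker
    masker[start:start + len(tone)] += 0.1 * tone   # tone level: placeholder
    trial = np.concatenate([precursor, masker])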
Our results demonstrate that, as predicted, tone detection in modulated scenes was
significantly influenced by the pre-cursor AM frequency in both narrowband (20 Hz, p<0.01 and 30 Hz,
p<0.0001) and broadband configurations (30 Hz, p<0.01). This demonstrates that modulation masking
influences the detection of subsequent sounds and does so in an AM frequency dependent manner. These
findings have implications for the interpretation of CMR results when using temporally predictable
envelopes.
Acknowledgements
Supported by BBSRC grant awarded to JB.
References
Malone, B.J., Beitel, R.E., Vollmer, M., Heiser, M.A. & Schreiner, C.E. 2015. Modulation-frequency-specific adaptation in awake auditory cortex. J Neurosci, 35, 5904-5916.
Sollini, J. & Chadderton, P. 2016. Comodulation enhances signal detection via priming of auditory
cortical circuits. J Neurosci, 36, 12299–12311.
P40. Computational modelling insights into amplitude-modulation masking of frequency-modulation detection
A. King, L. Varnet and C. Lorenzi
Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL
Research University, CNRS, 75005 Paris, France
After demonstrating tuned masking of frequency modulation (FM) detection by amplitude
modulation (AM) of the same carrier signal (King & Lorenzi, 2016), we attempted to model the masking
features with a simple computational filter-bank model of modulation processing (Dau et al., 1997). It comprised a gamma-tone filter-bank, a 'broken-stick' compressive input-output function, half-wave rectification, a first-order 1-kHz low-pass filter, a first-order 3-Hz high-pass filter and a modulation filter-bank between 2 and 120 Hz with a Q factor of 1. Then, additive and multiplicative noises were combined with the signals in the channels. The model then attempted to detect modulations by cross-correlating these noisy signals with a template based on signals without noise. The model performed this
task in the same adaptive procedure (and with the same stimuli) as the human subjects who produced the
data of AM masking FM (King & Lorenzi, 2016).
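For illustration, the single-channel stages of this chain might be sketched in Python as follows (filter implementations and parameter values are simplifications, not those of the published model; a gamma-tone filter is assumed to have produced `channel`):

    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 44100.0

    def broken_stick(x, knee=1e-3, c=0.3):
        # linear below the knee, compressive above it (placeholder constants)
        y = np.abs(x)
        return np.sign(x) * np.where(y > knee, knee * (y / knee) ** c, y)

    def envelope(channel):
        hw = np.maximum(broken_stick(channel), 0.0)   # compression + half-wave rectification
        b, a = butter(1, 1000.0 / (fs / 2), 'low')    # first-order 1-kHz low-pass
        env = lfilter(b, a, hw)
        b, a = butter(1, 3.0 / (fs / 2), 'high')      # first-order 3-Hz high-pass
        return lfilter(b, a, env)

    def modulation_filterbank(env, centres=(2, 4, 8, 16, 32, 64, 120), q=1.0):
        out = []
        for fc in centres:                            # Q = 1 modulation filters
            lo, hi = max(fc - fc / (2 * q), 0.5), fc + fc / (2 * q)
            b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], 'band')
            out.append(lfilter(b, a, env))
        return out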
The model detects modulations in amplitude; it detects FM through the AM generated by the
FM passing through the channels of the gamma-tone filter-bank. The additive noise was adjusted to fit
the model to behavioural thresholds of AM detection at 2, 4, 8, 16 and 32 Hz. It was then able to produce
similar FM detection thresholds between 8 and 64 Hz, but overestimated thresholds at 2 and 4 Hz. The
model was only able to simulate some of the observed masking effects (16-Hz AM masking FM) when
the model was insensitive to modulation phase above 10 Hz and when the template did not include the
AM masker (avoiding beating between the FM and the AM). The model was unable to simulate the
masking effect of a 2-Hz AM on FM. These results suggest that FM detection probably does not rely on
detecting, at low FM rates, the AM produced by cochlear filtering of the FM. At higher FM rates, detection of FM as AM is plausible, but it only produces the observed masking effects if the model cannot use the phase differences between the FM and the AM masker that occur in some of the gamma-tone filter-bank channels.
Acknowledgements
Supported by two grants from ANR (HEARFIN and HEART projects). This work was also supported by
ANR-11-0001-02 PSL* and ANR-10-LABX-0087.
References
Dau, T., Kollmeier, B. & Kohlrausch, A. 1997. Modeling auditory processing of amplitude modulation:
I. Detection and masking with narrowband carriers. J Acoust Soc Am, 102, 2892-2905.
King, A. & Lorenzi, C. 2016. Amplitude-modulation masking for frequency-modulation detection,
presented at the Basic Auditory Science Meeting, Cambridge, 4-6 September.
P41. Loudness perception in listeners with a complex disorder caused by traumatic experiences
J.L. Verhey1, J. Hots1, P. Braak2 and S. Metzner3
1 Department of Experimental Audiology, Otto von Guericke University, Magdeburg, 39120, Germany
2 Zentrum ÜBERLEBEN, Arbeitsbereich Behandlung und Rehabilitation, Berlin, Germany
3 Zentrum für interdisziplinäre Gesundheitsforschung, Universität Augsburg, Augsburg, Germany
Patients with a post-traumatic stress disorder (PTSD) often have an altered sound perception, especially
a high sensitivity to loud sounds. This could indicate hyperacusis in these patients. This study investigates whether loudness perception is indeed different from that of listeners without PTSD. Listeners were recruited from the Zentrum ÜBERLEBEN gGmbH in Berlin. They participated voluntarily in the
experiment which was approved by the Ethics committee of the University of Magdeburg. The aim of
the experiment was to characterise loudness perception with only a small set of data points to minimise
the stress for the listeners. Thresholds in quiet and the level at the most comfortable loudness (MCL)
were measured using a standard clinical audiometer. The level at MCL was estimated using a categorical loudness scale. The whole dynamic range, i.e., from inaudible to extremely loud, could not be assessed due to the health condition of the listeners. The categories, the instructions and the consent form were translated into the listeners' native language or into a language they were fluent in. In addition, a control group of listeners participated who did not report a traumatic experience but had a similar cultural
background. Listeners with a complex disorder caused by traumatic experiences often showed a reduced
dynamic range between the level at threshold in quiet and level at MCL. However, the results showed a
high inter-individual variability in both groups of listeners.
P42. Effects of induction sequences on the tendency to segregate auditory streams: Exploring the
stream biasing effects of constant- and alternating-frequency inducers
S.L. Rajasingam, R.J. Summers and B. Roberts
Psychology, School of Life and Health Sciences, Aston University, Birmingham B4 7ET, UK
The extent of stream segregation for a test sequence comprising repeating high- (H) and low-frequency
(L) pure tones, presented in a galloping rhythm (e.g., LHL–LHL–…), is often much greater when
preceded by a constant-frequency induction sequence matching one subset of the constituent tones (e.g.,
L–L–L–L–…) than by an inducer of the same duration configured like the test sequence (Haywood &
Roberts, 2013). This difference persists for several seconds after the test sequence begins. The origin of
this effect was explored using a stimulus configuration in which long test sequences were preceded by
various short (2 s) induction sequences.
In experiment 1, 20-s-long test sequences were used (50 LHL– triplets; L = 1 kHz; H = 4, 6,
or 8 semitones above; tone and silence durations = 100 ms) and one or other subset of the inducer tones
was attenuated (0 dB, 6 dB, 12 dB, 24 dB, or infinite). Listeners monitored each test sequence throughout
and reported when they heard it as one stream and when as two streams. Greater attenuation of either
subset of inducer tones led to a progressive increase in the segregation of the subsequent test sequence,
towards that following the constant-frequency inducer. In experiment 2, 12-s-long test sequences were
used (30 HLH– triplets; H = 1 kHz; L = 4, 6, or 8 semitones below) and the frequency of the L-subset of
inducer tones was raised or lowered relative to their test-sequence counterparts, such that ∆fI = 0, 0.5,
1.0, or 1.5 × ∆fT. Either change increased subsequent stream segregation.
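For illustration, a test sequence with the parameters of experiment 1 can be sketched in Python as follows (onset/offset ramps and level setting are omitted; 100-ms tones with a 100-ms silence per triplet give 400 ms per LHL- triplet, matching 50 triplets in 20 s):

    import numpy as np

    fs = 44100
    def tone(freq, dur=0.1):
        t = np.arange(int(dur * fs)) / fs
        return np.sin(2 * np.pi * freq * t)

    def galloping_sequence(n_triplets=50, semitones=6, low=1000.0):
        high = low * 2 ** (semitones / 12)     # H is 4, 6 or 8 semitones above L
        gap = np.zeros(int(0.1 * fs))          # the 100-ms silence, the '-'
        triplet = np.concatenate([tone(low), tone(high), tone(low), gap])
        return np.tile(triplet, n_triplets)    # 50 LHL- triplets = 20 s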
These outcomes are consistent with the notion of stream biasing – that constant-frequency
inducers promote segregation by capturing the matching subset of test-sequence tones into an on-going,
pre-established stream (see, e.g., Bregman & Rudnicky, 1975; Rogers & Bregman, 1993; Haywood &
Roberts, 2013).
Acknowledgements
Supported by Aston University.
References
Bregman, A.S. & Rudnicky, A. 1975. Auditory segregation: Stream or streams? J Exp Psychol Hum
Percept Perform, 1, 263-267.
Haywood, N.R. & Roberts, B. 2013. Build-up of auditory stream segregation induced by tone sequences
of constant or alternating frequency and the resetting effects of single deviants. J Exp Psychol
Hum Percept Perform, 39, 1652-1666.
Rogers, W.L. & Bregman, A.S. 1993. An experimental evaluation of three theories of auditory stream
segregation. Percept Psychophys, 53, 179-189.
P43. Listening across the life span: What you have to do determines how you listen
A. Casaponsa1,3, V. Vianeva1, M.A. Akeroyd1 and J.G. Barry1,2
1 MRC Institute of Hearing Research, University of Nottingham, Nottingham, UK
2 Nottingham University Hospital Trust, Nottingham, UK
3 Department of Linguistics and English Language, Lancaster University, UK
The ability to perceive and understand speech when listening in noise is known to decline with age, and
there is considerable evidence to suggest that as it does, older listeners become more reliant on cognitive
and linguistic skills (Pichora-Fuller, 2008). However, most of this research is based on tasks specifically
sensitized to capture differences in cognition or language.
In this study, we used two different tasks to assess the impact of task design on conclusions about age-related changes in listening abilities. One task (Cole & Perfetti, 1980; Roebuck & Barry, in revision)
involved continuous listening while detecting mispronunciations. The second task (Barry et al., 2014;
Heinrich et al., 2014) involved repeating short sentences out loud. In both tasks, linguistic context was
modulated to assess the role of language in supporting listening. We predicted that compared with the
younger listeners, the older listeners would perform worse on both tasks and would show an increased
reliance on cognitive and linguistic skills for completing them.
Two groups of participants with age-normal hearing completed both listening tasks in quiet
and in a 4-speaker babble noise (0 dB SNR) as well as tests of non-verbal IQ, serial and working memory.
The older adults (n = 32) had a mean age of 64 years with better-ear pure-tone averages (PTA) between 2.5 and 41.5 dB (M = 15.8 dB). The younger adults (n = 32) had a mean age of 23 years and better-ear PTAs between -3.6 and 15.7 dB (M = 2.3 dB). In the continuous listening task, both groups were
significantly worse at detecting mispronunciations in babble compared with quiet (p<.001). PTA alone
predicted mispronunciation detection. There was also a significant predictability effect in the older adults
only (p = .03), suggesting an increased reliance on linguistic cues. In the sentence repetition task, the older
adults performed significantly worse than the younger adults (p < .001) with a marked effect for
predictability in both groups (p < .001). In addition to perceptual abilities (PTA), performance in the
older adults also correlated with cognitive abilities (working memory, non-verbal IQ).
The findings from this study suggest that cognitive abilities only emerge as important for speech-in-noise listening when the listening task is specifically designed to stress those abilities.
It is possible that we are over-stating the role of cognition in supporting everyday communication in
noise. We suggest that tests which better model the everyday listening experience of patients are more
informative. Such tests may also offer an effective tool for assessment, and for patient counselling
regarding listening functionality post-hearing aid fitting.
Acknowledgements
Supported by intramural funding (project U135097130).
References
Barry, J.G., Knight, S., Heinrich, A., Moorhouse, R. & Roebuck, H. 2014. Sensitivity of the British
English sentence set test (BESST) to differences in children’s listening abilities. Poster. BSA.
Keele.
Cole, R.A. & Perfetti, C.A. 1980. Listening for mispronunciations in a children's story: the use of context
by children and adults. J Verb Learn Verb Behav, 19, 297-315.
Pichora-Fuller, M.K. 2008. Use of supportive context by younger and older adult listeners: Balancing
bottom-up and top-down information processing. Int J Audiol, 47, S72-S82.
Heinrich, A., Knight, S., Young, M., Moorhouse, R. & Barry, J.G. 2014. Assessing the effects of semantic
context and background noise for speech perception with a new British English speech test.
Poster. BSA. Keele.
Roebuck, H. & Barry, J.G. (in revision). Listening difficulties and everyday listening: An interaction
between attention and language weaknesses.
P44. Estimating the auditory discrimination thresholds of preschool children: Parametric vs non-parametric methods
N. Gillen1, T.R. Agus2 and T. Fosker1
1 School of Psychology, Queen's University Belfast, BT9 5BN, UK
2 Sonic Arts Research Centre, School of Arts, English & Languages, Queen's University Belfast, BT9 5HN, UK
Adaptive staircase procedures are the most commonly used method for estimating auditory
discrimination thresholds with young children, perhaps largely due to their rather simple and versatile
nature. However, the variability in the attentional capacities of young children makes it difficult to
estimate their thresholds reliably. Here, we investigated the effects that task design and presentation had
on threshold reliability.
Study 1 consisted of a sample of 24 three-year-olds, 23 four-year-olds and 20 adults. The
participants completed a series of frequency discrimination tasks across two timepoints, approximately
two weeks apart. The frequency discrimination tasks were presented in 3I-2AFC and 3I-3AFC
paradigms. Participants also completed the tasks in two presentation formats aimed at eliciting differing
levels of engagement.
Our results showed that task formats designed to elicit greater levels of engagement resulted
in both age groups of preschoolers having significantly lower and more reliable thresholds. However, we
found that the performance level from which thresholds were estimated (p = 75%) was not achieved by
a significant proportion of preschoolers, whereas all adults achieved this criterion. This suggests that the
thresholds produced by preschoolers in this task may be biased, possibly due to variability in attentional
capacities.
Study 2 aimed to measure auditory discrimination thresholds via an adaptive procedure
designed to estimate several psychometric function parameters (Shen & Richards, 2012), rather than just
thresholds. We hypothesised that having more information regarding the psychometric function
parameters would lead to less biased threshold estimates. We also aimed to investigate how individual
variability in cognitive factors, including attention and short-term memory, are related to the threshold
estimates of children.
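As a simplified illustration of this approach (not the Shen & Richards implementation), a maximum-likelihood fit of threshold, slope and lapse rate for a 3-AFC task might look as follows in Python:

    import numpy as np
    from scipy.optimize import minimize

    def p_correct(x, alpha, beta, lam, guess=1/3):
        # logistic psychometric function with threshold alpha, slope beta,
        # lapse rate lam, and a 3-AFC guess rate of 1/3
        logistic = 1.0 / (1.0 + np.exp(-beta * (x - alpha)))
        return guess + (1.0 - guess - lam) * logistic

    def neg_log_likelihood(params, x, correct):
        alpha, beta, lam = params
        p = np.clip(p_correct(x, alpha, beta, lam), 1e-6, 1 - 1e-6)
        return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

    def fit(x, correct):
        # x: per-trial stimulus levels; correct: per-trial 0/1 responses
        res = minimize(neg_log_likelihood, x0=[np.median(x), 1.0, 0.02],
                       args=(np.asarray(x), np.asarray(correct)),
                       bounds=[(None, None), (0.01, None), (0.0, 0.2)],
                       method='L-BFGS-B')
        return res.x   # estimated (threshold, slope, lapse)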
This study consisted of 72 preschoolers aged between three and four years old completing
either an amplitude rise-time discrimination or frequency discrimination task at two timepoints,
approximately one week apart. On a third timepoint, participants completed separate computer-based,
auditory measures of attentional lapse rate, sustained attention and short-term memory span.
The reliability of the psychometric function parameter estimates will be discussed, as well as
the relationship between the cognitive measures and threshold estimates. We will also compare the
differences between the thresholds estimated in Study 1 and Study 2.
Acknowledgements
This project is supported by a strategic priorities studentship grant from the Department for Employment
and Learning, Northern Ireland.
References
Shen, Y. & Richards, V.M. 2012. A maximum-likelihood procedure for estimating psychometric
functions: Thresholds, slopes, and lapses of attention. J Acoust Soc Am, 132(2), 957-967.
P45. No evidence for noise-induced cochlear synaptopathy in humans with obscure auditory
dysfunction
H. Guest1,2, K.J. Munro1,2 and C.J. Plack1,2,3
1 Manchester Centre for Audiology and Deafness, University of Manchester, UK
2 NIHR Manchester Biomedical Research Centre, Central Manchester University Hospitals NHS Foundation Trust, M13 9WL, UK
3 Department of Psychology, Lancaster University, UK
In rodents, noise exposure can destroy synapses between inner hair cells and auditory nerve fibers
(“cochlear synaptopathy”) without causing hair cell loss. Noise-induced cochlear synaptopathy does not
necessarily degrade threshold sensitivity, but is associated with reduced auditory brainstem response
(ABR) amplitudes at medium-to-high sound levels. This pathophysiology has been suggested to degrade
speech perception in noise (SPiN), perhaps explaining why SPiN ability varies so widely among
audiometrically normal humans.
The present study tested for associations between measures of noise-induced cochlear
synaptopathy and obscure auditory dysfunction (OAD): deficits of SPiN despite normal audiometric
thresholds. Individuals with both self-reported and lab-measured SPiN deficits were matched with
controls on the basis of age, sex, and audiometric thresholds up to 14 kHz. ABRs and envelope following
responses were recorded at high stimulus levels, yielding both raw amplitude measures and within-subject difference measures of synaptopathy. Past exposure to high sound levels was assessed by a detailed
structured interview, yielding a measure of lifetime noise exposure based on the equal energy hypothesis.
OAD was not associated with greater lifetime noise exposure, nor with any electrophysiological measure
of synaptopathy. It is possible that retrospective self-report cannot reliably capture noise exposure, and
that the brainstem response measures offer limited sensitivity to synaptopathy. Nevertheless, the results
do not support the notion that noise-induced synaptopathy is a significant aetiology of OAD. It may be
that synaptopathy alone does not have significant perceptual consequences, or is not widespread in
humans with normal audiograms.
Acknowledgements
Supported by the Marston Family Foundation, Action on Hearing Loss, and the Medical Research
Council, UK (MR/L003589/1).
P46. Reduced binaural frequency selectivity with efferent activation
I. Yasin1, M. Kordus2, V. Drga1 and J.L. Verhey2
1 Department of Computer Science, University College London, WC1E 6EA, UK
2 Department of Experimental Audiology, Otto von Guericke University, 39120 Magdeburg, Germany
Binaural notched-noise experiments indicate a reduced frequency selectivity of the binaural system
compared to monaural processing. The ratio of the equivalent rectangular bandwidth of a filter fitted to
dichotic data divided by the corresponding monaural bandwidth derived from diotic data ranges from
about 1.3 to 2.4, depending on the frequency and the procedures used to derive the filter (Nitschmann &
Verhey, 2013). A similar ratio was found in listeners with a sensorineural hearing loss, indicating that the
mechanism underlying this effective broadening of the monaural filters when binaural cues are processed
is unaltered when the cochlear nonlinearity is reduced (Nitschmann et al., 2010).
The present study investigates how efferent activation (via the medial olivocochlear system) affects binaural frequency selectivity in normal-hearing listeners. Cochlear gain and nonlinearity are reduced by this efferent activation (e.g., Yasin et al., 2013). Thus we hypothesize that monaural
and binaural frequency selectivity may be affected by efferent activation. Thresholds were measured for
a 1-kHz signal embedded in a diotic bandpass-filtered notched-noise masker (60-2000 Hz) for
various notch widths. The signal was either presented in phase (diotic) or in antiphase (dichotic), gated
with the noise. In order to avoid efferent activation due to the masker or the signal, the durations of these stimuli were each set to 25 ms. A bandpass-filtered precursor (60-2000 Hz) without a notch and with the same spectrum level as the masker (30 dB; 63 dB overall level) was presented prior to the masker and signal
to activate the efferent system. The silent interval between the precursor and signal-masker complex was
50 ms. For comparison, thresholds were also measured without the precursor and without the masker.
For the auditory filter simulations, the equivalent rectangular bandwidth (ERB) of a third-order
gammatone filter centered at the signal frequency was fitted to the threshold curves using a power
spectrum model, where the spectrum of the 25-ms notched-noise masker was used as the input to the
model. On average, the without-precursor results indicate an effectively wider binaural filter compared
to monaural filters, which is in agreement with previous studies using the same masking paradigm.
Addition of a precursor reduces frequency selectivity and broadens the filter for both diotic and dichotic
stimuli, in agreement with the hypothesis.
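For illustration, the core of such a power-spectrum-model fit might be sketched in Python as follows (all numerical values are placeholders; b, and hence the ERB, is the quantity adjusted to minimise the error between predicted and measured thresholds across notch widths):

    import numpy as np

    def gammatone_w2(f, fc=1000.0, b=120.0, order=3):
        # magnitude-squared response of an order-n gammatone filter
        return (1.0 + ((f - fc) / b) ** 2) ** (-order)

    def predicted_threshold(notch_hz, n0_db=30.0, k_db=0.0, fc=1000.0, b=120.0):
        f = np.linspace(60.0, 2000.0, 20000)       # masker band, 60-2000 Hz
        in_noise = np.abs(f - fc) >= notch_hz      # symmetric spectral notch
        noise_out = np.trapz(gammatone_w2(f, fc, b) * in_noise, f)
        # threshold: signal power = K times masker power at the filter output
        return 10 * np.log10(noise_out) + n0_db + k_db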
Acknowledgements
Supported by Deutsche Forschungsgemeinschaft (DFG, SFB/TRR 31) and Action on Hearing Loss.
References
Nitschmann, M. & Verhey, J.L. 2013. Binaural notched-noise masking and auditory-filter shape. J Acoust
Soc Am, 133, 2262-2271.
Nitschmann, M., Verhey, J.L. & Kollmeier, B. 2010. Monaural and binaural frequency selectivity in
hearing-impaired subjects. Int J Audiol, 49, 357-367.
Yasin, I., Drga V. & Plack, C.J. 2013. Estimating peripheral gain and compression using fixed-duration
masking curves. J Acoust Soc Am, 133, 4145-4155.
P47. Frequency range of the auditory efferent effect in humans
I. Yasin1, V. Drga1 and C.J. Plack2
1 Department of Computer Science, University College London (UCL), WC1E 6EA, UK
2 Manchester Centre for Audiology and Deafness, The University of Manchester, M13 9PL, UK
Behavioural and physiological evidence suggests that the amount of gain applied to the basilar membrane
(BM) in the cochlea may change during the time course of acoustic stimulation due to efferent neural
feedback via the medial olivocochlear system (Liberman et al., 1996). Recordings from guinea pigs show the
greatest reduction in BM vibration due to efferent stimulation for a stimulating tone close to the
characteristic frequency associated with the recording site (Russell & Murugasu, 1997). Similarly,
suppression of human otoacoustic emissions via efferent activation is more effective for stimulating frequencies at, as well as just above and below, the test-probe frequency (Maison et al., 2000).
The present psychoacoustical study used forward masking to estimate the frequency range of
efferent-mediated cochlear gain reduction in humans (Yasin et al., 2014). This method uses a combined
duration of masker-plus-signal stimulus of 25 ms, which is within the efferent onset delay. The effect of
efferent tuning on cochlear gain reduction can then be studied separately by presenting a precursor of
differing frequencies prior to the onset of the masker-signal stimulus (Drga et al., 2016). Five normal-hearing listeners participated in the study. Masker level at threshold for a 10-dB SL, 4-kHz signal was
obtained in the presence of an on-frequency (4 kHz) or off-frequency (1.8 kHz) tonal forward masker.
The signal and masker durations were set at 6 ms and 19 ms respectively and the masker-signal silent
interval was set to 0 ms. The difference between thresholds for detection of the signal in the presence of
an on- and off-frequency masker provides an estimate of BM gain. The precursor was bandlimited noise
(200-Hz wide, centred at 4 kHz) of 160 ms duration presented at 60 dB SPL. The precursor was centred
at frequencies of 2.5, 3.0, 3.5, 3.75, 4.0, 4.25, 4.5, 5.0 and 5.5 kHz.
On average, for precursor frequencies close to the 4-kHz signal, there was a greater gain
reduction for precursor frequencies lower, rather than higher than the signal frequency. There was a
significant reduction in gain for precursor frequencies of 3.5, 3.75 and 4 kHz (t(4) = 4.27, t(4) = 4.32 and
t(4) = 3.06 respectively, p < 0.05, one-tailed). In order to quantify the extent of the efferent effect, the
data were fitted with rounded exponential (roex) functions with three free parameters (p, w, t) for each
upper and lower filter slope (Patterson et al., 1982). The results are in line with studies that suggest an
asymmetry in the tuning of the efferent effect (Mott et al., 1989; Harrison & Burns, 1993) and the 3-dB
bandwidth of the efferent-effect filter centred at 4000 Hz was estimated to be about 390 Hz (Q3 of 9.9).
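For reference, the roex(p, w, t) weighting function of Patterson et al. (1982) used for these fits has the form sketched below (one (p, w, t) set is fitted per filter slope, with g the normalised deviation from the centre frequency):

    import numpy as np

    def roex_pwt(g, p, w, t):
        # W(g) = (1 - w)(1 + pg)exp(-pg) + w(1 + tg)exp(-tg), g = |f - fc| / fc
        g = np.abs(g)
        return (1 - w) * (1 + p * g) * np.exp(-p * g) + w * (1 + t * g) * np.exp(-t * g)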
Acknowledgements
Supported by an Action on Hearing Loss International Project Grant.
References
Drga, V., Plack, C.J. & Yasin, I. 2016. Frequency tuning of the efferent effect on cochlear gain in humans,
In: Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing (Eds. P. van
Dijk et al.), Springer-Verlag, Heidelberg, 477-484.
Harrison, W.A. & Burns, E.M. 1993. Effects of contralateral acoustic stimulation on spontaneous
otoacoustic emissions. J Acoust Soc Am, 94, 2649-2658.
Liberman, M.C., Puria, S. & Guinan, J.J.Jr. 1996. The ipsilaterally evoked olivocochlear reflex causes
rapid adaptation of the 2f1-f2 distortion product otoacoustic emission. J Acoust Soc Am, 99,
3572-3584.
Maison, S., Micheyl, C., Andeol, G., Gallego, S. & Collet, L. 2000. Activation of medial olivocochlear
efferent system in humans: influence of stimulus bandwidth. Hear Res, 140, 111-125.
Mott, J.B., Norton, S.J., Neely, S.T. & Warr, B. 1989. Changes in spontaneous otoacoustic emissions
produced by acoustic stimulation of the contralateral ear. Hear Res, 38, 229-242.
Patterson, R.D., Nimmo-Smith, I., Weber, D.L. & Milroy, R. 1982. The deterioration of hearing with
age: Frequency selectivity, the critical ratio, the audiogram, and speech threshold. J Acoust
Soc Am, 72, 1788-1803.
Russell, I.J. & Murugasu, E. 1997. Efferent inhibition suppresses basilar membrane responses to near
characteristic frequency tones of moderate to high intensities. J Acoust Soc Am, 102, 1734-1738.
Yasin, I., Drga, V. & Plack, C.J. 2014. Effect of human efferent feedback on cochlear gain and
compression. J Neurosci, 34, 15319-15326.
P48. On the potential benefits of audibility in the 5-10 kHz region for orientation behaviour
W.M. Whitmer1, W.O. Brimijoin1, S.C. Levy2, D. McShefferty1, G. Naylor1 and B. Edwards2
1 MRC/CSO Institute of Hearing Research – Scottish Section, Glasgow, G31 2ER, UK
2 EarLens Corporation, Menlo Park, California, 94025, USA
Hearing prostheses have a limited bandwidth due to multiple aspects of their design, from lower sampling
rates to the acoustic tubing. The current useable frequency range is approximately 0.25-6 kHz. Recent
advances in signal processing as well as novel methods of transduction (e.g., fibre-optic actuation of the
tympanic membrane), however, allow for a greater useable frequency range, up to 10 kHz. Previous
studies have shown a benefit for this extended bandwidth in consonant recognition, gender identification
and spatial release from masking. The question remains whether there would be any direct spatial
benefits to extending bandwidth without pinna cues. To explore this question, we used a dynamic
localization method developed by Brimijoin et al. (2014).
To create a plausible scenario, we used a near-far distinction between targets and distractors
speaking at equivalent levels in a simulated large room with common building materials. Twenty-eight
mostly normal-hearing adult participants were asked to orient themselves as quickly and accurately as
comfortable to a new near-field talker in a background of far-field talkers of the same overall level. At
the start of each trial, a male or female talker at 0° relative to the participant began with a 1-s segment of
a story. Then the opposite-gender talker continued the story for 4 s at ±30-90°. Eight far-field speech
distractors played for all 5 s of each trial. On each trial, all stimuli were low-pass filtered with a cutoff
frequency of 5 or 10 kHz. To remove pinna cues, participants were fitted with microphones above the
pinnae and insert earphones adjusted to provide a linear, zero-gain response. Each location, gender and
low-pass cutoff-frequency condition was repeated four times in random order. In addition to this main
experiment, each participant also orientated to 5-10 kHz bandpass versions of the same near-field signals
without far-field noises.
For each trial, each individual trajectory was analysed for accuracy, duration, velocity,
reversals, misorientations, complexity, start and end time. Results across listeners did not show any effect
of extended bandwidth on accuracy or duration, but did show a significant increase in velocity and
significant decrease in start time (both p < 0.0001). These earlier, swifter orientations are consistent with the hypothesis that extended bandwidth without pinna cues leads to more salient spatial cues in plausible
conditions.
Acknowledgements
Work supported by the Medical Research Council (grant number U135097131) and the Chief Scientist
Office of the Scottish Government.
References
Brimijoin, W.O., Whitmer, W.M., McShefferty, D. & Akeroyd, M.A. 2014. The effect of hearing aid
microphone mode on performance in an auditory orienting task. Ear Hear, 35, e204-212.
P49. Development of a novel method for investigating acoustic stealth awareness
M. Blyth1, D. Rowan1, S. Liversedge2 and A. Allsopp3
1 Institute of Sound and Vibration Research, University of Southampton, UK
2 Centre for Vision and Cognition, University of Southampton, UK
3 Institute of Naval Medicine, Gosport, UK
Acoustic stealth awareness refers to the behaviour associated with remaining acoustically undetectable
in the presence of a nearby enemy or prey. For example, if a soldier must approach an enemy and remain
undetected, it is important that the soldier has an awareness of their acoustic output (footfall,
communication) so as to avoid making themselves detectable by the enemy.
The factors associated with acoustic stealth awareness are unknown. An understanding of the
role of hearing ability in acoustic stealth awareness is particularly relevant for assessing auditory fitness
for duty in military personnel. The aim of the current project is to develop a robust method to assess the
role of hearing ability in acoustic stealth awareness.
The experimental task currently in development involves the participant judging whether a
target observer (e.g. an enemy soldier) at various distances from the participant can detect a sound that is
presented close to the participant. The visual environment and nearby observer conditions are created
using a virtual reality headset (Oculus Rift) in order to allow the independent control of the acoustic and
visual environments.
We will present the findings of an initial experiment to compare distance perception of a
visual target in normal and virtual reality as well as the progress made on the development of the acoustic
stealth awareness task.
Acknowledgements
MB is supported by a PhD studentship funded by the Royal Centre for Defence Medicine.
P50. Speech localisation in noise: A developmental perspective
R.E. Brook1, W.O. Brimijoin2, M.A. Akeroyd1, P.T. Kitterick3 and J.G. Barry1,4
1 MRC Institute of Hearing Research, University of Nottingham, Nottingham, UK
2 MRC/CSO Institute of Hearing Research, Glasgow, UK
3 NIHR Nottingham BRC, Ropewalk House, Nottingham, UK
4 Nottingham University Hospitals NHS Trust, Nottingham, UK
It is well established that children can successfully localise in quiet to an adult-like level by 6 years of
age (Van Deun et al., 2010; Kühnle et al., 2013; Lovett et al., 2012). Less is known about children’s
localisation in noise abilities. Boothalingam et al. (2016) showed that children age 7 to 16 years old can
localise in broadband noise at a similar level to adults, but cannot localise as well as adults in the presence
of speech babble. This new study will further investigate localisation in noise abilities of children,
specifically looking for developmental effects. This is a largely unexplored area and could increase
understanding of how children listen in noise, which is a challenging yet recurrent situation for children
and can affect their educational outcomes.
Forty children, aged 6 to 17 years, and five young adults will be tested. Six years has been chosen
as the minimum age as these children should have adult-like localisation in quiet abilities. Children up to
17 years old will be tested to ensure any developmental effects are observed, as results from
Boothalingam et al. (2016) suggest that teenagers do not have adult-like abilities. All participants will be
typically developing, normal hearing and monolingual English speakers. Listeners will sit within a ring
of 24 speakers, spaced 15° apart, and complete a 5-AFC task. The 5-AFC task was chosen to ensure that localisation, rather than detection, is tested, as this was raised as an issue during pilot testing. The 15°
separation angles were chosen following pilot testing, which showed 30° to be too easy. Male or female
voiced IEEE sentences will be played from 6 randomly selected speakers, and a colour word voiced by a talker of the opposite gender to the IEEE sentences will be played from 1 of the 5 labelled speakers (angles: 0°,
±15°, ±30°). The listener will indicate which speaker they heard the colour from using a touchscreen. An
adaptive test method will be used to find the signal level at which the listener achieves 50% localisation
accuracy. It is predicted that children in their mid-late teens will have adult-like localisation in noise
abilities.
Preliminary results from child participants will be presented. Pilot results from adult
participants have shown this task to be repeatable and sensitive to age effects. Following this initial work,
a study will be carried out to investigate how localisation skills are applied in real world listening
situations, specifically orientation to support listening in noise in typically developing children and
children with Auditory Processing Disorder.
References
Boothalingam, S., Macpherson, E., Allan, C., Allen, P. & Purcell, D. 2016. Localization-in-noise and
binaural medial olivocochlear functioning in children and young adults. J Acoust Soc Am,
139(1), 247-262.
Kühnle, S., Ludwig, A.A., Meuret, S., Kuttner, C., Witte, C., Scholbach, J., Fuchs, M. & Rubsamen, R.
2013. Development of auditory localization accuracy and auditory spatial discrimination in
children and adolescents. Audiol Neurootol, 18(1), 48-62.
Lovett, R.E., Kitterick, P.T., Huang, S. & Summerfield, A.Q. 2012. The developmental trajectory of
spatial listening skills in normal-hearing children. J Speech Lang Hear Res, 55(3), 865-878.
Van Deun, L., van Wieringen, A. & Wouters, J. 2010. Spatial speech perception benefits in young
children with normal hearing and cochlear implants. Ear Hear, 31(5), 702-713.
P51. An advantage in human echolocation for using emissions containing higher spectral
frequencies is explained by echo intensity
L. J. Norman and L. Thaler
Department of Psychology, Durham University, Durham, DH1 3LE, UK
Humans can learn to echolocate – a skill which allows them to detect and classify objects using reflected
sound waves. People who are trained in echolocation typically use a tongue click as their preferred type
of emission to induce these sound reflections. One acoustic property of the tongue click that is correlated
with echolocation performance is spectral content. Specifically, individuals who produce clicks with
higher spectral frequencies perform better (Thaler & Castillo-Serrano, 2016; Flanagin et al., 2017). A
possible cause of this effect is that echoes from objects of finite size will be stronger for emissions
containing higher frequencies. The current study was designed to test experimentally (1) if emissions
with higher frequencies lead to better performance, and (2) if this effect is mediated by echo intensity.
Methods: We made binaural sound recordings (B&K 4101, Tascam DR-100 MK2, 96 kHz, 24 bit) in an
anechoic room using a human model, whose mouth was positioned behind a loudspeaker (Fostex
FE103En) that emitted artificially generated emissions. The emissions were either 500 ms noise (‘white’
noise with 9 dB boost at 3500, 4000 or 4500 Hz) or a 5 ms click (modelled after human mouth clicks;
i.e. sine wave of 3500, 4000 or 4500 Hz modulated by a decaying exponential). Recordings were made
with a wooden disk of 50 cm diameter at 1, 2 or 3 m distance from the loudspeaker, or with no object
present at all. Sighted participants then listened to these recordings and judged whether or not they heard
the reflecting object. They also did the same under conditions in which the sounds had been digitally
altered to equate the intensity of echoes at each distance across emissions.
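For illustration, the artificial click can be synthesised along the following lines in Python (the decay constant is an assumption, not a value taken from the study):

    import numpy as np

    fs = 96000                           # matching the recording sample rate
    def click(f_peak=4000.0, dur=0.005, tau=0.001):
        # sine wave at the peak frequency, modulated by a decaying exponential
        t = np.arange(int(dur * fs)) / fs
        return np.exp(-t / tau) * np.sin(2 * np.pi * f_peak * t)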
Results: Participants detected the object with greater accuracy using emissions with higher spectral
content. Furthermore, we were able to eliminate this high frequency advantage by equating the intensity
of the echoes across emissions. These results demonstrate that emissions with higher spectral content
can benefit echolocation performance in conditions where they lead to an increase in echo intensity.
Acknowledgements
Supported by BBSRC (BB/M007847/1).
References
Flanagin, V.L., Schörnich, S., Schranner, M., Hummel, N., Wallmeier, L., Wahlberg, M., Stephan, T. &
Wiegrebe, L. 2017. Human exploration of enclosed spaces through echolocation. J Neurosci,
37(6), 1614-1627.
Thaler, L. & Castillo-Serrano, J.G. 2016. People’s ability to detect objects using click-based
echolocation: A direct comparison between mouth-clicks and clicks made by a loudspeaker.
PLoS One, 11, e0154868.
P52. Learning absolute sound source localisation with limited supervision
Y. Chu and D.F.M. Goodman
Department of Electrical and Electronic Engineering, Imperial College London, SW7 2AZ, UK
An accurate auditory space map can be learned from auditory experience, for example during
development or in response to altered auditory cues such as a modified pinna. We studied neural network
models which learn to localise a single sound source in the horizontal plane using binaural cues based on
a very limited form of feedback. This feedback represents the additional information gained after turning
to that heading, which may not be sufficient to know the correct location of the sound.
For the first model, if after turning the target is within a fixed range, the exact location is given
as feedback. This would represent, for example, visual confirmation of the location of the target. If the
target is outside this range, the only feedback given is whether the sound is to the left or the right. This
left/right feedback is always available, for example by comparing acoustic energy received by the two
ears, and may be related to the auditory evoked orienting behavior of newborns. After training with
simulated data using a gradient descent algorithm, this model successfully learned a non-linear mapping
between the interaural level difference and the azimuth.
In the second model, we removed the precise feedback entirely and showed how to learn the
mapping with only left/right feedback using a maximum-margin neural network.
We tested both models with different levels of internal and external noise, which allows us to
compare to experimental data, for example to predict just noticeable differences. Our results show that
the auditory space mapping can be calibrated even without explicit supervision.
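As an illustration of the left/right-feedback idea only (a toy Python sketch, not the authors' network or their maximum-margin formulation; the ILD cue and basis functions below are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    centres = np.linspace(-1.0, 1.0, 20)       # basis centres over the ILD axis
    weights = np.zeros_like(centres)

    def features(ild):
        return np.exp(-((ild - centres) ** 2) / 0.02)   # Gaussian basis functions

    def estimate(ild):
        return features(ild) @ weights                  # estimated azimuth (deg)

    for _ in range(20000):
        azimuth = rng.uniform(-90.0, 90.0)
        ild = np.tanh(azimuth / 45.0)                   # toy ILD cue (assumption)
        # feedback after turning by the estimate: only the sign of the residual
        err_sign = np.sign(azimuth - estimate(ild))
        weights += 0.01 * err_sign * features(ild)      # sign-based update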
Acknowledgements
YC is supported in part by China Scholarship Council (Grant No. 201606210126).
P53. Listening to moving sound sources: compensation for head movement for sounds in different
directions
J.O. Stevenson-Hoare, T.C.A. Freeman and J.F. Culling
School of Psychology, Cardiff University, Cardiff, CF10 3AT, UK
Motion in the acoustic image is inherently ambiguous as it could arise from movement of the head with
a stationary source, or vice versa. Evidence suggests this ambiguity is resolved using ‘extra-cochlear
signals’ that describe head-movement, such as vestibular signals. But, the resulting compensation for
head-movement has been shown to be inaccurate (Freeman et al., 2017).
Two recent models predict that lateral sound presentation should increase head-compensation
accuracy. The Equivalent Arc Ratio (EAR) model (Brimijoin, 2017) explains changes in perceived motion
as a function of azimuth using spatial distortion that is expanded towards the median line and compressed
towards the sides. A Bayesian model (Freeman et al., 2017) suggests that when the precision of acoustic
signals changes with factors like azimuth, their bias towards a no-motion prior will change. Both models
predict that compensation accuracy should improve (or possibly reverse) when sources are placed
laterally.
We presented sounds to participants at different azimuths using a loudspeaker ring. Source
motion was a proportion (gain factor) of real-time tracked head-movement.
Experiment 1 used the method-of-constant-stimuli. Participants judged whether sounds
moved in the same or opposite direction to their head-movements for a range of gains. Point of Subjective
Equality (PSE) determined compensation accuracy; psychometric function slope determined precision.
Front/Behind sounds needed to move slightly with the head to be perceived as stationary; for Behind this
effect was heightened. Left/Right compensation was more accurate. Precision was lower for Left/Right
presentations than Front. However, Behind showed similar precision to Left/Right despite the difference
in PSE. This may have been due to confusion between ‘same’ and ‘opposite’ in Behind.
Experiment 2 used the method-of-adjustment, with participants adjusting stimulus gain to find
the PSE. Precision was defined as the standard deviation over the 20 settings made by each listener per
condition. Results followed the same pattern as Experiment 1, except that Behind was now as precise as
Front.
Overall, these results provide broad support for both models. Neither, however, can explain
why head-compensation is worse for Behind than Front.
References
Brimijoin, W.O. 2017. The equivalent arc ratio for auditory space. bioRxiv.
Freeman, T.C.A., Culling, J.F., Akeroyd, M.A. & Brimijoin W.O. 2017. Auditory compensation for head
rotation is incomplete. J Exp Psychol Hum Percept Perform, 43(2), 371-380.
P54. “Turn an Ear to Hear”: listeners’ benefit of head orientation to speech intelligibility in
realistic social settings
J.A. Grange, J.F. Culling, B. Bardsley and L. MacKinney
School of Psychology, Cardiff University, Cardiff, CF10 3AT, UK
It was first highlighted in Culling et al. (2012) that the Jelfs et al. (2011) model of spatial release from
masking could be used to predict the head-orientation benefit (HOB) to speech intelligibility in noise for
hearing-impaired (HI) listeners and cochlear implant (CI) users. Grange and Culling’s (2016a) normal-hearing (NH) baseline validated the model for HOB and confirmed that listeners could gain as much as
8 dB from orienting away from facing a target talker in a sound-treated room. Grange & Culling (2016b)
confirmed that CI users and NH listeners alike could obtain a 30° HOB (up to 5 dB) without degrading
their lip-reading ability. To verify the robustness of HOB with multiple interferers in a reverberant
environment, a virtual simulation of a real restaurant was also presented. It showed that NH listeners
binaurally immersed in the restaurant (over headphones) could obtain a ~2 dB HOB when surrounded
by interfering steady speech-shaped noises (SSN) or talkers alike. Listeners’ ability to segregate the
target talker’s voice compensated for any modulation or informational masking generated by the babble
formed by 9 interfering talkers.
Two new simulations following the same protocol are presented, involving HI
listeners (mild to severe high-frequency loss) as well as NH listeners attending to the target via the
SPIRAL CI simulator. SPIRAL is a tonal vocoder that enables binaural simulations and does not suffer
from the extraneous temporal fluctuations a noise vocoder exhibits. Overall, SSN-masked SRTs were
significantly higher in HI (+2 dB) and simulated-CI listeners (+13.5 dB) compared to NH listeners’.
HOBs did not differ significantly between listener groups in the SSN condition. The babble condition,
compared to SSN, led to significantly higher SRTs for HI listeners (+1.5 dB) and simulated CI users (+5
dB). The broadening of HI listeners’ auditory filters and the even-more-limited CI spectral resolution
rendered target voice segregation more difficult. A significant interaction between interferer type and
head orientation in simulated CI users meant that the effect of modulation or informational masking on
SRTs varies faster with head orientation than that of energetic masking.
Acknowledgements
Partially supported by Action on Hearing Loss.
References
Culling, J.F., Jelfs, S., Talbert, A., Grange, J.A. & Backhouse, S.S. 2012. The benefit of bilateral versus
unilateral cochlear implantation to speech intelligibility in noise. Ear Hear, 33, 673-682.
Grange, J.A. & Culling, J.F. 2016a. The benefit of head orientation to speech intelligibility in noise. J
Acoust Soc Am, 139, 703-712.
Grange, J.A. & Culling, J.F. 2016b. Head orientation benefit to speech intelligibility in noise for cochlear
implant users and in realistic listening conditions. J Acoust Soc Am, 140, 4061-4072.
Jelfs, S., Culling, J.F. & Lavandier, M. 2011. Revision and validation of a binaural model for speech
intelligibility in noise. Hear Res, 275, 96-104.
P55. Investigating audio-vestibular integration and the front/back illusion
A.I. McLaren1, I.R.C. Swan2 and W.O. Brimijoin1
1 Medical Research Council/Chief Scientist Office Institute of Hearing Research – Scottish Section, UK
2 School of Medicine, Dentistry and Nursing, College of Medical, Veterinary and Life Sciences, University of Glasgow, UK
In the real world, sound localisation is a dynamic process, with listener and sound-source movements
combining in unpredictable ways. This leads to complicated auditory cues and often causes localisation
errors such as front/back confusions. To interpret these cues in a reliable way, three abilities should be
required of the listener: (1) accurate source-motion processing, (2) accurate self-motion processing, (3)
integration of source- and self-motion cues. This would allow a listener to make use of dynamic cues
caused by movement of the head, and to render a stable perception of the world around them. Given the
inherent front/back ambiguity in static binaural cues (which can be resolved by dynamic cues resulting
from a simple head rotation), this ability to process self-motion and compare it to source motion is
necessary, even for interactions with a stationary world.
We hypothesize that front/back confusions could arise not only due to a listener having poor
spatial hearing, but also due to a listener having an underlying vestibular problem (even if they had
excellent spatial hearing as tested in a clinic). Additionally, we hypothesize that failure to integrate spatial
hearing and self-motion cues would lead to similar front/back localisation issues, even if these abilities
are both independently normal.
A battery of three tests was run with both normal-hearing and hearing-impaired listeners to
test these hypotheses regarding an accurate percept of the three-dimensional world. A clinical standing
balance test (Modified Clinical Test of Sensory Interaction on Balance) was conducted to estimate the
subjects’ processing of their own motion and vestibular health. Measures of the minimum angle of sound-source motion detectable – the minimum audible moving angle (MAMA) – were taken to establish their
ability to process moving sounds. Finally, an adapted version of the front/back illusion test previously
conducted in the lab (in which sounds are moved in real time to give the perception of a sound located in
the opposite hemifield) was carried out to examine the ability to combine self and sound-source motion.
Unexpectedly, no relationship was found between the three tests, potentially due to a lack of truly balance-impaired participants. Further work with a group of balance-impaired participants may be required to test
this. It is also possible that listeners give different weightings to various self-motion cues (vestibular,
visual and proprioceptive), and that a different way of combining balance scores may lead to a
relationship being seen.
Acknowledgements
This work was supported by the Medical Research Council (grant number U135097131) and by the Chief
Scientist Office of the Scottish Government.
P56. An information-theoretic analysis of auditory features in noisy environments
L.P.J. Weerts1,2, D. Goodman3 and C. Clopath1
1 Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
2 Centre for Neurotechnology, Imperial College London, London, SW7 2AZ, UK
3 Department of Electrical Engineering, Imperial College London, London, SW7 2AZ, UK
Current engineering efforts such as automatic speech recognition (ASR) systems and cochlear implants
tend to perform much worse than the human auditory system in noisy environments. We aim to address
this problem by developing auditory-inspired features that are robust to noise. Here, we review several
general principles that appear to be important for the noise robustness of the auditory system, such as
precisely timed inhibition. These principles are combined with a basic model of the auditory nerve (a half-rectified gammatone filterbank) to create a new set of auditory features. To assess the quality of these features, we use information-theoretic measures. Altogether, this new class of features may allow us both to improve current understanding of general principles in the auditory system and to find new features that could be directly applied to ASR systems.
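As an illustration of the kind of measure involved (the binning scheme and inputs here are assumptions, not the authors' method), the mutual information between one binned feature and a set of integer class labels can be estimated from co-occurrence counts:

    import numpy as np

    def mutual_information(feature, labels, n_bins=16):
        # feature: 1-D float array; labels: 1-D integer array of class labels
        edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1)[1:-1])
        f = np.digitize(feature, edges)                 # bin index per sample
        joint = np.zeros((n_bins, int(labels.max()) + 1))
        for fi, li in zip(f, labels):
            joint[fi, li] += 1
        joint /= joint.sum()                            # joint distribution
        pf = joint.sum(axis=1, keepdims=True)           # feature marginal
        pl = joint.sum(axis=0, keepdims=True)           # label marginal
        nz = joint > 0
        return float(np.sum(joint[nz] * np.log2(joint[nz] / (pf @ pl)[nz])))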
Acknowledgements
This work was supported by the Engineering and Physical Sciences Research Council [grant number
EP/L016737/1].
P57. How predicted suppression is affected by the realization of the DRNL filter
M. Gottschalk and J.L. Verhey
Department of Experimental Audiology, Otto von Guericke University, Magdeburg, 39120, Germany
The dual-resonance nonlinear (DRNL) filter has been used to describe nonlinear effects in peripheral
processing. The DRNL filter consists of a linear part and a nonlinear part. The linear part consists of a bandpass filter and a low-pass filter; in the nonlinear part, a nonlinear I/O function is sandwiched between two bandpass filters. A bank of DRNL filters with the human parameter set of Lopez-Poveda & Meddis (2001) was used by Jepsen et al. (2008) to successfully model several sets of psychoacoustic data.
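For illustration, the two-path structure can be sketched in Python as follows (filter orders, bandwidths and broken-stick constants are placeholders, not the human parameter set):

    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 44100.0

    def bandpass(x, lo, hi, order=2):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], 'band')
        return lfilter(b, a, x)

    def lowpass(x, fc, order=2):
        b, a = butter(order, fc / (fs / 2), 'low')
        return lfilter(b, a, x)

    def broken_stick(x, a_lin=1.0, b_comp=0.03, c_exp=0.25):
        # linear at low levels, compressive at high levels (minimum of the two)
        return np.sign(x) * np.minimum(a_lin * np.abs(x),
                                       b_comp * np.abs(x) ** c_exp)

    def drnl(x, fc=1000.0):
        linear = lowpass(bandpass(x, 0.7 * fc, 1.3 * fc), 1.5 * fc)
        nonlinear = bandpass(broken_stick(bandpass(x, 0.8 * fc, 1.2 * fc)),
                             0.8 * fc, 1.2 * fc)
        return linear + nonlinear    # the outputs of the two paths are summed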
In the present study, this parameter set is evaluated by comparing its predictions to
psychoacoustic data on two-tone suppression of Ernst et al. (2010). It is shown that a minor alteration of
filter orders within the DRNL filter can markedly improve the model's predictions. This slight change of
model parameters also leads to a more realistic I/O function than the previous version. The suppression
prediction was further evaluated by comparing it to level-dependent psychoacoustic data by Duifhuis
(1980) for high-side and low-side suppressors. Again, the slightly altered realisation of the DRNL filter
predicts the general trends better than the original version.
References
Duifhuis, H. 1980. Level effects in psychophysical two-tone suppression. J Acoust Soc Am, 67(3), 914-927.
Ernst, S.M., Rennies, J., Kollmeier, B. & Verhey, J.L. 2010. Suppression and comodulation masking
release in normal-hearing and hearing-impaired listeners. J Acoust Soc Am, 128(1), 300-309.
Jepsen, M.L., Ewert, S.D. & Dau, T. 2008. A computational model of human auditory signal processing
and perception. J Acoust Soc Am, 124(1), 422-438.
Lopez-Poveda, E.A. & Meddis, R. 2001. A human nonlinear cochlear filterbank. J Acoust Soc Am,
110(6), 3107-3118.
P58. The model initiative: a framework to facilitate model comparability and testability
J.H. Lestang1, M. Dietz2, T. Marquardt3, P. Majdak4, R. Stern5, S.D. Ewert6, W. Hartmann7 and D.F.M.
Goodman1
1 Department of Electrical and Electronic Engineering, Imperial College London, London, UK
2 National Centre for Audiology, Western University, London, ON, Canada
3 Ear Institute, University College London, London, UK
4 Institut für Schallforschung, Österreichische Akademie der Wissenschaften, Wien, Austria
5 Carnegie Mellon University, Pittsburgh, PA, USA
6 Medizinische Physik, Universität Oldenburg, Oldenburg, Germany
7 Michigan State University, East Lansing, MI, USA
The use of computational models to interpret psychoacoustical results is widespread in auditory
research. Yet, accurately recreating the experimental context in which psychoacoustical data is obtained
can be challenging and often makes model comparability and testability difficult.
To facilitate this, we suggest the use of a framework that integrates experimental protocols on one side and computational models on the other. The experiment side deals with generating sound files the same way it would during a psychoacoustical experiment. The model, in turn, acts as an artificial observer: it processes the sound files and redirects its output to a decision stage. The produced decision is subsequently fed back to the experiment side, which takes it into
account to design a new set of intervals. This framework presents several advantages. First, its simple
organization allows the user to run models written in any language and does not constrain its use to
psychoacoustical data only. Second, the experiment side can easily be handled by pre-existing dedicated third-party software. Last, this framework encourages users to share their models and experimental protocols with the community, thereby facilitating research reproducibility.
A first version of the framework, written in MATLAB and Python, can be found at:
https://github.com/model-initiative/model_initiative
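As an illustrative sketch of the exchange loop only (the file names, the experiment object and its methods below are hypothetical, not the framework's actual API):

    import subprocess, pathlib

    def run_trial(experiment, model_cmd, workdir):
        # experiment side: write the interval sound files for this trial
        experiment.write_intervals(workdir)
        # model side: run the artificial observer (possibly another language)
        subprocess.run(model_cmd + [str(workdir)], check=True)
        # read back the model's decision and let the adaptive rule use it
        decision = int(pathlib.Path(workdir, 'decision.txt').read_text())
        experiment.update(decision)      # designs the next set of intervals
        return decision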
P59. A probabilistic model of concurrent vowel identification
S.S. Smith1, A. Chintanpalli2, M.G. Heinz3 and C.J. Sumner1
1 MRC Institute of Hearing Research, University of Nottingham, NG7 2RD, UK
2 Department of Electrical and Electronics Engineering, Birla Institute of Technology & Science, Pilani-333 031, Rajasthan, India
3 Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
When positioned in a complex auditory environment, individuals with normal hearing (NH) are able to identify
and concentrate on specific components within that environment, famously termed the cocktail party
phenomenon. However, individuals with hearing loss have demonstrated a reduced ability to understand
speech in the presence of interfering speech. There are a multitude of cues which can be used to facilitate
auditory stream segregation (pitch differences, dynamics, onset/offset asynchronies, etc.). Particular
attention has been paid to the positive effect that differences in fundamental frequency (F0) between two
concurrent vowels (steady-state harmonic complexes) has on concurrent vowel identification (CVI)
(review: Micheyl & Oxenham, 2010).
Computer models exist that predict, with some success, the improvement in CVI observed with
increasing F0 difference in individuals with normal hearing. However, particularly in the reduced case
where the concurrent vowels have no F0 difference, existing models of CVI predict human performance
poorly. The most widely cited CVI model (Meddis & Hewitt, 1992) underestimates, on average, the
ability of listeners with normal hearing to correctly identify concurrent vowels with identical F0s by
~20% (human: ~57%, model: ~37%). The poor performance of existing CVI models is also apparent in
incorrect predictions of listener confusions, even when F0 differences are present (Chintanpalli & Heinz,
2013). It therefore seems likely that existing CVI models identify vowels correctly for the wrong reasons.
Here we present a CVI model that combines internal templates of concurrent vowel pairs with a naïve
Bayesian classifier. This probabilistic model contrasts with previous models, which were deterministic
and assumed that a segregation process separated out individual vowel representations based on F0
differences, followed by a comparison with templates of individual vowels. The new model successfully
predicted listeners' systematic perceptual errors when concurrent vowels had identical F0s. When
templates were created using spectral processing, the model predicted these human confusions
excellently. However, only when temporal processing (pooled autocorrelation) was implemented did the
model qualitatively replicate the positive effect that F0 differences have on human CVI. This was the
case whether perfect, or no, knowledge of F0 information was assumed. Our predicted confusion
matrices were highly correlated with human confusion matrices, with Pearson correlation coefficients
between 0.8 and 0.95. Further optimisation of model parameters pointed to F0 estimation being worst at
small F0 differences, consistent with psychoacoustic findings on the perception of pitch in concurrent
vowels (Assmann & Paschall, 1998).
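The classifier stage of such a model can be sketched compactly. The following Python fragment is an
illustration under stated assumptions, not the authors' implementation: it assumes that the internal
representation of each concurrent-vowel pair is a fixed template vector corrupted by independent
Gaussian channel noise, in which case the naïve Bayes decision reduces to choosing the template with
the smallest squared distance to the observation. The feature extraction itself (spectral excitation
patterns or pooled autocorrelation) is left abstract, and all numbers are toy values.

    import numpy as np

    def identify_pair(observation, templates, sigma=1.0):
        """Naive-Bayes identification over vowel-pair templates, assuming
        independent Gaussian noise (std sigma) in each feature channel;
        maximising the Gaussian log-likelihood is then equivalent to
        minimising the squared distance to each template."""
        log_lik = {pair: -np.sum((observation - tmpl) ** 2) / (2 * sigma ** 2)
                   for pair, tmpl in templates.items()}
        return max(log_lik, key=log_lik.get)

    # Toy example: three vowel-pair templates over a 5-channel representation.
    rng = np.random.default_rng(0)
    templates = {("a", "i"): rng.normal(size=5),
                 ("a", "u"): rng.normal(size=5),
                 ("i", "u"): rng.normal(size=5)}
    observation = templates[("a", "i")] + 0.3 * rng.normal(size=5)
    print(identify_pair(observation, templates))   # usually ('a', 'i')

Because the decision is probabilistic rather than a hard segregate-then-match pipeline, systematic
confusions arise naturally wherever templates overlap, which is what allows such a model to capture
listeners' errors at identical F0s.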
References
Assmann, P.F. & Paschall, D.D. 1998. Pitches of concurrent vowels. J Acoust Soc Am, 103(2), 1150-1160.
Chintanpalli, A. & Heinz, M.G. 2013. The use of confusion patterns to evaluate the neural basis for
concurrent vowel identification. J Acoust Soc Am, 134(4), 2988-3000.
Meddis, R. & Hewitt, M.J. 1992. Modeling the identification of concurrent vowels with different
fundamental frequencies. J Acoust Soc Am, 91(1), 233-245.
Micheyl, C. & Oxenham, A.J. 2010. Pitch, harmonicity and concurrent sound segregation:
Psychoacoustical and neurophysiological findings. Hear Res, 266(1), 36-51.
87
BSA Basic Auditory Science Meeting 2017
P60. Complex statistical model for detecting the auditory brainstem response to natural speech and
for decoding attention
M. Kegler, O. Etard, A.E. Forte and T. Reichenbach
Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
The neural activity of the auditory brainstem tracks acoustic features such as the onset of a sound or the
frequency of a pure tone (Skoe & Kraus, 2010). Reichenbach et al. (2016) recently showed that
continuous, non-repetitive monotone speech can be employed to measure the auditory brainstem
response at the fundamental frequency of the speech signal. Forte et al. (2017) developed a method to
extract from a speech signal a waveform that vibrates at the time-varying fundamental frequency of
natural speech. The correlation of this fundamental waveform with the neural response of the auditory
brainstem yields a measure of the brainstem's response to natural, non-repetitive speech. The brainstem
response was thereby measured from bipolar recordings through only four scalp electrodes.
Here we show how the brainstem response to natural speech can be detected from high-density
EEG recordings. We employ linear regression with regularization to reconstruct the fundamental
waveform of a speech signal from the high-density EEG signals at different delays. We find an optimal
delay of 9 ms, in agreement with previous results. The obtained regression model also yields a
topographic map of the amplitude and phase at each electrode.
We then employ the developed method to investigate how attention to one of two speakers can be
decoded from high-density EEG signals. Forte et al. (2017) showed that the brainstem response to
natural, non-repetitive speech is modulated by selective attention to one of two speakers. Here we build
linear regression models to reconstruct the fundamental waveform of an attended as well as of an ignored
speaker. We then employ linear discriminant analysis to classify EEG data according to the attentional
focus. Testing the model on EEG data that was not used for training shows that we can decode attention
with high accuracy (over 90%) from EEG segments only a few seconds in duration. These results may
be combined with cortical responses to yield an efficient readout of attention to one of several sound
sources from EEG recordings.
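As a rough illustration of the two stages described above, the following Python sketch uses scikit-learn's
ridge regression and linear discriminant analysis on synthetic data. Only the 9-ms delay comes from the
text; the sampling rate, channel count, segment length and per-segment features (correlations of the
reconstruction with the attended and ignored speakers' fundamental waveforms) are assumptions, and for
brevity a single decoder is trained where the study builds one regression model per speaker.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    fs, n_samples, n_channels = 1000, 10000, 64       # assumed values
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((n_samples, n_channels))
    fundamental = rng.standard_normal(n_samples)      # attended speaker's waveform

    # Stage 1: regularised linear regression from the EEG channels to the
    # fundamental waveform at a fixed delay (optimal delay reported: 9 ms).
    delay = int(0.009 * fs)
    X, y = eeg[:-delay], fundamental[delay:]
    decoder = Ridge(alpha=1.0).fit(X, y)
    # decoder.coef_ holds one weight per electrode, i.e. a topographic map.

    # Stage 2: per-segment features for attention decoding = correlation of
    # the reconstruction with each speaker's fundamental waveform.
    ignored = rng.standard_normal(n_samples - delay)  # ignored speaker's waveform

    def segment_features(i, seg=fs):                  # one short segment
        rec = decoder.predict(X[i:i + seg])
        return [np.corrcoef(rec, y[i:i + seg])[0, 1],
                np.corrcoef(rec, ignored[i:i + seg])[0, 1]]

    starts = np.arange(0, 8000, fs)
    feats = np.array([segment_features(i) for i in starts])
    labels = np.tile([0, 1], len(starts) // 2)        # toy attention labels
    lda = LinearDiscriminantAnalysis().fit(feats, labels)
    print(lda.predict(feats[:4]))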
References
Forte, A., Etard, O. & Reichenbach T. 2017. Selective auditory attention modulates the brainstem's
response to running speech. Submitted.
Reichenbach, C.S., Braiman, C., Schiff, N.D., Hudspeth, A.J. & Reichenbach, T. 2016. The auditory-brainstem
response to continuous, non-repetitive speech is modulated by the speech envelope
and reflects speech processing. Front Comput Neurosci, 10, 47.
Skoe, E. & Kraus, N. 2010. Auditory brainstem response to complex sounds: a tutorial. Ear Hear, 31,
302-324.
88
BSA Basic Auditory Science Meeting 2017
P61. Gating without swinging: a two-channel model of hair-cell mechanotransduction with bilayer-mediated cooperativity
F. Gianoli1, T. Risler2,3 and A. Kozlov1
1 Department of Bioengineering, Imperial College London, London SW7 2AZ, UK
2 Laboratoire Physico-Chimie Curie, Institut Curie, PSL Research University, CNRS, 26 rue d’Ulm, 75005 Paris, France
3 Sorbonne Universités, UPMC Univ Paris 06, CNRS, Laboratoire Physico-Chimie Curie, 75005 Paris, France
Mechanoelectrical transduction (MET) is the fundamental process of hearing that transforms auditory
stimuli into electrical signals intelligible to the brain. It occurs when auditory stimuli open
mechanosensitive ion channels located in the hair cells of the inner ear. Since 1988, the gating-spring
model developed by Howard and Hudspeth has been the main explanation for this phenomenon: the
stimulus-driven deflection of the stereocilia tenses the tip link, which transmits force to a single MET
channel and thus opens it. Although this model has been very successful, the predicted size of the
movement associated with the channel’s opening is unrealistically large for a single channel.
Furthermore, experiments indicate that each tip link connects to two channels rather than one, and there
is strong evidence that the lipid bilayer supporting the channels modulates their open probability.
We developed a new model of mechanotransduction that features two channels per tip link, which
interact through membrane-mediated elastic forces. Using only realistic parameters, the model not only
quantitatively reproduces all the main physiological properties of MET, but also accounts for
experimental results that were previously unexplained.
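For context, the single-channel gating-spring picture that the two-channel model revises can be
summarised by a two-state Boltzmann relation. The notation below follows common usage in the
gating-spring literature (Howard & Hudspeth, 1988) and is not taken from the present work:

    \[
      P_{\mathrm{open}}(X) = \frac{1}{1 + \exp\!\left(\frac{\Delta G - zX}{k_{\mathrm{B}}T}\right)},
      \qquad z = \gamma\,\kappa_{\mathrm{gs}}\,d,
    \]

where X is the hair-bundle displacement, \Delta G the intrinsic closed-open energy difference, \gamma
the geometric gain between bundle and gating spring, \kappa_{\mathrm{gs}} the gating-spring stiffness,
and d the "gating swing", i.e. the movement associated with channel opening. Fitting measured
open-probability curves fixes the gating force z; with measured values of \gamma and
\kappa_{\mathrm{gs}}, this implies a gating swing of several nanometres, larger than is plausible for the
conformational change of a single channel. Sharing the gating force between two cooperatively coupled
channels per tip link is one way to relax this constraint, which is the reading suggested by the model
above.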
89
BSA Basic Auditory Science Meeting 2017
P62. Distribution of astrocytes, microglia, inhibitory neurones and blood vessels in guinea pig
inferior colliculus
S.D. Webb1 and L.D. Orton1
1 School of Healthcare Science, Manchester Metropolitan University, M1 5GD, UK
Glia are an essential component of the auditory system, from cochlea to cortex, yet we know little of the
roles glia play in the ascending auditory system. Recent evidence suggests that both astrocytes and
microglia form essential components of the neurovascular unit, but this link has not, to our knowledge,
been demonstrated in the inferior colliculus (IC). To address this gap, we employed fluorescent
immunohistochemical staining to reveal the distribution of GFAP+ astrocytes, Iba1+ microglia, GAD67+
neurones and Griffonia simplicifolia lectin 1+ blood vessels in the IC of young, pigmented guinea pigs
(n=4).
Astrocytes were found to encapsulate the dorsal and lateral borders of the IC, forming a dense,
overlapping plexus of glia limitans staining. Astrocytes were also located adjacent to most, but not all,
large and medium-sized penetrating vessels. The ramified processes of perivascular astrocytes coursed
both parallel and orthogonal to vessel walls. Few astrocytes were found throughout the parenchyma,
although some cells with stellate morphology were located in the outermost layer of the dorsal and
lateral cortex.
Conversely, microglia were found to tile the parenchyma throughout the IC, with largely non-overlapping
domains. The majority of microglial processes were distributed radially throughout the neuropil. Some
microglia had processes that wrapped around the outside of blood vessels, and a significant minority had
peri-somatic processes onto GAD67+ neurones. The size and morphology of microglia varied throughout
the IC: larger somata and wider ramified processes were found in the lateral cortex than in the central
nucleus. Microglia in the central nucleus were more numerous than in the dorsal and lateral cortices;
however, microglia in the dorsal cortex had a greater number of branches and junctions. The majority of
microglia in the lateral cortex exhibited a distinctive trend of directionality. The density of GAD67+
neurones and microglia increased from dorsal to ventral regions of the central nucleus.
These observations highlight the differential distribution and putative function of astrocytes
and microglia in IC. Astrocytes may operate to protect the IC via glia limitans and act as part of the
neurovascular unit. Microglia may operate to sense local inflammation and contribute to the
neurovascular unit; however, their variation in morphology and density throughout the IC suggests a
potential role in other aspects of IC physiology that requires further study.
90
BSA Basic Auditory Science Meeting 2017
Delegate list
(Surname, First name: poster/talk numbers, with abstract page numbers in parentheses)

Adjamian, Peyman
Agus, Trevor: P20, P44 (pp. 47, 71)
Akeroyd, Michael: P43, P50 (pp. 70, 78)
Ali, Fatima: P33 (p. 60)
Alkahtani, Rania: P31 (p. 58)
Archer-Boyd, Alan: P32 (p. 59)
Ashmore, Jonathan: P3 (p. 30)
Baer, Thomas: P30 (p. 57)
Barry, Johanna: P43, P50 (pp. 70, 78)
Beierholm, Ulrik: O1 (p. 10)
Berger, Joel: P5, P8 (pp. 32, 35)
Blackburn, Catherine: P37 (p. 64)
Bleeck, Stefan: P18 (p. 45)
Britton, John: P14 (p. 41)
Brook, Rhiannon: P50 (p. 78)
Cabrera, Laurianne: P29 (p. 56)
Calcus, Axelle: O10 (p. 19)
Campos, Ana: O10 (p. 19)
Carlyon, Robert: O15, P22, P32 (pp. 24, 49, 59)
Caswell-Midwinter, Benjamin: P19 (p. 46)
Chait, Maria: O9 (p. 18)
Chu, Yang: P52 (p. 80)
Culling, John: P21, P53, P54 (pp. 48, 81, 82)
de Boer, Jessica: P13 (p. 40)
Deeks, John: O15, P22, P32 (pp. 24, 49, 59)
Drga, Vit: P46, P47 (pp. 73, 74)
Dryden, Adam: P34 (p. 61)
Etard, Octave: O8, P25, P60 (pp. 17, 52, 88)
Flanagan, Sheila
Fletcher, Mark: O6 (p. 15)
Forte, Antonio Elia: O8, P60 (pp. 17, 88)
Forsythe, Ian: O11, P1, P2, P4, P12 (pp. 20, 28, 29, 31, 39)
Freeman, Tom: P53 (p. 81)
Füllgrabe, Christian: P36 (p. 63)
Galarza, Yadira
Gartside, Sasha: O12, P6, P7, P9 (pp. 21, 33, 34, 36)
Gianoli, Francesco: P61 (p. 89)
Gillen, Nicola: P44 (p. 71)
Gockel, Hedwig
Goehring, Tobias
Goodman, Dan: O2, P52, P56, P58 (pp. 11, 80, 84, 86)
Gottschalk, Martin: P57 (p. 85)
Grange, Jacques: P54 (p. 82)
Guest, Hannah: P45 (p. 72)
Halliday, Lorna: O10 (p. 19)
Hammett, Michelle: P1 (p. 28)
Hardy, Alex: P13 (p. 40)
Hayman Tansley, Lisa
Heinrich, Antje: P34 (p. 61)
Hewitt, Dale: P15 (p. 42)
Hládek, Luboš: O4 (p. 13)
Hockley, Adam: P5, P8 (pp. 32, 35)
Ingham, Neil: O13 (p. 22)
Jurado, Carlos: O5 (p. 14)
Kadir, Shabnam
Kandylaki, Katerina: P26 (p. 53)
Kavadias, Constantin
Kegler, Mikolaj: P60 (p. 88)
Killoran, Angie
Knight, Sarah
Krumbholz, Katrin: P13 (p. 40)
Kukovska, Lilia: P3 (p. 30)
Lestang, Jean-Hugues: P58 (p. 86)
Leung, Michael: P4 (p. 31)
Linden, Jennifer: P33 (p. 60)
Maxwell, Hannah: P9 (p. 36)
McLaren, Andrew: P55 (p. 83)
Meddis, Ray
Michiels, Sarah: O17 (p. 26)
Moore, Brian: P30, P38 (pp. 57, 65)
Newton, Sherylanne: P1, P2 (pp. 28, 29)
Norman, Liam: P51 (p. 79)
Nunn, Terry: P16 (p. 43)
Olthof-Bakker, Bas: O12, P6, P7, P9 (pp. 21, 33, 34, 36)
Orton, Llwyd: P62 (p. 90)
Palmer, Alan: P5, P8 (pp. 32, 35)
Phillpot, Jemima
Poole, Katarina: P39 (p. 66)
Rees, Adrian: O12, P6, P9 (pp. 21, 33, 36)
Reichenbach, Tobias: O8, P24, P25, P26, P60 (pp. 17, 51, 52, 53, 88)
Roberts, Brian: P27, P42 (pp. 54, 69)
Rowan, Daniel: P31, P49 (pp. 58, 77)
Rowland, Stephen: O7 (p. 16)
Saiz Alia, Marina
Salorio-Corbetto, Marina: P30 (p. 57)
Schilder, Anne: O16 (p. 25)
Schlittenlacher, Josef: P38 (p. 65)
Smith, Samuel: P59 (p. 87)
Sollini, Joseph: P11, P39 (pp. 38, 66)
Stacey, Paula: P28, P37 (pp. 55, 64)
Stevenson-Hoare, Joshua: P53 (p. 81)
Summers, Robert: P27, P42 (pp. 54, 69)
Sumner, Laura: P24 (p. 51)
Sumner, Chris: P28, P59 (pp. 55, 87)
Thaler, Lore: O3, P51 (pp. 12, 79)
Turton, Laura
Verhey, Jesko: P41, P46 (pp. 68, 73)
Vickers, Deborah: P16, P23 (pp. 43, 50)
Wallace, Mark: P5, P8 (pp. 32, 35)
Webb, Samuel: P62 (p. 90)
Weerts, Lotte: P56 (p. 84)
Weissbart, Hugo: P26 (p. 53)
Weston, Jane
Whitmer, William: P35, P48 (pp. 62, 76)
Wiggins, Ian: O7 (p. 16)
Winter, Ian
Yan, Georgina: P12 (p. 39)
Yasin, Ifat: P46, P47 (pp. 73, 74)