Differential activity in Heschl's gyrus between deaf and hearing
individuals is due to auditory deprivation rather than language modality
Velia Cardin a,b,⁎, Rebecca C. Smittenaar c, Eleni Orfanidou a,d, Jerker Rönnberg b, Cheryl M. Capek e,
Mary Rudner b,1, Bencie Woll a,1
a Deafness, Cognition and Language Research Centre, 49 Gordon Square, University College London, London WC1H 0BT, UK
b Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
c Experimental Psychology, 26 Bedford Way, University College London, London WC1H 0AP, UK
d School of Psychology, University of Crete, Greece
e School of Psychological Sciences, University of Manchester, Manchester M13 9PL, UK

⁎ Corresponding author at: Deafness, Cognition and Language Research Centre, 49 Gordon Square, University College London, London WC1H 0PD, UK, and Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden. E-mail address: [email protected] (V. Cardin).
1 These authors contributed equally to this study.

NeuroImage 124 (2016) 96–106. http://dx.doi.org/10.1016/j.neuroimage.2015.08.073
Article history: Received 7 April 2015; Accepted 24 August 2015; Available online 5 September 2015

Keywords: Heschl's gyrus; Deafness; Sign language; Speech; fMRI

Abstract
Sensory cortices undergo crossmodal reorganisation as a consequence of sensory deprivation. Congenital deafness in humans represents a particular case with respect to other types of sensory deprivation, because cortical
reorganisation is not only a consequence of auditory deprivation, but also of language-driven mechanisms. Visual
crossmodal plasticity has been found in secondary auditory cortices of deaf individuals, but it is still unclear if
reorganisation also takes place in primary auditory areas, and how this relates to language modality and auditory
deprivation.
Here, we dissociated the effects of language modality and auditory deprivation on crossmodal plasticity in
Heschl's gyrus as a whole, and in cytoarchitectonic region Te1.0 (likely to contain the core auditory cortex).
Using fMRI, we measured the BOLD response to viewing sign language in congenitally or early deaf individuals
with and without sign language knowledge, and in hearing controls.
Results show that differences between hearing and deaf individuals are due to a reduction in activation caused by
visual stimulation in the hearing group, which is more significant in Te1.0 than in Heschl's gyrus as a whole.
Furthermore, differences between deaf and hearing groups are due to auditory deprivation, and there is no
evidence that the modality of language used by deaf individuals contributes to crossmodal plasticity in Heschl's
gyrus.
© 2015 Published by Elsevier Inc.
Introduction
Sensory cortices preferentially process inputs from a single modality. However, if input from the preferred modality is absent, these cortices reorganise to process input from other sensory modalities (see
Merabet and Pascual-Leone, 2010 for a review). This reorganisation
takes place in the cortices of both the absent and intact modalities
(Sadato et al., 1998; Bavelier et al., 2001; Finney et al., 2001; Giraud
et al., 2001). Evidence suggests that cortical regions preserve their computational function after plastic reorganisation, but adapt to process a
different type of sensory input (Lomber et al., 2010; Meredith et al.,
2011; Reich et al., 2011; Striem-Amit et al., 2012; Cardin et al., 2013).
Congenital or early auditory deprivation in humans has unique features
with respect to other types of sensory deprivation (i.e., visual) because
such individuals not only lack stimulation in a sensory modality, and so
will process other modalities preferentially, but they will also have
very limited or no access to heard and spoken language, the main
form of human communication. Therefore, visual strategies, such as
speechreading (lipreading) and sign language, develop in response to
the need to communicate. Furthermore, late detection of deafness in
children with parents who communicate in spoken language can also
result in late language acquisition, with consequent functional and
anatomical effects on cortical development and organisation. Thus, the
plastic reorganisation observed in the cortex of deaf individuals is the
result of the interplay between sensory deprivation-driven and language-driven mechanisms.
Studies of language and sensory processing in deaf and hearing
populations have contributed to our understanding of the differential
contribution of each of these factors to plastic changes in regions associated with auditory processing in hearing individuals, in particular in
secondary auditory cortices, such as the superior temporal cortex
(STC). In the left STC of deaf individuals, a region associated with speech
processing in hearing individuals (e.g. Scott and Johnsrude, 2003), and
in the anterior-medial part of the right STC, activity has been shown in
response to sign-language stimulation (Neville et al., 1998; Rönnberg
et al., 1998; MacSweeney et al., 2008; Cardin et al., 2013) and
speechreading (Capek et al., 2008), but not in relation to general visual
processing (Finney et al., 2001; Cardin et al., 2013). In contrast, the right
posterior STC shows responses to general visual stimulation, independently of language experience, pointing towards an effect associated
with auditory deprivation (Finney et al., 2001; Fine et al., 2005; Sadato
et al., 2005; Cardin et al., 2013). These results suggest that in the case
of an interplay of sensory and linguistic factors in plastic reorganisation,
each of these factors has an effect on different regions of the cortex.
Even though there is a large body of evidence for plastic
reorganisation in secondary auditory areas after auditory deprivation,
it is still unclear if visual crossmodal reorganisation also takes place in
primary auditory areas, and how this is influenced by language experience and sensory deprivation. In models of auditory processing in non-human primates, primary auditory areas are grouped in a “core” region,
and secondary areas are grouped in “belt” and “parabelt” regions, located concentrically around the core (see Hackett, 2011 for a review). The
core regions represent the first level of cortical auditory processing, and
the surrounding belt and parabelt regions support higher levels of processing. Evidence from animal studies suggests a degree of crossmodal
plasticity driven by auditory deprivation in core auditory areas. After
auditory deprivation, visual crossmodal plasticity has been shown in the mouse core auditory areas A1 (primary auditory area) and AAF (anterior
auditory field; Hunt et al., 2006), and somatosensory crossmodal
plasticity also in A1 and AAF in mice and ferrets (Hunt et al., 2006;
Meredith and Allman, 2012). In congenitally deaf cats, Kral et al.
(2003) did not find visual or somatosensory crossmodal plasticity in
A1. However, in AAF, neurons do show responses to visual and, more
strongly, somatosensory stimulation (Meredith and Lomber, 2011). In
deaf humans it is less obvious if there is plasticity in these regions, in
particular because the exact composition of the core auditory cortex
and its functional organisation is still a matter of controversy. In
humans, the auditory core is located in Heschl's gyrus (HG), an anatomical landmark that is absent in other mammals, including primates.
Even though different functional and cytoarchitectonic maps of HG
have been described (Morosan et al., 2001; Formisano et al., 2003;
Talavage et al., 2004; Woods et al., 2009; Humphries et al., 2010; Da
Costa et al., 2011; Striem-Amit et al., 2011; Dick et al., 2012), the exact
correspondence between these regions and those in other species is
not clear. Several studies agree that there are two tonotopic maps
within the core region (potentially corresponding to the homologous
macaque A1 and R; Formisano et al., 2003; Talavage et al., 2004;
Woods et al., 2009; Da Costa et al., 2011; Striem-Amit et al., 2011;
Dick et al., 2012; De Martino et al., 2014; Langers, 2014; Schönwiesner
et al., 2014), but their exact orientation with respect to HG is under
debate. A different approach is to characterise primary auditory areas
based on their microstructural anatomical properties. In a study that
quantified the level of myelination across HG in vivo, Dick et al.
(2012) found that the highly myelinated auditory core occupies the
medial 2/3 of HG. In a post-mortem cytoarchitectonic analysis of
HG, Morosan et al. (2001) defined three distinct areas (from
postero-medial to antero-lateral): Te1.1, Te1.0, and Te1.2. Based on its
granularity (Morosan et al., 2001; Hackett, 2011) and anatomical correspondence to the auditory core identified in the myelination analysis of Dick et al. (2012), Te1.0 is the area likely to represent the human
auditory core in the brain of hearing adults. It is still unknown if this is
also the case for deaf individuals.
Anatomical definitions are vital in studies with deaf humans, given
that tonotopic mapping is not possible. Morphometric differences in
the white matter/grey matter ratio between deaf and hearing individuals in HG (Emmorey et al., 2003; but see Lyness et al., 2014) provide an anatomical correlate for potential visual crossmodal plasticity.
Previous studies suggested some level of functional plasticity in
primary auditory areas (e.g. Finney et al., 2001; Fine et al., 2005;
Lambertz et al., 2005), showing activations in auditory cortices that
extended to HG, potentially including primary auditory areas. However,
all these studies consisted of group analyses, in which whole-brain activations are averaged across participants. Given that HG is a small region
and highly variable across individuals (Penhune et al., 2003), smoothing
and averaging across subjects in a group analysis can lead to inclusion of
activations from neighbouring regions. Thus, it is not possible to assign
activations to primary auditory areas without a clear anatomical definition of the regions.
In a recent study of congenitally deaf native signers, in which subregions of HG were delineated anatomically, Karns et al. (2012) showed
plastic reorganisation in HG in response to the presentation of separate
(unimodal) and combined (bimodal) stimuli consisting of air-puffs
(somatosensory stimulation) and light flashes (visual stimulation),
with stronger and more significant effects for the somatosensory modality. A
subsequent study showed differences in the level of activation of HG,
between deaf and hearing individuals, in response to perifoveal (2–7°)
and peripheral (11–15°) visual stimulation (Scott et al., 2014).
Given the detailed anatomical definition, results from these two
studies addressed directly the question of cortical plasticity due to auditory deprivation in HG. However, they used basic sensory stimulation,
without linguistic content, and they were all performed in deaf native
signers. It is therefore possible that responses observed were driven
by top-down effects from language processing areas which have
become responsive to a visuo-spatial language. In hearing individuals,
activity in Te1.0 is modulated by the perceived clarity of speech, independently of differences in basic acoustic properties of the stimuli
(Wild et al., 2012). Effects in HG are also observed with visual language
signals. Speech can also be perceived visually as movements of the face
and mouth (speechreading), and both auditory and visual signals contribute to our final perceptual experience (McGurk and MacDonald,
1976; Campbell, 2008). Speechreading (lipreading) not only activates
regions typically considered secondary auditory areas or speech
processing centres, in both deaf and hearing individuals (Söderfeldt
et al., 1997; see Campbell, 2008 for a review), but it also activates
regions of HG, more towards its lateral portion (Calvert et al., 1997;
Calvert et al., 2000; Pekkola et al., 2005). This activation is significantly
greater in the left temporal cortex, including the planum temporale
and HG (Capek et al., 2008), in congenitally deaf participants (native
signers who are also proficient speechreaders) than in hearing non-signing controls. Furthermore, enhanced responses are observed in
the left HG when auditory and visual speech signals are combined
(Calvert et al., 2000).
In addition to the evidence presented above, which suggests that
language use (speech and sign) in deaf individuals might result in different crossmodal plasticity in primary auditory areas, the fact that the
grey and white matter volume in several brain structures changes
according to language experience in deaf individuals, suggests that
results from deaf native signers cannot be generalised (Olulade et al.,
2014). Characterising which of the components of plastic reorganisation
in primary auditory areas of deaf individuals are caused by language
modality and which by auditory deprivation is not only important for
understanding the basic mechanisms of plastic reorganisation, but also
because of its relevance in terms of approaches to language acquisition
in deaf children, in particular those who have cochlear implants (CI) or
are waiting to receive one. Even though several pieces of evidence show
that crossmodal plasticity correlates with CI success (Giraud et al., 2001;
Rouger et al., 2007; Mangus et al., 2012; Stevenson et al., 2012), and that
speech acquisition in hearing individuals involves the integration of
auditory and visual inputs (Mills, 1987; Lewkowicz and Hansen-Tift,
2012), some studies suggest that plastic reorganisation due to visual
language can interfere with CI success, and that sign language and
speechreading could make this interference worse (Teoh et al., 2004;
Giraud and Lee, 2007). Here we dissociate the effects of language
modality and auditory deprivation on visual crossmodal plasticity in
HG, using a subject-specific anatomical definition, and also a subject-
specific cytoarchitectonic definition of area Te1.0. For this purpose, we
measured the fMRI BOLD response to viewing sign language in congenitally or early deaf individuals with and without sign language knowledge, and in hearing controls.
Materials and methods
Participants
All participants gave written consent to take part in the study, and all
procedures followed the standards set by the Declaration of Helsinki,
and were approved by the local ethics committee. Two groups of
congenitally or early (before 3 years of age) severely-to-profoundly
deaf individuals took part in the study: 1) Deaf Signers (n = 7): who
have deaf parents, are native signers of British Sign Language (BSL),
and knew spoken and written English; and 2) Deaf Oral (n = 7): who
have hearing parents, are native speakers of English who access
language through speechreading, and who have never learned a sign
language. All participants in the group of deaf signers were congenitally
deaf due to a genetic aetiology. Five participants in the group of deaf oral
individuals were congenitally deaf (aetiologies: 1 genetic, 2 rubella, 2
unknown), and two became deaf before 3 years of age (1 meningitis,
1 unknown). All aetiologies and onset of deafness were self-reported.
Seven participants with normal hearing who were native speakers of
English (Hearing Non-signers) served as a separate control group. All
structural scans were visually inspected for anatomical abnormalities.
Groups were matched for: i) sensory loss [only for deaf oral and deaf signers; better-ear Pure Tone Average (PTA; 1 kHz, 2 kHz, 4 kHz; maximum output of equipment = 100 dB): deaf signers = 98.1 dB ± 3.7 SEM (range: 81.7 to >100 dB); deaf oral = 94.5 dB ± 3.3 (range: 81.7 to >100 dB); t(6) = 0.64, p = .54]; ii) age [deaf signers = 46.3 years ± 4.4 SEM; deaf oral = 47.3 ± 1; hearing non-signers = 47.6 ± 3.3; deaf oral vs. deaf signers: t(6) = 0.23, p = .82; deaf oral vs. hearing: t(6) = 0.09, p = .93; hearing vs. deaf signers: t(6) = 0.2, p = .81]; and iii) gender [3 male and
4 female in each group]. Participants in the deaf signers and hearing
non-signers groups were recruited from local databases. Most of the
participants in the deaf oral group were recruited through an association of former students of a local oral-education school. Because of
changing attitudes towards sign language, deaf people are now more likely to be
interested in learning to sign as young adults, even if they were raised
in a completely oral environment and developed a spoken language
successfully. Sign language knowledge was an exclusion criterion for the deaf oral
group. For this reason, all the participants in the deaf oral group were
more than 40 years of age, and participants in the other two groups
were selected to match them.
Stimuli
Results presented in this paper are part of a larger study investigating cross-linguistic differences and phonology in sign language processing, and crossmodal plasticity in signers and non-signers. Whole-brain
comparisons across groups have been reported in an earlier paper
(Cardin et al., 2013), and further results will be reported elsewhere.
Stimuli consisted of videos of sign-based material, each of 2–3 s
duration. Stimuli varied systematically in the amount of linguistic information they contained, including: 1) signs of a familiar sign language
(British Sign Language, BSL), which deliver semantic and phonological
information (to the deaf signer group); 2) signs of an unfamiliar sign
language (Swedish Sign Language, SSL), which were possible BSL
signs, but which were not part of the lexicon, delivering mainly phonological information; 3) cognates, which are lexical signs identical in
form and meaning in both BSL and SSL, and likely to build on the general
principles underpinning the link between form and meaning in sign
language; and 4) invented Non-Signs, which are invented signs that
violate the phonological rules of both BSL and SSL. Non-signs were either taken from Orfanidou et al. (2009) or created following the procedures described there. If a differential recruitment of HG in deaf signers
exists, this design allows us to determine if the effect is due to linguistic
processing elicited by each type of stimulus, or by the different linguistic demands of the tasks. There were four scanning runs, each consisting of 12
blocks of 8 videos of the same stimulus type and task, with blank-screen
inter-trial interval (ITI) of 4.5 s on average. Participants were asked to
perform either a handshape or a location monitoring task (see below),
and prior to each block, a cue picture showed which handshape or location to monitor. A baseline period of 15 s, consisting of an image of the
model sitting without making any movement with his hands, appeared
in between blocks. Throughout the manuscript, the term ‘baseline’ will
refer to this period while the model was in a static position. This baseline condition is different from blank periods of no visual stimulation,
which were also present in between blocks and videos, as described,
and that were not explicitly modelled. The participants' task was to indicate with a button-press if the sign presented in each video had the
same hand-shape or same location as a cue presented just before the
onset of the block. This is a phoneme monitoring task (cf. Grosvald
et al., 2012) for signers, but can be performed as a purely perceptual
matching task by non-signers. Performance in the task was evaluated
by calculating reaction times and d′. The latter was calculated by
counting each detected button press as a positive answer (either hits
or false positives), and equating instances in which participants
did not press the button as ‘no’ answers (either correct rejections or
misses).
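To make this scoring concrete, the sketch below computes d′ under these conventions. This is an illustrative Python implementation, not the authors' analysis code, and the log-linear correction for extreme rates is our assumption, since the paper does not state how hit or false-alarm rates of 0 or 1 were handled.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).

    Every button press counts as a 'yes' (hit or false alarm); every
    absent press counts as a 'no' (miss or correct rejection). The
    log-linear correction (+0.5 per cell) keeps the z-transform finite
    when a rate would otherwise be 0 or 1 (an assumption of this sketch).
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: in a block of 8 videos with 2 targets, a participant detects
# both targets and makes one spurious press on a non-target.
print(d_prime(hits=2, misses=0, false_alarms=1, correct_rejections=5))
```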
Data acquisition
Functional gradient-echo EPI images (TR = 2975 ms, TE = 50 ms,
FOV = 192 × 192 mm, voxel size = 3 mm³, 35 slices) were acquired
on a Siemens Avanto 1.5 T scanner equipped with a 32-channel head
coil at the Birkbeck-UCL Centre for Neuroimaging. The first 7 volumes
of each run were discarded to allow for T1 equilibration effects. A
structural image was collected for each participant using MP-RAGE
(TR = 2730 ms, TE = 3.57 ms, voxel size = 1 mm³, 176 slices).
Data analysis
Functional data were analysed using Matlab 7.10 (MathWorks Inc.,
MA, USA) and SPM8 (Wellcome Department of Cognitive Neurology,
London, UK). For each participant separately, all EPIs were realigned
and then coregistered to each individual's anatomical scan. These EPIs
were used for extracting percent signal changes from the Heschl's
gyrus ROI (HG ROI; see below). EPIs were then normalised for use
with the standard definition of cytoarchitectonic region Te1.0. The
anatomical image was normalised to SPM's standard MNI template
image, and the parameters from this normalisation were used to
normalise all the EPIs. No spatial smoothing was applied.
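As an illustration of this preprocessing path, here is a minimal sketch using the Nipype wrappers around SPM. The file names are hypothetical, and the authors' actual SPM8 batch options are not reported, so interface defaults are assumed throughout.

```python
from nipype.interfaces import spm

# 1) Realign all EPIs (motion correction); the resulting movement
#    parameters are later entered in the GLM as regressors of no interest.
realign = spm.Realign(in_files=['run1.nii', 'run2.nii', 'run3.nii', 'run4.nii'])
realign_out = realign.run().outputs

# 2) Coregister the realigned EPIs to the individual's anatomical scan
#    (these images feed the native-space HG ROI analysis).
coreg = spm.Coregister(target='anatomical.nii',
                       source=realign_out.mean_image,
                       apply_to_files=realign_out.realigned_files)
coreg_out = coreg.run().outputs

# 3) Normalise the anatomical scan to the MNI template and apply the
#    same parameters to the EPIs (used only for the Te1.0 analysis).
normalise = spm.Normalize(source='anatomical.nii',
                          apply_to_files=coreg_out.coregistered_files)
normalise.run()

# Note: no spatial smoothing step, matching the paper.
```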
Each individual's Heschl's gyri were defined using FreeSurfer 5.0.0
(http://surfer.nmr.mgh.harvard.edu/). Complete descriptions of these
procedures are provided in previous publications (Dale et al., 1999;
Fischl and Dale, 2000; Fischl et al., 2001, 2002, 2004; Ségonne et al.,
2004; Han et al., 2006; Jovicich et al., 2006). In short, brightness and
contrast normalisation were performed on the images, followed by
removal of all non-brain tissues with a hybrid watershed/surface deformation procedure (Ségonne et al., 2004). Talairach transformations
were then applied to the images, and subcortical white matter and
deep grey matter structures were segmented (Fischl et al., 2004). The
grey/white matter boundary was then tessellated with automatic
correction of topology (Fischl et al., 2001; Ségonne et al., 2004). Surface
deformation was then performed using intensity gradients, optimally
placing the grey/white and grey/CSF borders where the greatest
change in intensity indicates transition to the other tissue classes
(Dale et al., 1999).
Following completion of the cortical reconstruction, the surface was
inflated (Dale et al., 1999), and registered to a spherical atlas (individual
cortical folding patterns used to align cortical geometry across subjects).
Anatomic parcellation was carried out based on gyral and sulcal patterns, a priori anatomical information, and knowledge of neighbouring
labels (Fischl et al., 2004). From the Destrieux atlas, voxels within the
Heschl's gyrus label (HG ROI) were then exported for use in the ROI
analysis. HG ROIs were visually checked to ensure correct delineation
of the gyrus.
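Driving this reconstruction from Python is straightforward; the sketch below is ours (the subject ID and input file are hypothetical) and simply wraps the standard FreeSurfer command line.

```python
import subprocess

# Run the full FreeSurfer pipeline described above (intensity
# normalisation, skull stripping, segmentation, surface tessellation,
# inflation, spherical registration and Destrieux parcellation).
subprocess.run(
    ["recon-all", "-i", "T1.nii", "-s", "sub01", "-all"],
    check=True,  # raise if FreeSurfer exits with an error
)
```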
Area Te1.0 of Heschl's gyrus was defined using the cytoarchitectonic
maps generated by Tahmasebi et al. (2009), based on those produced by
Morosan et al. (2001). Briefly, Tahmasebi et al. (2009) warped the 10
post-mortem brains and corresponding cytoarchitectonic information
of Morosan et al. (2001) into SPM's standard MNI template. This procedure resulted in cytoarchitectonic definitions in a standard space with
higher specificity and less overlap with surrounding regions. Subjectspecific cytoarchitectonic ROIs were defined by combining, separately
for each participant and each hemisphere, the HG ROI and Tahmasebi
et al. (2009)'s cytoarchitectonic maps (Fig. 1). Each participant's HG
ROI was normalised, using the parameters generated during normalisation of the anatomical scan, and combined with cytoarchitectonic maps
in standard space. Only voxels present in both the FreeSurfer ROI and the specific cytoarchitectonic ROI were included in the analysis.
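In other words, each subject-specific ROI is the voxelwise intersection of two binary masks in the same (MNI) space. A minimal sketch with nibabel, using hypothetical file names:

```python
import nibabel as nib
import numpy as np

# Hypothetical inputs: the subject's normalised FreeSurfer HG label and
# the Te1.0 cytoarchitectonic map, resampled to the same voxel grid.
hg_img = nib.load('hg_label_mni.nii')
hg = hg_img.get_fdata() > 0
te10 = nib.load('te10_map_mni.nii').get_fdata() > 0

roi = hg & te10  # keep only voxels present in both masks
print('Voxels in subject-specific Te1.0:', int(roi.sum()))

nib.save(nib.Nifti1Image(roi.astype(np.uint8), hg_img.affine),
         'te10_subject_roi.nii')
```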
Table 1 shows the number of voxels in each ROI for each group of participants. There was a significant difference in the number of voxels in the left Te1.0 between hearing non-signers and deaf oral (t(12) = 2.78, p = .017). The difference in number of voxels between deaf oral and deaf signers in the right HG ROI approached significance (t(12) = 1.96, p = .074). t-Tests for all other comparisons across groups were not significant at p < .05 (all p > 0.99).
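These comparisons are standard independent-samples t-tests with seven participants per group (hence df = 12); a sketch with hypothetical per-participant voxel counts, since only group means and SDs are reported (Table 1):

```python
from scipy.stats import ttest_ind

# Hypothetical voxel counts in left Te1.0, n = 7 per group.
hearing_non_signers = [95, 88, 70, 102, 91, 85, 92]
deaf_oral = [72, 65, 78, 60, 74, 69, 73]

t_stat, p_value = ttest_ind(hearing_non_signers, deaf_oral)
print(f"t(12) = {t_stat:.2f}, p = {p_value:.3f}")
```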
Table 1
Average number of voxels per ROI.

                      Left HG ROI     Right HG ROI    Left Te1.0     Right Te1.0
                      NV      SD      NV      SD      NV      SD     NV      SD
Hearing non-signers   183.1   22.4    150.3   19.9    89.0    15.1   68.3    8.1
Deaf signers          177.0   23.9    142.3   31.0    76.4    14.9   62.6    17.9
Deaf oral             191.7   38.6    171.3   24.0    70.1    9.7    70.4    18.6

The table shows the average number of voxels (NV) and standard deviation (SD) for each ROI and each group of participants.
Analysis was conducted by fitting a general linear model (GLM) with
regressors representing each stimulus category, task, baseline and
cue periods. For every regressor, events were modelled as a boxcar
representing its duration, convolved with SPM's canonical hemodynamic response function, and entered into a multiple regression analysis to generate parameter estimates for each regressor at every voxel.
Movement parameters were derived from the realignment of the
images and included in the model as regressors of no interest. For
each participant and each hemisphere, average percent signal change
from each ROI was extracted from the baseline and each of the conditions using the MarsBar toolbox (http://marsbar.sourceforge.net/) for SPM.
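The logic of this GLM and of the percent-signal-change computation can be sketched compactly in Python. The double-gamma function below approximates SPM's canonical HRF, and all onsets, durations and signal values are invented for illustration; MarsBar's exact scaling conventions may differ.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.975  # s, from the acquisition parameters

def canonical_hrf(tr, duration=32.0):
    """Double-gamma approximation to SPM's canonical HRF."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def regressor(n_scans, onsets, dur, tr):
    """Boxcar with the given onsets/duration, convolved with the HRF."""
    box = np.zeros(n_scans)
    for onset in onsets:
        box[int(onset / tr):int((onset + dur) / tr) + 1] = 1.0
    return np.convolve(box, canonical_hrf(tr))[:n_scans]

n_scans = 200  # hypothetical run length
X = np.column_stack([
    regressor(n_scans, onsets=[30.0, 300.0], dur=24.0, tr=TR),   # one condition
    regressor(n_scans, onsets=[150.0, 420.0], dur=15.0, tr=TR),  # static baseline
    np.ones(n_scans),           # constant (movement regressors omitted here)
])

# Simulated time course for one voxel, then ordinary least squares.
rng = np.random.default_rng(0)
y = X @ np.array([1.5, 0.4, 100.0]) + 0.5 * rng.standard_normal(n_scans)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Percent signal change of the condition relative to the session mean,
# with the baseline-period estimate subtracted as in the paper.
psc_condition = 100.0 * (beta[0] - beta[1]) / beta[2]
print(f"percent signal change over baseline: {psc_condition:.2f}")
```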
Results
The goal of our study is to characterise differences in crossmodal
plasticity and function of HG caused by auditory deprivation and
language modality. Therefore, our analysis concentrated only on identifying the effects driven by differences between groups or interaction
between group and stimulus type or task. Main effects of stimulus
type or task will reflect differences in basic properties of the stimuli or
task demands, and are not relevant for dissociating the effects of auditory deprivation and language modality.
Fig. 1. Participant-specific regions of interest. Left: inflated representations of the brain of
three participants, one for each group: hearing non-signers (top), deaf signers (middle),
and deaf oral (bottom). The automatically-parcellated Heschl's Gyrus Region of Interest
(HG ROI) is shown in white. Right: participant-specific region Te1.0 (white) is shown
overlaid on an anatomical slice of Heschl's gyrus for the same participants.
Fig. 2. Behavioural performance. Top: averaged reaction times in seconds for each group.
Bottom: averaged d′ values for each group. The bars represent mean ± S.E.M. BSL:
British Sign Language. SSL: Swedish Sign Language.
Behavioural results
Performance (accuracy and reaction times) for each task, stimulus type and group of participants is shown in Fig. 2. A repeated measures ANOVA with reaction times as a dependent variable, and
factors Group (deaf signers, deaf oral and hearing non-signers),
Task (handshape and location), and Stimulus Type (BSL, Cognates,
SSL and non-signs) was performed to determine if there were significant differences in performance.
There was a significant main effect of task (F(1,18) = 47.4, p < .001) and stimulus type (F(3,54) = 15.5, p < .001), but no significant main effect of Group (F(2,18) = 2.25, p = .134), and no significant interactions (F < 1.82 and p > .19 for all interactions).
Given that we found an effect of group in the neuroimaging data (see below), and there was a trend towards slower performance in the group of hearing non-signers, we conducted a post-hoc t-test comparing hearing non-signers to both groups of deaf participants (collapsing across all tasks and stimulus types). This test showed that hearing participants were significantly slower than deaf participants (t(19) = 2.16, p = .044).
A similar repeated measures ANOVA was conducted with accuracy as a dependent variable. There was no significant main effect (F < 1.36 and p > .26 for all main effects), but there were significant interactions between group × stimulus type (F(6,54) = 2.31, p = .046), and task × stimulus type (F(3,54) = 3.75, p = .016). There was no significant 3-way interaction (F(6,54) < 1). We investigated the interaction between group × stimulus type further by conducting separate repeated measures ANOVAs on each group, looking for a significant main effect of stimulus type. This main effect was only found in the group of hearing non-signers (F(3,18) = 3.87, p = .027). Post-hoc t-tests revealed that, in this group, performance with non-signs was significantly worse than performance with BSL (t(6) = 3.00, p = .024) and SSL (t(6) = 4.62, p = .004), and the difference from cognates approached significance (t(6) = 2.11; p = .079).
Neuroimaging results
To determine the contribution of auditory deprivation and language
modality to crossmodal visual plasticity in HG, the responses to stimulus
types and tasks were analysed in the three experimental groups: i) Deaf
Signers; ii) Deaf Oral; and iii) Hearing Non-signers (see Materials and
methods). The stimuli and tasks had linguistic content for deaf signers,
but only required processing of visuo-spatial information for deaf oral
and hearing non-signers. Therefore, any crossmodal effects triggered
by auditory deprivation will be observed in both groups of deaf individuals (deaf signers and deaf oral), whereas any due to language modality
(i.e. sign language experience) will only be observed in the deaf
signers group. First we evaluated if there were differences between
the Baseline condition and all the sign conditions (i.e. all tasks and
stimulus types) across groups. We analysed responses from all
voxels of the anatomical definition of HG ROI, and separately for
the subject-specific cytoarchitectonic region Te1.0, which is likely
to contain (in hearing individuals) the human homologue of the
macaque auditory core. Fig. 3 shows the results from this analysis.
The pattern of results is similar with both ROI definitions. However,
separate repeated measures ANOVAs for the HG ROI and for Te1.0,
with factors Group (deaf signers, deaf oral and hearing non-signers), Hemisphere (right, left) and Visual Condition (baseline,
signs), reveal differences in the significance of the effects (Table 2).
Specifically, in Te1.0 there is a main effect of Group and an interaction between Visual Condition × Group. However, the main effect
of Group is not significant when evaluating the response of HG ROI
as a whole, and the interaction between Visual Condition × Group
only approaches significance. t-Tests from results of Te1.0 reveal a
significant difference in the average level of activation between the
hearing non-signers group and the deaf signers group (t(12) =
3.25; p = .007), and the hearing non-signers group and the deaf
oral group (t(12) = 2.40; p = .034), with more positive percent
signals changes in both groups of deaf participants. No significant
difference was found between the groups of deaf participants
(t(12) = 0.28; p = .782). Comparisons between conditions separately for each group show that the difference between baseline
and sign stimulation is only significant in the hearing non-signers
group (t(6) = 3.07; p = .022), and not in deaf signers (t(6) = 0.95;
p = .379) and deaf oral (t(6) = 1.05; p = .333). These results indicate
that the significant interaction between Visual Condition × Group is due
to differences between conditions in the group of hearing individuals,
and not in the groups of deaf individuals.
Our sign language stimuli contained different levels of linguistic
content, so that if sign language processing occurred in the primary
auditory cortex of deaf signers, we would be able to detect the effect.
For this purpose we conducted repeated measures ANOVAs with factors
Hemisphere (right, left), Group (deaf signers, deaf oral and hearing non-signers), Task (handshape, location), and Stimulus Type (BSL, Cognates,
SSL and Non-Signs) separately for HG ROI and Te1.0. The percent signal
change in the baseline condition was subtracted from percent signal
changes for each of the sign language conditions. Results show a significant main effect of Group in Te1.0, but not in HG ROI, in agreement with
the interaction between Visual Condition × Group of the previous
ANOVA (Table 3). There was no interaction between Stimulus Type ×
Group, or between Stimulus Type × Task × Group, indicating that responses to sign stimuli with different attributes were not different across groups. Results from HG ROI and Te1.0 are shown in Figs. 4 and 5, respectively.
Fig. 3. Activity in Heschl's gyrus differs between hearing and deaf participants, but not between deaf signers and deaf oral. The figure shows the average response of the Heschl's Gyrus Region of Interest (HG ROI; left), and participant-specific cytoarchitectonic region Te1.0 (right). The bars represent group means ± S.E.M. For each group, N = 7 participants. *p < .05; **p < .01. There was no significant difference between deaf signers and deaf oral in any of the ROIs.
Table 2
Repeated measures ANOVA with factors Group, Hemisphere, and Visual stimuli.

Effect                                  F      df     p
HG ROI
  Group                                 1.63   2,18   .224
  Hemisphere                            3.92   1,18   .063
  Hemisphere × Group                    <1     2,18   .842
  Visual stimuli                        1.30   1,18   .270
  Visual stimuli × Group                2.94   2,18   .079
  Hemisphere × Visual stimuli           5.10   1,18   .037
  Hemisphere × Visual stimuli × Group   <1     2,18   .843
Te1.0
  Group                                 4.30   2,18   .030
  Hemisphere                            <1     1,18   .505
  Hemisphere × Group                    <1     2,18   .555
  Visual stimuli                        <1     1,18   .597
  Visual stimuli × Group                5.28   2,18   .016
  Hemisphere × Visual stimuli           <1     1,18   .854
  Hemisphere × Visual stimuli × Group   <1     2,18   .452

Factors in the analysis were: Group (deaf signers, deaf oral and hearing non-signers), Hemisphere (right, left) and Visual Condition (baseline, sign language stimuli). Significant effects (p < .05): Hemisphere × Visual stimuli in HG ROI; Group and Visual stimuli × Group in Te1.0.
Discussion

Studies of early sensory deprivation in mammals have shown plastic reorganisation of the unstimulated cortices (Hubel and Wiesel, 1970; Lomber et al., 2010; Meredith et al., 2011; Meredith and Lomber, 2011). However, the consequences of early auditory deprivation in humans cannot be easily extrapolated from animal studies. This is because in humans, plastic reorganisation is not only due to auditory deprivation, but also due to the acquisition of language through visual strategies such as speechreading and sign language. It is important to characterise this differential contribution, not only to understand plastic reorganisation in the human brain, but also to guide the choice of language and hearing interventions in deaf individuals (Lyness et al., 2013). To our knowledge, our study is the first to dissociate the effects of auditory deprivation and sign language experience on visual crossmodal plasticity in regions of HG defined in a subject-specific manner. Here we show that, in response to visual stimulation with sign language content, differences in activity in HG between hearing and deaf participants are caused mainly by a reduction in activation in the hearing group. In other words, the group of hearing non-signers shows reduced activation in all sign conditions compared to the static baseline. Importantly, the lack of significant differences between deaf signers and deaf oral individuals suggests that sign language knowledge does not contribute to the degree of difference between deaf and hearing individuals, nor is there evidence that it causes additional visual crossmodal plasticity in HG.

Differences in activity in Heschl's gyrus between hearing and deaf individuals are due to auditory deprivation, and not sign language use
During the maturation of sensory and cognitive systems, there are
periods of maximum plasticity – sensitive periods – in which environmental experience influences the development of the neural components of these systems (Hubel and Wiesel, 1970; Hensch, 2004). In
cases of sensory deprivation, either early-induced or congenital, the
lack of environmental stimulation in one modality during the sensitive
period will prevent the normal development of the system (Kral et al.,
2001). On the other hand, the stronger reliance on the remaining sensory modalities also contributes to plastic reorganisation in sensory cortices. In the case of deafness in humans, the picture is even more complex
because there is plastic reorganisation not only due to auditory deprivation, but also due to the acquisition of language in a visual modality,
either sign language or a reliance on speechreading to process spoken
language. In addition, there is a potential delay in language acquisition
due to late diagnosis of deafness, or difficulty in obtaining adequate
language input in an appropriate modality. This means that the sensitive
periods for developing both auditory function and language may have
passed before access to environmental information is obtained. For
cochlear implantation, the success of which is usually measured in
terms of speech processing ability, the sensitive period has been found
to be during the first 3–4 years of life (Sharma and Campbell, 2011;
Kral and Sharma, 2012). However, because of the interplay between a
sensitive period for auditory experience and one for language experience, both will contribute and interact in shaping the sensitive
period for CIs (Lyness et al., 2013). Previous studies have compared
hearing and deaf native signers in order to dissociate the crossmodal
plasticity effects of auditory deprivation from those of sign language
knowledge (Neville et al., 1998; MacSweeney et al., 2002; Fine et al.,
2005; Sadato et al., 2005; Sakai et al., 2005). However, none of these
delineated HG anatomically and in a subject-specific manner. Furthermore, deaf and hearing native signers have rather different language
use and development (Herman and Roy, 2000; van den Bogaerde and
Baker, 2005). Thus, questions remained about the contribution of
these two factors to plastic changes in primary auditory regions. In an
effort to start understanding the interplay of language and auditory
experience in the development of the human brain, our study shows
that the difference in activation in Heschl's gyrus between hearing and
deaf individuals is the result of auditory deprivation, rather than experience of language in different modalities, either spoken or signed. Specifically, we found that in area Te1.0, which is likely to contain the auditory
core in humans, average signal change was significantly smaller in the group of hearing non-signers than in each of the groups of deaf individuals. This was mainly due to a reduction in activation in the hearing group in the signs condition compared to the baseline condition, whereas no significant difference between baseline and signs was found in either group of deaf individuals. On average, no significant difference was found between the groups of deaf signers and deaf oral individuals, suggesting that differential use of signed or spoken language does not contribute to further crossmodal plasticity effects in these areas.
Table 3
Repeated measures ANOVA with factors Group, Hemisphere, Task and Stimulus type.

Effect                                        F      df     p
HG ROI
  Group                                       2.94   2,18   .079
  Hemisphere                                  5.10   1,18   .037
  Hemisphere × Group                          <1     2,18   .843
  Task                                        <1     1,18   .938
  Task × Group                                1.89   2,18   .180
  Stimulus type                               <1     3,54   .778
  Stimulus type × Group                       <1     6,54   .576
  Hemisphere × Task                           <1     1,18   .898
  Hemisphere × Task × Group                   <1     2,18   .816
  Hemisphere × Stimulus type                  1.72   3,54   .175
  Hemisphere × Stimulus type × Group          1.36   6,54   .249
  Task × Stimulus type                        <1     3,54   .403
  Task × Stimulus type × Group                1.39   6,54   .236
  Hemisphere × Task × Stimulus type           1.10   3,54   .357
  Hemisphere × Task × Stimulus type × Group   <1     6,54   .856
Te1.0
  Group                                       5.28   2,18   .016
  Hemisphere                                  <1     1,18   .854
  Hemisphere × Group                          <1     2,18   .452
  Task                                        <1     1,18   .843
  Task × Group                                <1     2,18   .529
  Stimulus type                               2.27   3,54   .091
  Stimulus type × Group                       1.74   6,54   .128
  Hemisphere × Task                           2.40   1,18   .139
  Hemisphere × Task × Group                   1.59   2,18   .231
  Hemisphere × Stimulus type                  1.82   3,54   .154
  Hemisphere × Stimulus type × Group          1.84   6,54   .109
  Task × Stimulus type                        <1     3,54   .442
  Task × Stimulus type × Group                <1     6,54   .496
  Hemisphere × Task × Stimulus type           <1     3,54   .429
  Hemisphere × Task × Stimulus type × Group   <1     6,54   .438

Factors in the analysis were: Group (deaf signers, deaf oral and hearing non-signers), Hemisphere (right, left), Task (handshape, location), and Stimulus Type (Cognates, BSL, SSL, Non-Signs). Significant effects (p < .05): Hemisphere in HG ROI; Group in Te1.0.
Fig. 4. Results from HG ROI. Activity in response to each stimulus type is plotted separately for each group, hemisphere and task. The bars represent group means ± S.E.M. For each group,
N = 7 participants. BSL: British Sign Language. SSL: Swedish Sign Language.
We specifically wanted to test the effect of stimuli with linguistic
content for deaf signers, in order to rule out the possibility that HG is
specifically involved in linguistic processing of signs, rather than general
visuo-spatial tasks. A previous MEG study in deaf signers showed that
the onset of the activation elicited by single signs in left perisylvian
regions falls within a late time window associated with lexicosemantic
integration, and not during an early time-window for basic sensory
processing (Leonard et al., 2012). This suggests that visual processing
of signs in perisylvian areas is associated with linguistic mechanisms,
and not to general visual crossmodal plasticity. It also suggests that
language regions of the superior temporal cortex are more likely to be
the substrates of the MEG effect, rather than primary auditory areas.
However, specific distinction between primary auditory cortex and
other regions of the temporal cortex cannot be achieved with MEG.
Previous studies have also shown a modulation of the activity of
areas of HG by visual or spoken language in both hearing and deaf individuals (Calvert et al., 1997, 2000; Pekkola et al., 2005; Wild et al., 2012).
Our stimuli contained linguistic content for signers only. Therefore,
linguistic processing effects should be observed in deaf signers if at all
present. As shown in Figs. 4, 5 and Table 3, there was no significant
Group × Stimulus type interaction in either HG ROI or Te1.0, suggesting that the primary auditory cortex in humans is not involved in linguistic processing in deaf signers, nor is its activity modulated by linguistic processing in other temporal regions.
Fig. 5. Results from Te1.0. Activity in response to each stimulus type is plotted separately for each group, hemisphere and task. The bars represent group means ± S.E.M. For each group,
N = 7 participants. BSL: British Sign Language. SSL: Swedish Sign Language.
Plasticity in areas of the HG
Differences in the responses between the deaf and the hearing groups were observed more reliably in area Te1.0 than in Heschl's gyrus as a whole. This highlights the importance of specificity in the definition of primary auditory cortex when comparing deaf and hearing populations, as well as indicating, more generally, functional differences between subregions of Heschl's gyrus. In humans, the auditory core is located in HG,
but its exact location and functional composition in hearing individuals
is still a matter of debate (Morosan et al., 2001; Formisano et al., 2003;
Talavage et al., 2004; Woods et al., 2009; Humphries et al., 2010; Da
Costa et al., 2011; Striem-Amit et al., 2011; Dick et al., 2012). Furthermore, tonotopic mapping of auditory function in Heschl's gyrus cannot
be undertaken in deaf individuals. Given these constraints, anatomical
methods, such as post-mortem cytoarchitectonic and myelination analyses (Morosan et al., 2001; Hackett, 2011; Dick et al., 2012), have to be
used when trying to characterise responses in the primary auditory
cortex of deaf humans. Here we used a post-mortem cytoarchitectonic
definition of area Te1.0, a region that is likely to correspond to the auditory core in humans (Morosan et al., 2001). The number of voxels in each
ROI was comparable across groups; however, there was a significant
difference between hearing non-signers and deaf oral participants in
left Te1.0. This suggests potential differences in the morphometry and
structure of primary auditory areas as a consequence of auditory deprivation and oral language use. However, our study was not designed to
address this question, and future studies employing techniques such as
quantitative T(1) mapping (Dick et al., 2012) will be necessary in order
to answer this.
It is not known if the cytoarchitectonic properties of primary auditory areas are the same in deaf and hearing humans, but evidence from cats shows that the cytoarchitectonic profile of auditory areas in deaf animals, including A1, does not differ from that found in their hearing counterparts,
despite significant expansions in secondary auditory cortex and the
ventral auditory field (Wong et al., 2014). In any case, the fact that results from Te1.0 differ functionally from those of HG as a whole highlights its relevance in achieving more specific comparisons
between deaf and hearing individuals.
Our results also show that differences between deaf and hearing individuals were mainly due to a reduction in activation in the hearing
group in response to the signs conditions in comparison to the baseline
condition. This reduction in activation was not observed in either of the
groups of deaf individuals. This result is in agreement with extensive
literature showing reduced or negative activity of unstimulated sensory
cortices, or cortices which are unresponsive to the modality of the
attended stimuli (e.g. Laurienti et al., 2002; Johnson and Zatorre,
2005). This is also in agreement with results from the study of Karns et al. (2012), in which the authors compared deaf and hearing participants and showed reduced (from baseline) activations in HG with
visual stimulation (flashing lights) in hearing individuals, but very little
difference from baseline in deaf individuals. This reduction in activation
is independent of the presence or absence of scanner noise, and it is
likely to be caused by top-down mechanisms, such as attention allocation (Johnson and Zatorre, 2005), one of the goals of which is to reduce
noise from sensory modalities which are uninformative for solving a
particular task. This effect is absent in the deaf group, which could
be due to this top-down inhibitory route not being functional, or simply
due to the fact that there is no sensory input (i.e. auditory) that needs to
be silenced. A difference between deaf and hearing individuals was
also found in reaction times, where hearing individuals were significantly slower. It is possible that this difference in performance is the
behavioural reflection of the neuroimaging results in Te1.0, and a
consequence of the fact that hearing individuals need to actively
suppress auditory activity. However, it should be kept in mind that a
reduction in the level of activity of auditory cortices when hearing
individuals are performing a visual task is found also in other auditory
regions. Thus, we cannot be certain that it is indeed the difference in activity of Te1.0 that is reflected in performance. We also observed a
significant interaction between group × stimulus type in accuracy,
which was driven by a poorer performance with Non-Signs in the
hearing group. However, this was not reflected in the results of either HG ROI or Te1.0.
If visual crossmodal plasticity in deaf individuals is not significant, at
least in Te1.0 and HG as a whole, what role do these cortices have in the
deaf brain? Karns et al. (2012) showed somatosensory responses in HG
areas in deaf native signers. These responses are stronger than those
elicited by visual stimulation. The recruitment of these areas for
somatosensation has also been shown in animal studies (Hunt et al.,
2006; Meredith and Lomber, 2011; Meredith and Allman, 2012; Wong
et al., 2015), and there is a clear anatomical substrate for its development: the cochlear nucleus, the projections of which reach the auditory
cortex after several subcortical relays, also receives somatosensory afferents. Evidence demonstrates that after auditory deprivation, neurons
in the cochlear nucleus are more responsive to somatosensory stimulation (Shore et al., 2008), and there is also an increase in the number of
somatosensory projections reaching the cochlear nucleus (Zeng et al.,
2012). Future studies will need to be conducted to determine if the
level of somatosensory crossmodal plasticity in core auditory areas is
different in signers and non-signers, in particular in those individuals
with deaf-blindness, and if this has an effect on cochlear implantation
success. However, the evidence of somatosensory crossmodal plasticity
after deafness in animals (i.e. independently of language experience),
suggests that this is likely to be an effect of sensory deprivation.
Sign language use in deaf children — a confounding effect between language
proficiency and language modality
It has been previously shown that responses due to sign language
experience in deaf signers are specific for the processing of linguistic
information (Leonard et al., 2012; Cardin et al., 2013), and here we
show that sign language experience does not produce crossmodal visual
plasticity in primary auditory regions. Based on these results, it is
unlikely that sign language experience will compromise the cortical
processing of auditory signals in CI users. Nevertheless, some studies
have identified language modality as a predictor of CI success, in
which children who use communication methods that include exposure
to sign language have poorer outcomes than those using exclusively
oral communication (e.g. O'Donoghue et al., 2000; Geers, 2002; Wang
et al., 2011). An explanation for this apparent paradox is that studies
showing an effect of language modality in CI performance have typically
failed to measure sign language proficiency in the children and their
parents, or their outcome variables more strongly represent measures of auditory speech performance rather than general language or
sign language knowledge (e.g. O'Donoghue et al., 2000; Geers et al.,
2009; Wang et al., 2011). Deaf children with hearing parents often do
not receive good sign language input, which means that sign language
proficiency in these children can be highly variable. Therefore, any effect
of language modality that does not include proficiency measurements
may be confounded, and language modality effects could be due to
delayed language acquisition related to poor language input. It is also
possible that differences exist between deaf signers and non-signers in
the processing of visual speech in primary auditory areas. Further
studies are needed to elucidate these points.
Conclusion
In summary, we show that differences between hearing and deaf
individuals in primary auditory areas are due to reduced activation
caused by visual stimulation in the hearing group, and that the modality
of language used by deaf individuals does not contribute to visual
crossmodal plasticity in primary auditory cortex.
Funding
This work was supported by the Riksbankens Jubileumsfond
(P2008-0481:1-E), the Swedish Council for Working Life and Social Research (2008-0846), the Swedish Research Council (349-2007-8654,
Linnaeus Centre HEAD), and by grants from the Economic and Social Research Council of Great Britain (RES-620-28-6001; RES-620-28-0002)
to the Deafness Cognition and Language Research Centre.
Acknowledgments
The authors would like to thank Ingrid Johnsrude and Connor Wild
for providing the cytoarchitectonic maps and for recommendations for
the ROI analysis; Mischa Cooke, Lena Davidsson, Anders Hermansson,
Lena Kästner, Ramas Rentelis, Lilli Risner, and Guiping Xu for their
help with the recruitment of participants and the acquisition of MRI
data; Lena Kästner also for her contribution to the design of the stimuli.
We especially thank all the deaf and hearing participants who took part
in the study.
References
Bavelier, D., Brozinsky, C., Tomann, A., Mitchell, T., Neville, H., Liu, G., 2001. Impact of early
deafness and early exposure to sign language on the cerebral organization for motion
processing. J. Neurosci. 21, 8931–8942.
Calvert, G.A., Bullmore, E., Brammer, M., Campbell, R., Williams, S., McGuire, P., David, A.,
1997. Activation of auditory cortex during silent lipreading. Science 276, 593–596.
Calvert, G.A., Campbell, R., Brammer, M., 2000. Evidence from functional magnetic
resonance imaging of crossmodal binding in the human heteromodal cortex. Curr.
Biol. 10, 649–657.
Campbell, R., 2008. The processing of audio–visual speech: empirical and neural bases.
Philos. Trans. R. Soc. Lond. B Biol. Sci. 363, 1001–1010.
Capek, C.M., Macsweeney, M., Woll, B., Waters, D., McGuire, P.K., David, A.S., Brammer,
M.J., Campbell, R., 2008. Cortical circuits for silent speechreading in deaf and hearing
people. Neuropsychologia 46, 1233–1241.
Cardin, V., Orfanidou, E., Rönnberg, J., Capek, C.M., Rudner, M., Woll, B., 2013. Dissociating
cognitive and sensory neural plasticity in human superior temporal cortex. Nat.
Commun. 4, 1473.
Da Costa, S., van der Zwaag, W., Marques, J.P., Frackowiak, R.S., Clarke, S., Saenz, M., 2011.
Human primary auditory cortex follows the shape of Heschl's gyrus. J. Neurosci. 31,
14067–14075.
Dale, A.M., Fischl, B., Sereno, M.I., 1999. Cortical surface-based analysis: I. Segmentation
and surface reconstruction. NeuroImage 9, 179–194.
De Martino, F., Moerel, M., Xu, J., van de Moortele, P.-F., Ugurbil, K., Goebel, R., Yacoub, E.,
Formisano, E., 2014. High-resolution mapping of myeloarchitecture in vivo: localization
of auditory areas in the human brain. Cereb. Cortex pii: bhu150. [Epub ahead of print]
PubMed PMID: 24994817.
Dick, F., Tierney, A.T., Lutti, A., Josephs, O., Sereno, M.I., Weiskopf, N., 2012. In vivo functional
and myeloarchitectonic mapping of human primary auditory areas. J. Neurosci. 32,
16095–16105.
Emmorey, K., Allen, J.S., Bruss, J., Schenker, N., Damasio, H., 2003. A morphometric analysis
of auditory brain regions in congenitally deaf adults. Proc. Natl. Acad. Sci. U. S. A. 100,
10049–10054.
Fine, I., Finney, E.M., Boynton, G.M., Dobkins, K.R., 2005. Comparing the effects of auditory
deprivation and sign language within the auditory and visual cortex. J. Cogn.
Neurosci. 17, 1621–1637.
Finney, E.M., Fine, I., Dobkins, K.R., 2001. Visual stimuli activate auditory cortex in the
deaf. Nat. Neurosci. 4, 1171–1173.
Fischl, B., Dale, A.M., 2000. Measuring the thickness of the human cerebral cortex from
magnetic resonance images. Proc. Natl. Acad. Sci. U. S. A. 97, 11050–11055.
Fischl, B., Liu, A., Dale, A.M., 2001. Automated manifold surgery: constructing geometrically
accurate and topologically correct models of the human cerebral cortex. IEEE Trans.
Med. Imaging 20, 70–80.
Fischl, B., Salat, D.H., Busa, E., Albert, M., Dieterich, M., Haselgrove, C., van der Kouwe, A.,
Killiany, R., Kennedy, D., Klaveness, S., Montillo, A., Makris, N., Rosen, B., Dale, A.M.,
2002. Whole brain segmentation: automated labeling of neuroanatomical structures
in the human brain. Neuron 33, 341–355.
Fischl, B., Van Der Kouwe, A., Destrieux, C., Halgren, E., Ségonne, F., Salat, D.H., Busa, E.,
Seidman, L.J., Goldstein, J., Kennedy, D., Caviness, V., Makris, N., Rosen, B., Dale, A.M.,
2004. Automatically parcellating the human cerebral cortex. Cereb. Cortex 14, 11–22.
Formisano, E., Kim, D.S., Di Salle, F., van de Moortele, P.F., Ugurbil, K., Goebel, R., 2003.
Mirror-symmetric tonotopic maps in human primary auditory cortex. Neuron 40,
859–869.
Geers, A.E., 2002. Factors affecting the development of speech, language, and literacy in
children with early cochlear implantation. Lang. Speech Hear. Serv. Sch. 33, 172–183.
Geers, A.E., Moog, J.S., Biedenstein, J., Brenner, C., Hayes, H., 2009. Spoken language scores
of children using cochlear implants compared to hearing age-mates at school entry.
J. Deaf. Stud. Deaf. Educ. 14, 371–385.
Giraud, A.L., Lee, H.J., 2007. Predicting cochlear implant outcome from brain organization
in the deaf. Restor. Neurol. Neurosci. 25, 381–390.
Giraud, A.L., Price, C.J., Graham, J.M., Truy, E., Frackowiak, R.S.J., 2001. Cross-modal plasticity
underpins language recovery after cochlear implantation. Neuron 30, 657–663.
Grosvald, M., Lachaud, C., Corina, D., 2012. Handshape monitoring: evaluation of linguistic
and perceptual factors in the processing of American Sign Language. Lang. Cogn. Process.
27, 117–141.
Hackett, T.A., 2011. Information flow in the auditory cortical network. Hear. Res. 271,
133–146.
Han, X., Jovicich, J., Salat, D., van der Kouwe, A., Quinn, B., Czanner, S., Busa, E., Pacheco, J.,
Albert, M., Killiany, R., Maguire, P., Rosas, D., Makris, N., Dale, A., Dickerson, B., Fischl,
B., 2006. Reliability of MRI-derived measurements of human cerebral cortical thickness:
the effects of field strength, scanner upgrade and manufacturer. NeuroImage 32,
180–194.
Hensch, T.K., 2004. Critical period regulation. Annu. Rev. Neurosci. 27, 549–579.
Herman, R., Roy, P., 2000. The influence of child hearing status and type of exposure
to BSL on BSL acquisition. Proceedings of the 1999 Child Language Seminar, City
University, London. 1, pp. 116–122.
Hubel, D.H., Wiesel, T.N., 1970. The period of susceptibility to the physiological effects of
unilateral eye closure in kittens. J. Physiol. 206, 419–436.
Humphries, C., Liebenthal, E., Binder, J.R., 2010. Tonotopic organization of human auditory
cortex. NeuroImage 50, 1202–1211.
Hunt, D.L., Yamoah, E.N., Krubitzer, L., 2006. Multisensory plasticity in congenitally
deaf mice: how are cortical areas functionally specified? Neuroscience 139,
1507–1524.
Johnson, J.A., Zatorre, R.J., 2005. Attention to simultaneous unrelated auditory and visual
events: behavioral and neural correlates. Cereb. Cortex 15, 1609–1620.
Jovicich, J., Czanner, S., Greve, D., Haley, E., van der Kouwe, A., Gollub, R., Kennedy, D.,
Schmitt, F., Brown, G., Macfall, J., Fischl, B., Dale, A., 2006. Reliability in multi-site
structural MRI studies: effects of gradient. NeuroImage 30, 436–443.
Karns, C.M., Dow, M.W., Neville, H.J., 2012. Altered cross-modal processing in the primary
auditory cortex of congenitally deaf adults: a visual-somatosensory fMRI study with a
double-flash illusion. J. Neurosci. 32, 9626–9638.
105
Kral, A., Sharma, A., 2012. Developmental neuroplasticity after cochlear implantation.
Trends Neurosci. 35, 111–122.
Kral, A., Hartmann, R., Tillein, J., Heid, S., Klinke, R., 2001. Delayed maturation and sensitive periods in the auditory cortex. Audiol. Neurootol. 6, 346–362.
Kral, A., Schroder, J.H., Klinke, R., Engel, A.K., 2003. Absence of cross-modal reorganization
in the primary auditory cortex of congenitally deaf cats. Exp. Brain Res. 153, 605–613.
Lambertz, N., Gizewski, E.R., de Greiff, A., Forsting, M., 2005. Cross-modal plasticity in deaf
subjects dependent on the extent of hearing loss. Cogn. Brain Res. 3, 884–890.
Langers, D.R.M., 2014. Assessment of tonotopically organised subdivisions in human auditory cortex using volumetric and surface-based cortical alignments. Hum. Brain
Mapp. 35, 1544–1561.
Laurienti, P.J., Burdette, J.H., Wallace, M.T., Yen, Y.F., Field, A.S., Stein, B.E., 2002. Deactivation of sensory-specific cortex by cross-modal stimuli. J. Cogn. Neurosci. 14, 420–429.
Leonard, M.K., Ferjan-Ramirez, N., Torres, C., Travis, K.E., Hatrak, M., Mayberry, R., Halgren,
E., 2012. Signed words in the congenitally deaf evoke typical late lexicosemantic responses with no early visual responses in left superior temporal cortex. J. Neurosci.
32, 9700–9705.
Lewkowicz, D.J., Hansen-Tift, A.M., 2012. Infants deploy selective attention to the mouth
of a talking face when learning speech. Proc. Natl. Acad. Sci. U. S. A. 109, 1431–1436.
Lomber, S.G., Meredith, M.A., Kral, A., 2010. Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf. Nat. Neurosci. 13, 1421–1427.
Lyness, C.R., Woll, B., Campbell, R., Cardin, V., 2013. How does visual language affect
crossmodal plasticity and cochlear implant success? Neurosci. Biobehav. Rev. 37,
2621–2630.
Lyness, C.R., Alvarez, I., Sereno, M.I., MacSweeney, M., 2014. Microstructural differences in
the thalamus and thalamic radiations in the congenitally deaf. NeuroImage 100,
347–357.
MacSweeney, M., Woll, B., Campbell, R., McGuire, P.K., David, A.S., Williams, S.C., Suckling,
J., Calvert, G.A., Brammer, M.J., 2002. Neural systems underlying British Sign Language
and audio–visual English processing in native users. Brain 125, 1583–1593.
MacSweeney, M., Capek, C.M., Campbell, R., Woll, B., 2008. The signing brain: the neurobiology of sign language. Trends Cogn. Sci. 12, 432–440.
Mangus, B.D., Krueger-Fister, J., Stevenson, R.A., Sheffield, S.W., Hedley-Williams, A.J.,
Gifford, R.H., Labadie, R.F., Wallace, M.T., 2012. Plasticity in multisensory temporal processing after cochlear implantation. Society for Neuroscience Conference, p. 369.304.
McGurk, H., MacDonald, J., 1976. Hearing lips and seeing voices. Nature 264, 746–748.
Merabet, L.B., Pascual-Leone, A., 2010. Neural reorganization following sensory loss: the
opportunity of change. Nat. Rev. Neurosci. 11, 44–52.
Meredith, M.A., Allman, B.L., 2012. Early hearing-impairment results in crossmodal reorganization of ferret core auditory cortex. Neural Plast. 601591.
Meredith, M.A., Lomber, S.G., 2011. Somatosensory and visual crossmodal plasticity in the
anterior auditory field of early-deaf cats. Hear. Res. 280, 38–47.
Meredith, M.A., Kryklywy, J., McMillan, A.J., Malhotra, S., Lum-Tai, R., Lomber, S.G., 2011.
Crossmodal reorganization in the early deaf switches sensory, but not behavioral
roles of auditory cortex. Proc. Natl. Acad. Sci. U. S. A. 108, 8856–8861.
Mills, A.E., 1987. The language of blind children: normal or abnormal. In: Jordens, P.,
Lalleman, J. (Eds.), Language Development. Foris Publication Holland, Dordrecht.
Morosan, P., Rademacher, J., Schleicher, A., Amunts, K., Schormann, T., Zilles, K., 2001.
Human primary auditory cortex: cytoarchitectonic subdivisions and mapping into a
spatial reference system. NeuroImage 13, 684–701.
Neville, H.J., Bavelier, D., Corina, D., Rauschecker, J., Karni, A., Lalwani, A., Braun, A., Clark,
V., Jezzard, P., Turner, R., 1998. Cerebral organization for language in deaf and hearing
subjects: biological constraints and effects of experience. Proc. Natl. Acad. Sci. U. S. A.
95, 922–929.
O'Donoghue, G.M., Nikolopoulos, T.P., Archbold, S.M., 2000. Determinants of speech perception in children after cochlear implantation. Lancet 356, 466–468.
Olulade, O.A., Koo, D.S., LaSasso, C.J., Eden, G.F., 2014. Neuroanatomical profiles of deafness in the context of native language experience. J. Neurosci. 34, 5613–5620.
Orfanidou, E., Adam, R., McQueen, J.M., Morgan, G., 2009. Making sense of nonsense in
British sign language (BSL): the contribution of different phonological parameters
to sign recognition. Mem. Cogn. 37, 302–315.
Pekkola, J., Ojanen, V., Autti, T., Jääskeläinen, I.P., Möttönen, R., Tarkiainen, A., Sams, M.,
2005. Primary auditory cortex activation by visual speech: an fMRI study at 3 T.
Neuroreport 16, 125–128.
Penhune, V.B., Cismaru, R., Dorsaint-Pierre, R., Petitto, L.A., Zatorre, R.J., 2003. The morphometry of auditory cortex in the congenitally deaf measured using MRI.
NeuroImage 20, 1215–1225.
Reich, L., Szwed, M., Cohen, L., Amedi, A., 2011. A ventral visual stream reading center independent of visual experience. Curr. Biol. 21, 363–368.
Rönnberg, J., Soderfeldt, B., Risberg, J., 1998. Regional cerebral blood flow during
signed and heard episodic and semantic memory tasks. Appl. Neuropsychol. 5,
132–138.
Rouger, J., Lagleyre, S., Fraysse, B., Deneve, S., Deguine, O., Barone, P., 2007. Evidence that
cochlear-implanted deaf patients are better multisensory integrators. Proc. Natl.
Acad. Sci. U. S. A. 104, 7295–7300.
Sadato, N., Pascual-Leone, A., Grafman, J., Deiber, M.P., Ibanez, V., Hallett, M., 1998. Neural
networks for braille reading by the blind. Brain 121, 1213–1229.
Sadato, N., Okada, T., Honda, M., Matsuki, K., Yoshida, M., Kashijura, K., Takei, W., Sato, T.,
Kochiyama, T., Yonekura, Y., 2005. Cross-modal integration and plastic changes revealed by lip movements, random-dot motion and sign languages in the hearing
and deaf. Cereb. Cortex 15, 1113–1122.
Sakai, K.L., Tatsuno, Y., Suzuki, K., Kimura, H., Ichida, Y., 2005. Sign and speech: amodal
commonality in left hemisphere dominance for comprehension of sentences. Brain
128, 1407–1417.
Schönwiesner, M., Dechent, P., Voit, D., Petkov, C.I., Krumbholz, K., 2014. Parcellation of
human and monkey core auditory cortex with fMRI pattern classification and
106
V. Cardin et al. / NeuroImage 124 (2016) 96–106
objective detection of tonotopic gradient reversals. Cereb. Cortex pii: bhu124. [Epub
ahead of print] PMID: 24904067.
Scott, S.K., Johnsrude, I.S., 2003. The neuroanatomical and functional organization of
speech perception. Trends Neurosci. 26, 100–107.
Scott, G.D., Karns, C.M., Dow, M.W., Stevens, C., Neville, H.J., 2014. Enhanced peripheral
visual processing in congenitally deaf humans is supported by multiple brain regions,
including primary auditory cortex. Front. Hum. Neurosci. 8, 177.
Ségonne, F., Dale, A.M., Busa, E., Glessner, M., Salat, D., Hahn, H.K., Fischl, B., 2004. A hybrid
approach to the skull stripping problem in MRI. NeuroImage 22, 1060–1075.
Sharma, A., Campbell, J., 2011. A sensitive period for cochlear implantation in deaf
children. J. Matern. Fetal Neonatal Med. 1, 151–153.
Shore, S.E., Koehler, S., Odakowski, M., Hughes, L.F., Syed, S., 2008. Dorsal cochlear nucleus
responses to somatosensory stimulation are enhanced after noise-induced hearing
loss. Eur. J. Neurosci. 27, 155–168.
Söderfeldt, B., Ingvar, M., Ronnberg, J., Eriksson, L., Serrander, M., Stone-Elander, S., 1997.
Signed and spoken language perception studied by positron emission tomography.
Neurology 49, 82–87.
Stevenson, R.A., Mangus, B.D., Krueger-Fister, J., Sheffield, S.W., Hedley-Williams, A.J.,
Dwyer, R.T., Gifford, R.H., Labadie, R.F., Wallace, M.T., 2012. Visual temporal processing is associated with cochlear implant auditory proficiency. Society for Neuroscience
Conference, p. 316.302.
Striem-Amit, E., Hertz, U., Amedi, A., 2011. Extensive cochleotopic mapping of human
auditory cortical fields obtained with phase-encoding fMRI. PLoS One 6, e17832.
Striem-Amit, E., Cohen, L., Dehaene, S., Amedi, A., 2012. Reading with sounds: sensory
substitution selectively activates the visual word form area in the blind. Neuron 76,
640–652.
Tahmasebi, A.M., Abolmaesumi, P., Geng, X., Morosan, P., Amunts, K., Christensen, G.E.,
Johnsrude, I.S., 2009. A new approach for creating customizable cytoarchitectonic
probabilistic maps without a template. Med. Image Comput. Comput. Assist. Interv.
12, 795–802.
Talavage, T.M., Sereno, M.I., Melcher, J.R., Ledden, P.J., Rosen, B.R., Dale, A.M., 2004.
Tonotopic organization in human auditory cortex revealed by progressions of
frequency sensitivity. J. Neurophysiol. 91, 1282–1296.
Teoh, S.W., Pisoni, D.B., Miyamoto, R.T., 2004. Cochlear implantation in adults with
prelingual deafness. Part II. Underlying constraints that affect audiological outcomes.
Laryngoscope 114, 1714–1719.
van den Bogaerde, B., Baker, A., 2005. Code mixing in mother–child interaction in deaf
families. Sign Lang. Linguist. 8, 153–176.
Wang, N., Liu, C., Liu, S., Huang, K., Kuo, Y., 2011. Predictor of auditory performance in
Mandarin Chinese children with cochlear implants. Otol. Neurotol. 32, 937–942.
Wild, C.J., Davis, M.H., Johnsrude, I.S., 2012. Human auditory cortex is sensitive to the
perceived clarity of speech. NeuroImage 60, 1490–1502.
Wong, C., Chabot, N., Kok, M.A., Lomber, S.G., 2014. Modified areal cartography in auditory
cortex following early- and late-onset deafness. Cereb. Cortex 24, 1778–1792.
Wong, C., Chabot, N., Kok, M.A., Lomber, S.G., 2015. Amplified somatosensory and visual
cortical projections to a core auditory area, the anterior auditory field, following
early- and late-onset deafness. J. Comp. Neurol. http://dx.doi.org/10.1002/cne.23771.
Woods, D.L., Stecker, G.C., Rinne, T., Herron, T.J., Cate, A.D., Yund, E.W., Liao, I., Kang, X.,
2009. Functional maps of human auditory cortex: effects of acoustic features and
attention. PLoS One 4, e5183.
Zeng, C., Yang, Z., Shreve, L., Bledsoe, S., Shore, S., 2012. Somatosensory projections to cochlear
nucleus are upregulated after unilateral deafness. J. Neurosci. 32, 15791–15801.