The Journal of Neuroscience, February 15, 2012 • 32(7):2422–2429
Behavioral/Systems/Cognitive
Spatial Updating Depends on Gaze Direction Even after Loss
of Vision
Johanna Reuschel,1 Frank Rösler,2 Denise Y. P. Henriques,3 and Katja Fiehler4
1Department of Psychology, Philipps-University Marburg, D-35032 Marburg, Germany, 2Department of Psychology, University Potsdam, D-14476
Potsdam, Germany, 3School of Kinesiology and Health Science, Center for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada, and
4Department of Psychology, Justus-Liebig-University Giessen, D-35394 Giessen, Germany
Direction of gaze (eye angle + head angle) has been shown to be important for representing space for action, implying a crucial role of
vision for spatial updating. However, blind people have no access to vision yet are able to perform goal-directed actions successfully. Here,
we investigated the role of visual experience for localizing and updating targets as a function of intervening gaze shifts in humans. People
who differed in visual experience (late blind, congenitally blind, or sighted) were briefly presented with a proprioceptive reach target
while facing it. Before they reached to the target’s remembered location, they turned their head toward an eccentric direction that also
induced corresponding eye movements in sighted and late blind individuals. We found that reaching errors varied systematically as a
function of shift in gaze direction only in participants with early visual experience (sighted and late blind). In the late blind, this effect was
solely present in people with moveable eyes but not in people with at least one glass eye. Our results suggest that the effect of gaze shifts
on spatial updating develops on the basis of visual experience early in life and remains even after loss of vision as long as feedback from
the eyes and head is available.
Introduction
Spatial coding for action depends heavily on where we are looking and what we see. Results from behavioral and physiological
studies suggest that spatial memory for reaching is at least initially
coded and updated relative to gaze (where we are looking) both
for visual (Henriques et al., 1998; Batista et al., 1999; Medendorp
et al., 2003) and nonvisual (Pouget et al., 2002; Fiehler et al., 2010;
Jones and Henriques, 2010) targets, before being transformed
into a more stable motor-appropriate representation (for review,
see Andersen and Buneo, 2002; Crawford et al., 2011). Likewise,
vision (what we see) influences spatial coding for action. Altered
visual feedback of the environment using prisms (Harris, 1963;
Held and Freedman, 1963; Jakobson and Goodale, 1989; van
Beers et al., 2002; Zwiers et al., 2003; Simani et al., 2007) or of the
hand using virtual reality (Ghahramani et al., 1996; Vetter et al.,
1999; Wang and Sainburg, 2005) systematically changes reaching
movements toward visual and nonvisual targets. Even perceptual
localization of visual (Redding and Wallace, 1992; Ooi et al.,
2001; Hatada et al., 2006) and nonvisual (Lackner, 1973; Cressman and Henriques, 2009, 2010) stimuli is affected by distorted
vision. Given the important role of gaze direction and vision for representing space, how does the absence of vision affect spatial coding for action?

Received June 1, 2011; revised Dec. 3, 2011; accepted Dec. 24, 2011.
Author contributions: J.R., D.Y.P.H., and K.F. designed research; J.R. and K.F. performed research; J.R. and K.F. analyzed data; J.R., F.R., D.Y.P.H., and K.F. wrote the paper.
This work was supported by Grant Fi 1567 from the German Research Foundation (DFG) to F.R. and K.F., the research unit DFG/FOR 560 'Perception and Action', and the TransCoop-Program from the Alexander von Humboldt Foundation assigned to D.Y.P.H. and K.F. D.Y.P.H. is an Alfred P. Sloan fellow. We thank Oguz Balandi and Christian Friedel for help in programming the experiment, Jeanine Schwarz and Charlotte Markert for support in acquiring data, and Mario Gollwitzer and Oliver Christ for statistical support.
Correspondence should be addressed to Katja Fiehler, Justus-Liebig-University Giessen, Department of Psychology, Otto-Behaghel-Strasse 10F, D-35394 Giessen, Germany. E-mail: [email protected]
DOI:10.1523/JNEUROSCI.2714-11.2012
Copyright © 2012 the authors 0270-6474/12/322422-08$15.00/0
There is converging evidence that lack of vision from birth
shapes the architecture of the CNS, and thus cortical functions, in
a permanent manner (Lewis and Maurer, 2005; Pascual-Leone et
al., 2005; Levin et al., 2010; Merabet and Pascual-Leone, 2010).
Studies in congenitally and late blind people have demonstrated
that visual experience early in life is crucial to develop an externally anchored reference frame. Late blind persons, like sighted
individuals, primarily code targets in an external reference frame,
while congenitally blind individuals mainly rely on an internal
body-centered reference frame (Gaunet and Rossetti, 2006;
Pasqualotto and Newell, 2007; Röder et al., 2007; Sukemiya et al.,
2008). However, spatial coding strategies in the congenitally
blind also depend on early nonvisual experience of space (Fiehler
et al., 2009). Humans who were visually deprived during a critical
period in early life show permanent deficits in cross-modal interactions (Putzar et al., 2007), spatial acuity, and complex form
recognition (Maurer and Lewis, 2001; Fine et al., 2003), while
similarly deprived nonhuman primates show imperfect reach
and grasp movement (Held and Bauer, 1974). Yet, we do not
know how the loss of vision affects spatial coding and updating of
targets for goal-directed movements.
Here, we test the impact of visual experience on people's ability to localize targets and update their locations following movements of the head, i.e., shifts in gaze direction. Three groups of people with different visual experience (sighted, late blind, and congenitally blind) faced in the direction of a briefly presented proprioceptive reach target (left hand), before changing gaze direction and reaching to the remembered target location. In this way, we investigated how reaching endpoints vary as a function of gaze and visual experience.
Table 1. Description of the congenitally blind, late blind, and sighted participants

ID | Sex | Congenitally blind: Age (years) | Onset | Cause of blindness | Late blind: Age (years) | Onset age (years) | Cause of blindness | Sighted: Age (years)
1 | F | 28 | Birth | Retinal detachment and cataract | 26 | 21 | Retinitis pigmentosa | 26
2 | F | 32 | Birth | Retinopathy of prematurity | 32 | 20 | Morbus Behçet | 29
3 | M | 35 | Birth | Inherited retinal dysplasia | 40 | 14 | Retinal detachment and glaucoma | 37
4 | M | 23 | Birth | Retinitis pigmentosa | 23 | 13 | Retinitis pigmentosa | 20
5 | F | 42 | Birth | Birth injury | 48 | 28 | Macular degeneration | 40
6 | M | 30 | Birth | Leber's congenital amaurosis | 35 | 17 | Glaucoma | 28
7 | F | 18 | Birth | Retinitis pigmentosa | 23 | 15 | Cataract | 20
8 | M | 26 | Birth | Retinitis pigmentosa | 24 | 18 | Macular degeneration | 23
9 | F | 31 | Birth | Retinopathy of prematurity | 26 | 6 | Haematom excision at optic nerve | 27
10 | M | 34 | Birth | Recessively inherited disease | 36 | 24 | Retinitis pigmentosa and macular degeneration | 33
11 | F | 31 | Birth | Optic nerve atrophy | 29 | 4 | Retinoblastoma | 27
12 | M | 29 | Birth | DDT during pregnancy | 26 | 7 | Inherited disease | 25

The side of glass eye(s) in the late blind is indicated with R for right and with L for left. Five late blind participants had at least one glass eye (R, glass eye since age 24; L, since age 31; L, since age 27; L, since age 11; R and L, since age 5).
Materials and Methods
Subjects. We investigated three groups of participants (for details, see Table 1) consisting of 12 congenitally blind (CB; 30.2 ± 5.92 years old; onset of blindness at birth), 12 late blind (LB; 30.7 ± 7.78 years old; onset of blindness at age >3 years), and 12 sighted controls (SC; 27.9 ± 6.17 years old; normal or corrected-to-normal vision). All three groups were matched by gender (6 females, 6 males), age (maximal deviation, 8 years; mean, 2.89 years), and level of education (high school degree). Furthermore, all of them were right-handed according to the German translation of the Edinburgh Handedness Inventory (EHI) (Oldfield, 1971) (EHI score: CB, 70 ± 22; LB, 75 ± 19; SC, 91 ± 10). Participants were either paid for their participation or received course credits. The experiment was performed in accordance with the ethical standards laid down by the German Psychological Society and in the Declaration of Helsinki (2008).

Equipment. Participants sat in front of a movement apparatus mounted on a table. Two servomotors controlled by LabVIEW (http://www.ni.com/labview/) steered a handle of the apparatus in the x and y plane with an acceleration of 0.4 m/s² and a maximum velocity of 0.2 m/s. Thus, we were able to bring the participants' left hand to a proprioceptive target site through a straight movement of the device. We registered reach endpoints to proprioceptive targets with a touch screen panel (Keytec) of 430 × 330 × 3 mm (Fig. 1, light gray rectangle) mounted above the movement apparatus to prevent any tactile feedback of the target hand. Head position was fixed by a moveable bite bar equipped with a mechanical stopper, which allowed participants to perform controlled head movements to a predefined position. We recorded head movements with an ultrasound-based tracking system (Zebris CMS20; Zebris Medical) using two ultrasound markers mounted on a special holder along the interhemispheric midline (Fig. 1). Data were sampled at 100 Hz and analyzed offline. Horizontal eye movements were measured by means of a horizontal electrooculogram (HEOG) with a sampling rate of 500 Hz (cf., Fiehler et al., 2010). Silver/silver chloride electrodes were placed next to the canthi of the left and right eyes (bipolar recording) and at the left mastoid (ground). Impedances were kept <5 kΩ. The locations of all stimuli were defined as angular deviations (degrees) from the midline, centered at the rotational axis of the head. We presented task-specific information by auditory signals through three loudspeakers placed 125 cm from the participant.

Figure 1. Setup and procedure for the main experiment (left, above view; right, side view). t1, Proprioceptive target (left thumb) was presented at one of three possible locations (cross, 0°; circle, 5° left; diamond, 5° right) for 1 s, while the head was directed toward the target (dashed lines). t2, After the left target hand was moved back to the starting point, the speaker indicated the direction that subjects had to move their head, by 10° or 15° leftwards or rightwards. With the head in this direction, participants used their outstretched right index finger to reach to the remembered proprioceptive target.

Procedure. The participants' left thumb served as the proprioceptive reach target. Participants grasped the handle of the apparatus with a power grip of the left hand with the thumb on top. The left hand was passively moved by the apparatus within 1500 ms along a straight horizontal path of 10 cm (Fig. 1, white dashed lines) from a starting point located 25 cm in front of the chest at the body midline to one of three target positions. The target positions lay straight at 0°, 5° left, or 5° right (Fig. 1, white circle, white cross, or white diamond, respectively) relative to the body midline. During the movement, a speaker placed in the target direction (Fig. 1, upper left, black speaker) produced a continuous low-pitched
440 Hz tone. A high-pitched 1 kHz tone (500 ms) prompted
participants to shift the head in the direction of the tone and then to reach
to the target with the outstretched right index finger as accurately as
possible. Before the experiment, participants underwent several training
trials to ensure correct task execution.
Conditions and tasks. In the main experiment (Fig. 1), blindfolded
participants performed controlled head movements that inevitably induced corresponding eye movements (gaze shifts). First, the apparatus
guided the hand to the proprioceptive target position (0°, 5° left, or 5°
right relative to the body midline, with the origin at the head central axis;
Fig. 1, upper left, white dashed lines) while participants directed their
gaze toward the target with the help of a low-pitched tone and tactile
guidance for the tip of the nose. The left hand (proprioceptive target)
stopped for 1 s at the target site and then was moved back to the starting
point (Fig. 1, bottom). Before the reaching movement, gaze was varied
across four different heading (facing) directions that lay 10° and 15°
leftwards and rightwards (Fig. 1, bottom). A speaker on the left or right
side (at 10° or 15°) presented the high-pitched tone (Fig. 1, bottom, black
speaker) that instructed participants to change gaze in the direction of
the tone, i.e., they turned the head until it reached the predefined head
position marked by a mechanical stopper. While keeping their head in
this direction, participants reached to the remembered proprioceptive
target (Fig. 1, bottom). Thus, gaze was shifted after target presentation
and before reaching (forcing updating of the target location). Change in gaze
direction was varied blockwise, i.e., participants turned their head either
10° leftwards or rightwards in one block and 15° leftwards or rightwards
in another block. In total, participants performed 15 reaching movements to each of the three targets with four different heading directions
(10° and 15° to the left and to the right), resulting in 12 combinations.
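The blocked factorial structure just described (3 target positions × 4 head directions × 15 repetitions, with the 10° and 15° head turns administered in separate blocks) can be sketched as follows. This is an illustrative reconstruction in Python, not the authors' LabVIEW control code; the names (make_trials, TARGETS_DEG, HEAD_BLOCKS), the negative-equals-left sign convention, and the shuffled trial order within a block are assumptions.

```python
import itertools
import random

# Illustrative reconstruction of the main-experiment design (assumed layout,
# not the authors' code): 3 proprioceptive targets x 4 head directions,
# 15 repetitions each, 10 deg and 15 deg head turns in separate blocks.
TARGETS_DEG = [-5.0, 0.0, 5.0]                  # 5 deg left, straight ahead, 5 deg right
HEAD_BLOCKS = {"block_10deg": [-10.0, 10.0],    # negative = leftward (assumed convention)
               "block_15deg": [-15.0, 15.0]}
REPETITIONS = 15

def make_trials(seed=0):
    rng = random.Random(seed)
    trials = []
    for block, head_dirs in HEAD_BLOCKS.items():
        block_trials = [
            {"block": block, "target_deg": t, "head_deg": h, "rep": r}
            for r in range(REPETITIONS)
            for t, h in itertools.product(TARGETS_DEG, head_dirs)
        ]
        rng.shuffle(block_trials)               # trial order within a block (assumed)
        trials.extend(block_trials)
    return trials

trials = make_trials()
assert len(trials) == 2 * 2 * 3 * REPETITIONS   # 12 combinations x 15 = 180 trials
```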
In addition, we ran three proprioceptive-reaching control conditions for
all three groups and one eye movement control condition for the sighted
group. In the proprioceptive-reaching control conditions, the presentation
of the proprioceptive target was the same as in the main experiment. The
target either remained in that outward location (Fig. 1, top) for the online
and baseline reaching control condition, or, like in the main experiment, was
returned to the start position before reaching for the remembered reaching
control condition (Fig. 1, bottom). The main difference in these three control tasks was that the head was not deviated but remained facing the target
site during reaching (for the online and remembered reaching control conditions), or the head direction was not restricted to any direction (baseline
reaching control condition). We used the baseline control condition to measure any individual biases in reaching and then to remove them from all
other conditions. Here, participants reached to each of three target sites five
times. The purpose of the online and remembered proprioceptive controls
was to determine how precisely participants can reach to proprioceptive
targets when gaze/head is directed toward the target (for results, see Fig. 3A).
In each of these two conditions, participants reached to the three targets 15
times. The timing and signals were otherwise identical to those in the main
experiment as described above.
The fourth control condition was an eye movement control condition
performed only on sighted participants, in order to compare the blindfolded
eye movements that accompanied the head shift in the main experiment
with those when the eyes were uncovered and free to move to the visual target
while the head remained stationary (see Fig. 3B). In this condition, both eyes
and head were initially directed toward one of three visual targets (LED lit for
1.5 s) located at 0°, 5° left, or 5° right, but then only the eyes moved to a visible
saccade target (LED lit for 3 s) presented at 10° or 15° to the left or right of the
body midline (analogous to the head directions of the main experiment).
The presentation of the saccade target was accompanied by a high-pitched
tone from the same direction. Participants had to saccade 15 times from each
of the initial visual targets (0°, 5° left, 5° right) to each of the four saccadic
targets (15° left, 10° left, 10° right, 15° right).
Participants performed the baseline condition first, followed by the
main experiment and the proprioceptive control conditions (online and
remembered reaching task) in separate blocks, whose order was randomized across participants. Additionally, sighted participants underwent
the eye movement control condition at the end of the experiment. The
whole experiment lasted ~180 min and was split into two sessions.
Data processing. In the main experiment, participants turned their
head (10° or 15° leftwards or rightwards) before they initiated the reaching movement. To verify whether participants started the reach movement after the head had reached the required heading direction, we
recorded head movements with an ultrasound-based tracking system.
Positional changes of the ultrasound markers were analyzed offline in the
x and y plane using R 2.11.0 and SPSS 17.0. The continuous head position
signals were segmented in time windows of interest ranging from 100 ms
before to 2000 ms after the high-pitched tone, comprising the time of
head direction change together with the subsequent reach movement.
Finally, the segments were baseline corrected by setting the head direction to zero at the time when the high-pitched tone occurred, i.e., when
the head movement started. On the basis of the head movement data, we
excluded all trials from further analyses where participants did not move
their head in the time window of interest or moved their head in the
wrong direction (in total, 9.97% of all trials).
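As a rough illustration of the head-movement screening described above (epoch from 100 ms before to 2000 ms after the tone, baseline the trace to zero at tone onset, and discard trials without a head turn or with a turn in the wrong direction), a minimal NumPy sketch follows. The function name, the 2° movement threshold, and the sign convention are assumptions, not values taken from the paper.

```python
import numpy as np

FS_HEAD = 100            # head tracker sampling rate (Hz), as stated above
PRE_MS, POST_MS = 100, 2000

def epoch_and_check_head(head_deg, tone_sample, required_dir_deg):
    """Cut a head-direction trace around the go tone, baseline-correct it,
    and flag trials with absent or wrong-direction head turns.

    head_deg        : 1-D array of head direction in degrees (100 Hz)
    tone_sample     : sample index of the high-pitched tone
    required_dir_deg: instructed head direction, e.g. -15, -10, +10, +15
    """
    pre = int(PRE_MS * FS_HEAD / 1000)
    post = int(POST_MS * FS_HEAD / 1000)
    seg = head_deg[tone_sample - pre: tone_sample + post].astype(float)
    seg -= seg[pre]                         # zero head direction at tone onset
    peak = seg[np.argmax(np.abs(seg))]      # largest excursion in the window
    moved = np.abs(peak) > 2.0              # assumed threshold (deg), not from the paper
    correct_dir = np.sign(peak) == np.sign(required_dir_deg)
    keep = bool(moved and correct_dir)
    return seg, keep
```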
Eye movements were recorded by the HEOG and analyzed offline with
VisionAnalyzer Software (http://www.brainproducts.com). First, HEOG
signals were corrected for DC drifts and low-pass filtered with a cutoff
frequency of 5 Hz. Second, the time windows of interest were set from
100 ms before to 2000 ms after the high-pitched tone that induced the
gaze direction change. Third, we excluded those trials with muscle artifacts (5.57% of all trials) from all analyses of eye movements. Artifact-rejected eye movement segments were then baseline corrected using the
first 100 ms of each HEOG segment and down-sampled to 200 Hz. For
further analysis of eye movement data, we did not include the two CBs
who showed nystagmus-like eye movements and the two CBs and five
LBs who had at least one glass eye.
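The HEOG pipeline above (drift correction, 5 Hz low-pass filtering, epoching around the tone, baselining on the first 100 ms, and downsampling to 200 Hz) could be approximated with SciPy as in the sketch below. The Butterworth filter order and the simple mean subtraction used for drift removal are assumptions; the paper only names the steps performed in VisionAnalyzer.

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

FS_EOG = 500             # HEOG sampling rate (Hz), as stated above

def preprocess_heog(heog_uv, tone_sample):
    """Illustrative HEOG preprocessing: detrend, low-pass at 5 Hz, epoch
    -100..2000 ms around the tone, baseline on the first 100 ms, and
    downsample 500 -> 200 Hz (filter order and detrending are assumed)."""
    x = heog_uv - np.mean(heog_uv)                  # crude DC-drift removal (assumed)
    b, a = butter(4, 5.0 / (FS_EOG / 2), btype="low")
    x = filtfilt(b, a, x)                           # zero-phase 5 Hz low-pass
    pre, post = int(0.1 * FS_EOG), int(2.0 * FS_EOG)
    seg = x[tone_sample - pre: tone_sample + post]
    seg = seg - seg[:pre].mean()                    # baseline: first 100 ms
    return resample_poly(seg, up=2, down=5)         # 500 Hz -> 200 Hz
```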
Since eye movements were recorded in microvolts, eye and head amplitudes are difficult to compare. Yet, to roughly gauge how much the
eyes moved with the head, we ran the eye movement control condition in
the sighted participants and compared these visually directed eye movements with the eye movements under blindfolded conditions. We did
this by computing an individual gain factor (G) for each sighted participant consisting of the eye movement distance in the eye movement
control condition, i.e., the angular difference between the visual target
(vT) at 0° or 5° left or right and the saccade target position (sT) at 10° or
15° left or right, divided by the HEOG signal:
G = (sT − vT) / HEOG    (1)
The mean value of this gain factor represents the mean change in eye
position in degrees corresponding to 1 μV. We transformed the individual eye movement signal measured in microvolts in the main experiment and the eye movement control condition by multiplying the HEOG signal with the individual gain factor per participant, resulting in corresponding values in degrees (Fig. 2B). In this way, we were able to approximate the size of the eye movements in degrees in the main experiment for the sighted participants (Fig. 2B, solid line) and use this rough approximation to simply scale the HEOG amplitude across all groups (Fig. 2A). This allowed us to compare the extent to which the movement of the eyes contributes to the final gaze shift across the three groups.
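In code, the per-participant gain of Equation 1 and the microvolt-to-degree scaling amount to the following; the array and function names are hypothetical, and the input arrays are assumed to hold one entry per eye-movement-control trial.

```python
import numpy as np

def heog_gain_deg_per_uv(saccade_targets_deg, fixation_targets_deg, heog_uv):
    """Per-participant gain (Eq. 1): known saccade amplitude in degrees
    (saccade target minus initial fixation target) divided by the
    corresponding HEOG deflection in microvolts, averaged over trials."""
    amp_deg = np.asarray(saccade_targets_deg) - np.asarray(fixation_targets_deg)
    gains = amp_deg / np.asarray(heog_uv)
    return np.mean(gains)                   # mean gain: degrees per microvolt

def heog_to_degrees(heog_trace_uv, gain):
    """Scale a main-experiment HEOG trace (microvolts) to approximate degrees."""
    return np.asarray(heog_trace_uv) * gain
```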
Statistical analyses. Average touch times in milliseconds measured
from the start of the high-pitched tone that induced the change in gaze
direction were calculated for each group. To test whether touch times
differed for the four different head directions within each subject group
(CBs, LBs, SCs), we performed a repeated-measures ANOVA with the
within-subject factor head direction (15° left, 10° left, 10° right, 15°
right). In addition, we tested for differences in touch times between
groups by calculating a repeated-measures ANOVA with the within-subject factor head direction (15° left, 10° left, 10° right, 15° right) and the
between-subject factor group (CBs, LBs, SCs).
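For the within-subject part of this touch-time analysis, a repeated-measures ANOVA can be run per group with statsmodels. The toy data frame below only illustrates the expected long format (one mean touch time per participant and head direction); the values and column names are made up, and the between-group comparison would need a mixed ANOVA instead.

```python
import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Toy long-format table standing in for one group's data (illustrative values).
rng = np.random.default_rng(0)
subjects = [f"s{i}" for i in range(1, 13)]
head_dirs = ["L15", "L10", "R10", "R15"]
df = pd.DataFrame(
    [{"subject": s, "head_dir": h, "touch_ms": 1800 + rng.normal(0, 80)}
     for s, h in itertools.product(subjects, head_dirs)]
)

# One-way repeated-measures ANOVA with the within-subject factor head
# direction, as described above (run separately for each group).
res = AnovaRM(df, depvar="touch_ms", subject="subject", within=["head_dir"]).fit()
print(res.anova_table)
```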
The reaching endpoints recorded by the touch screen panel were subtracted from the respective target location (derived from the reaching
endpoints during the baseline reaching condition) to calculate horizontal
reaching errors. These were converted into degrees relative to the body
midline. One degree was ~6 mm. Since we varied the horizontal direction of the head and target in degrees (relative to the body midline), we only report the results of horizontal reaching errors in degrees as well, which were analyzed using R 2.11.0 and SPSS 17.0.
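A minimal sketch of this error computation, using the ~6 mm-per-degree conversion stated above; the argument names are illustrative, and which sign corresponds to a rightward error depends on the touch screen's axis convention, which is not specified here.

```python
import numpy as np

MM_PER_DEG = 6.0   # stated above: one degree corresponded to roughly 6 mm

def horizontal_reach_error_deg(baseline_target_x_mm, touch_x_mm):
    """Horizontal reaching error in degrees: the touched x-position is
    subtracted from the baseline-derived target location (removing each
    participant's idiosyncratic reach bias) and scaled by ~6 mm/deg."""
    err_mm = np.asarray(baseline_target_x_mm) - np.asarray(touch_x_mm)
    return err_mm / MM_PER_DEG

# a 12 mm offset between baseline target and touch corresponds to ~2 deg
assert np.isclose(abs(horizontal_reach_error_deg(0.0, 12.0)), 2.0)
```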
Figure 2. A, Head traces (dashed lines) for all participants (n = 12) of the three groups and eye traces (full lines) for the congenitally blind group (left, n = 8), the late blind group with real eyes (middle, n = 7), and the sighted control group (right, n = 12) during the main experiment. Eye movements shown in head-centered coordinates are depicted in microvolts (y-axis) and head movements are represented in degrees (x-axis). Time 0 ms marks the start of the auditory signal prompting the head turn. Participants were instructed to initiate the reaching movement after their head reached the predefined head position marked by a mechanical stopper (~1200 ms after the auditory signal). The vertical dark gray lines indicate the average time of touch for each group (CB, 1724 ms; LB, 1776 ms; SC, 2002 ms), which differed statistically neither between head displacements nor between groups (p > 0.1). B, Eye movements of the sighted control group (n = 12) transformed into degrees, shown for the main experiment (full lines) and the eye movement control condition (dashed lines). A, B, Head and/or eye traces are shown for the segmented time window of 100 ms before and up to 2000 ms after the high-pitched tone that induced the change in gaze direction or a saccade. Movement amplitudes were averaged across all trials of the four different head directions or saccade target positions (blue, 15° left; cyan, 10° left; orange, 10° right; red, 15° right). Reference lines are shown for all four head directions (light gray lines).
The main experiment contained three variables of interest: visual experience that varied across the three subject groups (CBs, LBs, SCs),
target position (0°, 5° left, 5° right), and head direction (15° left, 10° left,
10° right, 15° right). To determine how reach errors varied as a function of these variables, we used a regression analysis with a nested design, i.e., the multilevel analysis (MLA) for the nested variables, target location, and head direction. In addition to the hierarchical structure, the MLA can properly deal with empty cells and small numbers of cases in the present dataset. We performed a fixed-effects MLA with target position and head direction as repeated variables at the first level and participants with their different visual experience as the subject variable at the second level. We
defined fixed effects for all variables and interactions.
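A rough stand-in for this fixed-effects multilevel model, written with statsmodels' mixed-effects formula API; the column names are hypothetical and the exact model specification the authors used in SPSS/R may differ.

```python
import statsmodels.formula.api as smf

# Sketch of the multilevel analysis described above: reach errors modeled
# with fixed effects of group, head direction and target position (plus the
# group x head-direction interaction of interest) and a random intercept
# per participant as the second (subject) level.
def fit_mla(df):
    model = smf.mixedlm(
        "error_deg ~ C(group) * head_deg + C(target_deg)",
        data=df,
        groups=df["subject"],      # participants as the grouping (subject) level
    )
    return model.fit(reml=True)

# e.g. res = fit_mla(trials_df); print(res.summary())
```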
A change in head direction was accompanied by an eye movement in
the same direction in LB and SC participants. In the next step, we examined the extent to which reaching errors varied with the obtained head
and eye movements (in addition to visual experience). Therefore, we
defined the magnitude of change in head direction and eye position by
the maximal amplitude of movement within each trial. Since both movements were highly correlated, we conducted two separate MLAs including either measured change in head direction or measured change in eye
position as covariate (z-standardized). Both head direction and eye position were treated as repeated variables at the first level and participants
differing in visual experience were treated as a subject variable at the
second level.
To investigate the impact of head and eye movements on reach errors,
we exploited the fact that our LB group included both late blind individuals with two real movable eyes (LB_RE) and late blind individuals with
one or two glass eyes (LB_GE). While both LB_RE and LB_GE had visual
experience before they lost vision, only LB_RE showed systematic head-and-eye coupling, whereas LB_GE with two glass eyes lacked eye movements and LB_GE with one glass eye showed low-amplitude eye
movements that were more variable than in LB_RE and elicited no systematic pattern with respect to the head turn. This enabled us to examine
the influence of head direction and eye position on reach errors by comparing LB_RE and LB_GE. Including the CBs and SCs, we had four subject groups that differed not only in their visual experience but also
in their proprioceptive and oculomotor feedback from the eyes. With
these four groups, we again conducted the MLA with target position and
head direction as repeated variables at the first level and participants
divided into four groups as subject variables at the second level. Likewise,
we again used the MLA for four groups that additionally contained the
measured change in head direction as covariate. In this way, we obtained
indirect evidence about the role of eye position changes in addition to
visual experience for updating proprioceptive targets for reaching.
Furthermore, we were interested in how accurately CBs, LBs, and SCs
can localize proprioceptive target positions in the dark when their gaze/
head is directed toward the target, both when the proprioceptive target is
at the target site (online) and when it is removed before reaching (remembered) in these two proprioceptive-reaching control conditions.
One-sample t tests were conducted to test whether the horizontal reaching errors significantly deviated from zero. We performed three t tests
(one per target) for all groups and proprioceptive control tasks (online
and remembered) and corrected the α value accordingly by Bonferroni
correction.
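The control-condition test described here (three one-sample t tests per group and task, against zero, with a Bonferroni-adjusted α) could look like the following SciPy sketch; the data layout and function name are hypothetical.

```python
import numpy as np
from scipy.stats import ttest_1samp

def control_condition_tests(errors_by_target, alpha=0.05):
    """One-sample t tests of horizontal reach errors against zero, one per
    target, with a Bonferroni-adjusted alpha (alpha / number of tests).
    `errors_by_target` maps a target label to an array of per-participant
    mean errors (assumed data layout)."""
    alpha_corr = alpha / len(errors_by_target)
    results = {}
    for target, errs in errors_by_target.items():
        t, p = ttest_1samp(np.asarray(errs), popmean=0.0)
        results[target] = {"t": t, "p": p, "significant": p < alpha_corr}
    return results

# e.g. control_condition_tests({"0deg": [...], "5L": [...], "5R": [...]})
```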
Results
Head and eye movements
Figure 2A illustrates the head and eye movements for the four
changes in head direction performed in the main experiment (15°
left, 10° left, 10° right, 15° right) averaged across trials and participants per group (CB, LB, SC). All participants followed the
instructions and turned their head in the correct direction and
with the required amplitude.
While CB participants (who completely lacked visual experience) produced no systematic movements of the eyes along with the head, the LBs and SCs did move their eyes in the same direction and ~300 ms before they initiated the head movement. Thus, at the time of maximal head displacement (corresponding to the time when participants were asked to initiate the reach-to-touch movement; cf., Fig. 2A, ~1200 ms after the auditory signal), eyes
were eccentric in the LBs and SCs but not in the CBs. Shifts in eye
position were larger for SC than LB; however, gaze-dependent
reaching errors tend to saturate beyond 10° gaze relative to target
(Bock, 1986; Henriques et al., 1998). The sighted individuals continued to maintain their eyes eccentrically with respect to the
head; this likely reflects the natural contribution of the eyes and
head seen for volitional large gaze shifts (Goossens and Van Opstal, 1997; Phillips et al., 1999; Populin and Rajala, 2011). Moreover, it is known for peripheral gaze shifts that the eyes of the
sighted deviate further than the head (Henriques and Crawford,
2002; Henriques et al., 2003). However, if no fixation stimulus is
present and head movements are constrained, the deviations of the eyes relative to the head seem to be even larger and no vestibulo-ocular reflex seems to arise. In contrast, LBs did not maintain their eyes eccentric along with the head, but the eyes appear to mechanically drift back to the center of the orbit. At the time of touch (Fig. 2A, vertical gray lines), on average, 1840 ms after the auditory signal (which was consistent across groups and across head directions; F stats, p > 0.1), the head had reached and maintained its final position, while the eyes appeared to revert back to the center of the orbit (aligned with the head) for LBs, but only partly so for the SC. In summary, we find that head turns evoked accompanying eye movements in sighted people and in the late blind (although somewhat diminished in the latter), but not in congenitally blind individuals.

Figure 3. Horizontal reaching errors represented in degrees on the left y-axis and approximate millimeters on the right y-axis. A–C, Errors are averaged across the three targets and across participants in each group. A, Proprioceptive-reaching control conditions: online and remembered reaching conditions in the congenitally blind (red), late blind (blue), and sighted control (green) groups. B, C, Main experiment: Errors plotted as a function of the four different changes in head direction for congenitally blind (red), late blind (blue), and sighted control (green) groups (B), and with the late blind divided into subgroups with glass eyes (violet) and with real movable eyes (cyan) (C). Error bars are SEM. Target location is represented by the dotted horizontal line at zero degrees on the y-axis. D, E, Individual differences of reaching errors to the left (mean of 10° and 15° left) minus reaching errors to the right (mean of 10° and 15° right) for congenitally blind (red), late blind (blue), and sighted control (green) groups (D), and with the late blind divided into subgroups with glass eyes (violet) and with real movable eyes (cyan) (E).
Reaching to targets in gaze direction
When gaze was directed toward the target, participants reached
very accurately to both the online and remembered proprioceptive targets regardless of their visual experience (Fig. 3A). The
reach errors were not significantly different from zero in the proprioceptive-reaching control conditions in all three groups (Bonferroni-corrected p > 0.1).
Reaching to remembered targets after change in gaze
direction (target updating)
To determine the effect of gaze shifts and visual experience on
reaching errors, we plotted the horizontal reach errors as a function of head direction for the three groups (Fig. 3B) and for the
four groups with the LB divided into those with real eyes and
those with one or two glass eyes (Fig. 3C). The results of the MLA
analysis indicate that reaching errors are affected by the interaction of head direction and group (F(2,36.08) = 17.33, p < 0.001). Hence, reaching errors can be well explained by the change in head direction in SCs (slope = −0.10, t(36.13) = −5.64, p < 0.01) and in LBs (slope = −0.08, t(35.96) = −4.28, p < 0.01), but not in CBs (slope = −0.01, t(35.96) = −0.69, p = 0.49). In addition to a
change in head turn, SCs and LBs who produced these systematic
reaching errors also showed a corresponding change in eye position, in contrast to the CBs (Fig. 2 A). Thus, reaching errors of
participants with visual experience (SCs and LBs) systematically
varied with gaze, i.e., they misreached to the right of the target if
gaze was directed to the left and vice versa. In contrast, participants who never had any visual experience (CBs) produced
reaching errors that did not vary with changes in gaze direction,
but were merely shifted to the right of the target (intercept = 5.08, t(35.7) = 4.27, p < 0.01). This general rightward bias was
observed in all following analyses.
To explore the significance of eye position on reach errors, we
considered the two late blind subgroups, LB_RE (change in head
direction accompanied by a systematic change in eye position)
and LB_GE (change in head direction accompanied by no eye
movements in subjects with two glass eyes and by small and unsystematic changes in eye movements in subjects with one glass
eye). We found that the interaction of head direction and group
influenced reaching (F(3,36.02) = 13.50, p < 0.001). Thus, as shown in Figure 3C, reach errors varied significantly and somewhat linearly with head direction in SCs (slope = −0.10, t(36.17) = −5.66, p < 0.01) and LB_REs (slope = −0.10, t(35.9) = −4.90, p < 0.01). But head shifts did not significantly influence reaching
errors in the CBs and LB_GEs. Comparable to the results in the
SCs, the LB_REs misreached the targets in the direction opposite
to the shift in head and corresponding eye (gaze). This means that
only participants with visual experience and real movable eyes
(i.e., SCs and LB_REs) demonstrated gaze-dependent reaching errors.

We further calculated individual reaching errors as the difference of the average reaching errors for leftward head direction minus the average reaching errors for rightward head direction. If participants show the typical gaze-dependent error pattern as observed in previous studies (Henriques et al., 1998), positive difference errors are expected. SCs and LBs revealed a positive difference error pattern that significantly differed from zero (SCs: t(11) = 5.95, p < 0.001; note, we also found a significant effect when we exclude the participant with the highest positive difference error: t(10) = 8.45, p < 0.001; LBs: t(11) = 5.04, p < 0.001) in contrast to CBs where difference errors varied around zero (t(11) = 2.15, p > 0.05; Fig. 3D). For the two late blind groups, we found that difference errors for LB_REs were more positive than zero (t(11) = 5.73, p < 0.001) but not for LB_GEs (t(11) = 2.60, p > 0.05; Fig. 3E).

Next, we tested how well measured movements of the head and eye can explain reach errors as a function of visual experience. Since head and eye movements were highly correlated (Fig. 2A), we separately analyzed horizontal reaching errors as a function of change in head direction (Fig. 4A) or as a function of change in eye position (Fig. 4B) using MLAs. For measured change in head direction, we found that reaching errors varied as a function of head direction and group (F(2,36.39) = 12.01, p < 0.001), i.e., that head direction influenced reaching in SCs (slope = −1.53, t(36.35) = −4.90, p < 0.01) and in LBs (slope = −0.84, t(36.54) = −2.71, p < 0.05), but not in CBs (slope = −0.34, t(36.53) = −1.52, p = 0.14). However, the effect found in LBs was mainly driven by LB_REs, since the analysis with four groups (Fig. 4A) also resulted in an interaction of head direction and group (F(3,36.34) = 9.92, p < 0.001) and revealed a significant effect of measured head direction on reaching error only for SCs (slope = −1.52, t(36.32) = −5.13, p < 0.01) and LB_REs (slope = −1.14, t(36.55) = −3.36, p < 0.01), but not for LB_GEs (slope = −0.37, t(36.24) = −0.98, p = 0.34) or CBs (slope = −0.35, t(36.49) = −1.62, p = 0.11). Hence, only participants with past visual experience and two real moveable eyes (SCs and LB_REs) systematically misestimated proprioceptive targets opposite to the direction of their head movement.

To determine the relationship between reaching errors and measured changes in eye position, we could only include those participants with reliable HEOG signals (N: CB = 8, LB_RE = 7, SC = 12). The analysis showed that reaching errors varied as a function of eye position and group (F(2,23.85) = 7.31, p < 0.001), meaning that the measured change in eye position significantly influenced reaching error only in SCs (slope = −1.70, t(23.31) = −3.82, p < 0.01) and in LB_REs (slope = −1.04, t(24.8) = −2.34, p < 0.05). Thus, only participants with systematic eye signals and former visual experience (SCs and LB_REs) misreached targets in the opposite direction to their eye movements that accompanied shifts in head direction.

Furthermore, we explored whether late blind and congenitally blind individuals differed in their reaching error depending on the years of blindness or on the age at mobility training, as we showed in a previous study where we assessed spatial discrimination ability of sighted and congenitally blind individuals (Fiehler et al., 2009). However, we did not find any interaction or effect of those variables in the present data (years of blindness: F(1,24) = 1.55, p = 0.23; age at mobility training: F(1,24) = 0.67, p = 0.42).

Figure 4. A, B, Horizontal reaching errors as a function of standardized change in head direction (A) and eye position (B), averaged across targets and plotted for the congenitally blind, the sighted controls, and the late blind, separated into those with glass eyes and with real movable eyes. Colors, scales, and error bars as in Figure 3.

Discussion
The aim of this study was to examine whether gaze-dependent updating of targets for action depends on visual experience early in life and, if so, whether gaze-dependent coding still remains after the loss of vision. We examined spatial updating of proprioceptive reach targets in congenitally blind, late blind, and sighted people by having them move their head after being presented with a proprioceptive target (left thumb) but before reaching toward this remembered target. A change in head direction was accompanied by a corresponding change in eye position (gaze shift) due to inherent head-and-eye coupling, but only within subjects who had visual experience (sighted and late blind). Participants systematically misreached toward the target depending on shifted gaze direction. However, in the late blind, this effect was mainly driven by the subgroup of people with two real moveable eyes. Participants who never acquired any visual experience in life (congenitally blind) showed a general rightward bias regardless of gaze direction. Thus, only participants with visual experience early in life and two moveable eyes (sighted and late blind with real eyes) demonstrated reach errors that varied as a function of gaze, i.e., change of head direction and eye position.

Our results may imply that the brain updates remembered proprioceptive targets with respect to gaze (eye angle + head angle), similar to remembered visual targets following movements of the eyes and/or movements of the head (Henriques and Crawford, 2002; Henriques et al., 2003). This would be consistent with previous studies that reported gaze-dependent spatial updating of proprioceptive reach targets following saccadic eye shifts to visual fixation points (Pouget et al., 2002; Fiehler et al., 2010; Jones and Henriques, 2010). Here, we showed that sighted people also produced gaze-dependent reaches to proprioceptive targets following movements of the head and accompanying eyes, even when they were not visually directed. More importantly, we are the first to investigate how blind people (with early or no visual experience) compensate for movements of the head that occur between proprioceptive target presentation and reaching to the remembered site.

Role of developmental vision
Gaze-dependent reaching errors were only present in those individuals who have or had visual experience, i.e., sighted and late blind but not congenitally blind people. This suggests that the process or format by which targets are coded in spatial memory is not innate but rather develops in interaction with visual input
early in life and remains even after loss of vision. Our results are consistent with studies in the auditory domain that found that
dichotic sound stimuli are misperceived opposed to head direction in sighted but not in congenitally blind individuals (Lewald,
2002; Schicke et al., 2002). However, those results and ours could
also be explained by a misestimation of head rotation, rather than
a representation of space anchored to gaze. Yet, the latter explanation that targets are coded and updated in gaze- or eye-centered coordinates would be consistent with the results of other
studies investigating the reference frames used in coding and
updating remembered visual and proprioceptive targets following intervening eye movements (Henriques et al., 1998; Pouget et
al., 2002; Fiehler et al., 2010; Jones and Henriques, 2010). Even if
it were the case that reaching errors to remembered targets were
due to a misestimate of a subsequent head turn, the absence of
gaze-dependent errors in congenitally blind people suggests that
using such estimates for localizing proprioceptive targets may
require early visual experience. However, this same lack of gaze-dependent errors may be better explained by congenitally blind
individuals primarily using a stable internal or anatomical reference frame fixed to the body, whereas sighted and late blind
individuals preferably code and update space with respect to external coordinates, i.e., they take into account relative changes
between target and gaze (Gaunet and Rossetti, 2006; Pasqualotto
and Newell, 2007; Röder et al., 2007; Sukemiya et al., 2008).
It is known that external coding of space develops on the basis
of visual input during critical periods early in life. A recent study
by Pagel et al. (2009) reported that sighted children remap tactile
input into an external reference frame after the age of 5 years.
Remapping of stimuli of different sensory modalities also requires optimal multisensory integration, which probably does
not develop before the age of 8 years (Ernst, 2008). Such functional changes are grounded in the maturation of different brain
areas. The posterior parietal cortex (PPC), which plays a central
role in gaze-centered spatial updating (Duhamel et al., 1992; Batista et al., 1999; Nakamura and Colby, 2002; Medendorp et al.,
2003; Merriam et al., 2003), slowly increases in gray matter volume and reaches the maximum at ~10–12 years of age (Giedd et
al., 1999), i.e., higher-order association cortices like the PPC mature even during early adulthood after maturity of lower-order
sensory cortices (Gogtay et al., 2004; for review, see Lenroot and
Giedd, 2006). Moreover, corpus callosum areas substantially increase from ages 4 to 18 years (Giedd et al., 1996), which facilitates interhemispheric transfer and thus might enhance spatial
remapping (cf., Berman et al., 2005). As a consequence, a lack of
vision during that specific time may influence the development of
the corpus callosum, since congenitally blind compared with
sighted and late blind people show a reduced volume of a callosal
region that primarily connects visual–spatial areas in the PPC
(Leporé et al., 2010). Such changes in the structural and functional organization of the brain during childhood might trigger
the switch from internal to external spatial coding strategies.
When vision is lost after brain areas involved in spatial remapping have matured, as likely was the case for most, if not all, of the
late blind individuals in this study, spatial coding strategies similar to the sighted are implemented and, more importantly, may
remain dominant. This is consistent with our results: the gaze-dependent reaching errors for the late blind with two real eyes
were similar to the sighted, both when subjects who developed
blindness before the age of 8 years (n = 3) were included in the
analysis and when they were not. However, when vision is completely lacking during development, brain areas seem to mature
differently and reorganize as a consequence of visual loss. Such
structural brain changes in congenitally blind individuals probably induce functional changes in how spatial memory is coded.
Contribution of the eyes to spatial updating
To assess the contribution of the eyes to spatial updating, we divided
late blind individuals into those with one or two glass eyes and those
with two real eyes. We found that reaching errors varied as a function
of measured head direction and eye position only in individuals with
early visual experience and real movable eyes (sighted and late blind
with two real eyes). This suggests that spatial updating of proprioceptive targets with respect to gaze requires visual input during development, as well as the ability to sense or control the eyes. This
ability would naturally be absent in late blind people with two glass
eyes, and is probably lacking in congenitally blind people due to
atrophy of optomotor muscles (Leigh and Zee, 1980; Kömpf and
Piper, 1987). One can speculate that late blind people with one glass
and one real eye suffer from a combined effect, i.e., absent sensory
signals from the missing eye and muscle atrophy of the real eye due
to the loss of coherent signals from both eyes. This may be reflected
by the variable, low-amplitude electroocular signals that we recorded from the LB_GE. In contrast, late blind individuals with two
real eyes produce eye movements comparable to the sighted individuals (Leigh and Zee, 1980; Kömpf and Piper, 1987) and therefore
probably have access to the extraretinal signals necessary for updating reach targets with respect to gaze direction (Klier and Angelaki,
2008). Thus, it may be that early visual experience provides the basis
for normal oculomotor sensation and control, which in turn allows
for spatial updating of targets relative to gaze.
Given that reaching errors tend to vary systematically both when the eyes alone and when the eyes and head are shifted following target presentation in sighted individuals, and not (or less so) when only the head
turns but the eyes remain directed toward the target site (Henriques
and Crawford, 2002; Henriques et al., 2003), the movement of the
eyes away from the target appears to be critical. This is consistent
with the interpretation that reaching errors systematically vary as a
function of the angular distance between the target site and gaze
direction. Thus, although our goal was to explore how target locations are updated depending on subsequent gaze shifts, gaze could
only reliably be varied by having participants move the head (given
the poor control of the eyes in the CB and the lack of eyes in some
LB). Yet, it is significant that only when the eyes contributed to the
gaze shift did we see a pattern of errors consistent with previous
results on updating of remembered visual and proprioceptive targets
in the sighted. Since we observed gaze-dependent reaching errors to
proprioceptive targets when the direction of head rotation was restricted to a predefined path, this suggests that gaze does not have to
be aimed toward a fixation target to use this movement to update
reaching targets.
To conclude, our results suggest that gaze-dependent spatial
updating develops on the basis of early visual experience. Such
spatial coding strategies, once established, seem to be used even
after later loss of vision as long as signals from both eyes and head
(gaze) are available.
References
Andersen RA, Buneo CA (2002) Intentional maps in posterior parietal cortex. Annu Rev Neurosci 25:189 –220.
Batista AP, Buneo CA, Snyder LH, Andersen RA (1999) Reach plans in eyecentered coordinates. Science 285:257–260.
Berman RA, Heiser LM, Saunders RC, Colby CL (2005) Dynamic circuitry for
updating spatial representations. I. Behavioral evidence for interhemispheric
transfer in the split-brain macaque. J Neurophysiol 94:3228 –3248.
Bock O (1986) Contribution of retinal versus extraretinal signals towards visual
localization in goal-directed movements. Exp Brain Res 64:476 – 482.
Reuschel et al. • Spatial Updating without Vision
Crawford JD, Henriques DY, Medendorp WP (2011) Three-dimensional
transformations for goal-directed action. Annu Rev Neurosci 34:309 –331.
Cressman EK, Henriques DY (2009) Sensory recalibration of hand position
following visuomotor adaptation. J Neurophysiol 102:3505–3518.
Cressman EK, Henriques DY (2010) Reach adaptation and proprioceptive
recalibration following exposure to misaligned sensory input. J Neurophysiol 103:1888 –1895.
Duhamel JR, Colby CL, Goldberg ME (1992) The updating of the representation of visual space in parietal cortex by intended eye movements. Science 255:90 –92.
Ernst MO (2008) Multisensory integration: a late bloomer. Curr Biol
18:R519 –R521.
Fiehler K, Reuschel J, Rösler F (2009) Early non-visual experience influences
proprioceptive-spatial discrimination acuity in adulthood. Neuropsychologia 47:897–906.
Fiehler K, Rösler F, Henriques DY (2010) Interaction between gaze and visual
and proprioceptive position judgements. Exp Brain Res 203:485– 498.
Fine I, Wade AR, Brewer AA, May MG, Goodman DF, Boynton GM, Wandell
BA, MacLeod DI (2003) Long-term deprivation affects visual perception and cortex. Nat Neurosci 6:915–916.
Gaunet F, Rossetti Y (2006) Effects of visual deprivation on space representation: immediate and delayed pointing toward memorised proprioceptive targets. Perception 35:107–124.
Ghahramani Z, Wolpert DM, Jordan MI (1996) Generalization to local remappings of the visuomotor coordinate transformation. J Neurosci
16:7085–7096.
Giedd JN, Vaituzis AC, Hamburger SD, Lange N, Rajapakse JC, Kaysen D,
Vauss YC, Rapoport JL (1996) Quantitative MRI of the temporal lobe,
amygdala, and hippocampus in normal human development: ages 4 –18
years. J Comp Neurol 366:223–230.
Giedd JN, Blumenthal J, Jeffries NO, Castellanos FX, Liu H, Zijdenbos A, Paus
T, Evans AC, Rapoport JL (1999) Brain development during childhood
and adolescence: a longitudinal MRI study. Nat Neurosci 2:861– 863.
Gogtay N, Giedd JN, Lusk L, Hayashi KM, Greenstein D, Vaituzis AC, Nugent
TF 3rd, Herman DH, Clasen LS, Toga AW, Rapoport JL, Thompson PM
(2004) Dynamic mapping of human cortical development during childhood through early adulthood. Proc Natl Acad Sci U S A 101:8174 – 8179.
Goossens HH, Van Opstal AJ (1997) Human eye-head coordination in two
dimensions under different sensorimotor conditions. Exp Brain Res
114:542–560.
Harris CS (1963) Adaptation to displaced vision: visual, motor, or proprioceptive change? Science 140:812– 813.
Hatada Y, Rossetti Y, Miall RC (2006) Long-lasting aftereffect of a single
prism adaptation: shifts in vision and proprioception are independent.
Exp Brain Res 173:415– 424.
Held R, Bauer JA Jr (1974) Development of sensorially-guided reaching in
infant monkeys. Brain Res 71:265–271.
Held R, Freedman SJ (1963) Plasticity in human sensorimotor control. Science 142:455– 462.
Henriques DY, Crawford JD (2002) Role of eye, head, and shoulder geometry in
the planning of accurate arm movements. J Neurophysiol 87:1677–1685.
Henriques DY, Klier EM, Smith MA, Lowy D, Crawford JD (1998) Gaze-centered remapping of remembered visual space in an open-loop pointing task. J Neurosci 18:1583–1594.
Henriques DY, Medendorp WP, Gielen CC, Crawford JD (2003) Geometric
computations underlying eye-hand coordination: orientations of the two
eyes and the head. Exp Brain Res 152:70 –78.
Jakobson LS, Goodale MA (1989) Trajectories of reaches to prismatically displaced targets: evidence for “automatic” visuomotor recalibration. Exp
Brain Res 78:575–587.
Jones SA, Henriques DY (2010) Memory for proprioceptive and multisensory
targets is partially coded relative to gaze. Neuropsychologia 48:3782–3792.
Klier EM, Angelaki DE (2008) Spatial updating and the maintenance of visual constancy. Neuroscience 156:801– 818.
Kömpf D, Piper HF (1987) Eye movements and vestibulo-ocular reflex in
the blind. J Neurol 234:337–341.
Lackner JR (1973) Visual rearrangement affects auditory localization. Neuropsychologia 11:29 –32.
Leigh RJ, Zee DS (1980) Eye movements of the blind. Invest Ophthalmol Vis
Sci 19:328 –331.
J. Neurosci., February 15, 2012 • 32(7):2422–2429 • 2429
Lenroot RK, Giedd JN (2006) Brain development in children and adolescents: insights from anatomical magnetic resonance imaging. Neurosci
Biobehav Rev 30:718 –729.
Leporé N, Voss P, Lepore F, Chou YY, Fortin M, Gougoux F, Lee AD, Brun C,
Lassonde M, Madsen SK, Toga AW, Thompson PM (2010) Brain structure changes visualized in early- and late-onset blind subjects. Neuroimage 49:134 –140.
Levin N, Dumoulin SO, Winawer J, Dougherty RF, Wandell BA (2010) Cortical maps and white matter tracts following long period of visual deprivation and retinal image restoration. Neuron 65:21–31.
Lewald J (2002) Vertical sound localization in blind humans. Neuropsychologia 40:1868 –1872.
Lewis TL, Maurer D (2005) Multiple sensitive periods in human visual development: evidence from visually deprived children. Dev Psychobiol
46:163–183.
Maurer D, Lewis TL (2001) Visual acuity: the role of visual input in inducing
postnatal change. Clin Neurosci Res 1:239 –247.
Medendorp WP, Goltz HC, Vilis T, Crawford JD (2003) Gaze-centered updating of visual space in human parietal cortex. J Neurosci 23:6209 – 6214.
Merabet LB, Pascual-Leone A (2010) Neural reorganization following sensory loss: the opportunity of change. Nat Rev Neurosci 11:44 –52.
Merriam EP, Genovese CR, Colby CL (2003) Spatial updating in human
parietal cortex. Neuron 39:361–373.
Nakamura K, Colby CL (2002) Updating of the visual representation in
monkey striate and extrastriate cortex during saccades. Proc Natl Acad Sci
U S A 99:4026 – 4031.
Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9:97–113.
Ooi TL, Wu B, He ZJ (2001) Distance determined by the angular declination
below the horizon. Nature 414:197–200.
Pagel B, Heed T, Röder B (2009) Change of reference frame for tactile localization during child development. Dev Sci 12:929 –937.
Pascual-Leone A, Amedi A, Fregni F, Merabet LB (2005) The plastic human
brain cortex. Annu Rev Neurosci 28:377– 401.
Pasqualotto A, Newell FN (2007) The role of visual experience on the representation and updating of novel haptic scenes. Brain Cogn 65:184 –194.
Phillips JO, Ling L, Fuchs AF (1999) Action of the brain stem saccade generator during horizontal gaze shifts. I. Discharge patterns of omnidirectional pause neurons. J Neurophysiol 81:1284 –1295.
Populin LC, Rajala AZ (2011) Target modality determines eye-head coordination in non-human primates: implications for gaze control. J Neurophysiol 106:2000 –2011.
Pouget A, Ducom JC, Torri J, Bavelier D (2002) Multisensory spatial representations in eye-centered coordinates for reaching. Cognition
83:B1–B11.
Putzar L, Goerendt I, Lange K, Rösler F, Röder B (2007) Early visual deprivation impairs multisensory interactions in humans. Nat Neurosci
10:1243–1245.
Redding GM, Wallace B (1992) Effects of pointing rate and availability of
visual feedback on visual and proprioceptive components of prism adaptation. J Mot Behav 24:226 –237.
Röder B, Kusmierek A, Spence C, Schicke T (2007) Developmental vision
determines the reference frame for the multisensory control of action.
Proc Natl Acad Sci U S A 104:4753–4758.
Schicke T, Demuth L, Röder B (2002) Influence of visual information on the
auditory median plane of the head. Neuroreport 13:1627–1629.
Simani MC, McGuire LM, Sabes PN (2007) Visual-shift adaptation is composed of separable sensory and task-dependent effects. J Neurophysiol
98:2827–2841.
Sukemiya H, Nakamizo S, Ono H (2008) Location of the auditory egocentre
in the blind and normally sighted. Perception 37:1587–1595.
van Beers RJ, Baraduc P, Wolpert DM (2002) Role of uncertainty in sensorimotor control. Philos Trans R Soc Lond B Biol Sci 357:1137–1145.
Vetter P, Goodbody SJ, Wolpert DM (1999) Evidence for an eye-centered
spherical representation of the visuomotor map. J Neurophysiol 81:935–939.
Wang J, Sainburg RL (2005) Adaptation to visuomotor rotations remaps
movement vectors, not final positions. J Neurosci 25:4024 – 4030.
Zwiers MP, Van Opstal AJ, Paige GD (2003) Plasticity in human sound localization induced by compressed spatial vision. Nat Neurosci 6:175–181.