ARTICLES
Integration of visual motion and locomotion in mouse
visual cortex
© 2013 Nature America, Inc. All rights reserved.
Aman B Saleem1,2, Aslı Ayaz1, Kathryn J Jeffery2, Kenneth D Harris1,3,4 & Matteo Carandini1
Successful navigation through the world requires accurate estimation of one’s own speed. To derive this estimate, animals
integrate visual speed gauged from optic flow and run speed gauged from proprioceptive and locomotor systems. The primary
visual cortex (V1) carries signals related to visual speed, and its responses are also affected by run speed. To study how V1
combines these signals during navigation, we recorded from mice that traversed a virtual environment. Nearly half of the V1
neurons were reliably driven by combinations of visual speed and run speed. These neurons performed a weighted sum of the
two speeds. The weights were diverse across neurons, and typically positive. As a population, V1 neurons predicted a linear
combination of visual and run speeds better than either visual or run speeds alone. These data indicate that V1 in the mouse
participates in a multimodal processing system that integrates visual motion and locomotion during navigation.
There is increasing evidence that activity in area V1 is determined not
only by the patterns of light falling on the retina1, but also by multiple nonvisual factors. These factors include sensory input from other
modalities2,3, the allocation of spatial attention4 and the likelihood
of an impending reward5. In mice, in particular, firing of V1 neurons
shows strong changes with locomotion6–8, but the precise form and
function of this modulation are unclear. One possibility is that locomotion simply changes the gain of V1 neurons, increasing their responses
to visual stimuli when animals are running compared to when they
are stationary6. Another possibility is that V1 neurons respond to the
mismatch between what the animal sees and what is expected based on
locomotion7. Overall, the computational function of locomotor inputs
to V1 is still a mystery. Might they be useful for navigation?
One of the primary roles of vision is to help animals navigate.
Successful navigation requires an accurate estimate of one’s own
speed9–11. To obtain this estimate, animals and humans integrate
a measure of speed gauged from optic flow with one gauged from
the proprioceptive and locomotor systems9–15. Neural correlates of
locomotor speed have been found in high-level structures such as the
hippocampus16,17, but the neural substrates in which multiple input
streams are integrated to produce speed estimates are unknown.
Integration of multiple inputs has been observed in several neural circuits13,15,18–20 and typically involves mixed neuronal representations13,21–24. In early processing, the input streams are
integrated into a distributed population code, in which the input
signals are weighted differently by different neurons21,24. Such a
mixed representation allows the integrated signal to be read out by
higher-level structures22. Properties of this mixed representation
(such as the statistical distribution of weights used by different
neurons) are not random but adapted to the specific integration that
must be performed21,25.
Here we studied how visual and locomotion signals are combined
in mouse V1, using a virtual reality system in which visual input was
either controlled by locomotion or was independent of locomotion.
We found that most V1 neurons responded to locomotion even in
the dark. The dependence of these responses on running speed was
gradual, and in many cells it was non-monotonic. In the presence
of visual inputs, most V1 neurons that were responsive encoded a
weighted sum of visual motion and locomotor signals. The weights
differed across neurons and were typically positive. As a population,
V1 neurons encoded positively weighted averages of speed derived
from visual and locomotor inputs. We suggest that such a representation facilitates computations of self-motion, contributing to the estimation of an animal's speed through the world.
To study the effects of visual motion and locomotion, we recorded
from mouse V1 neurons in a virtual environment based on an air-suspended spherical treadmill26,27. The virtual environment was a
corridor whose walls, ceiling and floor were adorned with a white-noise pattern (Fig. 1a). We presented four prominent landmarks
(three gratings and a plaid) at equal distances along the corridor
(Fig. 1b). Head-fixed mice viewed this environment on three computer monitors arranged to cover 170° of visual angle (Fig. 1a) and traversed the corridor voluntarily by running on the treadmill at speeds
of their own choosing (Fig. 1c,d and Supplementary Fig. 1).
While the mice traversed this environment, we recorded from populations of V1 neurons with multisite electrodes and identified the spike
trains of single neurons and multiunit activity with a semiautomatic
spike-sorting algorithm28 (Supplementary Fig. 2). We then used grating stimuli to measure the receptive fields of the neurons (receptive field
size of 24 ± 2° (±s.e.), n = 81 neurons, receptive field centers with 10°–70°
1UCL Institute of Ophthalmology, University College London, London, UK. 2Department of Cognitive, Perceptual and Brain Sciences, University College London,
London, UK. 3UCL Institute of Neurology, University College London, London, UK. 4Department of Neuroscience, Physiology and Pharmacology, University College
London, London, UK. Correspondence should be addressed to A.B.S. ([email protected]).
Received 9 July; accepted 4 October; published online 3 November 2013; doi:10.1038/nn.3567
azimuth, n = 84 neurons, semisaturation contrast of 23 ± 3% (±s.e.), n = 38 neurons).
To measure the features that influenced
neural activity in the virtual environment,
we adopted a technique previously used for analysis of hippocampal
place cells29,30. For each neuron, we estimated the firing rate as a function of one or more predictor variables (for example, speed). Using a
separate data segment (the ‘test set’), we defined a prediction quality
measure Q as the fraction of the variance of the firing rate explained
by the predictor variables. This measure of prediction quality does
not require that stimuli be presented multiple times (Supplementary
Fig. 3), a key advantage when analyzing activity obtained during self-generated behavior.
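The prediction-quality measure Q can be illustrated with a minimal sketch: Q is one minus the ratio of residual to total firing-rate variance, computed on a held-out test set. The function name and interface below are hypothetical, not the authors' code.

```python
import numpy as np

def prediction_quality(rate_test, rate_pred):
    """Fraction of firing-rate variance explained on a held-out test set.

    Q = 1 - SS_residual / SS_total. Because the comparison is against
    held-out data rather than trial-averaged responses, repeated stimulus
    presentations are not required.
    """
    rate_test = np.asarray(rate_test, dtype=float)
    rate_pred = np.asarray(rate_pred, dtype=float)
    ss_res = np.sum((rate_test - rate_pred) ** 2)
    ss_tot = np.sum((rate_test - rate_test.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A perfect prediction gives Q = 1; predicting the mean rate at every time bin gives Q = 0, and Q > 0 indicates that the predictor variables carry information about the firing rate.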
We first measured responses of V1 neurons in a closed-loop condition, where the virtual environment was yoked to running speed and
thus faithfully reflected the movement of the animal in the forward
direction. For each neuron, we computed our ability to predict the
firing rate based on position alone (QP), on speed alone (QS), and on
both position and speed (QPS; Supplementary Figs. 4 and 5). The
responses of most V1 neurons (181/194 neurons) were predictable
based on both the position and speed in the environment (QPS > 0;
Supplementary Fig. 5). Here we concentrate on 110 of these neurons
whose firing rates were highly repeatable (QPS > 0.1). Some of these
neurons (51/110 neurons) showed clear modulation in firing rate as
a function of position (QP > 0.1). We expected to observe this tuning for position because of the different visual features present along
the corridor, and indeed neurons often responded most strongly as
the animal passed the visual landmarks (Fig. 1e and Supplementary
Fig. 5). The response of most neurons (81/110 neurons) also showed
a clear dependence on speed (QS > 0.1). Typically, speed explained
a larger fraction of the response than position did (75/110 neurons
with QS > QP, Fig. 1f,g). Indeed, for many neurons the responses were
as well predicted by speed alone as by speed and position together
Figure 1 Neuronal responses in V1 are
strongly influenced by speed. (a) Examples
of visual scenes seen by the mouse during
virtual navigation. (b) Schematic of the virtual
environment: a corridor with four landmarks
(three gratings and one plaid). Movement in
the environment was restricted to the forward
direction. Dots indicate the positions of the
example scenes in a. (c) Paths taken by a mouse
in 15 runs through the corridor in a session.
Representative example of 11 experimental
sessions. (d) Paths shown in c, expressed in
terms of speed at each position. Red dots,
spikes generated by a V1 neuron. (e,f) Firing
rate of two neurons as a function of animal
position and speed (bottom left); dependence
of firing rate on position alone (bottom right);
dependence of firing rate on speed alone;
the leftmost point corresponds to all speeds
≤1 cm s−1 (top left). Numbers to the right of
color bars indicate neuron identity. Error bars,
s.d. over n = 10 training sets. Representative
examples are shown from 110 neurons with
QPS > 0.1. (g) Relative power to predict firing rate
using position alone (QP/QPS) and speed alone
(QS/QPS). Dotted line denotes QP + QS = QPS.
Red dots indicate neurons in e,f. Solid dots
indicate well-isolated single units. Example
neurons shown here and in all figures are all
well-isolated units.
(QS/QPS = ~1, Fig. 1g and Supplementary Fig. 5). Speed, therefore,
exerts a powerful influence on the responses of most V1 neurons in
the virtual environment.
But what kind of speed is exerting this influence? In virtual reality,
the speed at which the virtual environment moves past the animal
(‘virtual speed’) is identical to the speed with which the animal runs
on the air-suspended ball (‘run speed’). Neurons in V1 can gauge
virtual speed through visual inputs, but to gauge run speed they must
rely on nonvisual inputs. These could include sensory input from
proprioception, and top-down inputs such as efference copy from
the motor systems. Ordinarily, virtual and run speeds are identical,
so their effects on V1 cannot be distinguished.
One way to isolate run speed from virtual speed is simply to eliminate the visual input. To do this, we turned off the monitors and
occluded all other sources of light, and thus measured responses to run
speed in the dark. Running influenced the activity of many V1 neurons even in this dark environment (39/55 well-isolated neurons were
significantly modulated, P < 0.001, sign-rank test), modulating their
activity by 50% to 200% (Supplementary Fig. 6). The dependence of
firing rate on run speed was graded, rather than a simple binary switch
between different rates in stationary and running periods (Fig. 2a–c).
Among the neurons modulated by running, most (27/39 neurons)
showed a significant (P < 0.001, sign-rank test) dependence on run
speed even when we excluded stationary periods (run speed < 1 cm s−1).
About half of these neurons (16/27 neurons) showed a band-pass tuning characteristic, responding maximally to a particular run speed
(Fig. 2b,d and Supplementary Fig. 7); in the rest, firing rate showed
either a monotonic increase with run speed (Fig. 2a; 7/27 neurons)
or a monotonic decrease with run speed (Fig. 2c, 4/27 neurons).
We obtained similar results when the monitors were turned on but
displayed a uniform gray screen (data not shown). Thus, the responses
of V1 neurons depend smoothly and in diverse ways on the speed at
which the animal runs, even in the absence of visual inputs.
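The low-pass, band-pass and high-pass categories used in Figure 2d follow directly from the cutoffs stated in its caption; the helper below is a hypothetical restatement of those criteria, not the authors' code.

```python
def classify_speed_tuning(pref_speed_cm_s, low=2.0, high=25.0):
    """Label a neuron by the peak of its fitted run-speed tuning curve.

    Cutoffs from the Figure 2 caption: a preferred speed below 2 cm/s is
    low-pass, above 25 cm/s is high-pass, and anything between is band-pass.
    """
    if pref_speed_cm_s < low:
        return 'low-pass'
    if pref_speed_cm_s > high:
        return 'high-pass'
    return 'band-pass'
```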
To understand how run speed affects V1 responses to visual stimuli,
we reconfigured the virtual reality environment into an open-loop
condition. In this condition we simply replayed movies of previous
closed-loop runs, irrespective of the animal’s current run speed.
Whereas in the closed-loop condition virtual speed and run speed
were always equal, in the open-loop condition the animal experienced the virtual environment at different combinations of virtual
and run speeds (Fig. 3a). We could investigate the influence of both
speeds because the mice did not attempt to adjust their running based
on visual inputs: there was little correlation between run speed and
virtual speed (r = 0.07 ± 0.05 (± s.e.)). Similarly, V1 neurons did not
modify their responses in the two conditions: when the two speeds
happened to match during the open-loop condition, responses of V1
neurons were similar to those measured in the closed-loop condition
Figure 3 V1 neurons are tuned for a weighted sum of run speed and virtual
speed. (a) Some paths in virtual speed and run speed taken by an animal
in the open-loop condition (representative example of the 11 experimental
sessions). (b–d) ‘Speed maps’ showing firing rate of three example neurons
(representative of 73 neurons with QRV >0.1) as a function of virtual speed
and run speed (bottom left); dependence of firing rate on run speed alone
(bottom right); and dependence of firing rate on virtual speed alone (top left).
Numbers to the right of color bars indicate neuron identity. Error bars, s.d. over
n = 10 training sets. (e) In the weighted-sum model, firing rate is a nonlinear function f
of a weighted sum of virtual speed V and run speed R. The weights α and β are summarized
by a single interaction angle θ = tan−1(α/β). (f–h) Predictions of the full-speed maps by the
model (left; compare to b–d). Color scales same as for corresponding neurons in b–d. The model’s predictive power as a function of θ (right). Optimal
interaction angle θmax is highlighted in red and indicated as a vector on the left. Dashed line represents the predictive power of the original speed map.
Color map is the same as for corresponding neurons in b–d. (i) Comparison of predictive power using the weighted sum model (Qmodel) at the optimal
interaction angle (θmax), to that using the speed map (QRV). Dashed line indicates equal performance. Blue points mark the examples in f–h. Solid dots
indicate well-isolated units. (j) Distribution of optimal interaction angles θmax across neurons. Black bars indicate the distribution for well-isolated units.
Figure 2 Tuning of V1 neurons for run speed
in the dark. (a–c) Dependence of firing rate on
run speed for three V1 neurons measured in
the dark. Error bars, s.e., n > 1,800 time bins
(16.7 ms each). Sampling bins were spaced to
have equal numbers of data points; curves are
fits of a descriptive function (Online Methods).
Arrows indicate the speed at the maximal
response and open circles the firing rates when
the animal was stationary. d31, d113 and d158
indicate the neuron identity. (d) Preferred run
speed (peak of the best-fit curve) for neurons
that showed a significant (P < 0.001, sign-rank
test) nonbinary modulation of firing rate as a function of run speed (n = 27 well-isolated neurons). Neurons where the preferred speed was <2 cm s−1
were considered low-pass (low; example in c); neurons where the preferred speed was >25 cm s−1 were considered high-pass (high; example in a), and
the remainder were considered bandpass (example in b).
(Supplementary Fig. 8). We therefore used responses from the
open-loop condition to measure the effects and interactions of
the two speeds.
In the open-loop condition, responses of V1 neurons were modulated by both run speed and virtual speed. Some neurons were
strongly influenced by virtual speed (Fig. 3c,d; 28/173 neurons with
QV > 0.1); this was expected because translation of the virtual corridor
causes strong visual motion across the retina, and visual motion is
a prime determinant of responses of V1 neurons. The responses of
many neurons were also modulated by run speed (39/173 neurons
with QR > 0.1; Fig. 3b–d). This modulation was not due to eye movements (Supplementary Fig. 9). As in the absence of visual stimuli,
responses varied smoothly with run speed. Indeed, firing rates were
better predicted by a smooth function of speed than by a binary function with one value each for the stationary and running conditions
(Qbinary < QR: P < 10−8, sign-rank test).
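The binary-versus-smooth comparison can be sketched as follows. This is a hypothetical illustration rather than the published analysis: the 1 cm s−1 stationarity threshold and the ten equal-occupancy bins are assumptions, and the paper fits a descriptive function (Online Methods) rather than binned averages.

```python
import numpy as np

def fit_binary_model(speed, rate, thresh=1.0):
    """Binary model: one mean rate for stationary bins (speed < thresh)
    and one for running bins."""
    moving = speed >= thresh
    r_stat = rate[~moving].mean() if (~moving).any() else rate.mean()
    r_run = rate[moving].mean() if moving.any() else rate.mean()
    return lambda s: np.where(np.asarray(s) >= thresh, r_run, r_stat)

def fit_smooth_model(speed, rate, n_bins=10):
    """Smooth model: piecewise-constant tuning curve over
    equal-occupancy speed bins (bins chosen by quantiles)."""
    edges = np.quantile(speed, np.linspace(0, 1, n_bins + 1))
    def bin_of(s):
        return np.clip(np.searchsorted(edges, s, side='right') - 1, 0, n_bins - 1)
    idx = bin_of(speed)
    means = np.array([rate[idx == b].mean() if (idx == b).any() else rate.mean()
                      for b in range(n_bins)])
    return lambda s: means[bin_of(s)]
```

Comparing the cross-validated variance explained by the two fitted models (Qbinary versus QR) then quantifies whether firing depends gradually on run speed or merely switches between stationary and running values.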
There was no obvious relationship, however, between tuning for
virtual speed and for run speed (Supplementary Fig. 10); rather, the
firing of most neurons depended on the two speeds in combination,
with different cells performing different combinations. To study these
combinations, for each neuron we derived a ‘speed map’, an optimally
smoothed estimate of the response at each combination of run and
virtual speeds (Fig. 3b–d). Predictions of the firing rates based on
these speed maps were superior to predictions based on either speed
alone (QRV > max(QR,QV): P < 10−11; sign-rank test). In total, 73/173
neurons were driven by some combination of virtual and run speeds
(QRV > 0.1), and we selected these neurons for further analysis.
To summarize and quantify how a neuron’s firing rate depends on
virtual speed and run speed, we adopted a simple model based on a
weighted sum of the two speeds. This model requires a single parameter θ, the ‘interaction angle’ determined by the two weights, and a
nonlinear function f that operates on the weighted sum (Fig. 3e). We
found that the modeled responses not only visually resembled the
original speed maps (Fig. 3f–h) but also provided cross-validated predictions of the spike trains that were almost as accurate as those based
on the original two-dimensional speed map (Fig. 3i, Qmap – Qmodel =
0.006 ± 0.002 (± s.e.)). Therefore, each neuron’s responses depend on
a linear combination of virtual speed and run speed.
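A minimal sketch of the weighted-sum model fit follows, under stated assumptions: the weights are parameterized as (α, β) = (cos θ, sin θ) on (V, R), so that θ = 0° weights virtual speed only and θ = 90° run speed only, matching the text; the nonlinearity f is estimated nonparametrically as a binned average of firing rate against the projected speed; and the cross-validation used in the paper is omitted for brevity.

```python
import numpy as np

def fit_weighted_sum(V, R, rate, thetas=np.linspace(0, np.pi, 37), n_bins=12):
    """Fit firing rate = f(alpha*V + beta*R) over a grid of interaction
    angles theta, with (alpha, beta) = (cos theta, sin theta).

    For each theta, f is a binned average of rate against the projected
    speed. Returns the angle (degrees) whose model explains the most
    firing-rate variance, and that variance explained.
    """
    best_theta, best_q = None, -np.inf
    for theta in thetas:
        proj = np.cos(theta) * V + np.sin(theta) * R  # weighted sum of speeds
        edges = np.quantile(proj, np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.searchsorted(edges, proj, side='right') - 1, 0, n_bins - 1)
        pred = np.empty_like(rate)
        for b in range(n_bins):
            m = idx == b
            pred[m] = rate[m].mean() if m.any() else rate.mean()
        q = 1 - np.sum((rate - pred) ** 2) / np.sum((rate - rate.mean()) ** 2)
        if q > best_q:
            best_theta, best_q = np.degrees(theta), q
    return best_theta, best_q
```

For a neuron whose rate tracks V + R, this procedure recovers θmax near 45°, the equal-weighting case that the paper reports as most common.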
To study the relative weighting that neurons give to virtual and run
speeds, we estimated for each neuron the optimal interaction angle θmax
that gave the best cross-validated prediction of its spike trains. This angle
ranges from 0° to 180° and describes the linear combination of speeds
best encoded by each neuron. The most common value of θmax, seen
in about half of the neurons (34/73 neurons), was θmax of 45° ± 22.5°,
Figure 4 V1 neuron population activity encodes positive combinations
of virtual and run speeds. (a) Activity of 24 V1 neurons during an epoch
of open-loop navigation. Firing rate of each neuron was normalized to
range from 0 to 1 for illustration purposes. (b) Prediction of the linear
combinations of run speed and virtual speed based on the activity of a
population of V1 neurons, using a linear decoder that was trained and
evaluated on separate parts of the data set. The black curve shows a
weighted average of virtual speed and run speed ( θ = 60°), and the
red curve its prediction from population activity, for the same epoch as
in a. (c) Performance of the population decoder as a function of θ, for
a single experimental session. Error bars, s.e. across n = 5 runs of crossvalidation. Dashed lines, mean performance across all interaction angles.
The circled point indicates the example shown in b. Gray, performance
when the decoding is restricted to periods when both virtual speed and
run speed were >3 cm s−1. (d) Decoding performance as a function
of θ across recording sessions. Error bars, s.e. across sessions (n = 11
(red) and n = 9 (gray)).
indicating that these neurons responded to an equal weighting of run
and virtual speeds. Among the other neurons, some were selective
for run speed only (27/73 neurons with θmax = 90° ± 22.5°; Fig. 3j)
and a few for virtual speed only (6/73 neurons with θmax = 0° ± 22.5°).
Even fewer neurons (5/73 neurons) were selective for the difference
between virtual speed and run speed (θmax = 135° ± 22.5°), which
measures the mismatch between locomotion and visual motion7.
Overall, the relative weighting of virtual speed and run speed varied
between neurons but showed a clear preference for equal weighting.
The fact that V1 neurons encode diverse combinations of run and
virtual speed suggests that a downstream area could read out from the
population of V1 neurons diverse combinations of visual and nonvisual information. To investigate these population-level consequences
of the single-neuron properties, we applied a decoding method to
the spike data, aiming to infer the combination of run speed and
virtual speed that had generated the data. We used the activity of
simultaneously recorded V1 neurons (Fig. 4a) to estimate different
linear combinations of run and virtual speeds, parameterized by the
interaction angle. For example, to test whether the V1 population
encodes a given combination of run and virtual speed (θ = 60°) we
asked whether the decoder could predict this particular combination
as a function of time (Fig. 4b), based on parameters obtained from a
separate training set. The quality of the prediction depended on the
linear combination of speeds being predicted (Fig. 4c,d). The combination that was decoded best had positive and approximately equal
weighting of the two speeds (at θ of ~45°, circular mean = 52°). By
contrast, virtual speed (θ of ~0° or ~180°) and run speed (θ of ~90°)
were decoded less precisely. The difference between the speeds (θmax
of ~135°) was predicted least precisely of all (circular-linear correlation, P < 10−3, Fig. 4d).
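The population decoding can be sketched with a ridge-regression decoder; this is an assumption for illustration (the paper's decoder and regularization are specified in the Online Methods). The target is the speed combination cos(θ)·V + sin(θ)·R, and performance is the variance explained on a held-out test set.

```python
import numpy as np

def decode_combination(pop_train, pop_test, V, R, theta_deg, ridge=1e-3):
    """Linearly decode a weighted combination of virtual (V) and run (R)
    speed, parameterized by interaction angle theta, from population rates.

    pop_train, pop_test: (time, n_neurons) firing-rate matrices; V and R
    span the concatenated train+test epochs. Returns variance explained
    on the held-out test portion.
    """
    th = np.radians(theta_deg)
    target = np.cos(th) * V + np.sin(th) * R  # combination to be decoded
    n_train = pop_train.shape[0]
    y_train, y_test = target[:n_train], target[n_train:]
    # Ridge regression with an intercept column.
    X = np.column_stack([pop_train, np.ones(n_train)])
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y_train)
    X_test = np.column_stack([pop_test, np.ones(pop_test.shape[0])])
    pred = X_test @ w
    return 1 - np.sum((y_test - pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
```

Sweeping theta_deg from 0° to 180° and plotting the returned score reproduces the logic of Figure 4c,d: if the population's interaction angles cluster near 45°, positively weighted combinations decode best and the mismatch direction (~135°) worst.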
To assess the robustness of these results, we performed various
additional tests. First, we established that these results did not overly
rely on responses measured during stationary conditions. To do this,
we restricted the decoding to epochs when both run speed and virtual
speed were >3 cm/s, and we found a similar dependence of decoding
performance on interaction angle (Fig. 4c,d: circular mean = 42°;
circular-linear correlation, P < 0.01; the reduction in decoding performance in the restricted analysis suggests that the population
encodes both smooth changes of speed and binary changes: stationary versus moving). Second, we ensured that these results did not
depend on the precise choice of decoding algorithm. We observed
the same dependence of decoding performance on interaction angle
with two alternate decoders (Supplementary Fig. 11). Finally, we
asked whether these results reflect the distribution of optimal interaction angles that we measured. We used simulations of V1 populations
with different distributions of interaction angles (Supplementary
Fig. 12). We could replicate the profile of decoding performance
present in the data only when the distribution of interaction angles
in the simulated population resembled that of the real population
(Supplementary Fig. 12c–j).
The population of V1 neurons therefore encodes positively weighted
linear combinations of run speed and virtual speed more accurately
than virtual speed or run speed alone.
Measuring activity in visual cortex in a virtual reality environment
revealed a number of interactions between visual motion and locomotion. First, we replicated the observation that running affects V1
responses6,7, even in the absence of visual stimuli, and we extended
it by showing that these responses vary smoothly and often nonmonotonically with run speed. Further, we found that V1 neurons
typically respond to combinations of run speed (gauged from locomotor and proprioceptive systems) and visual speed (gauged from
optic flow). This combination of run speed and visual speed is simply a weighted sum, with weights varying across neurons. Most neurons gave positive weights to visual and run speeds (0° ≤ θmax ≤ 90°).
Accordingly, the population of V1 neurons was most informative as to
positively weighted linear combinations of run and visual speeds.
The fact that V1 integrates locomotor and visual signals in this
manner suggests that it may be an early stage of a pathway for estimating an animal's speed through the world, which can then support functions such as navigation. However, two alternate hypotheses have also
been suggested for the presence of run speed signals in V1.
A first alternate hypothesis is that locomotion simply changes
the gain of sensory responses of V1 neurons without affecting
the selectivity of visual neurons. In support of this hypothesis are
observations that locomotion scales, but does not modify, the visual
preferences of V1 neurons6. Our data are not fully consistent with
this interpretation for multiple reasons. First, we and others7 find
responses to run speed even in the absence of visual stimuli, suggesting that locomotor signals provide a drive to V1 neurons and not
just a modulation of visual responses. Indeed, there is evidence that
locomotion alters the visual preferences of V1 neurons, particularly their preferred stimulus size8. We also found that responses to running
were different across neurons, inconsistent with modulation of an
entire visual representation by a single locomotor-dependent gain
function. Previous data suggested that the effect of locomotion was
binary6, as would be expected if running caused a discrete change
in cortical state31. However, we found that a binary model did not
predict the firing-rate responses as well as a continuous dependence on run speed. Our data therefore indicate that locomotor effects on the responses of V1 neurons go well beyond a uniform difference in
gain between running and stationary animals.
A second alternate hypothesis holds that V1 signals the mismatch
between actual visual stimuli and those that should have been encountered given the locomotion. This explanation fits the theoretical
framework of predictive coding32 and is supported by a recent report
using two-photon imaging of superficial V1 neurons7. By exploring
all combinations of run speed and visual speed, however, we found
that only a small minority of V1 neurons (5/73 neurons) were selective for mismatch. Perhaps this discrepancy results from different
selection biases in the two recording methods: whereas our silicon
probe recordings primarily recorded neurons from the deeper layers
of cortex, two-photon imaging reports only neurons from layer 2/3;
the possibility that prediction errors are specifically encoded in superficial layers has in fact been suggested by computational models33.
However, a more likely reason may be differences in stimulus design.
We avoided sudden perturbations of the visual stimulus, whereas
the previous study7 specifically focused on such sudden perturbations. Such perturbations may trigger a change in alertness and often evoked a behavioral response (slowing down) in that study7. Behavior
can evoke calcium responses in the sensory cortex34,35, making it hard
to disambiguate the influence of sensory mismatch from its behavioral consequences. Thus, the lack of sudden perturbations of the
visual stimulus in our experiments might explain the differences in
the observations.
The circuit mechanisms underlying the effects we described are
most likely the same ones that support the effects of locomotion on
spatial integration: locomotion can affect size-tuning, preferentially
enhancing responses to larger stimuli8. This finding is compatible
with our current results, but is not sufficient to predict them; for
example, the tuning for run speed that we observed here in the dark
certainly could not be predicted by changes in spatial integration.
The effects of locomotion may be caused by neuromodulators such
as norepinephrine36. Our data are not inconsistent with this possibility, although the smooth (and sometimes band-pass) modulation of
firing with running speed would require the neuromodulatory signal
to encode speed in an analog manner. Furthermore, the diverse effects of running we observed across neurons suggest that a diverse prevalence of receptors or circuit connections underlies the tuning for run speed.
In our experiments we recorded from animals that navigated a
familiar environment, in which the distance between the animal and
the virtual wall (equivalently, the gain of the virtual reality system)
was held constant. The mice had experienced at least three training
sessions in closed-loop mode before recording, which would be sufficient for the hippocampus to form a clear representation of the virtual
environment15 and presumably would be sufficient for the animal to
learn the stable mapping between movements and visual flow. In a
natural environment, however, an animal’s distance to environmental
landmarks can rapidly change. Such changes can lead to rapid alteration in visuomotor gain, accompanied by changes in neural activity at multiple levels, as demonstrated, for instance, in zebrafish37.
Furthermore, both animal behavior and neural representations can
adjust to the relative noise levels of different input streams, in a manner reminiscent of Bayes optimal inference19,38. In the case of mouse
navigation, such changes should cause a reweighting of run and visual
speeds in the estimation of an animal’s own running velocity. Such a
reweighting could occur through alteration of the V1 representation,
by changing the distribution of weights of visual and running speeds
to center around a new optimal value. Alternatively, however, such
changes could occur outside of V1. The latter possibility is supported
by the fact that the representation we observed in V1 allows readouts
of a wider range of run-visual mixtures than a simulated population in
which all neurons encoded a single interaction angle (Supplementary
Fig. 12). Additional experiments will be required to distinguish these
possibilities. We also note that, although head-fixed animals are certainly capable of navigation in virtual reality15,27,39–41, animals that
are not head-fixed gain an important cue for speed estimation from
the vestibular system. To understand how this vestibular signal affects
integration of visual and locomotor information one should record
from V1 neurons in freely moving animals42,43.
Our results suggest that the function of mouse visual cortex
may be more than just vision. A growing body of evidence suggests
that neocortical areas are not specific to a single function and that
neurons of even primary sensory cortices can respond to a wide
range of multimodal stimuli and nonsensory features2,3,5,44,45. Our
results provide a notable example of such integration and suggest an
ethological benefit it may provide to the animal. Estimation of speed
through the world is a critical function for navigation and is achieved
by integrating inputs from the visual system with locomotor variables.
Our data indicate that, at least in the mouse, this integration occurs
as early as in V1.
Methods and any associated references are available in the online
version of the paper.
Note: Any Supplementary Information and Source Data files are available in the
online version of the paper.
We thank M. Schölvinck for help with pilot experiments, L. Muessig and
F. Cacucci for sharing open-field behavioral data, B. Haider, D. Schulz and other
members of the laboratory for helpful discussions, J. O’Keefe, N. Burgess and
K.D. Longden for comments on the project. This work was supported by the
Medical Research Council and by the European Research Council. M.C. and
K.D.H. are jointly funded by the Wellcome Trust. K.J.J. is supported by the
Biotechnology and Biological Sciences Research Council. M.C. is supported
as a GlaxoSmithKline/Fight for Sight Chair in Visual Neuroscience.
AUTHOR CONTRIBUTIONS
All the authors contributed to the design of the study and to the interpretation of
the data. A.B.S. and A.A. carried out the experiments, A.B.S. analyzed the data,
and A.B.S., M.C. and K.D.H. wrote the paper.
COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.
Reprints and permissions information is available online at http://www.nature.com/
1. Carandini, M. et al. Do we know what the early visual system does? J. Neurosci.
25, 10577–10597 (2005).
2. Ghazanfar, A.A. & Schroeder, C.E. Is neocortex essentially multisensory?
Trends Cogn. Sci. 10, 278–285 (2006).
3. Iurilli, G. et al. Sound-driven synaptic inhibition in primary visual cortex.
Neuron 73, 814–828 (2012).
4. McAdams, C.J. & Reid, R.C. Attention modulates the responses of simple cells in
monkey primary visual cortex. J. Neurosci. 25, 11023–11033 (2005).
5. Shuler, M.G. & Bear, M.F. Reward timing in the primary visual cortex. Science 311,
1606–1609 (2006).
6. Niell, C.M. & Stryker, M.P. Modulation of visual responses by behavioral state in
mouse visual cortex. Neuron 65, 472–479 (2010).
7. Keller, G.B., Bonhoeffer, T. & Hubener, M. Sensorimotor mismatch signals in primary
visual cortex of the behaving mouse. Neuron 74, 809–815 (2012).
8. Ayaz, A., Saleem, A.B., Schölvinck, M.L. & Carandini, M. Locomotion controls spatial integration in mouse visual cortex. Curr. Biol. 23, 890–894 (2013).
9. Jeffery, K.J. Self-localization and the entorhinal-hippocampal system. Curr. Opin. Neurobiol. 17, 684–691 (2007).
10. Terrazas, A. et al. Self-motion and the hippocampal spatial metric. J. Neurosci. 25, 8085–8096 (2005).
11. Israël, I., Grasso, R., Georges-François, P., Tsuzuku, T. & Berthoz, A. Spatial memory and path integration studied by self-driven passive linear displacement. I. Basic properties. J. Neurophysiol. 77, 3180–3192 (1997).
12. Lappe, M., Bremmer, F. & van den Berg, A.V. Perception of self-motion from visual flow. Trends Cogn. Sci. 3, 329–336 (1999).
13. DeAngelis, G.C. & Angelaki, D.E. Visual-vestibular integration for self-motion perception. in The Neural Bases of Multisensory Processes, Frontiers in Neuroscience (eds. Murray, M.M. & Wallace, M.T.) (2012).
14. Duffy, C.J. & Wurtz, R.H. Sensitivity of MST neurons to optic flow stimuli. I. A continuum of response selectivity to large-field stimuli. J. Neurophysiol. 65, 1329–1345 (1991).
15. Chen, G., King, J.A., Burgess, N. & O'Keefe, J. How vision and movement combine in the hippocampal place code. Proc. Natl. Acad. Sci. USA 110, 378–383 (2013).
16. Sargolini, F. et al. Conjunctive representation of position, direction, and velocity in entorhinal cortex. Science 312, 758–762 (2006).
17. Geisler, C., Robbe, D., Zugaro, M., Sirota, A. & Buzsáki, G. Hippocampal place cell assemblies are speed-controlled oscillators. Proc. Natl. Acad. Sci. USA 104, 8149–8154 (2007).
18. Angelaki, D.E., Gu, Y. & DeAngelis, G.C. Visual and vestibular cue integration for heading perception in extrastriate visual cortex. J. Physiol. (Lond.) 589, 825–833 (2011).
19. Ernst, M.O. & Banks, M.S. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415, 429–433 (2002).
20. Wolpert, D.M., Ghahramani, Z. & Jordan, M.I. An internal model for sensorimotor integration. Science 269, 1880–1882 (1995).
21. Zipser, D. & Andersen, R.A. A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature 331, 679–684 (1988).
22. Rigotti, M. et al. The importance of mixed selectivity in complex cognitive tasks. Nature 497, 585–590 (2013).
23. Angelaki, D.E., Gu, Y. & DeAngelis, G.C. Multisensory integration: psychophysics, neurophysiology, and computation. Curr. Opin. Neurobiol. 19, 452–458 (2009).
24. Pouget, A., Deneve, S. & Duhamel, J.R. A computational perspective on the neural basis of multisensory spatial representations. Nat. Rev. Neurosci. 3, 741–747 (2002).
25. Gu, Y. et al. Perceptual learning reduces interneuronal correlations in macaque visual cortex. Neuron 71, 750–761 (2011).
26. Dombeck, D.A., Khabbaz, A.N., Collman, F., Adelman, T.L. & Tank, D.W. Imaging large-scale neural activity with cellular resolution in awake, mobile mice. Neuron 56, 43–57 (2007).
27. Harvey, C.D., Collman, F., Dombeck, D.A. & Tank, D.W. Intracellular dynamics of hippocampal place cells during virtual navigation. Nature 461, 941–946 (2009).
28. Harris, K.D., Henze, D.A., Csicsvari, J., Hirase, H. & Buzsáki, G. Accuracy of tetrode spike separation as determined by simultaneous intracellular and extracellular measurements. J. Neurophysiol. 84, 401–414 (2000).
29. Harris, K.D., Csicsvari, J., Hirase, H., Dragoi, G. & Buzsáki, G. Organization of cell assemblies in the hippocampus. Nature 424, 552–556 (2003).
30. Itskov, V., Curto, C. & Harris, K.D. Valuations for spike train prediction. Neural Comput. 20, 644–667 (2008).
31. Harris, K.D. & Thiele, A. Cortical state and attention. Nat. Rev. Neurosci. 12, 509–523 (2011).
32. Rao, R.P. & Ballard, D.H. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 2, 79–87 (1999).
33. Bastos, A.M. et al. Canonical microcircuits for predictive coding. Neuron 76, 695–711 (2012).
34. Xu, N.L. et al. Nonlinear dendritic integration of sensory and motor input during an active sensing task. Nature 492, 247–251 (2012).
35. Murayama, M. & Larkum, M.E. Enhanced dendritic activity in awake rats. Proc. Natl. Acad. Sci. USA 106, 20482–20486 (2009).
36. Polack, P.O., Friedman, J. & Golshani, P. Cellular mechanisms of brain state-dependent gain modulation in visual cortex. Nat. Neurosci. (2013).
37. Ahrens, M.B. et al. Brain-wide neuronal dynamics during motor adaptation in zebrafish. Nature 485, 471–477 (2012).
38. Deneve, S. & Pouget, A. Bayesian multisensory integration and cross-modal spatial links. J. Physiol. Paris 98, 249–258 (2004).
39. Harvey, C.D., Coen, P. & Tank, D.W. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature 484, 62–68 (2012).
40. Domnisoru, C., Kinkhabwala, A.A. & Tank, D.W. Membrane potential dynamics of grid cells. Nature 495, 199–204 (2013).
41. Schmidt-Hieber, C. & Häusser, M. Cellular mechanisms of spatial navigation in the medial entorhinal cortex. Nat. Neurosci. 16, 325–331 (2013).
42. Szuts, T.A. et al. A wireless multi-channel neural amplifier for freely moving animals. Nat. Neurosci. 14, 263–269 (2011).
43. Wallace, D.J. et al. Rats maintain an overhead binocular field at the expense of constant fusion. Nature 498, 65–69 (2013).
44. Bizley, J.K., Nodal, F.R., Bajo, V.M., Nelken, I. & King, A.J. Physiological and anatomical evidence for multisensory interactions in auditory cortex. Cereb. Cortex 17, 2172–2189 (2007).
45. Crochet, S. & Petersen, C.C. Correlating whisker behavior with membrane potential in barrel cortex of awake mice. Nat. Neurosci. 9, 608–610 (2006).
ONLINE METHODS
Approval for experiments with animals. Experiments were conducted according
to the UK Animals (Scientific Procedures) Act, 1986 under personal and project
licenses issued by the Home Office following ethical review.
Surgery and training. Five wild-type mice (C57BL/6, 4–9-week-old males;
20–26 g) were chronically implanted with a custom-built head post and recording chamber (4 mm inner diameter) under isoflurane anesthesia. No statistical
methods were used to predetermine group sizes; the sample sizes we chose are
similar to those used in previous studies. We did not require blinding and randomization as only wild-type mice were used. On subsequent days, implanted
mice were acclimatized to run in the virtual environment in 20–30-min sessions
(4–12 sessions), until they freely ran 20 traversals of the environment in 6 min.
One day before the first recording session, animals were anesthetized under
isoflurane and a ~1 mm craniotomy was performed over area V1 (centered at
2.5 mm lateral from midline and 0.5 mm anterior from lambda). The chamber
was then sealed using silicone elastomer (Kwik-Cast).
Recordings. To record from V1 we inserted 16-channel linear multisite probes
(with sites of area 312 or 430 µm², spaced 50 µm apart; NeuroNexus Tech.) spanning depths of 100–850 µm. Recordings were filtered (0.3–5 kHz), and threshold
crossings were automatically clustered using KlustaKwik46, followed by manual adjustment
using Klusters47. A total of 194 units were isolated, of which 123 were in deep
layers (channel >10, numbered from surface to deep) and 11 in superficial layers
(channel <6; we found layer 4 to be located around channels 6–10 based on a
current source density analysis). Of the isolated units, 46 were judged to be well isolated
(isolation distance48 >20). All examples shown in this paper are well-isolated
units. The analysis included all units (except in the dark condition), as restricting it to well-isolated units did not affect the results.
There was no correlation between spike isolation quality and QR (ρ = 0.06) or
QPS (ρ = 0.07). The firing rate of each unit was calculated by smoothing its spike
train using a 150-ms Gaussian window. All the stimuli and spiking activity were
then sampled at 60 Hz, the refresh rate of the monitors.
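The smoothing step described above can be sketched as follows. This is our own minimal illustration (function name, spike times and durations are made up): bin a spike train at the 60-Hz sample rate and convolve it with a 150-ms Gaussian kernel normalized to unit area so that units stay in spikes/s.

```python
import numpy as np

def firing_rate(spike_times, duration, fs=60.0, sigma_s=0.150):
    """Smoothed firing rate (spikes/s), sampled at fs Hz."""
    n_bins = int(np.ceil(duration * fs))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, n_bins / fs))
    sigma_bins = sigma_s * fs                  # 150 ms -> 9 samples at 60 Hz
    half = int(np.ceil(4 * sigma_bins))
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma_bins) ** 2)
    kernel /= kernel.sum()                     # unit area: spike count is preserved
    return np.convolve(counts, kernel, mode='same') * fs

# Four spikes in a 3-s recording; the result has 180 samples (3 s at 60 Hz)
rate = firing_rate(np.array([0.5, 0.51, 0.52, 2.0]), duration=3.0)
```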
Virtual environment. Visual scenes of a virtual environment were presented
on three monitors (19-inch LCD, HA191, Hanns.G, mean luminance 50 cd/m2,
35 cm away from the eye) that covered a visual angle of 170° azimuth and 45°
elevation. Mice explored this environment by walking over an air-suspended
spherical treadmill49 (Fig. 1a).
The environment was simulated in Matlab using the Psychophysics toolbox50,51. The virtual environment was a corridor (120 cm × 8 cm × 8 cm)
whose walls, ceiling and floor were adorned all along the corridor with
filtered white-noise patterns of full Michelson contrast (overall root mean squared
(RMS) contrast: 0.14). Four positions in the corridor had prominent patterns as landmarks: gratings (oriented vertically or horizontally, of full Michelson
contrast, RMS contrast: 0.35) in three positions and a plaid (overlapping
half-contrast horizontal and vertical gratings, RMS contrast: 0.35) in the fourth
(Fig. 1a,b). Movement in the virtual reality was constrained to one-dimensional
translation along the length of the room (the other two degrees of freedom
were ignored). All speeds <1 cm/s were combined into a single bin unless
otherwise specified. The gratings had a spatial wavelength of 1 cm on the wall,
which is equivalent to a spatial frequency of 0.09 cycles/° at a visual angle of
45° azimuth and 0° elevation. The white noise pattern was low-pass Gaussian
filtered with a cutoff frequency of 0.5 cycles/° at 45° azimuth. Owing to the
three-dimensional nature of the stimulus, the spatial frequency (in cycles/°) and
visual speed (in °/s) presented are a function of the visual angle. Therefore, the
speed of the visual environment is defined in terms of the speed of movement
through the virtual-reality environment, the virtual reality speed (virtual speed
in cm/s). In the closed-loop condition, this is the speed matched to what the
animal would see if it were running in a real environment of the same dimensions. For reference, at a visual angle of 60° azimuth and 0° elevation, a virtual speed of 1 cm/s corresponds to a visual speed of 9.6°/s, 10 cm/s to 96°/s
and 30 cm/s to 288°/s. The running speed of the animal was calculated from
the forward movement of the air-suspended ball, as captured by
the optical mice49.
Mice first ran the closed-loop condition (>20 runs through the corridor),
followed by two sessions of the open-loop condition. On reaching the end of the
corridor on each run, the animals were returned (virtually) to the start, after a
3-s period during which no visual stimuli were presented (gray screen). In the
open-loop condition, movies generated in closed-loop were simply played back,
regardless of the animal's run speed. For three animals (6 sessions), the closed-loop condition was repeated after the open-loop sessions. After the measurements
in virtual reality, we mapped receptive fields using traditional bar and grating
stimuli. Each animal was taken through 1–3 such recording sessions.
Response function. The response of each neuron (shown in Figs. 1 and 3) was
calculated as a function of the variables of the virtual environment and their various combinations using a local smoothing method previously used to compute
hippocampal place fields52–54. For example, a neuron’s firing rate y(t), at time t,
was modeled as a function χa(a(t)) over the variable a. To estimate the model χa,
the variable a was first smoothed in time (150 ms Gaussian) and discretized in
n bins to take values a1, a2,…,an (the number of bins n was taken as 150 for
position, 30 for speeds; the precise bin numbers were not important as response
functions were smoothed). We then calculated the spike-count map S and occupancy map Φ. Each point of the spike-count map was the total number of spikes
when a(t) had a value of ai: Si = Σ_{t: a(t) = ai} y(t), where i is the index of bins
of variable a. The occupancy map was the total time spent when the variable
a had a value of ai: Φi = Σ_{t: a(t) = ai} Δt, where Δt was the size of each time bin
(∆t = 16.67 ms). Both S and Φ maps were smoothed by convolving them with
a common Gaussian window whose width σ (ranging from 1 bin to the total
number of bins) was optimized to maximize the cross-validated prediction quality
(see below). The stationary (run speed ≤ 1 cm/s) or static (virtual speed ≤ 1 cm/s)
bins were not included in the smoothing process. The firing rate model was then
calculated as the ratio of the smoothed spike count and occupancy maps:
χa = (S ∗ ηa(0, σ)) / (Φ ∗ ηa(0, σ))

where ηa(0, σ) is a Gaussian of width σ and the operator '∗' denotes convolution. Based on the model, fit over the training data, we predicted the firing rate over
the test data as

ya(t) = χa(a(t))
where ya(t) is the prediction of firing rate based on variable a. A similar procedure was followed for the two-dimensional ‘speed maps’, where two independent
Gaussians were used to smooth across each variable.
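A minimal sketch of this map-based estimate (our own code; the bin count and smoothing width here are placeholders, whereas in the paper σ is chosen by cross-validation): bin the variable, accumulate the spike-count map S and occupancy map Φ, smooth both with the same Gaussian, and divide.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def response_function(a, y, dt, n_bins=30, sigma=2.0):
    """Estimate firing rate (spikes/s) as a function of variable a."""
    edges = np.linspace(a.min(), a.max(), n_bins + 1)
    idx = np.clip(np.digitize(a, edges) - 1, 0, n_bins - 1)
    S = np.bincount(idx, weights=y, minlength=n_bins)     # spike-count map
    occ = np.bincount(idx, minlength=n_bins) * dt         # occupancy map (s)
    S_s = gaussian_filter1d(S, sigma)                     # smooth both maps
    occ_s = gaussian_filter1d(occ, sigma)                 # with one Gaussian
    return edges, S_s / np.maximum(occ_s, 1e-12)

# Simulated neuron whose rate grows linearly with a (~1 spike/s per unit of a)
rng = np.random.default_rng(0)
a = rng.uniform(0, 30, 20000)
spikes = rng.poisson(a / 60.0)            # counts per 1/60-s time bin
edges, chi = response_function(a, spikes, dt=1 / 60.0)
```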
Prediction quality. Response functions and models were fit based on 80% of the
data and the performance of each model was tested on the remaining 20%
(cross-validation). We calculated the prediction quality Qa of model χa, based
on variable a, as the fraction of variance explained:
Qa = 1 − Σt (y(t) − ya(t))² / Σt (y(t) − μ)²

where y(t) is the smoothed firing rate (150-ms Gaussian window) of the neuron
at time t, ya(t) is the prediction by model χa for the same time bin and μ is the
mean firing rate of the training data. A value of Q close to 0 indicates that the model
predicts the firing rate of the neuron no better than a constant,
whereas a higher Q indicates better performance. Values of Q close to 1 are unlikely, as
the intrinsic variability in the response of neurons, which is uncorrelated with the
stimulus, is not captured by any of the models. Very low values of Q suggest that
the response is unreliable. We therefore restricted further analysis to neurons with
reliable responses, setting a limit of Q > 0.1 (Fig. 1g:
110/194 neurons with QPS > 0.1 and Fig. 2j: 73/194 neurons with Qmap > 0.1).
To compare Q with the more commonly used metric, the explained variance
of the mean response, we calculated both Q and explained variance on the direction tuning of
neurons. We found that neurons with a direction-tuning model of Q > 0.1
had an explained variance in the range of 0.75–0.97 (two examples are shown in
Supplementary Fig. 3). To test the alternative hypothesis of a binary model of
run speed, we only considered two bins, which were whether the animal ran
(speed > 1 cm/s) or not. We used the mean firing rate of the training data in these
bins to predict the firing rate of the test data and calculated Qbinary.
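Written out directly, the prediction-quality measure is one minus the ratio of residual to total variance, with the mean taken from the training set (variable names below are ours):

```python
import numpy as np

def prediction_quality(y_test, y_pred, train_mean):
    """Cross-validated fraction of variance explained (Q)."""
    resid = np.sum((y_test - y_pred) ** 2)
    total = np.sum((y_test - train_mean) ** 2)
    return 1.0 - resid / total

y = np.array([1.0, 2.0, 3.0, 4.0])
q_perfect = prediction_quality(y, y, y.mean())                       # Q = 1
q_constant = prediction_quality(y, np.full(4, y.mean()), y.mean())   # Q = 0
```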
Responses in darkness. We trained a separate set of animals (n = 3 mice) with an
additional 5–10-min condition of complete darkness. The darkness was achieved
by turning off all non-essential lights and equipment in the room. Light from
essential equipment outside the recording area (for example, recording amplifiers) was covered with a red filter (787 Marius Red; LEE filters), rendering any
stray light invisible to mice. As a result, the recording area was completely dark
both for mice and for humans (luminance <10−2 cd/m2, i.e., below the limit of the
light meter). We used 32-channel, 4-shank multisite probes (spaced 200 µm apart)
to record one session on each animal; each shank had two sets of tetrodes spaced
150 µm apart (each electrode had recording sites of 121 µm2, Neuronexus). We
recorded a total of 145 units, of which 55 were well isolated (isolation distance48 > 20). We only considered the well-isolated units for analysis in the dark
condition. Similar results were obtained when considering all units (data not
shown). The dark condition during the three recording sessions lasted 8 min,
9 min and 13 min. In this condition, speed was defined as the amplitude of the
two-dimensional velocity vector.
Responses in the dark condition were calculated by discretizing the run speed
such that each speed bin contained at least 7% of the dark condition (>30 s). In
cases where the animal was stationary for long periods of time (>7%), the stationary speed (≤1 cm/s) bin had more data points. We calculated the mean and error
of the firing rate in each of the speed bins (Fig. 2). The speed in any bin was the
mean speed during the time spent at that speed bin. To assess statistical significance of modulation by run speed, for each neuron we recalculated the firing rate
as a function of speed in each bin after shuffling the spike times. As a conservative
estimate, we considered a neuron’s response to be significantly modulated by run
speed if the variance of its responses was greater than 99.9% of the variance of
its shuffled responses (P < 0.001). To test whether the neuron's response was
significantly nonbinary, we followed the same procedure as above, but restricting
the test to only periods when run speed was >1 cm/s.
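The shuffle test can be sketched as follows. This is our own simplified version, under stated assumptions: we use circular shifts of the spike counts as the shuffle and simulated data, and compare the variance of the binned tuning curve against its shuffled null distribution.

```python
import numpy as np

def tuning_curve_variance(bin_idx, spikes, n_bins):
    """Variance across speed bins of the mean spike count per bin."""
    counts = np.bincount(bin_idx, weights=spikes, minlength=n_bins)
    occ = np.bincount(bin_idx, minlength=n_bins).astype(float)
    return np.var(counts / np.maximum(occ, 1.0))

def speed_modulation_p(bin_idx, spikes, n_bins, n_shuffles=200, seed=0):
    """P value: fraction of shuffles whose tuning variance beats the data."""
    rng = np.random.default_rng(seed)
    observed = tuning_curve_variance(bin_idx, spikes, n_bins)
    null = [tuning_curve_variance(
                bin_idx, np.roll(spikes, rng.integers(1, len(spikes))), n_bins)
            for _ in range(n_shuffles)]
    return np.mean(np.array(null) >= observed)

# Simulated neuron whose rate increases with the speed bin
rng = np.random.default_rng(3)
bin_idx = rng.integers(0, 10, 6000)          # 10 speed bins
spikes = rng.poisson(0.1 * (bin_idx + 1))    # stronger firing at higher speeds
p = speed_modulation_p(bin_idx, spikes, n_bins=10)
```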
To characterize the run-speed responses, we fit the mean responses to speed s
(s > 1 cm/s) by the following descriptive function55,56:
y(s) = ymax exp(−(s − smax)² / σ(s))

where σ(s) equals σ− for s < smax and σ+ for s > smax, and ymax, smax, σ− and
σ+ were the free parameters. We fit three curves by adding constraints on
smax: (i) a monotonically increasing function, obtained by constraining smax to
be ≥ 30 cm/s; (ii) a monotonically decreasing function, obtained by
constraining smax to be ≤ 1 cm/s; and (iii) a bandpass curve, obtained by leaving
smax unconstrained. These three curves were fit on 80% of the data, and we tested the fraction
of explained variance of the firing rate on the remaining 20%. We considered
a neuron bandpass only if the variance explained by the bandpass curve
exceeded that of both the monotonically increasing and decreasing curves and smax
was > 2 cm/s and < 25 cm/s.
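The descriptive function, transcribed directly (our own code; the typeset equation leaves it ambiguous whether σ(s) enters squared, so we keep it in the denominator as printed, which gives σ units of speed²):

```python
import numpy as np

def speed_tuning(s, y_max, s_max, sigma_minus, sigma_plus):
    """Gaussian-like tuning with separate widths below and above the peak."""
    sigma = np.where(s < s_max, sigma_minus, sigma_plus)
    return y_max * np.exp(-((s - s_max) ** 2) / sigma)

s = np.array([3.0, 5.0, 7.0])
y = speed_tuning(s, y_max=10.0, s_max=5.0, sigma_minus=4.0, sigma_plus=16.0)
# y peaks at s_max and falls off more slowly above the peak than below it
```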
Weighted sum model. The firing rate of a neuron i at time t, yi(t) was modeled as
yi(t) = f(α V(t) + β R(t))
where α = sin(θ), β = cos(θ), V(t) is the virtual speed and R(t) is the run speed
of the animal at time t. The function f was estimated using a one-dimensional
version of the same binning and smoothing procedure described above for
estimating response functions. For each cell, the model was fitted for a range
of integration angles θ from 0° to 180° in 16 steps. The optimal integration
angle θmax was chosen as the value of θ giving the highest cross-validated
prediction quality.
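The fitting loop can be sketched as follows (our own simplified code: a fixed 80/20 split, a binned 1-D response-function estimate in place of the paper's cross-validated smoother, and simulated data with a known angle):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def fit_1d(x, y, n_bins=30, sigma=2.0):
    """Binned, Gaussian-smoothed estimate of E[y | x]; returns a predictor."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    S = gaussian_filter1d(np.bincount(idx, weights=y, minlength=n_bins), sigma)
    N = gaussian_filter1d(np.bincount(idx, minlength=n_bins).astype(float), sigma)
    f = S / np.maximum(N, 1e-12)
    return lambda xq: f[np.clip(np.digitize(xq, edges) - 1, 0, n_bins - 1)]

def best_integration_angle(V, R, y, n_angles=16):
    """Sweep theta, fit f on 80% of the data, keep the best held-out Q."""
    split = int(0.8 * len(y))
    best_q, best_theta = -np.inf, None
    for theta in np.linspace(0, np.pi, n_angles):
        s = np.sin(theta) * V + np.cos(theta) * R      # projected speed
        pred = fit_1d(s[:split], y[:split])(s[split:])
        q = 1 - np.sum((y[split:] - pred) ** 2) / np.sum(
            (y[split:] - y[:split].mean()) ** 2)
        if q > best_q:
            best_q, best_theta = q, theta
    return best_q, best_theta

# Simulated cell summing the speeds with weights (0.6, 0.8): theta ~ 37 deg
rng = np.random.default_rng(1)
V, R = rng.uniform(0, 30, 5000), rng.uniform(0, 30, 5000)
y = 0.6 * V + 0.8 * R + rng.normal(0, 1, 5000)
q, theta = best_integration_angle(V, R, y)
```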
Population decoding. The smoothed (0.9 s Gaussian window) firing rate vector
of all simultaneously recorded neurons was used to predict a linear combination
of virtual speed and run speed (sin(θ) V(t) + cos(θ) R(t)) for a range of interaction
angles θ from 0° to 180° in eight steps. Because our aim was to test the decoding
of speeds relevant to navigation, we either considered all speeds <3 cm/s to be stationary
(Fig. 4b–d) or ignored all times when either run speed or visual speed was
<3 cm/s (Fig. 4c,d; we only considered sessions (9/11) where >100 s fulfilled
this criterion). Reducing the limit to <1 cm/s or 0.03 cm/s did not affect the
trend in the decoding performance (Supplementary Figs. 11 and 12). We used
a linear decoder (ridge regression, with the ridge parameter optimized by cross-validation) to evaluate how well an observer could decode a combination of
speeds given the firing rate of the population. The performance of the decoder
was tested as the fraction of variance explained on an independent 20% of the
data that was not used to train the decoder.
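A toy version of this decoding step (our own code and simulated data; a library ridge solver would do the same job, but a closed-form solve keeps the sketch self-contained, and here the ridge parameter is fixed rather than cross-validated):

```python
import numpy as np

def ridge_fit(X, z, lam=1.0):
    """Closed-form ridge regression with an unpenalized bias term."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    I = np.eye(Xb.shape[1])
    I[-1, -1] = 0.0                            # do not penalize the bias
    return np.linalg.solve(Xb.T @ Xb + lam * I, Xb.T @ z)

def decode_quality(X_tr, z_tr, X_te, z_te, lam=1.0):
    """Fraction of variance of the target explained on held-out data."""
    w = ridge_fit(X_tr, z_tr, lam)
    pred = np.hstack([X_te, np.ones((len(X_te), 1))]) @ w
    return 1 - np.sum((z_te - pred) ** 2) / np.sum((z_te - z_te.mean()) ** 2)

# Simulated population of 40 neurons linearly related to a speed combination
rng = np.random.default_rng(2)
rates = rng.normal(0, 1, (4000, 40))
target = rates @ rng.normal(0, 1, 40) + rng.normal(0, 0.5, 4000)
q = decode_quality(rates[:3200], target[:3200], rates[3200:], target[3200:])
```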
46. Harris, K.D., Henze, D.A., Csicsvari, J., Hirase, H. & Buzsáki, G. Accuracy of tetrode spike separation as determined by simultaneous intracellular and extracellular measurements. J. Neurophysiol. 84, 401–414 (2000).
47. Hazan, L., Zugaro, M. & Buzsáki, G. Klusters, NeuroScope, NDManager: a free software suite for neurophysiological data processing and visualization. J. Neurosci. Methods 155, 207–216 (2006).
48. Schmitzer-Torbert, N., Jackson, J., Henze, D., Harris, K. & Redish, A.D. Quantitative measures of cluster quality for use in extracellular recordings. Neuroscience 131, 1–11 (2005).
49. Dombeck, D.A., Khabbaz, A.N., Collman, F., Adelman, T.L. & Tank, D.W. Imaging large-scale neural activity with cellular resolution in awake, mobile mice. Neuron 56, 43–57 (2007).
50. Brainard, D.H. The Psychophysics Toolbox. Spat. Vis. 10, 433–436 (1997).
51. Pelli, D.G. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat. Vis. 10, 437–442 (1997).
52. Harris, K.D. et al. Spike train dynamics predicts theta-related phase precession in hippocampal pyramidal cells. Nature 417, 738–741 (2002).
53. Harris, K.D., Csicsvari, J., Hirase, H., Dragoi, G. & Buzsáki, G. Organization of cell assemblies in the hippocampus. Nature 424, 552–556 (2003).
54. Loader, C. Local Regression and Likelihood (Springer, New York, 1999).
55. Freeman, T.C., Durand, S., Kiper, D.C. & Carandini, M. Suppression without inhibition in visual cortex. Neuron 35, 759–771 (2002).
56. Webb, B.S., Dhruv, N.T., Solomon, S.G., Tailby, C. & Lennie, P. Early and late mechanisms of surround suppression in striate cortex of macaque. J. Neurosci. 25, 11666–11675 (2005).