Modeling prediction and pattern recognition in the
early visual and olfactory systems
BERNHARD A. KAPLAN
Doctoral Thesis
Stockholm, Sweden 2015
TRITA-CSC-A-2015:10
ISSN 1653-5723
ISRN KTH/CSC/A-15/10-SE
ISBN 978-91-7595-532-2

KTH School of Computer Science and Communication
SE-100 44 Stockholm
SWEDEN
Academic dissertation which, with the permission of Kungl Tekniska högskolan, is presented for public examination for the Doctoral Thesis in Computer Science on Wednesday, 27 May 2015, at 10:00 in F3, Lindstedtsvägen 26, Kungl Tekniska högskolan, Stockholm.
© Bernhard A. Kaplan, April 2015
Printed by: Universitetsservice US AB
Abstract
Our senses are our mind’s window to the outside world and determine how we perceive our environment. Sensory systems are complex multi-level systems that have to solve a multitude of tasks to allow us to understand our surroundings. However, open questions remain on many levels and scales, ranging from low-level neural responses to high-level behavioral functions. Modeling can connect different scales and contribute towards answering these questions by giving insights into perceptual processes and the interactions between processing stages.
In this thesis, numerical simulations of spiking neural networks are used to address two essential functions that sensory systems have to solve: pattern recognition and prediction. The focus of this thesis lies on the question of how neural network connectivity can be used to achieve these crucial functions. The guiding ideas of the models presented here are grounded in the probabilistic interpretation of neural signals, Hebbian learning principles and connectionist ideas. The main results are divided into four parts.
The first part deals with the problem of pattern recognition in a
multi-layer network inspired by the early mammalian olfactory system with biophysically detailed neural components. Learning based
on Hebbian-Bayesian principles is used to organize the connectivity between and within areas and is demonstrated in behaviorally relevant
tasks. Besides recognition of artificial odor patterns, phenomena like
concentration invariance, noise robustness, pattern completion and pattern rivalry are investigated. It is demonstrated that learned recurrent
cortical connections play a crucial role in achieving pattern recognition
and completion.
The second part is concerned with the prediction of moving stimuli
in the visual system. The problem of motion-extrapolation is studied using different recurrent connectivity patterns. The main result
shows that connectivity patterns taking the tuning properties of cells
into account can be advantageous for solving the motion-extrapolation
problem.
The third part focuses on the predictive or anticipatory response to an approaching stimulus. Inspired by experimental observations, particle filtering and spiking neural network frameworks are used to address the question of how stimulus information is transported within a motion-sensitive network. In particular, the question of whether speed information is required to build up a trajectory-dependent anticipatory response is studied by comparing different network connectivities. Our results
suggest that in order to achieve a dependency of the anticipatory response on the trajectory length, a connectivity that uses both position
and speed information seems necessary.
The fourth part combines the self-organization ideas from the first part with motion perception as studied in the second and third parts. There, the learning principles used in the olfactory system model are applied to the problem of motion anticipation in visual perception. Similarly to the third part, different connectivities are studied with respect to their contribution to anticipating an approaching stimulus.
The contribution of this thesis lies in the development and simulation of large-scale computational models of spiking neural networks solving prediction and pattern recognition tasks in biophysically plausible frameworks.
Sammanfattning
Our senses are our mind’s window to the outside world and influence how we perceive our world. The sensory systems are complex multi-level systems that must solve a variety of tasks for us to be able to understand our surroundings. However, questions on various levels and scales remain to be answered, ranging from how neurons respond to stimuli at a low level to questions about behavior at the highest level. With modeling, problems on different scales can be connected and thereby answered, by providing insights into perceptual processes and the interactions between these processes.
In this thesis, numerical simulations of neural networks have been used to investigate two important functions that these sensory systems have: pattern recognition and prediction. The focus of this thesis lies on the question of how connectivity patterns in neural networks can be used to achieve these important functions. The guiding ideas on which the models are based are grounded in the statistical interpretation of neural signals, learning principles inspired by Hebb, and connectionist ideas. The main results are divided into four parts.
The first part deals with the problem of pattern recognition in a multi-layer network inspired by the low-level mammalian olfactory system, with biophysically detailed neural components. Learning based on Hebbian-Bayesian principles is used to organize connections between and within networks and is demonstrated in behaviorally relevant tasks. In addition to recognition of artificial odors, phenomena such as concentration invariance, robustness to noise, pattern completion and pattern rivalry were investigated. We show here that learned recurrent cortical connections between neurons play a crucial role in achieving pattern recognition and completion.
The second part deals with the prediction of the motion of objects in the visual system. The problem of motion extrapolation is studied using different recurrent connectivity patterns between neurons. The main results show that connectivity patterns that take neurons’ specific stimulus responses into account have an advantage in solving the motion extrapolation problem.
The third part focuses on predictive or anticipatory responses to an approaching stimulus. Inspired by experimental observations, two frameworks (particle filtering and spiking neural networks) are used to address the question of how stimulus information is transported within a motion-sensitive network. We examined whether speed information is required to build up a prediction of the trajectory by comparing different network connectivity patterns. Our results indicate that a connectivity pattern that takes information about both speed and position into account is necessary for the length of the trajectory to be reflected in the network’s response.
The fourth part combines the ideas about learning from the first part with the ideas about motion perception studied in the second and third parts. As in the third part, different network connectivity patterns are studied with respect to their ability to anticipate an approaching motion pattern.
The contribution of this thesis is the development and simulation of large-scale computational models of spiking neural networks used for prediction and pattern recognition tasks within biophysically plausible frameworks.
Acknowledgements
First, I would like to express my sincere gratitude to my supervisor, Anders Lansner, for being a great, enthusiastic mentor guiding my research at
KTH to a successful end, for always being available for discussions, helpful
and patient, providing support and encouragement, all the helpful feedback
(even on short notice) on manuscripts, which always improved the quality
significantly, for understanding my urge to travel and giving me the necessary
freedom to explore different paths (both scientifically and geographically). I
also would like to express my honest gratitude to my second supervisor, Örjan
Ekeberg, for providing assistance and support in all aspects, the critical, but
very constructive and useful input during meetings and discussions. I would
also like to thank Erik Fransén for always being helpful and understanding,
giving valuable support and encouraging feedback whenever needed.
I would also like to thank my opponent, Professor Fred Hamker, for his
feedback and for generously offering his time, and the committee members Marja-Leena Linne, Johan Lundström and Giampiero Salvi for their commitment by
agreeing to serve on the review committee.
My research was made possible through generous KTH support and EU
funding: the FACETS-ITN project (grant number 237955) and the BrainScaleS project (grant number FP7-269921). Additional financial support
was received from IBRO, INCF and the Stockholm Brain Institute for travel
grants.
Furthermore, I’m taking this opportunity to express my sincere gratitude
to Laurent Perrinet, Mina Khoei and Guillaume Masson for the productive
collaborations, the support and motivation to push forward the work on motion extrapolation and anticipation. I would also like to thank Karlheinz Meier
for paving the way to start my PhD in Stockholm and the continuous support.
My colleagues at CB deserve particular gratitude for the great atmosphere,
both inside and outside the workplace. I am greatly indebted to Arvind Kumar for his feedback and commitment during the review process, to Peter
Ahlström for translating the abstract and Tony Lindeberg for inspiring discussions and feedback. Overall, I had a wonderful time in the department
and it is impossible to express all the appreciation here. Still, I’d like to express particular gratitude to Pierre, Phil, Florian and Pawel for being great
colleagues and travel mates, but mostly for enduring my complaints and annoying comments with patience, helping me through tough times with advice,
valuable comments, intelligent brainstorming sessions, patience, and cheering
me up with absurd humour, jokes, drinks and sarcasm. I’d also like to thank
Pradeep for all the fun and challenging discussions inside and even more outside the workplace, Benjamin Auffarth for the collaboration on the olfactory
system, Oliver Frings for being the best football and drinking companion (except for supporting the wrong team), Eric Müller for sharing his knowledge
and skills, continuous support and collaborations, plus having an excellent music
taste, Ramón and Georgios for relaxing after-work meetings, Anu and Yann
for being great travel mates to Riga, Jan, and Peter for great discussions and
tooling tips, Marko, Ylva and Matthias for boosting social activities, Mikael
Lindahl for the brainstorming meetings, and all other former and current
members of CB, Simon, David, Jeanette, Omar, Alex, Nathalie, Dinesh, Jyotika, Marcus, Jenny, Nalin, Mikael, Malin, Roland, Anders Ledberg and all
the others I forgot because of sleep deprivation ...
I would also like to thank all colleagues of FACETS(-ITN) and BrainScaleS
projects for the collaboration and interesting meetings. Particular thanks to
the colleagues I met during my stays in Marseille and Gif-sur-Yvette: Laurent
Perrinet, Mina Khoei, Giacomo Benvenuti, Fredo Chavane, Guillaume Masson, Yves Fregnac, Jan Antolik, Andrew Davison, Michelle Rudolph-Lilith,
Lyle Muller, Domenico Guarino for the friendly atmosphere, good collaborations and inspiring attitudes.
I am sincerely thankful to have met such great people and for the time I
was able to spend with them. I am leaving with a mind full of good memories!
Finally, I want to express my deepest gratitude to the most important
persons in my life who also contributed to the success of this thesis. I am extremely grateful to my family who has supported and encouraged me throughout my life and the course of this thesis and enabled me to reach the point
where I am now. Most importantly, I want to express my utmost gratitude
to Katherine Walker for being an amazing, supportive and caring partner,
source of inspiration, joy and motivation, for all the effort and commitment
enabling the fantastic trips and time spent together which made my life so
indescribably wonderful.
Contents

List of Figures . . . xiii

I Introduction, Theory and Methods . . . 1

1 Introduction . . . 3
  1.1 Aims of the thesis . . . 3
  1.2 Thesis overview . . . 4
  1.3 Motivation . . . 5
    1.3.1 Why study prediction & pattern recognition? Why is this work needed? . . . 6
  1.4 List of papers included in the thesis and contributions . . . 7
  1.5 Publications not included in the thesis . . . 8

2 Biological background . . . 9
  2.1 The Visual System . . . 9
    2.1.1 Structure of the visual system . . . 9
      2.1.1.1 Retina . . . 10
      2.1.1.2 Lateral geniculate nucleus (LGN) . . . 11
      2.1.1.3 Visual cortices . . . 11
  2.2 The Olfactory System . . . 13
    2.2.1 Structure of the olfactory system . . . 16
      2.2.1.1 The epithelium . . . 16
      2.2.1.2 The olfactory bulb (OB) . . . 17
      2.2.1.3 The olfactory cortex or piriform cortex (PC) . . . 19
      2.2.1.4 Other olfactory areas . . . 20
    2.2.2 Differences and commonalities to visual perception . . . 20

3 Methods . . . 23
  3.1 Modeling . . . 23
    3.1.1 Computational modeling . . . 24
    3.1.2 Computational neuroscience . . . 25
    3.1.3 Levels of description and understanding . . . 28
      3.1.3.1 Modeling approaches in neuroscience . . . 30
  3.2 Neuromorphic systems . . . 34
    3.2.1 What is neuromorphic engineering? . . . 34
    3.2.2 Examples of neuromorphic systems . . . 35
    3.2.3 Simulation technology . . . 37
    3.2.4 Hardware emulations - a short overview . . . 39
    3.2.5 Applications of neuromorphic systems . . . 41

4 Theoretical background . . . 43
  4.1 Connectionism . . . 43
    4.1.1 Criticism against connectionism . . . 45
  4.2 Internal representations, neural coding and decoding . . . 49
    4.2.1 Continuous, rate or frequency coding . . . 50
    4.2.2 Discrete coding, labeled line coding and interval coding . . . 51
    4.2.3 Population level: Local and distributed representations . . . 53
    4.2.4 Temporal coding . . . 55
    4.2.5 Probabilistic and Bayesian approaches . . . 56
      4.2.5.1 Relevance of probabilistic and Bayesian ideas for this thesis and their implementation . . . 58
    4.2.6 Decoding approaches . . . 59
  4.3 Computations in sensory systems . . . 62
    4.3.1 Receptive fields . . . 62
    4.3.2 Normalization, winner-take-all . . . 67
    4.3.3 Pattern recognition, associative memory . . . 69
    4.3.4 Pattern completion, pattern rivalry and Gestalt perception . . . 70
    4.3.5 Link between olfactory and visual system models . . . 71
    4.3.6 Prediction and anticipation . . . 72
  4.4 Function and connectivity . . . 74
    4.4.1 Genetics and tendencies of pre-wiring in the brain . . . 75
      4.4.1.1 Topography . . . 76
    4.4.2 Learning and activity dependent connectivity . . . 78
    4.4.3 Attractor networks and other approaches . . . 80
    4.4.4 Bayesian confidence propagation neural network (BCPNN) . . . 83
      4.4.4.1 Sketch of a BCPNN derivation . . . 84
      4.4.4.2 Spiking version of BCPNN . . . 87
      4.4.4.3 Application examples using BCPNN . . . 88

II Results and Discussion . . . 91

5 Results . . . 93
  5.1 Results on olfactory system modeling . . . 93
    5.1.1 Paper 1 - A spiking neural network model of self-organized pattern recognition in the early mammalian olfactory system . . . 93
      5.1.1.1 Summary of results from Paper 1 . . . 95
      5.1.1.2 Conclusions and discussion - Paper 1 . . . 96
  5.2 Results on visual system modeling . . . 101
    5.2.1 Paper 2 - Anisotropic connectivity implements motion-based prediction in a spiking neural network . . . 102
      5.2.1.1 Summary of results from Paper 2 . . . 104
      5.2.1.2 Conclusions and discussion - Paper 2 . . . 104
    5.2.2 Paper 3 - Signature of an anticipatory response in area V1 as modelled by a probabilistic model and a spiking neural network . . . 107
      5.2.2.1 Summary of results from Paper 3 . . . 109
      5.2.2.2 Conclusions and discussion - Paper 3 . . . 111
    5.2.3 Ongoing work: Paper 4 - Motion-based prediction with self-organized connectivity . . . 111
      5.2.3.1 Conclusions and discussion - Paper 4 . . . 121

III Conclusions . . . 123

6 Summary . . . 125

7 Outlook and Future Work . . . 127

IV Index and Bibliography . . . 131

Index . . . 133

Bibliography . . . 135

V Publications . . . 187

List of Figures

2.1 Schematic of the first three stages of the early mammalian olfactory system . . . 18
3.1 Membrane potential of a leaky integrate-and-fire neuron . . . 27
3.2 Four fundamental electrical building blocks . . . 40
4.1 Relationship between symbolic/representational theories of mind (SRTM) or language of thought (LOT) and connectionism . . . 48
4.2 Neural coding schemes . . . 52
4.3 Tuning properties for one and two spatial dimensions . . . 66
4.4 Examples of Gestalt perception . . . 70
4.5 Spiking BCPNN traces . . . 89
5.1 Paper 1: Pattern recognition and concentration invariance . . . 94
5.2 Paper 1: Activity during pattern completion task on various levels of the olfactory hierarchy . . . 97
5.3 Paper 1: Influence of recurrent cortical connectivity on noise robustness and pattern completion capability . . . 98
5.4 Paper 1: Pattern rivalry . . . 99
5.5 Paper 1: Rasterplots of cortical activity for various odor concentrations . . . 100
5.6 Paper 2: From aniso- to isotropic connectivities . . . 103
5.7 Raster plot . . . 105
5.8 Paper 2: Comparison of prediction performance . . . 106
5.9 Paper 3: Connection probabilities in tuning property space . . . 108
5.10 Paper 3: From experiments to abstract models to spiking neural networks . . . 110
5.11 Paper 4: Tuning property and rasterplot during training . . . 112
5.12 Paper 4: Trained connectivities . . . 114
5.13 Paper 4: Anticipatory response in networks with trained connectivity . . . 117
5.14 Paper 4: Anticipatory response depends on background noise . . . 118
5.15 Paper 4: Comparison of different network architectures on anticipation behavior . . . 120
5.16 Paper 4: Influence of background noise on anticipatory response . . . 121
Part I

Introduction, Theory and Methods
Chapter 1
Introduction
“The human brain is a complex organ with the wonderful power of enabling
man to find reasons for continuing to believe whatever it is that he wants to
believe.” − Voltaire
1.1 Aims of the thesis
Understanding how our brains develop, learn and work has long been regarded
as one of the most challenging tasks of our scientific epoch. With billions of
neurons and trillions of connections and a variety of other constituents the
human brain is currently regarded as one of the most complex objects we
know about. Naively posed, the question is how all the neurons and synapses
behind our forehead interact to endow us with the multitude of abilities we use throughout our lives. The brain creates an internal model
of the world around us, processes the information it receives and uses the information in a variety of ways: to select and initiate an action, to memorize,
to plan or to predict what will happen in the future. The aim of this thesis
is to contribute to this quest of “understanding the brain” by studying two
fundamental functions that our brains accomplish:
1. Recognizing patterns from sensory stimuli
2. Predicting and anticipating future stimuli
Models of these functions have been studied by computational modeling in the
publications attached to this thesis. The goal of this first part of the thesis is
to provide the reader with the background knowledge required to understand
the contributions of these studies and to integrate the thesis in the existing
scientific landscape. For this purpose, relevant references and reviews with a
focus on recent literature are given. The motivation for this thesis and why
these two functions are particularly relevant to study are explained in section
1.3.
Following the Connectionist school of thought (see section 4.1), the studies
presented in this thesis are based on the idea that certain cognitive functions
can be modeled and implemented by interconnected neural networks. Hence,
this thesis focuses on the question of how the above-named functions can be
implemented using neural network models and, more specifically, what the underlying assumptions are. This approach of implementing certain functions
using components that resemble the biological counterpart in a reasonable way
is called neuromorphic engineering or neurocomputing (see section 3.2). Neuromorphic engineering seeks to provide engineering solutions inspired by the
nervous system. Hence, as the above-named functions deal with generic problems that could have applications in a variety of fields, this thesis contributes
to the field of neuromorphic engineering.
The overarching goal of this thesis is to develop models that could possibly resemble processes taking place in animal or human brains. Hence, it is
important to show that the desired functions or algorithms can be realized
in a neurophysiologically plausible way. Therefore, the models presented in
this thesis are implemented using different kinds of spiking neurons, which is
described in more detail in section 3.1.1. The motivation for this approach is
given in section 3.1.
The visual and the olfactory systems are complex and are composed of
many coupled neural systems, nuclei and areas and fulfill a multitude of different functions and operations. As the focus of this thesis is on prediction
and pattern recognition, only those areas which are relevant in the respective
context will be introduced. Regarding pattern recognition, the focus is put
on the lower (or “early”) stages of the processing hierarchy in the olfactory
system. In order to deal with the question of how brains could possibly predict
parts of the future, motion extrapolation in the visual system is treated as a
prototypical example problem.
1.2 Thesis overview
This thesis is structured as follows: First, I will motivate the questions addressed in this thesis and describe my contributions to the individual publications included in it. Then, in order to aid the understanding of the
presented work, two sections explaining the biological and theoretical background will follow. The biological background gives a short overview of areas
in the visual and olfactory system relevant to motion perception and pattern
recognition, respectively. The theoretical background sections are intended to
introduce fundamental concepts which are utilized or relevant in this context.
These background sections lay the foundation for the applied methods and let
the reader embed the presented work in the existing research landscape. After
these two chapters, the major results of the publications will be summarized
and their relation to the presented background will be highlighted. Finally,
two chapters providing a summary, discussion and outlook regarding future
work will conclude the thesis.
1.3 Motivation
Why study the brain?
The goal to understand brain functioning can be motivated in many different ways, of which the strongest but also most vague is pure curiosity and the urge to understand how cognitive processes like perception, memory formation and prediction work on different levels.
Isn’t it fascinating that the exact same physical stimulus, be it an odor,
image or a song, can lead to very different reactions and the same (historical)
facts and numbers can lead to very different predictions, depending on whom
you ask? Hence, one may ask how these differences in classification and prediction behavior may be explained. Following a connectionist approach, this
leads to the question of how mental phenomena can be understood using computational models inspired by the nervous system (see section 4.1).
A much more important motivation for understanding brain functioning concerns mental disease: how malfunction manifests on a mechanistic or circuit level (e.g. misclassifying the “self” or the ownership of thoughts in schizophrenic behavior) and which possibilities exist to provide an effective treatment. The long-term goal is to advance computational models so that they can assist and contribute to better treatments.
A third motivation is to use neural systems as inspiration for advancing technology in the broadest sense, e.g. by developing algorithms that function similarly to our brains in filtering masses of information and are able to predict and reason. Another goal along this line is to build computing devices
following design principles closer to the nervous system than the traditional
von-Neumann architecture used for present computers. This approach is the
guiding principle of neuromorphic engineering aiming at hardware development (see section 3.2) which can be used for a multitude of applications (see
section 3.2.5).
1.3.1 Why study prediction & pattern recognition? Why is this work needed?

How is predictive coding implemented on a neural level?
This thesis is concerned with two functions which can be regarded as fundamental for brain functioning: pattern recognition and stimulus prediction.
Pattern recognition is an essential function as it provides us with the ability to
orient ourselves in the world by distinguishing known from unknown stimuli
and by assessing the value and quality of a stimulus, e.g. answering the question of whether something is dangerous, poisonous, or pleasant and what the stimulus represents. A stimulus can be anything from a sound to a face to an odor, and can be seen as a pattern of neural activation, typical for a particular stimulus or object, present at different levels of the sensory hierarchy.
Prediction, foresight, or forethought, i.e. an estimate of what will happen in the future, is another very important function that our brains try to achieve in many different ways. Prediction is not only important for motion perception (e.g. to estimate a moving object’s position in the future) or vision in general (Rao and Ballard, 1999; Enns and Lleras, 2008), but also plays an important role in other sensory modalities like hearing, language processing (Farmer et al., 2013), motor control (Shadmehr et al., 2010), somatosensation and proprioception (e.g. adjusting muscle activity when climbing stairs); see also Bastos et al. (2012) and section 4.3.6 for further findings arguing for
the predictive coding hypothesis. The general ability to predict the future
is of course highly valuable in many other contexts as well (in weather forecasts, predicting the trajectory of a hurricane, or predicting the reaction of
our fellows in a social context), which makes it attractive to study from a
computational perspective.
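To make the notion of motion extrapolation concrete: in its simplest form, an observer (or a neural circuit) estimates a trajectory from past noisy observations and projects it forward, e.g. to compensate for processing delays. The sketch below is a toy illustration of this idea only, not code from the thesis; the constant-velocity model and all parameter values are assumptions made for the example.

```python
import numpy as np

# Toy motion extrapolation (illustrative only): fit a constant-velocity
# model to noisy position observations and extrapolate the position past
# the last observation, e.g. to compensate for a sensory delay.
def extrapolate_position(times, positions, delay):
    """Least-squares fit of x(t) = x0 + v*t, evaluated at times[-1] + delay."""
    v, x0 = np.polyfit(times, positions, deg=1)  # slope = speed, intercept = offset
    return x0 + v * (times[-1] + delay)

rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 0.05)                    # 1 s of observations, 20 samples
x = 2.0 * t + rng.normal(0.0, 0.01, t.size)      # true speed: 2 units/s, small noise
predicted = extrapolate_position(t, x, delay=0.1)  # position ~0.1 s ahead
```

With the small noise level above, the prediction lands close to the true position 2.0 * 1.05 = 2.1; a motion-sensitive neural network would of course implement something like this implicitly in its connectivity rather than via an explicit fit.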
Although the predictive coding hypothesis is becoming increasingly accepted (namely, that the nervous system predicts future inputs on various levels, from lower (Hosoya et al., 2005; Huang and Rao, 2011; Clark, 2014) to higher stages (Bar, 2007; Clark, 2013; Summerfield and de Lange, 2014)), the question of how the nervous system implements predictive functions on the neural level is not fully understood. This thesis tries to provide an answer in the context of motion perception using a connectionist approach.
The combination of recognizing stimuli by matching them with information
stored in memory and the ability to predict the future can be regarded as essential features of brain functioning (Hawkins and Blakeslee, 2007; Bar,
2009, 2011).
What is novel about this thesis?
The novelty and contribution of this thesis is the combination of methods used to model the variety of functions named above. Many previous models implemented at a similar level of detail as used in this thesis are concerned with the analysis and reproduction of certain dynamics observed in neural systems (e.g. oscillations, spike statistics, synchrony). In contrast, this thesis is not focused on questions surrounding neural dynamics, but on the functional role of neural circuits and, in particular, on how connectivity can be organized to achieve that function. That is, using Marr’s levels of analysis (Marr, 1982), priority is given to what the neural system tries to achieve (computationally) and how this can be achieved (algorithmically), with only little importance attached to how this is (or could be) realized physically (see section 3.1.3 for more on the levels of description used in this thesis). This is because one possible approach to how a certain function could be achieved is suggested and explored, under the assumption that what is modeled (i.e. the function) is more important than the precise physical realization.
The computational modeling approach used in this thesis is rooted in the connectionist school of thought (see section 4.1), but uses spiking neuron models as its substrate, which represents a step towards increased biophysical detail and realism compared to previous functional models. Another important motivation is to target neuromorphic hardware (see section 3.2.4), which promises fast, compact and low-energy platforms for a multitude of applications and relies on the use of spiking neural networks.
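As an illustration of the kind of spiking unit such models build on, the following is a minimal leaky integrate-and-fire (LIF) neuron in plain Python/NumPy. It is a generic textbook sketch, not the model used in the papers; all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron (textbook sketch; the
# parameter values are illustrative, not those of the thesis models).
def simulate_lif(i_ext, dt=0.1, tau_m=20.0, v_rest=-70.0,
                 v_reset=-70.0, v_thresh=-50.0, r_m=10.0):
    """Forward-Euler integration of tau_m * dV/dt = -(V - v_rest) + r_m * I.

    i_ext: input current (nA) per time step; dt in ms, voltages in mV.
    Returns the recorded membrane trace and the spike times (ms).
    """
    v = v_rest
    trace, spike_times = [], []
    for step, i in enumerate(i_ext):
        v += dt * (-(v - v_rest) + r_m * i) / tau_m   # leaky integration
        if v >= v_thresh:                             # threshold crossing
            spike_times.append(step * dt)             # record a spike
            v = v_reset                               # reset the membrane
        trace.append(v)
    return np.array(trace), spike_times

# A constant suprathreshold drive produces regular spiking.
trace, spikes = simulate_lif(np.full(5000, 2.5))      # 500 ms of input
```

In the actual models, populations of such units are coupled through learned or prescribed connectivity patterns and simulated with dedicated tools (see section 3.2.3 on simulation technology).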
1.4 List of papers included in the thesis and contributions
Paper 1: Kaplan and Lansner (2014)
Kaplan, Bernhard A. and Lansner, Anders (2014). “A spiking neural network model of self-organized pattern recognition in the early mammalian olfactory system.” Frontiers in neural circuits, 8.
My contributions were: implementing the model, writing the simulation code (publicly available at Kaplan (2014b)), data analysis, parameter tuning, and writing the paper.
Paper 2: Kaplan et al. (2013)
Kaplan, Bernhard A., Lansner, Anders, Masson, Guillaume S., and Perrinet, Laurent U. (2013). “Anisotropic connectivity implements motion-based prediction in a spiking neural network”. Frontiers in computational neuroscience, 7.
My contributions were: design of the study, implementing the model, writing the simulation code (publicly available at Kaplan (2013)), data analysis, parameter tuning, and writing the paper.
Paper 3: Kaplan et al. (2014)
Kaplan, B. A.∗, Khoei, M. A.∗, Lansner, A., and Perrinet, L. U. (2014). “Signature of an anticipatory response in area V1 as modeled by a probabilistic model and a spiking neural network”. In Neural Networks (IJCNN), 2014 International Joint Conference on, pages 3205–3212. IEEE.
∗ both authors contributed equally to this work
My contributions were: design of the study, implementation of the model, simulation (code available at Kaplan (2014a)), data analysis, parameter tuning, and writing the paper.
1.5 Publications not included in the thesis
Auffarth et al. (2011): Auffarth, Benjamin, Kaplan, Bernhard, and Lansner,
Anders (2011). “Map formation in the olfactory bulb by axon guidance of olfactory neurons”. Frontiers in systems neuroscience, 5.
Kaplan, Bernhard A., Berthet, Pierre, and Lansner, Anders. “Smooth-pursuit
learning in a closed-loop cortex - basal-ganglia model with spiking neurons”
(in preparation)
Chapter 2
Biological background
On sensory systems
Sensory systems connect our body with the outside world and are responsible for our perception of the physical world around us. Mammals possess six sensory systems (also called modalities): auditory (hearing), somatic sensation (touch), gustatory (taste), vestibular (balance/movement), olfaction (smell) and vision (seeing). Some animals are able to sense magnetic fields (magnetoception, e.g. birds, turtles, sharks, honeybees and fruit flies, amongst others) or electrical fields (electroception, e.g. sharks, rays, bees and many families of fish). In general, a sensory system consists of the sensors that transduce a physical stimulus into a neural response, the pathways which transfer these responses, and the parts of the brain that receive these responses and allow us to form a percept of the physical stimulus.
This thesis will focus on vision and olfaction and will also touch on some topics relevant for other modalities, such as neural coding (see section 4.2). This chapter will briefly introduce the major parts of the
visual and olfactory sensory systems and other brain areas that are relevant
for the presented publications. As for the whole thesis, this chapter focuses
on the nervous system of mammals.
2.1 The Visual System
2.1.1 Structure of the visual system
The visual system in mammals consists of a large number of different parts
each serving one or more specialized tasks. Only the brain areas most relevant in the context of this thesis are briefly explained here.
2.1.1.1 Retina
Cell types in the retina
The retina in the eye is responsible for sensing light; to this end it carries photoreceptors that transduce electromagnetic radiation (photons) into neural signals. Our image of the world is formed on the retina by excitation of mainly two cell types, the rod and cone cells. However, further cell types have recently been identified in the retina (Sand et al., 2012; Marc et al., 2012; Seung and Sümbül, 2014) which are believed to be light-sensitive as well. There are approximately 125 million rod cells in the human retina (Curcio et al., 1990). Rod cells are mostly present at the outer edges of the retina and are used for peripheral and night vision, as they can function in less intense lighting conditions than cone cells. Cone cells are responsible for color vision. Different types of cone cells respond most strongly to different ranges of the optic spectrum (blue, green, red). There are approximately 7 million cone cells in the human retina. They are most densely packed in the fovea, giving us sharp central vision. The signals created by the retina are sent via the optic nerve to various brain areas, most of them targeting the lateral geniculate nucleus (LGN), which relays the information to the cortex, and the superior colliculus (Goodale and Milner, 2013).
Computations in the retina
One prominent feature of retinal circuits is lateral inhibition (or center-surround inhibition), which is thought to be involved in decorrelating spatial information in order to reduce the spatial redundancy in natural images (Atick and Redlich, 1990) (see also section 4.3 for computations in sensory systems). However, this enhancement of coding efficiency through sparse output spike trains from retinal ganglion cells may not only be due to lateral inhibition, but also to nonlinear processing in the retina (Pitkow and Meister, 2012). Other important functions of retinal circuits are noise reduction under different lighting conditions, edge detection, and the extraction of contrast and luminance intensity (Balboa and Grzywacz, 2000). Specific connectivity patterns between neurons in the retina allow information about motion direction to be extracted (Briggman et al., 2011). One approach to describing the computations carried out by retinal circuits is to classify them as adaptation, detection, and prediction (Kastner and Baccus, 2014), but the precise definition of the computations carried out in the retina is still an ongoing debate. Recent advances concerning the retinal “connectome” will allow further insights into cell-specific connectivity and function (Marc et al., 2012, 2013).
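The decorrelation idea above can be sketched numerically. The following toy example (my own illustration, not code from the thesis) applies a difference-of-Gaussians center-surround filter, a standard model of lateral inhibition, to a one-dimensional luminance profile: uniform (spatially redundant) regions evoke almost no response, while the informative edge does. Kernel widths and stimulus values are arbitrary illustrative choices.

```python
import numpy as np

def dog_kernel(size=21, sigma_c=1.5, sigma_s=4.0):
    """Difference-of-Gaussians: narrow excitatory center minus broad
    inhibitory surround, each normalized to unit area (zero net sum)."""
    x = np.arange(size) - size // 2
    center = np.exp(-x**2 / (2 * sigma_c**2))
    surround = np.exp(-x**2 / (2 * sigma_s**2))
    return center / center.sum() - surround / surround.sum()

# A 1-D luminance profile with a single step edge (dark -> bright).
luminance = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])
response = np.convolve(luminance, dog_kernel(), mode="same")

# Uniform (redundant) regions evoke ~zero response; the edge does not.
print("response in uniform region:", round(float(abs(response[25])), 4))
print("response at the edge      :", round(float(abs(response[50])), 4))
```

Because the kernel sums to zero, any constant (fully redundant) patch is mapped to zero output, which is one way to see the redundancy-reduction argument of Atick and Redlich (1990).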
2.1.1.2 Lateral geniculate nucleus (LGN)
The LGN is part of the thalamus and is situated at the end of the optic
tract receiving both afferent signals from the retina and signals from cortical
areas. Due to its position between sensory neurons and higher cortical areas,
the LGN acts as a relay station between modulatory or “top-down” signals
and afferent or “driver” signals (Sherman and Guillery, 1998). Interestingly,
only about 10% of synapses onto geniculate relay cells are from the retina
(Van Horn et al., 2000). Van Horn et al. (2000) report that about 30% of the
synapses are inhibitory and involved in feed-back inhibition from cortical areas and feed-forward inhibition from afferent pathways. The remaining 60% of synapses onto LGN neurons are modulatory inputs from cortical and brainstem regions.
LGN acts as functional relay
The LGN projects mostly to V1, but very likely also to other higher visual cortical areas like V2, V3 and MT, as experiments with blindsight patients (Schmid et al., 2010) and recordings in macaque monkeys suggest (Sincich et al., 2004). LGN outputs play important roles in binocular vision (e.g. directing where the eyes focus) and in object and motion perception, by extracting position and velocity information that is forwarded to V1. Because the LGN is dominated by feed-back synapses, and because the LGN (as well as other thalamic nuclei) projects forward to many different cortical areas, the thalamic nuclei play an important role in corticocortical communication by changing the transfer of information between cortical areas. One presumed role of the LGN is to modulate information transfer according to attentional demands (Guillery and Sherman, 2002). This modulatory input from cortex affects the response mode of thalamic cells and can, for example, switch the response of relay cells between tonic spiking and burst spiking (Sherman and Guillery, 2002).
2.1.1.3 Visual cortices
Scientists have divided the brain into many different areas, of which a subset is involved in the processing of visual information. Brain areas have historically been distinguished by their cytoarchitecture, that is, by different staining patterns of cell bodies. Modern imaging methods can reveal correlations between specific stimuli and certain brain areas (Villringer and Chance, 1997), which has led to the assumption that brain regions have specific functional roles (e.g. Zeki et al. (1991); Sereno et al. (1995)). Here, from the abundance of visual areas and theories aiming to integrate them, only those concepts and areas that are most relevant in the context of this thesis will be mentioned.
The two streams hypothesis
Visual processing is distributed across different areas that are believed to be organized in a hierarchy (Van Essen and Maunsell, 1983). One prominent theory assumes that the flow of information along this hierarchy can be divided into different streams (Mishkin et al., 1983; Ungerleider and Haxby, 1994): a ‘ventral stream’ for object vision (providing information for the “what?” question) and a ‘dorsal stream’ for spatial vision (“where?”). These two streams are distinct in terms of connectivity (Felleman and Van Essen, 1991; Young et al., 1992) and can be distinguished by the purposes they serve: the dorsal stream is believed to be specialized for the visual control of actions, while the ventral stream is dedicated to perception.
However, it has been argued that there is more interaction between the two pathways than initially assumed and that different brain areas could interact depending on the task (DeYoe and Van Essen, 1988; Milner and Goodale, 2008). Furthermore, it has also been suggested that the dorsal stream should be divided into three different streams that provide information for non-conscious visually guided actions and hence target the “how?” question (Kravitz et al., 2011; Creem and Proffitt, 2001).
Why are these streams relevant in this thesis?
The dorsal stream is particularly relevant in the context of this thesis, as it is concerned with transmitting and processing signals related to motion (DeYoe and Van Essen, 1988; Van Essen and Gallant, 1994). The areas most involved in the processing of motion information are the retina and LGN, the primary visual cortex (V1), the middle temporal area (MT, sometimes also called V5), the medial superior temporal complex (MST) and the posterior parietal cortex (Van Essen and Gallant, 1994). As the dorsal stream transports motion information, it provides essential signals for visually guided actions and particularly for the control of eye movements (Kravitz et al., 2011), which makes it relevant for Paper 2 and Paper 3 (Kaplan et al., 2013, 2014). These areas are strongly interconnected in different ways: in a feed-forward fashion (e.g. retina → LGN → V1 → MT → MST), but also through feed-back (top-down) connections and, within one area, through recurrent connections. Furthermore, it has been shown that not all neural signals follow this sequence of stages in a strict manner; some cells bypass one stage and project immediately to the next “higher” stage, e.g. from the LGN directly to MT (Sincich et al., 2004; Nassi et al., 2006). It is also to be noted that areas like the superior colliculus, which is important in eye motor control and can be seen as a target or output of the dorsal stream, project back to area MT (Lyon et al., 2010). Hence, the notion of higher and lower areas is to be interpreted with caution, as there does not seem to be a clear and strict processing hierarchy but rather a tendency for the major direction of signals. To complicate
things even further, the number of connections projecting from area A to B may not reflect the resulting effect of this pathway without taking into account the highly variable weights of these connections (Cossell et al., 2015) and task-dependent modulation (Womelsdorf et al., 2006).
In short, visual motion information is distributed across many different areas, and the precise functional order of processing stages is difficult to infer from anatomy and physiological experiments alone, which makes computational models fruitful instruments for studying hypotheses.
Modeling motion processing
The focus of this thesis lies not on the anatomical structure of visual areas and the faithful integration of these different areas into functional models, but rather on some of the computations carried out along the processing hierarchy and on how network connectivity relates to behaviorally relevant functions. Therefore, little anatomical or physiological detail is integrated into the models presented here. Instead, the approach taken is to represent motion information through the activity of dedicated cells, which would correspond to cells that are in reality distributed across presumably several motion-sensitive areas. That is, for the models presented in this thesis it is not relevant where exactly the cells that represent motion information are situated, but rather how they are connected with each other; the aim is to present a general idea for the processing of motion information. As described in Paper 2 and Paper 3 (Kaplan et al., 2013, 2014), each motion-sensitive cell is equipped with a receptive field (in short: a preferred position and a preferred direction of motion, see section 4.3.1), which makes the cell receive increased excitatory input if an object passes this preferred position in the visual field and moves with the preferred speed. The underlying principles of topographic mapping of the outside world onto neural tissue and the concept of receptive fields will be explained in more detail in sections 4.4.1.1 and 4.3.1. The question addressed in this thesis is how the connectivity between these cells influences their response to a stimulus, and how this connectivity can be constructed so that it achieves a certain function, namely to extrapolate the trajectory of a moving stimulus and to predict future sensory information in the presence of noise and a temporarily interrupted stream of information.
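The receptive-field idea described above can be sketched as follows. This is a minimal illustration of Gaussian tuning over position and velocity, not the actual model code of Kaplan et al. (2013, 2014); the tuning widths and stimulus values are arbitrary assumptions of this sketch.

```python
import numpy as np

def cell_response(stim_x, stim_v, pref_x, pref_v, sigma_x=0.1, sigma_v=0.2):
    """Excitatory drive of a motion-sensitive cell: the product of
    Gaussian tuning curves over position and velocity, so the drive is
    maximal when the stimulus crosses the preferred position at the
    preferred velocity (positions in normalized visual-field units)."""
    return (np.exp(-(stim_x - pref_x) ** 2 / (2 * sigma_x ** 2))
            * np.exp(-(stim_v - pref_v) ** 2 / (2 * sigma_v ** 2)))

# A stimulus currently at x = 0.5, moving rightward with velocity 1.0.
stim_x, stim_v = 0.5, 1.0

matched   = cell_response(stim_x, stim_v, pref_x=0.5, pref_v=1.0)
wrong_dir = cell_response(stim_x, stim_v, pref_x=0.5, pref_v=-1.0)
far_away  = cell_response(stim_x, stim_v, pref_x=0.9, pref_v=1.0)

print(f"matched cell        : {matched:.4f}")  # preferred position and velocity
print(f"opposite direction  : {wrong_dir:.4g}")  # same position, wrong velocity
print(f"distant cell        : {far_away:.4g}")  # right velocity, wrong position
```

Anisotropic connectivity between such cells (favoring targets ahead of a source cell along its preferred direction) is what implements the trajectory extrapolation studied in Papers 2 and 3.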
2.2 The Olfactory System
The sense of olfaction allows organisms to detect and distinguish volatile chemicals in their environment. One can distinguish a main olfactory system that detects volatile airborne substances and an accessory olfactory system which is believed to detect fluid-phase substances and pheromones. This thesis focuses on the main olfactory system, and the model presented in Paper 1 (Kaplan and Lansner, 2014) is inspired by the main olfactory system of mammals.
What is the stimulus?
Odors (or scents or fragrances) are composed of one or several odorants (or aroma compounds), which can be classified by their chemical structure into, e.g., esters, linear or cyclic terpenes, alcohols, aldehydes, aromatics, thiols, ketones, lactones, and amines (see Axel (2006) for an accessible introduction to olfaction). Natural odors are usually mixtures consisting of several odorants, but how to precisely describe odor “objects” is less clear. In contrast to the visual world, in which stimuli can be described rather easily by intuitive concepts like color, brightness (light intensity), spatial extent, velocity, etc., the description of odors is less intuitive. The quest to describe an odor quantitatively, and to distinguish one odor from another in agreement with our qualitative perception of odor categories, is a scientific challenge in itself. For example, a quantitative odor metric would desirably agree with the qualitative, intuitive perception that smells like strawberry and cherry are more similar (i.e. close in the odor descriptor space) than cherry and peppermint or garlic.
There are three major difficulties regarding the quantification or measurement of the odor stimulus space. One is the integration of perceptual categories based on human perception, which is naturally qualitative and subjective (Wise et al., 2000). Hence, finding perceptual odor categories is already a challenge, and statistical methods have been applied to determine the number of categories (Castro et al. (2013) report ten perceptual categories). Another major difficulty is how to measure and quantify an odor in a simple yet sufficient way, so that the chosen odor measurement correlates with both perceptual and neural responses (see Haddad et al. (2008b); Secundo et al. (2014) for reviews). This question has been approached by using a large set of physico-chemical descriptors (e.g. molecular weight, functional groups, etc.; in total 1664 were used in Khan et al. (2007) and Haddad et al. (2008a)) and combining a subset of these descriptors so that perceptual (e.g. pleasantness) and neural responses can be explained (Haddad et al., 2010).
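The descriptor-space approach can be sketched as follows. This is a toy illustration only: the descriptor values below are invented, and real studies used on the order of 1664 physico-chemical descriptors (Khan et al., 2007). After per-descriptor normalization (so that, e.g., molecular weight does not dominate by virtue of its units), similarity between odorants can be measured with, for example, cosine similarity.

```python
import numpy as np

# Rows: odorants; columns: hypothetical descriptors (e.g. molecular
# weight, counts of functional groups, ...). Values are made up
# purely for illustration.
descriptors = {
    "odorant_A": np.array([120.0, 2.0, 0.0, 1.5]),
    "odorant_B": np.array([125.0, 2.0, 0.0, 1.4]),  # chemically similar to A
    "odorant_C": np.array([34.0, 0.0, 3.0, 0.1]),   # very different
}

names = list(descriptors)
X = np.stack([descriptors[n] for n in names])
# z-score each descriptor column so no single unit dominates the metric
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_ab = cosine_similarity(Xz[0], Xz[1])
sim_ac = cosine_similarity(Xz[0], Xz[2])
print(f"A vs B (similar odorants)   : {sim_ab:.2f}")
print(f"A vs C (dissimilar odorants): {sim_ac:.2f}")
```

A useful odor metric would be one for which such descriptor-space distances agree with perceptual similarity judgments, which is exactly the correlation the cited studies try to establish.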
Theories of receptor-odor interaction
A third problem is the question of how olfactory receptors (ORs) interact with odors in the first place. There exist three theories which try to explain how ORs bind to odorant molecules. The “vibration theory” proposes that signal transduction depends on the vibrational energy mode of the molecule and the receptor (Malcolm Dyson, 1938; Wright, 1954; Turin, 1996; Zarzo, 2007; Franco et al., 2011; Gane et al., 2013). This theory has been tested by replacing, e.g., hydrogen atoms with deuterium, which changes the vibrational properties of the odor molecule and leads to different odor percepts in a differential conditioning paradigm in honeybees (Gronenberg et al., 2014). However, the vibration theory has been challenged by contradicting psychophysical experiments (Keller and Vosshall, 2004) and by the “shape theory”, which assumes that molecular shape is the most determining factor in receptor activation (Amoore, 1964; Zarzo, 2007; Touhara and Vosshall, 2009). The “odotope theory” (odotope in analogy to epitope) is a variation or extension of the shape theory and suggests that a combination of shape factors (and functional groups) determines the coupling between receptor and odorant molecule (Shepherd, 1987; Mori and Shepherd, 1994; Mori, 1995; Firestein, 2001). All three theories remain plausible and none of them can be ruled out completely, as long as the detailed mechanisms of odorant-receptor activation are not fully known. Since the interaction between the physical world and the nervous system is unclear at this fundamental level, the concept of receptive fields has been difficult to establish and could be applied only to a lesser degree than in vision (a receptive field of a cell is the part of the sensory world to which the cell responds, see section 4.3.1). A review of receptive fields in the olfactory cortex is given by Wilson (2001), and a comparison between vision and olfaction by Auffarth (2013).
Why is olfaction interesting?
Vision and olfaction are two very different sensory modalities at first sight. Still, both systems are fruitful example systems to study the interaction between connectivity and specific functions. Olfactory perception shows a very high discriminative power (Shepherd, 1994; Buck, 1996), which means that a great diversity of stimuli can be distinguished. This makes the olfactory system fascinating from an information-processing viewpoint, as it operates in a high-dimensional input space and gives a low-dimensional output (Secundo et al., 2014), e.g. answering the questions whether a stimulus is known, whether it originates from something pleasant, dangerous, edible, etc. As indicated above, the olfactory system uses a combinatorial code to represent this high-dimensional odor information across the processing hierarchy, as an odorant activates several receptors and one receptor is sensitive to several odorants (Malnic et al., 1999). This combinatorial representation of sensory information is transformed across different stages (introduced below), using presumably dissimilar codes and representations at each stage (see e.g. Su et al. (2009) for an introductory review of the olfactory system). Another interesting aspect with respect to information representation is the question of how important the temporal domain is for identifying odors or features of odorants (like concentration). Oscillations and temporal dynamics are believed to play an important role in this respect, but will not be considered in this thesis, as the focus is put on functional aspects that do not require specific dynamical patterns of activity to achieve a certain function. Section 4.2 gives a short introduction to temporal coding and other codes (see Rieke (1999) for a more exhaustive introduction). For further information on the temporal aspects of olfactory processing see, e.g., the review by Uchida et al. (2014), the studies by Laurent and Davidowitz (1994); Hendin et al. (1998); Margrie and Schaefer (2003); Schaefer and Margrie (2007); Haddad et al. (2013), and the Discussion in Paper 1 (Kaplan and Lansner, 2014).
The olfactory system has long been an attractive model system to study higher cognitive functions, memory phenomena and object recognition (Haberly and Bower, 1989; Granger and Lynch, 1991; Wilson and Stevenson, 2003). This is due to several reasons: for example, the lack of a thalamic relay from the lower sensory areas to the olfactory cortex, the simpler three-layered structure of the olfactory cortex (part of the paleocortex, assumed to be a phylogenetic precursor of the neocortex), and the accessibility of physiological and behavioral studies with animals that rely heavily on olfactory perception, like rodents and insects. Furthermore, many studies have shown the importance of synaptic plasticity in the olfactory system (both between lower and higher areas and within cortical areas), which makes it attractive for computational studies targeting the link between connectivity and function (Ambros-Ingerson et al., 1990; Hasselmo, 1993; Wilson et al., 2006; Wilson and Sullivan, 2011b; Chapuis and Wilson, 2012).
2.2.1 Structure of the olfactory system
2.2.1.1 The epithelium
Odors enter the nervous system by activating olfactory receptors (ORs), which belong to the family of G protein-coupled receptors (Buck and Axel, 1991). These ORs are expressed in the cell membranes of olfactory receptor neurons (ORNs) situated in different zones of the epithelium (Ressler et al., 1993; Vassar et al., 1993) (see e.g. Schild and Restrepo (1998); Gaillard et al. (2004) for reviews on ORs). A single OR reacts to a set of odorants (Gaillard et al., 2002), which are often similar in their chemical structure, and one odorant in turn excites or inhibits ORNs from multiple OR families (Araneda et al., 2000; Firestein, 2001; Wachowiak et al., 2005). This gives rise to combinatorial diversity and a combinatorial code present over several layers of the processing hierarchy. As mentioned above, the transduction mechanism of an OR exposed to an odorant is not fully understood, but it can be described as a cascade of biochemical reactions (see e.g. Kleene (2008); Rospars et al. (2010); Leal (2013) for reviews). Importantly, ORNs expressing one OR, i.e. belonging to one family of ORNs, project to one or at most a few specific sites called glomeruli in the olfactory bulb of each hemisphere (Vassar et al., 1994; Ressler et al., 1994; Mombaerts et al., 1996). This number varies from species to species, and for simplicity it has been assumed in Paper 1 (Kaplan and Lansner, 2014) that one family of ORNs projects to one glomerulus. Hence, there is a convergence of connections from spatially distributed ORNs in the epithelium to very few sites in the bulb.
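The combinatorial code and the ORN-to-glomerulus convergence described above can be sketched as follows. The affinity patterns below are invented purely for illustration, and the one-family-one-glomerulus mapping follows the simplification made in Paper 1 (Kaplan and Lansner, 2014).

```python
import numpy as np

rng = np.random.default_rng(1)
n_families, n_orn_per_family = 8, 100

# Binary affinity patterns (invented for illustration): which OR
# families each odorant activates. The two odorants overlap, since
# one receptor family can be sensitive to several odorants.
patterns = {
    "odorant_1": np.array([1, 1, 0, 1, 0, 0, 0, 0]),
    "odorant_2": np.array([0, 1, 1, 1, 0, 0, 1, 0]),
}

def glomerular_map(pattern, noise=0.1):
    """All ORNs of one family converge onto one glomerulus (the
    simplification used in Paper 1); averaging the noisy single-ORN
    responses within each family reduces the noise."""
    orn_responses = pattern[:, None] + noise * rng.normal(
        size=(n_families, n_orn_per_family))
    return orn_responses.mean(axis=1)

m1 = glomerular_map(patterns["odorant_1"]) > 0.5
m2 = glomerular_map(patterns["odorant_2"]) > 0.5

print("active glomeruli, odorant 1:", np.flatnonzero(m1))
print("active glomeruli, odorant 2:", np.flatnonzero(m2))
print("shared glomeruli  :", int(np.sum(m1 & m2)))
print("distinct glomeruli:", int(np.sum(m1 ^ m2)))
```

Even though the two activation patterns overlap, the glomeruli that differ between them suffice to tell the odorants apart, which is the essence of a combinatorial code; the averaging step also illustrates one benefit of the many-ORNs-to-one-glomerulus convergence.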
2.2.1.2 The olfactory bulb (OB)
The olfactory bulb is the first target site for projections from ORNs; it hence plays a crucial role in the olfactory processing hierarchy and was the focus of early computational modeling studies (Rall and Shepherd, 1968). The precise
structure, functions and coding mechanisms implemented in the OB circuitry
are complex and the subject of many studies that are too numerous to be
exhaustively discussed here. However, I will mention a few aspects which are
relevant with respect to the questions addressed in this thesis and particularly
in Paper 1.
The OB is composed of different layers: the glomerular layer where axons from ORNs connect to dendrites from OB cells, the mitral cell body layer
which is surrounded by the external and internal plexiform layer, and the granule cell layer (see e.g. Shepherd (1972); Mori et al. (1999); Mombaerts (2006);
Wilson and Mainen (2006); Linster and Cleland (2009); Zou et al. (2009);
Sakano (2010); Adam and Mizrahi (2010); Murthy (2011); Nagayama et al.
(2014) for reviews on the olfactory bulb addressing various aspects). There are
at least two main classes of projection neurons in the olfactory bulb. The mitral cells (situated in the mitral cell body layer) and the tufted cells (situated
in the external plexiform layer) are the major output neurons projecting to
olfactory cortical areas (Pinching and Powell, 1971). Both cell types receive
excitation from ORNs and project to higher olfactory areas, but may play
different roles in information processing (Fukunaga et al., 2012). The combination of activated glomeruli forms segregated maps of activity, which provide information about the olfactory stimulus to other brain areas (Murthy, 2011).
A prominent structural feature of the OB is the presence of numerous glomeruli
(spherical structures composed of axons from ORNs, dendrites from mitral
/ tufted cells and other cells, and glia cells (Kosaka et al., 1998)). Each
glomerulus receives sensory information from one family of ORNs (Vassar
et al., 1994; Ressler et al., 1994; Mombaerts et al., 1996) and the number of
Figure 2.1: Schematic of the early stages of the mammalian olfactory system
as modeled in Kaplan and Lansner (2014). Odors bind to receptors in the cilia
of ORNs in the epithelium and lead to input currents based on the affinity
between odorant and OR and on the ORN sensitivity. ORNs from one family
(indicated by colors blue, yellow) project to specific glomeruli where they
make contact with inhibitory periglomerular (purple) and excitatory mitral
/ tufted cells (gray). Recurrent inhibition provided by granule cells (dark
green) modulate signal transmission from mitral / tufted cells to the olfactory
cortex. Based on connectivity patterns from the bulb to cortex and within
cortex, odor object recognition takes places and odors can be identified and
distinguished from each other. ONL: olfactory nerve layer; Glom: glomerular
layer; EPL: external plexiform layer; MBL: mitral cell body layer; GL: granule
cell layer. Colors represent odorants, ORN family, cell type or odor identity
respectively. Figure and caption taken from Kaplan and Lansner (2014).
glomeruli varies between 1100 and 2400 depending on the species (Kosaka
et al., 1998). ORNs provide intraglomerular inhibition via juxtaglomerular (or periglomerular) cells, and inter- and intraglomerular inhibition arises through a network of mitral-granule cell interactions (Su et al., 2009). Other prominent features of the OB are the strong, mostly inhibitory, interaction between dendrites (Shepherd, 1972; Isaacson, 2001; Wilson and Mainen, 2006), the columnar organization of OB circuits (Willhite et al., 2006), and the restructuring of circuits through neurogenesis (Lazarini and Lledo, 2011; Lepousez et al., 2014); however, their precise role in odor information processing is less clear. The role of other prominent cell types like the granule cell is also the subject of ongoing debate (Shepherd et al., 2007; Koulakov and Rinberg, 2011). Recent studies on how temporal coding shapes processing in the OB and from the OB to higher processing areas can be found in Gire et al. (2013a); Uchida et al. (2014).
2.2.1.3 The olfactory cortex or piriform cortex (PC)
The olfactory cortex is a three-layered paleocortex (in contrast to the six-layered neocortex) receiving sensory information from the OB (Haberly and Price, 1977) and is crucial for odor recognition, odor memory formation (Gottfried, 2010; Wilson and Sullivan, 2011a) and decision making (Gire et al., 2013b). Neurons in the PC receive convergent synaptic input from different glomeruli in the OB (Apicella et al., 2010; Miyamichi et al., 2011), that is, they integrate information originating from different receptors. Connections from mitral/tufted cells belonging to one glomerulus to cortical target areas are divergent, with largely non-overlapping projection patterns (Ghosh et al., 2011). This divergence hints at a reorganization of sensory information from lower to higher areas.
In recent years, due to technological advances in experimental methods, the question of how odor information is represented in the PC could be addressed more systematically. It has been found that spiking activity in response to odors in the PC is sparse and distributed, with unspecific recurrent inhibition and specific excitation (Poo and Isaacson, 2009; Stettler and Axel, 2009; Isaacson, 2010). As there seems to be no clear spatial distribution of odorant-responsive neurons in the PC, the spatial maps of activity visible in the glomerular layer of the OB are discarded in the PC. Furthermore, it has been observed that only few neurons can account for behavioral speed and accuracy in an odor mixture discrimination task, and that odor identity can be better decoded from spike counts during bursts than from temporal patterns (Miura et al., 2012), which is a relevant finding in the context of neural coding (see section 4.2).
It is important to note that, despite the large number of studies addressing the question of how odor information is represented and stored across the different levels of the olfactory system, very little is known at the functional, mechanistic level, which makes computational models particularly valuable.
2.2.1.4 Other olfactory areas
Olfaction is regarded as the phylogenetically oldest sense, as detecting chemicals in the environment and deciding whether they are harmful or beneficial is crucial for any organism's survival. Hence, it appears plausible that signals from the olfactory system are connected with areas involved in the control of emotions and arousal. This is supported by the fact that the amygdala, as part of the limbic system and strongly involved in decision making, memory and emotional reactions, receives input from the OB and the PC (Amunts et al., 2005; Carlson, 2012; Miyamichi et al., 2011). Besides the amygdala, mitral and tufted cells project to the olfactory tubercle (OT), the anterior olfactory nucleus (AON) and entorhinal cortical areas (Haberly and Price, 1977; Nagayama et al., 2010; Ghosh et al., 2011; Miyamichi et al., 2011; Sosulski et al., 2011). The entorhinal cortex is regarded as the interface between hippocampus and neocortex and plays an important role in memory and navigation (Fyhn et al., 2004; Hafting et al., 2005) (research concerning the neural mechanisms of spatial navigation was awarded the 2014 Nobel Prize in Physiology or Medicine (Moser and Moser, 2015; Burgess, 2014)). The AON is, compared to the OB and PC, a less extensively studied structure, but it is strongly connected with both (Broadwell, 1975) and hence also plays an important role in olfactory sensory processing (see Brunjes et al. (2005) for a review). The OT is involved in reward processing (Ikemoto, 2003, 2007) and multi-sensory integration (Wesson and Wilson, 2010).
2.2.2 Differences and commonalities to visual perception
There are a couple of fundamental differences between the olfactory and the
visual world. The first fundamental difference lies in the type of interaction between sensory neurons and the physical world. Olfaction senses volatile molecules, whereas visual sensors detect electromagnetic waves of different wavelengths and integrate spatio-temporal patterns of luminance. The purpose of the olfactory system is mainly to distinguish between harmful stimuli (e.g. rotten food, predators) and
beneficial stimuli (e.g. edible food, kin or partners). At least for humans,
this happens in a spatio-temporally static manner, i.e. the patterns of activity representing a certain odor object (e.g. a banana) in the high-dimensional
odor space can be regarded as being static in time and space, because we only
rarely use our olfactory sense to determine the exact location of an odorous
object or whether it is approaching (rodents may well do this, however). This
perspective of viewing odor stimuli as static patterns is of course not strictly true, as odors can be used to localize food by following varying odor concentrations. The purpose of the visual system, in contrast, is to integrate and analyze the spatial and temporal structure of the stimuli and to construct different levels of analysis and detail from the physical stimuli. This analysis depends on both the temporal and spatial structure of the perceived stimuli
as the information is crucial for our behavior, e.g. estimating the speed of
an approaching vehicle is based on both spatial and temporal information,
whereas in olfaction the perception of certain patterns (e.g. a predator) does
not require extensive temporal analysis to trigger certain behaviors. Hence, in the context of this thesis, against the background of pattern recognition and prediction, olfactory stimuli can be regarded as static patterns, whereas the visual stimuli are modeled as dynamic patterns of activation.
Chapter 3
Methods
3.1 Modeling
“Remember that all models are wrong; the practical question is how wrong do
they have to be to not be useful.” — (Box and Draper, 1987)
Modeling is the process of creating a (simplified) image of the world including a (limited) selection of factors, objects, features and processes from the
real world. A model can reflect the world only to a certain extent that is
determined by the assumptions made in the model. Hence, a model has some
similarities to a caricature or a cartoon, as a model is supposed to include
the most prominent attributes, but emphasizes certain features possibly at
the expense of others. The decision to include some factors and leave out
others automatically introduces a bias and can affect the quality of a model.
The quality of a model depends on various things: a model using too few factors might ignore important information, oversimplify the problem and fail to capture all relevant aspects of the real world. A model that includes many parameters may reproduce a behavior or certain phenomena very accurately but may not perform well in other regimes, that is, it does not generalize, which is also called overfitting (Hawkins, 2004). For more about
overfitting and model selection see Burnham and Anderson (2002).
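The trade-off can be illustrated with a toy example (arbitrary data and NumPy polynomial fitting, unrelated to the models in this thesis): a high-degree polynomial fits noisy training samples almost perfectly but tends to generalize worse than a simpler model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of a smooth function (all numbers are arbitrary choices).
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, x_train.size)
x_test = np.linspace(0.0, 1.0, 200)
y_test = np.sin(2 * np.pi * x_test)

results = {}
for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    results[degree] = (train_err, test_err)

# The degree-9 polynomial interpolates all 10 training points (near-zero
# training error); its test error reveals how well it generalizes.
```

Comparing the training and test errors of the two fits makes the notion of generalization quantitative.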
There are many reasons to use models for studying and understanding a system. First of all, a model is intended to show the major behavioral characteristics of a real-world object or process and to identify the parameters and factors that determine the outcome the most. A model helps to define
the inner workings of a system, for example the interaction between processes
or subcomponents that appear complex “from the outside”. This makes it possible to study cases in which a system fails and to understand why.
Often a system can be better understood with a definition or mathematical
formulation of the process to be modeled. A mathematical formulation of the
model is often essential for developing theories that describe the phenomenon
to be modeled in general or abstract terms (see section below). However, models often go beyond the mere description of the components of an object or process and ideally try to predict behaviors or circumstances (in the form of parameters) that are difficult (or costly) to implement in the real system. In order to verify or falsify a theory, predictions about the phenomenon need to be made and tested in experiments. This hypothesis testing is a key element of
scientific progress: based on the design of a model and the fit to collected
data one can assess whether the assumptions put into the model are correct
or not. Furthermore, and maybe most importantly, the modeling ideas can
guide the design of experiments so that the number of costly experiments can
be reduced, particularly in the context of neuroscience. Hence, modeling can
help to extend the knowledge about complex interactions and to understand
the circumstances and mechanisms that constitute the system, often without
building and executing each setting.
3.1.1 Computational modeling
Computational modeling denotes a special form of modeling formulated in
the language of mathematics, often using equations that represent real-world
quantities in more or less abstract form (depending on the problem to be
modeled). This mathematical formulation is the essential feature of the model,
as it has several implications. First, it makes the model explicit and forces
the modeler to decide which factors are to be integrated in the model and
which ones can be ignored. That is, a weighting of factors is performed when
formulating the equations. Second, this explicitness offers the possibility to
use numerical tools like computer simulations to study the equations. Hence,
computational models enable the modeler to choose the level of detail according to the question to be addressed (see section 3.1.3 for comments on the
problem of detail). In particular, computational modeling is required when the behavior of a system cannot easily be determined by qualitative or analytic models, i.e. when the system is so complex that an analytical solution is not possible (as is the case for networks of neurons obeying discontinuous nonlinear differential equations). Then, a stepwise approach to understanding the
behavior of the system is desired, which is realized using computer simulations that approximate the analytical behavior of the elements to be modeled.
Furthermore, the explicitness of computational models makes it possible to model reality in a quantitatively precise manner, which is a big advantage, as it offers the possibility to assess the quality of a model through measurements comparing model predictions and reality. Likewise, computational models
that use quantities representing real world measurements (e.g. firing rate of
neurons) can be easily attacked if this quantity does not match the reality.
However, with respect to the focus of this thesis on functional models, it is to be kept in mind that not all models should be judged by every quantitative measurement. That is, if a model does not aim to explain certain behaviors quantitatively, assessing the quality of the model requires a different approach. The computational models presented in this thesis were developed with two main ideas in mind. First, the
models are intended to model a certain task or function using biophysically
plausible mechanisms, but not focusing on reproduction of specific dynamics
and not with the goal to achieve a precise quantitative agreement with experimental data. Second, the models have been developed as part of international
projects (FACETS-ITN, 2009; BrainScaleS, 2011) and are intended to be run
on specialized neuromorphic hardware which biased the choice of the basic
neuron model type towards spiking neurons (Schemmel et al., 2010; Furber
et al., 2013).
This thesis, however, does not exploit all the advantages that computational modeling has to offer. For example, since the focus was on the functionality of the models, rather little attention was paid to a quantitative match of parameters, and hence no explicit quantitative predictions were formulated.
The modeling approach taken in this thesis was inspired by rather high-level,
abstract ideas and the goal of this work was to show that a more biophysically plausible implementation is possible and could be in agreement with the
biological substrate (see also section 3.1.3).
3.1.2 Computational neuroscience
The purpose of this section is to very briefly introduce the basic idea of spiking neurons, which are used in this thesis. The study of nerves as electrically excitable tissue gained attention through the famous studies by Galvani (Piccolino, 1998). The first attempt to model the nervous system by means of mathematics is believed to be the study by Lapicque (1907) (see Brunel and Van Rossum (2007) for a recent review), followed more systematically by Hill (1936).
These early studies are worth mentioning as they laid the foundation for others (Gerstein and Mandelbrot, 1964; Stein, 1965) and contributed to the formulation of a neuron model which is still very popular today and is also used in parts of this thesis (with some modifications): the leaky (or ‘forgetful’) integrate-and-fire (LIF) neuron, first mentioned by Knight (1972). The
‘forgetful’ behavior is due to ion flux through the neuron’s membrane which
drives the voltage Vm (the electric potential between interior and exterior of
the cell) towards an equilibrium (the resting potential Vrest ). The response of
the membrane voltage to an input current I(t) is then described by:
Cm dVm(t)/dt = −(Vm(t) − Vrest) + I(t)    (3.1)
where Cm is the membrane capacitance. The neuron “fires”, i.e. sends an action potential to all its targets, when the membrane voltage reaches a certain threshold (Vthresh), thereby inducing an input current onto the target neuron through the connecting synapse. When a neuron fires a spike,
the membrane potential is kept for some time at the so-called reset potential
during which the neuron does not react to any input. After this refractory
period, the neuron can continue to integrate input (see figure 3.1).
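Equation (3.1) and the threshold-reset rule can be sketched in a few lines. This is an illustrative forward-Euler simulation with made-up parameter values, not the implementation used in this thesis:

```python
import numpy as np

def simulate_lif(I, dt=0.1, C_m=1.0, V_rest=-70.0, V_thresh=-50.0,
                 V_reset=-80.0, t_refrac=2.0):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron.

    Implements Cm dVm/dt = -(Vm - Vrest) + I(t) (eq. 3.1) with a
    threshold-reset rule; all parameter values are illustrative.
    I: input current per time step (array); dt in ms, voltages in mV.
    Returns the voltage trace and the spike times.
    """
    V = V_rest
    refrac_left = 0.0
    trace, spikes = [], []
    for step, I_t in enumerate(I):
        if refrac_left > 0:           # refractory: insensitive to input
            refrac_left -= dt
            V = V_reset
        else:
            V += dt / C_m * (-(V - V_rest) + I_t)  # Euler step of eq. 3.1
            if V >= V_thresh:         # threshold crossing -> spike
                spikes.append(step * dt)
                V = V_reset
                refrac_left = t_refrac
        trace.append(V)
    return np.array(trace), spikes

# A constant suprathreshold input makes the neuron fire regularly.
trace, spikes = simulate_lif(I=np.full(1000, 25.0))
```

With zero input the membrane stays at Vrest and no spikes occur, reproducing the qualitative behavior shown in figure 3.1.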
This is the most basic way to describe the basic properties of nervous cells: integration, forgetting (leakage) and excitability (firing). The spike generation mechanism, the currents, the synapses and the way the voltage reacts can be modeled in different ways (e.g. by introducing additional variables and interdependencies) depending on the model, for example as a quadratic model (Izhikevich et al.,
2003), an exponential model (Brette and Gerstner, 2005; Gerstner and Brette,
2009), or the famous Hodgkin-Huxley formalism (Abbott and Kepler, 1990;
Burkitt, 2006). The usage of more than one state variable can lead to much
richer (and realistic) dynamics like spike-frequency adaptation and different
bursting modes (see figure 1 in (Izhikevich, 2004) for an overview of different
response properties). For the sake of brevity these different models will not be explained here; they are covered in the standard literature (see e.g. (Dayan and Abbott, 2001; Gerstner and Kistler, 2002; Izhikevich, 2007a) and for a review
on simulation strategies (Brette et al., 2007)).
This notion reflects the focus on neurons as important building blocks for cognitive functions; other perspectives focusing on other aspects of neuronal behavior, however, require different approaches and formalisms (see section
3.1.3). Due to its relative simplicity (compared to models that focus on the detailed spatial neuronal structure), the LIF model is computationally feasible and allows one to study the interplay between numerous neurons connected in networks.

Figure 3.1: Example of a noise-free membrane potential of a leaky integrate-and-fire neuron model with conductance-based synapses. The neuron integrates incoming spikes through an excitatory synapse with constant weight until the membrane reaches Vthresh, when a spike is induced (the fast depolarization of spike generation is not shown, as it is abstracted away in this neuron model; the shape of the action potential is not assumed to carry information in spike transmission). After the spike the membrane potential remains (hyperpolarized) at Vreset for a refractory period during which the neuron is insensitive to input.

This approach is to be distinguished from the
“classical” artificial neural network (ANN) approach, since spiking neurons exhibit discontinuous behavior (spiking), which makes them biophysically significantly more plausible and introduces richer dynamics
in the network behavior. This is also true in comparison with other more abstract approaches, e.g. the neural field approach which assumes non-discrete
neuronal elements and focuses on the population level (Amari, 1983). Other
notable approaches in computational neuroscience will briefly be mentioned
in the next section 3.1.3 when addressing the question of possible levels of
description (see Sejnowski et al. (1988) for an early introductory paper and
Piccinini and Shagrir (2014) for a more general introduction to computational
neuroscience).
Another framework, used in parts of Paper 3 (Kaplan et al., 2014), works on a more abstract level and is called particle filtering (also known
as sequential Monte Carlo sampling). This approach abstracts away neural
dynamics and uses a pool of particles to model the spread of information in
a neural population over time. These particles are to be interpreted from a
probabilistic point of view - as neural activity often is (see section 4.2.5 for
an introduction of the probabilistic interpretation of neural activity). Each
particle represents a hypothesis about a stimulus, comparable to the view that
the activity of a neuron which has a preferred stimulus represents the presence
of that stimulus (with some certainty). The presence of a particle in the pool makes its hypothesis (e.g. a specific position and speed of a stimulus, or rather the internal representation of that hypothesis) more likely to be true. As the system perceives the (moving) stimulus, the hypotheses
the particle pool represents evolve over time. The temporal evolution of this
particle pool is affected by adding new particles (e.g. with a fixed rate) and
removing particles if their hypothesis is not verified by the measurement. By
studying the development of this particle pool the spread of information can
be modeled. See also section 4.2.5 for more comments on particle filtering
and Djuric et al. (2003) for a review or Perrinet and Masson (2012); Khoei
et al. (2013) for detailed descriptions of an implementation which guided ideas
implemented in the model presented in Kaplan et al. (2013, 2014).
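The add/remove dynamics described above can be sketched as follows. This is a simplified illustration of the birth-death particle pool idea, not the implementation of Kaplan et al. (2014); the parameters (birth rate, tolerance, noise) are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_pool(pool, measurement, dt=1.0, birth_rate=20, noise=0.5, tol=3.0):
    """One update of a particle pool tracking a 1-D moving stimulus.

    Each particle is a hypothesis [position, speed]: particles are
    predicted forward, new random hypotheses are added at a fixed rate,
    and particles whose predicted position is not verified by the
    measurement are removed.
    """
    # Predict: move each hypothesis according to its speed (plus noise).
    pool[:, 0] += pool[:, 1] * dt + rng.normal(0.0, noise, len(pool))
    # Birth: inject new hypotheses around the current measurement.
    births = np.column_stack([
        rng.normal(measurement, 2.0, birth_rate),   # position guesses
        rng.normal(0.0, 2.0, birth_rate),           # speed guesses
    ])
    pool = np.vstack([pool, births])
    # Death: drop particles too far from the measured position.
    return pool[np.abs(pool[:, 0] - measurement) < tol]

# Track a stimulus moving at speed 1; over time the surviving
# particles' positions follow the stimulus.
pool = np.column_stack([rng.uniform(-5, 5, 200), rng.uniform(-2, 2, 200)])
for t in range(1, 30):
    pool = step_pool(pool, measurement=float(t))
```

After a few steps the pool concentrates around hypotheses whose position and speed are consistent with the moving stimulus, which is the sense in which the pool models the spread of information over time.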
3.1.3 Levels of description and understanding
This section briefly addresses the questions of which levels of abstraction exist in computational neuroscience, which level of description is adequate for a given problem, and which level has been chosen in this thesis.
The complexity of the brain arises from the huge number of different interacting components: single neurons, synapses, astrocytes, networks of neurons, brain areas, hormones, energy and oxygen levels, or other molecular and homeostatic processes, just to name a few. This gives rise to a multitude of possible
levels of interaction among the different components covering multiple temporal and spatial scales ranging from milliseconds to years and from nanometers
to decimeters. Every component could in principle be regarded as a complex
system in itself with different interaction mechanisms between its elements.
Hence, the question is to decide which scale a computational model should
cover and which formulation is adequate for the desired level of detail. However, in neuroscience there is no definite answer to this since even for a specific
problem the real system comprises multiple temporal and spatial scales and
there seems to be no generic way (yet) to model this. Hence, the open questions remain of how many levels are to be considered and how to integrate these multiple levels in a meaningful and computationally feasible way. These problems do not only exist in modeling, but already in
the design and analysis of experiments: Craver (2002) presents a system of
experimental strategies to address multilevel mechanisms in neuroscience of
memory. Aarts et al. (2014) address the problem of multilevel analysis in neuroscience. Boccaletti et al. (2014) give a review on the use of network theory in modeling the structure and dynamics of multilayer networks, and Bechtel (1994) describes the problem of description levels for cognitive science, just to name a few references that deal with this wide topic.
One simple possibility to distinguish modeling approaches is the classification into bottom-up and top-down approaches. Bottom-up approaches focus on a high level of detail and act on the assumption that higher-level functions emerge from the interactions of the system’s components (a prominent example is the Blue Brain Project (Markram, 2006)). However, due to the arising complexity and the problems of finding an adequate scale mentioned above, this approach is regarded critically when it comes to explaining higher cognitive functions. The contrary top-down approach starts at a high level of description and tries to maintain certain functions (or model features) while gradually increasing the level of detail. This thesis follows the top-down approach and realizes ideas originating from rather abstract models (described later) with spiking neurons and an increased level of biophysical detail.
Marr introduced the following three levels of analysis or description at which an information processing system like the brain should be analyzed (Marr, 1982):
Computational level: problem specification. Describes the theory, its goal and the motivation for the solution strategy.
Representation and algorithm: describes how the computational theory can be implemented; focuses on the representations and input/output transformations.
Physical implementation: describes the physical (or neurophysiological) realization of the algorithm and the representations used.
Using this notion, this thesis deals with all three levels, but its contributions are focused mostly on the representational level and only partly on the computational and implementation levels.
3.1.3.1 Modeling approaches in neuroscience
With respect to the question of how connectivity and function are linked together, there exist different possible levels of description, of which some prominent approaches will be named here. In the previous section 3.1.1 the notion of the leaky integrate-and-fire (LIF) neuron model was briefly introduced. However, this notion focuses on the single-neuron perspective and the
question how the membrane voltage behaves in reaction to input currents.
Instead of looking at the neuron as a single point, a more detailed perspective
is to emphasize the spatial structure of a neuron and the processes involved
for example in the dendritic tree. This is sometimes referred to as “dendritic
computation” (London and Häusser, 2005), and (some simple) effects resulting from dendritic structures can to some extent be modeled by specific filter
properties in the synapse (see Abbott and Regehr (2004) for “synaptic computation”). Modeling the spatial structure of a neuron and its dendrites is done
by splitting the neuron into several compartments that have the same electric
potential (Hines and Carnevale, 1997; Bower and Beeman, 1995) and applying cable theory to describe the conductances and electric potentials in the
respective compartments. Neurons modeled using this approach are called
multi-compartmental models and often use the Hodgkin-Huxley formalism
(Hodgkin and Huxley, 1952), which describes the input currents and conductances by sets of differential equations that are computed numerically. Each of these compartments can have active and passive ion channels, and the spatial structure can be arbitrarily complex, as in real neurons (see Koch (2004) on
the complexity of single neuron processing). The Hodgkin-Huxley approach is
applied in Paper 1 (Kaplan and Lansner, 2014) by using neuron models with
one, three and four compartments in order to show that functional models
(originating from rather abstract ideas and relying on specific types of information coding) can be implemented with rather detailed models making use
of well known biophysical processes like dendritic interaction among neurons
in the olfactory bulb (Rall et al., 1966).
Depending on the research question, approaches focusing on finer scales are
more suitable, for example the role of molecular mechanisms (Kotaleski and
Blackwell, 2010) and biochemical networks (Bower and Beeman, 1995; Bhalla,
2014). See Byrne et al. (2014) for an introduction to cellular and molecular
neuroscience and Feng (2003); Martone et al. (2004) on challenges regarding
analysis and modeling at this more detailed scale. For a more comprehensive
overview of the different levels of description in computational neuroscience,
please refer to the book by Trappenberg (2009). For single neurons, Herz et al. (2006) offer a concise review discussing the balance between detail and abstraction.
Regarding the description of brain connectivity and dynamics one can distinguish three scales: the microscopic scale comprises few neurons (10^0 to 10^2), the mesoscopic scale (10^2 to 10^5 neurons) deals with small and medium-sized networks on the circuit level, and the macroscopic scale comprises 10^5 to 10^12 neurons.
The relevance of these scales in the context of structure-function relationships
is discussed in section 4.4 and in the context of neural representations and
coding in section 4.2.
Similarly to the approach of modeling single-neuron behavior, the principal idea of modeling the behavior of neuronal populations is to use differential equations to describe the activities at the population level. These types of models aim to represent the spatiotemporal dynamics of larger populations of neurons (comprising 10^4 up to 10^8 neurons) and to build hierarchies
of populations for modeling certain tasks. Prominent examples (which will
only be named here with examples of their application, but not explained in
detail) are neural fields (Wilson and Cowan, 1972; Amari, 1972; Wilson and
Cowan, 1973), Freeman K-sets (Freeman, 2000; Freeman and Erwin, 2008)
and adaptive resonance theory (Carpenter and Grossberg, 2010; Grossberg,
2013).
Neural fields look at the neural tissue as a continuous medium and try to
explain the dynamics of coarse grained variables like population firing rates
(Coombes, 2006). The neural or dynamic field formalism can be used to
model multiple populations and can be extended to include more physiological
detail like stereotypical connectivity, axonal delays, refractoriness, adaptation
and neuromodulation. This neural or dynamic field approach has been used
to study pattern formation (Amari, 1977; Ermentrout, 1998) and often uses
attractor dynamics to model a multitude of phenomena, including movement
planning and motor control (Erlhagen and Schöner, 2002), embodiment and
higher cognition (Sandamirskaya et al., 2013), which makes it an attractive
framework for autonomous robotics (Schöner et al., 1995). See Pinotsis et al.
(2014) for an overview of recent advances using neural masses and neural fields
in modeling brain activity.
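The neural field idea can be sketched with a discretized one-dimensional field in the spirit of Amari (1977). All parameter values here are toy choices for illustration and not taken from the cited literature:

```python
import numpy as np

def simulate_field(n=100, steps=400, dt=0.2):
    """1-D neural field with a 'Mexican hat' kernel (narrow excitation,
    wide inhibition) and Heaviside firing, in the spirit of Amari (1977).

    A transient localized input leaves behind a self-sustained bump of
    activity, the attractor behavior mentioned in the text.
    Returns the field potential u after the last step.
    """
    x = np.arange(n)
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, n - d)                     # circular distances
    w = 1.5 * np.exp(-d**2 / 8.0) - 0.5 * np.exp(-d**2 / 128.0)
    u = np.zeros(n)                              # field potential
    for step in range(steps):
        # Localized input around the field center, removed after a while.
        I = 3.0 * np.exp(-(x - n // 2)**2 / 8.0) if step < 100 else 0.0
        # Euler step of du/dt = -u + w * f(u) + I with Heaviside f.
        u += dt * (-u + w @ (u > 0).astype(float) + I)
    return u

u = simulate_field()
```

After the input is switched off, a bump of activity remains centered on the former input location, surrounded by a region of suppressed (negative) field potential.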
Instead of looking at the dynamics of a full population (possibly representing a brain region), Freeman K-sets focus on cell assemblies of ca. 10^4 neurons with ca. 10^8 synapses and the different forms of interactions between
and within excitatory and inhibitory populations forming complex hierarchies
of populations (Freeman and Erwin, 2008). These hierarchies can exhibit
chaotic dynamics and have been used to study learning and pattern recognition in the olfactory system (Yao and Freeman, 1990; Kozma and Freeman,
2001; Li et al., 2005; Tavares et al., 2007), amongst other things (see Freeman
and Erwin (2008) for further information).
Adaptive resonance theory likewise abstracts the behavior of neural populations into a few differential equations, and hierarchies of different complexity have
been used to model a multitude of phenomena (see Carpenter and Grossberg
(2010); Grossberg (2013) for details).
Hierarchies built with the three above named approaches are often based
on a similar idea to the memory-prediction framework (Hawkins and Blakeslee,
2007) in the sense that bottom-up perception and top-down expectation processes are regarded as crucial to model high level behavior. Other classical
approaches include Hopfield networks (Hopfield, 1982, 2007), known for their influence on associative memory modeling, and mean-field approaches (Amit et al., 1985; Treves, 1993), which aim at the analytical treatment of neural dynamics on the population level.
These examples represent methodological frameworks contrasting with the spiking neural network approach, as they are formulated on a more abstract level and operate at coarser spatial and temporal resolutions. However, to achieve
a full understanding of brain functioning and network hierarchies, modeling
the relationship between structure and function with higher resolution is desirable (Markov and Kennedy, 2013). On the level of spiking neurons only
few functional models exist. One notable example is the model by Schrader
et al. (2009) which explores this interplay between feed-forward (bottom-up)
and feed-back (top-down) processing using spiking neurons in a simple visual
pattern recognition task.
As spike-based models can become arbitrarily complex (in terms of spatial structure, internal variables and interdependencies), a comparison between rate-based and spike-based models is not trivial; it has been done for very simple spiking neurons, as described in (Wilson et al., 2012), with
the conclusion that rate-based models can capture major characteristics of
simple spike-based population models. In contrast, other conductance-based
neural field models report qualitative differences in the frequency spectrum
between spike-based and rate-based models (Pinotsis et al., 2013). Thus, the question of how to combine spike-based neuron models with population models and to bridge different scales in favor of computationally less costly models with equivalent behavior remains to be researched (see also (Rieke, 1999; Gerstner
and Kistler, 2002)).
A prominent example of a large-scale model using millions of neurons is that of Izhikevich and Edelman (2008), which focuses on dynamical aspects of brain activity rather than on functional aspects. More recently, Eliasmith et al. (2012)
presented a framework aiming to bridge the gap between large-scale dynamics
and function based on an abstract learning framework (in the biophysical
sense). Another prominent framework (not related to spiking neurons) aiming
to explain brain function on a global scale is the free-energy principle (Friston,
2009, 2010) also employing the predictive coding hypothesis (Rao and Ballard,
1999; Friston and Kiebel, 2009).
Summary: The level of description depends on the problem and phenomenon to be modeled and is always a trade-off between detail and abstraction, realism, feasibility and computational costs. The level of description chosen for the studies in this thesis was determined by the following principles (and pragmatic reasons). The fundamental idea underlying the models presented here is the connectionist approach, which assumes that cognitive functions can be explained
by neuronal elements and their interactions via synaptic connections (see also
section 4.1). The connectionist approach requires a representation of units
and connections which makes a spiking neural network approach appropriate.
Spiking neurons have a greater level of detail than classical connectionist models (which are basically artificial neural networks with less realistic activation
functions). A finer scale or resolution would not add relevant information (e.g.
the detailed dendritic structure) within the computational/theoretical framework employed in this thesis. Hence, the chosen approach is an advancement
of functional connectionist models towards more biological realism and a good compromise in the trade-off between detail of description (and the realism that comes with it) and computational costs.
Similar to other approaches in computational neuroscience (Piccinini and
Shagrir, 2014), the micro- and mesoscopic scales (with a focus on the representational level) were chosen in this thesis, attempting to model how components at this level are to be organized in a larger (network) system in order to give
rise to properties and functions exhibited by that larger system.
Another important reason why spiking neural networks were chosen is the possibility to run the developed models on neuromorphic hardware
systems based on spiking neurons which are being developed within affiliated
projects (BrainScaleS, 2011; Schemmel et al., 2010; Furber et al., 2013).
Furthermore, the development of neuron models (or other computational
units) from scratch (i.e. based on experimental data or simply on the computational requirements) is a time-consuming and intricate process, hence existing
spiking neuron models were used and priority given to the integration of these
into larger functional systems.
Summing up, the modeling scale and level of description depends on the
problem addressed. As this thesis deals with fundamental functions (like
pattern recognition for perception and motion prediction), but not highest
level cognitive functions (like attention, planning, memory, language), the
level of description that has been chosen was the intermediate scale (both in
the spatial and temporal dimensions).
3.2 Neuromorphic systems
“What I cannot create, I do not understand” — Richard Feynman
The purpose of this section is to clarify the term neuromorphic system
and the motivation behind neuromorphic engineering principles which are
important for this thesis, since many principles that guided the modeling
work of this thesis can be related to neuromorphic engineering ideas.
3.2.1 What is neuromorphic engineering?
The term neuromorphic was coined in the 1980s based on the work of Carver Mead and originally stands for electronic circuits that mimic the behavior of neural elements (Mead, 1990). Neuromorphic engineering also has the goal of building systems that imitate parts or certain aspects of the nervous system in order to solve real-world problems. When the focus is more on the
computational and information processing aspects of the nervous system as
inspiration for solutions and less on building physical systems, the terms neural computation or neurocomputing are used (for the sake of brevity I will in
the following only use the term neuromorphic engineering for aspects related
to both hardware and neural information processing).
The motivation behind this approach can be seen in the following two
directions. One direction aims at developing models that represent neural
processes in order to understand the way the brain works, similar to reverse
engineering. Another direction is to build or engineer systems that, inspired by
neural structures, provide novel approaches to real-world problems for which
“conventional” (i.e. deterministic, pre-programmed) computers have not (yet)
yielded good solutions despite the large investment in terms of energy and
computational resources. Since our brains solve many difficult tasks with
few resources (particularly in terms of energy and space), the idea of using the
brain as an alternative model for computer design is being pursued (Watson,
1997; Markram, 2012).
Therefore, neuromorphic engineering can also be seen as an approach to
transforming the typical characteristics of computer systems from inflexible
and frail, but deterministic, precise and fast, towards more “brain-like”,
i.e. adaptive, flexible and fault tolerant, without becoming as error-prone
and slow as brains are, even if this comes at the price of indeterminism.
This includes the goal of incorporating principles of self-organization
and the ability to learn and adapt, which until recently has been lacking in
most biophysically detailed computational approaches. Besides the engineering
challenges, that is, the need to understand and reverse engineer neural systems
and possibly improve computer architectures, there is the need to provide an
energy-efficient alternative to simulators of neural systems running on conventional multi-purpose computing architectures (Meier, 2013). Due to the
emulation of neural processes with specialized circuits (neuromorphic hardware), it is possible to build small, low weight, energy efficient and fast devices
that can complement existing technologies in various contexts.
Indiveri and Horiuchi (2011) give a short introductory overview and history of the field, which dates back to the first attempts by McCulloch and
Pitts (1943) and Rosenblatt (1958) to design simplified neuron and population
models. More extensive reviews of the concepts of neuromorphic engineering
can be found in (Furber and Temple, 2007, 2008; Liu and Delbruck,
2010). Delbruck et al. (2014) summarize a snapshot of recent advances
in neuromorphic engineering. The progress in attempting to build artificial
neural networks in hardware is reviewed in Misra and Saha (2010). For more
technical details on implementation principles see the review by Indiveri et al.
(2011) explaining approaches developed by many leading researchers in the
field.
3.2.2
Examples of neuromorphic systems
Vision sensors or “silicon retinas” are successful examples of applying
neuromorphic engineering principles to real-world problems (Fukushima et al.,
1970; Andreou and Boahen, 1991; Delbruck, 1993; Mahowald, 1994b), e.g. visual
motion sensors (Andreou et al., 1991; Delbruck, 1993); see also the book
by Moini (2000) for further example applications. Analog circuits have also
been used to implement auditory sensors (Lyon and Mead, 1988; Lazzaro and
Mead, 1989; Watts et al., 1992; Liu and van Schaik, 2014) often called “silicon cochleas”. In the domain of olfaction, artificial gas sensors (“electronic
noses”) offer possibilities for industrial application in the context of quality
control and health and security (Gardner and Bartlett, 1999; Pearce et al.,
2006; Wilson and Baietto, 2009). Especially in the context of olfaction, approaches for pattern analysis (Gutierrez-Osuna, 2002) and pattern recognition
(Kaplan and Lansner, 2014) are highly relevant.
Systems that are not inspired by a certain sensory modality are intended
to serve as generic computing platforms which can be used to run large configurable networks of neurons. In this respect, two major types of neuromorphic design principles can be distinguished: accelerated and non-accelerated
platforms. Another distinguishing criterion is whether processes are implemented using digital electronics (like the SpiNNaker platform, see below) or
analog circuits or a mixture between the two (called mixed-signal (Schemmel
et al., 2008)).
Accelerated systems implement neural components that operate faster
than their biological counterparts. This means, for example, that the
refractory period of a neuron after a spike, which is typically in the range
of a few milliseconds, takes only a fraction 1/(speed-up factor) of that time
on the accelerated hardware. The
development of such systems has benefited particularly from advancements
in industrial VLSI (very large scale integration) technology and opens up the possibility of building large-scale
systems comprising millions of neurons. One prominent recent example is the
project led by IBM and sponsored by the Defense Advanced Research Projects
Agency (DARPA) (Merolla et al., 2014). It is important, however, to
distinguish two broad categories of accelerated hardware architectures. One
category aims to implement biophysically plausible neurons and synapses,
whereas the other favors large numbers of simpler stochastic elements like
the one presented by Merolla et al. (2014).
The other design principle focuses on a more detailed and faithful representation of biological mechanisms in analog circuits. This often employs
Hodgkin-Huxley-type neuron models (Renaud et al., 2007; Lewis et al.,
2009) and offers the possibility to interface real neural tissue with hardware
neurons as both operate at the same temporal scale (Yu and Cauwenberghs,
2010), hence providing the basis for bi-directional brain-machine interfaces
(Indiveri et al., 2011; Lebedev and Nicolelis, 2006) and offering the
possibility to improve neural prosthetics (Musallam et al., 2004).
In the context of this thesis, two platforms are particularly relevant as they
have (to some extent) influenced the direction of the modeling framework used
in this thesis. The SpiNNaker platform uses digital circuits to simulate large-scale spiking neural networks with the intention of running in near real time (Furber
and Temple, 2008; Furber et al., 2012; Painkras et al., 2012). The other
platform, also developed within the BrainScaleS and FACETS-ITN projects
(BrainScaleS, 2011; FACETS-ITN, 2009), is based on custom-designed analog
circuits that emulate neurons and synapses (Schemmel et al., 2008, 2010) much
faster than real time, as explained further in sections 3.2.3 and
3.2.4.
3.2.3
Simulation technology
Similar to other fields dealing with complex systems, a widely accepted
approach is to describe the behavior of the system's elements through sets of
coupled differential equations. However, in the case of neural elements,
studying the behavior of spiking neural networks with analytical methods is
difficult due to their non-linear and discontinuous behavior. Therefore, in
order to study biologically plausible neural systems (that is, spiking neural
networks in particular), numerical simulations have become a standard method
of choice. An alternative method, using dedicated hardware to emulate neural
behavior, is explained in section 3.2.4. However, emulation on
novel neuromorphic computing architectures (if truly inspired by the nervous
system) will suffer from indeterminism, which is not desirable for the development and understanding of models, but it offers advantages in terms of energy
and space consumption compared to the simulation of equivalent models on
supercomputers.
Due to the discrete nature of spikes, a neural system can be described by a
hybrid approach (as described in Brette et al. (2007)):

dX/dt = f(X)                                              (3.2a)
X ← g_i(X) upon arrival of a spike at synapse i           (3.2b)
where X is a vector of state variables describing the state of a neuron
(membrane potential, ion concentrations, synaptic or external input currents,
adaptation variables), f(X) describes how the state variables evolve (e.g. in
the LIF model, decay towards the resting potential) and g_i(X) describes how
the state variables change when receiving an incoming spike (e.g. an increase
in synaptic conductance, weight update). Furthermore, state variables (like
adaptation currents) or connection weights can be updated when the neuron
sends a spike. The evolution of state variables is then approximated with
Euler or Runge-Kutta methods (Butcher, 1987).
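As a concrete illustration, the hybrid scheme of equations (3.2a) and (3.2b) can be sketched for a leaky integrate-and-fire neuron with an exponential synapse and forward-Euler updates. All parameter values below are illustrative assumptions, not values used in the thesis models.

```python
# Minimal clock-driven sketch of the hybrid system (3.2) for a leaky
# integrate-and-fire neuron with an exponential synapse: between events the
# state evolves continuously (eq. 3.2a, here a forward-Euler step), and an
# arriving spike applies a discrete update g_i(X) (eq. 3.2b, here a jump in
# the synaptic variable). All parameter values are illustrative assumptions.

DT = 0.1                    # integration time step (ms)
TAU_M, TAU_S = 20.0, 5.0    # membrane and synaptic time constants (ms)
V_REST, JUMP = -70.0, 1.5   # resting potential (mV), synaptic jump per spike

def simulate(input_spikes, t_end=100.0):
    v, g = V_REST, 0.0
    trace = []
    for step in range(int(t_end / DT)):
        t = step * DT
        # eq. 3.2a: dX/dt = f(X), advanced by one Euler step
        v += DT * (-(v - V_REST) + g) / TAU_M
        g += DT * (-g / TAU_S)
        # eq. 3.2b: X <- g_i(X) when a spike arrives in this time bin
        if any(abs(t - ts) < DT / 2 for ts in input_spikes):
            g += JUMP
        trace.append(v)
    return trace

trace = simulate(input_spikes=[10.0, 20.0, 30.0])
```

Each input spike depolarizes the membrane, which then decays back towards the resting potential, the behavior that clock-driven simulators compute at every time step regardless of whether an event occurred.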
As the study of spiking neural networks makes high demands on the software
implementing the numerical calculations in terms of flexibility, extensibility
and performance, the (computational) neuroscience community has put
great effort into the development of dedicated simulator software to meet these
requirements. In the course of this thesis two common simulators have been
used: the NEURON simulator environment (Hines and Carnevale, 1997) and
NEST (NEural Simulation Tool) (Gewaltig and Diesmann, 2007).
In general, algorithms that compute the development of neural and synaptic
equations can be divided into two categories: clock-driven (synchronous)
and event-driven (asynchronous) algorithms (Brette et al., 2007). In
clock-driven algorithms, the neural states are updated periodically at a fixed
time interval. In event-driven algorithms, neurons are updated only when they
receive a spike, which requires that the states can be computed at any given time.
However, this is only possible when explicit solutions of the differential equations exist, which is not true for more complex models like Hodgkin-Huxley
or the adaptive exponential integrate-and-fire neuron (Brette and Gerstner,
2005). Thus, the clock-driven strategy has been chosen for simulations performed for this thesis.
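To illustrate why explicit solutions matter for the event-driven strategy: for the simple LIF membrane equation a closed-form solution exists, so the state can be advanced exactly from one incoming spike to the next, with no intermediate time steps. A minimal sketch with illustrative parameters:

```python
# Sketch of the event-driven alternative for a model whose membrane equation
# has a closed-form solution (here the simple LIF neuron): the state is
# advanced exactly from one spike arrival to the next. Parameter values are
# illustrative assumptions.

import math

TAU_M = 20.0    # membrane time constant (ms)
V_REST = -70.0  # resting potential (mV)
JUMP = 2.0      # depolarization per incoming spike (mV)

def run_event_driven(spike_times):
    """Return the membrane potential just after the last input spike."""
    v, t_last = V_REST, 0.0
    for t in spike_times:
        # exact solution of dv/dt = -(v - V_REST)/tau between events
        v = V_REST + (v - V_REST) * math.exp(-(t - t_last) / TAU_M)
        v += JUMP   # discrete update on spike arrival
        t_last = t
    return v

# Closely spaced inputs summate; widely spaced ones have largely decayed.
close = run_event_driven([10.0, 11.0])
far = run_event_driven([10.0, 200.0])
```

For Hodgkin-Huxley or adaptive exponential models no such closed-form step exists, which is why the clock-driven strategy was used in this thesis.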
Due to the complexity of neuron, synapse and plasticity models, the need
to simulate large numbers of neurons and the interest in the development
of weights during learning, most common simulators offer the possibility to
distribute the problem across several processors and simulate networks in parallel. Most simulators are intended to be run on conventional general-purpose
(multi-core) CPUs, but some simulators offer the possibility to be executed
on GPUs (Nageswaran et al., 2009; Fidjeland and Shanahan, 2010; Richert
et al., 2011).
For the simulation of models other than spiking units, the research community
has developed further dedicated simulation tools, e.g. CEDAR for the
simulation of neural fields (Lomp et al., 2013), or ANNarchy for mean-firing-rate
(in combination with spiking) neural networks (Dinkelbach et al., 2012).
These tools are particularly useful for building hierarchies of networks, as
the basic neuron models are simpler than spiking neurons; however, the cost of
continuous communication is higher than in spiking networks. Lansner and
Diesmann (2012) give a more extensive review of the methodology and challenges
related to spiking neural networks and their large-scale implementations.
As an alternative to the numerical simulation of neural behavior, specialized
hardware has been developed to emulate it, as will be
explained in the following section. However, it is important to note that
despite the apparent advantages and promises of neuromorphic hardware as
mentioned below, conventional multi-purpose computing architectures (like
supercomputers) guarantee reliability and reproducibility, which are crucial for
model development and debugging.
3.2.4
Hardware emulations - a short overview
This section deals with the concepts underlying neuromorphic hardware systems. Numerical values can be represented in digital (usually using an approximate binary representation) or in analog form.
As one of the pioneers, Carver Mead (Mead and Ismail, 1989; Mead, 1990)
proposed the following fundamental idea, which contrasts with conventional
digital computer design principles: if information is represented by the
relative values of analogue signals instead of the absolute values of digital
signals (approximated by a binary representation using several transistors),
analogue circuits making use of this principle can be designed so that
computations are carried out much more efficiently, both in terms of power
and in their use of silicon, compared to computations carried out by digital
circuits. Hence, using the graded, analogue properties of a transistor (partly in
the subthreshold regime) forms the fundamental idea underlying “analogue
computations”.
These insights led to the development of a neuromorphic engineering community,
which builds on the fact that computations can be emulated instead of
being simulated. Emulation means that, e.g., the output voltage of a circuit
represents the result of an equation, whereas in a simulated computation, the
result is computed by a set of operations carried out by digital circuits. A
review describing the most common building blocks used to date with silicon
circuits is given by Indiveri et al. (2011). Novel materials are currently being researched to implement memristive circuits, which may be particularly
useful for emulating synaptic behavior. This is due to the fact that the
electrical resistance of a memristor is not constant but depends on the amount
of current that has previously flowed through the device (see Fig 3.2). This
“memristance” behavior can be used to model spike-time-dependent plasticity
(Linares-Barranco and Serrano-Gotarredona, 2009; Jo et al., 2010).
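To make the “memristance” idea concrete, the linear ion-drift model often used to describe such devices can be sketched as follows. Parameter values are illustrative assumptions, not measurements of any real device.

```python
# Minimal sketch of the linear ion-drift memristor model (after Strukov et
# al., 2008), illustrating how resistance depends on the current that has
# previously flowed through the device. All parameter values are illustrative
# assumptions, not measurements of a real device.

R_ON, R_OFF = 100.0, 16000.0  # resistance when fully doped / undoped (ohm)
MU, D = 1e-14, 1e-8           # ion mobility (m^2 V^-1 s^-1), film thickness (m)

def simulate(currents, dt=1e-3, w=0.5):
    """Integrate the internal state w (doped fraction, 0..1) under a sequence
    of current samples and return the resulting resistance trace."""
    trace = []
    for i in currents:
        w += MU * R_ON / D ** 2 * i * dt   # state drift driven by the current
        w = min(max(w, 0.0), 1.0)          # hard bounds keep the state physical
        trace.append(R_ON * w + R_OFF * (1.0 - w))
    return trace

# A sustained positive current lowers the resistance; a negative one raises
# it, which is the asymmetry exploited for spike-time-dependent plasticity.
decreasing = simulate([1e-4] * 100)
increasing = simulate([-1e-4] * 100)
```

Because the conductance change depends on the signed history of current flow, pre-before-post and post-before-pre spike pairings can drive the device in opposite directions, which is what makes memristors candidates for STDP synapses.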
Neuromorphic engineering does not solely comprise analogue circuits, but
also includes digital parts and circuits, depending on the problem to be
modeled by the hardware. For example, the binary nature of a spike is predestined to be represented in digital form. Spikes are usually transmitted
using an address-event representation (Mahowald, 1994a), which is an asynchronous digital communication protocol that sends the address of the spike
emitting neuron (Indiveri and Horiuchi, 2011) off chip or broadcasts it across
the network (Schemmel et al., 2008). Furthermore, probabilistically spiking
neurons may well be modelled by digital circuits instead of analogue ones
(Farahini et al., 2014; Lansner et al., 2014).

Figure 3.2: Conceptual symmetry between the four fundamental electrical
building blocks: resistor, capacitor, inductor, and memristor. Figure by
Tan Jie Rui (2015).
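The address-event idea can be illustrated with a minimal sketch: only (timestamp, neuron address) pairs are transmitted, rather than the full state of a mostly silent population. The packet layout below is an illustrative assumption, not the protocol of any particular chip.

```python
# Minimal sketch of an address-event representation (AER): instead of
# sampling every neuron at every time step, only the addresses of spiking
# neurons are transmitted, together with a timestamp. The event layout is an
# illustrative assumption, not any specific chip's protocol.

def encode_aer(spike_raster):
    """spike_raster: dict mapping time step -> list of neuron indices that
    spiked. Returns a flat, time-ordered stream of (timestamp, address)
    events."""
    events = []
    for t in sorted(spike_raster):
        for address in spike_raster[t]:
            events.append((t, address))
    return events

def decode_aer(events, n_neurons, n_steps):
    """Rebuild a dense raster from the event stream."""
    raster = [[0] * n_neurons for _ in range(n_steps)]
    for t, address in events:
        raster[t][address] = 1
    return raster

stream = encode_aer({0: [3], 2: [1, 4]})
# Only three events are sent, however large the (mostly silent) population is.
```

Because spikes are sparse in time, this asynchronous scheme uses the communication channel only when something happens, in contrast to the fixed-rate sampling of a clock-driven readout.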
Another approach that has been explored is to make the custom-designed
circuits operate faster than biological real time. That is, the time in the
biological system is scaled down when emulated by the equivalent electronic
circuit (e.g. the refractory period after a neuron has spiked is 1 ms for
real neurons, but can be scaled down to 10 or 100 ns). Hence, a speed-up
factor of several orders of magnitude can be achieved. This speed-up is due
to the small physical size of the electronic circuits and the correspondingly
short time required to emulate neural behavior; it offers interesting
possibilities for hardware usage, as explored by the FACETS and BrainScaleS
projects. Particularly interesting is the possibility to exploit the speed-up
factor to model long-term processes like development and learning, and aspects
like the stability of learning rules.
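A back-of-the-envelope sketch shows what such a speed-up factor buys for long-term processes; the factor of 1e4 below is an illustrative value within the range aimed at by accelerated systems.

```python
# Back-of-the-envelope sketch of what an accelerated emulation buys: with a
# speed-up factor of 1e4 (an illustrative value, not a measured figure for
# any specific platform), a full day of biological time shrinks to seconds
# of wall-clock time.

def wall_clock_seconds(biological_seconds, speedup):
    """Wall-clock time needed to emulate a span of biological time."""
    return biological_seconds / speedup

DAY = 24 * 3600  # one day of biological development or learning, in seconds
# e.g. wall_clock_seconds(DAY, 1e4) -> 8.64 seconds
```

At such factors, experiments on developmental or learning time scales that would take days of simulated time become interactive.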
Summing up, an alternative approach to study the behavior of neural
systems is to emulate them using electrical circuits instead of simulating them.
3.2.5
Applications of neuromorphic systems
As this thesis was performed in a research network focused on neuromorphic
hardware and intends to provide candidate models that could possibly be run
on such hardware, it is important to also mention other (potential)
applications of neuromorphic hardware.
As already mentioned in section 3.2.2, an area where neuromorphic hardware designs have been successfully applied is the emulation of peripheral
sensory transduction (Indiveri and Horiuchi, 2011) for different types of
sensors (see also the review by Liu and Delbruck (2010)). But beyond mere
sensory transduction, the areas of application for neuromorphic technology
are manifold.
Possible fields where neuromorphic technology could be advantageous compared
to “traditional” computing approaches, i.e. mostly sequential algorithms run
on deterministic von Neumann architectures, include but are not limited to
prosthetics, algorithms and data processing, robotics, and simulation technology.
For example, prosthetic vision sensors (Chow et al., 2004) or limbs
(Jung et al., 2014) implement “neuromimetic” sensors or actuators that
interact with the body in a more direct way than previous approaches. Further
advancements in brain-machine interface technology will lead to stronger
links between mind and (a partly artificial) body. Similarly, assistive technologies can profit from neuromorphic design principles and lead to more efficient
technology in rehabilitative devices.
Fields related to robotics may advance rapidly and benefit especially from
neuromorphic design principles, as functions like categorization, pattern
recognition, completion and prediction are performed by animals with seemingly
no effort, i.e. with very low energy cost but high reliability and precision.
In particular, the ability to interact with the environment in a continuous
closed-loop fashion and the ability to learn, adapt and self-calibrate to
individual operating conditions (e.g. smooth pursuit behavior (Teulière et al.,
2014)), instead of relying on pre-programmed, hardwired control algorithms or
manual calibration procedures, can be advantageous for robotic systems.
Example applications for neuromorphic robotics could be assistive robots that
interact with humans in a domestic or industrial (commercial) environment,
rescue missions or space exploration. Important in this context are the
advantages attributed to (future) neuromorphic devices of being small, low weight,
energy efficient and fast, which can act in concert with (or maybe even
replace) conventional technology.
Despite the temporary loss of optimism in neural network research during
phases nowadays referred to as the “AI winter”, progress in artificial
intelligence is increasing (Kurzweil, 2005). Problems which have been tough to solve
for “standard” technology are becoming more and more feasible
with neuromorphic engineering approaches like deep or convolutional neural
networks (Schmidhuber, 2015), e.g. various recognition or classification
problems: face and image (Ciresan et al., 2012), object and scene (Sermanet
et al., 2014; Farabet et al., 2013) or speech recognition (Hinton et al.,
2012). Algorithms inspired by neural processing have been applied to
large-scale data analysis, e.g. fMRI2 (Benjaminsson et al., 2010) or PET3
data (Schain et al., 2013), and will increasingly be applied to problems
related to “big data”, often relying on some means of dimensionality
reduction (Hinton and Salakhutdinov, 2006). Other notable areas in which
neuromorphic approaches have
led (and will likely continue to do so) to significant improvements are intrusion and fraud detection (Ryan et al., 1998; Brause et al., 1999), data mining
(Orre et al., 2000; Bate et al., 2002), knowledge representation (Lu et al., 1996;
Larose, 2014) and vehicle control and navigation (Pomerleau, 1991; Miller
et al., 1995; Wöhler and Anlauf, 2001).
All these advancements can and likely will be integrated into different applications that might lead to far-reaching changes in our everyday life, for
example in automotive safety by enabling assisted driving or semi- or fully
autonomous cars (Thrun, 2010; Göhring et al., 2013; Castillo et al., 2014).
Another active area of research is to study whether neuromorphic systems can
be used for accelerated large-scale simulations (in combination with
supercomputers) to help understand “how the brain works”, serving as an
experimental testbed, as e.g. pursued by the Human Brain Project (Markram, 2012;
Calimera et al., 2013; Markram, 2014) or other projects (Eliasmith et al.,
2012). Thus, neuromorphic engineering approaches have the potential to affect
and improve various fields, not only in basic research but also in medical,
commercial and industrial applications.
2 Functional magnetic resonance imaging
3 Positron emission tomography
Chapter 4
Theoretical background
The purpose of this chapter is to introduce the most important concepts and
definitions (connectionism, neural coding and decoding, computations in sensory systems) that are fundamental to understanding the context and contributions of this thesis, and to give a short overview of existing theories and
approaches regarding the relationship of function and connectivity in neural
systems.
4.1
Connectionism
The aim of this section is to explain the fundamental concept underlying
the computational theories used in the models presented in this thesis. It
briefly explains the basic ideas of the connectionist approach and advantages
attributed to it. Furthermore, criticism against connectionism and the conflict
between classical and connectionist approaches will be discussed afterwards.
Here, principles of connectionism will be introduced only briefly; for a more
in-depth introduction see, for example, the book by Bechtel and
Abrahamsen (2002) or the work by Feldman and Ballard (1982). Experimental
evidence supporting the idea of connectionism linking structure and function
is discussed in section 4.4.
The term connectionism and its central ideas date back to the work by Hebb
(1949). The basic idea of connectionism is that behavioral or cognitive
phenomena can be described by some form of representation in networks of
interconnected units.
terconnected units. The mental representations or behavior (e.g. a thought,
items in working memory, the execution of a motor command) are expressed
in patterns of activation across units (for the debate on distributed versus
localized representations see section 4.2.3). The complexity of these units
and connections depends on the specific model and can vary from very simple binary units, to rate-based units representing populations of neurons, to
multicompartmental neurons with high biological detail. Connections can represent simple synapses performing the addition of activation from the source
unit or more complex synapses with different forms of filtering properties or
short- and long-term plasticity (Abbott and Regehr, 2004). The term parallel
distributed processing, introduced by Rumelhart et al. (1988), describes the
early form of connectionist systems.
Each neuron usually has one variable indicating its activity (in more complex connectionist models one could think of several variables, e.g. membrane
potential, spike rate, Calcium concentration) and this activity spreads across
the network based on the connections, which are usually understood as
directed edges between nodes (using terminology from graph theory). An important feature of connectionist models is learning, which is done by adjusting
the connection strengths between dedicated units. One way of thinking about
connectionist models is to regard the knowledge (or capabilities) of the model
to be represented in the connection patterns or connection strengths.
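The activity-spreading picture described above can be sketched in a few lines: each unit holds one activity value, activity propagates along weighted directed edges, and the network's “knowledge” lives entirely in the weight matrix. The weights and the logistic activation below are illustrative choices, not part of any specific model in this thesis.

```python
# Tiny sketch of a connectionist network: units hold a single activity value,
# and activity spreads along weighted directed connections. The weight matrix
# and logistic activation function are illustrative assumptions.

import math

def step(activities, weights):
    """One synchronous update: each unit sums its weighted input and squashes
    the result with a logistic activation function."""
    n = len(activities)
    new = []
    for j in range(n):
        net = sum(weights[i][j] * activities[i] for i in range(n))
        new.append(1.0 / (1.0 + math.exp(-net)))
    return new

# weights[i][j] is the strength of the directed edge from unit i to unit j.
weights = [
    [0.0, 2.0, -1.0],   # unit 0 excites unit 1 and inhibits unit 2
    [0.0, 0.0, 2.0],
    [0.0, 0.0, 0.0],
]
state = step([1.0, 0.0, 0.0], weights)  # activate unit 0 and let it spread
```

Learning in this picture amounts to changing the entries of `weights`; the update rule itself (Hebbian, error-driven, etc.) depends on the specific model.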
Advantages
One of the main advantages is the biological plausibility (units ∼ neurons
or populations, connections ∼ synapses, activations ∼ neural signals or variables). However, the degree of plausibility depends, of course, strongly on the
details of the model.
Due to the spread of activation determined by the connectivity patterns,
connectionist models (including some types of neural networks) are able to
implement associative or content addressable memory which makes them an
attractive framework to design models that compute a certain function (which
is the goal of neuromorphic systems as well, see 3.2). Another feature which
can be implemented with connectionist models is related to “graceful degradation” (or fault tolerance) which means maintaining (a limited) functionality
when parts of the network units or connections are removed (in contrast to
“catastrophic forgetting”). This implies some sort of robustness against incomplete input and fault tolerance by yielding the same output to similar but
not identical inputs (which is related to pattern recognition and demonstrated
in Paper 1 (Kaplan and Lansner, 2014)). Another feature is the ability to
generalize to some extent, i.e. giving reasonable output for novel, untrained
stimuli, if some prior knowledge is integrated and pitfalls like overfitting are
avoided. See, for example, the review by French (1999) on catastrophic forgetting (or catastrophic interference) in connectionist networks and (Sietsma
and Dow, 1991; Wolpert, 1992; Bishop et al., 1995) on generalization in neural
networks.
4.1.1
Criticism against connectionism
Critics argue that, considering all the cognitive abilities we have, including
language skills, reasoning about abstract concepts, self-reflection, etc.,
connectionism is the wrong model of how the human mind works. Criticism
against connectionism comes from earlier “classical” approaches in the field
of cognitive science and philosophy of mind, like computational, symbolic or
representational theories of mind (SRTM). Fodor and Pylyshyn (1988) formulated
the criticism against connectionism from a “classical” cognitive science
point of view and argue for the “language of thought” (LOT) hypothesis to
describe how the mind works.
The criticism against connectionism was targeted towards early connectionist models or “parallel distributed processing” (PDP) systems as described
in (Rumelhart et al., 1988). This led to years of discussion and split the
cognitive science community into two opposing schools, as the classical,
established theories were challenged by a novel, alternative approach. However,
this criticism is mitigated by more recent developments based on extensions
of the original connectionist ideas.
The debate between the symbolic/representational theories of mind and
connectionism is about what kinds of processes and representations are needed
to explain behavior at the cognitive level. The debate between connectionism
and SRTM is not about whether representations exist (both agree they do) or
how explicit the rules are that transform representations (both allow
explicit and implicit rules). However, the two approaches differ in their
assumptions about the nature of representations and
processes.
The classical or SRTM point of view uses language-like symbol-level representations and is grounded in the idea that mental states or cognitive processes
can be described as “representational states that have combinatorial syntactic
and semantic structure” (Fodor and Pylyshyn, 1988). In contrast, connectionist representations are regarded as high-dimensional vectors and subsymbolic (which means that the activity of a single unit does not itself represent
something; only the distributed activity pattern does) (Eliasmith and Bechtel,
2003). In LOT/SRTM models, syntactic/semantic ’molecular’ representations
are structurally composed of other ’molecular’ or ’atomic’ syntactic/semantic
representations. Furthermore, in LOT/SRTM the relations between basic representations are causal and constitutive (i.e. they are constituents of higher
representations and relationships), and the semantics are preserved by
syntactic manipulation. Since the representations have an inherent
combinatorial structure, processes in LOT/SRTM models can be sensitive to
those structures, which is not a priori true for connectionist models. This
extremely short
introduction is just to give a feeling for the difference between LOT/SRTM
and connectionist/PDP approaches, but not to state that the two are absolutely incompatible.
The major arguments raised by proponents of SRTM / LOT against (early)
connectionist models are:
Productivity: As our minds can, with some idealization, build a seemingly infinite number of combinations (different sentences in natural language, or thoughts) from a finite alphabet of thoughts (words, objects, concepts in mind) and a finite set of rules, this (idealized) infinite productivity is
expected to be achieved by frameworks that aim to explain the mind (Aydede,
2010). However, critics claim that connectionist models would need basically
an infinite number of primitives, since the processing in “traditional” PDP
is not inherently structure sensitive (but rather associative through weighted
associations). Thus, the claim is that productivity is limited in “traditional”
PDP approaches.
Systematicity: Connectionist models are associative networks but allegedly
have limited abilities to generalize on a wide scale and to transfer knowledge
that has not been trained. A common example is that a system able to
understand the sentence “A loves B” should also be able to understand and
recognize the sentence “B
loves A” (even though it may not be true) (see Smolensky (1988b,a) for arguments in favor of connectionism and (Fodor and McLaughlin, 1990; Hadley,
1994) for a more elaborate argumentation and discussion of this problem).
Compositionality: If x = a + b + c, the meaning of x is based on the
meaning of its subparts m(x) = m(a) + m(b) + m(c). The meaning of parallel
distributed representations (being subsymbolic) is in general not
compositional (when thinking about artificial neural networks), which makes
structure-sensitive processing difficult (i.e. the activity pattern for m(x)
is likely unrelated to the activity patterns representing m(a), m(b) and
m(c)). Hence, it is a challenge for connectionist models to represent
relations based on logic, rules or
analogies.
The three arguments named above (and other minor ones) raised against
connectionism have been discussed extensively in the literature since the
late 1980s; see e.g. Chalmers (1990); Davies (1991); Garson (2012); Goldblum
(2001). Importantly, however, these arguments against connectionism do not
imply that all LOT/SRTM models fulfill the above requirements (e.g. natural
language is not compositional and limitations concerning productivity exist
there as well). In the following, counterarguments from connectionist
advocates will be briefly discussed, addressing misunderstandings of
connectionist ideas by SRTM / LOT advocates. These misunderstandings focus
on artificial neural networks and classical PDP systems as used in the 1980s,
but do not hold for more recent approaches and extensions of the connectionist
approach.
The debate between advocates of connectionism and SRTM / LOT originates from
several misunderstandings:
• One misunderstanding is the idea that connectionist models use strictly
distributed representations and that representations in neural networks
cannot be localist (as a counterexample, the readout layer in Paper 1
(Kaplan and Lansner, 2014) can be seen as a localist or symbol-level
representation for odor objects).
• Another misunderstanding is the claim that connectionist models cannot
behave like syntactic or semantic symbols or operators, i.e. that there would
be no corresponding neural representation on a symbolic level (which is not
valid, as one could think of symbolic representations in a connectionist
model using different hierarchies and coding schemes).
• Regarding systematicity: distributed representations as used by connectionist models can support structure sensitive processing when extended
accordingly (i.e. not using the classical connectionist approach attacked
by Fodor and Pylyshyn (1988)), see Niklasson and van Gelder (1994).
Other approaches and coding schemes like “vector symbolic architectures” (Gayler, 2004; Levy and Gayler, 2008; Gayler, 2004) can bind
vectors (distributed representations) into new representations which are
sensitive to the constituent elements.
• Regarding productivity and compositionality: non-classical, hierarchical connectionist systems composed of several subnetworks can be built in such a way that compositionality and productivity are given.
• One can find cases where natural language and our mind do not fulfill the above three requirements (do animals and humans really think systematically and productively in the above-named sense?), hence one can argue that LOT/SRTM theories are not good models of how the mind works either (Goldblum, 2001).
Conclusions
Figure 4.1: This figure clarifies the relationship between “classical” symbolic/representational theories of mind (SRTM) or language of thought (LOT)
and connectionist models like associative networks. Connectionist models
are suitable to handle peripheral processes, like perception and motor control (interaction with the environment). SRTM / LOT models handle central
processes, for example reasoning and language. As connectionist models can
be constructed so that SRTM / LOT features are included, one can argue
that connectionist networks can also be used to realize central processes and
hence all sorts of mental processes. Figure redrawn from lecture material on
connectionism and LOT (Boley, 2008).
As a summary, the criticism against early connectionist models can no longer be regarded as valid in the context of advanced hierarchical models which remain inspired by the connectionist/PDP approach. Hence, it can be argued that connectionist models can be set up so that they implement LOT/SRTM on a more detailed scale. One way of seeing this debate is that connectionist models aim to describe cognitive processes on a more fundamental, mechanistic level, whereas LOT/SRTM approaches take a more abstract point of view, but the two approaches are not mutually exclusive (see figure 4.1). The question remains whether connectionism is the best or simplest way to describe mental phenomena. The development of connectionist models is far from finalized, and novel approaches trying to combine the advantages of LOT/SRTM and connectionism are being developed (Velik and Bruckner, 2008). A promising approach showing how symbolic reasoning can be implemented using spiking neurons has been presented by Stewart et al. (2010) and Rasmussen and Eliasmith (2011), and has been applied in the “Spaun” model (Eliasmith et al., 2012) emulating somewhat intelligent behavior.
4.2 Internal representations, neural coding and decoding
This section deals with the question how information is represented in the nervous system and how cell activities relate to stimuli. Here, the basic concepts will be explained and the most prevalent views will be briefly summarized, as they are relevant to relate this thesis to the existing research landscape.
What is coding?
A code is a rule for converting a piece of information into another form. From an information-theoretical point of view, different codes can be better or worse for a given application, and the two major aspects regarding the quality of a code are, on the one side, the robustness of the code against errors in transmission (which is the focus of error-correcting codes, or channel coding) and, on the other side, the amount of resources needed to represent information (which is the focus of source coding, or data compression). Communication in neural networks requires some sort of redundancy, as chemical signal transmission through synapses is in general unreliable (partly due to stochastic molecular processes (Allen and Stevens, 1994; Ribrault et al., 2011)). On the other hand, signal generation in the form of spikes consumes energy, which is costly and should be minimized (often discussed using terms like efficient or optimal coding (Levy and Baxter, 1996)). Hence, the nervous system needs to find one or several ways to take these two opposing effects into account.
What is neural coding?
Neural coding deals with the question how information is represented by the neural activity in a given ensemble of neurons. With respect to the early sensory system, it deals with the question how neurons react to stimuli from the environment, how the neural activity changes depending on certain stimulus features (e.g. light intensity, odor concentration, or the position and speed of a moving stimulus), and what the stimulus representation looks like at different levels (or areas) of the processing hierarchy. The idea that a neuron
codes for something means that spike activity varies in a correlated way as the
stimulus changes. Hence, one way to study neural coding is to change a specific
feature and record the changes in neural activity across this feature dimension.
It is important to note however, that the nervous system might employ very
different coding schemes for the same feature on different levels. For example,
in the early sensory areas the purpose is simply to represent information about
the environment, but in higher cortical areas the function is to bring different
pieces of information together and interpret them in different contexts which
might require a different form of internal representation than on the lower
levels.
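The procedure of varying one stimulus feature and recording the resulting activity can be sketched as a toy experiment. The Gaussian tuning curve, Poisson spiking and all parameter values below are illustrative assumptions, not measurements from the thesis.

```python
import math
import random

def tuning(feature, preferred, width, r_max):
    """Hypothetical Gaussian tuning curve: mean firing rate vs. stimulus feature."""
    return r_max * math.exp(-0.5 * ((feature - preferred) / width) ** 2)

def poisson_count(rate, window, rng):
    """Spike count in [0, window) seconds for a Poisson process with the given rate (Hz)."""
    count, t = 0, rng.expovariate(rate)
    while t < window:
        count += 1
        t += rng.expovariate(rate)
    return count

rng = random.Random(1)
features = [i * 10.0 for i in range(18)]   # e.g. bar orientations 0..170 degrees
mean_counts = []
for f in features:
    # Present the stimulus many times and record the mean spike count.
    rate = tuning(f, preferred=90.0, width=20.0, r_max=40.0)
    trials = [poisson_count(rate, 0.5, rng) for _ in range(50)]
    mean_counts.append(sum(trials) / len(trials))

# The feature that evoked the strongest mean response estimates the
# neuron's preferred stimulus along this feature dimension.
estimated_preferred = features[mean_counts.index(max(mean_counts))]
print(estimated_preferred)
```

Sweeping the feature while recording responses recovers the tuning curve, and hence what the neuron "codes for" along that dimension.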
What is neural decoding?
Neural decoding deals with the opposite process, i.e. how the information
about sensory or other stimuli can be retrieved from the activity of neurons.
Neurophysiologists often try to correlate the measured neural activity with
behavioral variables (e.g. performance in a stimulus discrimination task) or
stimulus variables. Traditionally, neural coding and decoding focus on the electrical activity at the neural level, i.e. the spikes fired by a neuron, and disregard other traces or indicators of activity. This is because electrical signals are the most prominent feature of nerve cells and are “easier” to detect in neural tissue. This, however, depends on the nature of the recorded
from neural tissues. This, however, depends on the nature of the recorded
signal and different recording or imaging techniques use different observables,
e.g. magnetic fields, Calcium concentration, mean voltage, blood flow, oxygen
level or others. See section 4.2.6 for a short overview of decoding approaches.
In the following, different coding theories are explained in more detail and the relation to my work will be mentioned; for a more extensive introduction to the field of neural coding and decoding see (Dayan and Abbott, 2001). See figure 4.2 for a schematic overview of different coding schemes.
4.2.1 Continuous, rate or frequency coding
Continuous coding means that the (external) variable can be mapped to the neural activity in a unique (or isomorphic) way, e.g. through a monotonically increasing spike rate as a certain stimulus feature increases. An example of rate coding (or continuous coding) is the encoding of stimulus strength (weight) in muscle stretch receptors (Adrian, 1926). The assumption is that the firing rate (measuring the number of spikes in a fixed time window) of a neuron contains all the relevant information about the encoded signal. This
most simple form of coding is, however, not the only form of coding, and it entails complications on different levels. First, spike rates are measured over
some time window which means that the information supposedly transported
by the rate is only available after some time. This time for integrating the
activity is in some cases too long, since a decision (e.g. to trigger the motor activity for flight (or fight)) needs to be taken after only very few action potentials when a new stimulus appears (e.g. flies can react within 30-40 ms).
The resolution for perceptual discrimination is in some species very high, for example less than 5 microseconds in barn owls (Moiseff and Konishi, 1981), which makes a rate code unusable in this context. Furthermore, neural adaptation changes the spike rate response of a neuron to the same stimulus over time, which makes the simple form of rate coding unreliable. For further discussion of rate codes see (Rieke, 1999).
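The dependence of a rate estimate on the integration window can be illustrated with a small simulation. Homogeneous Poisson spiking and the specific rates and window lengths are arbitrary illustrative assumptions.

```python
import random

def spike_train(rate, duration, rng):
    """Homogeneous Poisson spike times (rate in Hz, duration in s)."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t >= duration:
            return times
        times.append(t)

def rate_estimate(times, window):
    """Estimated firing rate from the spikes falling in [0, window)."""
    return sum(1 for t in times if t < window) / window

rng = random.Random(2)
true_rate = 50.0
# The spread of the estimate shrinks as the integration window grows --
# the information carried by a rate is only available after some time.
for window in (0.03, 0.1, 1.0):
    estimates = [rate_estimate(spike_train(true_rate, window, rng), window)
                 for _ in range(200)]
    mean = sum(estimates) / len(estimates)
    var = sum((e - mean) ** 2 for e in estimates) / len(estimates)
    print(window, round(mean, 1), round(var, 1))
```

A 30 ms window (roughly the reaction time of a fly mentioned above) yields very noisy rate estimates compared to a one-second window, which is the timing problem of pure rate codes.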
4.2.2 Discrete coding, labeled line coding and interval coding
These three types of coding are all based on spike rates, but do not need to rely on spike rates exclusively (that is, a temporal code can be implemented at the same time in the form of spike latency).
If a neuron spikes at much higher rates (compared to its resting firing
rate) for a certain interval of a stimulus feature, this coding scheme can be
called interval coding. Prominent examples of an interval code are orientation
selective cells in the primary visual cortex (Hubel and Wiesel, 1962), the
direction selectivity in the middle temporal area (MT/V5) (Albright, 1984) or the
frequency of a tone in the auditory system (Bendor and Wang, 2005).
In comparison to this interval code, one can speak of discrete coding when the stimulus variable is discrete, e.g. the presence of a specific person in a picture. This coding scheme is closely related to the view that the brain employs local representations (see section 4.2.3 for a discussion).
Labeled line code means the conservation of a certain feature (e.g. color,
position, pitch) in the activity of dedicated neurons in different areas. This
labeled line (or anatomical) coding is based on two ideas. First, sensory neurons express certain receptors and are specialized for their adequate stimulus
(e.g. for light or sound frequency range). Second, the afferent fibers from
sensory neurons up to the specialized cortical areas carry information only
from this receptor type, which leads to an ordered representation within that
sensory area, e.g. tonotopic maps in the auditory cortex or topographic maps
in the visual cortex (see section 4.4.1.1 for more on the concept of topography). These assumptions have been verified for parts of the visual system
and other sensory modalities, but not for the olfactory system in which some
receptors are activated by a wide range of stimuli and afferent fibers are involved in more than one type of sensation. This gives rise to a distributed (or
sometimes called combinatorial) code in the olfactory system (see also figure
4.2).
Figure 4.2: Coding schemes. The vertical axis represents the spike output
rate of a neuron in the presence of a certain stimulus. The horizontal axis
represents the stimulus feature of different (discrete) objects. A: Continuous
rate coding. Horizontal axis could represent stimulus intensity, for example
weight, light, odorant concentration. B: Interval coding. Stimulus features
could be orientation of a bar, odorant concentration or sound frequency. C:
Discrete coding / localist representation. One neuron reacts to one (invariant) stimulus only. According to the localist view, some neurons in the visual system code for the identification of the stimulus with a single object, e.g. a prominent building or person (Quiroga et al., 2005). Here, the horizontal axis represents
discrete stimuli in contrast to A, B where it represented continuous stimuli.
D: Population coding / distributed representation. One neuron reacts to several stimuli (objects or patterns) with the same or different output rates. As the object identity cannot easily be inferred from the neuron’s output rate, the stimulus representation needs to involve several neurons. Hence, the combined, distributed response from several neurons represents the presence of a
certain stimulus. An example system implementing this type of coding is the
olfactory cortex (Miura et al., 2012).
4.2.3 Population level: Local and distributed representations
When looking at the population response to a certain stimulus, one can think of two opposing ways of stimulus representation. One is a distributed representation, in which the stimulus is represented by patterns of activity involving several neurons. The other is that an object (or another cognitive entity) is represented by a distinct population (or, in the extreme case, a single neuron) which codes only for this thing (see (Barlow, 1972; Bowers, 2009) for an early and a more recent view on this principle). Other terms used to distinguish these concepts are modular coding (for “localist” representations) and population or ensemble coding (for distributed representations) (Erickson, 2001), where an extensive historical overview of these opposing views and their prominent partisans can also be found. Both views require some degree of redundancy and rely on cell assemblies (a term coined by Donald Hebb (Hebb, 1949)), which describes distinct, strongly interconnected groups of neurons whose spatiotemporal activity represents a cognitive entity. The difference between the two concepts lies in the question as to how many representations a unit (i.e. a population or cell assembly) takes part in.
Local (or localist) representations imply that a unit (cell population or assembly) is only part of one single cognitive entity. Cells implementing this
localist representation are also called “grandmother cells” or “concept cells”
as they code for a single object, concept or category that can be presented in
a multimodal way (e.g. through visual or auditory stimuli) and invariant way
(i.e. independent of shape, size, orientation etc.) (Quiroga et al., 2005). Arguments for a localist representation are the simplicity and concreteness (usually one population per object, even for complex concepts) of the assumed code, which can make computations efficient and attractive (see Page (2000); Bowers (2009); Roy (2012, 2013, 2014) for supportive experiments and a more extensive discussion of localist representations).
In contrast, in a distributed representation cells take part in the representations of many different objects, hence “with distributed coding individual
units cannot be interpreted without knowing the state of other units in the
network” (Thorpe, 1998). A distributed representation can be classified as one of three types: dense, coarse or sparse (see Bowers (2009) for an excellent review of the difference between localist and distributed representations).
In a dense distributed code very little information can be gained from
single units. In a coarse distributed code, cells have broad tuning curves and
respond to similar stimuli. In a sparse distributed code, complex stimuli are
coded by only very few cells (or units) which is advantageous for storing large
numbers of different objects in long-term memory with a limited amount of
resources (Meli and Lansner, 2013).
The small number of activated units (cells) makes it difficult to distinguish
sparse distributed codes from localist (or “grandmother”) codes, as in both
codes the activity of a unit correlates strongly with the object they encode.
This could be tested by checking that cells coding for one thing are involved
only in representing this one object (Quiroga et al., 2005) (e.g. imagine extending the horizontal axis in Figure 4.2 C to include all possible objects).
However, this becomes increasingly difficult as the number of test objects (or
concepts) increases if one wants to rule out a very sparse distributed code
(see Quiroga (2012) for a discussion in the context of declarative memory),
which keeps the discussion between these two concepts alive (see Quiroga et al.
(2008); Plaut and McClelland (2010) for critical replies to Bowers (2009), and
Bowers (2010) for a counter reply).
Sparse coding also refers to a low mean activity level, i.e. not only do few cells participate in an internal representation, but those that participate also spike at very low rates. This has the advantage of requiring much less energy (which some argue is even a metabolic requirement); see Laughlin (2001); Lennie (2003) for a more in-depth discussion.
Sparse codes are believed to be used in the visual system (e.g. sparse coding
of natural images in V1 (Vinje and Gallant, 2000), faces in inferior temporal
cortex (Young and Yamane, 1992; Hung et al., 2005) or medial temporal lobe
(Quiroga et al., 2005; Quiroga, 2012)), in the hippocampus (Wixted et al.,
2014) to code for episodic memories and in the olfactory system in the olfactory
bulb (as indicated by sparse connectivity (Willhite et al., 2006; Kim et al.,
2011), see (Yu et al., 2013) for a model) and in the olfactory cortex in mammals
(Poo and Isaacson, 2009; Isaacson, 2010) and insects (Perez-Orive et al., 2002;
Laurent, 2002).
In the context of this thesis, the activity in the olfactory cortex model in Paper 1 (Kaplan and Lansner, 2014) can be regarded as implementing a sparse distributed code, and the readout layer as implementing a localist code, as each readout neuron indicates the presence of a single pattern, just like concept cells.
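The contrast between a sparse distributed code and a localist readout can be sketched in a toy setting: random sparse binary patterns and a template-overlap readout. This is only an illustration of the two coding concepts, not the BCPNN-based model of Paper 1; all sizes and names are made up.

```python
import random

rng = random.Random(3)
n_units, n_patterns, k_active = 200, 5, 10   # sparse: only 5% of units active

# Sparse distributed code: each pattern is a small random set of active units,
# and each unit may take part in several patterns.
patterns = [set(rng.sample(range(n_units), k_active)) for _ in range(n_patterns)]

def localist_readout(activity, patterns):
    """Each 'readout neuron' sums the activity of its pattern's units; the
    winner signals the presence of exactly one pattern (a localist code)."""
    overlaps = [len(activity & p) for p in patterns]
    return overlaps.index(max(overlaps))

# A noisy, partial cue: half of pattern 2's units plus a few random ones.
cue = set(rng.sample(sorted(patterns[2]), k_active // 2))
cue |= set(rng.sample(range(n_units), 3))
print(localist_readout(cue, patterns))
```

Even from a degraded distributed pattern, the overlap-based readout recovers a single-pattern, concept-cell-like response, which is the division of labor between the cortical layer and the readout layer described above.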
Summary
Some coding schemes are applicable to characterize the early stages of a
sensory system (e.g. orientation tuning by interval coding), but might be less
useful to describe coding on higher levels (e.g. object recognition in inferior
temporal cortex). Different coding schemes can be present simultaneously at
different levels and act in concert in order to help recognition of stimuli and
orientation in the environment.
It is important to note that the debate about local vs. distributed representations is only one facet of the question how information is coded in the brain. Another debate concerns the role of the temporal dimension in neural coding.
4.2.4 Temporal coding
As already mentioned, rate codes are insufficient to account for all behaviors, especially when a high temporal resolution is crucial, including tasks in audition (Moiseff and Konishi, 1981), vision (VanRullen et al., 2005; Butts et al., 2007) and motor control for vocal behavior in songbirds (Tang et al., 2014). Hence, it has become clear that the nervous system must somehow make use of the temporal dimension in information coding in order to support such time-critical behavior.
One suggestion is that temporal correlation between spikes and synchrony
plays an important role in processing and binding information together (see
e.g. Singer (1994); Singer and Gray (1995)). As temporal correlations dramatically modulate neural responsiveness, it has become clear that temporal coding plays an important role in neural processing. Other temporal coding schemes do not rely on correlations within a narrow time window (measured, for example, by the period of underlying oscillations), but are based on sparse, precisely timed spikes (Kloppenburg and Nawrot, 2014). This idea is
related to the idea of “rank order coding” in the visual system (Thorpe et al.,
2001), which has been shown to transmit information more efficiently than
rate codes when looking at retinal ganglion cells (Van Rullen and Thorpe,
2001). Masquelier (2012) presents a model of the early visual system based
on relative spike times and studies the robustness of these codes.
It has also been suggested that temporal coding making use of oscillations
might play a role in tasks like memory or pattern recognition (Hopfield, 1995),
for example in the olfactory system (Brody and Hopfield, 2003). Similarly,
first-spike latencies have been argued to play an important role in the encoding of odor information (Margrie and Schaefer, 2003; Junek et al., 2010). As
will be discussed in context of the results on olfactory system modeling in
section 5.1, temporal coding and rate coding are not mutually exclusive. Another interesting phenomenon called “polychronization” observed in models
by Izhikevich (2006) is the repetition of spike patterns that are distributed
over both neurons and in time and thereby making use of sparse distributed
representations on the population level and in the temporal domain.
As this thesis is focused on the task of implementing rate-based learning and rate-based coding in spiking networks, other aspects of temporal coding are not discussed further here.
4.2.5 Probabilistic and Bayesian approaches
This section introduces a slightly different concept of how to think about the brain and provides a different way to interpret neural activity. The previous section on traditional neural coding ideas focused on the question how to correlate neural responses with real-world stimuli or objects as unambiguously as possible. The probabilistic view of the brain emphasizes the fact that the nervous system has to deal with many sources of uncertainty and that the brain likely has some way to represent this uncertainty. This contrasts with the quest to establish hard and fast coding rules based on the correlated activity between neurons X and Y when object A is present. Even though a subject can be 100% certain that object A is present in an experimental setup, this is often not true for real-world stimuli, and neural circuits need to “know” how to integrate these uncertainties.
Sources of uncertainty and noise in the nervous system are manifold (see
Faisal et al. (2008) for a review), just to name a few examples:
• Stochastic processes at the cellular and synaptic level: the release and fusion of vesicles in the synaptic cleft is influenced by molecular Brownian motion, and the stochastic opening and closing of ion channels (Sakmann and Neher, 2009; Kispersky and White, 2008) induces electrical noise in the membrane potential (often modeled as Poisson processes); together with spike threshold fluctuations (Rubinstein, 1995; Faisal et al., 2008), these influence spike initiation and spike timing.
• At the receptor level: despite the fact that rod photoreceptors detect photons very reliably (Rieke and Baylor, 1998), photons arrive at a random rate, which decreases the signal-to-noise ratio at low light levels (Bialek, 1987; Faisal et al., 2008). In olfaction, thermodynamic noise and turbulence in the medium carrying odor molecules (Balkovsky and Shraiman, 2002) lead to random arrival rates of odor molecules (Berg and Purcell, 1977; Bialek and Setayeshgar, 2005).
• Additional uncertainty originates from environmental influences (e.g. background stimuli, wind disturbing the transport of sound and odor signals, humidity and dust in the atmosphere influencing light scattering), which add significant amounts of noise to the sensory signals.
• However, noise can also have advantages, e.g. for information transmission by boosting subthreshold signals through stochastic resonance (Longtin, 1993). Unpredictable behavior can also be advantageous (e.g. a rabbit or other prey escaping from its predator requires a good (pseudo-)random choice), thereby introducing (semantic) uncertainty in the perception of other individuals’ behavior (how will my opponent react?); see the review by Faisal et al. (2008) for more on the benefits of noise in the nervous system.
Maybe it is due to the different sources of inherent uncertainty on various levels (noisy stimuli, unreliable synaptic transmission, etc.) that it seems so difficult to find irrevocable coding rules, as discussed in the previous section.
The Bayesian perspective on neural activity provides a mathematical framework for interpreting neural activity in the presence of uncertainty. Bayes’ rule offers rigorous mathematical means of updating one’s beliefs based on previous knowledge while integrating new evidence. This framework is attractive for three reasons, as described in Doya (2007). First, it offers a way to deal with the question how an ideal perceptual system would combine noisy sensory observations with prior knowledge. Second, Bayesian estimation algorithms can inspire new interpretations of how neural circuits function on a mechanistic level (under the assumption that neural circuits implement Bayesian estimation in some way). Third, as neural data is generally very noisy, Bayesian methods offer the possibility to optimally decode the underlying signals.
The Bayesian approach allows one not only to optimally update one’s beliefs by combining prior knowledge and new information, but also to infer the causal structure of the world. Knowing the causal structure of the world improves estimates and predictions about the future (Pearl, 2000), which makes the Bayesian framework particularly interesting for computational neuroscience, providing relevant guidelines for the development of mechanistic models. A large number of studies link this probabilistic approach to cognitive processes (see Chater et al. (2006) for a historical and conceptual review, and Kording (2014) for a short review of experimental evidence for Bayesian behavior in perception, action and cognition) and neural coding (Pouget et al., 2003; Ma et al., 2006; Doya, 2007; Beck et al., 2008). To name a few, Vilares and Kording (2011) provide an excellent introductory review of Bayesian models of the brain and behavior, with examples of a Bayesian interpretation of behavior when integrating multiple sensory cues (Deneve and Pouget, 2004); see also the reviews by Ernst and Bülthoff (2004); Ma and Pouget (2008). Furthermore,
Bayesian approaches have been linked to decision making (Beck et al., 2008),
sensorimotor control (Körding and Wolpert, 2006), visual perception (Yuille
and Kersten, 2006), language processing and learning (Chater et al., 2006),
conditioning (Courville et al., 2006), semantic memory (Steyvers et al., 2006)
and reasoning (Oaksford and Chater, 2007).
4.2.5.1 Relevance of probabilistic and Bayesian ideas for this thesis and their implementation
The probabilistic interpretation of neural activity has been used in several ways throughout this thesis. In models of the visual system (section 5.2), one of the guiding questions is how sensory information from separate sensors with limited receptive fields (a limited view of the sensory world) can be integrated
to form a global and coherent percept in the presence of noise and uncertainty
(Perrinet and Masson, 2012). This leads to a probabilistic detection of motion, which was modeled using Bayesian particle filter frameworks (also known as sequential Monte Carlo sampling) in (Perrinet and Masson, 2012) and (Kaplan et al., 2014). In these frameworks, the probability distribution function
(PDF) describing the likelihood of a certain sensory stimulus is approximated
by weighted particle samples. The evolution of the PDF represented by the
weighted population of particles can be described using a Markov chain on the sensory variables represented by the particles (e.g. position).
This is done by sampling particles at discrete time steps and weighting them
according to the total number of particles and the measurement of the sensory variable. Individual particles move according to their prediction about
the stimulus (making use of a prior on smoothness of trajectories) which allows modeling of motion-based extrapolation and prediction phenomena. In
each step, the population of particles changes to some extent as particles with
low weights are removed (to avoid numerical problems) and renewed by duplicating particles with highest weights (representing high confidence to be in
agreement with the input). Hence, the particles can be understood as sensors
that receive a high weight if the variable they represent (e.g. position) is in
agreement with the measurement (e.g. input from the retina). At the beginning, the parameters (direction and the respective weights) of the particles are initialized randomly and develop according to the stimulus movement and the parameters defining the model. This can lead to different behaviors like
tracking, or false tracking in which an initial hypothesis is followed blindly
without taking novel contradicting sensory evidence into account. Removal of
particles can be interpreted as suppression of false predictions (which could
be implemented through lateral inhibition in a neural network context, see
below and 5.2). For further details on this approach see sections 1.2 and 1.3 in (Perrinet and Masson, 2012), the review by Khoei et al. (2013) and (Isard and Blake, 1998; Weiss et al., 2002) for similar approaches.

In the work on the olfactory system (Kaplan and Lansner, 2014), a different framework guiding the neural network development has been used. There, the question was how the connectivity from lower sensory areas to the cortical area and within the cortical area could be organized with the goal to perform pattern recognition and Gestalt processing (pattern completion, rivalry, see section 4.3.4).
The question was addressed by using ideas inspired from machine learning
and the Bayesian confidence propagation neural network (BCPNN) algorithm
(Lansner and Ekeberg, 1989) which are described for rate-based models in
Lansner et al. (2009); Benjaminsson et al. (2013) (see section 4.4.4). The
principal idea is that neurons in the lower sensory areas act as probabilistic
sensors that code for certain features (odor features, groups of odorants and
their concentration) in form of their firing rates. The learning process can be
regarded as gathering information and collecting evidence (in form of correlations) about which sensors belong together and hence which stimulus features
are to be linked together in order to identify a pattern. The result of collecting
these correlations is then manifested in the connection weights derived from
the BCPNN algorithm (see section 4.4.4). The inference process is done by
providing sensory cues (in form of activation patterns in the lower sensory
areas) which triggers activity that propagates through the network according
to the learned weight matrices. The result of the inference process is then read out from the activity in the highest network layers. In accordance with
connectionist ideas, the memory of the system, that is the ability to recognize
and “remember” which stimulus belongs to which pattern, is implemented in
the connection weights. This idea that memory can be regarded as an inference process dates back (at least) to the first connectionist ideas (Rumelhart et al., 1988) and can be seen as a guiding framework to understand human
semantic memory (Steyvers et al., 2006) or memory in general (Abbott et al.,
2013).
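The idea that correlations collected during learning end up as connection weights that later drive inference can be sketched in a toy binary form: batch co-occurrence counting with log-odds weights, loosely inspired by the Bayesian weights of BCPNN. The regularisation constant, threshold and patterns are ad-hoc choices, and this is only an illustration, not the spiking model of Paper 1.

```python
import math

# Three binary training patterns over 9 units (made-up toy data).
patterns = [
    [1, 1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 1, 1],
]
n = len(patterns[0])
eps = 0.01  # regularisation to avoid log(0); an ad-hoc choice

# Learning: collect unit and pairwise co-activation statistics ("evidence
# about which sensors belong together").
p = [sum(pat[i] for pat in patterns) / len(patterns) for i in range(n)]
pij = [[sum(pat[i] * pat[j] for pat in patterns) / len(patterns)
        for j in range(n)] for i in range(n)]

# The memory lives in the weights: w_ij = log(P(i,j) / (P(i) P(j))).
w = [[math.log((pij[i][j] + eps) / (p[i] * p[j] + eps)) for j in range(n)]
     for i in range(n)]
bias = [math.log(p[j] + eps) for j in range(n)]

def complete(cue):
    """Inference: a partial cue propagates through the learned weights; units
    whose summed support exceeds a threshold are switched on."""
    support = [bias[j] + sum(w[i][j] for i in range(n) if cue[i])
               for j in range(n)]
    return [int(s > 0) for s in support]

print(complete([1, 1, 0, 0, 0, 0, 0, 0, 0]))  # → [1, 1, 1, 0, 0, 0, 0, 0, 0]
```

Presenting two units of the first pattern as a cue completes the full pattern, a minimal analogue of the pattern completion performed by the learned weight matrices described above.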
4.2.6 Decoding approaches
How can one explain cognitive and behavioral responses when looking at the
neural responses to sensory stimuli? This question is not only relevant for
physiologists that record neural activity and try to relate it with animal or
human behavior, but also of interest from a general perspective as it is fundamental to the interpretation of neural signals and processes. In order to decode neural activity, it is required to relate the recorded activity with behavior
(creating a mapping between activity and “output”) or a stimulus (mapping
between “input” and neural activity). For simplicity, I will in the following mostly refer to the “input” mapping, but the decoding principles can be applied to decoding the “output” equivalently. A number of different approaches to address this question have been proposed, of which a few will briefly be reviewed here.
Suppose that each cell in a recorded population has a preferred stimulus (or motor output) $s_i$ which has been identified beforehand. Then, a very simple approach to estimate the stimulus $\hat{s}$ from the population response $\vec{r}$ is to take the neuron with the strongest response (e.g. the highest output rate): $\hat{s} = s_j$ with $j = \operatorname{argmax}_i(r_i)$, where $r_i$ is the response of neuron $i$. This interpretation of
neural activity is particularly efficient for generating output behavior when
only one movement can be executed, for example moving the eye to a single
target position.
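This winner-take-all readout can be sketched in a few lines; the preferred stimuli and rates below are invented for illustration (e.g. eye-movement target positions in degrees), not values from the thesis.

```python
import numpy as np

# Winner-take-all readout: the estimate is the preferred stimulus of
# the most active neuron. Preferred stimuli and rates are illustrative.
preferred = np.array([-40.0, -20.0, 0.0, 20.0, 40.0])  # s_i
rates = np.array([2.0, 5.0, 21.0, 7.0, 1.0])           # r_i

s_hat = preferred[np.argmax(rates)]  # s_hat = s_j, j = argmax_i r_i
```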
A slightly more elaborate way of decoding, the population vector or
center-of-mass method, is to take into account the output activity of all
neurons and to estimate the stimulus (or output) from a weighted average:

\hat{s} = \frac{\sum_i^N r_i s_i}{\sum_i^N r_i}   (4.1)

This decoding scheme can be seen as a sort of “vote” where each vote is
weighted by the relative activity, that is, the higher the activity, the
higher the weight of the vote, equivalent to the concept of center of mass
in physics.
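A minimal sketch of equation 4.1, again with invented preferred stimuli and rates:

```python
import numpy as np

# Population vector / center-of-mass readout (equation 4.1): the
# estimate is the activity-weighted average of the preferred stimuli.
# Values are illustrative, not from the thesis.
preferred = np.array([-40.0, -20.0, 0.0, 20.0, 40.0])  # s_i
rates = np.array([1.0, 3.0, 10.0, 10.0, 3.0])          # r_i

s_hat = np.sum(rates * preferred) / np.sum(rates)
```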
Regarding experimental support, population vector methods have been used to
decode arm movement direction from a population of motor cortical neurons in
Rhesus monkeys (Georgopoulos et al., 1982, 1986). Furthermore, activity in
the superior colliculus has been observed to predict direction, velocity and
amplitude of eye movements when read out with a weighted average of the
active population (Lee et al., 1988). In contrast, Treue et al. (2000) and
Liu and Wang (2008) argue for a mixed strategy of combined population vector
averaging and winner-take-all mechanisms depending on the distribution of
input stimuli and the task to be performed (Groh et al., 1997) (see also
Zohary et al. (1996)), while earlier studies argue for a “pure”
winner-take-all decision process (Salzman and Newsome, 1994).
A linear readout mechanism, such as the population vector method, has the
advantage that a weighted summation of inputs could easily be implemented
by downstream neurons (e.g. neurons in inferotemporal cortex projecting to
prefrontal cortex, as mentioned by Hung et al. (2005)) under the assumption
that synaptic integration is linear, which may not be generally true.
If the neural response function (or tuning curve) fi (s) for different stimuli
s is known, one can approximate the real stimulus by template matching, that
is by asking which stimulus response is closest to the observed one (with the
sum of squared differences as error measure):

\hat{s} = \operatorname{argmin}_s \sum_i^N (r_i - f_i(s))^2   (4.2)

As Pouget et al. (2000) point out, “population vector decoding (equation 4.1)
fits a cosine function to the observed activity, and uses the peak of the cosine
function ŝ as an estimate of the encoded” stimulus, whereas template
matching fits the measured tuning curve (e.g. a Gaussian-shaped curve) to
the observed population response and uses the peak position as an estimate
of the stimulus.
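The template-matching estimator of equation 4.2 can be sketched as a grid search over candidate stimuli; the Gaussian tuning curves, noise level and grid are all assumptions for illustration, not parameters from the thesis.

```python
import numpy as np

# Template matching with assumed Gaussian tuning curves f_i(s).
preferred = np.linspace(-40.0, 40.0, 5)   # preferred stimuli s_i
sigma, f_max = 15.0, 10.0

def f(s):
    # Expected population response (tuning curves) for stimulus s.
    return f_max * np.exp(-(s - preferred) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
true_s = 12.0
r = f(true_s) + rng.normal(0.0, 0.5, size=preferred.size)  # noisy trial

# Equation 4.2: choose the candidate stimulus whose template f(s) is
# closest to the observed response in the least-squares sense.
candidates = np.linspace(-40.0, 40.0, 801)
errors = np.array([np.sum((r - f(s)) ** 2) for s in candidates])
s_hat = candidates[np.argmin(errors)]
```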
Again, suppose a stimulus s has been presented and the network activity
~r has been recorded. Then, Bayes’ rule can be used to decode the response:

P(s|\vec{r}) = \frac{P(\vec{r}|s) P(s)}{P(\vec{r})}   (4.3)

where P (~r|s) is essentially the histogram of responses to stimulus s
(measured over several trials), and P (s) represents the prior knowledge
available before the experiment (the probability of presenting stimulus s).
P (~r) is the probability of observing the response vector ~r and can be
estimated from (see Pouget et al. (2000)):

P(\vec{r}) = \int_s P(\vec{r}|s) P(s) \, ds   (4.4)
Just to name an example, Bayesian decoding has been applied to infer an
animal’s position from firing patterns of hippocampal place cells (Brown et al.,
1998) or hand motion from motor cortex activity (Gao et al., 2002). Brockwell
et al. (2004) apply particle filtering to perform Bayesian decoding on motor
cortical signals.
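A toy version of Bayesian decoding over a discrete stimulus set can be written down directly from equations 4.3 and 4.4; the independent-Poisson response model, the mean rates and the uniform prior below are all assumptions for illustration.

```python
import numpy as np

# Bayesian decoding sketch for two stimuli, "A" and "B", with assumed
# independent-Poisson responses of three neurons; rates are invented.
means = {"A": np.array([8.0, 2.0, 1.0]),   # mean rates under s = A
         "B": np.array([1.0, 3.0, 9.0])}   # mean rates under s = B
prior = {"A": 0.5, "B": 0.5}               # P(s)

def log_likelihood(r, mu):
    # log P(r|s) for independent Poisson neurons; the log r! term is
    # dropped because it cancels when normalizing the posterior.
    return float(np.sum(r * np.log(mu) - mu))

rng = np.random.default_rng(1)
r = rng.poisson(means["A"])   # simulate one trial of stimulus A

# Equation 4.3: unnormalized log posterior, then normalize; the
# normalizer plays the role of P(r) in equation 4.4 (a sum here,
# since the stimulus set is discrete).
log_post = {s: log_likelihood(r, means[s]) + np.log(prior[s]) for s in means}
z = np.logaddexp(log_post["A"], log_post["B"])
posterior = {s: np.exp(lp - z) for s, lp in log_post.items()}
s_hat = max(posterior, key=posterior.get)
```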
In order to judge the quality of a decoding approach, one can use the
distribution of estimates gained over several trials and several stimuli. Of
major interest is first of all how close the estimated values are to the real
stimuli, but also how the estimate varies with respect to the stimulus, that is,
how big the variance of the decoder is and whether it is biased. The minimum
variance of a decoder is determined by the Cramér-Rao bound, based on the
bias of the estimator and a quantity called the Fisher information IF of
the population code (Pouget et al., 2003). IF reflects the maximum amount
of information that can be extracted from a population and is a
decoder-independent measure of the information content of a population of
neurons. For further information on decoding approaches, see for example the
study by Pouget et al. (1998), which introduces different types of decoders
(maximum likelihood, optimum linear estimator, center of mass and complex
estimator), or the reviews by Oram et al. (1998); Pouget et al. (2000, 2003);
Quiroga and Panzeri (2009).
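For the common case of independent Poisson neurons, the Fisher information takes the standard textbook form I_F(s) = Σ_i f_i'(s)² / f_i(s), and the Cramér-Rao bound for an unbiased decoder is 1/I_F(s). A small numerical sketch, with illustrative Gaussian tuning parameters:

```python
import numpy as np

# Fisher information of a population of independent Poisson neurons
# with Gaussian tuning curves; parameters are illustrative.
preferred = np.linspace(-40.0, 40.0, 9)
sigma, f_max = 15.0, 10.0

def f(s):
    return f_max * np.exp(-(s - preferred) ** 2 / (2 * sigma ** 2))

def fisher_information(s, h=1e-4):
    # I_F(s) = sum_i f_i'(s)^2 / f_i(s), derivative taken numerically.
    df = (f(s + h) - f(s - h)) / (2 * h)
    return float(np.sum(df ** 2 / f(s)))

# Cramer-Rao bound: minimum variance of an unbiased decoder at s = 0.
crb = 1.0 / fisher_information(0.0)
```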
The approaches discussed so far were mainly based on the spike-rate response
of neurons. In contrast to rate-based approaches, it has been proposed that
neural systems also employ the temporal domain to transmit and extract
information. The interplay between oscillations and spike times provides a
rich framework for information processing and learning through spike-time
dependent plasticity (Guyonneau et al., 2005; Masquelier et al., 2009a,b).
For example, it is argued that the olfactory system decodes odor information
(concentration and identity) from spike latencies (Laurent and Davidowitz,
1994; Hopfield, 1995; Laurent et al., 2001; Margrie and Schaefer, 2003;
Brody and Hopfield, 2003; Schaefer and Margrie, 2007).
4.3 Computations in sensory systems
The purpose of this section is to explain the essential concepts of computations
assumed to take place in sensory areas. However, due to the high number of
computations (or, generally speaking, operations) carried out by neural
structures, only those that are relevant in the context of this thesis will
briefly be discussed. We will begin at the lower sensory areas with the
question of how to describe the reaction of neurons to visual stimuli, and
then move up the processing hierarchy towards higher-level computations
relevant for this thesis.
4.3.1 Receptive fields
How do neurons react to sensory stimuli? When we perceive our environment,
a stimulus triggers a cascade of reactions in different parts of our nervous
system. In the quest to characterize how neurons respond to stimuli, the
concept of receptive fields has been developed. Receptive fields (RFs)
extract features from the rich variety of stimuli that our senses are
confronted with. According to Alonso and Chen (2009), “the receptive field
is a portion of sensory space that can elicit neuronal responses when
stimulated.” In order to build a picture of the world around us, receptive
fields are organized in a hierarchy which allows us to identify stimuli and
recognize complex objects (Riesenhuber and Poggio, 1999; Rao and Ballard,
1999). The receptive field of a neuron is normally measured by recording the
firing rate of the neuron in response to variations of the stimulus parameter
of interest (Sharpee, 2013), e.g. changing the stimulus position in the visual
field, the orientation of a bar or the frequency of a tone. From a computational
perspective, the concept of receptive fields is of fundamental interest, as it
describes the input-output relationship of a neuron and plays an important
role in characterizing the functional role of a neuron (or neural population)
when brought into a network context. However, measuring this input-output
relationship is very difficult, as measurements are influenced by many factors
such as the depth of anesthesia (Friedberg et al., 1999), the task or
attentional state (Hamed et al., 2002; Womelsdorf et al., 2006) and the
contrast of the stimuli (for visual stimuli) (Polat et al., 1998; Sceniak
et al., 1999), just to name a few examples of additional influences. Hence,
this input-output relationship should not be regarded as static or irrevocable,
as the traditional segregation between “simple” and “complex” cells applied
to visual cortical neurons might suggest (Hubel and Wiesel, 1962; Skottun
et al., 1991), but may depend on other factors which are not included in the
standard formulation of the receptive field, e.g. the input statistics
(Fournier et al., 2011), contextual input (Kapadia et al., 2000) to neurons
surrounding the measured neuron (which do not necessarily belong to the
receptive field of the measured neuron) and short-term plasticity within
cortical circuits modifying the orientation tuning (Felsen et al., 2002).
Fairhall (2014) gives a summary of the difficulties arising from measuring
receptive fields with passive stimulation protocols and discusses the role
of neural circuitry in receptive field measurement in the light of recent
experimental advances.
The question of how receptive fields emerge is highly relevant and related
to the topic of this thesis in a broad sense, as it also addresses the question
of how certain functions can be achieved through connectivity (see Harris and
Mrsic-Flogel (2013) for a review on the relationship between connectivity and
sensory coding, and Hirsch and Martinez (2006) for circuits involved in visual
receptive fields). Section 4.4.1 will pick up the question of how much receptive
fields are pre-wired and how much they can be (re-)shaped by experience. Another
question of fundamental interest is why the nervous system shows certain
properties or types of receptive fields, or, phrased differently, what should
be encoded by lower sensory areas? Lindeberg (2013a,b) introduces an axiomatic
approach to this question with respect to visual receptive fields. Interestingly,
this approach yields receptive field structures that are similar to those seen in
biology, simply based on necessity requirements regarding their spatial,
spatio-temporal and spatio-chromatic structure, e.g. symmetry, temporal
causality and the ability to handle affine image transformations (like
rotation, scaling and translation), amongst others, which represent basic
characteristics of our physical world. Another notable approach to these
questions is formulated by the predictive coding hypothesis (Rao and Ballard,
1999; Spratling, 2010), which suggests that the visual cortical hierarchy
performing predictions about future inputs could shape receptive field
responses as seen experimentally. This idea extends the view that lower
sensory areas act as static feature detectors and tries to embed the emergence
of functional properties in area V1 (or generally in lower sensory areas)
into a wider and more unified framework for understanding cortical function
(Spratling, 2010).
Since these questions are too broad to be addressed here comprehensively
and are not of essential importance in the context of this thesis, I refer to
the following literature, which in part tries to answer parts of these
questions (focusing on visual receptive fields): Sharpee (2013) reviews the
characterization of receptive fields and computational aspects, Bair (2005)
focuses on dynamic aspects of visual receptive fields, Huberman et al. (2008)
present a review on how visual receptive fields are formed, Maex and Orban
(1996); Carver et al. (2008); Wei and Feller (2011); Rochefort et al. (2011)
deal with the development of direction selectivity, Ferster and Miller (2000)
with orientation selectivity, and Weinberger (1995) reviews how receptive
fields change in adult sensory cortices.
In the context of this thesis, a very simple interpretation of receptive field
properties is used (Kaplan et al., 2013). In the studies on the visual system,
each cell is modeled as having a static receptive field defined by a position,
direction, speed and the respective tuning widths (see figure 4.3). The
receptive field is implemented as a two- (or four-)dimensional Gaussian
function with a mean defined by the preferred position and direction,
respectively. The tuning widths are defined by the distance to the neighboring
cell, so that the whole feature space is covered by the network with
bell-shaped tuning curves. A stimulus is described in a similar way, with a
Gaussian intensity profile:

β_{x,y}: describes the size (radius) of the stimulus in the x- and
y-dimension, i.e. the “blurriness” of the dot

β_{vx,vy}: describes how accurately the speed of the stimulus appears
(i.e. the “blurriness” in the velocity dimensions v_{x,stim} and v_{y,stim},
respectively)

If β_{x,v} = 0, the stimulus behaves like an idealized sharp point stimulus.
The interaction between a stimulus and a cell i is modeled as an excitatory
input current generated by an inhomogeneous Poisson process with rate Li (t),
defined by the stimulus parameters and the cell’s tuning properties.
Li (t) describes the envelope of the input rate transmitted through excitatory
synapses into the neurons and is a product of Gaussians over the tuning
dimensions:

L_i(t) = f_{stim}^{max} \cdot \exp\left( -\frac{d_{i,stim}(t)^2}{2(\sigma_x^2 + \beta_x^2)} - \frac{(v_{x,i} - v_{x,stim})^2}{2(\sigma_v^2 + \beta_{v_x}^2)} - \frac{(v_{y,i} - v_{y,stim})^2}{2(\sigma_v^2 + \beta_{v_y}^2)} \right)

where the distance between the stimulus and the center of the cell’s receptive
field is the Euclidean distance:

d_{i,stim}(t) = \sqrt{(x_i - x_{stim}(t))^2 + (y_i - y_{stim}(t))^2}
In a one-dimensional model, stimuli can move only along the x-axis, hence
the equations simplify to:

L_i(t) = f_{stim}^{max} \cdot \exp\left( -\frac{(x_i - x_{stim}(t))^2}{2(\sigma_x^2 + \beta_x^2)} - \frac{(v_{x,i} - v_{x,stim})^2}{2(\sigma_v^2 + \beta_{v_x}^2)} \right)
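The one-dimensional input rate and the resulting inhomogeneous Poisson spikes can be sketched as follows; all parameter values (peak rate, tuning widths, stimulus trajectory) are illustrative choices, not the ones used in the papers, and the spikes are drawn with a simple per-bin Bernoulli approximation.

```python
import numpy as np

# One-dimensional stimulus-to-cell input rate L_i(t) and spikes from
# an inhomogeneous Poisson process (per-bin Bernoulli approximation).
f_max = 1000.0                 # peak input rate (Hz), illustrative
sigma_x, sigma_v = 0.1, 0.2    # cell tuning widths (position, speed)
beta_x, beta_v = 0.05, 0.05    # stimulus "blurriness"
x_i, vx_i = 0.5, 1.0           # cell's preferred position and speed

def L(t, x0=0.0, vx_stim=1.0):
    # Stimulus moves at constant speed: x_stim(t) = x0 + vx_stim * t.
    x_stim = x0 + vx_stim * t
    return f_max * np.exp(
        -(x_i - x_stim) ** 2 / (2 * (sigma_x ** 2 + beta_x ** 2))
        - (vx_i - vx_stim) ** 2 / (2 * (sigma_v ** 2 + beta_v ** 2)))

rng = np.random.default_rng(2)
dt, T = 1e-3, 1.0
t = np.arange(0.0, T, dt)
spikes = t[rng.random(t.size) < L(t) * dt]  # keep bin with prob L(t)*dt
```

The rate peaks when the stimulus crosses the cell's preferred position at matching speed (here at t = 0.5).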
In Paper 1 (Kaplan and Lansner, 2014) on the olfactory system, the receptive
field of neurons in the olfactory epithelium and bulb can be understood as
the set of odorants to which the neuron is sensitive. This set of odorants
was determined by random choice, with each artificial test pattern activating
between 30% and 50% of the receptors, in accordance with the experimental
literature (Ma et al., 2012). However, this number is biased towards high
values, as it is in the interest of both studies (Ma et al., 2012; Kaplan and
Lansner, 2014) to study the activation patterns of odorants to which the
receptor population reacts (in Ma et al. (2012), 60–100 out of 200 receptor
families were activated by the 60 selected odors). The number of odorants to
which a real receptor reacts is likely lower. In the olfactory bulb, we
explored the feasibility of a hypothetical interval code in the concentration
domain. That is, the receptive field of a mitral/tufted cell in the bulb can
be seen as a concentration range for a given odorant. The activation of an
olfactory receptor was modeled as an excitatory input current with one of two
temporal profiles, a single pulse or short oscillations, depending on the
task to be tested. For further implementation details about the lower sensory
stages, see section 2.2 in Kaplan and Lansner (2014).
For model neurons in the olfactory cortex, the concept of a receptive field
cannot be clearly defined, due to the divergent connectivity pattern from bulb
to cortex and the sparse distributed code in cortex. As the cortical neurons
in the model sample from several different glomeruli, they are responsive to
a larger set of odorants, and one could regard their receptive field as being
(a) 1-D model (b) 2-D model

Figure 4.3: Tuning properties. (a) In a one-dimensional model the tuning
space is only two-dimensional: the horizontal axis shows the preferred
position of cells (shown as black dots), the vertical axis represents speed
in that direction. Receptive fields are modeled as Gaussian curves and the
blue circles indicate the width of the bell-shaped tuning curves. (b) Example
distribution similar to the one used in Paper 2 (Kaplan et al., 2013). Cells
are distributed on a hexagonal grid in order to have maximum packing density.
Each hexagon contains cells with different preferred speeds and directions.
The color code stands for the preferred direction and the length of the
vector for the preferred speed. This is only an example figure to convey the
idea; the network used in Kaplan et al. (2013) used a larger number of cells
and more randomly distributed tuning properties.
composed of a different set of odorants compared to their input neurons on
lower levels, very similar to what has been reported in experiments (Wilson,
2001; Wilson and Sullivan, 2011b). Hence, the concept of hierarchical
receptive fields as applied in the visual system has no direct counterpart
in the olfactory system (Secundo et al., 2014), since representations of
intermediate complexity do not appear to exist (Miura et al., 2012). However,
one way to imagine a receptive field in the olfactory cortex could be to
assume that a hypercolumn codes for a certain odor attribute, e.g. fruitiness,
and cells belonging to a minicolumn would code for a more specific value of
that attribute, e.g. fresh (like apple), sweet (like mandarin) or sour (like
citrus).
4.3.2 Normalization, winner-take-all
Normalization in neural circuits means that neural activity is rescaled and
bounded within a certain range while preserving the relations between the
units. From a functional point of view, there are two motivations for this
type of operation. One stems from the biological perspective and is grounded
in the limitation of neural resources: as the bandwidth for neural signals is
limited and high spiking activity is metabolically expensive, neural activity
needs to be limited. Another motivation originates from the interpretation of
neural activity as representing probabilities (see 4.2.5, the paragraph below,
and section 4.4.4).
There is some agreement that the brain performs divisive operations at
many stages (presumably distributed across and implemented in several “brain
modules”) and that normalization plays an important role in the processing of
sensory signals, for example in the form of contrast normalization through
divisive normalization (Heeger, 1992, 1993; Carandini et al., 1997),
implemented presumably through lateral inhibition. Furthermore, normalization
is believed to be an important mechanism for attention and its modulation of
neural signals (Reynolds and Heeger, 2009) and for context-dependent decision
making (Louie et al., 2013). Hence, normalization is often regarded as a
canonical operation in neural systems (Carandini and Heeger, 2012), but how
it is implemented is less clear.
In the context of probabilistic coding, normalization can be seen as a
prerequisite for marginalization, an essential step in probabilistic inference
(summing joint probabilities over some random variables to obtain marginal
probabilities of the remaining ones). In detail, following the interpretation
that neural activities represent probabilities for certain features, the
normalization operation in a neural circuit makes sure that the random
variables (or conditions) that are summed up (or “marginalized out”) remain
bounded, just as probabilities are (Beck et al., 2011). A hypothetical example
for this type of operation would be a network which performs Bayesian or
population vector decoding of a stimulus ŝ from another network, which
requires an integral (or sum) of neural responses, similar to equation 4.1,
\hat{s} = \sum_i^N r_i s_i / \sum_i^N r_i, or equation 4.4, which computes the
response prior P(\vec{r}) = \int_s P(\vec{r}|s) P(s) ds. The normalization
step (division by \sum_i^N r_i, or limiting the activity representing the
probabilities P(\vec{r}|s)) happens naturally in mathematics, but requires
some non-trivial mechanism in neural systems due to the inherent nonlinearities
in neural behavior. Jansson (2014) has studied the problem of how to implement
different forms of normalization in spiking neural networks through lateral
inhibition and highlighted the difficulties of this task. See Beck et al.
(2011) for a more abstract (rate-based) implementation approach of
marginalization through divisive normalization, and references therein for
further examples of marginalization in neural systems.
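In rate-based terms, divisive normalization followed by marginalization can be sketched in a few lines; the activities, likelihoods and prior below are invented numbers for illustration.

```python
import numpy as np

# Divisive normalization rescales activities so they sum to one while
# preserving their ratios, which lets them be treated as probabilities.
r = np.array([2.0, 6.0, 12.0])      # raw activities (illustrative)
p = r / np.sum(r)                   # divisive normalization

# Marginalization as in equation 4.4, on a discrete stimulus grid:
# P(r) = sum_s P(r|s) P(s).
likelihood = np.array([0.1, 0.6, 0.3])   # P(r|s) for 3 candidate stimuli
prior = np.array([0.2, 0.5, 0.3])        # P(s)
p_r = float(np.sum(likelihood * prior))
```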
In this thesis, lateral inhibition (as an approximative attempt to implement
normalization) is used in the following situations:

1. The olfactory system model implements some form of normalization when
coding the odor concentration by mitral cells in the olfactory bulb, in form
of the hypothesized interval code. Due to strong lateral inhibition via
periglomerular and granule cells, the mitral cell activity remains bounded
to ≲ 30 Hz and thereby forwards a roughly equal amount of excitation
to the olfactory cortex for a large range of concentrations.

2. For the derivation of connections from OB to OC and within the OC, cell
activations across different odor patterns are interpreted as probabilities
and are half-normalized within a glomerular or hypercolumnar unit (see
equations 3, 4 and 14 in Paper 1, Kaplan and Lansner (2014)) through a
common pool of inhibitory neurons.

3. In the olfactory cortex, following the probabilistic interpretation
described earlier, cell activities represent the certainty that some odor
pattern or feature has been detected. Hence, in order to stay close to this
idea (and the Bayesian learning framework explained in section 4.4.4),
the olfactory cortex was structured as modular, with lateral inhibition
approximating a normalization operation in the spiking model.

4. Similarly, in the visual system model, unspecific lateral inhibition has
been used in order to limit and normalize the activity representing a
moving stimulus.

As the precision of the implemented normalization is not of crucial relevance
for the respective model function, the effect of the lateral inhibition (and
the form and precision of the normalization) was not studied further.
A winner-take-all operation is not precisely a form of normalization, as
it does not preserve the relations between units but only picks one winner.
However, it limits neural activity and the propagation thereof. This might be
desirable in cases when the identity of a single unit is required, for
example for discrete classification (as in the olfactory system), attention
(Koch and Ullman, 1987; Lee et al., 1999; Itti and Koch, 2001) or action
selection. Proposed mechanisms implementing winner-take-all include lateral
inhibition (Coultrip et al., 1992; Shoemaker, 2015) on the network level or
mechanisms targeting the presynaptic input (Yuille and Grzywacz, 1989).
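The lateral-inhibition route to winner-take-all can be sketched with a simple rate model in which every unit inhibits all others; the dynamics, weights and inputs below are illustrative choices, not the circuits of the cited models.

```python
import numpy as np

# Winner-take-all through unspecific lateral inhibition in a rate
# model: each unit receives its feedforward input minus inhibition
# proportional to the summed activity of all other units.
def wta(inputs, w_inh=1.0, steps=200, dt=0.1):
    inp = np.asarray(inputs, dtype=float)
    r = inp.copy()
    for _ in range(steps):
        inhibition = w_inh * (r.sum() - r)         # from all other units
        drive = np.maximum(inp - inhibition, 0.0)  # rectified net input
        r += dt * (-r + drive)                     # leaky rate dynamics
    return r

r = wta([1.0, 3.0, 2.0])
# the unit with the strongest input suppresses the others
```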
4.3.3 Pattern recognition, associative memory
The ability to recognize a stimulus (e.g. as being familiar or novel, harmful
or harmless) and to distinguish different stimuli from each other is one of the
most basic functions any living being has to perform in order to survive. From
an information-theoretic point of view, the goal is to map the stimulus from
the environment, originating from a feature-rich, high-dimensional input space,
to a typically low-dimensional space (answering simple questions like “Do I
know this person?”, “Is this harmful to eat?”). In order to do so, different
stimulus features are extracted and associated with each other and with the
target output, which is learned over time. This is a simplified formulation
of a complex process taking place in several layers of a sensory processing
hierarchy. Generally speaking, input patterns can be recognized based on
the associations between the dominant stimulus features extracted along a
hierarchy (see for example Trier et al. (1996) for character recognition, or
Jain et al. (2000) for a more general perspective). Hence, pattern recognition
can be seen as the output function of an associative memory system. Due to its
obvious appeal in information technologies, pattern recognition is a classical
problem in machine learning, computer vision and related fields. Artificial
neural networks have long been particularly successful in pattern recognition
applications (Bishop et al., 1995; Ripley, 1996). This is due to their layered
structure, which typically maps higher-dimensional spaces (the input space) to
lower-dimensional spaces (represented by the hidden layer), thereby performing
dimensionality reduction or data compression (see e.g. Lerner et al. (1999)
for an overview of feature extraction paradigms using neural networks).
Similarly, Bayesian approaches have been applied to the estimation of class
membership (Pao, 1989), which is an equivalent formulation of the task
addressed in Paper 1 (Kaplan and Lansner, 2014).
In this thesis, this approach has been implemented in a multi-layered
network with greater biophysical detail, inspired by the olfactory system. The
lower sensory areas, the olfactory epithelium and bulb, extract features which
are mapped through a mutual information measure and a correlation-based
learning algorithm (BCPNN) to the cortex (representing the hidden layer) and
finally to readout neurons. Following the idea of naive Bayesian classifiers,
the network calculates the probabilities of an input belonging to certain
classes given the observed evidence (MacKay, 1995). As in classical neural
networks, the training is done in a supervised manner, that is, the target
outputs (the specific readout neurons) are activated while the respective
input is presented, and hence the readout neurons “know” that a fixed number
of different patterns is presented during training (the number of readout
neurons also equals the total number of learned patterns). A detailed
description of the implementation of pattern recognition can be found in
Paper 1 (Kaplan and Lansner, 2014).
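The naive Bayesian classification idea the text refers to can be illustrated with a plain textbook classifier on binary activation patterns; this is not the spiking BCPNN network of Paper 1, and the patterns and labels below are invented.

```python
import numpy as np

# Naive Bayes on binary activation patterns (textbook sketch).
patterns = np.array([[1, 1, 0, 0],
                     [1, 0, 1, 0],
                     [0, 0, 1, 1]])
labels = np.array([0, 0, 1])

def train(x, y, alpha=1.0):
    classes = np.unique(y)
    prior = np.array([(y == c).mean() for c in classes])   # P(c)
    # P(feature = 1 | c) with Laplace smoothing alpha.
    cond = np.array([(x[y == c].sum(0) + alpha) /
                     ((y == c).sum() + 2 * alpha) for c in classes])
    return classes, prior, cond

def classify(x, classes, prior, cond):
    # log P(c) + sum_j log P(x_j | c) under the independence assumption.
    log_post = np.log(prior) + (x * np.log(cond) +
                                (1 - x) * np.log(1 - cond)).sum(axis=1)
    return int(classes[np.argmax(log_post)])

model = train(patterns, labels)
pred = classify(np.array([1, 1, 1, 0]), *model)
```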
4.3.4 Pattern completion, pattern rivalry and Gestalt perception
Pattern completion is the process of filling in the missing pieces of an
incomplete input so that the incomplete stimulus is perceived as a congruent
object or pattern. This function is very important in many perceptual
processes, as noise or obstacles impair or mask out parts of the input signal.
A simple example can be seen in the right part of figure 4.4, where the visual
system combines the corners into a triangle percept and fills in the missing
edges.

Pattern rivalry (multistability) is the phenomenon of perceiving an ambiguous
stimulus differently over time, despite the fact that its physical nature has
not changed. Classical examples are the Necker cube and the Rubin vase seen
in figure 4.4.
Figure 4.4: Examples of Gestalt perception: multistability (pattern rivalry)
and reification (pattern completion). Left: The Necker cube is an example of
multistable perception, as the brain can switch back and forth in deciding
which side of the cube is in front. Middle: The Rubin vase, showing both a
vase and/or two faces. Both figures are examples of pattern rivalry. Right:
Reification example. Reification is the process of regarding something
abstract as a material thing. In this example, a triangle is perceived despite
the fact that there is no concrete triangle, since edges are missing and only
three circular sectors exist in the figure. According to Gestalt psychology,
the global percept can be very different from the sum of the stimulus
constituents.
These phenomena are related to concepts in Gestalt psychology, which aims
to understand how the brain creates meaningful percepts through a combination
of physical constituents (Gestalt meaning form or shape in German). Gestalt
psychology stresses the fact that perception is more than a simple summation
of discrete elements and can create a different percept from the combination
of constituents (often summarized as “The whole is other than the sum of the
parts” by the Gestalt psychologist Kurt Koffka). This idea is also related to
holism, which argues that systems and their functioning must be seen as a
whole in order to be understood, and that understanding is difficult (if not
impossible) when focusing on their component parts alone. This approach can
be seen as the basis for all complex systems theories and the connectionist
approach of understanding cognitive phenomena as behavior emerging from the
interaction of neural components.
These types of “holistic” behavior, namely pattern completion and pattern
rivalry, have been explicitly addressed in Paper 1 (Kaplan and Lansner,
2014). Furthermore, the question of how the visual system, composed of neurons
with limited receptive fields, combines the responses of its constituents to
create a coherent motion percept is addressed in Papers 2 and 3 (Kaplan
et al., 2013, 2014).
4.3.5 Link between olfactory and visual system models
The olfactory and the visual system are very different modalities, so how
is the work on these two systems linked? The work on the two modalities is
united by the driving question of how to make sense of sensory input that
provides a noisy and incomplete stream of information. The overarching goal
is to offer an answer to the question of how nervous systems could possibly
create a global and coherent percept from the fragmented picture provided by
limited receptive fields, even in the presence of noise and incomplete input
signals. In both systems we use specific, i.e. non-random and partly learned,
connectivity to overcome these obstacles and to achieve a higher-level,
behaviorally relevant function.

Besides the structural differences between the networks (one-layer vs.
multi-layer) and the neuron models used, the most important difference lies
in the nature of the stimuli. The olfactory cortex model used attractor
theory based on static patterns, whereas in the visual domain the input is
very dynamic (either because of the movement of a stimulus or because of eye
movements exploring an image or scenery). In the olfactory system model we
applied associative Hebbian-Bayesian learning, whereas in the visual system
models (Papers 2 and 3, Kaplan et al. (2013, 2014)) a static connectivity
was studied.
Despite these dissimilarities, one could regard the function implemented in
the visual system models as performing some sort of pattern “recognition”
on moving stimuli: dynamic patterns of activity are “recognized” by the
network and “completed” in the absence of a stimulus, as demonstrated
through the extrapolation during the blank. The network tries to predict the
subsequent stimulus position by recognizing the features determining the
motion and completing the movement “pattern”. In order for this formulation
to hold, the network dynamics need to be finely tuned so that the internal
dynamics neither die out nor “explode” due to recurrent excitation (the
stimulus representation either runs away or gets stuck). In the pioneering
work by Lashley (1951), the idea of associative chains was suggested as a
solution to the problem of serial order and sequence organization, which can
be related to the sequence of unit activations in visual processing. More
recent psychophysical studies with human subjects revealed that our visual
system is able to integrate a dynamic contour path (rotating bars) when the
temporal gap between contour elements is less than 200 ms (Hall et al., 2014),
which is attributed to horizontal processes within V1, as in primates (Guo
et al., 2007). Hence, the “completion” of a moving stimulus requires some
temporal coherence and presumably lateral interactions among the cells.
4.3.6 Prediction and anticipation
To briefly clarify the terminology, I will address the difference between
anticipation (as used in Kaplan et al. (2014)) and prediction (as used in
Kaplan et al. (2013)). The two terms can be widely synonymous; however,
anticipation as used in this thesis describes the state of expecting an event
(or a future stimulus), whereas prediction describes the interpretation of
network activity (i.e. the output or function). In other words, anticipation
focuses on the perspective of one or several neurons (as a reaction to an
approaching stimulus), whereas prediction reflects the act of a network
forecasting or extrapolating recently integrated information into the future.
Hence, one could phrase the difference as anticipation describing the
“passive” act of receiving information, carried out by a subset of the
network, and prediction as an “active” task in which larger parts of the
network are involved. Nevertheless, both phenomena can be modeled by the same
underlying mechanism (anisotropic lateral connectivity) as described later
(see section 5.2).
As motivated in section 1.3, prediction is an important ability of the nervous system and is manifested in many different ways. Examples of processes where prediction plays an important role include the processing of language (both written and heard) when anticipating forthcoming utterances and words. Another example of prediction is carried out by the motor system when one is walking down stairs (in the dark) and expects the next step to be at a certain position, and in the rapid change in motor behavior when this expectation is not fulfilled (e.g. a step is missing) (see the book by Clark (2014) on
4.3. COMPUTATIONS IN SENSORY SYSTEMS
links between perception and prediction). As pointed out earlier, visual perception also relies heavily on prediction, which takes place in the lowest sensory areas like the retina (Berry et al., 1999; Hosoya et al., 2005) and in higher areas (Rao and Ballard, 1999; Enns and Lleras, 2008). Summerfield and de Lange (2014) have shown that expectation modulates neural signals in perception and decisions in the visual system. Rauss et al. (2011) argue that early visual processing in humans can be embedded in a predictive coding framework. As Lee and Mumford (2003) point out, prediction is one aspect, besides resonance (Carpenter and Grossberg, 2010) and competition (between different hypotheses), of a view of visual processing and the hierarchical structure of the cortex as an implementation of Bayesian inference.
Hence, the idea that the nervous system continuously predicts future inputs and potential rewards provides a framework for brain theories acting on different scales (on a coarser scale (Friston and Kiebel, 2009; Friston, 2009), and on the single-cell level (Fiorillo, 2008)). The predictive coding hypothesis assumes that the brain tries to anticipate future incoming stimuli, and
various implementation models have been suggested (Rao and Ballard, 1999;
Spratling, 2010), partly relying on hierarchical Bayesian inference algorithms
(Lee and Mumford, 2003; Bastos et al., 2012). From an engineering perspective, the Kalman filtering framework is a standard method to deal with the problem of predicting future inputs from the history of previous inputs (Kalman, 1960), and approaches for how neural systems could implement Kalman filters have been presented by Iiguni et al. (1992); Haykin et al. (2001); Deneve et al. (2007); de Xivry et al. (2013). However, these approaches employed rather abstract formulations and the neurophysiological basis for predictive coding remains elusive (see Mauk and Buonomano (2004) for a review on temporal processing as a requirement for predictive capabilities).
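To make the engineering perspective concrete, the prediction-update cycle of a scalar Kalman filter can be sketched as follows. This is a minimal illustration of the principle, not any of the cited neural implementations; the noise parameters are illustrative assumptions.

```python
# Minimal 1D Kalman filter tracking a noisy stimulus position, illustrating
# prediction from the history of previous inputs (Kalman, 1960).
# q and r (process and measurement noise) are illustrative values.

def kalman_1d(observations, q=0.01, r=0.5):
    """Return the filtered position estimates for a list of noisy observations."""
    x, p = observations[0], 1.0   # initial state estimate and its variance
    estimates = [x]
    for z in observations[1:]:
        # Predict step: the state estimate carries over, uncertainty grows
        p = p + q
        # Update step: blend the prediction with the new observation
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

Each new estimate is a compromise between the predicted state and the latest observation, weighted by the Kalman gain; the same predict-then-correct structure underlies the neural proposals cited above.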
When trying to make judgments under uncertain conditions, we usually
base estimations on past observations. In other words, the learning process
consists of observing and collecting data (updating priors about the observed
distribution) and prediction can be understood as an inference based on the
observed or learned relationships. However, it is well known that humans
are not statistically optimal predictors when it comes to judgments under
uncertainty (Kahneman and Tversky, 1973; Tversky and Kahneman, 1974)
and little is known about how predictive behavior emerges on
a biophysically plausible basis. Theoretical studies suggest that long-term
potentiation is functionally significant for the learning of sequences and prediction thereof (Abbott and Blum, 1996). Rao and Sejnowski (2003) present
a biophysical model implementing temporal-difference learning as a basis for
predictive behavior inspired by conditioning in bees. Seriès and Seitz (2013) offer an interesting review of how prediction in visual perception is represented in neural data, how it relates to Bayesian frameworks, and how predictions of individuals develop given changing distributions, that is, how short-term learning changes perceptual biases and thereby the estimation behavior. Therein, open questions are formulated that need to be addressed, e.g. where, how and when priors and likelihoods are integrated, updated and encoded, in order to bridge the gap between theoretical models and physiological and behavioral data.
4.4 Function and connectivity
The purpose of this section is to address the question of how function and behavior can be linked to structure or anatomy, and to give some experimental evidence supporting the idea of connectionism.
Brain imaging technology has advanced our understanding of brain organization with increasing resolution of neural structures (Filler, 2009). Many imaging methods report a change of metabolism during specific tasks (Raichle and Mintun, 2006), which has led to the notion that the brain is constructed of different “functional” modules that are predominantly involved in solving the given task (Sarter et al., 1996; Biswal et al., 2010).
Continuous progress in imaging technology and analysis methods argues for an improved capability to infer cognitive processes or mental states from imaging data gained by various methods (Haynes and Rees, 2006; Poldrack, 2006), for example linking prefrontal cortex to memory tasks (D’Esposito et al., 2000), just to name one basic example. However, the localization of function cannot always be formulated as an isomorphism (an unambiguous link from task → brain structure), as there is not always a clear inverse (explicit and unique) mapping (brain region ↛ task): every brain region is involved in a multitude of tasks (Sarter et al., 1996), and localization or imaging data is very noisy and subject to strong individual variations (Brett et al., 2002; Fornito et al., 2013).
Nevertheless, it is by now widely accepted that brain structures are somehow related to different functions; the question then arises how this link emerges. In order to better understand the link between structure and function, researchers distinguish the above-mentioned functional connectivity (correlation-based activation on a coarse temporal and spatial scale) from structural connectivity (the mere existence of anatomical fibres) and, more recently, dynamic or effective connectivity, which focuses on the dynamics of the
imaging signal in order to extract the temporal order of local activations and to
infer causal relationships between areas (Friston, 2011; Hutchison et al., 2013;
van den Heuvel and Sporns, 2013). In order to address the question of causality (in the sequence of activations), one approach is to apply Granger causality
and dynamic causal modeling to deduce directed connectivity between brain
regions (Friston et al., 2013). While human brain imaging methods have to
be largely non-invasive, a more fine-grained picture is required
to understand the linkage between structure and function on a deeper and
mechanistic level (Denk et al., 2012; Bargmann and Marder, 2013; Fishell and
Heintz, 2013). For example, it is more informative to understand the neural dynamics and emerging connectivity patterns (including computational
models thereof, see (Honey et al., 2010) for a review on the structure-function
question) involved in processes like motion perception (Field et al., 2010; Briggman et al., 2011) than to merely observe that several areas (including V1,
MT and MST) are involved.
In the following sections, two opposing (yet not mutually exclusive) attempts to explain how the sought-after structure-function relationships could emerge will be discussed. One approach draws on observations of pre-wired connectivity; the other focuses on the plasticity of connectivity and the importance of learning.
4.4.1 Genetics and tendencies of pre-wiring in the brain
As it seems from imaging studies, the brain follows a highly modular organization and distributes processing across specialized areas in a task-dependent manner (Phillips et al., 1984; Biswal et al., 2010). Analysis of brain connectivity data indicates that this modularization is also reflected in the network structures, resembling ‘small-world’ features in brain connectivity (Hilgetag and Kaiser, 2004; Sporns et al., 2004; Bullmore and Sporns, 2009). Where does this specialization or functional segregation into brain modules come from, and what are the advantages thereof? (see also Kingsbury and Finlay (2001))
The modular and hierarchical network organization as described by network connectivity is credited with advantages including greater adaptivity, robustness, and evolvability of network function (Meunier et al., 2010). Hence, there is reason to believe that this functional segregation and the advantages linked to it are a product of evolution (Redies and Puelles, 2001) and inherited across generations, since the layout of task-specific brain modules is very similar across the population (Meunier et al., 2009b). It seems very unlikely that this specialized, modular organization relies solely on self-organization
mechanisms as humans learn and experience in very different ways, but their
brains appear similar on that very coarse scale. However, the assumption that
this (inherited) functional specialization enables all higher cognitive abilities
(other than sensory and motor processes) is controversial (Buller, 2005; Mahon
and Cantlon, 2011). A contrasting view is that modules possess more general computational properties (e.g. associative learning). As Barrett (2012) argues, the mechanisms determining brain specialization are unlikely to be only one of the two options (specialized mechanisms leading to innate, domain-specific brain systems isolated from other brain systems, versus generalized mechanisms leading to developmentally plastic, domain-general, and interactive brain systems), but rather a (hierarchical) mix of the two. In principle,
this question is related to the argument between the view of connectionism
(arguing for the general computational abilities of neural networks) and “classical” cognitive theories focusing on symbol-manipulation modularity (Fodor,
1983) (see (Marcus, 2006) and sections 4.1 and 4.1.1 for a discussion of the two concepts). The question of how semantic representations are organized in the brain is relevant as it helps to interpret the connections and interactions between elements, e.g. when forming higher-level features (“objects”) from lower-level features (e.g. when looking at olfactory bulb to olfactory cortex connections). This is not only relevant for advancing the understanding of the brain in itself, but also for neuromorphic engineering and building
(hierarchical) computational models inspired by the nervous system.
With respect to sensory systems, one well-studied example of modularization is the visual hierarchy (Van Essen and Maunsell, 1983; Ungerleider and Haxby, 1994; Grill-Spector and Malach, 2004). To study the question of the origin and nature of the functional organization in sensory systems, the phenomenon of maps is a fruitful subject of research (see Wandell et al. (2007) for a review on visual maps).
4.4.1.1 Topography
Visual maps (and many other sensory maps) are organized in a topographic
manner, that is the spatial layout of stimulus sensitivity resembles the physical
relationships of the stimulus (for example organization according to position,
or tone frequency (Zhang et al., 2003)). As defined by Feldheim and O’Leary
(2010): “Topographic maps are a two-dimensional representation of one neural
structure within another and serve as the main strategy to organize sensory
information.” Map formation has also been observed in the olfactory system
(Vassar et al., 1994; Meister and Bonhoeffer, 2001; Takahashi et al., 2004),
but whether or not one can speak of a topographic mapping from the sensory
space onto neural tissue is debated (see Auffarth (2013) for a review and a
comparison between vision and olfaction). So how do sensory maps come
about?
Generally speaking, there is a critical period during which sensory systems are highly plastic and the neural response properties are being defined
(Antonini and Stryker, 1993; Hensch, 2005). After that period, plasticity is
generally reduced, and expression of certain proteins (in mice, Lynx1 (Morishita
et al., 2010)) inhibits plasticity and stabilizes the network structure.
Three mechanisms have been proposed to explain the establishment of these initial cortical maps:
1. Molecular guidance cues directing the projections to form sensory maps
(Crowley and Katz, 2002; Huber et al., 2003; Sur and Leamey, 2001;
Skaliora et al., 2000)
2. Activity-dependent mechanisms driven by patterns of either spontaneous or (later) stimulus-evoked activity in the retina and LGN (Yuste et al., 1995; Miller et al., 1999; Wong et al., 1993; Weliky, 1999; Shatz, 1994, 1996) (see Auffarth et al. (2011) for a model focusing on the connectivity from olfactory epithelium to bulb)
3. Statistical models arguing for statistical processes in combination with
the layout of lower level sensors (e.g. retinal ganglion cell mosaic)
(Wassle et al., 1981; Soodak, 1987; Reid et al., 1995; Alonso et al., 2001;
Ringach, 2004, 2007; Paik and Ringach, 2011)
Presumably, all three approaches act in concert (or in sequence) to form the
circuits enabling sensory perception. See Huberman et al. (2008) for a review
on the interplay between molecular cues and activity-dependency and (Price
et al., 2000) for more on mechanisms in cortical development.
The layout of receptive field properties determines the organization of sensory maps. Interestingly, receptive fields are at least to some extent pre-wired,
as stimulus-selective responses (for orientation, direction) have been observed
in thalamus and cortex already before or at eye opening (Hubel and Wiesel,
1963; Chapman and Stryker, 1993; Krug et al., 2001; Rochefort et al., 2011).
However, this selectivity varies across species (Rochefort et al., 2011) and changes after eye-opening, manifested by changes in neural connectivity, which can be understood by combining high-resolution experimental data and computational models of plasticity (Ko et al., 2013). For further reading see
e.g. Cang and Feldheim (2013) who deal with developmental aspects of topographic map formation in the superior colliculus, and references given below.
With respect to olfaction, see Sakano (2010) for a review on map formation
mechanisms in the olfactory system of mice, (Komiyama and Luo, 2006) for
wiring specificity in Drosophila and (Murthy, 2011; Ghosh et al., 2011) for
maps in the olfactory cortex.
Thus, in order to understand the interplay between structure and function, one has to take into account neural dynamics and the plasticity of neural circuits, the latter being believed to be the neurological basis for learning.
4.4.2 Learning and activity dependent connectivity
Despite the pre-wiring tendencies presented above, sensory experience can reshape neural circuits. This reshaping can take place on various levels, ranging from receptive fields in early sensory cortices (Weinberger, 1995; White et al., 2001; Bamford et al., 2010) up to the restructuring of brain modules due to aging (Meunier et al., 2009a) or concept learning (e.g. in primate inferotemporal cortex after learning pattern recognition tasks (Srihasam et al., 2012, 2014); see (Connor, 2014) for a summary of the two studies). The purpose of this section is to present experimental evidence for plasticity and computational implementations thereof.
Since Cajal (y Cajal, 1995) and Sherrington (Sherrington, 1966), we assume that information processing happens via synapses between neurons, and it now seems obvious that the modification of synaptic strengths is fundamental for learning. The first notion that learning is based on temporal correlation was formulated by James in 1890 (James, 1984) and later by Konorski (1948) and Hebb (1949). Since then, the idea that connections between neurons form cell assemblies, and that these connections change dynamically, has guided neuroscientific research and connectionist modeling.
First experimental evidence was gained in the hippocampus (Lomo, 1966) of anesthetized (Bliss and Lømo, 1973) and unanesthetized (Bliss and Gardner-Medwin, 1973) rabbits, after a potentiation of synaptic transmission was observed following stimulation of the perforant path (connections from entorhinal cortex to hippocampus). Other early experiments are reviewed in (Buonomano and Merzenich, 1998).
The hippocampus, as a key structure in learning and memory (Squire, 1992, 2009), has served as a fruitful target site to demonstrate long-term potentiation (LTP) (see (Bliss et al., 1993) for a review), but changes of synaptic efficacy have been observed in other brain regions as well. For example, Holtmaat and Svoboda (2009) provide a review on experience-dependent synaptic plasticity, and White and Fitzpatrick (2007) review experience-based modifications of visual and cortical maps.
Due to the induction protocol of LTP (induced pre- and post-synaptic firing leading to increased synaptic strength as measured in the post-synaptic potential), synaptic plasticity is assumed to be involved in functional changes and memory formation (see Takeuchi et al. (2014) for a review on methods and analysis of this hypothesis). Depending on the timing of the pre-synaptic spikes (sent by the “source” neuron) and the post-synaptic spikes (“target” neuron), weights can be up- or down-regulated (Markram et al., 1997; Zhang et al., 1998). This “Hebbian” characteristic of joint source and target activation led to the development of two-factor learning rules serving as a computational model to describe synaptic plasticity phenomenologically. First, Grossberg (1976); Bienenstock et al. (1982); Kohonen (1988) presented rate-based formulations of the learning process. Later, the precise timing of pre- and post-synaptic spikes was taken into account, which allows modeling of Hebbian and anti-Hebbian forms of LTP and long-term depression (Gerstner et al., 1993, 1996; Kempter et al., 1999; Van Rossum et al., 2000; Song et al., 2000; Dayan and Abbott, 2001); see Morrison et al. (2007); Markram et al. (2012); Sjöström and Gerstner (2010) for reviews on STDP models and Izhikevich and Desai (2003) for a link between the rate-based Bienenstock-Cooper-Munro rule and STDP.
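The shape of such a pair-based two-factor rule can be sketched as an exponential STDP window; the amplitudes and time constants below are illustrative assumptions in the spirit of the models cited above (e.g. Song et al., 2000), not values fitted to data.

```python
import math

# Pair-based STDP window: the weight change depends only on the timing
# difference between one pre- and one post-synaptic spike.
# a_plus/a_minus and tau_plus/tau_minus are illustrative parameters.

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for dt = t_post - t_pre (in ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)     # LTP branch
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)   # LTD branch
    return 0.0
```

The exponential decay captures the experimental observation that tightly correlated spike pairs change the synapse more than loosely correlated ones.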
However, several experiments have confirmed that spike timing is not the only factor determining the direction of the weight change. Other factors include the rate (Sjöström et al., 2001), dendritic location (Froemke et al., 2005), synaptic strength and cell type (Bi and Poo, 1998; De Paola et al., 2006) (see for example Feldman (2012); Dan and Poo (2004); Caporale and Dan (2008) for reviews on STDP and Lamprecht and LeDoux (2004) for a review on the molecular mechanisms involved in structural plasticity). Hence, the criticism against the simplifications of the “classical” local two-factor STDP rule is strong (Lisman and Spruston, 2005), and learning rules remain a subject of research aiming to incorporate more neurophysiological detail (Lisman and Spruston, 2010). Thus, more and more elaborate learning rules (three-factor rules) have been developed, including additional factors which can better match experimental data. For example, Pfister and Gerstner (2006) present a “triplet rule” involving a third spike, Senn et al. (2001) incorporate neurotransmitter release probability, Clopath et al. (2009) address voltage effects, Izhikevich (2007b) includes dopamine as a reward signal, and Tully et al. (2014) also consider intrinsic excitability and reward signals. In order to also incorporate the receptor type, to capture learning phenomena seen in the basal ganglia system, learning rules need to be extended by a fourth factor (Gurney et al., 2015) representing the receptor type on the target synapse (on top of the other three
factors, which are the pre-synaptic and post-synaptic spike timing and the reward signal). Another desirable aspect of a learning rule is homeostasis (Turrigiano and Nelson, 2000, 2004), that is, the stabilization of network activity acting against the positive feedback loop introduced by Hebbian weight dynamics, which can lead to an explosion of activity.
Furthermore, plasticity that is not constrained to the location of the synapse has been observed, which modifies the “intrinsic excitability” of neurons (Debanne et al., 1999; Desai et al., 1999), making them more or less responsive to stimuli. “This form of plasticity depends on the regulation of voltage-gated ion channels” in the neuron, as reviewed in (Debanne and Poo, 2010), and is believed to play an important role for memory and learning as well (Mozzachiodi and Byrne, 2010). Daoudal and Debanne (2003) present a review of learning rules and mechanisms combining LTP and intrinsic excitability.
Another aspect that needs to be considered for a more complete picture of
learning mechanisms shaping functional connectivity is the change of plasticity
over time, called metaplasticity (Abraham and Bear, 1996; Abraham, 2008).
Metaplasticity is based on the idea that a synapse’s history of activity influences its ability to change weights and could be related to synaptic tagging
(Frey and Morris, 1997; Redondo and Morris, 2011). Hence, metaplasticity
adds another level of complexity both to the interpretation of experimental
data and the formulation of adequate computational models (see Clopath et al.
(2008) for a model of synaptic tagging).
4.4.3 Attractor networks and other approaches
The term attractor originates from dynamical systems theory and represents a preferred state into which the system moves (in contrast to transient states). In this context, an attractor is a subset of the phase space of the system and represents the corresponding typical behavior of the system. Attractors may contain different types of limit sets (states the system reaches after an infinite amount of time), like fixed points, periodic orbits or limit cycles. For a more in-depth introduction see for example (Milnor, 2006; Teschl, 2012).
In neuroscience, the concept of attractors has been applied to describe
different phenomena and functions in various brain systems (Amit, 1992; Rolls
and Deco, 2010). In the context of memory, one possibility is to see Hebb’s
idea of (dynamically) connected cell assemblies acting as attractors (Lansner
and Fransén, 1992; Lansner, 2009).
Attractor dynamics can be related to various phenomena, including items
in working memory (Lansner et al., 2013), long-term memory patterns in an
associative memory (introduced by Hopfield (1982) as Hopfield networks),
stimulus evoked and spontaneous activity, or activity related to navigation.
Knierim and Zhang (2012) review the application of and experimental support
for (and against) attractor dynamics in the limbic system of the rat including
experiments in hippocampus and entorhinal cortex, where “head-direction
cells have been modeled as a ring attractor, grid cells as a plane attractor,
and place cells both as a plane attractor and as a point attractor”. Perceptual
phenomena like multi-stability and pattern rivalry (see section 4.3.4) can be
related to attractor networks as a conceptual overarching framework. Rolls
(2007) presents theory and experiments in the context of learning and memory
formation in hippocampus. Rolls and Deco (2002) illustrate the use of (continuous) attractor network models applied to visual perception. Furthermore,
Goldberg et al. (2004) and Ben-Yishai et al. (1995) apply attractor theory
in visual perception phenomena. Attractor models have also been applied to
model attentional phenomena (Silverstein and Lansner, 2011), and categorization and decision making (Rolls and Deco, 2010; Volpi et al., 2014). Latham
et al. (2003) show that attractor networks can be used to estimate stimulus variables in the presence of noise. However, their approach relies on the construction of networks that are described by abstract equations (abstract meaning that the variables cannot be easily linked to neural processes).
In contrast to the application of attractor theory as an abstract conceptual framework, the approach employed in this thesis is intended to be linked more closely to biophysical processes than the previously named studies. This is done in the olfactory system model by the following two approaches. First, by using detailed neuron models to implement attractor dynamics in a biophysically plausible framework. Second, the employed learning rule (explained in section 4.4.4) leads to an attractor network and can be linked to neurophysiological processes due to its Hebbian nature (Tully et al. (2014) explain a spike-based implementation thereof). Previous work has been devoted to studying biophysical phenomena like oscillations emerging in modular attractor networks and their role in memory (Lundqvist et al., 2006, 2013; Lundqvist, 2013). Moreover, the roles of different (inhibitory) cell types in attractor dynamics have been studied by Krishnamurthy et al. (2012).
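To make the notion of attractor-based pattern completion concrete, a toy Hopfield network (Hopfield, 1982) can be sketched. This is a minimal binary illustration of the concept, not the biophysically detailed models used in this thesis; the pattern size and update schedule are illustrative choices.

```python
# Toy Hopfield network: Hebbian storage and attractor-based pattern
# completion. Patterns are vectors of +1/-1; a corrupted cue relaxes
# back to the stored pattern, a fixed-point attractor of the dynamics.

def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]      # outer-product (Hebbian) rule
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                 # no self-connections
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, cue, sweeps=5):
    s = list(cue)
    for _ in range(sweeps):
        for i in range(len(s)):            # sequential asynchronous updates
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s
```

For a single stored pattern, flipping one element of the cue and running `recall` recovers the original pattern: the corrupted state lies in the basin of attraction of the stored memory.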
Hopfield networks incorporate an energy function defined by the network’s weight matrix and the units’ activation. Attractor states are then defined as minima in the energy landscape to which the system converges, e.g. when the network activity represents a certain pattern. Similar to this concept, Boltzmann machines (BM) also consist of recurrently connected binary units that represent the likelihood of certain features (e.g. black/white pixels of an
image) (Hinton and Sejnowski, 1983a,b). BMs have an energy function which determines the probability of a unit being in the respective state (on/off), and weights can be trained (also using Hebbian local learning) so that the network represents certain input data (after having reached the “thermal equilibrium”) (Ackley et al., 1985), for example images. Due to the iterative probabilistic training procedure (a Markov random field), BMs can be regarded as a Monte Carlo implementation of Hopfield networks. More recently, the principal idea of BMs has become very popular in practical applications involving inference and classification, due to the restricted connectivity in Restricted Boltzmann Machines (RBM) (leading to a significant speed-up) and advances in learning algorithms (Salakhutdinov and Hinton, 2009).
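The probabilistic unit update that follows from the BM energy function can be sketched as below; the weights, bias and temperature in any concrete usage are illustrative assumptions.

```python
import math

# Stochastic update of one binary unit in a Boltzmann machine: the total
# input from connected units (the energy gap between the unit being on
# and off) sets the on-probability via a logistic function scaled by the
# temperature T.

def unit_on_probability(weights_i, states, bias_i=0.0, temperature=1.0):
    """P(s_i = 1) given the states of the units connected to unit i."""
    gap = bias_i + sum(w * s for w, s in zip(weights_i, states))
    return 1.0 / (1.0 + math.exp(-gap / temperature))
```

With zero net input the unit is on or off with equal probability; strong positive or negative input pushes the probability toward 1 or 0, and repeated sampling of all units drives the network toward its thermal equilibrium distribution.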
Furthermore, several one-layered RBMs can be stacked on top of each other as building blocks for so-called Deep Belief Networks (DBN) (Hinton et al., 2006), which have an improved power to model the input data (Le Roux and Bengio, 2008). For more on statistical learning see, for example, Hastie et al. (2009).
In contrast to the concept of performing computations with stable states to which the system converges after training, ideas called “echo state networks” (Jaeger, 2001) or “liquid state machines” (LSM) (Maass et al., 2002) have been proposed. The idea behind this type of network is to train a population of linear discriminant units to perform mathematical computations by sampling from the activity exhibited in a network of randomly, recurrently connected units (see Jaeger (2007) for a review). However, despite the fact that some brain functions can be approximated with this approach, it does not explain brain functioning, as it is not guaranteed to yield insights into how and why the network performs the desired computations. Furthermore, from an application-focused point of view, the learning process cannot be well controlled and is (arguably) inefficient due to the costly simulation of a neural network compared to other approaches (for example, a bucket of water (Fernando and Sojakka, 2003)).
Another approach employs chaotic dynamics in networks (Aihara et al., 1990) and has been used to model associative memory (Adachi and Aihara, 1997) and pattern recognition in an olfactory context (Kozma and Freeman, 2001; Li et al., 2005). These types of models “have advantages on computational time and memory for numerical analyses because the complex dynamics of the neurons, including deterministic chaos, is described by simple and deterministic difference equations rather than the models of differential equations or stochastic process” (Adachi and Aihara, 1997), as compared to most biophysically detailed models.
The above-named approaches (RBM, LSM, DBN, chaos computing) are interesting for studying the computational capabilities of systems, and hence for specific applications or algorithmic purposes, but may not be motivated so easily from a biological perspective. That is, the underlying mechanisms, especially concerning the learning process, cannot be linked to the neurophysiological substrate in a straightforward way. This is why it is important to find biophysically plausible answers to the questions of how the brain functions and how it learns to exert these functions. The next section will briefly introduce a learning algorithm for neural networks which is appealing both from an algorithmic, computational perspective and biophysically, due to its local Hebbian nature (see Tully et al. (2014) for a more extensive introduction and biological motivation).
4.4.4 Bayesian confidence propagation neural network (BCPNN)
In this thesis, BCPNN learning has been used in the olfactory system model for training the connection weights between olfactory bulb and olfactory cortex, and for the recurrent connectivity within olfactory cortex, on the basis of spike rates.
BCPNN was initially proposed by Lansner and Ekeberg (1989) as a one-layer, feed-forward neural network implementing a learning rule inspired by Bayes’ theorem. The fundamental idea is to take a probabilistic perspective on the learning and recall process. According to this perspective, learning means gathering data, observing and collecting statistics, and recall is a statistical inference process. The activation of units in the network represents the probability or confidence of observing a certain event, input feature or category. Synaptic weights between units are estimated from the activation statistics, relating the joint or co-activation to the overall activation statistics.
The spread of activation between units corresponds to calculating the subsequent (posterior) probabilities. In the words of Holst (1997): “The idea is to make the activities of the output units equal to the probabilities of the corresponding classes given the attributes represented by the stimulated input units” (classes meaning possible output values or events, referring to the classification context).
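This relation between co-activation and individual activation statistics can be sketched as a weight estimate of the form $w_{ij} = \log\left(P_{ij}/(P_i P_j)\right)$ (cf. Lansner and Ekeberg, 1989); the counting scheme and the regularization constant below are illustrative assumptions.

```python
import math

# Sketch of a BCPNN-style weight estimate from activity counts: units that
# are co-active more often than expected under independence get a positive
# weight, units co-active less often get a negative weight.
# eps guards against taking the logarithm of zero.

def bcpnn_weight(n_i, n_j, n_ij, n_total, eps=1e-6):
    """w_ij = log( P(i,j) / (P(i) P(j)) ), estimated from counts."""
    p_i = max(n_i / n_total, eps)      # marginal activation of unit i
    p_j = max(n_j / n_total, eps)      # marginal activation of unit j
    p_ij = max(n_ij / n_total, eps)    # joint activation of i and j
    return math.log(p_ij / (p_i * p_j))
```

Statistically independent units thus receive a weight of zero, so only genuine correlations in the training data shape the connectivity.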
The neuroscientific motivation for taking a probabilistic or Bayesian perspective on learning and brain functioning was already outlined in section 4.2.5. A mathematical motivation for using a Bayesian approach to classification is that the Bayesian classifier minimizes the classification error probability
CHAPTER 4. THEORETICAL BACKGROUND
This holds under uncertainty in the observations (due to noise etc., reflected in statistical variations); see for example Theodoridis and Koutroumbas (2006) for proofs. A BCPNN (i.e. a network trained with the BCPNN learning rule) is inspired by the naive Bayesian classifier. Thus, BCPNN serves as a guiding framework for the development of learning rules combining the computational appeal of Bayesian inference with biophysical plausibility, particularly in the spiking implementation of BCPNN described in Tully et al. (2014).
Bayes' rule gives a framework for the optimal estimation of stochastic variables (y_j, later representing target classes) under consideration of prior knowledge and observations (x_i):

P(y_j | x_{1...n}) = P(y_j) \frac{P(x_{1...n} | y_j)}{P(x_{1...n})}    (4.5)

where

• P(y_j | x_{1...n}) is the desired information, the conditional probability that y_j is true when x_{1...n} has been observed,

• P(y_j) and P(x_{1...n}) are the probabilities of (observing) y_j and x_{1...n}, respectively,

• P(x_{1...n} | y_j) is the conditional probability of observing x_{1...n} given that y_j is true.
The weight update rule of the BCPNN algorithm is inspired by Bayes' theorem, as the short derivation below shows.
4.4.4.1 Sketch of a BCPNN derivation
One important step is to assume that the events or features x_i are independent of each other. As becomes clear from the BCPNN derivation below, it is crucial to be able to write the probabilities P(x_{1...n} | y_j)/P(x_{1...n}) in equation 4.5 as a product, which is only possible if the x_i are independent:

P(y_j | x_{1...n}) = P(y_j) \frac{P(x_1 | y_j)}{P(x_1)} \frac{P(x_2 | y_j)}{P(x_2)} \cdots \frac{P(x_n | y_j)}{P(x_n)}    (4.6)
However, in the real world this may not always be true. Hence, as described by Holst (1997), one possibility is to introduce an additional layer (a "hidden" layer, corresponding to the olfactory cortex in the model described in Kaplan and Lansner (2014)). This additional layer contains complex columns or hypercolumns that code for features which are not independent of each other.
4.4. FUNCTION AND CONNECTIVITY
Examples of such non-independent features are the concentration information transmitted by cells that express or code for one receptor, or common features of "fruity" odorants; the complex columns thereby incorporate the dependency structure of the input variables. These complex columns are (as also described in Holst (1997), chapter 2.4.5, or Lansner et al. (2009)) constructed based on the correlation between attributes (represented by the input units, in the olfactory system model the mitral cells), using the mutual information between the attributes. Hence, a mathematical motivation for using a modular structure (not only in the olfactory cortex presented here, but possibly in a more general sense) is to embed the dependence of the non-independent units in the network structure (see Lansner and Holst (1996); Holst (1997) for a mathematically rigorous and deeper investigation of the independence issue and the construction of complex columns).
One could think of one hypercolumn coding for a certain odor attribute (e.g. fruitiness), with the minicolumns within that hypercolumn coding for its different values (or occurrences), e.g. sweet like mandarin, sour like citrus, or fresh like apple. At an early stage of the visual hierarchy (like V1), the attribute of a pattern could represent the orientation of an edge or bar stimulus at a certain position, i.e. a hypercolumn would code for the position and the orientation would be coded by an "orientation minicolumn" (Mountcastle, 1997; Li et al., 2012). Seen from a probabilistic point of view (possibly implemented at a higher level of the visual hierarchy that is involved in object recognition), a hypercolumn would represent an attribute of an object (like size, color or shape) and the minicolumns the values of this attribute.
If one assumes a modular network topology with H hypercolumns, equation 4.6 becomes

P(y_j | x_{1...n}) = P(y_j) \prod_{h=1}^{H} \sum_{i=1}^{n_h} \pi_{x_{hi}} \frac{P(x_{hi} | y_j)}{P(x_{hi})}    (4.7)
where n_h is the number of minicolumns per hypercolumn (a minicolumn being a population of neurons coding for the same attribute value), and \pi_{x_{hi}} is the relative activity representing the certainty or "confidence" regarding the attribute value x_{hi}. The relative activity represents a minicolumn's activity compared to that of the other minicolumns and, following the probabilistic interpretation, corresponds to the certainty of having observed a given feature.

Equation 4.7 indicates an important process taking place within one hypercolumn, namely that the activities (representing (un)certainties) of the minicolumns must be normalized, so that a complex attribute h is represented by the relative activations \pi_{x_{hi}} of all minicolumns within that hypercolumn h and the probabilistic interpretation holds:
\sum_{i=1}^{n_h} \pi_{x_{hi}} = 1    (4.8)
This normalization can be realized by a strict or soft winner-take-all (WTA) operation, which can be implemented by lateral inhibition as described in section 4.3.2. However, it should be noted that spiking implementations of this model or framework are only an approximation, as the normalization may not work as straightforwardly in a spiking network as desired from the mathematical point of view.
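The within-hypercolumn normalization of equation 4.8 can be sketched as a softmax (soft winner-take-all) over the minicolumn activities of each hypercolumn. This is an illustrative sketch only (function and variable names are not from the thesis); a strict WTA would instead set the maximally supported minicolumn to one and all others to zero:

```python
import numpy as np

def soft_wta(support, hypercolumn_sizes):
    """Normalize minicolumn activities within each hypercolumn (eq. 4.8).

    support: raw minicolumn support values, concatenated hypercolumn by
    hypercolumn. hypercolumn_sizes: the number of minicolumns n_h in each
    hypercolumn. Returns activities pi that sum to one inside every
    hypercolumn, as the probabilistic interpretation requires.
    """
    pi = np.empty_like(support, dtype=float)
    start = 0
    for n_h in hypercolumn_sizes:
        s = support[start:start + n_h]
        e = np.exp(s - s.max())   # subtract the max for numerical stability
        pi[start:start + n_h] = e / e.sum()
        start += n_h
    return pi

# two hypercolumns with 3 and 2 minicolumns, respectively
pi = soft_wta(np.array([1.0, 2.0, 0.5, 0.0, 3.0]), [3, 2])
```

In a spiking network this normalization would instead be approximated by lateral inhibition, as noted above.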
By taking the logarithm of equation 4.7, the products become an additive operation:

\log(P(y_j | x_{1...n})) = \log(P(y_j)) + \sum_{h=1}^{H} \log \left[ \sum_{i=1}^{n_h} \pi_{x_{hi}} \frac{P(x_{hi} | y_j)}{P(x_{hi})} \right]    (4.9)
The idea of identifying the terms given above with a neural network implementation was introduced in Lansner and Ekeberg (1989), but without the notion of hypercolumns, which was introduced later in Lansner and Holst (1996). In particular, network units integrate support values s_j from the activations or outputs \pi_{x_{hi}} of other units in the network through weights w_{hi j}:

s_j = \beta_j + \sum_{h=1}^{H} \log \left( \sum_{i=1}^{n_h} w_{hi j} \, \pi_{x_{hi}} \right)    (4.10)

with a bias value \beta_j representing the prior knowledge about the likelihood of event or class j occurring,

\beta_j = \log(P(y_j))    (4.11)

and a connection weight determined by the coactivation of units hi and j:

w_{hi j} = \frac{P(x_{hi} | y_j)}{P(x_{hi})}    (4.12)
The mapping between the probabilities of activation P(x_{hi} | y_j), P(x_{hi}) and the weight changes when the following assumption is made. Provided that the attributes defining a feature are mutually exclusive (e.g. a visual stimulus at a single position can only have one orientation), that is, within each hypercolumn only one minicolumn is active (implemented through a winner-take-all mechanism), the sum over the contributions from all the minicolumns within a hypercolumn h (\sum_{i=1}^{n_h} in equations 4.7, 4.9 and 4.10) contains only one non-zero element and reduces to a single element i^*. Hence, the support can be rewritten as

s_j = \beta_j + \sum_{h=1}^{H} \log \left( w_{hi^* j} \, \pi_{x_{hi^*}} \right)    (4.13)

where hi^* represents the minicolumn coding for the dominant value in hypercolumn h. Accordingly, the weight between two units can be identified as the logarithm of the probability ratio:

w_{hi^* j} = \log \left( \frac{P(x_{hi^*} | y_j)}{P(x_{hi^*})} \right)    (4.14)
which is used in the spiking version of the learning rule (see figure 4.5 below and Tully et al. (2014)). After a unit has integrated the network activity, a normalization needs to be performed (within a hypercolumn) so that the probabilistic interpretation holds. See also Holst (1997); Sandberg et al. (2002); Sandberg (2003) for more detailed derivations.
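For the rate-based case, estimating the bias (equation 4.11) and weights from activation statistics can be sketched as follows. This is a minimal illustration assuming binary activation patterns; note that P(x_i | y_j)/P(x_i) = p_{ij}/(p_i p_j), so the logarithmic weight of equation 4.14 can be computed from unit and co-activation probabilities. The function name and the regularization constant eps are assumptions for illustration, not the thesis code:

```python
import numpy as np

def bcpnn_weights(X, eps=1e-6):
    """Estimate BCPNN bias and weights from binary activation patterns.

    X: (n_patterns, n_units) array of 0/1 activations.
    bias_j = log p_j (eq. 4.11); W_ij = log(p_ij / (p_i * p_j)), which
    equals eq. 4.14 since P(x_i|y_j)/P(x_i) = p_ij/(p_i * p_j).
    eps is a small regularizer to avoid log(0) (an assumption).
    """
    X = np.asarray(X, dtype=float)
    p = X.mean(axis=0) + eps                 # unit activation probabilities
    p_joint = (X.T @ X) / X.shape[0] + eps   # co-activation probabilities
    bias = np.log(p)
    W = np.log(p_joint / np.outer(p, p))
    return bias, W

# Units 0 and 1 are always co-active -> positive weight;
# units 0 and 2 never co-occur -> strongly negative weight.
X = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 0, 1]])
bias, W = bcpnn_weights(X)
```

The sign of the resulting weights reflects whether two units are co-activated more or less often than expected under independence.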
4.4.4.2 Spiking version of BCPNN
The BCPNN learning rule allows a probabilistic interpretation of spiking activity, in which output rates correspond to the posterior probability of observing an attribute. In the spiking implementation described in Tully et al. (2014), the spike trains are filtered (with exponential kernels of different time constants) so that the spiking activity can be interpreted as a probability of activation after temporal averaging (exponentially weighted moving averages). This introduces a number of free parameters which can be related to various neurophysiological phenomena. The time constants introduced through the filtering of spike trains make it possible to control, for example, the stability of the learned weights (which represent the memory) and to implement delayed reward learning by adding an additional trace which corresponds to an eligibility trace. Furthermore, this allows controlling the time window within which correlations are detected, as a short first time constant for the filtering would only detect correlations on a short timescale (as with the "standard STDP" rule mentioned before). The above-mentioned bias (equation 4.11) can be related to neuronal intrinsic excitability with a probabilistic interpretation (Bergel, 2010). The learning rate can be modulated by varying the parameter κ and can be related to dopamine signals (Berthet et al., 2012; Berthet and Lansner, 2014).
An example of a cell pair spiking at different times (spikes are 100 ms apart) is shown below in figure 4.5. The figure shows the development of the different traces involved in the BCPNN learning algorithm for two different values of the parameter τ_{z,i}, which controls the learning window. The variables related to the pre-synaptic (source) neuron i are represented in blue, and variables related to the post-synaptic (target) neuron j are depicted in green. The source neuron fires spikes at times t_i = 220, 230, 240 ms and the target neuron fires three spikes at times t_j = 320, 330, 340 ms, i.e. ΔT = 100 ms later. Each spike train is filtered with its own exponentially decaying function (with time constants τ_{z,i} and τ_{z,j}, respectively), yielding the primary synaptic traces z_i(t), z_j(t).
The z-traces (can) get filtered once again with exponential kernels using a single time constant τ_e to obtain e-traces, which can be used as eligibility traces. This is particularly useful for solving the distal reward problem, when learning is reward-dependent and the reward signal is available only after some time (typically longer than ΔT).

The e-traces get filtered again with a time constant τ_p in order to arrive at the p-traces, which can be interpreted as approximations of the probabilities p_i, p_j, and the joint probability p_{i,j}. The p-traces determine the value and development of the weight w_{ij}(t) and hence control the stability and time course (or speed) of learning through the time constant τ_p.
The weight from source to target becomes negative for τ_{z,i} = 10 ms (dashed curve) and positive for τ_{z,i} = 100 ms (solid curves). As this example shows, the time window within which pre- and post-synaptic activity is interpreted as correlated (and, according to Hebb's postulate, should be wired together) is controlled by τ_{z,i}.
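The z → e → p trace cascade can be sketched in a few lines of discrete-time filtering. This is a simplified, illustrative sketch of the scheme in Tully et al. (2014), not the published implementation: the eps floor, the parameter values and the nearly coincident spike times in the usage example are assumptions chosen so that the correlated cell pair produces a growing weight:

```python
import numpy as np

def spiking_bcpnn(spikes_i, spikes_j, dt=1.0, tau_zi=100.0, tau_zj=10.0,
                  tau_e=100.0, tau_p=1000.0, eps=1e-4, kappa=1.0):
    """Sketch of the z -> e -> p trace cascade of spiking BCPNN.

    spikes_i, spikes_j: binary spike trains (1 = spike in that time step).
    Returns the weight trace w(t) = log(p_ij / (p_i * p_j)) with an eps
    floor for numerical stability. Time constants in ms; all parameter
    values here are illustrative assumptions.
    """
    n = len(spikes_i)
    zi = zj = ei = ej = eij = 0.0
    pi = pj = pij = 0.0
    w = np.zeros(n)
    for t in range(n):
        # primary z-traces: low-pass filtered spike trains
        zi += dt * (spikes_i[t] - zi) / tau_zi
        zj += dt * (spikes_j[t] - zj) / tau_zj
        # e-traces (eligibility), including the joint trace from zi * zj
        ei += dt * (zi - ei) / tau_e
        ej += dt * (zj - ej) / tau_e
        eij += dt * (zi * zj - eij) / tau_e
        # p-traces approximate probabilities; kappa gates learning
        pi += dt * kappa * (ei - pi) / tau_p
        pj += dt * kappa * (ej - pj) / tau_p
        pij += dt * kappa * (eij - pij) / tau_p
        w[t] = np.log((pij + eps ** 2) / ((pi + eps) * (pj + eps)))
    return w

# source and target firing in close succession -> weight grows
T = 2000
si = np.zeros(T); sj = np.zeros(T)
si[[220, 230, 240]] = 1
sj[[230, 240, 250]] = 1
w = spiking_bcpnn(si, sj)
```

The bias of equation 4.11 would analogously be computed as log(p_j + eps) from the postsynaptic p-trace, and setting kappa to zero freezes the p-traces and hence the weight, as in the reward-gating example of figure 4.5.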
4.4.4.3 Application examples using BCPNN
Networks trained with the BCPNN learning algorithm have been studied in many different contexts. Examples include classification and expert systems (Holst, 1997), data mining (Orre et al., 2000), attractor memory models (Sandberg, 2003), storage capacity in memory models (Meli and Lansner, 2013), memory consolidation processes (Fiebig, 2012; Fiebig and Lansner, 2014), data analysis in the context of neuroinformatics (Benjaminsson et al., 2010; Benjaminsson, 2013) and activity-dependent map formation mechanisms through axon guidance from olfactory epithelium to bulb (Auffarth et al., 2011). More recently, the spiking learning rule has been modified with the aim of lowering the computational cost and speeding up the computation time for potential applications in real-time learning (Vogginger et al., 2015).

Figure 4.5: The connection weight from a source neuron to a target neuron is learned with the spiking BCPNN learning rule (as implemented in Tully et al. (2014); see main text for explanations). All sub-figures share the same time axis. Top left: The solid blue line shows z_i(t) with τ_{z,i} = 100 ms and the dashed line shows z_i(t) with τ_{z,i} = 10 ms. The solid green line shows z_j(t) with τ_{z,j} = 10 ms. The dotted vertical lines (from 0 to 1) indicate the spike times. Center left: E-traces as derived from the z-traces, filtered with τ_e. The red trace is the joint trace and follows the product of the primary z-traces: z_i(t) × z_j(t). Again, the blue dashed line corresponds to the e-trace computed from the dashed z-trace using τ_{z,i} = 10 ms. Bottom left: The e-traces are filtered with τ_p to calculate the corresponding p-traces. Top right: The learning rate κ controls the update of the p-traces and can be varied over time in order to signal reward and thereby allow or freeze the development of weights. To illustrate this, κ in this example is set to zero after t = 600 ms, which freezes the p-traces and w(t). Center right: Two weight curves w_{ij}(t) are shown, one corresponding to the traces initially filtered with τ_{z,i} = 10 ms and the solid line to the traces computed with τ_{z,i} = 100 ms. Bottom right: A bias value is calculated from p_j(t), which can be linked to the intrinsic excitability of the neuron if interpreted as an additional input current.
Part II

Results and Discussion

Chapter 5

Results
5.1 Results on olfactory system modeling

5.1.1 Paper 1 - A spiking neural network model of self-organized pattern recognition in the early mammalian olfactory system
The purpose of this study was to test a modular attractor memory model in the context of odor object recognition and holistic sensory processing. Here, the main hypothesis and results will be briefly summarized and an additional observation concerning temporal coding will be mentioned, which did not find a place in the original publication (Kaplan and Lansner, 2014).

The system was inspired by the mammalian olfactory system and constructed from biophysically detailed neuron models using the Hodgkin-Huxley formalism published in earlier studies (Davison et al., 2003; Pospischil et al., 2008; Garcia, 2010). It comprises the first three stages of the olfactory processing hierarchy (olfactory epithelium, olfactory bulb and olfactory cortex), with approximately 85000 single- and multi-compartmental cells, and an additional readout layer that indicates the outcome of the given task in the form of spiking rate activity (see figure 2.1).
The functionality of the model was studied in five tasks:

1. Pattern recognition (supervised) (figure 5.1 A, B)
2. Concentration invariance (figure 5.1 C, D)
3. Noise robustness (figure 5.3)
4. Pattern completion with modified temporal input (figures 5.2 and 5.3)
5. Pattern rivalry with modified temporal input (figure 5.4)
The system was trained with 50 different random patterns (each pattern was presented for a duration of ca. 1 second) in a supervised manner by applying a mutual information measure to the mitral/tufted cell responses and the BCPNN learning algorithm to train the connectivity from OB to PC. Recurrent connections within the PC and from PC to the readout layer were likewise trained with BCPNN.
In order to test the robustness of the system against the temporal dynamics of the input signal, we used two different types of stimuli: a "single puff" of ca. 400 ms duration and "sniffing" input consisting of four shorter consecutive input currents (corresponding to a sniffing frequency of approximately 4 Hz, see figure 2 C in Kaplan and Lansner (2014)). The system was always trained with the single-puff input, which was also used for testing in tasks 1, 2 and 3, while the sniffing input was used for tasks 4 and 5.
For the pattern completion task, a random fraction of the ORs (∼ pattern incompleteness) that would have been activated by the complete pattern is silenced. When the incompleteness was varied, a new random set of ORs was silenced.
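A minimal sketch of this silencing procedure (function and variable names are illustrative assumptions, not the original model code):

```python
import numpy as np

def make_incomplete(pattern, incompleteness, rng=None):
    """Silence a random fraction of the active ORs in a binary odor pattern.

    pattern: 1-D 0/1 array over olfactory receptors (ORs).
    incompleteness: fraction of the active ORs to silence
    (0 = complete pattern). Illustrative sketch of the procedure
    described in the text, not the original model code.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = np.array(pattern, dtype=int, copy=True)
    active = np.flatnonzero(out)
    n_silence = int(round(incompleteness * active.size))
    silenced = rng.choice(active, size=n_silence, replace=False)
    out[silenced] = 0
    return out

# 50% incomplete version of a pattern with 20 active ORs
pattern = np.zeros(40, dtype=int); pattern[:20] = 1
incomplete = make_incomplete(pattern, 0.5, rng=np.random.default_rng(0))
```

Drawing a fresh random set of silenced ORs for each incompleteness level corresponds to passing a new random generator (or none) per trial.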
Figure 5.1: Basic pattern recognition and concentration invariance. A: Patterns to train and test the basic pattern recognition capabilities of the system (Task 1). 40 ORs are activated in 50 different patterns. Per pattern, 30-50% of all ORs are activated. B: Readout activity for the pattern recognition test. The 50 patterns shown in A served as input. C: Patterns to train and test concentration invariance. Shown are the first 10 patterns from A with varying concentration (affinity). D: Readout activity in response to the patterns shown in C after training the system with these. Independent of the concentration, all patterns are recognized correctly after the training. Figure and parts of the caption taken from Kaplan and Lansner (2014).
5.1. RESULTS ON OLFACTORY SYSTEM MODELING
The main hypotheses underlying the design of the model were:

• The activation of an artificial receptor is mostly determined by the physico-chemical properties of the odorant molecule, following ideas from the odotope theory (see section 2.2 and Shepherd (1987); Mori (1995)).

• Activity-dependent connectivity from the epithelium, that is, the connections from ORNs to periglomerular and mitral/tufted cells, was precomputed.

• A fuzzy concentration-interval code in the olfactory bulb, leading to mitral/tufted cells in the OB acting as probabilistic sensors for odorant features.

• Rate-based learning for the connections between OB and PC and within the PC.

• The PC acts as an associative attractor network, implemented with a modular structure.

• Learning is supervised in the sense that the system knows how many different odor patterns will be presented and which sensory information belongs to which pattern.

• Modular (columnar) organization in the PC; however, no assumptions are made regarding the spatial layout of minicolumns or hypercolumns. That is, the columnar structure does not rely on the spatial layout but is defined through the connectivity only and hence might not be directly observable in the way orientation columns in V1 are.
5.1.1.1 Summary of results from Paper 1
The proposed model produced the following results in the five tasks named above. The framework for deriving the OB-OC and OC-OC connectivities was shown to successfully accomplish pattern recognition after single-exposure training (each pattern was presented only once).
When trained with different concentrations, the system is able to perform concentration-invariant recognition. However, when the system was trained with only one concentration and tested with different concentrations, a few patterns were misclassified. Hence, we conclude that exposure to patterns at varying concentrations during training is crucial for achieving concentration-invariant odor recognition. We assume this happens naturally in a real system through time-varying odor intensity (and hence variable concentrations) when breathing in and sensing odor inputs.
The system showed some robustness to small degrees of noise, but performance dropped at higher noise levels (see the blue curve in figure 5.3). Still, performance was well above chance level (0.02) even at higher noise levels.
Despite an additionally modified temporal structure of the input stimuli during the pattern completion task, the system was able to complete more than 90% of all presented patterns even when odor patterns were only 60% complete (i.e. only 60% of the randomly selected ORs were activated). For this task, the recurrent connectivity in the PC is crucial, as a system without (trained) connectivity within the PC shows significantly worse performance (see figure 5.3). Chance level is reached at about 60-70% incompleteness. The remaining recognition capability in a system without connectivity within the PC is due to feed-forward processing. Notably, the recurrent connectivity in the PC is very sparse and specific (the connection probability between pyramidal cells in the PC is only 0.06%), yet highly efficient for holistic processing, as the high pattern completion performance shows.
As figure 5.5 shows (not included in Kaplan and Lansner (2014)), temporal coding of odorant concentration can act in concert (or in parallel) with a sparse and distributed code based on firing rates. The figure shows different spike-time latencies for pyramidal cells in the PC for different odorant concentrations and thus could enable temporal coding of odor information.
Furthermore, the difference in the temporal dynamics of the odor input had little effect on the higher stages; already at the level of the OB, the responses to the two different input stimuli were similar with respect to their temporal characteristics. The left panel in figure 5.2 shows the overlaid mitral/tufted cell responses to one complete pattern with "single-puff" input and to an incomplete pattern with "sniffing" input. A scalable method to generate artificial odor patterns from physico-chemical descriptors of real-world odorants, inspired by the odotope theory (Shepherd, 1987; Mori, 1995), was presented and applied for training and testing the system. The method is scalable as it allows generating an arbitrary number of patterns and scaling the system to a smaller or larger number of receptors. Dendrodendritic inhibition could be a mechanism to implement soft WTA in the OB, supporting observations of uncorrelated activity in "sister cells" (mitral cells belonging to the same glomerulus) (Egana et al., 2005).
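The pattern statistics used for Task 1 (50 patterns over 40 ORs, 30-50% of ORs active per pattern) can be sketched as follows. The thesis derives receptor activations from physico-chemical odorant descriptors; this sketch only reproduces the binary activation statistics and is not the published generation method:

```python
import numpy as np

def generate_odor_patterns(n_patterns=50, n_or=40, frac_range=(0.3, 0.5),
                           seed=0):
    """Generate random binary odor patterns.

    Each pattern activates between 30% and 50% of the n_or receptors,
    matching the statistics described for Task 1. The actual model derives
    activations from physico-chemical odorant descriptors; this sketch
    only reproduces the activation statistics.
    """
    rng = np.random.default_rng(seed)
    patterns = np.zeros((n_patterns, n_or), dtype=int)
    for p in range(n_patterns):
        n_active = rng.integers(int(frac_range[0] * n_or),
                                int(frac_range[1] * n_or) + 1)
        patterns[p, rng.choice(n_or, size=n_active, replace=False)] = 1
    return patterns

patterns = generate_odor_patterns()
```

Scalability here simply means that n_patterns and n_or can be varied freely, mirroring the scalability claim in the text.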
5.1.1.2 Conclusions and discussion - Paper 1
This study shows that it is possible to implement gestalt-like processing and behaviorally relevant functions with spiking neural networks integrating a high degree of biophysical detail. Gestalt-like or holistic processing of odor patterns was established in the olfactory cortex by means of pattern completion and pattern rivalry (see figures 5.2 and 5.4).

Figure 5.2: Activity during the pattern completion task (with modified temporal input) in olfactory bulb, olfactory cortex and readout layer. Example activity shown as raster plots during pattern 0 in Task 4 (pattern completion). The system, trained on 50 complete patterns (as in Task 1), is exposed to an incomplete version of a training pattern in which 50% (randomly chosen) of the previously active ORs are silenced. The y-axis shows the cell number of the respective neuron type. Gray dots mark the activity during training, blue dots show the activity during the test with the incomplete pattern. For the test pattern, the temporal dynamics of stimulation are more variable due to the sniffing input, as compared to the puff-like input used during training. Left: Mitral/tufted cell spike patterns clearly show the incomplete test pattern, but only a faint difference in dynamics. Middle: Pyramidal cells in the PC show very similar activity on the population level because of the recurrent cortical connectivity. The temporal dynamics differ compared to the complete pattern, partly due to the incompleteness but also due to the sniffing input. Right: The correct readout cell begins to spike approximately 150 ms later during the test pattern compared to the training pattern activity, but clearly shows higher activity than the other readout cells. Figure and parts of the caption taken from Kaplan and Lansner (2014).
Our model features important characteristics observed in experiments with respect to the recurrent connectivity, in the sense that recurrent excitation is very specific whereas recurrent inhibition is unspecific (Poo and Isaacson, 2009; Stettler and Axel, 2009; Isaacson, 2010). Furthermore, the model implements a sparse and distributed code for odor patterns, which is also observed experimentally (Poo and Isaacson, 2009). Hence, the model is in qualitative agreement with experiment despite the (apparently contradicting) assumption of a columnar organization in the PC.
Figure 5.3: Influence of recurrent cortical connectivity on performance in Task 3 (noise robustness) and Task 4 (pattern completion), with and without recurrent cortical connections. The system, trained on 50 complete and noise-free patterns (as in Task 1), is exposed to odor patterns with an increasing number of deactivated ORs and to patterns with an increasing degree of noise. The blue curve marked with circles corresponds to the lower x-axis and shows performance in Task 3. The red curve with solid lines and triangle markers corresponds to the upper x-axis and shows performance in Task 4. The dotted red curve shows the Task 4 performance of a network without recurrent long-range connectivity in the PC, as trained in Task 1. Figure and parts of the caption taken from Kaplan and Lansner (2014).
An important future direction for the model would be to introduce feedback from higher to lower areas, particularly from PC to OB. This could be done by "inverting" the connectivity from OB to PC derived from our proposed training procedure and applying the feedback as a filter for upstream sensory information. This "inverting" could be implemented through connections from pyramidal cells targeting granule cells to specifically inhibit glomeruli (or sub-parts of a glomerular module) which are not involved in the representation of the patterns the pyramidal cells code for.
Furthermore, it would be interesting to test a different code in the OB (for odorant concentration) and to study how different types of codes influence the functionality of the system, particularly with respect to the tests performed (recognition, completion, and most importantly concentration invariance). However, this question is more suitably addressed in a more abstract framework with less biophysical detail (due to the computational complexity of the model), similar to the framework studied in (Benjaminsson et al., 2013; Benjaminsson, 2013).

Figure 5.4: Task 5: Pattern rivalry with sniffing input. Left: Raster plot showing pyramidal cell responses to a 0.6/0.4 mixture of two distinct patterns. The fraction of both patterns stays constant during the whole stimulation. Blue dots show spikes from cells that were active during the "blue" odor in Task 1. Red dots show spikes from cells that were active during the "red" odor in Task 1. Gray dots show spikes from cells that are active during the test pattern but were not active for either of the two mixture components. During the first 200 ms the red pattern evokes activity in both pyramidal and readout cells, but it is then suppressed by the blue pattern becoming active after ∼500 ms of odor stimulation. A substantial part of the pyramidal cell activity is related to neither of the two patterns, exemplifying the frequently occurring misclassifications during the recognition of odor mixtures. Middle: Raster plot corresponding to the pattern in the left panel, showing spikes emitted by readout cells that were active during the two respective training patterns. The stronger pattern is recognized starting from ∼700 ms, i.e. approximately after 500 ms or two sniff cycles (the simulated sniffing frequency is around 4 Hz). Right: Average curves showing the mean number of spikes emitted by the readout cells trained to recognize one of the two test patterns. The black curve shows the mean response from readout cells that code for neither of the two test patterns. The blue and the red curves indicate a smooth transition from one pattern to the other depending on the relative strength in the mixture. Figure and parts of the caption taken from Kaplan and Lansner (2014).

Figure 5.5: Spiking activity in the olfactory cortex model for odor patterns with increasing concentration. This figure shows the output of the PC layer for five different stimuli (representing the same odor pattern at five different concentrations) on top of each other, with the color indicating the respective input stimulus strength. The spike latency can give information about odorant concentration, hence employing a "temporal" code. Interestingly, the system was not trained or set up in any way to exhibit a temporal code. Hence, in this example a temporal code can be seen as an "emergent" phenomenon.
Another possibility for exploration is to study the activation patterns elicited by real-world odorants in animals in order to challenge the system with more realistic odor patterns. Furthermore, the application and enhancement of the described model to artificial odor sensor arrays could be pursued further (Benjaminsson and Lansner, 2011; Marco et al., 2014).
5.2 Results on visual system modeling
The works focusing on the visual system (Kaplan et al., 2013, 2014) were guided by the following questions: How does the visual system, with its sensors providing information from limited receptive fields, create a global and coherent percept of the outside world, and how can the network connectivity contribute to achieving this? Based on work by Perrinet and Masson (2012), the idea was developed that network connectivity provides a mechanism for the coherent diffusion of motion information within a network of motion-sensitive neurons. The principal idea is to set up the network connectivity on the basis of the source's and target's tuning properties, which represents a bias for smooth trajectories, that is, continuous motion in contrast to abrupt changes. Hence, the goal of Kaplan et al. (2013) was to show that anisotropic network connectivity in a spiking neural network can implement this idea and thereby contribute to the functionally meaningful task of motion extrapolation (or motion-based prediction).
Furthermore, there is experimental evidence that in higher visual areas (area MST) neural activity is sustained during motion perception even in the temporary absence of the stimulus (Assad and Maunsell, 1995; Newsome and Pare, 1988), or while tracking an imaginary target covering the part of the visual field outside of the receptive field currently being recorded (Ilg and Thier, 2003). This led to the idea of testing the ability of a network to extrapolate motion in a blanking experiment in which coherent input is temporarily absent.
In Kaplan et al. (2014), the same principal approach was implemented (with a slightly modified network, reduced to one dimension but increased in size) and studied with respect to the anticipatory signal observed in motion-perception experiments at different stages of the visual hierarchy (in V1 (Guo et al., 2007), in the retina (Berry et al., 1999), and in area MST (Ilg and Thier, 2003)).
The overarching question of these two studies is again not focused on a specific area or an accurate model thereof, but rather on a generic information processing principle and the question of how this important function could be implemented in a spiking neural network.

The contribution of (Kaplan et al., 2013, 2014) is to present a way in which the probabilistic framework introduced in (Perrinet and Masson, 2012; Khoei et al., 2013) can be implemented in networks of spiking neurons (focusing on a qualitative match). There, the network's spiking activity represents the certainty of perceiving a moving stimulus in the presence of noise, and the question addressed targets the capability of a network to predict the trajectory even if the sensory information disappears. Thus, the focus of these studies was on a specific function rather than on developing a universal mapping between these two frameworks.
5.2.1 Paper 2 - Anisotropic connectivity implements motion-based prediction in a spiking neural network
In this publication, the hypothesis that anisotropic connectivity can implement motion-based prediction was presented and tested in a moving-dot blanking experiment inspired by psychophysical and neurophysiological studies. We propose that the patterns of local, recurrent connections play a crucial role in processing motion information and can be used to solve the motion-extrapolation problem. To show this, we compared three different connectivity patterns in response to an interrupted stream of motion information (see figure 1 in (Kaplan et al., 2013)), which corresponds to a blanking experiment in which a moving dot stimulus temporarily disappears, for example due to an obstacle or an eye blink.
The network model consists of one excitatory and one inhibitory population. Excitatory and inhibitory (and inhibitory-to-inhibitory) neurons are
connected to each other in an isotropic, random, distance-dependent way.
For simplicity, only the role of the recurrent connectivity within the excitatory
population in the context of motion processing is studied here, by testing three
different connectivity patterns.
The first connectivity pattern makes use of all motion information that is
presumed to be represented in the system, i.e. the stimulus position (represented by the receptive-field position), the stimulus speed (represented by cells
responding to their preferred speed) and the direction of movement (represented
by cells' preferred direction). This type of connectivity is named motion-based anisotropic connectivity (see left panel in figure 5.6) and is grounded
in the idea that moving stimuli normally follow smooth trajectories and that
cells sensing this information should forward it accordingly,
by making use of the full motion information in their outgoing connectivity.
Hence, this idea is strongly inspired by a connectionist way of thinking, that
is, implementing function through connectivity.
The second connectivity pattern makes use only of the position information
and the direction of movement (but not the speed) and is called direction-dependent connectivity (see middle panel in figure 5.6).
The third connectivity pattern discards all available motion information
and connects cells in a random, distance-dependent manner (with a Gaussian
probability profile).
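To make the three rules concrete, they can be sketched as connection-probability functions of the source and target cells' tuning properties. The following Python sketch is purely illustrative: the function names, the Gaussian form and the parameter values (`dt`, `sigma_x`, `sigma_v`) are assumptions for exposition and do not reproduce the exact rules or parameters of (Kaplan et al., 2013).

```python
import numpy as np

def connect_motion_based(src, tgt, dt=1.0, sigma_x=0.1, sigma_v=0.2):
    """Motion-based anisotropic rule: connect if the target's position and
    velocity are compatible with a smooth trajectory originating at the
    source. src/tgt are dicts with 'x' (position) and 'v' (velocity)."""
    predicted_x = src["x"] + src["v"] * dt       # where the stimulus should be next
    d_x = tgt["x"] - predicted_x                 # positional mismatch
    d_v = tgt["v"] - src["v"]                    # velocity mismatch
    return np.exp(-d_x**2 / (2 * sigma_x**2) - d_v**2 / (2 * sigma_v**2))

def connect_direction_based(src, tgt, sigma_x=0.1):
    """Direction-dependent rule: ignore speed; require a matching direction
    and a target position ahead of the source along its preferred direction."""
    same_direction = np.sign(src["v"]) == np.sign(tgt["v"])
    ahead = (tgt["x"] - src["x"]) * np.sign(src["v"]) > 0
    d_x = tgt["x"] - src["x"]
    return float(same_direction and ahead) * np.exp(-d_x**2 / (2 * sigma_x**2))

def connect_isotropic(src, tgt, sigma_x=0.1):
    """Control: random distance-dependent connectivity with a Gaussian
    profile, ignoring all tuning properties."""
    d_x = tgt["x"] - src["x"]
    return np.exp(-d_x**2 / (2 * sigma_x**2))
```

A full network build would additionally draw random connections from these probabilities and impose the distance-dependent delays described in the figure caption below.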
5.2. RESULTS ON VISUAL SYSTEM MODELING
Figure 5.6: From aniso- to iso-tropic connectivities. In the three panels,
incoming and outgoing connections for the same single neuron (as marked by
the yellow diamond) are shown for different connection rules. The preferred
direction of that neuron is shown by the yellow arrow. Cells targeted by the
yellow cell are marked with black circles and cells projecting to the yellow
cell are marked with red triangles. The relative weights of incoming (outgoing) connections are indicated by the size of the source (target) neuron,
respectively. The preferred direction of source and target neurons is shown by
solid arrows. Connection delays are distance dependent and color coded (see
color-bar). Left: Motion-based prediction anisotropic connectivity. Inspired
by previous work on motion-based prediction (Perrinet and Masson, 2012), we
propose a first pattern of connectivity based on connecting a source neuron to
a target neuron if and only if the position and velocity of the target is compatible with a smooth trajectory that would originate from the source neuron.
The strength of this prediction is parameterized by the width of the lateral
tuning selectivity. Middle: Direction-dependent connectivity. To create a
more realistic connectivity pattern, we used the same rule but independent
of speed, i.e. only as a function of the direction of motion. The target neuron
is connected if and only if its direction is close to the source’s direction and
if its position is predicted to be in the direction given by the source neuron.
Additionally, to account for physiological constraints on lateral interaction,
only connections within a limited radius (rConn = 0.10) or latencies shorter
than 100 ms are allowed. This leads to a more local connectivity and smaller
connection delays compared to the previous connectivity. Right: An isotropic
connectivity pattern ignoring the cells’ tuning properties was chosen as a control. There is no prediction in velocity space, but we still predict that activity
should diffuse locally, as the connection probability drops with the distance
between cells. Figure and caption taken from Kaplan et al. (2013).
5.2.1.1 Summary of results from Paper 2
In (Kaplan et al., 2013) the role of different recurrent connectivity patterns in
the context of processing motion information was studied. It was
shown that anisotropic, tuning-property-dependent connectivity in a spiking
neural network can implement motion-based prediction, as tested in a moving-dot blanking experiment. We argue that anisotropic connectivity facilitates the propagation of motion information compared to random or isotropic
(tuning-property-independent) connectivity, as the latter fails to
propagate the motion information in a coherent way: motion information diffuses in a random, non-meaningful way through the network, and the
remaining excitation is not sufficient to create a coherent motion percept in
the absence of a stimulus.
Furthermore, we have shown that a simple, linear population vector average as a readout mechanism is sufficient to obtain the motion information from
the recorded spiking activity (see also section 4.2.6 and equation 4.1). The
population vector average can thus be used to extract motion information from spiking networks and to represent this information in a probabilistic
manner (see figures 4 and 5.8).
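The readout itself is simply a rate-weighted mean over the cells' preferred values. A minimal sketch (the function name is hypothetical; `preferred` would hold, e.g., columns for position and velocity components per cell):

```python
import numpy as np

def population_vector_average(rates, preferred):
    """Linear population-vector readout (cf. equation 4.1): the decoded
    quantity is the rate-weighted mean of the cells' preferred values.
    `preferred` has shape (n_cells, n_dims)."""
    rates = np.asarray(rates, dtype=float)
    preferred = np.asarray(preferred, dtype=float)
    total = rates.sum()
    if total == 0.0:                       # no spikes: nothing to decode
        return np.full(preferred.shape[1], np.nan)
    return rates @ preferred / total
```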
5.2.1.2 Conclusions and discussion - Paper 2
The spread of activity in the form of waves or pulses is a prominent feature of
cortical networks (Sato et al., 2012) and has been studied theoretically in excitable neural media (see Coombes (2005); Goulet and Ermentrout (2011) and
Bressloff (2014) for a comprehensive overview). However, these approaches
mostly focus on isotropic and homogeneous media (i.e. networks with isotropic
connectivities and evenly distributed response properties) without a specific
notion of speed tuning, and on approaches more abstract than spiking networks,
based on neural fields. As these studies with excitable neural media have
shown, “it is necessary to include some form of local negative feed-back mechanism such as synaptic depression or spike frequency adaptation” (Bressloff,
2014) to enable the existence and stability of traveling pulses. In this context,
a local negative feedback mechanism needs to be adaptive and dynamic (like
synaptic depression or spike-frequency adaptation), as negative inhibitory
feedback alone (through static synapses) is not sufficient to enable traveling
pulses. In contrast, the present study shows that, even in the absence of a local
negative feedback mechanism, anisotropic connectivity can lead to sustained
propagation of a traveling pulse and hence could serve as a mechanism for information
transmission.

Figure 5.7: Rasterplot of input and output spikes. The raster plot of
excitatory neurons is ordered according to their position. Each blue dot is an
input spike and each black dot is an output spike. While input is scattered
during blanking periods, the network output shows some tuned activity during
the blank (compare with the activity before visual stimulation). To decode
such patterns of activity we used a population vector average based on the
neurons' tuning properties. Figure and caption taken from Kaplan et al. (2013).

One conclusion that one could draw from this study is that, in order to promote
the diffusion of relevant motion information (within an area or between areas
of the visual hierarchy) and to facilitate motion prediction, a connectivity
pattern that makes use of the tuning (or receptive-field) properties of neurons
is advantageous.
Future research could target the question of how to overcome the need for
rather long axonal delays, which we propose can be achieved by increasing
the network size (thereby reducing the subsampling of the neural tissue),
by using more local connectivity (as in the connectivity pattern shown in the
middle panel of figure 5.6) and/or by using longer synaptic time constants
(e.g. governed by NMDA currents). Another relevant issue concerns the role
of inhibitory circuits in motion perception and extrapolation. For simplicity,
the inhibitory population does not receive any input in this model. However,
the question of how specific connectivity involving the inhibitory population
could be used for a sustained coding of motion signals remains to be addressed.
Other directions could target performance tests with moving stimuli of varying
velocity and with real-world stimuli.
Figure 5.8: Comparison of prediction performance for the different
connectivities. The performance of direction (top panels) and position (bottom panels) prediction as decoded from the network activity is shown. First
and second columns show the horizontal and vertical components, respectively, while the last column shows the mean squared error of the predicted
position with respect to the known position of the target. The colors of the
lines correspond to the different connectivities presented in figure 5.6: motion-based prediction (solid blue), direction-dependent prediction (dashed green),
isotropic (dash-dotted red). While an isotropic connectivity clearly fails to
predict the trajectory of the stimulus during the blank (due to the absence of
any negative local feed-back mechanism), we show here that the anisotropic
connectivities may efficiently solve the motion extrapolation problem, even
with an approximate solution such as the direction-based prediction. Figure
and caption taken from Kaplan et al. (2013).
5.2.2 Paper 3 - Signature of an anticipatory response in area V1 as modelled by a probabilistic model and a spiking neural network
Bernhard A. Kaplan and Mina A. Khoei contributed equally to this work.
Similarly to the previous study, the overarching question is which neural mechanisms influence the positional encoding of moving stimuli and how prior
knowledge affects the internal representation of motion. For this purpose,
we focused here on the anticipatory response to an approaching stimulus in
a network of motion-sensitive neurons, guided by previous studies
performed in a more abstract framework (Khoei et al., 2013). As signs
of anticipation have been observed at different levels of the visual processing hierarchy (Berry et al., 1999; Ilg and Thier, 2003; Guo et al., 2007),
the important question remains which mechanisms the visual system employs
to predict or anticipate motion signals in the presence of neural delays and
uncertainty.
In order to focus on the mechanisms, we studied two different connectivity schemes, which represent different ways of transferring motion information within the network. The two schemes make different use of prior motion
information regarding the previous position and direction of motion. Both
connectivity schemes are anisotropic in the sense that cells preferentially connect in one direction, namely forward along the source cell's preferred direction.
One connectivity, however, uses only the position information and ignores the
preferred speed of target cells, and is hence called the position-based-prediction
(PBP) model. The other connectivity scheme integrates all available motion
information, i.e. both dimensions of the tuning-property space (position and
preferred speed of source and target cells), and is thus called the motion-based-prediction (MBP) model. The research question in focus can thus be formulated as: does the way in which motion information is distributed in the network
over time influence the anticipation signal? Both connectivities are shown in
figure 5.9.
The idea was first implemented in a Bayesian particle-filter framework to
study delay compensation and motion anticipation and to serve as a proof of
concept. Equivalently to the PBP and MBP models in the spiking network, the
two models used different ways of predicting the stimulus parameters. The
MBP model predicts both position and velocity (using equations (8), (9) in
(Kaplan et al., 2014)), whereas the PBP model does not predict the stimulus
direction (nor velocity) but only the position (with γ = 0 in equation (9), i.e.
velocity estimation is based only on noisy information). The particle-filtering
framework was implemented and studied by Mina A. Khoei (Khoei et al.,
2013). For a detailed description of this approach and further implementation
details see also Khoei (2014).
Another question addressed in this study is how the delays that occur naturally
through sequences of neuronal, synaptic and axonal transmission of sensory
information, and the resulting error in position estimation, can be overcome.
In both configurations the received sensory input encodes the state of the
visual scene some steps back in time, and the estimated motion is based on
delayed stimulus information. The solution proposed here is based on ideas
suggested by Nijhawan and Wu (2009) and integrates delayed information to
predict the current state of motion. This was implemented in the particle-filtering
framework by compensating for the (known) delay by extrapolating
the motion state, and in the spiking implementation similarly by sampling
stimulus information shifted by the known delay.
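The two ingredients, prediction of the motion state and compensation of a known delay by extrapolation, can be sketched as follows. This is an illustrative reduction, not the particle-filter equations of (Kaplan et al., 2014); the function names are hypothetical, and the `gamma` parameter only mirrors the role of γ in their equation (9).

```python
import numpy as np

def predict_state(x, v, dt, gamma, noise_std, rng):
    """One prediction step of the internal model. With gamma > 0 the
    velocity estimate is carried forward (MBP-like); with gamma = 0 the
    velocity relies solely on fresh noisy observations (PBP-like)."""
    x_new = x + v * dt                                   # position extrapolation
    v_new = gamma * v + rng.normal(0.0, noise_std, size=np.shape(v))
    return x_new, v_new

def compensate_delay(x_delayed, v_delayed, delay):
    """Compensate a known sensory delay by extrapolating the delayed
    motion state to the current time (cf. Nijhawan and Wu, 2009)."""
    return x_delayed + v_delayed * delay
```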
Figure 5.9: Connectivity schemes shown in the tuning property space (horizontal axis reflects position, vertical axis preferred speed) from the perspective
of one example source cell. Cells are represented by black dots, the source cell
is indicated by the yellow star, outgoing connections by white circles with
relative connection strength indicated by the radii and the center-of-mass of
all outgoing connections from that source cell is shown by the green diamond.
Left: In the position-based-prediction (PBP) model, connectivity uses position information only. Right: In the motion-based-prediction (MBP) model,
connectivity uses both position and speed information to connect cells. Figure
and caption taken from Kaplan et al. (2014).
5.2.2.1 Summary of results from Paper 3
In order to study the temporal dynamics of the anticipation signal during an
approaching stimulus and to reproduce observations gained in experimental
studies of awake, alert macaque V1 in a fixation task (Benvenuti, 2011),
an equivalent experiment was simulated and the spike activity at three different positions along the trajectory was recorded. The spike trains of 30
cells positioned close to each other were pooled and filtered with an
exponentially decaying function (decay time constant of 25 ms) in order
to obtain a rough approximation of an LFP signal and an anticipatory signal for
different positions along the trajectory. The filtered spike trains recorded from
three different positions along the trajectory of an approaching stimulus were
normalized across the network in order to obtain a measure of the network's
confidence regarding the predicted stimulus position. This was done for the
two different connectivity schemes, and the anticipatory response was compared to
both the experimental data (Benvenuti, 2011) and the abstract particle-filtering
framework (see figure 5.10). The two connectivity schemes (see figure 5.9)
differ in the way velocity information is integrated: in the MBP model, a
source cell takes into account both the preferred speed and the position of the
target cell; in the PBP model, connectivity is set up based only on the target
position and not on the preferred speed (or direction).
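The construction of the confidence measure from pooled spike trains can be sketched as follows. The 25 ms decay constant follows the text; the function names and the matrix layout are assumptions for illustration.

```python
import numpy as np

def filtered_response(spike_times, t_grid, tau=25.0):
    """Filter a pooled spike train with an exponentially decaying kernel
    (decay constant tau, in ms) to obtain a rough LFP-like signal."""
    r = np.zeros_like(t_grid, dtype=float)
    for t_spike in spike_times:
        mask = t_grid >= t_spike
        r[mask] += np.exp(-(t_grid[mask] - t_spike) / tau)
    return r

def confidence(responses):
    """Normalize filtered responses across the network so that at each time
    step they sum to one: a crude measure of the network's confidence
    about the stimulus position (rows = positions, columns = time)."""
    responses = np.asarray(responses, dtype=float)
    total = responses.sum(axis=0, keepdims=True)
    return np.divide(responses, total,
                     out=np.zeros_like(responses), where=total > 0)
```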
The most important result is that in both frameworks (particle filtering and
spiking) the anticipatory response in the MBP network is trajectory dependent, whereas in the PBP model there is no significant difference in the
anticipatory signal with respect to the stimulus trajectory. As the experimental results (investigated by the experimental partners (Benvenuti, 2011))
suggest that the anticipatory signature is more similar to the trajectory-dependent response, we conclude that the visual system internally uses both
position and velocity information to update the anticipated trajectory of
moving stimuli.
An explanation for the behavior observed in the PBP model is that part
of the relevant information is distributed (or diffuses) across a larger part
of the network and is “thinned out”. Due to this diffusion process,
important information (regarding the speed) is gradually lost, and the anticipation signal is weakened and not trajectory dependent.
Furthermore, delays were overcome by incorporating the known delay and
extrapolating the sampled information accordingly.
Figure 5.10: From experiments to abstract models to spiking neural networks.
Left: The top shows a schematic of the experiment. A moving-bar stimulus is
presented to an awake, alert macaque (in a fixation task) and approaches the
classic receptive field (CRF) in V1 from different starting positions. The bottom
shows the extracellular spiking response (in spikes per second) averaged over
a population of macaque V1 neurons. The recordings show an anticipation of
the response with respect to the trajectory length (adapted from
(Benvenuti, 2011)). Middle: Average response of the MBP (top) and PBP
(bottom) models from the particle-filtering framework (averaged over 10 trials).
The colors correspond to the trajectory lengths. The dotted line represents
the time when the stimulus reaches the target position; the dashed line indicates the time at which the delayed stimulus information
reaches the target position. Right: Anticipation response in the spiking neural network for three different trajectory lengths. The confidence (displayed on
the y-axis) is derived from filtered and normalized spike trains pooled over 30
cells at different positions along the trajectory. The MBP model (top) shows
a trajectory-dependent response with the maximum response at the correct
time of stimulus arrival. The PBP model shows similar responses for all trajectory lengths and cannot compensate for the delayed arrival of stimulus
information. Figure and caption taken from Kaplan et al. (2014).
5.2.2.2 Conclusions and discussion - Paper 3
In this paper, we studied the interplay of position and direction information in
shaping the anticipatory response to an approaching stimulus in two different
model configurations (MBP, PBP) and two different modeling frameworks
(particle filtering and spiking neural network). In both modeling frameworks,
the PBP configuration serves as a control simulation to highlight the importance of
velocity-related sensory information in precise position coding. We conclude
that the diffusion of motion information requires both dimensions (position
and speed) in order to precisely predict the stimulus position and to reproduce
the trajectory-dependent signature observed in experiments.
From these insights we predict that the connectivity in motion-sensitive
networks should be tuning-property dependent and be guided by the common
response (i.e. correlated within a certain time window) in accordance with
Hebbian learning.
Future work would need to address the question of how the recurrent connectivity could self-organize, i.e. how connection weights could be learned in
order to allow optimal inference of sensory information, e.g. using Hebbian-Bayesian learning and following approaches similar to those described in Rao
and Sejnowski (2003); Rao (2004).
5.2.3 Ongoing work: Paper 4 - Motion-based prediction with self-organized connectivity
This section contains preliminary material intended for a publication titled “A probabilistic framework for motion-based prediction with self-organized
connectivity in spiking neural networks”, authored by Bernhard A. Kaplan,
Philip J. Tully and Anders Lansner.
In the previous two sections it was shown how recurrent connectivity
can influence the predictive behavior and shape the anticipatory response of a
network. But how could this recurrent connectivity be learned and emerge in
a self-organized way? The purpose of this work is to combine anticipation and
motion extrapolation with self-organization, using the BCPNN learning algorithm for spiking neurons. Hence, the goal of achieving function through connectivity is approached here by linking neurons according to their response properties
in a self-organized manner through Hebbian-Bayesian learning, in particular
the spiking version of BCPNN.
The basic idea of this model is to study how Hebbian-Bayesian learning in
a network inspired by the models described in Papers 2 and 3 (Kaplan et al.,
2013, 2014) (see sections 5.2.1 and 5.2.2) influences the anticipatory response
during motion perception. In brief, the model consists of a one-layered spiking
neural network in which each cell has predefined tuning properties: a
preferred position x and a preferred orientation θ. The network is organized
into twenty hypercolumns, each consisting of several minicolumns in which cells
have very similar preferred orientations, inspired by area V1 and similar to
previously published models (Paper 1, Kaplan and Lansner (2014); Lundqvist
et al. (2006)). A schematic of the distribution of tuning properties displaying
the columnar structure is shown in figure 5.11a. For simplicity, this model
includes only a small number of different orientations, in order to study the
response to a moving oriented stimulus and to allow for possible interactions
between orientations.
(a) Distribution of tuning properties.
(b) Rasterplot showing activity during
training.
Figure 5.11: Left: The two-dimensional tuning-property
space with one spatial dimension and one dimension for stimulus orientation.
Each black dot represents a cell, with the width of its Gaussian tuning curve
represented by the blue ellipses. Horizontal dashed lines indicate the trajectories of the first six training stimuli, with the blue (red) dots showing the start
(stop) positions of the stimuli. For simplicity, this model contains only four
different orientation columns per hypercolumn. Right: Rasterplot showing
the network's bursting activity during training, in response to five out of 400
stimuli (50 for each orientation). The network is stimulated with an oriented
bar moving at constant speed. As different stimuli can have leftward or
rightward direction, symmetric connections emerge. Color indicates the preferred orientation of the cell.
During training, a minicolumn contains only 8 neurons in order to reduce
the computation time (training takes several hours on present-day supercomputers). From the resulting weights of the training procedure, a connectivity
matrix is computed, containing weights between minicolumns averaged over
all cell pairs within the minicolumns. This makes it possible to increase the
number of neurons per minicolumn to 32 during testing, as cells are connected
based on the averaged weight matrix.
The training procedure consists of a random sequence of stimulus presentations, with 50 moving stimuli for each orientation. A stimulus can be seen as
an oriented bar which starts at one end of the spatial axis and moves at constant speed to the other end, stimulating cells with a similar preferred orientation along its trajectory. The network response to the first twenty stimuli is
shown in figure 5.11b. For simplicity, only one speed is used during training.
During training we apply the spiking version of the BCPNN learning rule to
calculate the weights. As described earlier and shown in figure 4.5, each synapse
computes traces in order to obtain a measure of the joint activation of source
neuron i and target neuron j, according to the probabilistic interpretation of neural
activity. That is, for each cell pair the pre- and post-synaptic spike trains are
filtered with exponential kernels using the corresponding time constants τz,i and
τz,j, respectively, to obtain so-called primary synaptic traces, which are filtered
further with a longer time constant to obtain a measure of the average probability of
activation (see Tully et al. (2014) for a detailed description of the spiking
BCPNN learning rule). Based on the ratio between the joint activation pij and
the individual probabilities of activation pi, pj, the weights are calculated accordingly. This leads to a Hebbian-Bayesian learning rule which yields weights
depending on the co-activation of cells:

wij = gain · log( pij / (pi pj) )    (5.1)

In contrast to equation 4.14, a gain is used here to translate the weight into a
conductance, represented by the strength of a synapse. It is important to note
that connection weights do not have any effect during learning (gain = 0, i.e.
learning is “offline”), but only afterwards during “testing”. The gain value is a
free parameter and is used to scale the incoming conductances (see the explanation
on normalization below).
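In discrete time, the chain of traces and the resulting weight of equation 5.1 can be sketched as follows. This is a toy re-implementation for illustration, not the simulation code used in the thesis; the time constants and the regularizing `eps` are placeholder values.

```python
import numpy as np

def bcpnn_weight(pre_spikes, post_spikes, dt=1.0, tau_z=10.0, tau_p=1000.0,
                 gain=1.0, eps=1e-4):
    """Sketch of the spiking BCPNN estimate (cf. Tully et al., 2014):
    0/1 spike trains are low-pass filtered with tau_z into primary traces
    z_i, z_j, then with tau_p into probability estimates p_i, p_j and the
    co-activation p_ij; the weight is gain * log(p_ij / (p_i * p_j))."""
    z_i = z_j = 0.0
    p_i = p_j = p_ij = eps                     # eps avoids log(0) at the start
    for s_i, s_j in zip(pre_spikes, post_spikes):
        z_i += dt * (s_i - z_i) / tau_z        # primary synaptic traces
        z_j += dt * (s_j - z_j) / tau_z
        p_i += dt * (z_i - p_i) / tau_p        # probability traces
        p_j += dt * (z_j - p_j) / tau_p
        p_ij += dt * (z_i * z_j - p_ij) / tau_p
    return gain * np.log(p_ij / (p_i * p_j))
```

Synchronous pre- and post-synaptic trains drive pij above pi·pj and hence yield a larger weight than temporally offset trains, which is the essence of the co-activation dependence.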
In order to study how visual information propagates within the network,
it is important to look at the connection weights resulting from the training
procedure. For this purpose, one may ask: how does one cell connect to
other cells with a similar or different preferred orientation, depending on the
(a) Symmetric (isotropic) connectivity
(b) Asymmetric (anisotropic) connectivity
Figure 5.12: Left: Resulting isotropic connectivity profiles for different symmetric learning
time constants (τz,i = τz,j). Right: Asymmetric learning time constants (τz,i ≠ τz,j)
and training stimuli with only one direction lead to anisotropic connectivity profiles.
spatial distance between the cells? This question is answered by the connectivity profiles shown in figure 5.12. The figures show the outgoing weights
(averaged over the cells belonging to one minicolumn) plotted against the
spatial distance between cells for two different settings (abbreviated below as
setting A and setting B). The left subfigure shows the results of
a training procedure using stimuli moving both left- and rightward and
symmetric learning time constants τz,i = τz,j (setting A). The color code
indicates the different learning time constants τz,i with which the output spike
trains are filtered. The right subfigure shows the connectivity profile resulting from a training procedure using a stimulus moving in only one direction
and a fixed τz,j = 2 ms (setting B). The parameter τz,i controls the
time-scale of correlation, that is, the time-scale on which activity is interpreted
as being correlated (or uncorrelated). Each setting has been trained with
τz,i = 5, 10, 20, 50, 100, 150, 200 ms.
Setting B can be seen as a speed-sensitive network that responds only
to the trained stimulus speed (in this example, leftward motion at the given
speed), similar to the network models presented previously in Papers 2 and 3.
In setting B the resulting weights are anisotropic, that is, cells connect
preferentially in the direction of motion, qualitatively similar to the
connectivity studied before.
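Profiles like those in figure 5.12 can be computed by binning outgoing weights by the spatial distance between source and target cells. The following sketch is a hypothetical helper, simplified to cell-level weights rather than minicolumn averages:

```python
import numpy as np

def connectivity_profile(weights, positions, bins):
    """Average outgoing weight as a function of signed spatial distance
    (x_target - x_source). `weights` is an (n, n) matrix, `positions`
    an (n,) array of preferred positions, `bins` the distance bin edges."""
    d = positions[None, :] - positions[:, None]      # d[i, j] = x_j - x_i
    idx = np.digitize(d.ravel(), bins)               # bin index per cell pair
    w = weights.ravel()
    profile = np.array([w[idx == k].mean() if np.any(idx == k) else 0.0
                        for k in range(1, len(bins))])
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, profile
```

An anisotropic weight matrix (setting B) then shows up as a profile skewed toward positive distances, while a symmetric one (setting A) is even around zero.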
Setting A shows a symmetric (or isotropic) connectivity profile with different
spatial ranges depending on the learning time constant (τz,i = τz,j), which
determines the window of correlation.
The impact of the different connectivities on the network dynamics in the
context of anticipation is discussed below. In the following, only networks
using symmetric connectivity kernels (according to setting A) will be studied,
however trained with different learning time constants τz,i for fast (AMPA)
and slow (NMDA) synaptic transmission. For simplicity, the impact of varying
the time constants for synaptic transmission (τsyn,AMPA = 5 ms, τsyn,NMDA =
150 ms) is not studied here; they are kept constant for all of the networks
presented below.
The next question to be asked is how the different resulting connectivities
influence the ability of the network to anticipate or predict future stimuli,
that is, whether the learned network connectivity can contribute to predictive
coding. In order to address this question, a measure needs to be defined that
enables a comparison of the anticipatory response between different networks
(using different connectivities).
Figure 5.13 shows two measures of the anticipatory response for a test run
in which a moving stimulus is presented to an example network after training.
Figure 5.13a shows the anticipatory response as measured from the output
spike trains. As in Paper 3, the anticipatory response was studied by looking
at the averaged and normalized spike response of cells along the trajectory of
a moving stimulus, obtained by filtering the spike trains with exponentially
decaying functions (decay time constant of 25 ms). In addition to the spike
response (focusing on the output of the network), the membrane potential
(representing incoming signals already in the subthreshold regime) can be used
to measure the anticipatory response as well (see figure 5.13b). In order to see
the temporal development, the respective signal (spike output or membrane
potential) needs to be aligned to the “estimated time of arrival” (ETA), i.e. the
time when the stimulus reaches the receptive-field center of the given cells. As
membrane-potential recordings are both memory and time consuming, membrane
potentials are recorded only from a small subset of the excitatory neurons (on
the order of one or two out of thirty per minicolumn). After aligning the signals
to the ETA, an average response can be calculated. Based on this averaged
response, an (arbitrary) threshold needs to be set which defines the beginning
of an anticipatory response, for example when the average signal crosses 0.25
of the difference between baseline and maximum (where the baseline is defined
as the average early response when the stimulus is well outside the receptive
field). The point in time when the average anticipatory response crosses this
threshold is defined as t_anticipation^spikes for the filtered and normalized
spike response and t_anticipation^volt for the membrane response. The absolute
value of t_anticipation obviously depends on the threshold (and in minor ways
on other factors); however, the absolute value is not crucial. In contrast, the
relation between the t_anticipation values for different network connectivities
is informative and will be discussed in the following sections.
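The onset measure described above can be sketched as a simple threshold crossing on the ETA-aligned average signal. The fraction 0.25 follows the text; the function name and the baseline window size are assumptions for illustration.

```python
import numpy as np

def anticipation_onset(t, signal, frac=0.25, baseline_window=10):
    """First time the ETA-aligned average signal exceeds
    baseline + frac * (max - baseline), where the baseline is the mean
    early response while the stimulus is still far from the receptive field."""
    signal = np.asarray(signal, dtype=float)
    baseline = signal[:baseline_window].mean()
    threshold = baseline + frac * (signal.max() - baseline)
    above = np.nonzero(signal >= threshold)[0]   # samples above threshold
    return t[above[0]] if above.size else None
```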
In order to answer the question of how recurrent connectivity influences
anticipation, it is important to study different connectivity profiles (similar to
the studies presented previously). In this work, however, the recurrent
connectivity is learned with the spiking version of the BCPNN learning
algorithm. As spiking BCPNN learning allows correlations to be detected at
different temporal scales (controlled by the time constant of the primary traces
τz,i, see figure 4.5), one can study the effect of the connectivity profile (as shown
in figure 5.12) on the build-up of an anticipatory response.
As pointed out in the discussion of Paper 2 (section 5.2.1.2), one problem
with the model presented earlier, which uses static pre-computed connectivity,
is the requirement for large axonal delays. We suggested the involvement of
longer synaptic time constants, such as NMDA, which is on the order of 150 ms
(Watt et al., 2000), to overcome this requirement and to guarantee a meaningful
diffusion of motion information across the network, serving motion extrapolation
and anticipation. Hence, in this model NMDA synapses were introduced, and
the effect of using different connectivity profiles for AMPA and NMDA is
studied. In principle, this model allows the time course of predictive signals
(as seen in the anticipatory response during testing) to be studied in relation
to different learning parameters, for example by using different weight matrices
obtained from different learning time constants τz,i. That is, this model allows
one to study the question of whether fast and slow synaptic transmission via
AMPA and NMDA synapses should be self-organized with different learning
time constants and conveyed through different connectivity patterns. To this
end, networks trained with different combinations of τz,i^AMPA and τz,i^NMDA,
providing different weight matrices, are studied below with respect to their
anticipatory response. However, since this is ongoing work and for the sake
of brevity, only a subset of all possible combinations of the weight profiles
shown in figure 5.12a will be shown here.
(a) Anticipatory response measured from output spikes
(b) Anticipatory response measured from membrane potential
Figure 5.13: Top: Output spike trains are filtered with exponentially decaying functions (time constant 25 ms), normalized so that the sum of the filtered traces equals one, aligned to the stimulus arrival time and averaged to obtain a mean spike response. Bottom: Membrane traces were aligned to the time of stimulus arrival at the respective cell and averaged. The thin black lines indicate the minimum and maximum range of V(t) that serves as the basis for computing t_anticipation^volt. The elevated mean membrane potential after the stimulus has passed the receptive field center originates from the symmetric connectivity kernel (setting A) and is less pronounced when using the asymmetric kernel (setting B) (results not shown).

Before looking at the impact of different connectivities on the anticipation signal, a baseline measurement is obtained as follows. In order to compare the effect of different connectivities (in terms of the shape of the connectivity kernel, as shown in figure 5.12a) on the anticipation behavior, it is important to understand that the amount of excitation transported through the network strongly influences the network dynamics and thereby the anticipation signal. This can be demonstrated by looking at a network without recurrent connectivity while varying the background noise inserted into each neuron (see figure 5.14). An increase in background noise (modeled as separate Poisson processes providing random excitatory and inhibitory input currents) increases the anticipatory response. This effect can be explained by the elevated excitability of neurons receiving increased background noise. In other words, the likelihood that an approaching stimulus triggers spikes increases with the amplitude of the noise.
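This mechanism can be illustrated with a toy leaky integrate-and-fire neuron receiving balanced excitatory and inhibitory Poisson input; all parameter values and the function itself are illustrative assumptions, not the model used in the thesis:

```python
import numpy as np

def lif_spike_count(noise_weight, drive=0.5, rate=500.0, t_sim=1000.0,
                    dt=0.1, tau_m=20.0, v_thresh=1.0, v_reset=0.0, seed=1):
    """Toy leaky integrate-and-fire neuron with balanced Poisson background:
    excitatory and inhibitory Poisson inputs of equal rate and weight
    `noise_weight`, plus a constant subthreshold drive. Returns the number
    of spikes emitted within `t_sim` ms."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_sim / dt)
    p = rate / 1000.0 * dt  # spike probability per time step and input
    v, count = 0.0, 0
    for _ in range(n_steps):
        exc = noise_weight * rng.binomial(1, p)
        inh = noise_weight * rng.binomial(1, p)
        # Balanced noise: the mean input cancels, but the membrane variance
        # grows with noise_weight, raising the chance of threshold crossings.
        v += dt / tau_m * (-v + drive) + exc - inh
        if v >= v_thresh:
            count += 1
            v = v_reset
    return count
```

Without noise the subthreshold drive never reaches threshold; with balanced noise of sufficient amplitude the same drive produces spikes, mirroring the elevated responsiveness described above.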
Figure 5.14: Top (bottom) figure shows the anticipatory response of the network measured in the spike (voltage) response, respectively, as described in
figure 5.13. The x-axis represents an increase of the background noise (the
weight of Poisson spike trains) inserted into each cell via excitatory and inhibitory synapses.
The impact of the noise on the anticipatory response shows two things. First, it provides the baseline measure for the anticipatory response, which is needed to assess the impact of the connectivity on the anticipatory response. Second, increased synaptic input – even incoherent input – can modify the anticipation signal when measured as described above. From this it follows that the anticipatory response will depend strongly on the spread of recurrent excitation within the network. Hence, in order to allow a “fair” comparison between different network connectivities, the following normalization has been applied.
In order to see the effect of the shape (spatial range) of the connectivity kernel on the anticipatory signal, the total amount of excitation received by a cell needs to be controlled. For each minicolumn, the sum of incoming positive weights resulting from both weight matrices is scaled so that the total incoming excitation to a cell reaches a target value. The same normalization is applied to the inhibitory weights, so that the incoming excitation and inhibition sum to the same value and the network is close to a balanced state. Alternatively, one could normalize or control the amount of outgoing excitation and inhibition (i.e. the post-synaptic currents), which should not, however, lead to qualitative changes in network behavior.
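A minimal sketch of this per-cell normalization, assuming the weights are stored as dense matrices w[i, j] (pre to post) and that the target values are hypothetical parameters in the same units as the weights:

```python
import numpy as np

def normalize_incoming(w_ampa, w_nmda, target_exc, target_inh):
    """Scale the incoming weights of each target cell (matrix columns,
    w[i, j] being the weight from cell i to cell j) so that, summed over
    both matrices, the positive (excitatory) weights reach `target_exc`
    and the negative (inhibitory) weights reach `-target_inh`."""
    w_a, w_n = w_ampa.copy(), w_nmda.copy()
    for sign, target in ((1, target_exc), (-1, target_inh)):
        for j in range(w_a.shape[1]):
            mask_a = sign * w_a[:, j] > 0
            mask_n = sign * w_n[:, j] > 0
            total = sign * (w_a[mask_a, j].sum() + w_n[mask_n, j].sum())
            if total > 0:
                # Rescale so the summed magnitude hits the target value.
                w_a[mask_a, j] *= target / total
                w_n[mask_n, j] *= target / total
    return w_a, w_n
```

Setting target_exc = target_inh then puts each cell close to the balanced state described above, while preserving the relative shape of each kernel.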
This distorts the “pure” connection matrices obtained from the BCPNN learning algorithm quantitatively, but it isolates the effect of the connectivity kernel from the influence of the varying total excitation transported through different kernels. In other words, the goal here is to compare the impact of the shape of the different connectivities for AMPA and NMDA connections, and not the impact of the total amount of excitation, which obviously can differ between connectivities. Had this weight normalization been omitted, t_anticipation would have differed between connectivities solely due to the differences in excitation received by a cell – hiding the effect of the spatial shape of the connectivity kernel.
Still, the effect of total excitation transported through the connectivity can
be studied and plays an important role in creating an anticipatory response
as shown in figure 5.15 below.
As figure 5.15 shows, the anticipation measure obtained by filtering the output spike trains cannot reveal a clear picture across the different networks. Nevertheless, a tendency for increased recurrent weights to promote motion anticipation can be seen in all four compared networks. This tendency becomes more visible when looking at the subthreshold response, i.e. the mean membrane potential. However, since only spikes are transmitted between neurons, the subthreshold response does not play a crucial role for the network function.
A network that has been trained with τz,i^AMPA = 5 ms for the AMPA connections and τz,i^NMDA = 150 ms for the slower NMDA connections shows the strongest anticipatory response, which becomes more apparent the higher the recurrent weights are. Interestingly, the network in which the learning
Figure 5.15: Comparison of different network architectures with respect to anticipation behavior. Again, the top (bottom) figure shows the anticipatory response of the network measured in the spike (voltage) response, respectively, as described in figure 5.13. The x-axis indicates the total amount of incoming excitation (and inhibition, measured in nS) received by each cell. The y-axis shows the increase in t_anticipation compared to the baseline measurement shown in figure 5.14. The different curves represent different networks using weight matrices trained with different learning time constants.
time constant matches the time-course of synaptic transmission (τsyn^AMPA = τz,i^AMPA for the learning of AMPA connections, and equivalently for NMDA) shows the strongest anticipatory response. One may speculate whether this match between the constants for learning and synaptic transmission is particularly useful and may even be implemented in the real system, but this question remains to be addressed by future studies.
As already seen in the “baseline” network without recurrent connections,
the amount of background noise influences the anticipation signal of the network as it affects the responsiveness of the neurons by increasing the variance
of the membrane potential. Figure 5.16 shows the influence of background
noise on the spike response. From this figure two insights can be gained.
First, an increase in background noise leads to an increase in t_anticipation^spikes. Second, the network trained with τsyn^AMPA = τz,i^AMPA (and equivalently for NMDA) again has the strongest anticipatory response and reacts slightly more strongly to changes in the background noise. However, these results are at a preliminary stage and the findings require further investigation.
Figure 5.16: Background noise influences anticipation in the spike response. Here, only the anticipatory response measured in the spike response is shown. The x-axis indicates the increase of the weights providing both excitatory and inhibitory Poisson noise to all cells of the network.
5.2.3.1 Conclusions and discussion – Paper 4
The purpose of this work is to link the approaches introduced in Papers 1–3 in order to address the overarching question of this thesis, namely how connectivity can be learned in order to achieve a certain function.
As an example, the anticipatory responses of networks trained with a Hebbian-Bayesian learning rule (BCPNN) to a moving stimulus were studied and compared. For this purpose, two measures for the anticipatory response were introduced and used to study the effect of recurrent connectivity. In particular, the results show that increased strength of recurrent connectivity leads to stronger, i.e. earlier, anticipation of approaching stimuli.
Furthermore, the effect of background noise on the anticipatory response has been studied. Since background noise affects the excitability of neurons and their likelihood of emitting spikes, the amount of background noise influences the anticipation signal. Thus, one can speculate that the neural system might employ background noise to shape and enhance anticipatory responses.
We have shown that an anisotropic or asymmetric connectivity similar to the one used in previous studies (Papers 2, 3) can be learned using the BCPNN learning rule. This connectivity requires that the network is sensitive to speed and trained with different learning time constants τz,i, τz,j (setting B, see figure 5.12). In V1, however, only a minority of cells is speed-tuned, and speed-selectivity is more prominent in higher areas like MT/V5 (Priebe et al., 2006). Hence, this model is not to be seen as a faithful implementation of one or the other area, but rather as a framework for studying how selectivity to various visual features (speed, direction, orientation) influences the emergent connectivity and in turn the processing capabilities.
As this work is at a preliminary stage, many questions remain to be addressed. One important issue is which network architecture (defined by the learned connectivity kernels) offers the most reliable motion extrapolation (as studied in Paper 2). Also, the impact of varying the ratio between fast and slow synaptic transmission can be studied and related to the transmission speed of motion information across the network. Thus, the question as to how the internal network dynamics need to be tuned (or learned) so that reliable motion extrapolation can be achieved remains to be answered by future studies.
Another direction to be explored is to relate the model to experimental measurements. For example, a detailed comparison of the anticipatory responses could make it possible to infer the spatial range of recurrent connectivity and to constrain the model parameters.
A comparison with the anticipatory response studied earlier shows that
the build-up of the anticipation signal measured in the spike response starts
earlier in the model described in Paper 3 compared to the examples shown
here. This might be explained by the different network sizes, resulting in different effective connectivities, and by the use of different target normalization values for the excitatory currents. It remains to be studied how these parameters (network size, individual connection strength) influence the spread of activity and how they shape the anticipatory response.
Part III
Conclusions
Chapter 6
Summary
The topic of this thesis was to study the interplay between structure and function in spiking neural networks with the help of computer simulations. As exemplar models, the early visual and olfactory systems of mammals have been used to study behaviorally relevant functions like pattern recognition, completion and prediction. In both model systems, the guiding question was to find principles for how specific connectivity, based on the response properties of the network constituents, can be used to support a given function. In addition, both systems have to deal with noisy, incomplete streams of sensory information, and sensory neurons detect only small parts of the outside world (due to their specialization to subparts of the environment). Hence, the overarching question was how neurons with limited receptive fields can integrate distributed sensory information to create a global and coherent percept that can guide behavior. The answer suggested here makes use of the connectivity between and within populations of neurons.
In the context of olfaction, a self-organization paradigm combining mutual information and Hebbian-Bayesian learning has been applied to a multi-layered network in order to achieve holistic perceptual processing including pattern recognition, completion and rivalry phenomena. The performance of the model was demonstrated in five different tasks, and the recurrent connectivity derived from the correlation-based learning rule BCPNN proved to be crucial for pattern completion. Furthermore, the model was robust against variations in the temporal dynamics of the input signal and against noise at the lowest sensory stage. As the learning rule was implemented on a rate-based scale, the wiring process between sensory layers is assumed to happen on a time scale exceeding the time-window of spike-timing dependent plasticity rules. Nevertheless, the model allows the existence of a temporal code for concentration information. The results of the employed top-down modeling approach are in qualitative agreement with several important findings regarding the connectivity and coding in the olfactory cortex. Our model makes use of unspecific inhibitory and specific excitatory connectivity and creates sparse and distributed representations of odor objects in an associative memory network.
In the context of vision, the recurrent connectivity in a single-layer network
has been studied for motion-extrapolation and anticipation tasks. The basic
idea for constructing the connectivity was to make use of statistical regularities
that characterize natural stimuli, in particular the smoothness of trajectories.
For the motion-extrapolation task it was shown that anisotropic connectivity
can be used as a mechanism for motion-based prediction.
In the motion-anticipation task we demonstrated qualitative agreement between a spiking neural network model, a Bayesian particle filter framework and experimental observations with respect to the trajectory-dependent anticipatory response to an approaching stimulus. Furthermore, the importance
of velocity information for the precise estimation of position in the presence
of neural delays has been studied.
Finally, we studied the anticipatory response in a network trained with
the spiking version of the BCPNN learning rule in the presence of noise and
compared different connectivities. It has been shown that both isotropic and
anisotropic connectivities can develop depending on the learning and network
parameters. Hence, the applied Hebbian-Bayesian learning rule can be used
to self-organize connectivity relevant for motion-processing tasks.
A fundamental principle in both the visual and olfactory systems was a probabilistic interpretation of neural signals. This interpretation was used in the olfactory model for self-organizing the connectivity based on joint activations representing the probabilities of detecting certain stimulus features. In the context of the visual system, the spiking activity has been linked to the estimation of stimulus variables and the confidence of the network's estimation.
Chapter 7
Outlook and Future Work
The purpose of this final chapter is to identify open questions, unsolved problems and possible future directions that could be addressed based on the work
presented in this thesis.
Integrating the insights gained in the studies presented earlier, the question arises as to which level of detail or abstraction future work should use. In general, three directions are possible. One direction would be to integrate more
physiological detail (e.g. in terms of neuron and synapse models, network
size, additional influences) in order to achieve a quantitative match between
experiments and a more faithful model of the neurobiology. Along this line
of thought, one could introduce an important factor that is not often taken
into account in computational models, which is the role of neuromodulators
and their effect on functional models (Linster and Fontanini, 2014; Bargmann,
2012). Furthermore, including more detail could eventually open the possibility to study the effect of drugs (blocking of ion-channels, neurotransmitters) in
a virtual setting. Hence, there remains hope that computational models might
help to develop effective treatments for various conditions (Montague et al.,
2012; Stephan and Mathys, 2014), however further advances in multi-scale
modeling are required.
Another direction would use a similar level of detail but target larger
systems including additional areas (or nuclei) in order to model more complete
systems and understand the interplay between networks and hierarchies. This
could allow the study of more realistic tasks and conditions requiring multimodal input, motor output and closed-loop interactions between perception
and action.
Yet another direction would be to increase the level of abstraction (using
simpler, e.g. rate-based models) in order to decrease model complexity and
computational costs, simplify the parameter space and thereby to facilitate
the understanding of model behavior in a more comprehensive and systematic
way. In the end, the decision whether models should contain more or less
detail depends on the specific research question and modeling goal.
These three directions are not mutually exclusive, but would optimally be combined in future multi-scale approaches. However, extending computational models (e.g. by bringing astrocytes into the picture (Min et al., 2012)) might pose new challenges for existing simulation and hardware technology.
This is because most simulators (and neuromorphic hardware systems) rely on
the concept of locality of variables (e.g. membrane potential) and a globally
fixed time-step. However, multi-scale models could violate these concepts, e.g.
by sharing ion, neuromodulator or drug compound concentrations across parts
of a network or by other factors requiring multiple temporal and spatial scales,
and thereby pose additional requirements for simulators (e.g. when distributing elements across processors, or due to additional connectivity patterns for
shared variables).
This is particularly important for the design of neuromorphic hardware, which has so far been guided by the idea of providing a complementary modeling platform inspired by numerical simulators (see e.g. the approach described by Brüderle et al. (2011)). In contrast, future neuromorphic hardware might not only focus on the analog representation of numerical values
and the emulation of equations as implemented equivalently in simulators, but
could be designed to represent models on a higher level. This could be done,
for example, by emulating populations of neurons (e.g. minicolumns representing qualitative values or concepts) instead of single neurons, as proposed
by Lansner et al. (2014) and Farahini et al. (2014). Applications of dedicated neuromorphic hardware might not be limited to assisting computational modeling efforts by providing an emulation platform, but could rather focus on specific tasks like the processing of “big data” and the application of neurocomputing algorithms in fields like (autonomous) robotics or brain-machine interfaces.
Another aspect that has gained more and more attention is neurogenesis, especially with respect to olfactory processing (Lepousez et al., 2013), and its role in learning and memory (Alonso et al., 2012). So far, most computational models have considered networks of fixed size (on rather short time scales), but modeling real systems more faithfully would require the integration of developmental, self-organization and structural plasticity processes acting on several levels, including variable network sizes.
Regarding the olfactory system model, a next step would be to integrate feedback from the olfactory cortex (or other areas or nuclei) to the olfactory bulb and to study how this top-down feedback can improve performance in tasks similar to those studied here. An advanced model could integrate neurogenesis and granule cells that receive feedback from cortex, and thereby elucidate functional roles of neurogenesis in specific tasks.
With respect to the visual system, two issues could be addressed. First, the construction of recurrent connectivity based on self-organization principles and learning rules leading to task-relevant connectivity (i.e. continuing the work planned for Paper 4) needs to be addressed more deeply. Second, a more general question is how combining (or distributing) sensory information (position, direction, speed, orientation) across different brain areas (e.g. V1, MT vs. primary somatosensory cortex S1) affects coding, network layout and task performance. This direction of research could help to identify advantages (or disadvantages) of a more (or less) hierarchical organization (e.g. primates vs. rodents) and relate them to the organization of tuning properties within an area, e.g. columnar vs. non-columnar or neural maps vs. salt-and-pepper organization (Kaschube, 2014).
Furthermore, the integration of the prediction and holistic processing capabilities into a framework allowing a tight interaction with an environment
(e.g. a closed-loop perception-action selection framework) would represent an
important step forward towards a more complete model of behavior. This
includes the question as to how global network connectivities need to be organized in order to achieve a broader variety of tasks (including recognition
and prediction but also action-selection, planning, postdiction or reasoning),
by combining information not only from a two- or four-dimensional space, but also from higher-dimensional spaces.
The future will tell if the ideas and approaches studied in this thesis contributed to the advancement of “understanding the brain” or to the development of brain-inspired technology and algorithms.
Part IV
Index and Bibliography
Index
activity-dependent connectivity, 73
Adaptive resonance theory, 29
anticipation, 69
attractor, 76
interval coding, 49
labeled line coding, 49
large-scale models, 30
leaky integrate-and-fire neuron, 23
learning, 74
levels of analysis (Marr), 27
liquid state machines, 77
local representations, 49
Bayes rule, 58
Bayesian models, 53
Bayesian models of the brain, 55
chaos computing, 78
coding, 47
connectivity scales, 28
critical period, 73
Markov chain of particles, 55
membrane potential, 24
memristor, 36
model complexity, 26
multi-compartment neuron models, 27
decoding, 57
discrete coding, 49
distributed representations, 49
neural coding, 47
neuromorphic hardware, 31
neuron models, 23
noise, 53
normalization, 63
echo state networks, 77
epithelium, 16, 66
Feature extraction, 65
Freeman K-Sets, 29
olfactory bulb, 16, 66
olfactory cortex, 18
olfactory epithelium, 66
oscillations, 59
Gestalt psychology, 66
guidance cues, 73
hippocampus, 74
holism, 66
Hopfield networks, 29
particle filtering, 25, 55
pattern recognition, 65
population level, 28
population vector decoding, 57
prediction, 69
predictive coding, 69
predictive coding, receptive fields, 60
probabilistic interpretation, 53
prosthetics, 38
rate coding, 48
rate-based modeling, 28
rate-based vs. spike-based models, 30
receptive field, olfactory, 62
receptive field, visual, 59
receptive field, visual applied, 61
robotics, 38
sequential Monte Carlo, 25, 55
single-compartment neuron models, 27
Spike-timing dependent plasticity (STDP), 74
spiking neuron models, 23
statistical connectivity, 73
STDP, spike-time dependent plasticity, 59
stochastic processes, 53
template matching, decoding, 58
temporal coding, 52, 59
topography, 72
uncertainty, sources of, 53
Winner-take-all, 65, 81, 82
winner-take-all decoding, 57
Bibliography
Aarts, E., Verhage, M., Veenvliet, J. V., Dolan, C. V., and van der Sluis, S. (2014).
A solution to dependency: using multilevel analysis to accommodate nested data.
Nature neuroscience, 17(4):491–496.
Abbott, J. T., Hamrick, J. B., Griffiths, T. L., et al. (2013). Approximating bayesian
inference with a sparse distributed memory system. In Proceedings of the 35th
annual conference of the cognitive science society.
Abbott, L. and Blum, K. I. (1996). Functional significance of long-term potentiation
for sequence learning and prediction. Cerebral Cortex, 6(3):406–416.
Abbott, L. and Kepler, T. B. (1990). Model neurons: From hodgkin-huxley to
hopfield. In Statistical mechanics of neural networks, pages 5–18. Springer.
Abbott, L. and Regehr, W. G. (2004). Synaptic computation. Nature, 431(7010):796–803.
Abraham, W. C. (2008). Metaplasticity: tuning synapses and networks for plasticity.
Nature Reviews Neuroscience, 9(5):387–387.
Abraham, W. C. and Bear, M. F. (1996). Metaplasticity: the plasticity of synaptic
plasticity. Trends in neurosciences, 19(4):126–130.
Ackley, D. H., Hinton, G. E., and Sejnowski, T. J. (1985). A learning algorithm for
boltzmann machines. Cognitive science, 9(1):147–169.
Adachi, M. and Aihara, K. (1997). Associative dynamics in a chaotic neural network.
Neural Networks, 10(1):83–98.
Adam, Y. and Mizrahi, A. (2010). Circuit formation and maintenance—perspectives
from the mammalian olfactory bulb. Current opinion in neurobiology, 20(1):134–
140.
Adrian, E. D. (1926). The impulses produced by sensory nerve endings. The Journal
of physiology, 61(1):49–72.
Aihara, K., Takabe, T., and Toyoda, M. (1990). Chaotic neural networks. Physics
letters A, 144(6):333–340.
Albright, T. D. (1984). Direction and orientation selectivity of neurons in visual
area mt of the macaque. Journal of neurophysiology, 52(6):1106–1130.
Allen, C. and Stevens, C. F. (1994). An evaluation of causes for unreliability
of synaptic transmission. Proceedings of the National Academy of Sciences,
91(22):10380–10383.
Alonso, J. and Chen, Y. (2009). Receptive field. Scholarpedia, 4(1):5393. Revision 136681.
Alonso, J.-M., Usrey, W. M., and Reid, R. C. (2001). Rules of connectivity between
geniculate cells and simple cells in cat primary visual cortex. The Journal of
Neuroscience, 21(11):4002–4015.
Alonso, M., Lepousez, G., Wagner, S., Bardy, C., Gabellec, M.-M., Torquet, N.,
and Lledo, P.-M. (2012). Activation of adult-born neurons facilitates learning and
memory. Nature neuroscience, 15(6):897–904.
Amari, S.-I. (1972). Learning patterns and pattern sequences by self-organizing nets
of threshold elements. Computers, IEEE Transactions on, 100(11):1197–1206.
Amari, S.-i. (1977). Dynamics of pattern formation in lateral-inhibition type neural
fields. Biological cybernetics, 27(2):77–87.
Amari, S.-i. (1983). Field theory of self-organizing neural nets. Systems, Man and
Cybernetics, IEEE Transactions on, (5):741–748.
Ambros-Ingerson, J., Granger, R., and Lynch, G. (1990). Simulation of paleocortex
performs hierarchical clustering. Science, 247(4948):1344–1348.
Amit, D. J. (1992). Modeling brain function: The world of attractor neural networks.
Cambridge University Press.
Amit, D. J., Gutfreund, H., and Sompolinsky, H. (1985). Spin-glass models of neural
networks. Physical Review A, 32(2):1007.
Amoore, J. E. (1964). Current status of the steric theory of odor. Annals of the
New York Academy of Sciences, 116(2):457–476.
Amunts, K., Kedo, O., Kindler, M., Pieperhoff, P., Mohlberg, H., Shah, N., Habel,
U., Schneider, F., and Zilles, K. (2005). Cytoarchitectonic mapping of the human
amygdala, hippocampal region and entorhinal cortex: intersubject variability and
probability maps. Anatomy and embryology, 210(5-6):343–352.
Andreou, A. and Boahen, K. (1991). A contrast sensitive silicon retina with reciprocal synapses. Advances in Neural Information Processing Systems (NIPS),
4:764–772.
Andreou, A. G., Strohbehn, K., and Jenkins, R. (1991). Silicon retina for motion
computation. In Circuits and Systems, 1991., IEEE International Sympoisum on,
pages 1373–1376. IEEE.
Antonini, A. and Stryker, M. P. (1993). Rapid remodeling of axonal arbors in the
visual cortex. Science, 260(5115):1819–1821.
Apicella, A., Yuan, Q., Scanziani, M., and Isaacson, J. S. (2010). Pyramidal cells
in piriform cortex receive convergent input from distinct olfactory bulb glomeruli.
The Journal of Neuroscience, 30(42):14255–14260.
Araneda, R. C., Kini, A. D., and Firestein, S. (2000). The molecular receptive range
of an odorant receptor. Nature neuroscience, 3(12):1248–1255.
Assad, J. A. and Maunsell, J. H. (1995). Neuronal correlates of inferred motion in
primate posterior parietal cortex. Nature, 373(6514):518–521.
Atick, J. J. and Redlich, A. N. (1990). Towards a theory of early visual processing.
Neural Computation, 2(3):308–320.
Auffarth, B. (2013). Understanding smell—the olfactory stimulus problem. Neuroscience & Biobehavioral Reviews, 37(8):1667–1679.
Auffarth, B., Kaplan, B., and Lansner, A. (2011). Map formation in the olfactory
bulb by axon guidance of olfactory neurons. Frontiers in systems neuroscience, 5.
Axel, R. (2006). The molecular logic of smell. Scientific American, 16:68–75.
Aydede, M. (2010). The language of thought hypothesis. In Zalta, E. N., editor,
The Stanford Encyclopedia of Philosophy. Fall 2010 edition.
Bair, W. (2005). Visual receptive field organization. Current opinion in neurobiology,
15(4):459–464.
Balkovsky, E. and Shraiman, B. I. (2002). Olfactory search at high reynolds number.
Proceedings of the national academy of sciences, 99(20):12589–12593.
Bamford, S. A., Murray, A. F., and Willshaw, D. J. (2010). Synaptic rewiring for topographic mapping and receptive field development. Neural Networks, 23(4):517–
527.
Bar, M. (2007). The proactive brain: using analogies and associations to generate
predictions. Trends in cognitive sciences, 11(7):280–289.
Bar, M. (2009). The proactive brain: memory for predictions. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1521):1235–1243.
Bar, M. (2011). Predictions in the brain: Using our past to generate a future. Oxford
University Press.
Bargmann, C. I. (2012). Beyond the connectome: how neuromodulators shape neural
circuits. Bioessays, 34(6):458–465.
Bargmann, C. I. and Marder, E. (2013). From the connectome to brain function.
Nature methods, 10(6):483–490.
Barlow, H. B. (1972). Single units and sensation: a neuron doctrine for perceptual
psychology? Perception, (1):371–394.
Barrett, H. C. (2012). A hierarchical model of the evolution of human brain specializations. Proceedings of the national Academy of Sciences, 109(Supplement
1):10733–10740.
Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., and Friston,
K. J. (2012). Canonical microcircuits for predictive coding. Neuron, 76(4):695–
711.
Bate, A., Lindquist, M., Edwards, I. R., and Orre, R. (2002). A data mining
approach for signal detection and analysis. Drug Safety, 25(6):393–397.
Bechtel, W. (1994). Levels of description and explanation in cognitive science. Minds
and Machines, 4(1):1–25.
Bechtel, W. and Abrahamsen, A. (2002). Connectionism and the mind: Parallel
processing, dynamics, and evolution in networks,. Blackwell Publishing.
Beck, J. M., Latham, P. E., and Pouget, A. (2011). Marginalization in neural circuits
with divisive normalization. The Journal of Neuroscience, 31(43):15310–15319.
Beck, J. M., Ma, W. J., Kiani, R., Hanks, T., Churchland, A. K., Roitman, J.,
Shadlen, M. N., Latham, P. E., and Pouget, A. (2008). Probabilistic population
codes for bayesian decision making. Neuron, 60(6):1142–1152.
Ben-Yishai, R., Bar-Or, R. L., and Sompolinsky, H. (1995). Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences,
92(9):3844–3848.
Bendor, D. and Wang, X. (2005). The neuronal representation of pitch in primate
auditory cortex. Nature, 436(7054):1161–1165.
Benjaminsson, S. (2013). On large-scale neural simulations and applications in neuroinformatics.
Benjaminsson, S., Fransson, P., and Lansner, A. (2010). A novel model-free data
analysis technique based on clustering in a mutual information space: application
to resting-state fmri. Frontiers in systems neuroscience, 4.
Benjaminsson, S., Herman, P., and Lansner, A. (2013). Neuromorphic Olfaction Chapter 6: Performance of a Computational Model of the Mammalian Olfactory
System, volume 3, page 173. CRC Press.
Benjaminsson, S. and Lansner, A. (2011). Adaptive sensor drift counteraction by a
modular neural network. In Chemical sensors, volume 36, pages E41–E41.
Benvenuti, G., B. A. M. G. C. F. (2011). Building a directional anticipatory response
along the motion trajectory in monkey area v1. Society for Neuroscience Meeting
2011, Washington DC, Program/Poster: 577.23/EE14.
Berg, H. C. and Purcell, E. M. (1977). Physics of chemoreception. Biophysical
journal, 20(2):193.
Bergel, A. (2010). Transforming the bcpnn learning rule for non-spiking units to a
learning rule for spiking units. Master’s Thesis in Biomedical Engineering; Royal
Institute of Technology, KTH, Stockholm, Sweden.
Berry, M. J., Brivanlou, I. H., Jordan, T. A., and Meister, M. (1999). Anticipation
of moving stimuli by the retina. Nature, 398(6725):334–338.
Berthet, P., Hellgren-Kotaleski, J., and Lansner, A. (2012). Action selection performance of a reconfigurable basal ganglia inspired model with hebbian–bayesian
go-nogo connectivity. Frontiers in behavioral neuroscience, 6.
Berthet, P. and Lansner, A. (2014). Optogenetic stimulation in a computational
model of the basal ganglia biases action selection and reward prediction error.
PloS one, 9(3):e90578.
Bhalla, U. S. (2014). Molecular computation in neurons: a modeling perspective.
Current opinion in neurobiology, 25:31–37.
Bi, G.-q. and Poo, M.-m. (1998). Synaptic modifications in cultured hippocampal
neurons: dependence on spike timing, synaptic strength, and postsynaptic cell
type. The Journal of neuroscience, 18(24):10464–10472.
Bialek, W. (1987). Physical limits to sensation and perception. Annual review of
biophysics and biophysical chemistry, 16(1):455–478.
Bialek, W. and Setayeshgar, S. (2005). Physical limits to biochemical signaling.
Proceedings of the National Academy of Sciences of the United States of America,
102(29):10040–10045.
Bienenstock, E. L., Cooper, L. N., and Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in
visual cortex. The Journal of Neuroscience, 2(1):32–48.
Bishop, C. M. et al. (1995). Neural networks for pattern recognition. Clarendon
press Oxford.
Biswal, B. B., Mennes, M., Zuo, X.-N., Gohel, S., Kelly, C., Smith, S. M., Beckmann,
C. F., Adelstein, J. S., Buckner, R. L., Colcombe, S., et al. (2010). Toward
discovery science of human brain function. Proceedings of the National Academy
of Sciences, 107(10):4734–4739.
Balboa, R. M. and Grzywacz, N. M. (2000). The role of early retinal lateral inhibition: more than maximizing luminance information. Visual Neuroscience, 17(1):77–89.
Bliss, T. V., Collingridge, G. L., et al. (1993). A synaptic model of memory: long-term potentiation in the hippocampus. Nature, 361(6407):31–39.
Bliss, T. V. and Gardner-Medwin, A. (1973). Long-lasting potentiation of synaptic
transmission in the dentate area of the unanaesthetized rabbit following stimulation of the perforant path. The Journal of physiology, 232(2):357–374.
Bliss, T. V. and Lømo, T. (1973). Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the
perforant path. The Journal of physiology, 232(2):331–356.
Boccaletti, S., Bianconi, G., Criado, R., Del Genio, C., Gómez-Gardeñes, J., Romance, M., Sendina-Nadal, I., Wang, Z., and Zanin, M. (2014). The structure
and dynamics of multilayer networks. Physics Reports, 544(1):1–122.
Boley, H. (2008). Logical foundations of cognitive science, course material.
Bower, J. M. and Beeman, D. (1995). The book of GENESIS: exploring realistic
neural models with the GEneral NEural SImulation System. The Electronic Library of Science.
Bowers, J. S. (2009). On the biological plausibility of grandmother cells: implications
for neural network theories in psychology and neuroscience. Psychological review,
116(1):220.
Bowers, J. S. (2010). More on grandmother cells and the biological implausibility
of pdp models of cognition: a reply to plaut and mcclelland (2010) and quian
quiroga and kreiman (2010).
Box, G. E. and Draper, N. R. (1987). Empirical model-building and response surfaces. Wiley.
BrainScaleS (2011). Brain-inspired multiscale computation in neuromorphic hybrid
systems. http://brainscales.kip.uni-heidelberg.de/.
Brause, R., Langsdorf, T., and Hepp, M. (1999). Neural data mining for credit
card fraud detection. In Tools with Artificial Intelligence, 1999. Proceedings. 11th
IEEE International Conference on, pages 103–106. IEEE.
Bressloff, P. C. (2014). Waves in neural media. Lecture Notes on Mathematical
Modelling in the Life Sciences, Springer, New York.
Brett, M., Johnsrude, I. S., and Owen, A. M. (2002). The problem of functional
localization in the human brain. Nature reviews neuroscience, 3(3):243–249.
Brette, R. and Gerstner, W. (2005). Adaptive exponential integrate-and-fire model
as an effective description of neuronal activity. Journal of neurophysiology,
94(5):3637–3642.
Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J. M., Diesmann, M., Morrison, A., Goodman, P. H., Harris Jr, F. C., et al. (2007). Simulation of networks of spiking neurons: a review of tools and strategies. Journal of
computational neuroscience, 23(3):349–398.
Briggman, K. L., Helmstaedter, M., and Denk, W. (2011). Wiring specificity in the
direction-selectivity circuit of the retina. Nature, 471(7337):183–188.
Broadwell, R. D. (1975). Olfactory relationships of the telencephalon and diencephalon in the rabbit. ii. an autoradiographic and horseradish peroxidase study
of the efferent connections of the anterior olfactory nucleus. Journal of Comparative Neurology, 164(4):389–409.
Brockwell, A. E., Rojas, A. L., and Kass, R. (2004). Recursive bayesian decoding of motor cortical signals by particle filtering. Journal of Neurophysiology,
91(4):1899–1907.
Brody, C. D. and Hopfield, J. (2003). Simple networks for spike-timing-based computation, with application to olfactory processing. Neuron, 37(5):843–852.
Brown, E. N., Frank, L. M., Tang, D., Quirk, M. C., and Wilson, M. A. (1998). A
statistical paradigm for neural spike train decoding applied to position prediction
from ensemble firing patterns of rat hippocampal place cells. The Journal of
Neuroscience, 18(18):7411–7425.
Brüderle, D., Petrovici, M. A., Vogginger, B., Ehrlich, M., Pfeil, T., Millner, S.,
Grübl, A., Wendt, K., Müller, E., Schwartz, M.-O., et al. (2011). A comprehensive workflow for general-purpose neural modeling with highly configurable
neuromorphic hardware systems. Biological cybernetics, 104(4-5):263–296.
Brunel, N. and Van Rossum, M. C. (2007). Lapicque’s 1907 paper: from frogs to
integrate-and-fire. Biological cybernetics, 97(5-6):337–339.
Brunjes, P. C., Illig, K. R., and Meyer, E. A. (2005). A field guide to the anterior
olfactory nucleus (cortex). Brain research reviews, 50(2):305–335.
Buck, L. and Axel, R. (1991). A novel multigene family may encode odorant receptors: a molecular basis for odor recognition. Cell, 65(1):175–187.
Buck, L. B. (1996). Information coding in the vertebrate olfactory system. Annual
review of neuroscience, 19(1):517–544.
Buller, D. J. (2005). Adapting minds: Evolutionary psychology and the persistent
quest for human nature. MIT Press.
Bullmore, E. and Sporns, O. (2009). Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience,
10(3):186–198.
Buonomano, D. V. and Merzenich, M. M. (1998). Cortical plasticity: from synapses
to maps. Annual review of neuroscience, 21(1):149–186.
Burgess, N. (2014). The 2014 nobel prize in physiology or medicine: A spatial model
for cognitive neuroscience. Neuron, 84(6):1120–1125.
Burkitt, A. N. (2006). A review of the integrate-and-fire neuron model: I. homogeneous synaptic input. Biological cybernetics, 95(1):1–19.
Burnham, K. P. and Anderson, D. R. (2002). Model selection and multimodel inference: a practical information-theoretic approach. Springer.
Butcher, J. C. (1987). The numerical analysis of ordinary differential equations:
Runge-Kutta and general linear methods. Wiley-Interscience.
Butts, D. A., Weng, C., Jin, J., Yeh, C.-I., Lesica, N. A., Alonso, J.-M., and Stanley,
G. B. (2007). Temporal precision in the neural code and the timescales of natural
vision. Nature, 449(7158):92–95.
Byrne, J. H., Heidelberger, R., and Waxham, M. N. (2014). From molecules to
networks: an introduction to cellular and molecular neuroscience. Academic Press.
Calimera, A., Macii, E., and Poncino, M. (2013). The human brain project and
neuromorphic computing. Functional neurology, 28(3):191.
Cang, J. and Feldheim, D. A. (2013). Developmental mechanisms of topographic
map formation and alignment. Annual review of neuroscience, 36:51–77.
Caporale, N. and Dan, Y. (2008). Spike timing-dependent plasticity: a hebbian
learning rule. Annu. Rev. Neurosci., 31:25–46.
Carandini, M. and Heeger, D. J. (2012). Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13(1):51–62.
Carandini, M., Heeger, D. J., and Movshon, J. A. (1997). Linearity and normalization in simple cells of the macaque primary visual cortex. The Journal of
Neuroscience, 17(21):8621–8644.
Carlson, N. R. (2012). Physiology of Behavior 11th Edition. Pearson.
Carpenter, G. A. and Grossberg, S. (2010). Adaptive resonance theory. Springer.
Carver, S., Roth, E., Cowan, N. J., and Fortune, E. S. (2008). Synaptic plasticity can
produce and enhance direction selectivity. PLoS computational biology, 4(2):e32.
Castillo, J., Muller, S., Caicedo, E., De Souza, A. F., and Bastos, T. (2014). Proposal
of a brain computer interface to command an autonomous car. In Biosignals
and Biorobotics Conference (2014): Biosignals and Robotics for Better and Safer
Living (BRC), 5th ISSNIP-IEEE, pages 1–6. IEEE.
Castro, J. B., Ramanathan, A., and Chennubhotla, C. S. (2013). Categorical dimensions of human odor descriptor space revealed by non-negative matrix factorization. PloS one, 8(9):e73289.
Chalmers, D. (1990). Why fodor and pylyshyn were wrong: The simplest refutation.
In Proceedings of the Twelfth Annual Conference of the Cognitive Science Society,
Cambridge, Mass.
Chapman, B. and Stryker, M. P. (1993). Development of orientation selectivity
in ferret visual cortex and effects of deprivation. The Journal of neuroscience,
13(12):5251–5262.
Chapuis, J. and Wilson, D. A. (2012). Bidirectional plasticity of cortical pattern
recognition and behavioral sensory acuity. Nature neuroscience, 15(1):155–161.
Chater, N., Tenenbaum, J. B., and Yuille, A. (2006). Probabilistic models of cognition: Conceptual foundations. Trends in cognitive sciences, 10(7):287–291.
Chow, A. Y., Chow, V. Y., Packo, K. H., Pollack, J. S., Peyman, G. A., and
Schuchard, R. (2004). The artificial silicon retina microchip for the treatment of
vision loss from retinitis pigmentosa. Archives of ophthalmology, 122(4):460–469.
Ciresan, D., Meier, U., and Schmidhuber, J. (2012). Multi-column deep neural
networks for image classification. In Computer Vision and Pattern Recognition
(CVPR), 2012 IEEE Conference on, pages 3642–3649. IEEE.
Clark, A. (2013). Whatever next? predictive brains, situated agents, and the future
of cognitive science. Behavioral and Brain Sciences, 36(03):181–204.
Clark, A. (2014). Perceiving as predicting. Perception and Its Modalities, page 23.
Clopath, C., Büsing, L., Vasilaki, E., and Gerstner, W. (2009). Connectivity reflects coding: A model of voltage-based spike-timing-dependent-plasticity with
homeostasis. Nature.
Clopath, C., Ziegler, L., Vasilaki, E., Büsing, L., and Gerstner, W. (2008). Tag-trigger-consolidation: a model of early and late long-term potentiation and depression. PLoS computational biology, 4(12):e1000248.
Connor, C. E. (2014). Cortical geography is destiny. Nature neuroscience, 17(12):1631–1632.
Coombes, S. (2005). Waves, bumps, and patterns in neural field theories. Biological
cybernetics, 93(2):91–108.
Coombes, S. (2006). Neural fields. Scholarpedia, 1(6):1373, revision 138631.
Cossell, L., Iacaruso, M. F., Muir, D. R., Houlton, R., Sader, E. N., Ko, H., Hofer,
S. B., and Mrsic-Flogel, T. D. (2015). Functional organization of excitatory synaptic strength in primary visual cortex. Nature.
Coultrip, R., Granger, R., and Lynch, G. (1992). A cortical model of winner-take-all
competition via lateral inhibition. Neural networks, 5(1):47–54.
Courville, A. C., Daw, N. D., and Touretzky, D. S. (2006). Bayesian theories of
conditioning in a changing world. Trends in cognitive sciences, 10(7):294–300.
Craver, C. F. (2002). Interlevel experiments and multilevel mechanisms in the neuroscience of memory. Philosophy of Science, 69(S3):S83–S97.
Creem, S. H. and Proffitt, D. R. (2001). Defining the cortical visual systems: “what”, “where”, and “how”. Acta psychologica, 107(1):43–68.
Crowley, J. C. and Katz, L. C. (2002). Ocular dominance development revisited.
Current opinion in neurobiology, 12(1):104–109.
Curcio, C. A., Sloan, K. R., Kalina, R. E., and Hendrickson, A. E. (1990). Human
photoreceptor topography. Journal of Comparative Neurology, 292(4):497–523.
Dan, Y. and Poo, M.-m. (2004). Spike timing-dependent plasticity of neural circuits.
Neuron, 44(1):23–30.
Daoudal, G. and Debanne, D. (2003). Long-term plasticity of intrinsic excitability:
learning rules and mechanisms. Learning & Memory, 10(6):456–465.
Davies, M. (1991). Concepts, connectionism, and the language of thought. Philosophy and connectionist theory, pages 229–257.
Davison, A. P., Feng, J., and Brown, D. (2003). Dendrodendritic inhibition and
simulated odor responses in a detailed olfactory bulb network model. Journal of
neurophysiology, 90(3):1921–1935.
Dayan, P. and Abbott, L. F. (2001). Theoretical neuroscience. Cambridge, MA:
MIT Press.
De Paola, V., Holtmaat, A., Knott, G., Song, S., Wilbrecht, L., Caroni, P., and
Svoboda, K. (2006). Cell type-specific structural plasticity of axonal branches
and boutons in the adult neocortex. Neuron, 49(6):861–875.
de Xivry, J.-J. O., Coppe, S., Blohm, G., and Lefevre, P. (2013). Kalman filtering
naturally accounts for visually guided and predictive smooth pursuit dynamics.
The Journal of Neuroscience, 33(44):17301–17313.
Debanne, D., Kopysova, I. L., Bras, H., and Ferrand, N. (1999). Gating of action potential propagation by an axonal a-like potassium conductance in the hippocampus: a new type of non-synaptic plasticity. Journal of Physiology-Paris,
93(4):285–296.
Debanne, D. and Poo, M.-M. (2010). Spike-timing dependent plasticity beyond synapse: pre- and post-synaptic plasticity of intrinsic neuronal excitability. Frontiers in synaptic neuroscience, 2.
Delbruck, T. (1993). Silicon retina with correlation-based, velocity-tuned pixels.
Neural Networks, IEEE Transactions on, 4(3):529–541.
Delbruck, T., van Schaik, A., and Hasler, J. (2014). Research topic: neuromorphic engineering systems and applications. a snapshot of neuromorphic systems
engineering. Frontiers in neuroscience, 8.
Deneve, S., Duhamel, J.-R., and Pouget, A. (2007). Optimal sensorimotor integration in recurrent cortical networks: a neural implementation of kalman filters.
The Journal of neuroscience, 27(21):5744–5756.
Deneve, S. and Pouget, A. (2004). Bayesian multisensory integration and crossmodal spatial links. Journal of Physiology-Paris, 98(1):249–258.
Denk, W., Briggman, K. L., and Helmstaedter, M. (2012). Structural neurobiology: missing link to a mechanistic understanding of neural computation. Nature
Reviews Neuroscience, 13(5):351–358.
Desai, N. S., Rutherford, L. C., and Turrigiano, G. G. (1999). Plasticity in the
intrinsic excitability of cortical pyramidal neurons. Nature neuroscience, 2(6):515–
520.
D’Esposito, M., Postle, B. R., and Rypma, B. (2000). Prefrontal cortical contributions to working memory: evidence from event-related fmri studies. Experimental
Brain Research, 133(1):3–11.
DeYoe, E. A. and Van Essen, D. C. (1988). Concurrent processing streams in monkey
visual cortex. Trends in neurosciences, 11(5):219–226.
Dinkelbach, H. Ü., Vitay, J., Beuth, F., and Hamker, F. H. (2012). Comparison
of gpu-and cpu-implementations of mean-firing rate neural networks on parallel
hardware. Network: Computation in Neural Systems, 23(4):212–236.
Djuric, P. M., Kotecha, J. H., Zhang, J., Huang, Y., Ghirmai, T., Bugallo, M. F.,
and Miguez, J. (2003). Particle filtering. Signal Processing Magazine, IEEE,
20(5):19–38.
Doya, K. (2007). Bayesian brain: Probabilistic approaches to neural coding. MIT
press.
Egana, J., Aylwin, M. L., and Maldonado, P. (2005). Odor response properties of neighboring mitral/tufted cells in the rat olfactory bulb. Neuroscience,
134(3):1069–1080.
Eliasmith, C. and Bechtel, W. (2003). Symbolic versus subsymbolic. Encyclopedia
of cognitive science.
Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., and
Rasmussen, D. (2012). A large-scale model of the functioning brain. science,
338(6111):1202–1205.
Enns, J. T. and Lleras, A. (2008). What’s next? new evidence for prediction in
human vision. Trends in cognitive sciences, 12(9):327–333.
Erickson, R. P. (2001). The evolution and implications of population and modular
neural coding ideas. Progress in brain research, 130:9–29.
Erlhagen, W. and Schöner, G. (2002). Dynamic field theory of movement preparation. Psychological review, 109(3):545.
Ermentrout, B. (1998). Neural networks as spatio-temporal pattern-forming systems. Reports on progress in physics, 61(4):353.
Ernst, M. O. and Bülthoff, H. H. (2004). Merging the senses into a robust percept.
Trends in cognitive sciences, 8(4):162–169.
FACETS-ITN (2009). Facets initial training network: From neuroscience to neuroinspired computing. http://facets.kip.uni-heidelberg.de/ITN/index.html.
Fairhall, A. (2014). The receptive field is dead. long live the receptive field? Current
opinion in neurobiology, 25:ix–xii.
Faisal, A. A., Selen, L. P., and Wolpert, D. M. (2008). Noise in the nervous system.
Nature Reviews Neuroscience, 9(4):292–303.
Farabet, C., Couprie, C., Najman, L., and LeCun, Y. (2013). Learning hierarchical
features for scene labeling. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 35(8):1915–1929.
Farahini, N., Hemani, A., Lansner, A., Clermidy, F., and Svensson, C. (2014). A
scalable custom simulation machine for the bayesian confidence propagation neural network model of the brain. In 2014 19th Asia and South Pacific Design
Automation Conference (ASP-DAC), pages 578–585. IEEE.
Farmer, T. A., Brown, M., and Tanenhaus, M. K. (2013). Prediction, explanation,
and the role of generative models in language processing. Behavioral and Brain
Sciences, 36(03):211–212.
Feldheim, D. A. and O’Leary, D. D. (2010). Visual map development: bidirectional
signaling, bifunctional guidance molecules, and competition. Cold Spring Harbor
perspectives in biology, 2(11):a001768.
Feldman, D. E. (2012). The spike-timing dependence of plasticity. Neuron, 75(4):556–571.
Feldman, J. A. and Ballard, D. H. (1982). Connectionist models and their properties.
Cognitive science, 6(3):205–254.
Felleman, D. J. and Van Essen, D. C. (1991). Distributed hierarchical processing in
the primate cerebral cortex. Cerebral cortex, 1(1):1–47.
Felsen, G., Shen, Y.-s., Yao, H., Spor, G., Li, C., and Dan, Y. (2002). Dynamic
modification of cortical orientation tuning mediated by recurrent connections.
Neuron, 36(5):945–954.
Feng, J. (2003). Computational neuroscience: a comprehensive approach. CRC
press.
Fernando, C. and Sojakka, S. (2003). Pattern recognition in a bucket. In Advances
in artificial life, pages 588–597. Springer.
Ferster, D. and Miller, K. D. (2000). Neural mechanisms of orientation selectivity
in the visual cortex. Annual review of neuroscience, 23(1):441–471.
Fidjeland, A. and Shanahan, M. (2010). Accelerated simulation of spiking neural
networks using gpus. In IJCNN, pages 1–8.
Fiebig, F. (2012). Memory consolidation through reinstatement in a connectionist
model of hippocampus and neocortex. Master’s Thesis in Systems, Control and
Robotics; Royal Institute of Technology, KTH, Stockholm, Sweden.
Fiebig, F. and Lansner, A. (2014). Memory consolidation from seconds to weeks:
A three-stage neural network model with autonomous reinstatement dynamics.
Frontiers in Computational Neuroscience, 8:64.
Field, G. D., Gauthier, J. L., Sher, A., Greschner, M., Machado, T. A., Jepson,
L. H., Shlens, J., Gunning, D. E., Mathieson, K., Dabrowski, W., et al. (2010).
Functional connectivity in the retina at the resolution of photoreceptors. Nature,
467(7316):673–677.
Filler, A. G. (2009). The history, development and impact of computed imaging
in neurological diagnosis and neurosurgery: Ct, mri, and dti. Nature Precedings,
7(1):1–69.
Fiorillo, C. D. (2008). Towards a general theory of neural computation based on
prediction by single neurons. PLoS One, 3(10):e3298.
Firestein, S. (2001). How the olfactory system makes sense of scents. Nature,
413(6852):211–218.
Fishell, G. and Heintz, N. (2013). The neuron identity problem: form meets function.
Neuron, 80(3):602–612.
Fodor, J. and McLaughlin, B. P. (1990). Connectionism and the problem of systematicity: Why smolensky’s solution doesn’t work. Cognition, 35(2):183–204.
Fodor, J. A. (1983). The modularity of mind: An essay on faculty psychology. MIT
press.
Fodor, J. A. and Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture:
A critical analysis. Cognition, 28(1):3–71.
Fornito, A., Zalesky, A., and Breakspear, M. (2013). Graph analysis of the human
connectome: promise, progress, and pitfalls. Neuroimage, 80:426–444.
Fournier, J., Monier, C., Pananceau, M., and Frégnac, Y. (2011). Adaptation of
the simple or complex nature of v1 receptive fields to visual statistics. Nature
neuroscience, 14(8):1053–1060.
Franco, M. I., Turin, L., Mershin, A., and Skoulakis, E. M. (2011). Molecular
vibration-sensing component in drosophila melanogaster olfaction. Proceedings of
the National Academy of Sciences, 108(9):3797–3802.
Freeman, W. J. (2000). Neurodynamics: An Exploration in Mesoscopic Brain Dynamics. Springer Science & Business Media.
Freeman, W. J. and Erwin, H. (2008). Freeman K-set. Scholarpedia, 3(2):3238, revision 91278.
French, R. M. (1999). Catastrophic forgetting in connectionist networks. Trends in
cognitive sciences, 3(4):128–135.
Frey, U. and Morris, R. G. (1997). Synaptic tagging and long-term potentiation.
Nature, 385(6616):533–536.
Friedberg, M. H., Lee, S. M., and Ebner, F. F. (1999). Modulation of receptive field
properties of thalamic somatosensory neurons by the depth of anesthesia. Journal
of Neurophysiology, 81(5):2243–2252.
Friston, K. (2009). The free-energy principle: a rough guide to the brain? Trends
in cognitive sciences, 13(7):293–301.
Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2):127–138.
Friston, K. and Kiebel, S. (2009). Predictive coding under the free-energy principle. Philosophical Transactions of the Royal Society B: Biological Sciences,
364(1521):1211–1221.
Friston, K., Moran, R., and Seth, A. K. (2013). Analysing connectivity with
granger causality and dynamic causal modelling. Current opinion in neurobiology, 23(2):172–178.
Friston, K. J. (2011). Functional and effective connectivity: a review. Brain connectivity, 1(1):13–36.
Froemke, R. C., Poo, M.-m., and Dan, Y. (2005). Spike-timing-dependent synaptic
plasticity depends on dendritic location. Nature, 434(7030):221–225.
Fukunaga, I., Berning, M., Kollo, M., Schmaltz, A., and Schaefer, A. T. (2012). Two
distinct channels of olfactory bulb output. Neuron, 75(2):320–329.
Fukushima, K., Yamaguchi, Y., Yasuda, M., and Nagata, S. (1970). An electronic
model of the retina. Proceedings of the IEEE, 58(12):1950–1951.
Furber, S., Lester, D., Plana, L., Garside, J., Painkras, E., Temple, S., and Brown,
A. (2012). Overview of the spinnaker system architecture.
Furber, S. and Temple, S. (2007). Neural systems engineering. Journal of the Royal
Society interface, 4(13):193–206.
Furber, S. and Temple, S. (2008). Neural systems engineering. In Fulcher, J. and
Jain, L., editors, Computational Intelligence: A Compendium, volume 115 of
Studies in Computational Intelligence, pages 763–796. Springer Berlin Heidelberg.
Furber, S. B., Lester, D. R., Plana, L. A., Garside, J. D., Painkras, E., Temple,
S., and Brown, A. D. (2013). Overview of the spinnaker system architecture.
Computers, IEEE Transactions on, 62(12):2454–2467.
Fyhn, M., Molden, S., Witter, M. P., Moser, E. I., and Moser, M.-B. (2004). Spatial
representation in the entorhinal cortex. Science, 305(5688):1258–1264.
Gaillard, I., Rouquier, S., and Giorgi, D. (2004). Olfactory receptors. Cellular and
Molecular Life Sciences CMLS, 61(4):456–469.
Gaillard, I., Rouquier, S., Pin, J.-P., Mollard, P., Richard, S., Barnabé, C., Demaille,
J., and Giorgi, D. (2002). A single olfactory receptor specifically binds a set of
odorant molecules. European Journal of Neuroscience, 15(3):409–418.
Gane, S., Georganakis, D., Maniati, K., Vamvakias, M., Ragoussis, N., Skoulakis,
E. M., and Turin, L. (2013). Molecular vibration-sensing component in human
olfaction. PloS one, 8(1):e55780.
Gao, Y., Black, M. J., Bienenstock, E., Shoham, S., and Donoghue, J. P. (2002).
Probabilistic inference of hand motion from neural activity in motor cortex. Advances in neural information processing systems, 1:213–220.
Garcia, C. R. (2010). A compartmental model of a spiking and adapting olfactory
receptor neuron for use in large-scale neuronal network models of the olfactory
system. Master’s Thesis in Biomedical Engineering; Royal Institute of Technology,
KTH, Stockholm, Sweden.
Gardner, J. W. and Bartlett, P. N. (1999). Electronic noses: principles and applications, volume 233. Oxford University Press New York.
Garson, J. (2012). Connectionism. In Zalta, E. N., editor, The Stanford Encyclopedia
of Philosophy. Winter 2012 edition.
Gayler, R. W. (2004). Vector symbolic architectures answer jackendoff’s challenges
for cognitive neuroscience. arXiv preprint cs/0412059.
Georgopoulos, A. P., Kalaska, J. F., Caminiti, R., and Massey, J. T. (1982). On
the relations between the direction of two-dimensional arm movements and cell
discharge in primate motor cortex. The Journal of Neuroscience, 2(11):1527–1537.
Georgopoulos, A. P., Schwartz, A. B., and Kettner, R. E. (1986). Neuronal population coding of movement direction. Science, 233(4771):1416–1419.
Gerstein, G. L. and Mandelbrot, B. (1964). Random walk models for the spike
activity of a single neuron. Biophysical journal, 4(1 Pt 1):41.
Gerstner, W. and Brette, R. (2009). Adaptive exponential integrate-and-fire model.
Scholarpedia, 4(6):8427.
Gerstner, W., Kempter, R., van Hemmen, J. L., and Wagner, H. (1996). A neuronal learning rule for sub-millisecond temporal coding. Nature, 383:76–78.
Gerstner, W. and Kistler, W. M. (2002). Spiking neuron models: Single neurons,
populations, plasticity. Cambridge university press.
Gerstner, W., Ritz, R., and Van Hemmen, J. L. (1993). Why spikes? hebbian
learning and retrieval of time-resolved excitation patterns. Biological cybernetics,
69(5-6):503–515.
Gewaltig, M.-O. and Diesmann, M. (2007). NEST (NEural Simulation Tool). Scholarpedia, 2(4):1430.
Ghosh, S., Larson, S. D., Hefzi, H., Marnoy, Z., Cutforth, T., Dokka, K., and
Baldwin, K. K. (2011). Sensory maps in the olfactory cortex defined by longrange viral tracing of single neurons. Nature, 472(7342):217–220.
Gire, D. H., Restrepo, D., Sejnowski, T. J., Greer, C., De Carlos, J. A., and Lopez-Mascaraque, L. (2013a). Temporal processing in the olfactory system: can we see
a smell? Neuron, 78(3):416–432.
Gire, D. H., Whitesell, J. D., Doucette, W., and Restrepo, D. (2013b). Information
for decision-making and stimulus identification is multiplexed in sensory cortex.
Nature neuroscience.
Göhring, D., Latotzky, D., Wang, M., and Rojas, R. (2013). Semi-autonomous car
control using brain computer interfaces. In Intelligent Autonomous Systems 12,
pages 393–408. Springer.
Goldberg, J. A., Rokni, U., and Sompolinsky, H. (2004). Patterns of ongoing activity
and the functional architecture of the primary visual cortex. Neuron, 42(3):489–
500.
Goldblum, N. (2001). The brain-shaped mind: What the brain can tell us about the
mind. Cambridge University Press.
Goodale, M. and Milner, D. (2013). Sight unseen: An exploration of conscious and
unconscious vision. Oxford University Press.
Gottfried, J. A. (2010). Central mechanisms of odour object perception. Nature Reviews Neuroscience, 11(9):628–641.
Goulet, J. and Ermentrout, G. B. (2011). The mechanisms for compression and
reflection of cortical waves. Biological cybernetics, 105(3-4):253–268.
Granger, R. and Lynch, G. (1991). Higher olfactory processes: perceptual learning
and memory. Current opinion in neurobiology, 1(2):209–214.
Grill-Spector, K. and Malach, R. (2004). The human visual cortex. Annu. Rev.
Neurosci., 27:649–677.
Groh, J. M., Born, R. T., and Newsome, W. T. (1997). How is a sensory map read
out? effects of microstimulation in visual area mt on saccades and smooth pursuit
eye movements. The Journal of Neuroscience, 17(11):4312–4330.
Gronenberg, W., Raikhelkar, A., Abshire, E., Stevens, J., Epstein, E., Loyola,
K., Rauscher, M., and Buchmann, S. (2014). Honeybees (apis mellifera) learn
to discriminate the smell of organic compounds from their respective deuterated isotopomers. Proceedings of the Royal Society B: Biological Sciences,
281(1778):20133089.
Grossberg, S. (1976). Adaptive pattern classification and universal recoding: I.
parallel development and coding of neural feature detectors. Biological cybernetics,
23(3):121–134.
Grossberg, S. (2013). Adaptive resonance theory. Scholarpedia, 8(5):1569, revision 145360.
Guillery, R. and Sherman, S. M. (2002). Thalamic relay functions and their role in
corticocortical communication: generalizations from the visual system. Neuron,
33(2):163–175.
Guo, K., Robertson, R. G., Pulgarin, M., Nevado, A., Panzeri, S., Thiele, A., and
Young, M. P. (2007). Spatio-temporal prediction and inference by v1 neurons.
European Journal of Neuroscience, 26(4):1045–1054.
Gurney, K. N., Humphries, M. D., and Redgrave, P. (2015). A new framework
for cortico-striatal plasticity: Behavioural theory meets in vitro data at the
reinforcement-action interface. PLoS biology, 13(1):e1002034.
Gutierrez-Osuna, R. (2002). Pattern analysis for machine olfaction: a review. Sensors Journal, IEEE, 2(3):189–202.
Guyonneau, R., VanRullen, R., and Thorpe, S. J. (2005). Neurons tune to the
earliest spikes through STDP. Neural Computation, 17(4):859–879.
BIBLIOGRAPHY
153
Haberly, L. B. and Bower, J. M. (1989). Olfactory cortex: model circuit for study
of associative memory? Trends in neurosciences, 12(7):258–264.
Haberly, L. B. and Price, J. L. (1977). The axonal projection patterns of the mitral
and tufted cells of the olfactory bulb in the rat. Brain research, 129(1):152–157.
Haddad, R., Khan, R., Takahashi, Y. K., Mori, K., Harel, D., and Sobel, N. (2008a).
A metric for odorant comparison. Nature methods, 5(5):425–429.
Haddad, R., Lanjuin, A., Madisen, L., Zeng, H., Murthy, V. N., and Uchida, N.
(2013). Olfactory cortical neurons read out a relative time code in the olfactory
bulb. Nature neuroscience, 16(7):949–957.
Haddad, R., Lapid, H., Harel, D., and Sobel, N. (2008b). Measuring smells. Current
opinion in neurobiology, 18(4):438–444.
Haddad, R., Weiss, T., Khan, R., Nadler, B., Mandairon, N., Bensafi, M., Schneidman, E., and Sobel, N. (2010). Global features of neural activity in the olfactory
system form a parallel code that predicts olfactory behavior and perception. The
Journal of Neuroscience, 30(27):9017–9026.
Hadley, R. F. (1994). Systematicity in connectionist language learning. Mind &
Language, 9(3):247–272.
Hafting, T., Fyhn, M., Molden, S., Moser, M.-B., and Moser, E. I. (2005). Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052):801–806.
Hall, S., Bourke, P., and Guo, K. (2014). Low level constraints on dynamic contour
path integration. PloS one, 9(6):e98268.
Hamed, S. B., Duhamel, J.-R., Bremmer, F., and Graf, W. (2002). Visual receptive
field modulation in the lateral intraparietal area during attentive fixation and free
gaze. Cerebral Cortex, 12(3):234–245.
Harris, K. D. and Mrsic-Flogel, T. D. (2013). Cortical connectivity and sensory
coding. Nature, 503(7474):51–58.
Hasselmo, M. E. (1993). Acetylcholine and learning in a cortical associative memory.
Neural computation, 5(1):32–44.
Hastie, T., Tibshirani, R., and Friedman, J. (2009). The elements of statistical
learning, volume 2. Springer.
Hawkins, D. M. (2004). The problem of overfitting. Journal of chemical information
and computer sciences, 44(1):1–12.
Hawkins, J. and Blakeslee, S. (2007). On intelligence. Macmillan.
Haykin, S. S. (2001). Kalman filtering and neural networks. Wiley Online Library.
Haynes, J.-D. and Rees, G. (2006). Decoding mental states from brain activity in
humans. Nature Reviews Neuroscience, 7(7):523–534.
Hebb, D. O. (1949). Organization of behavior. Wiley, New York.
Heeger, D. J. (1992). Normalization of cell responses in cat striate cortex. Visual
neuroscience, 9(02):181–197.
Heeger, D. J. (1993). Modeling simple-cell direction selectivity with normalized,
half-squared, linear operators. Journal of Neurophysiology, 70(5):1885–1898.
Hendin, O., Horn, D., and Tsodyks, M. V. (1998). Associative memory and segmentation in an oscillatory neural model of the olfactory bulb. Journal of Computational Neuroscience, 5(2):157–169.
Hensch, T. K. (2005). Critical period plasticity in local cortical circuits. Nature
Reviews Neuroscience, 6(11):877–888.
Herz, A. V., Gollisch, T., Machens, C. K., and Jaeger, D. (2006). Modeling single-neuron dynamics and computations: a balance of detail and abstraction. Science,
314(5796):80–85.
Hilgetag, C. C. and Kaiser, M. (2004). Clustered organization of cortical connectivity. Neuroinformatics, 2(3):353–360.
Hill, A. (1936). Excitation and accommodation in nerve. Proceedings of the Royal
Society of London. Series B, Biological Sciences, 119(814):305–355.
Hines, M. L. and Carnevale, N. T. (1997). The NEURON simulation environment.
Neural computation, 9(6):1179–1209.
Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A.-r., Jaitly, N., Senior, A.,
Vanhoucke, V., Nguyen, P., Sainath, T. N., et al. (2012). Deep neural networks
for acoustic modeling in speech recognition: The shared views of four research
groups. Signal Processing Magazine, IEEE, 29(6):82–97.
Hinton, G., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm for deep
belief nets. Neural computation, 18(7):1527–1554.
Hinton, G. and Sejnowski, T. (1983a). Analysing cooperative computation. In
Proceedings of the fifth annual conference of the cognitive science society.
Hinton, G. E. and Salakhutdinov, R. R. (2006). Reducing the dimensionality of data
with neural networks. Science, 313(5786):504–507.
Hinton, G. E. and Sejnowski, T. J. (1983b). Optimal perceptual inference. In
Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,
pages 448–453. IEEE New York.
Hirsch, J. A. and Martinez, L. M. (2006). Circuits that build visual cortical receptive
fields. Trends in neurosciences, 29(1):30–39.
Hodgkin, A. L. and Huxley, A. F. (1952). A quantitative description of membrane
current and its application to conduction and excitation in nerve. The Journal of
physiology, 117(4):500–544.
Holst, A. (1997). The use of a Bayesian neural network model for classification
tasks. Dissertation in Computer Science, Stockholm University.
Holtmaat, A. and Svoboda, K. (2009). Experience-dependent structural synaptic
plasticity in the mammalian brain. Nature Reviews Neuroscience, 10(9):647–658.
Honey, C. J., Thivierge, J.-P., and Sporns, O. (2010). Can structure predict function
in the human brain? Neuroimage, 52(3):766–776.
Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences,
79(8):2554–2558.
Hopfield, J. J. (1995). Pattern recognition computation using action potential timing
for stimulus representation. Nature, 376(6535):33–36.
Hopfield, J. J. (2007). Hopfield network. Scholarpedia, 2(5):1977. Revision 91362.
Hosoya, T., Baccus, S. A., and Meister, M. (2005). Dynamic predictive coding by
the retina. Nature, 436(7047):71–77.
Huang, Y. and Rao, R. P. (2011). Predictive coding. Wiley Interdisciplinary Reviews:
Cognitive Science, 2(5):580–593.
Hubel, D. H. and Wiesel, T. N. (1962). Receptive fields, binocular interaction
and functional architecture in the cat’s visual cortex. The Journal of physiology,
160(1):106–154.
Hubel, D. H. and Wiesel, T. N. (1963). Receptive fields of cells in striate cortex of
very young, visually inexperienced kittens. Journal of Neurophysiology, 26:994–1002.
Huber, A. B., Kolodkin, A. L., Ginty, D. D., and Cloutier, J.-F. (2003). Signaling at
the growth cone: ligand-receptor complexes and the control of axon growth and
guidance. Annual review of neuroscience, 26(1):509–563.
Huberman, A. D., Feller, M. B., and Chapman, B. (2008). Mechanisms underlying
development of visual maps and receptive fields. Annual review of neuroscience,
31:479.
Hung, C. P., Kreiman, G., Poggio, T., and DiCarlo, J. J. (2005). Fast readout of
object identity from macaque inferior temporal cortex. Science, 310(5749):863–
866.
Hutchison, R. M., Womelsdorf, T., Allen, E. A., Bandettini, P. A., Calhoun, V. D.,
Corbetta, M., Della Penna, S., Duyn, J. H., Glover, G. H., Gonzalez-Castillo, J.,
et al. (2013). Dynamic functional connectivity: promise, issues, and interpretations. Neuroimage, 80:360–378.
Iiguni, Y., Sakai, H., and Tokumaru, H. (1992). A real-time learning algorithm
for a multilayered neural network based on the extended Kalman filter. Signal
Processing, IEEE Transactions on, 40(4):959–966.
Ikemoto, S. (2003). Involvement of the olfactory tubercle in cocaine reward: intracranial self-administration studies. The Journal of neuroscience, 23(28):9305–9311.
Ikemoto, S. (2007). Dopamine reward circuitry: two projection systems from the
ventral midbrain to the nucleus accumbens–olfactory tubercle complex. Brain
research reviews, 56(1):27–78.
Ilg, U. J. and Thier, P. (2003). Visual tracking neurons in primate area MST are
activated by smooth-pursuit eye movements of an “imaginary” target. Journal of
Neurophysiology, 90(3):1489–1502.
Indiveri, G. and Horiuchi, T. K. (2011). Frontiers in neuromorphic engineering.
Frontiers in Neuroscience, 5.
Indiveri, G., Linares-Barranco, B., Hamilton, T. J., Van Schaik, A., Etienne-Cummings, R., Delbruck, T., Liu, S.-C., Dudek, P., Häfliger, P., Renaud, S.,
et al. (2011). Neuromorphic silicon neuron circuits. Frontiers in neuroscience, 5.
Isaacson, J. S. (2001). Mechanisms governing dendritic γ-aminobutyric acid (GABA)
release in the rat olfactory bulb. Proceedings of the National Academy of Sciences,
98(1):337–342.
Isaacson, J. S. (2010). Odor representations in mammalian cortical circuits. Current
opinion in neurobiology, 20(3):328–331.
Isard, M. and Blake, A. (1998). Condensation—conditional density propagation for
visual tracking. International journal of computer vision, 29(1):5–28.
Itti, L. and Koch, C. (2001). Computational modelling of visual attention. Nature
reviews neuroscience, 2(3):194–203.
Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE
transactions on neural networks, 15(5):1063–1070.
Izhikevich, E. M. (2006). Polychronization: computation with spikes. Neural computation, 18(2):245–282.
Izhikevich, E. M. (2007a). Dynamical systems in neuroscience. MIT press.
Izhikevich, E. M. (2007b). Solving the distal reward problem through linkage of
STDP and dopamine signaling. Cerebral cortex, 17(10):2443–2452.
Izhikevich, E. M. and Desai, N. S. (2003). Relating STDP to BCM. Neural computation,
15(7):1511–1523.
Izhikevich, E. M. and Edelman, G. M. (2008). Large-scale model of mammalian thalamocortical systems. Proceedings of the national academy of sciences,
105(9):3593–3598.
Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions
on neural networks, 14(6):1569–1572.
Jaeger, H. (2001). The “echo state” approach to analysing and training recurrent
neural networks-with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report, 148:34.
Jaeger, H. (2007). Echo state network. Scholarpedia, 2(9):2330. Revision 143667.
Jain, A. K., Duin, R. P. W., and Mao, J. (2000). Statistical pattern recognition:
A review. Pattern Analysis and Machine Intelligence, IEEE Transactions on,
22(1):4–37.
James, W. (1984). Psychology, briefer course, volume 14. Harvard University Press.
Jansson, Y. (2014). Normalization in a cortical hypercolumn - the modulatory effects
of a highly structured recurrent spiking neural network. Master of Science Thesis
in Computer Science, Royal Institute of Technology, KTH, Stockholm, Sweden.
Jo, S. H., Chang, T., Ebong, I., Bhadviya, B. B., Mazumder, P., and Lu, W. (2010).
Nanoscale memristor device as synapse in neuromorphic systems. Nano letters,
10(4):1297–1301.
Junek, S., Kludt, E., Wolf, F., and Schild, D. (2010). Olfactory coding with patterns
of response latencies. Neuron, 67(5):872–884.
Jung, R., Jung, S. V., and Srimattirumalaparle, B. (2014). Neuromorphic controlled
powered orthotic and prosthetic system. US Patent 8,790,282.
Kahneman, D. and Tversky, A. (1973). On the psychology of prediction. Psychological review, 80(4):237.
Kalman, R. E. (1960). A new approach to linear filtering and prediction problems.
Journal of Basic Engineering, 82(1):35–45.
Kapadia, M. K., Westheimer, G., and Gilbert, C. D. (2000). Spatial distribution of
contextual interactions in primary visual cortex and in visual perception. Journal
of neurophysiology, 84(4):2048–2062.
Kaplan, B. (2013). https://github.com/bernhardkaplan/bcpnnmt/tree/paper_submission.
Link to the GitHub repository containing the simulation code for the motion-based
prediction paper.
Kaplan, B. (2014a). https://github.com/bernhardkaplan/bcpnnmt/tree/ijcnn_paper_submission.
Link to the GitHub repository containing the simulation code for the IJCNN paper.
Kaplan, B. (2014b). https://github.com/bernhardkaplan/OlfactorySystem/tree/FrontiersPublicati
Link to the GitHub repository containing the simulation code for the olfactory
system model.
Kaplan, B. A., Khoei, M. A., Lansner, A., and Perrinet, L. U. (2014). Signature
of an anticipatory response in area V1 as modeled by a probabilistic model and a
spiking neural network. In Neural Networks (IJCNN), 2014 International Joint
Conference on, pages 3205–3212. IEEE.
Kaplan, B. A. and Lansner, A. (2014). A spiking neural network model of self-organized pattern recognition in the early mammalian olfactory system. Frontiers
in neural circuits, 8.
Kaplan, B. A., Lansner, A., Masson, G. S., and Perrinet, L. U. (2013). Anisotropic
connectivity implements motion-based prediction in a spiking neural network.
Frontiers in computational neuroscience, 7.
Kaschube, M. (2014). Neural maps versus salt-and-pepper organization in visual
cortex. Current opinion in neurobiology, 24:95–102.
Kastner, D. B. and Baccus, S. A. (2014). Insights from the retina into the diverse and
general computations of adaptation, detection, and prediction. Current opinion
in neurobiology, 25:63–69.
Keller, A. and Vosshall, L. B. (2004). A psychophysical test of the vibration theory
of olfaction. Nature neuroscience, 7(4):337–338.
Kempter, R., Gerstner, W., and Van Hemmen, J. L. (1999). Hebbian learning and
spiking neurons. Physical Review E, 59(4):4498.
Khan, R. M., Luk, C.-H., Flinker, A., Aggarwal, A., Lapid, H., Haddad, R., and Sobel, N. (2007). Predicting odor pleasantness from odorant structure: pleasantness
as a reflection of the physical world. The Journal of Neuroscience, 27(37):10015–
10023.
Khoei, M. A. (2014). Motion-based position coding in the visual system: a computational study. Doctoral thesis, Aix-Marseille Université, École de la Vie et de la
Santé.
Khoei, M. A., Masson, G. S., and Perrinet, L. U. (2013). Motion-based prediction
explains the role of tracking in motion extrapolation. Journal of Physiology-Paris,
107(5):409–420.
Kim, D. H., Phillips, M. E., Chang, A. Y., Patel, H. K., Nguyen, K. T., and Willhite,
D. C. (2011). Lateral connectivity in the olfactory bulb is sparse and segregated.
Frontiers in neural circuits, 5.
Kingsbury, M. A. and Finlay, B. L. (2001). The cortex in multidimensional space:
where do cortical areas come from? Developmental Science, 4(2):125–142.
Kispersky, T. and White, J. A. (2008). Stochastic models of ion channel gating.
Scholarpedia, 3(1):1327. Revision 137554.
Kleene, S. J. (2008). The electrochemical basis of odor transduction in vertebrate
olfactory cilia. Chemical senses, 33(9):839–859.
Kloppenburg, P. and Nawrot, M. P. (2014). Neural coding: Sparse but on time.
Current Biology, 24(19):R957–R959.
Knierim, J. J. and Zhang, K. (2012). Attractor dynamics of spatially correlated
neural activity in the limbic system. Annual review of neuroscience, 35:267–285.
Knight, B. W. (1972). Dynamics of encoding in a population of neurons. The Journal
of general physiology, 59(6):734–766.
Ko, H., Cossell, L., Baragli, C., Antolik, J., Clopath, C., Hofer, S. B., and Mrsic-Flogel, T. D. (2013). The emergence of functional microcircuits in visual cortex.
Nature, 496(7443):96–100.
Koch, C. (2004). Biophysics of computation: information processing in single neurons. Oxford University Press.
Koch, C. and Ullman, S. (1987). Shifts in selective visual attention: towards the
underlying neural circuitry. In Matters of intelligence, pages 115–141. Springer.
Kohonen, T. (1988). Self-organization and associative memory. Springer Series in
Information Sciences, volume 8. Springer-Verlag, Berlin Heidelberg New York.
Komiyama, T. and Luo, L. (2006). Development of wiring specificity in the olfactory
system. Current opinion in neurobiology, 16(1):67–73.
Konorski, J. (1948). Conditioned reflexes and neuron organization.
Kording, K. P. (2014). Bayesian statistics: relevant for the brain? Current opinion
in neurobiology, 25:130–133.
Körding, K. P. and Wolpert, D. M. (2006). Bayesian decision theory in sensorimotor
control. Trends in cognitive sciences, 10(7):319–326.
Kosaka, K., Toida, K., Aika, Y., and Kosaka, T. (1998). How simple is the organization of the olfactory glomerulus? The heterogeneity of so-called periglomerular
cells. Neuroscience research, 30(2):101–110.
Kotaleski, J. H. and Blackwell, K. T. (2010). Modelling the molecular mechanisms
of synaptic plasticity using systems biology approaches. Nature Reviews Neuroscience, 11(4):239–251.
Koulakov, A. A. and Rinberg, D. (2011). Sparse incomplete representations: a
potential role of olfactory granule cells. Neuron, 72(1):124–136.
Kozma, R. and Freeman, W. J. (2001). Chaotic resonance—methods and applications for robust classification of noisy and variable patterns. International Journal
of Bifurcation and Chaos, 11(06):1607–1629.
Kravitz, D. J., Saleem, K. S., Baker, C. I., and Mishkin, M. (2011). A new neural
framework for visuospatial processing. Nature Reviews Neuroscience, 12(4):217–
230.
Krishnamurthy, P., Silberberg, G., and Lansner, A. (2012). A cortical attractor network with Martinotti cells driven by facilitating synapses. PloS one, 7(4):e30752.
Krug, K., Akerman, C. J., and Thompson, I. D. (2001). Responses of neurons in
neonatal cortex and thalamus to patterned visual stimulation through the naturally closed lids. Journal of Neurophysiology, 85(4):1436–1443.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology.
Penguin.
Lamprecht, R. and LeDoux, J. (2004). Structural plasticity and memory. Nature
Reviews Neuroscience, 5(1):45–54.
Lansner, A. (2009). Associative memory models: from the cell-assembly theory to
biophysically detailed cortex simulations. Trends in neurosciences, 32(3):178–186.
Lansner, A., Benjaminsson, S., and Johansson, C. (2009). From ANN to biomimetic
information processing. In Biologically Inspired Signal Processing for Chemical
Sensing, pages 33–43. Springer.
Lansner, A. and Diesmann, M. (2012). Virtues, pitfalls, and methodology of neuronal
network modeling and simulations on supercomputers. In Computational Systems
Neurobiology, pages 283–315. Springer.
Lansner, A. and Ekeberg, Ö. (1989). A one-layer feedback artificial neural network
with a Bayesian learning rule. International journal of neural systems, 1(01):77–
87.
Lansner, A. and Fransén, E. (1992). Modelling hebbian cell assemblies comprised of
cortical neurons. Network: Computation in Neural Systems, 3(2):105–119.
Lansner, A., Hemani, A., and Farahini, N. (2014). Spiking brain models: Computation, memory and communication constraints for custom hardware implementation.
In 2014 19th Asia and South Pacific Design Automation Conference (ASP-DAC),
pages 556–562. IEEE.
Lansner, A. and Holst, A. (1996). A higher order Bayesian neural network with
spiking units. International Journal of Neural Systems, 7(02):115–128.
Lansner, A., Marklund, P., Sikström, S., and Nilsson, L.-G. (2013). Reactivation in
working memory: an attractor network model of free recall. PloS one, 8(8):e73776.
Lapicque, L. (1907). Recherches quantitatives sur l’excitation électrique des nerfs
traitée comme une polarisation. J. Physiol. Pathol. Gen, 9(1):620–635.
Larose, D. T. (2014). Discovering knowledge in data: an introduction to data mining.
John Wiley & Sons.
Lashley, K. S. (1951). The problem of serial order in behavior. Bobbs-Merrill.
Latham, P. E., Deneve, S., and Pouget, A. (2003). Optimal computation with
attractor networks. Journal of Physiology-Paris, 97(4):683–694.
Laughlin, S. B. (2001). Energy as a constraint on the coding and processing of
sensory information. Current opinion in neurobiology, 11(4):475–480.
Laurent, G. (2002). Olfactory network dynamics and the coding of multidimensional
signals. Nature reviews neuroscience, 3(11):884–895.
Laurent, G. and Davidowitz, H. (1994). Encoding of olfactory information with
oscillating neural assemblies. Science, 265(5180):1872–1875.
Laurent, G., Stopfer, M., Friedrich, R. W., Rabinovich, M. I., Volkovskii, A., and
Abarbanel, H. D. (2001). Odor encoding as an active, dynamical process: experiments, computation, and theory. Annual review of neuroscience, 24(1):263–297.
Lazarini, F. and Lledo, P.-M. (2011). Is adult neurogenesis essential for olfaction?
Trends in neurosciences, 34(1):20–30.
Lazzaro, J. and Mead, C. (1989). Silicon modeling of pitch perception. Proceedings
of the National Academy of Sciences, 86(23):9597–9601.
Le Roux, N. and Bengio, Y. (2008). Representational power of restricted Boltzmann
machines and deep belief networks. Neural Computation, 20(6):1631–1649.
Leal, W. S. (2013). Odorant reception in insects: roles of receptors, binding proteins,
and degrading enzymes. Annual review of entomology, 58:373–391.
Lebedev, M. A. and Nicolelis, M. A. (2006). Brain–machine interfaces: past, present
and future. TRENDS in Neurosciences, 29(9):536–546.
Lee, C., Rohrer, W., and Sparks, D. (1988). Population coding of saccadic eye
movements by neurons in the superior colliculus. Nature, 332(6162):357–360.
Lee, D. K., Itti, L., Koch, C., and Braun, J. (1999). Attention activates winner-take-all competition among visual filters. Nature neuroscience, 2(4):375–381.
Lee, T. S. and Mumford, D. (2003). Hierarchical Bayesian inference in the visual
cortex. JOSA A, 20(7):1434–1448.
Lennie, P. (2003). The cost of cortical computation. Current biology, 13(6):493–497.
Lepousez, G., Nissant, A., Bryant, A. K., Gheusi, G., Greer, C. A., and Lledo, P.-M.
(2014). Olfactory learning promotes input-specific synaptic plasticity in adult-born neurons. Proceedings of the National Academy of Sciences, 111(38):13984–
13989.
Lepousez, G., Valley, M. T., and Lledo, P.-M. (2013). The impact of adult neurogenesis on olfactory bulb circuits and computations. Annual review of physiology,
75:339–363.
Lerner, B., Guterman, H., Aladjem, M., et al. (1999). A comparative study of
neural network based feature extraction paradigms. Pattern Recognition Letters,
20(1):7–14.
Levy, S. D. and Gayler, R. (2008). Vector symbolic architectures: A new building
material for artificial general intelligence. In Proceedings of the 2008 conference
on Artificial General Intelligence 2008: Proceedings of the First AGI Conference,
pages 414–418. IOS Press.
Levy, W. B. and Baxter, R. A. (1996). Energy efficient neural codes. Neural computation, 8(3):531–543.
Lewis, N., Bornat, Y., and Renaud, S. (2009). Spiking neural network hardware implementation: from single neurons to networks. In NeuroComp'09, Conférence
de Neurosciences Computationnelles.
Li, G., Lou, Z., Wang, L., Li, X., and Freeman, W. J. (2005). Application of chaotic
neural model based on olfactory system on pattern recognitions. In Advances in
Natural Computation, pages 378–381. Springer.
Li, Y., Lu, H., Cheng, P.-l., Ge, S., Xu, H., Shi, S.-H., and Dan, Y. (2012). Clonally
related visual cortical neurons show similar stimulus feature selectivity. Nature,
486(7401):118–121.
Linares-Barranco, B. and Serrano-Gotarredona, T. (2009). Memristance can explain
spike-time-dependent-plasticity in neural synapses. Nature precedings, pages 1–4.
Lindeberg, T. (2013a). A computational theory of visual receptive fields. Biological
cybernetics, 107(6):589–635.
Lindeberg, T. (2013b). Invariance of visual operations at the level of receptive fields.
PLOS ONE, 8(7):e66990.
Linster, C. and Cleland, T. A. (2009). Glomerular microcircuits in the olfactory
bulb. Neural Networks, 22(8):1169–1173.
Linster, C. and Fontanini, A. (2014). Functional neuromodulation of chemosensation
in vertebrates. Current opinion in neurobiology, 29:82–87.
Lisman, J. and Spruston, N. (2005). Postsynaptic depolarization requirements for
LTP and LTD: a critique of spike timing-dependent plasticity. Nature neuroscience,
8(7):839–841.
Lisman, J. and Spruston, N. (2010). Questions about STDP as a general model of
synaptic plasticity. Frontiers in synaptic neuroscience, 2.
Liu, F. and Wang, X.-J. (2008). A common cortical circuit mechanism for perceptual
categorical discrimination and veridical judgment. PLoS Computational Biology,
4(12):e1000253+.
Liu, S.-C. and Delbruck, T. (2010). Neuromorphic sensory systems. Current opinion
in neurobiology, 20(3):288–295.
Liu, S.-C. and van Schaik, A. (2014). Neuromorphic sensors, cochlea. In Encyclopedia
of Computational Neuroscience, pages 1–5. Springer.
Lomo, T. (1966). Frequency potentiation of excitatory synaptic activity in dentate
area of hippocampal formation. In Acta Physiologica Scandinavica, page 128.
Blackwell Science, Oxford.
Lomp, O., Zibner, S. K. U., Richter, M., Rañó, I., and Schöner, G. (2013).
A software framework for cognition, embodiment, dynamics, and autonomy in
robotics: cedar. In Mladenov, V., Koprinkova-Hristova, P., Palm, G., Villa, A. E.,
Appollini, B., and Kasabov, N., editors, Artificial Neural Networks and Machine
Learning — ICANN 2013, volume 8131 of Lecture Notes in Computer Science,
pages 475–482. Springer Berlin Heidelberg.
London, M. and Häusser, M. (2005). Dendritic computation. Annu. Rev. Neurosci.,
28:503–532.
Longtin, A. (1993). Stochastic resonance in neuron models. Journal of statistical
physics, 70(1-2):309–327.
Louie, K., Khaw, M. W., and Glimcher, P. W. (2013). Normalization is a general neural mechanism for context-dependent decision making. Proceedings of the
National Academy of Sciences, 110(15):6139–6144.
Lu, H., Setiono, R., and Liu, H. (1996). Effective data mining using neural networks.
Knowledge and Data Engineering, IEEE Transactions on, 8(6):957–961.
Lundqvist, M. (2013). Oscillations and spike statistics in biophysical attractor networks.
Lundqvist, M., Herman, P., and Lansner, A. (2013). Effect of prestimulus alpha
power, phase, and synchronization on stimulus detection rates in a biophysical
attractor network model. The Journal of Neuroscience, 33(29):11817–11824.
Lundqvist, M., Rehn, M., Djurfeldt, M., and Lansner, A. (2006). Attractor dynamics
in a modular network model of neocortex. Network: Computation in Neural
Systems, 17(3):253–276.
Lyon, D. C., Nassi, J. J., and Callaway, E. M. (2010). A disynaptic relay from
superior colliculus to dorsal stream visual cortex in macaque monkey. Neuron,
65(2):270–279.
Lyon, R. F. and Mead, C. (1988). An analog electronic cochlea. Acoustics, Speech
and Signal Processing, IEEE Transactions on, 36(7):1119–1134.
Ma, L., Qiu, Q., Gradwohl, S., Scott, A., Elden, Q. Y., Alexander, R., Wiegraebe,
W., and Yu, C. R. (2012). Distributed representation of chemical features and
tunotopic organization of glomeruli in the mouse olfactory bulb. Proceedings of
the National Academy of Sciences, 109(14):5481–5486.
Ma, W. J., Beck, J. M., Latham, P. E., and Pouget, A. (2006). Bayesian inference
with probabilistic population codes. Nature neuroscience, 9(11):1432–1438.
Ma, W. J. and Pouget, A. (2008). Linking neurons to behavior in multisensory
perception: A computational review. Brain research, 1242:4–12.
Maass, W., Natschläger, T., and Markram, H. (2002). Real-time computing without
stable states: A new framework for neural computation based on perturbations.
Neural computation, 14(11):2531–2560.
MacKay, D. J. (1995). Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks. Network: Computation in
Neural Systems, 6(3):469–505.
Maex, R. and Orban, G. (1996). Model circuit of spiking neurons generating directional selectivity in simple cells. Journal of Neurophysiology, 75(4):1515–1545.
Mahon, B. Z. and Cantlon, J. F. (2011). The specialization of function: Cognitive
and neural perspectives. Cognitive neuropsychology, 28(3-4):147–155.
Mahowald, M. (1994a). An analog VLSI system for stereoscopic vision. Springer.
Mahowald, M. (1994b). The silicon retina. In An Analog VLSI System for Stereoscopic Vision, pages 4–65. Springer.
Malcolm Dyson, G. (1938). The scientific basis of odour. Journal of the Society of
Chemical Industry, 57(28):647–651.
Malnic, B., Hirono, J., Sato, T., and Buck, L. B. (1999). Combinatorial receptor
codes for odors. Cell, 96(5):713–723.
Marc, R. E., Jones, B. W., Lauritzen, J. S., Watt, C. B., and Anderson, J. R. (2012).
Building retinal connectomes. Current opinion in neurobiology, 22(4):568–574.
Marc, R. E., Jones, B. W., Watt, C. B., Anderson, J. R., Sigulinsky, C., and Lauritzen, S. (2013). Retinal connectomics: towards complete, accurate networks.
Progress in retinal and eye research, 37:141–162.
Marco, S., Gutiérrez-Gálvez, A., Lansner, A., Martinez, D., Rospars, J., Beccherelli,
R., Perera, A., Pearce, T. C., Verschure, P., and Persaud, K. (2014). A biomimetic
approach to machine olfaction, featuring a very large-scale chemical sensor array
and embedded neuro-bio-inspired computation. Microsystem technologies, 20(4-5):729–742.
Marcus, G. F. (2006). Cognitive architecture and descent with modification. Cognition, 101(2):443–465.
Margrie, T. W. and Schaefer, A. T. (2003). Theta oscillation coupled spike latencies
yield computational vigour in a mammalian sensory system. The Journal of
physiology, 546(2):363–374.
Markov, N. T. and Kennedy, H. (2013). The importance of being hierarchical.
Current opinion in neurobiology, 23(2):187–194.
Markram, H. (2006). The blue brain project. Nature Reviews Neuroscience,
7(2):153–160.
Markram, H. (2012). The human brain project. Scientific American, 306(6):50–55.
Markram, H. (2014). Understanding the brain: Organizational and scientific challenges. From Physics to Daily Life: Applications in Biology, Medicine, and Healthcare, pages 179–190.
Markram, H., Gerstner, W., and Sjöström, P. J. (2012). Spike-timing-dependent
plasticity: a comprehensive overview. Frontiers in synaptic neuroscience, 4.
Markram, H., Lübke, J., Frotscher, M., and Sakmann, B. (1997). Regulation
of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science,
275(5297):213–215.
Marr, D. (1982). Vision: A computational investigation into the human representation
and processing of visual information. W. H. Freeman and Company, San Francisco.
Martone, M. E., Gupta, A., and Ellisman, M. H. (2004). E-neuroscience: challenges
and triumphs in integrating distributed data from molecules to brains. Nature
neuroscience, 7(5):467–472.
Masquelier, T. (2012). Relative spike time coding and STDP-based orientation selectivity in the early visual system in natural continuous and saccadic vision: a
computational model. Journal of computational neuroscience, 32(3):425–441.
Masquelier, T., Guyonneau, R., and Thorpe, S. J. (2009a). Competitive STDP-based
spike pattern learning. Neural computation, 21(5):1259–1276.
Masquelier, T., Hugues, E., Deco, G., and Thorpe, S. J. (2009b). Oscillations,
phase-of-firing coding, and spike timing-dependent plasticity: an efficient learning
scheme. The Journal of neuroscience, 29(43):13484–13493.
Mauk, M. D. and Buonomano, D. V. (2004). The neural basis of temporal processing.
Annu. Rev. Neurosci., 27:307–340.
McCulloch, W. S. and Pitts, W. (1943). A logical calculus of the ideas immanent in
nervous activity. The bulletin of mathematical biophysics, 5(4):115–133.
Mead, C. (1990). Neuromorphic electronic systems. Proceedings of the IEEE,
78(10):1629–1636.
Mead, C. and Ismail, M. (1989). Analog VLSI implementation of neural systems.
Springer.
Meier, K. (2013). How to simulate the brain without a computer. In Biologically
Inspired Cognitive Architectures 2012, page 37. Springer.
Meister, M. and Bonhoeffer, T. (2001). Tuning and topography in an odor map on
the rat olfactory bulb. The journal of neuroscience, 21(4):1351–1360.
Meli, C. and Lansner, A. (2013). A modular attractor associative memory with
patchy connectivity and weight pruning. Network: Computation in Neural Systems, 24(4):129–150.
Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan,
F., Jackson, B. L., Imam, N., Guo, C., Nakamura, Y., et al. (2014). A million
spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197):668–673.
Meunier, D., Achard, S., Morcom, A., and Bullmore, E. (2009a). Age-related
changes in modular organization of human brain functional networks. Neuroimage, 44(3):715–723.
Meunier, D., Lambiotte, R., and Bullmore, E. T. (2010). Modular and hierarchically
modular organization of brain networks. Frontiers in neuroscience, 4.
Meunier, D., Lambiotte, R., Fornito, A., Ersche, K. D., and Bullmore, E. T. (2009b).
Hierarchical modularity in human brain functional networks. Frontiers in neuroinformatics, 3.
Miller, K. D., Erwin, E., and Kayser, A. (1999). Is the development of orientation
selectivity instructed by activity? Journal of neurobiology, 41(1):44–57.
Miller, W. T., Werbos, P. J., and Sutton, R. S. (1995). Neural networks for control.
MIT press.
Milner, A. D. and Goodale, M. A. (2008). Two visual systems re-viewed. Neuropsychologia, 46(3):774–785.
Milnor, J. W. (2006). Attractor. Scholarpedia, 1(11):1815. Revision 91013.
Min, R., Santello, M., and Nevian, T. (2012). The computational power of astrocyte
mediated synaptic plasticity. Frontiers in computational neuroscience, 6.
Mishkin, M., Ungerleider, L. G., and Macko, K. A. (1983). Object vision and spatial
vision: two cortical pathways. Trends in neurosciences, 6:414–417.
Misra, J. and Saha, I. (2010). Artificial neural networks in hardware: A survey of
two decades of progress. Neurocomputing, 74(1):239–255.
Miura, K., Mainen, Z. F., and Uchida, N. (2012). Odor representations in olfactory
cortex: distributed rate coding and decorrelated population activity. Neuron,
74(6):1087–1098.
Miyamichi, K., Amat, F., Moussavi, F., Wang, C., Wickersham, I., Wall, N. R.,
Taniguchi, H., Tasic, B., Huang, Z. J., He, Z., et al. (2011). Cortical representations of olfactory input by trans-synaptic tracing. Nature, 472(7342):191–196.
Moini, A. (2000). Vision Chips. Springer Science & Business Media.
Moiseff, A. and Konishi, M. (1981). Neuronal and behavioral sensitivity to binaural
time differences in the owl. The Journal of Neuroscience, 1(1):40–48.
Mombaerts, P. (2006). Axonal wiring in the mouse olfactory system. Annu. Rev.
Cell Dev. Biol., 22:713–737.
Mombaerts, P., Wang, F., Dulac, C., Chao, S. K., Nemes, A., Mendelsohn, M.,
Edmondson, J., and Axel, R. (1996). Visualizing an olfactory sensory map. Cell,
87(4):675–686.
Montague, P. R., Dolan, R. J., Friston, K. J., and Dayan, P. (2012). Computational
psychiatry. Trends in cognitive sciences, 16(1):72–80.
Mori, K. (1995). Relation of chemical structure to specificity of response in olfactory
glomeruli. Current opinion in neurobiology, 5(4):467–474.
Mori, K., Nagao, H., and Yoshihara, Y. (1999). The olfactory bulb: coding and
processing of odor molecule information. Science, 286(5440):711–715.
Mori, K. and Shepherd, G. M. (1994). Emerging principles of molecular signal
processing by mitral/tufted cells in the olfactory bulb. In Seminars in cell biology,
volume 5, pages 65–74. Elsevier.
Morishita, H., Miwa, J. M., Heintz, N., and Hensch, T. K. (2010). Lynx1, a cholinergic brake, limits plasticity in adult visual cortex. Science, 330(6008):1238–1240.
Morrison, A., Aertsen, A., and Diesmann, M. (2007). Spike-timing-dependent plasticity in balanced random networks. Neural computation, 19(6):1437–1467.
Moser, M.-B. and Moser, E. I. (2015). 2014 Nobel Prize in Physiology or Medicine.
Journal of Investigative Medicine, 63(1).
Mountcastle, V. B. (1997). The columnar organization of the neocortex. Brain,
120(4):701–722.
Mozzachiodi, R. and Byrne, J. H. (2010). More than synaptic plasticity: role of
nonsynaptic plasticity in learning and memory. Trends in neurosciences, 33(1):17–
26.
Murthy, V. N. (2011). Olfactory maps in the brain. Annual review of neuroscience,
34:233–258.
Musallam, S., Corneil, B., Greger, B., Scherberger, H., and Andersen, R. (2004).
Cognitive control signals for neural prosthetics. Science, 305(5681):258–262.
Nagayama, S., Enerva, A., Fletcher, M. L., Masurkar, A. V., Igarashi, K. M., Mori,
K., and Chen, W. R. (2010). Differential axonal projection of mitral and tufted
cells in the mouse main olfactory system. Frontiers in neural circuits, 4.
Nagayama, S., Homma, R., and Imamura, F. (2014). Neuronal organization of
olfactory bulb circuits. Frontiers in neural circuits, 8.
Nageswaran, J. M., Dutt, N., Krichmar, J. L., Nicolau, A., and Veidenbaum, A. V.
(2009). A configurable simulation environment for the efficient simulation of large-scale spiking neural networks on graphics processors. Neural Networks, 22(5):791–
800.
Nassi, J. J., Lyon, D. C., and Callaway, E. M. (2006). The parvocellular LGN provides
a robust disynaptic input to the visual motion area MT. Neuron, 50(2):319–327.
Newsome, W. T. and Pare, E. B. (1988). A selective impairment of motion perception following lesions of the middle temporal visual area (MT). The Journal of
Neuroscience, 8(6):2201–2211.
Nijhawan, R. and Wu, S. (2009). Compensating time delays with neural predictions:
are predictions sensory or motor? Philosophical Transactions of the Royal Society
A: Mathematical, Physical and Engineering Sciences, 367(1891):1063–1078.
Niklasson, L. and van Gelder, T. (1994). Can connectionist models exhibit non-classical structure sensitivity? In Proceedings of the Cognitive Science Society,
pages 664–669. Citeseer.
Oaksford, M. and Chater, N. (2007). Bayesian rationality: The probabilistic approach
to human reasoning. Oxford University Press.
Oram, M. W., Földiák, P., Perrett, D. I., and Sengpiel, F. (1998). The ‘ideal homunculus’: decoding neural population signals. Trends in neurosciences, 21(6):259–265.
Orre, R., Lansner, A., Bate, A., and Lindquist, M. (2000). Bayesian neural networks
with confidence estimations applied to data mining. Computational Statistics &
Data Analysis, 34(4):473–493.
Page, M. (2000). Connectionist modelling in psychology: A localist manifesto. Behavioral and Brain Sciences, 23(04):443–467.
Paik, S.-B. and Ringach, D. L. (2011). Retinal origin of orientation maps in visual
cortex. Nature neuroscience, 14(7):919–925.
Painkras, E., Plana, L. A., Garside, J., Temple, S., Davidson, S., Pepper, J., Clark,
D., Patterson, C., and Furber, S. (2012). SpiNNaker: a multi-core system-on-chip
for massively-parallel neural net simulation. In Custom Integrated Circuits
Conference (CICC), 2012 IEEE, pages 1–4. IEEE.
Pao, Y.-H. (1989). Adaptive pattern recognition and neural networks.
Pearce, T. C., Schiffman, S. S., Nagle, H. T., and Gardner, J. W. (2006). Handbook
of machine olfaction: electronic nose technology. John Wiley & Sons.
Pearl, J. (2000). Causality: models, reasoning and inference, volume 29. Cambridge
University Press.
Perez-Orive, J., Mazor, O., Turner, G. C., Cassenaer, S., Wilson, R. I., and Laurent,
G. (2002). Oscillations and sparsening of odor representations in the mushroom
body. Science, 297(5580):359–365.
Perrinet, L. U. and Masson, G. S. (2012). Motion-Based prediction is sufficient to
solve the aperture problem. Neural Computation, 24(10):2726–2750.
Pfister, J.-P. and Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. The Journal of neuroscience, 26(38):9673–9682.
Phillips, C., Zeki, S., and Barlow, H. (1984). Localization of function in the cerebral
cortex: past, present and future. Brain, 107(1):328–361.
Piccinini, G. and Shagrir, O. (2014). Foundations of computational neuroscience.
Current opinion in neurobiology, 25:25–30.
Piccolino, M. (1998). Animal electricity and the birth of electrophysiology: the
legacy of Luigi Galvani. Brain research bulletin, 46(5):381–407.
Pinching, A. and Powell, T. (1971). The neuron types of the glomerular layer of the
olfactory bulb. Journal of Cell Science, 9(2):305–345.
Pinotsis, D., Robinson, P., beim Graben, P., and Friston, K. (2014). Neural masses
and fields: modeling the dynamics of brain activity. Frontiers in computational
neuroscience, 8.
Pinotsis, D. A., Leite, M., and Friston, K. J. (2013). On conductance-based neural
field models. Frontiers in computational neuroscience, 7.
Pitkow, X. and Meister, M. (2012). Decorrelation and efficient coding by retinal
ganglion cells. Nature neuroscience, 15(4):628–635.
Plaut, D. C. and McClelland, J. L. (2010). Locating object knowledge in the brain:
Comment on Bowers’s (2009) attempt to revive the grandmother cell hypothesis.
Polat, U., Mizobe, K., Pettet, M. W., Kasamatsu, T., and Norcia, A. M. (1998).
Collinear stimuli regulate visual responses depending on cell’s contrast threshold.
Nature, 391(6667):580–584.
Poldrack, R. A. (2006). Can cognitive processes be inferred from neuroimaging data?
Trends in cognitive sciences, 10(2):59–63.
Pomerleau, D. A. (1991). Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3(1):88–97.
Poo, C. and Isaacson, J. S. (2009). Odor representations in olfactory cortex: “sparse”
coding, global inhibition, and oscillations. Neuron, 62(6):850–861.
Pospischil, M., Toledo-Rodriguez, M., Monier, C., Piwkowska, Z., Bal, T., Frégnac,
Y., Markram, H., and Destexhe, A. (2008). Minimal Hodgkin–Huxley type models
for different classes of cortical and thalamic neurons. Biological cybernetics, 99(4-5):427–441.
Pouget, A., Dayan, P., and Zemel, R. (2000). Information processing with population
codes. Nature Reviews Neuroscience, 1(2):125–132.
Pouget, A., Dayan, P., and Zemel, R. S. (2003). Inference and computation with
population codes. Annual review of neuroscience, 26(1):381–410.
Pouget, A., Zhang, K., Deneve, S., and Latham, P. E. (1998). Statistically efficient
estimation using population coding. Neural computation, 10(2):373–401.
Price, D. J., Willshaw, D. J., and Society, G. P. (2000). Mechanisms of cortical
development, volume 6. Oxford University Press.
Priebe, N. J., Lisberger, S. G., and Movshon, J. A. (2006). Tuning for spatiotemporal
frequency and speed in directionally selective neurons of macaque striate cortex.
The Journal of Neuroscience, 26(11):2941–2950.
Quiroga, R. Q. (2012). Concept cells: the building blocks of declarative memory
functions. Nature Reviews Neuroscience, 13(8):587–597.
Quiroga, R. Q., Kreiman, G., Koch, C., and Fried, I. (2008). Sparse but not
‘grandmother-cell’ coding in the medial temporal lobe. Trends in cognitive sciences, 12(3):87–91.
Quiroga, R. Q. and Panzeri, S. (2009). Extracting information from neuronal populations: information theory and decoding approaches. Nature Reviews Neuroscience,
10(3):173–185.
Quiroga, R. Q., Reddy, L., Kreiman, G., Koch, C., and Fried, I. (2005). Invariant visual representation by single neurons in the human brain. Nature, 435(7045):1102–
1107.
Raichle, M. E. and Mintun, M. A. (2006). Brain work and brain imaging. Annu.
Rev. Neurosci., 29:449–476.
Rall, W., Shepherd, G., Reese, T., and Brightman, M. (1966). Dendrodendritic
synaptic pathway for inhibition in the olfactory bulb. Experimental neurology,
14(1):44–56.
Rall, W. and Shepherd, G. M. (1968). Theoretical reconstruction of field potentials
and dendrodendritic synaptic interactions in olfactory bulb. J. Neurophysiol,
31(6):884–915.
Rao, R. P. (2004). Bayesian computation in recurrent neural circuits. Neural computation, 16(1):1–38.
Rao, R. P. and Ballard, D. H. (1999). Predictive coding in the visual cortex: a
functional interpretation of some extra-classical receptive-field effects. Nature
neuroscience, 2(1):79–87.
Rao, R. P. and Sejnowski, T. J. (2003). Self–organizing neural systems based on
predictive learning. Philosophical Transactions of the Royal Society of London.
Series A: Mathematical, Physical and Engineering Sciences, 361(1807):1149–1175.
Rasmussen, D. and Eliasmith, C. (2011). A neural model of rule generation in
inductive reasoning. Topics in Cognitive Science, 3(1):140–153.
Rauss, K., Schwartz, S., and Pourtois, G. (2011). Top-down effects on early visual
processing in humans: a predictive coding framework. Neuroscience & Biobehavioral Reviews, 35(5):1237–1253.
Redies, C. and Puelles, L. (2001). Modularity in vertebrate brain development and
evolution. Bioessays, 23(12):1100–1111.
Redondo, R. L. and Morris, R. G. (2011). Making memories last: the synaptic
tagging and capture hypothesis. Nature Reviews Neuroscience, 12(1):17–30.
Reid, R. C., Alonso, J.-M., et al. (1995). Specificity of monosynaptic connections
from thalamus to visual cortex. Nature, 378(6554):281–283.
Renaud, S., Tomas, J., Bornat, Y., Daouzli, A., and Saïghi, S. (2007). Neuromimetic
ics with analog cores: an alternative for simulating spiking neural networks. In
Circuits and Systems, 2007. ISCAS 2007. IEEE International Symposium on,
pages 3355–3358. IEEE.
Ressler, K. J., Sullivan, S. L., and Buck, L. B. (1993). A zonal organization of
odorant receptor gene expression in the olfactory epithelium. Cell, 73(3):597–609.
Ressler, K. J., Sullivan, S. L., and Buck, L. B. (1994). Information coding in the
olfactory system: evidence for a stereotyped and highly organized epitope map in
the olfactory bulb. Cell, 79(7):1245–1255.
Reynolds, J. H. and Heeger, D. J. (2009). The normalization model of attention.
Neuron, 61(2):168–185.
Ribrault, C., Sekimoto, K., and Triller, A. (2011). From the stochasticity of molecular processes to the variability of synaptic transmission. Nature Reviews Neuroscience, 12(7):375–387.
Richert, M., Nageswaran, J. M., Dutt, N., and Krichmar, J. L. (2011). An efficient
simulation environment for modeling large-scale cortical processing. Frontiers in
neuroinformatics, 5.
Rieke, F. (1999). Spikes: exploring the neural code. MIT press.
Rieke, F. and Baylor, D. (1998). Single-photon detection by rod cells of the retina.
Reviews of Modern Physics, 70(3):1027.
Riesenhuber, M. and Poggio, T. (1999). Hierarchical models of object recognition
in cortex. Nature neuroscience, 2(11):1019–1025.
Ringach, D. L. (2004). Haphazard wiring of simple receptive fields and orientation
columns in visual cortex. Journal of neurophysiology, 92(1):468–476.
Ringach, D. L. (2007). On the origin of the functional architecture of the cortex.
PloS one, 2(2):e251.
Ripley, B. D. (1996). Pattern recognition and neural networks. Cambridge University
Press.
Rochefort, N. L., Narushima, M., Grienberger, C., Marandi, N., Hill, D. N., and
Konnerth, A. (2011). Development of direction selectivity in mouse cortical neurons. Neuron, 71(3):425–432.
Rolls, E. T. (2007). An attractor network in the hippocampus: theory and neurophysiology. Learning & Memory, 14(11):714–731.
Rolls, E. T. and Deco, G. (2002). Computational neuroscience of vision. Oxford
University Press.
Rolls, E. T. and Deco, G. (2010). The noisy brain: stochastic dynamics as a principle
of brain function, volume 34. Oxford University Press.
Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage
and organization in the brain. Psychological review, 65(6):386.
Rospars, J.-P., Gu, Y., Grémiaux, A., and Lucas, P. (2010). Odour transduction in
olfactory receptor neurons. Chinese Journal of Physiology, 53(6):364–372.
Roy, A. (2012). A theory of the brain: localist representation is used widely in the
brain. Frontiers in psychology, 3.
Roy, A. (2013). An extension of the localist representation theory: grandmother
cells are also widely used in the brain. Frontiers in psychology, 4.
Roy, A. (2014). On findings of category and other concept cells in the brain: Some
theoretical perspectives on mental representation. Cognitive Computation, pages
1–6.
Rubinstein, J. T. (1995). Threshold fluctuations in an N sodium channel model of
the node of Ranvier. Biophysical journal, 68(3):779.
Rumelhart, D. E., McClelland, J. L., Group, P. R., et al. (1988). Parallel distributed
processing, volume 1. IEEE.
Ryan, J., Lin, M.-J., and Miikkulainen, R. (1998). Intrusion detection with neural
networks. Advances in neural information processing systems, pages 943–949.
Sakano, H. (2010). Neural map formation in the mouse olfactory system. Neuron,
67(4):530–542.
Sakmann, B. and Neher, E. (2009). Single-channel recording. Springer Science &
Business Media.
Salakhutdinov, R. and Hinton, G. E. (2009). Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pages 448–455.
Salzman, C. D. and Newsome, W. T. (1994). Neural mechanisms for forming a
perceptual decision. Science, 264(5156):231–237.
Sand, A., Schmidt, T. M., and Kofuji, P. (2012). Diverse types of ganglion cell
photoreceptors in the mammalian retina. Progress in retinal and eye research,
31(4):287–302.
Sandamirskaya, Y., Zibner, S. K., Schneegans, S., and Schöner, G. (2013). Using
dynamic field theory to extend the embodiment stance toward higher cognition.
New Ideas in Psychology, 31(3):322–339.
Sandberg, A. (2003). Bayesian attractor neural network models of memory. Dissertation in Computer Science, Stockholm University.
Sandberg, A., Lansner, A., and Petersson, K. M. (2002). A Bayesian attractor
network with incremental learning. Network: Computation in neural systems,
13(2):179–194.
Sarter, M., Berntson, G. G., and Cacioppo, J. T. (1996). Brain imaging and cognitive neuroscience: Toward strong inference in attributing function to structure.
American Psychologist, 51(1):13.
Sato, T. K., Nauhaus, I., and Carandini, M. (2012). Traveling waves in visual cortex.
Neuron, 75(2):218–229.
Sceniak, M. P., Ringach, D. L., Hawken, M. J., and Shapley, R. (1999). Contrast’s effect on spatial summation by macaque v1 neurons. Nature neuroscience,
2(8):733–739.
Schaefer, A. T. and Margrie, T. W. (2007). Spatiotemporal representations in the
olfactory system. Trends in neurosciences, 30(3):92–100.
Schain, M., Benjaminsson, S., Varnäs, K., Forsberg, A., Halldin, C., Lansner, A.,
Farde, L., and Varrone, A. (2013). Arterial input function derived from pairwise correlations between PET-image voxels. Journal of Cerebral Blood Flow &
Metabolism, 33(7):1058–1065.
Schemmel, J., Brüderle, D., Grübl, A., Hock, M., Meier, K., and Millner, S. (2010).
A wafer-scale neuromorphic hardware system for large-scale neural modeling. In
Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on, pages 1947–1950. IEEE.
Schemmel, J., Fieres, J., and Meier, K. (2008). Wafer-scale integration of analog
neural networks. In Neural Networks, 2008. IJCNN 2008.(IEEE World Congress
on Computational Intelligence). IEEE International Joint Conference on, pages
431–438. IEEE.
Schild, D. and Restrepo, D. (1998). Transduction mechanisms in vertebrate olfactory
receptor cells. Physiological Reviews, 78(2):429–466.
Schmid, M. C., Mrowka, S. W., Turchi, J., Saunders, R. C., Wilke, M., Peters,
A. J., Frank, Q. Y., and Leopold, D. A. (2010). Blindsight depends on the lateral
geniculate nucleus. Nature, 466(7304):373–377.
Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural
Networks, 61:85–117.
Schöner, G., Dose, M., and Engels, C. (1995). Dynamics of behavior: Theory
and applications for autonomous robot architectures. Robotics and autonomous
systems, 16(2):213–245.
Schrader, S., Gewaltig, M.-O., Körner, U., and Körner, E. (2009). Cortext: a
columnar model of bottom-up and top-down processing in the neocortex. Neural
Networks, 22(8):1055–1070.
Secundo, L., Snitz, K., and Sobel, N. (2014). The perceptual logic of smell. Current
opinion in neurobiology, 25:107–115.
Sejnowski, T. J., Koch, C., and Churchland, P. S. (1988). Computational neuroscience. Science, 241(4871):1299–1306.
Senn, W., Markram, H., and Tsodyks, M. (2001). An algorithm for modifying
neurotransmitter release probability based on pre- and postsynaptic spike timing.
Neural Computation, 13(1):35–67.
Sereno, M. I., Dale, A., Reppas, J., Kwong, K., Belliveau, J., Brady, T., Rosen, B.,
and Tootell, R. (1995). Borders of multiple visual areas in humans revealed by
functional magnetic resonance imaging. Science, 268(5212):889–893.
Seriès, P. and Seitz, A. R. (2013). Learning what to expect (in visual perception).
Frontiers in human neuroscience, 7.
Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., and LeCun, Y. (2014).
Overfeat: Integrated recognition, localization and detection using convolutional
networks. In International Conference on Learning Representations (ICLR 2014).
arXiv preprint arXiv:1312.6229.
Seung, H. S. and Sümbül, U. (2014). Neuronal cell types and connectivity: lessons
from the retina. Neuron, 83(6):1262–1272.
Shadmehr, R., Smith, M. A., and Krakauer, J. W. (2010). Error correction, sensory prediction, and adaptation in motor control. Annual review of neuroscience,
33:89–108.
Sharpee, T. O. (2013). Computational identification of receptive fields. Annual
review of neuroscience, 36:103–120.
Shatz, C. J. (1994). Role for spontaneous neural activity in the patterning of connections between retina and LGN during visual system development. International
Journal of Developmental Neuroscience, 12(6):531–546.
Shatz, C. J. (1996). Emergence of order in visual system development. Proceedings
of the National Academy of Sciences, 93(2):602–608.
Shepherd, G. M. (1972). Synaptic organization of the mammalian olfactory bulb.
Physiological Reviews, 52(4):864–917.
Shepherd, G. M. (1987). A molecular vocabulary for olfaction. Annals of the New
York Academy of Sciences, 510(1):98–103.
Shepherd, G. M. (1994). Discrimination of molecular signals by the olfactory receptor
neuron. Neuron, 13(4):771–790.
Shepherd, G. M., Chen, W. R., Willhite, D., Migliore, M., and Greer, C. A. (2007).
The olfactory granule cell: from classical enigma to central role in olfactory processing. Brain research reviews, 55(2):373–382.
Sherman, S. M. and Guillery, R. (1998). On the actions that one nerve cell can
have on another: distinguishing “drivers” from “modulators”. Proceedings of the
National Academy of Sciences, 95(12):7121–7126.
Sherman, S. M. and Guillery, R. (2002). The role of the thalamus in the flow of
information to the cortex. Philosophical Transactions of the Royal Society of
London. Series B: Biological Sciences, 357(1428):1695–1708.
Sherrington, C. (1966). The integrative action of the nervous system. CUP Archive.
Shoemaker, P. A. (2015). Neuronal networks with NMDARs and lateral inhibition
implement winner-takes-all. Frontiers in Computational Neuroscience, 9:12.
Sietsma, J. and Dow, R. J. (1991). Creating artificial neural networks that generalize.
Neural networks, 4(1):67–79.
Silverstein, D. N. and Lansner, A. (2011). Is attentional blink a byproduct of neocortical attractors? Frontiers in computational neuroscience, 5.
Sincich, L. C., Park, K. F., Wohlgemuth, M. J., and Horton, J. C. (2004). Bypassing
V1: a direct geniculate input to area MT. Nature neuroscience, 7(10):1123–1128.
Singer, W. (1994). Putative functions of temporal correlations in neocortical processing. Large-scale neuronal theories of the brain, pages 201–237.
Singer, W. and Gray, C. M. (1995). Visual feature integration and the temporal
correlation hypothesis. Annual review of neuroscience, 18(1):555–586.
Sjöström, J. and Gerstner, W. (2010). Spike-timing dependent plasticity. Scholarpedia, 5(2):1362. Revision 142314.
Sjöström, P. J., Turrigiano, G. G., and Nelson, S. B. (2001). Rate, timing, and
cooperativity jointly determine cortical synaptic plasticity. Neuron, 32(6):1149–
1164.
Skaliora, I., Adams, R., and Blakemore, C. (2000). Morphology and growth patterns
of developing thalamocortical axons. The Journal of Neuroscience, 20(10):3650–
3662.
Skottun, B. C., De Valois, R. L., Grosof, D. H., Movshon, J. A., Albrecht, D. G., and
Bonds, A. (1991). Classifying simple and complex cells on the basis of response
modulation. Vision research, 31(7):1078–1086.
Smolensky, P. (1988a). Connectionism, constituency, and the language of thought.
University of Colorado at Boulder.
Smolensky, P. (1988b). The constituent structure of connectionist mental states: A
reply to Fodor and Pylyshyn. The Southern Journal of Philosophy, 26(S1):137–161.
Song, S., Miller, K. D., and Abbott, L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature neuroscience,
3(9):919–926.
Soodak, R. E. (1987). The retinal ganglion cell mosaic defines orientation columns
in striate cortex. Proceedings of the National Academy of Sciences, 84(11):3936–
3940.
Sosulski, D. L., Bloom, M. L., Cutforth, T., Axel, R., and Datta, S. R. (2011). Distinct representations of olfactory information in different cortical centres. Nature,
472(7342):213–216.
Sporns, O., Chialvo, D. R., Kaiser, M., and Hilgetag, C. C. (2004). Organization,
development and function of complex brain networks. Trends in cognitive sciences,
8(9):418–425.
Spratling, M. W. (2010). Predictive coding as a model of response properties in
cortical area v1. The Journal of Neuroscience, 30(9):3531–3543.
Squire, L. R. (1992). Memory and the hippocampus: a synthesis from findings with
rats, monkeys, and humans. Psychological review, 99(2):195.
Squire, L. R. (2009). The legacy of patient H.M. for neuroscience. Neuron, 61(1):6–9.
Srihasam, K., Mandeville, J. B., Morocz, I. A., Sullivan, K. J., and Livingstone,
M. S. (2012). Behavioral and anatomical consequences of early versus late symbol
training in macaques. Neuron, 73(3):608–619.
Srihasam, K., Vincent, J. L., and Livingstone, M. S. (2014). Novel domain formation reveals proto-architecture in inferotemporal cortex. Nature neuroscience,
17(12):1776–1783.
Stein, R. B. (1965). A theoretical analysis of neuronal variability. Biophysical
Journal, 5(2):173.
Stephan, K. E. and Mathys, C. (2014). Computational approaches to psychiatry.
Current opinion in neurobiology, 25:85–92.
Stettler, D. D. and Axel, R. (2009). Representations of odor in the piriform cortex.
Neuron, 63(6):854–864.
Stewart, T. C., Choo, X., and Eliasmith, C. (2010). Symbolic reasoning in spiking
neurons: A model of the cortex/basal ganglia/thalamus loop. In Proceedings of the
32nd Annual Meeting of the Cognitive Science Society, pages 1100–1105. Cognitive
Science Society Austin, TX.
Steyvers, M., Griffiths, T. L., and Dennis, S. (2006). Probabilistic inference in human
semantic memory. Trends in Cognitive Sciences, 10(7):327–334.
Su, C.-Y., Menuz, K., and Carlson, J. R. (2009). Olfactory perception: receptors,
cells, and circuits. Cell, 139(1):45–59.
Summerfield, C. and de Lange, F. P. (2014). Expectation in perceptual decision
making: neural and computational mechanisms. Nature Reviews Neuroscience.
Sur, M. and Leamey, C. A. (2001). Development and plasticity of cortical areas and
networks. Nature Reviews Neuroscience, 2(4):251–262.
Takahashi, Y. K., Kurosaki, M., Hirono, S., and Mori, K. (2004). Topographic
representation of odorant molecular features in the rat olfactory bulb. Journal of
neurophysiology, 92(4):2413–2427.
Takeuchi, T., Duszkiewicz, A. J., and Morris, R. G. (2014). The synaptic plasticity and memory hypothesis: encoding, storage and persistence. Philosophical
Transactions of the Royal Society B: Biological Sciences, 369(1633):20130288.
Tan Jie Rui, J. (2015). Relations between the four fundamental electronic variables and devices that implement these relations. Image file is available on
Wikipedia and is licensed under the Creative Commons Attribution-Share Alike 3.0
Unported license. http://commons.wikimedia.org/wiki/File:Two-terminal_nonlinear_circuit_elements.svg, accessed 5-March-2015.
Tang, C., Chehayeb, D., Srivastava, K., Nemenman, I., and Sober, S. J. (2014).
Millisecond-scale motor encoding in a cortical vocal area. PLoS Biology,
12(12):e1002018.
Tavares, V. G., Tabarce, S., Principe, J. C., and De Oliveira, P. G. (2007). Freeman olfactory cortex model: A multiplexed KII network implementation. Analog
Integrated Circuits and Signal Processing, 50(3):251–259.
Teschl, G. (2012). Ordinary differential equations and dynamical systems, volume
140. American Mathematical Soc.
Teulière, C., Forestier, S., Lonini, L., Zhang, C., Zhao, Y., Shi, B., and Triesch, J.
(2014). Self-calibrating smooth pursuit through active efficient coding. Robotics
and Autonomous Systems.
Theodoridis, S. and Koutroumbas, K. (2006). Pattern Recognition, chapter 2.2,
pages 13–19. Academic Press, 3rd edition.
Thorpe, S. (1998). Localized versus distributed representations. In The handbook of
brain theory and neural networks, pages 549–552. MIT Press.
Thorpe, S., Delorme, A., and Van Rullen, R. (2001). Spike-based strategies for rapid
processing. Neural networks, 14(6):715–725.
Thrun, S. (2010). Toward robotic cars. Communications of the ACM, 53(4):99–106.
Touhara, K. and Vosshall, L. B. (2009). Sensing odorants and pheromones with
chemosensory receptors. Annual Review of Physiology, 71:307–332.
Trappenberg, T. (2009). Fundamentals of computational neuroscience. Oxford University Press.
Treue, S., Hol, K., and Rauber, H.-J. (2000). Seeing multiple directions of motion—physiology and psychophysics. Nature neuroscience, 3(3):270–276.
Treves, A. (1993). Mean-field analysis of neuronal spike dynamics. Network: Computation in Neural Systems, 4(3):259–284.
Trier, Ø. D., Jain, A. K., and Taxt, T. (1996). Feature extraction methods for
character recognition-a survey. Pattern recognition, 29(4):641–662.
Tully, P. J., Hennig, M. H., and Lansner, A. (2014). Synaptic and nonsynaptic plasticity approximating probabilistic inference. Frontiers in synaptic neuroscience,
6.
Turin, L. (1996). A spectroscopic mechanism for primary olfactory reception. Chemical Senses, 21(6):773–791.
Turrigiano, G. G. and Nelson, S. B. (2000). Hebb and homeostasis in neuronal
plasticity. Current opinion in neurobiology, 10(3):358–364.
Turrigiano, G. G. and Nelson, S. B. (2004). Homeostatic plasticity in the developing
nervous system. Nature Reviews Neuroscience, 5(2):97–107.
Tversky, A. and Kahneman, D. (1974). Judgment under uncertainty: Heuristics and
biases. science, 185(4157):1124–1131.
Uchida, N., Poo, C., and Haddad, R. (2014). Coding and transformations in the
olfactory system. Annual review of neuroscience, 37:363–385.
Ungerleider, L. G. and Haxby, J. V. (1994). ‘What’ and ‘where’ in the human brain.
Current opinion in neurobiology, 4(2):157–165.
van den Heuvel, M. P. and Sporns, O. (2013). Network hubs in the human brain.
Trends in cognitive sciences, 17(12):683–696.
Van Essen, D. C. and Gallant, J. L. (1994). Neural mechanisms of form and motion
processing in the primate visual system. Neuron, 13(1):1–10.
Van Essen, D. C. and Maunsell, J. H. (1983). Hierarchical organization and functional streams in the visual cortex. Trends in neurosciences, 6:370–375.
Van Horn, S. C., Erişir, A., and Sherman, S. M. (2000). Relative distribution of
synapses in the a-laminae of the lateral geniculate nucleus of the cat. Journal of
Comparative Neurology, 416(4):509–520.
Van Rossum, M. C., Bi, G. Q., and Turrigiano, G. G. (2000). Stable Hebbian
learning from spike timing-dependent plasticity. The Journal of Neuroscience,
20(23):8812–8821.
Van Rullen, R. and Thorpe, S. J. (2001). Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex. Neural computation,
13(6):1255–1283.
VanRullen, R., Guyonneau, R., and Thorpe, S. J. (2005). Spike times make sense.
Trends in neurosciences, 28(1):1–4.
Vassar, R., Chao, S. K., Sitcheran, R., Nuñez, J. M., Vosshall, L. B., Axel, R., et al.
(1994). Topographic organization of sensory projections to the olfactory bulb.
Cell, 79(6):981–991.
Vassar, R., Ngai, J., and Axel, R. (1993). Spatial segregation of odorant receptor
expression in the mammalian olfactory epithelium. Cell, 74(2):309–318.
Velik, R. and Bruckner, D. (2008). Neuro-symbolic networks: Introduction to a new
information processing principle. In Industrial Informatics, 2008. INDIN 2008.
6th IEEE International Conference on, pages 1042–1047. IEEE.
BIBLIOGRAPHY
Vilares, I. and Kording, K. (2011). Bayesian models: the structure of the world, uncertainty, behavior, and the brain. Annals of the New York Academy of Sciences,
1224(1):22–39.
Villringer, A. and Chance, B. (1997). Non-invasive optical spectroscopy and imaging
of human brain function. Trends in neurosciences, 20(10):435–442.
Vinje, W. E. and Gallant, J. L. (2000). Sparse coding and decorrelation in primary
visual cortex during natural vision. Science, 287(5456):1273–1276.
Vogginger, B., Schüffny, R., Lansner, A., Cederström, L., Partzsch, J., and Höppner,
S. (2015). Reducing the computational footprint for real-time bcpnn learning.
Frontiers in Neuroscience, 9(2).
Volpi, N. C., Quinton, J. C., and Pezzulo, G. (2014). How active perception and attractor dynamics shape perceptual categorization: A computational model. Neural Networks, 60:1–16.
Wachowiak, M., McGann, J. P., Heyward, P. M., Shao, Z., Puche, A. C., and
Shipley, M. T. (2005). Inhibition of olfactory receptor neuron input to olfactory
bulb glomeruli mediated by suppression of presynaptic calcium influx. Journal of
neurophysiology, 94(4):2700–2712.
Wandell, B. A., Dumoulin, S. O., and Brewer, A. A. (2007). Visual field maps in
human cortex. Neuron, 56(2):366–383.
Wassle, H., Boycott, B., and Illing, R.-B. (1981). Morphology and mosaic of on- and
off-beta cells in the cat retina and some functional considerations. Proceedings of
the Royal Society of London. Series B. Biological Sciences, 212(1187):177–195.
Watson, A. (1997). Why can't a computer be more like a brain? Science,
277(5334):1934–1936.
Watt, A. J., van Rossum, M. C., MacLeod, K. M., Nelson, S. B., and Turrigiano,
G. G. (2000). Activity coregulates quantal AMPA and NMDA currents at neocortical
synapses. Neuron, 26(3):659–670.
Watts, L., Kerns, D. A., Lyon, R. F., and Mead, C. A. (1992). Improved implementation of the silicon cochlea. Solid-State Circuits, IEEE Journal of, 27(5):692–700.
Wei, W. and Feller, M. B. (2011). Organization and development of direction-selective circuits in the retina. Trends in neurosciences, 34(12):638–645.
Weinberger, N. M. (1995). Dynamic regulation of receptive fields and maps in the
adult sensory cortex. Annual review of neuroscience, 18:129.
Weiss, Y., Simoncelli, E. P., and Adelson, E. H. (2002). Motion illusions as optimal
percepts. Nature neuroscience, 5(6):598–604.
Weliky, M. (1999). Recording and manipulating the in vivo correlational structure
of neuronal activity during visual cortical development. Journal of neurobiology,
41(1):25–32.
Wesson, D. W. and Wilson, D. A. (2010). Smelling sounds: olfactory–auditory
sensory convergence in the olfactory tubercle. The Journal of Neuroscience,
30(8):3013–3021.
White, L. E., Coppola, D. M., and Fitzpatrick, D. (2001). The contribution of
sensory experience to the maturation of orientation selectivity in ferret visual
cortex. Nature, 411(6841):1049–1052.
White, L. E. and Fitzpatrick, D. (2007). Vision and cortical map development.
Neuron, 56(2):327–338.
Willhite, D. C., Nguyen, K. T., Masurkar, A. V., Greer, C. A., Shepherd, G. M.,
and Chen, W. R. (2006). Viral tracing identifies distributed columnar organization in the olfactory bulb. Proceedings of the National Academy of Sciences,
103(33):12592–12597.
Wilson, A. D. and Baietto, M. (2009). Applications and advances in electronic-nose
technologies. Sensors, 9(7):5099–5148.
Wilson, D. and Sullivan, R. (2011a). Cortical Processing of Odor Objects. Neuron,
72(4):506–519.
Wilson, D. A. (2001). Receptive fields in the rat piriform cortex. Chemical senses,
26(5):577–584.
Wilson, D. A., Kadohisa, M., and Fletcher, M. L. (2006). Cortical contributions to
olfaction: plasticity and perception. In Seminars in cell & developmental biology,
volume 17, pages 462–470. Elsevier.
Wilson, D. A. and Stevenson, R. J. (2003). The fundamental role of memory in
olfactory perception. Trends in neurosciences, 26(5):243–247.
Wilson, D. A. and Sullivan, R. M. (2011b). Cortical processing of odor objects.
Neuron, 72(4):506–519.
Wilson, H. R. and Cowan, J. D. (1972). Excitatory and inhibitory interactions in
localized populations of model neurons. Biophysical journal, 12(1):1.
Wilson, H. R. and Cowan, J. D. (1973). A mathematical theory of the functional
dynamics of cortical and thalamic nervous tissue. Kybernetik, 13(2):55–80.
Wilson, M. T., Robinson, P. A., O'Neill, B., and Steyn-Ross, D. A. (2012). Complementarity of spike- and rate-based dynamics of neural systems. PLoS computational biology, 8(6):e1002560.
Wilson, R. I. and Mainen, Z. F. (2006). Early events in olfactory processing. Annual
review of neuroscience, 29:163–201.
Wise, P. M., Olsson, M. J., and Cain, W. S. (2000). Quantification of odor quality.
Chemical senses, 25(4):429–443.
Wixted, J. T., Squire, L. R., Jang, Y., Papesh, M. H., Goldinger, S. D., Kuhn,
J. R., Smith, K. A., Treiman, D. M., and Steinmetz, P. N. (2014). Sparse and
distributed coding of episodic memory in neurons of the human hippocampus.
Proceedings of the National Academy of Sciences, 111(26):9621–9626.
Wöhler, C. and Anlauf, J. K. (2001). Real-time object recognition on image sequences with the adaptable time delay neural network algorithm—applications
for autonomous vehicles. Image and Vision Computing, 19(9):593–618.
Wolpert, D. H. (1992). Stacked generalization. Neural networks, 5(2):241–259.
Womelsdorf, T., Anton-Erxleben, K., Pieper, F., and Treue, S. (2006). Dynamic
shifts of visual receptive fields in cortical area mt by spatial attention. Nature
neuroscience, 9(9):1156–1160.
Wong, R. O., Meister, M., and Shatz, C. J. (1993). Transient period of correlated bursting activity during development of the mammalian retina. Neuron,
11(5):923–938.
Wright, R. H. (1954). Odour and molecular vibration. I. Quantum and thermodynamic considerations. Journal of Applied Chemistry, 4(11):611–615.
Ramón y Cajal, S. (1995). Histology of the nervous system of man and vertebrates,
volume 1. Oxford University Press, USA.
Yao, Y. and Freeman, W. J. (1990). Model of biological pattern recognition with
spatially chaotic dynamics. Neural networks, 3(2):153–170.
Young, M. P. et al. (1992). Objective analysis of the topological organization of the
primate cortical visual system. Nature, 358(6382):152–155.
Young, M. P. and Yamane, S. (1992). Sparse population coding of faces in the
inferotemporal cortex. Science, 256(5061):1327–1331.
Yu, T. and Cauwenberghs, G. (2010). Analog vlsi biophysical neurons and synapses
with programmable membrane channel kinetics. Biomedical Circuits and Systems,
IEEE Transactions on, 4(3):139–148.
Yu, Y., McTavish, T. S., Hines, M. L., Shepherd, G. M., Valenti, C., and Migliore,
M. (2013). Sparse distributed representation of odors in a large-scale olfactory
bulb circuit. PLoS computational biology, 9(3):e1003014.
Yuille, A. and Kersten, D. (2006). Vision as Bayesian inference: analysis by synthesis? Trends in cognitive sciences, 10(7):301–308.
Yuille, A. L. and Grzywacz, N. M. (1989). A winner-take-all mechanism based on
presynaptic inhibition feedback. Neural Computation, 1(3):334–347.
Yuste, R., Nelson, D. A., Rubin, W. W., and Katz, L. C. (1995). Neuronal domains
in developing neocortex: mechanisms of coactivation. Neuron, 14(1):7–17.
Zarzo, M. (2007). The sense of smell: molecular basis of odorant recognition. Biological Reviews, 82(3):455–479.
Zeki, S., Watson, J., Lueck, C., Friston, K. J., Kennard, C., and Frackowiak, R.
(1991). A direct demonstration of functional specialization in human visual cortex.
The Journal of neuroscience, 11(3):641–649.
Zhang, L. I., Tan, A. Y., Schreiner, C. E., and Merzenich, M. M. (2003). Topography
and synaptic shaping of direction selectivity in primary auditory cortex. Nature,
424(6945):201–205.
Zhang, L. I., Tao, H. W., Holt, C. E., Harris, W. A., and Poo, M.-m. (1998). A
critical window for cooperation and competition among developing retinotectal
synapses. Nature, 395(6697):37–44.
Zohary, E., Scase, M., and Braddick, O. (1996). Integration across directions in
dynamic random dot displays: Vector summation or winner take all? Vision
Research, 36(15):2321–2331.
Zou, D.-J., Chesler, A., and Firestein, S. (2009). How the olfactory bulb got its
glomeruli: a just so story? Nature Reviews Neuroscience, 10(8):611–618.